Q:
Documenting class attribute
The following sample is taken from the "Dive Into Python" book.
class MP3FileInfo(FileInfo):
    "store ID3v1.0 MP3 tags"
    tagDataMap = ...
This sample shows how to document MP3FileInfo itself, but how can I add help for MP3FileInfo.tagDataMap?
A:
PEP 224 on attribute docstrings was rejected (a long time ago), so this is a problem for me as well. Sometimes I don't know whether to choose a class attribute or an instance property -- only the second can have a docstring.
A:
Change it into a property method.
A:
Do it like this:
class MP3FileInfo(FileInfo):
    """Store ID3v1.0 MP3 tags."""

    @property
    def tagDataMap(self):
        """This function computes a map of tags.

        The amount of work necessary to compute it is quite large,
        therefore we memoize the result.
        """
        ...
Note, though, that you really shouldn't make a separate docstring if the attribute has only a one-line description. Instead, use:
class MP3FileInfo(FileInfo):
    """Store ID3v1.0 MP3 tags.

    Here are the attributes:
    tagDataMap -- contains a map of tags
    """

    tagDataMap = ...
Q:
passing ctrl+z to pexpect
How do I pass a certain key combination to a spawned/child process using the pexpect module? I'm using telnet and have to pass Ctrl+Z to a remote server.
Thanks.
A:
Use sendcontrol(), for example:
p = pexpect.spawn(your_cmd_here)
p.sendcontrol('z')
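For a fuller picture, here is a minimal sketch of the telnet case; the host name, login sequence, and prompt pattern below are made up and would need to match your actual server:
import pexpect

# a sketch: drive a telnet session and send Ctrl+Z at the shell prompt
child = pexpect.spawn('telnet somehost')
child.expect('login: ')
child.sendline('user')
child.expect('assword:')      # matches 'Password:' and 'password:'
child.sendline('secret')
child.expect(r'\$ ')          # hypothetical shell prompt
child.sendcontrol('z')        # this is the Ctrl+Z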
Q:
How do I create a file in python without overwriting an existing file
Currently I have a loop that tries to find an unused filename by adding suffixes to a filename string. Once it fails to find a file, it uses the name that failed to open a new file with that name. The problem is that this code is used in a website, and there could be multiple attempts to do the same thing at the same time, so a race condition exists.
How can I keep Python from overwriting an existing file, if one is created between the time of the check and the time of the open in the other thread?
I can minimize the chance by randomizing the suffixes, but the chance is already minimized based on parts of the pathname. I want to eliminate that chance with a function that can be told, create this file ONLY if it doesn't exist.
I can use win32 functions to do this, but I want this to work cross platform because it will be hosted on linux in the end.
A:
Use os.open() with os.O_CREAT and os.O_EXCL to create the file. That will fail if the file already exists:
>>> fd = os.open("x", os.O_WRONLY | os.O_CREAT | os.O_EXCL)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
OSError: [Errno 17] File exists: 'x'
Once you've created a new file, use os.fdopen() to turn the handle into a standard Python file object:
>>> fd = os.open("y", os.O_WRONLY | os.O_CREAT | os.O_EXCL)
>>> f = os.fdopen(fd, "w") # f is now a standard Python file object
Edit: From Python 3.3, the builtin open() has an x mode that means "open for exclusive creation, failing if the file already exists".
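Putting this together with the suffix loop from the question, a minimal sketch (the base, base.1, base.2, ... naming scheme is just an example):
import itertools
import os

def create_unique(base):
    # try base, then base.1, base.2, ... until exclusive creation succeeds
    for n in itertools.count():
        name = base if n == 0 else "%s.%d" % (base, n)
        try:
            fd = os.open(name, os.O_WRONLY | os.O_CREAT | os.O_EXCL)
        except OSError:
            continue    # lost the race for this name; try the next suffix
        return os.fdopen(fd, "w"), name
Because the existence check and the creation are a single atomic system call, the race condition disappears.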
A:
If you are concerned about a race condition, you can create a temporary file and then rename it.
>>> import os
>>> import tempfile
>>> f = tempfile.NamedTemporaryFile(delete=False)
>>> f.name
'c:\\users\\hughdb~1\\appdata\\local\\temp\\tmpsmdl53'
>>> f.write("Hello world")
>>> f.close()
>>> os.rename(f.name, r'C:\foo.txt')
>>> if os.path.exists(r'C:\foo.txt') :
... print 'File exists'
...
File exists
Alternatively, you can create the files using a uuid in the name. Stackoverflow item on this.
>>> import uuid
>>> str(uuid.uuid1())
'64362370-93ef-11de-bf06-0023ae0b04b8'
A:
If you have an id associated with each thread / process that tries to create the file, you could put that id in the suffix somewhere, thereby guaranteeing that no two processes can use the same file name.
This eliminates the race condition between the processes.
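For instance, a sketch of such a naming scheme (the exact format is arbitrary):
import os
import threading

def unique_name(base):
    # embed the process id and thread name in the suffix, so no two
    # concurrent writers can ever compute the same filename
    return "%s.%d.%s" % (base, os.getpid(), threading.current_thread().name)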
Q:
Confusion about the Python path in Python shell vs FCGI server: Why are they different?
I'm trying to deploy my Django app into production on a shared server.
It seems I'm having problems with the Python path because I'm getting the error from the server:
No module named products.models
However, when I go to the root of the app and run the shell the modules load fine.
>>> from products.models import Answer
>>> import sys
>>> sys.path
['/home/SecretUserAcct/django-projects/review_app', ...]
The path above does point to the root of the Django app.
I'm guessing this is an issue with the Python path, but I'm not sure what is going wrong.
Here is the fcgi file:
$ cat ~/public_html/django.fcgi
#!/usr/local/bin/python2.6
import sys
import os
# Insert PYTHONPATH values here, including the path to your application
#sys.path.insert(0, '<path_to_your_app_directory>')
sys.path.insert(0, '/home/SecretUserAcct/django-projects/')
# Provide the location of your application's settings file.
os.environ['DJANGO_SETTINGS_MODULE'] = 'review_app.settings'
from django.core.servers.fastcgi import runfastcgi
runfastcgi(method = "threaded", daemonize = "false", maxchildren=3, minspare=0, maxspare=1)
What understanding am I missing here?
A:
I'm somewhat confused -- if what you have in the path in the working case is:
'/home/SecretUserAcct/django-projects/review_app'
i.e., including the app, why are you instead, in the second non-working case, inserting
'/home/SecretUserAcct/django-projects/'
i.e., WITHOUT the app? Surely you'll need different forms of import depending on what you chose to put on your sys.path, no?
A:
The Django dev server and manage.py shell put the current directory (the directory you ran manage.py from) on your python path for you. When running in production mode, you'll need to adjust your path accordingly if you have code that relies on that feature.
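In other words, the fcgi script probably needs both directories that the shell session effectively had; a sketch using the paths from the question:
import sys

# put both the project parent (for 'review_app.settings') and the project
# root (for 'products.models') on the path, mirroring the manage.py shell
sys.path.insert(0, '/home/SecretUserAcct/django-projects/')
sys.path.insert(0, '/home/SecretUserAcct/django-projects/review_app')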
Q:
Where is the best place to put cache-evicting logic in an AppEngine application?
I've written an application for Google AppEngine, and I'd like to make use of the memcache API to cut down on per-request CPU time. I've profiled the application and found that a large chunk of the CPU time is in template rendering and API calls to the datastore, and after chatting with a co-worker I jumped (perhaps a bit early?) to the conclusion that caching a chunk of a page's rendered HTML would cut down on the CPU time per request significantly. The caching pattern is pretty clean, but the question of where to put this logic of caching and evicting is a bit of a mystery to me.
For example, imagine an application's main page has an Announcements section. This section would need to be re-rendered after:
first read for anyone in the account,
a new announcement being added, and
an old announcement being deleted
Some options of where to put the evict_announcements_section_from_cache() method call:
in the Announcement Model's .delete(), and .put() methods
in the RequestHandler's .post() method
anywhere else?
Then in the RequestHandler's get page, I could potentially call get_announcements_section() which would follow the standard memcache pattern (check cache, add to cache on miss, return value) and pass that HTML down to the template for that chunk of the page.
Is it the typical design pattern to put the cache-evicting logic in the Model, or the Controller/RequestHandler, or somewhere else? Ideally I'd like to avoid having evicting logic with tentacles all over the code.
A:
I've got just such a decorator up in an open source Github project:
http://github.com/jamslevy/gae_memoize/tree/master
It's a bit more in-depth, allowing for things like forcing execution of the function (when you want to refresh the cache) or forcing caching locally...these were just things that I needed in my app, so I baked them into my memoize decorator.
A:
A couple of alternatives to regular eviction:
The obvious one: Don't evict, and set a timer instead. Even a really short one - a few seconds - can cut down on effort a huge amount for a popular app, without users even noticing data may be a few seconds stale.
Instead of evicting, generate the cache key based on criteria that change when the data does. For example, if retrieving the key of the most recent announcement is cheap, you could use that as part of the key of the cached data. When a new announcement is posted, you go looking for a key that doesn't exist, and create a new one as a result.
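A rough sketch of that second approach, assuming an Announcement model with a 'created' property and a hypothetical render_announcements() helper:
from google.appengine.api import memcache

def get_announcements_section(account):
    # key on the newest announcement, so the key itself changes whenever
    # the data does; stale entries are simply never read again
    latest = Announcement.all().order('-created').get()
    cache_key = 'announcements:%s:%s' % (account.key(), latest and latest.key())
    html = memcache.get(cache_key)
    if html is None:
        html = render_announcements(account)
        memcache.set(cache_key, html)
    return html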
Q:
Different versions of msvcrt in ctypes
In Windows, the ctypes.cdll.msvcrt object automatically exists when I import the ctypes module, and it represents the msvcrt Microsoft C++ runtime library according to the docs.
However, I notice that there is also a find_msvcrt function which will "return the filename of the VC runtime library used by Python".
It further states, "If you need to free memory, for example, allocated by an extension module with a call to the free(void *), it is important that you use the function in the same library that allocated the memory."
So my question is, what's the difference between the ctypes.cdll.msvcrt library that I already have and the one which I can load with the find_msvcrt function? Under what specific circumstances might they not be the same library?
A:
It's not just that ctypes.cdll.msvcrt automatically exists, but ctypes.cdll.anything automatically exists, and is loaded on first access, loading anything.dll. So ctypes.cdll.msvcrt loads msvcrt.dll, which is a library that ships as part of Windows. It is not the C runtime that Python links with, so you shouldn't call the malloc/free from msvcrt.
For example, for Python 2.6/3.1, you should be using ctypes.cdll.msvcr90. As this will change over time, find_msvcrt() gives you the name of the library that you should really use (and then load through ctypes.CDLL).
Here are the names of a few different versions of the Microsoft CRT, released at various points as part of MSC, VC++, the platform SDK, or Windows: crtdll.dll, msvcrt.dll, msvcrt4.dll, msvcr70.dll, msvcr71.dll, msvcr80.dll, msvcr90.dll.
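So the safe pattern is roughly this sketch (the exact DLL name returned depends on how your Python was built):
from ctypes import CDLL
from ctypes.util import find_msvcrt

crt_name = find_msvcrt()   # e.g. 'msvcr90.dll' for a VC9-built Python 2.6
libc = CDLL(crt_name)
# this library's free() now matches the malloc() used by Python extensions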
Q:
ctype question char**
I'm trying to figure out why this works, after lots and lots of messing about. obo.library_version is a C function which requires a char ** as input and does a strcpy to the passed-in buffer.
from ctypes import *
_OBO_C_DLL = 'obo.dll'
STRING = c_char_p

OBO_VERSION = _stdcall_libraries[_OBO_C_DLL].OBO_VERSION
OBO_VERSION.restype = c_int
OBO_VERSION.argtypes = [POINTER(STRING)]

def library_version():
    s = create_string_buffer('\000' * 32)
    t = cast(s, c_char_p)
    res = obo.library_version(byref(t))
    if res != 0:
        raise Error("OBO error %r" % res)
    return t.value, s.raw, s.value

library_version()
The above code returns
('OBO Version 1.0.1', '\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', '')
What I don't understand is why 's' does not have any value. Does anyone have any ideas? Thanks.
A:
When you cast s to c_char_p you store a new object in t, not a reference. So when you pass t to your function by reference, s doesn't get updated.
UPDATE:
You are indeed correct:
    cast takes two parameters, a ctypes object that is or can be converted
    to a pointer of some kind, and a ctypes pointer type. It returns an
    instance of the second argument, which references the same memory
    block as the first argument.
In order to get a reference to your string buffer, you need to use the following for your cast:
t = cast(s, POINTER(c_char*33))
I have no idea why c_char_p doesn't create a reference where this does, but there you go.
A:
Because library_version requires a char**, they don't want you to allocate the characters (as you're doing with create_string_buffer). Instead, they just want you to pass in a reference to a pointer, so they can return the address of where to find the version string.
So all you need to do is allocate the pointer, and then pass in a reference to that pointer.
The following code should work, although I don't have obo.dll (or know of a suitable replacement) to test it.
from ctypes import *
_OBO_C_DLL = 'obo.dll'
STRING = c_char_p

_stdcall_libraries = dict()
_stdcall_libraries[_OBO_C_DLL] = WinDLL(_OBO_C_DLL)
OBO_VERSION = _stdcall_libraries[_OBO_C_DLL].OBO_VERSION
OBO_VERSION.restype = c_int
OBO_VERSION.argtypes = [POINTER(STRING)]

def library_version():
    s_res = c_char_p()
    res = OBO_VERSION(byref(s_res))
    if res != 0:
        raise Error("OBO error %r" % res)
    return s_res.value

library_version()
[Edit]
I've gone a step further and written my own DLL that implements a possible implementation of OBO_VERSION that does not require an allocated character buffer, and is not subject to any memory leaks.
int OBO_VERSION(char **pp_version)
{
    static char result[] = "Version 2.0";

    *pp_version = result;
    return 0; // success
}
As you can see, OBO_VERSION simply sets the value of *pp_version to a pointer to a null-terminated character array. This is likely how the real OBO_VERSION works. I've tested this against my originally suggested technique above, and it works as prescribed.
Q:
How do you control MySQL timeouts from SQLAlchemy?
What's the right way to control timeouts, from the client, when running against a MySQL database, using SQLAlchemy? The connect_timeout URL parameter seems to be insufficient.
I'm more interested in what happens when the machine that the database is running on, e.g., disappears from the network unexpectedly. I'm not worried about the queries themselves taking too long.
The following script does what you'd expect (i.e., time out after approximately one second) if somehost is unavailable before the while loop is ever reached. But if somehost goes down during the while loop (e.g., try yanking out its network cable after the loop has started), then the timeout seems to take at least 18 seconds. Is there some additional setting or parameter I'm missing?
It's not surprising that the wait_timeout session variable doesn't work, as I think that's a server-side variable. But I threw it in there just to make sure.
from sqlalchemy import *
from sqlalchemy.exc import *
import time
import sys

engine = create_engine("mysql://user:password@somehost/test?connect_timeout=1")
try:
    engine.execute("set session wait_timeout = 1;")
    while True:
        t = time.time()
        print t
        engine.execute("show tables;")
except DBAPIError:
    pass
finally:
    print time.time() - t, "seconds to time out"
A:
This isn't possible due to the way TCP works. If the other computer drops off the network, it will simply stop responding to incoming packets. The "18 seconds" you're seeing is something on your TCP stack timing out due to no response.
The only way you can get your desired behavior is to have the computer generate an "I'm dying" message immediately before it dies, which, if the death is unexpected, is completely impossible.
Have you ever heard of heartbeats? These are packets that high-availability systems send to each other every second or less to let the other one know they still exist. If you want your application to know "immediately" that the server is gone, you first have to decide how long "immediate" is (1 second, 200 ms, etc.) and then design a system (such as heartbeats) to detect when the other system is no longer there.
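For illustration, a minimal application-level heartbeat sketch built on the engine from the question, with SELECT 1 as the ping; note it can only report the failure once the underlying socket gives up, it cannot shorten the TCP timeout itself:
import threading
import time

def start_heartbeat(engine, interval=1.0, on_dead=None):
    # ping the database every `interval` seconds; report the first failure
    def beat():
        while True:
            try:
                engine.execute("SELECT 1")
            except Exception:
                if on_dead is not None:
                    on_dead()
                return
            time.sleep(interval)
    t = threading.Thread(target=beat)
    t.setDaemon(True)   # don't keep the process alive just for the heartbeat
    t.start()
    return t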
A:
I believe you are hitting a totally different error, the dreaded "MySQL server has gone away" error. If I'm right, the solution is to update to a newer MySQLdb driver, as the bug has been patched in the driver.
If for some reason you can't/won't update, you should try the SA fix for this:
db = create_engine('mysql://root@localhost/test', pool_recycle=3600)  # seconds
A:
Could this be a bug in the mysql/python connector?
https://bugs.launchpad.net/myconnpy/+bug/328998
which says the timeout is hard-coded to 10 seconds.
To really see where the breakdown is, you could use a packet sniffer to check out the conversation between the server and the client. wireshark + tcpdump work great for this kind of thing.
Q:
Gracefully-degrading pickling in Python
(You may read this question for some background)
I would like to have a gracefully-degrading way to pickle objects in Python.
When pickling an object, let's call it the main object, sometimes the Pickler raises an exception because it can't pickle a certain sub-object of the main object. For example, an error I've been getting a lot is "can’t pickle module objects." That is because I am referencing a module from the main object.
I know I can write up a little something to replace that module with a facade that would contain the module's attributes, but that would have its own issues(1).
So what I would like is a pickling function that automatically replaces modules (and any other hard-to-pickle objects) with facades that contain their attributes. That may not produce a perfect pickling, but in many cases it would be sufficient.
Is there anything like this? Does anyone have an idea how to approach this?
(1) One issue would be that the module may be referencing other modules from within it.
A:
You can decide and implement how any previously-unpicklable type gets pickled and unpickled: see standard library module copy_reg (renamed to copyreg in Python 3.*).
Essentially, you need to provide a function which, given an instance of the type, reduces it to a tuple -- with the same protocol as the __reduce__ special method (except that __reduce__ takes no arguments, since when provided it's called directly on the object, while the function you provide will take the object as the only argument).
Typically, the tuple you return has 2 items: a callable, and a tuple of arguments to pass to it. The callable must be registered as a "safe constructor" or equivalently have an attribute __safe_for_unpickling__ with a true value. Those items will be pickled, and at unpickling time the callable will be called with the given arguments and must return the unpicked object.
For example, suppose that you want to just pickle modules by name, so that unpickling them just means re-importing them (i.e. suppose for simplicity that you don't care about dynamically modified modules, nested packages, etc, just plain top-level modules). Then:
>>> import sys, pickle, copy_reg
>>> def savemodule(module):
... return __import__, (module.__name__,)
...
>>> copy_reg.pickle(type(sys), savemodule)
>>> s = pickle.dumps(sys)
>>> s
"c__builtin__\n__import__\np0\n(S'sys'\np1\ntp2\nRp3\n."
>>> z = pickle.loads(s)
>>> z
<module 'sys' (built-in)>
I'm using the old-fashioned ASCII form of pickle so that s, the string containing the pickle, is easy to examine: it instructs unpickling to call the built-in import function, with the string sys as its sole argument. And z shows that this does indeed give us back the built-in sys module as the result of the unpickling, as desired.
Now, you'll have to make things a bit more complex than just __import__ (you'll have to deal with saving and restoring dynamic changes, navigate a nested namespace, etc), and thus you'll have to also call copy_reg.constructor (passing as argument your own function that performs this work) before you copy_reg the module-saving function that returns your other function (and, if in a separate run, also before you unpickle those pickles you made using said function). But I hope this simple case helps to show that there's really nothing much to it that's at all "intrinsically" complicated!-)
A:
How about the following, which is a wrapper you can use to wrap some modules (maybe any module) in something that's pickle-able. You could then subclass the Pickler object to check if the target object is a module, and if so, wrap it. Does this accomplish what you desire?
import pickle

class PickleableModuleWrapper(object):
    def __init__(self, module):
        # make a copy of the module's namespace in this instance
        self.__dict__ = dict(module.__dict__)
        # remove anything that's going to give us trouble during pickling
        self.remove_unpickleable_attributes()

    def remove_unpickleable_attributes(self):
        for name, value in self.__dict__.items():
            try:
                pickle.dumps(value)
            except Exception:
                del self.__dict__[name]

p = pickle.dumps(PickleableModuleWrapper(pickle))
wrapped_mod = pickle.loads(p)
A:
Hmmm, something like this?
import sys
import types

for name in dir(someobject):
    attrib = getattr(someobject, name)
    if isinstance(attrib, types.ModuleType):  # is a module
        # put in a facade: either recurse into the module and do the same
        # thing, or just substitute something like str(name + '_module')
        pass
    else:
        pass  # proceed with normal pickle
Obviously, this would go into an extension of the pickle class with a reimplemented dump method...
Q:
Threads in python
I am a beginner in Python scripting. I want to read MS Access database records and write them into an XML file.
The Access database table has more than 20,000 records.
I can do this now, but it takes 4 to 5 minutes, so I implemented threading. But the threaded version takes 5 to 6 minutes, because each thread opens the data source, reads records from the tables, and closes the data source.
I don't know how to solve this problem.
CODE:
class ConfigDataHandler(Thread):
    def __init__(self, dev):
        Thread.__init__(self)
        self.dev = dev

    def run(self):
        db_source_path = r'D:\sampleDB.mdb'
        db_source = win32com.client.Dispatch(r'ADODB.Connection')
        db_source.ConnectionString = ('PROVIDER=Microsoft.Jet.OLEDB.4.0;'
                                      'DATA SOURCE=' + db_source_path + ';')
        db_source.Open()
        query = """SELECT * FROM table"""
        source_rs = win32com.client.Dispatch(r'ADODB.Recordset')
        source_rs.Open(query, db_source, 3, 1)
        f_units = []  # collect the Name column values
        while not source_rs.EOF:
            f_units.append(source_rs.Fields("Name").Value)
            source_rs.MoveNext()
        source_rs.Close()
        db_source.Close()
        out = render(f_units)
        open("D:/test.xml", "w").write(out)

d_list = get_dev_list()
for d in d_list:
    current = ConfigDataHandler(d)
    current.start()
A:
As mentioned please paste your code snippet. First - threads have a synchronisation overhead which is causing multi-threads to run slower.
Second - the msaccess/JET database is very slow and not really suited to multi-threaded use. You might like to consider SQL Server instead - SQL Server Express is free.
Third - it is probably the database slowing down the processing. What indexes do you have? What queries are you making? What does "explain" say?
A:
Undo the threading stuff.
Run the profiler on the original unthreaded code.
Replace the ADODB business with ordinary ODBC (see the sketch after this list).
Run the new code through the profiler.
Post your results for further discussion.
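For step 3, a hedged sketch of the ODBC route using the third-party pyodbc module; the connection string assumes the Jet/Access ODBC driver is installed, and 'table1' stands in for your real table name:
import pyodbc

conn = pyodbc.connect(
    r'DRIVER={Microsoft Access Driver (*.mdb)};DBQ=D:\sampleDB.mdb')
cursor = conn.cursor()
f_units = [row.Name for row in cursor.execute('SELECT Name FROM table1')]
conn.close()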
Q:
How do you add a custom section to the Django admin home page?
In the Django admin each app you have registered with the admin gets its own section. I want to add a custom section for reporting that isn't associated with any app. How do I do that?
A:
To add a section not associated with an app, you'll have to override the admin index template. Create an admin/ directory in your project templates directory, and copy the file django/contrib/admin/templates/admin/index.html into it. Then you can add whatever markup you want to this file. The only downside (unfortunately there's no good way around it at the moment) is that if you upgrade Django, you'll have to be on the lookout for any changes to that index.html file, and copy those changes over into your version as well.
Q:
Programming a Self Learning Music Maker
I want to learn how to program a music application that will analyze songs.
How would I get started in this and is there a library for analyzing soundwaves?
I know C, C++, Java, Python, some assembly, and some Perl.
Related question: Algorithm for music imitation
A:
Composition and analysis of music by computer is a huge field. There are two basic areas in this type of work, which overlap somewhat.
Algorithmic composition is concerned with the generation of music. This can be based on statistical approaches such as Markov chaining, mathematical models employing fractal or chaotic processes, or leveraging techniques from AI such as expert systems, neural networks and genetic algorithms.
Music information retrieval is concerned with identifying common grammars, commonalities and similarity metrics between pieces of music, and identifying uniqueness (sometimes called acoustic fingerprinting).
Many, many libraries, tools and specialised programming languages exist which can help with different parts of these problems. Here's a list of music-related programs and libraries for Python. There is a lot of technology available; you should be able to find something that will do the brunt of the work for you. Reimplementing a 'musical parser' through very low-level frequency analysis tools such as Fourier Transforms, as other answers have suggested, while possible, will be quite difficult and is almost certainly unnecessary.
For further advice and specific questions, the International Society for Music Information Retrieval has a mailing list which you would probably find very helpful.
A:
Once you get past the FFT stuff that Lennart mentioned, you might want to have a look at Markov chains for analyzing intervals between notes, and aggregated patterns.
This is kind of well-trodden ground, but Markov chains have been used in the past to build a statistical model of melodies from various songs, which can be used to generate new melodies. Markov chains can do the same with written English sentences. For an example of how that looks, have a play with the megahal chatterbot to see how Markov chains can produce mangled output that statistically looks like its input (in megahal's case, it looks like English sentences).
You could conceivably mash up the top 100 and have a Markov chain generator blast out the next big hit.
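As a toy sketch of the idea (real systems model much more than single-note transitions):
import random
from collections import defaultdict

def train(melodies):
    # count note-to-note transitions across a corpus of melodies
    table = defaultdict(list)
    for melody in melodies:
        for a, b in zip(melody, melody[1:]):
            table[a].append(b)
    return table

def generate(table, start, length):
    # random-walk the transition table to produce a new melody
    note, out = start, [start]
    while len(out) < length and table[note]:
        note = random.choice(table[note])
        out.append(note)
    return out

corpus = [['C', 'E', 'G', 'E', 'C'], ['C', 'D', 'E', 'G', 'E']]
print generate(train(corpus), 'C', 8)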
On the other hand, you may want to consider the possibility that it is not any quality of the music itself that makes a song popular. Or perhaps it is a quality of music issue combined with marketing.
A:
To analyze soundwaves you need some sort of Fourier transform (FFT), so you can split the song up into its frequencies and how they change over time. There is FFT support in numpy; I haven't used it, so I don't know if it's any good, but it would be a great place to start.
After that you then need to make some sort of statistical analysis of frequencies and patterns, and then I no longer have any clue what I'm talking about.
Cool stuff though, go for it!
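For example, a minimal numpy sketch that recovers the pitch of a pure tone (real music needs windowing, overlapping frames, and so on):
import numpy as np

rate = 44100
t = np.arange(rate) / float(rate)      # one second of samples
signal = np.sin(2 * np.pi * 440 * t)   # a pure 440 Hz tone

spectrum = np.fft.fft(signal)
freqs = np.fft.fftfreq(len(signal), 1.0 / rate)
peak = freqs[np.argmax(np.abs(spectrum[:rate // 2]))]
print peak                             # ~440.0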
A:
You may like to start by looking at the MIDI format; it's reasonably simple compared to the compressed formats, and you can generate some nice things in it.
Depends what you want to do really.
A:
There's the Echo Nest remix API that lets you analyze and manipulate music in Python. Some examples here: Where's the pow and here: You make me quantized miss lizzie. There's a nifty tutorial here: An overview of the Echo Nest API
Q:
Novice needs advice for script that gets data and returns it in a usable format
I have a large number of images that I am putting into web pages. Rather than painstakingly enter all of the image attributes, I thought I could write a script that would do the work for me. I just need a little push in the right direction.
I want the script to get the width and height of each image, then format this into an img tag that would include the url I'm using. I'm thinking that first I would loop through the files in the directory and output the results that I'm looking for to a file. Then I can copy and paste.
Ideally, it should eventually have a GUI where you can choose each file, save the result to the clipboard, then paste it directly into the document. But I believe this is beyond me at this point.
I have a basic understanding of coding and have done a smattering of scripts in Python and PHP.
Where do I start? Can I use Python? Or PHP? Or a different language? PHP has the getimagesize() function that returns the data I need. Does Python have a similar function?
Sorry to be long winded. And thanks for any input.
A:
Check out the PIL:
from PIL import Image
im = Image.open("yourfile.jpg")
print im.size
For looping through files see this tutorial.
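Putting the two together, a sketch of the whole script (the directory name and tag format are just examples):
import os
from PIL import Image

directory = 'img'  # assumed path to the images
for name in sorted(os.listdir(directory)):
    path = os.path.join(directory, name)
    try:
        width, height = Image.open(path).size
    except IOError:  # skip anything PIL can't read as an image
        continue
    print '<img src="%s/%s" width="%d" height="%d" alt="%s" />' % (
        directory, name, width, height, name)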
A:
Maybe the better solution would be not to create a script that generates a list of all the image elements for you to put in your document but rather generate the image elements on the fly in the desired document. This could be done like this:
if ($handle = opendir('/path/to/images')) {
while (false !== ($file = readdir($handle))) {
?>
<img src="http://you.server.com/path/to/images/<?php echo $file ?>" alt="<?php echo $file ?>" />
<?php
}
closedir($handle);
}
Furthermore, I don't understand why you would want to include image height and width in the img elements, as there is no need to specify those. The height and the width are only used to modify the image dimensions; the browser automatically displays the image at its actual size.
But to include the image size the getimagesize() function would work:
if ($handle = opendir('/path/to/images')) {
while (false !== ($file = readdir($handle))) {
$size=getimagesize('/path/to/images/'.$file);
?>
<img <?php echo $size[3] ?> src="http://you.server.com/path/to/images/<?php echo $file ?>" alt="<?php echo $file ?>" />
<?php
}
closedir($handle);
}
more info: http://nl2.php.net/function.getimagesize
A:
You can also use shell identify command from Imagemagick:
for file in dir/*; do
identify -format "Width: %w, Height: %h" $file
done
A:
I think this should do what you want. But it's not necessarily the best way (it's late, I'm tired, etc...):
<?php
$directory = "img"; // The path in which your images are located.
if ($directory) {
$files = scandir($directory); // $files becomes an array of files in the relevant directory.
}
foreach($files as $k => $v) {
if ($v == "." || $v == "..") {
// because I'm not good enough to think of a better way
unset($k);
}
else {
$size = getimagesize($directory . "/" . $v);
$images[] = array('img' => $v, 'w' => $size[0], 'h' => $size[1]); // $size[0] is width, $size[1] is height, both in px.
}
}
unset($files); // I just like to clear as I go. I make more than enough mess as it is without keeping it around.
if ($images) {
foreach($images as $key => $value) {
echo "\n\t\t<img src=\"$directory/" . $value['img'] . "\" width=\"" . $value['w'] . "\" height=\"" . $value['h'] . "\" alt=\"" . $value['img'] . "\" />"; // quote the array keys; width/height attributes take bare numbers
}
}
?>
|
Novice needs advice for script that gets data and returns it in a usable format
|
I have a large number of images that I am putting into web pages. Rather than painstakingly enter all of the image attributes, I thought I could write a script that would do the work for me. I just need a little push in the right direction.
I want the script to get the width and height of each image, then format this into an img tag that would include the url I'm using. I'm thinking that first I would loop through the files in the directory and output the results that I'm looking for to a file. Then I can copy and paste.
Ideally, it should eventually have a GUI where you can choose each file, save the result to the clipboard, then paste it directly into the document. But I believe this is beyond me at this point.
I have a basic understanding of coding and have done a smattering of scripts in Python and PHP.
Where do I start? Can I use Python? Or PHP? Or a different language? PHP has the getimagesize() function that returns the data I need. Does Python have a similar function?
Sorry to be long winded. And thanks for any input.
|
[
"Check out the PIL:\nfrom PIL import Image\nim = Image.open(\"yourfile.jpg\")\nprint im.size\n\nFor looping through files see this tutorial.\n",
"Maybe the better solution would be not to create a script that generates a list of all the image elements for you to put in your document but rather generate the image elements on the fly in the desired document. This could be done like this:\nif ($handle = opendir('/path/to/images')) {\n while (false !== ($file = readdir($handle))) {\n ?>\n <img src=\"http://you.server.com/path/to/images/<?php echo $file ?>\" alt=\"<?php echo $file ?>\" />\n <?php\n }\n closedir($handle);\n}\n\nFurthermore I don't understand why you would want to include image height and width in the img elements as there is no need to specify those. The height and the width are only used to modify the image dimensions as the browser automatically display's the actual size.\nBut to include the image size the getimagesize() function would work:\nif ($handle = opendir('/path/to/images')) {\n while (false !== ($file = readdir($handle))) {\n $size=getimagesize('/path/to/images/'.$file);\n ?>\n <img <?php echo $size[3] ?> src=\"http://you.server.com/path/to/images/<?php echo $file ?>\" alt=\"<?php echo $file ?>\" />\n <?php\n }\n closedir($handle);\n}\n\nmore info: http://nl2.php.net/function.getimagesize\n",
"You can also use shell identify command from Imagemagick:\nfor file in dir/*; do\n identify -format \"Width: %w, Height: %h\" $file\ndone\n\n",
"I think this should do what you want. But it's not necessarily the best way (it's late, I'm tired, etc...):\n<?php\n\n$directory = \"img\"; // The path in which your images are located.\n\nif ($directory) {\n$files = scandir($directory); // $files becomes an array of files in the relevant directory.\n}\n\nforeach($files as $k => $v) {\n if ($v == \".\" || $v == \"..\") {\n // because I'm not good enough to think of a better way\n unset($k);\n }\n else {\n $size = getimagesize($directory . \"/\" . $v);\n $images[] = array('img' => $v, 'w' => $size[0], 'h' => $size[1]); // $size[0] is width, $size[1] is height, both in px.\n }\n}\n\nunset($files); // I just like to clear as I go. I make more than enough mess as it is without keeping it around.\n\n if ($images) {\n foreach($images as $key => $value) {\n echo \"\\n\\t\\t<img src=\\\"$directory/\" . $value[img] . \"\\\" width=\\\"\" . $value[w] . \"px\\\" height=\\\"\" . $value[h] . \"px\\\" alt=\\\"\" . $value[img] . \"\\\" />\";\n }\n }\n\n?>\n\n"
] |
[
3,
1,
0,
0
] |
[] |
[] |
[
"image_processing",
"php",
"python",
"scripting",
"web"
] |
stackoverflow_0001349932_image_processing_php_python_scripting_web.txt
|
Q:
Check whether debug is enabled in a Pylons application
I'm working on a fairly simple Pylons 0.9.7 application. How do I tell, in code, whether or not debugging is enabled? That is, I'm interested in the value of the debug setting under [app:main] in my INI file. More generally, how do I access the other values from there in my code?
A:
# tmp.py
print __debug__
$ python tmp.py
True
$ python -O tmp.py
False
I'm not sure if this holds in Pylons, as I've never used that -- but in "normal" command line Python, debug is enabled if optimizations are not enabled. The -O flag indicates to Python to turn on optimizations.
Actually, there's this snippet from Pylons documentation:
# Display error documents for 401, 403, 404 status codes (and
# 500 when debug is disabled)
if asbool(config['debug']):
app = StatusCodeRedirect(app)
else:
app = StatusCodeRedirect(app, [400, 401, 403, 404, 500])
Looks like config['debug'] is what you want.
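For completeness, here is a minimal sketch of reading that flag (or any other [app:main] value) in your own code; the names are illustrative:
from pylons import config
from paste.deploy.converters import asbool

if asbool(config.get('debug', 'false')):
    # debug-only behaviour goes here
    pass

some_setting = config.get('my_custom_key')  # any other [app:main] value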
|
Check whether debug is enabled in a Pylons application
|
I'm working on a fairly simple Pylons 0.9.7 application. How do I tell, in code, whether or not debugging is enabled? That is, I'm interested in the value of the debug setting under [app:main] in my INI file. More generally, how do I access the other values from there in my code?
|
[
"# tmp.py\nprint __debug__\n\n\n$ python tmp.py\nTrue\n$ python -O tmp.py\nFalse\n\nI'm not sure if this holds in Pylons, as I've never used that -- but in \"normal\" command line Python, debug is enabled if optimizations are not enabled. The -O flag indicates to Python to turn on optimizations.\nActually, there's this snippet from Pylons documentation:\n # Display error documents for 401, 403, 404 status codes (and\n # 500 when debug is disabled)\n if asbool(config['debug']):\n app = StatusCodeRedirect(app)\n else:\n app = StatusCodeRedirect(app, [400, 401, 403, 404, 500])\n\nLooks like config['debug'] is what you want.\n"
] |
[
3
] |
[] |
[] |
[
"configuration",
"pylons",
"python"
] |
stackoverflow_0001350227_configuration_pylons_python.txt
|
Q:
sqlalchemy create a foreign key?
I have a composite PK in table Strings (integer id, varchar(2) lang)
I want to create a FK to ONLY the id half of the PK from other tables. This means I'd have potentially many rows in the Strings table (translations) matching the FK. I just need to store the id, and have referential integrity maintained by the DB.
Is this possible? If so, how?
A:
This is from wiki
The columns in the referencing table
must be the primary key or other
candidate key in the referenced table. The values in one row of the referencing columns must occur in a single row in the referenced table.
Let's say you have this:
id | var
1 | 10
1 | 11
2 | 10
The foreign key must reference exactly one row from the referenced table. This is why usually it references the primary key.
In your case you need to make another table, Table1(id), where you store the ids, and make that column unique/a primary key. The id column in your current table is not unique, so you can't reference it directly in your situation... so you make Table1(id - primary key) and make the id in your current table a foreign key to Table1. Now other tables can create foreign keys to id in Table1, and the composite primary key in your current table stays as it is.
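A rough sketch of that arrangement in SQLAlchemy (table and column names are made up for illustration):
from sqlalchemy import MetaData, Table, Column, Integer, String, ForeignKey

metadata = MetaData()

# new table holding just the ids, so each id occurs exactly once
string_ids = Table('string_ids', metadata,
    Column('id', Integer, primary_key=True))

# the translations: composite PK, with id also a FK back to string_ids
strings = Table('strings', metadata,
    Column('id', Integer, ForeignKey('string_ids.id'), primary_key=True),
    Column('lang', String(2), primary_key=True),
    Column('text', String(255)))

# other tables can now reference the id half alone
labels = Table('labels', metadata,
    Column('id', Integer, primary_key=True),
    Column('string_id', Integer, ForeignKey('string_ids.id')))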
|
sqlalchemy create a foreign key?
|
I have a composite PK in table Strings (integer id, varchar(2) lang)
I want to create a FK to ONLY the id half of the PK from other tables. This means I'd have potentially many rows in the Strings table (translations) matching the FK. I just need to store the id, and have referential integrity maintained by the DB.
Is this possible? If so, how?
|
[
"This is from wiki\n\nThe columns in the referencing table\n must be the primary key or other\n candidate key in the referenced table. The values in one row of the referencing columns must occur in a single row in the referenced table.\n\nLet's say you have this:\nid | var \n1 | 10 \n1 | 11 \n2 | 10\n\nThe foreign key must reference exactly one row from the referenced table. This is why usually it references the primary key. \nIn your case you need to make another Table1(id) where you stored the ids and make the column unique/primary key. The id column in your current table is not unique - you can't use it in your situation... so you make a Table1(id - primary key) and make the id in your current table a foreign key to the Table1. Now you can create foreign keys to id in Table1 and the primary key in your current table is ok. \n"
] |
[
3
] |
[] |
[] |
[
"python",
"sql",
"sqlalchemy"
] |
stackoverflow_0001350121_python_sql_sqlalchemy.txt
|
Q:
SQLAlchemy session query with INSERT IGNORE
I'm trying to do a bulk insert/update with SQLAlchemy. Here's a snippet:
for od in clist:
where = and_(Offer.network_id==od['network_id'],
Offer.external_id==od['external_id'])
o = session.query(Offer).filter(where).first()
if not o:
o = Offer()
o.network_id = od['network_id']
o.external_id = od['external_id']
o.title = od['title']
o.updated = datetime.datetime.now()
payout = od['payout']
countrylist = od['countries']
session.add(o)
session.flush()
for country in countrylist:
c = session.query(Country).filter(Country.name==country).first()
where = and_(OfferPayout.offer_id==o.id,
OfferPayout.country_name==country)
opayout = session.query(OfferPayout).filter(where).first()
if not opayout:
opayout = OfferPayout()
opayout.offer_id = o.id
opayout.payout = od['payout']
if c:
opayout.country_id = c.id
opayout.country_name = country
else:
opayout.country_id = 0
opayout.country_name = country
session.add(opayout)
session.flush()
It looks like my issue was touched on here, http://www.mail-archive.com/[email protected]/msg05983.html, but I don't know how to use "textual clauses" with session query objects and couldn't find much (though admittedly I haven't had as much time as I'd like to search).
I'm new to SQLAlchemy and I'd imagine there are some issues in the code besides the fact that it throws an exception on a duplicate key. For example, doing a flush after every iteration of clist (but I don't know how else to get the o.id value that is used in the subsequent OfferPayout inserts).
Guidance on any of these issues is very appreciated.
A:
The way you should be doing these things is with session.merge().
You should also be using your objects relation properties. So the o above should have o.offerpayout and this a list (of objects) and your offerpayout has offerpayout.country property which is the related countries object.
So the above would look something like
for od in clist:
o = Offer()
o.network_id = od['network_id']
o.external_id = od['external_id']
o.title = od['title']
o.updated = datetime.datetime.now()
payout = od['payout']
countrylist = od['countries']
for country in countrylist:
opayout = OfferPayout()
opayout.payout = od['payout']
country_obj = Country()
country_obj.name = country
opayout.country = country_obj
o.offerpayout.append(opayout)
session.merge(o)
session.flush()
This should work as long as all the primary keys are correct (i.e the country table has a primary key of name). Merge essentially checks the primary keys and if they are there merges your object with one in the database (it will also cascade down the joins).
|
SQLAlchemy session query with INSERT IGNORE
|
I'm trying to do a bulk insert/update with SQLAlchemy. Here's a snippet:
for od in clist:
where = and_(Offer.network_id==od['network_id'],
Offer.external_id==od['external_id'])
o = session.query(Offer).filter(where).first()
if not o:
o = Offer()
o.network_id = od['network_id']
o.external_id = od['external_id']
o.title = od['title']
o.updated = datetime.datetime.now()
payout = od['payout']
countrylist = od['countries']
session.add(o)
session.flush()
for country in countrylist:
c = session.query(Country).filter(Country.name==country).first()
where = and_(OfferPayout.offer_id==o.id,
OfferPayout.country_name==country)
opayout = session.query(OfferPayout).filter(where).first()
if not opayout:
opayout = OfferPayout()
opayout.offer_id = o.id
opayout.payout = od['payout']
if c:
opayout.country_id = c.id
opayout.country_name = country
else:
opayout.country_id = 0
opayout.country_name = country
session.add(opayout)
session.flush()
It looks like my issue was touched on here, http://www.mail-archive.com/[email protected]/msg05983.html, but I don't know how to use "textual clauses" with session query objects and couldn't find much (though admittedly I haven't had as much time as I'd like to search).
I'm new to SQLAlchemy and I'd imagine there are some issues in the code besides the fact that it throws an exception on a duplicate key. For example, doing a flush after every iteration of clist (but I don't know how else to get the o.id value that is used in the subsequent OfferPayout inserts).
Guidance on any of these issues is very appreciated.
|
[
"The way you should be doing these things is with session.merge(). \nYou should also be using your objects relation properties. So the o above should have o.offerpayout and this a list (of objects) and your offerpayout has offerpayout.country property which is the related countries object.\nSo the above would look something like\nfor od in clist:\n\n o = Offer()\n o.network_id = od['network_id']\n o.external_id = od['external_id']\n o.title = od['title']\n o.updated = datetime.datetime.now()\n payout = od['payout']\n countrylist = od['countries']\n\n for country in countrylist:\n opayout = OfferPayout()\n opayout.payout = od['payout']\n country_obj = Country()\n country_obj.name = country\n opayout.country = country_obj\n\n o.offerpayout.append(opayout)\n\n session.merge(o)\n session.flush()\n\nThis should work as long as all the primary keys are correct (i.e the country table has a primary key of name). Merge essentially checks the primary keys and if they are there merges your object with one in the database (it will also cascade down the joins).\n"
] |
[
3
] |
[] |
[] |
[
"python",
"sqlalchemy"
] |
stackoverflow_0001348510_python_sqlalchemy.txt
|
Q:
How do I get PyParsing set up on the Google App Engine?
I saw on the Google App Engine documentation that http://www.antlr.org/ Antlr3 is used as the parsing third party library.
But from what I know, pyparsing seems to be easier to use, and I am only aiming to parse some simple syntax.
Is there an alternative? Can I get pyparsing working on the App Engine?
A:
Pyparsing's runtime footprint is intentionally small for just this purpose. It is a single source file, pyparsing.py, so just drop it in amongst your own source files and parse away!
-- Paul
A:
"Just do it"!-) Get pyparsing.py, e.g. from here, and put it in your app engine app's directory; now you can just import pyparsing in your app code and use it.
For example, tweak the greeting.py from here to be:
from pyparsing import Word, alphas
greet = Word( alphas ) + "," + Word( alphas ) + "!" # <-- grammar defined here
hello = "Hello, World!"
print "Content-type: text/plain\n"
print hello, "->", greet.parseString( hello )
add to your app.yaml right under handlers: the two lines:
- url: /parshello
script: greeting.py
start your app, visit http://localhost:8083/parshello (or whatever port you're running on;-), and you'll see in your browser the plain text output:
Hello, World! -> ['Hello', ',', 'World', '!']
|
How do I get PyParsing set up on the Google App Engine?
|
I saw on the Google App Engine documentation that http://www.antlr.org/ Antlr3 is used as the parsing third party library.
But from what I know, pyparsing seems to be easier to use, and I am only aiming to parse some simple syntax.
Is there an alternative? Can I get pyparsing working on the App Engine?
|
[
"Pyparsing's runtime footprint is intentionally small for just this purpose. It is a single source file, pyparsing.py, so just drop it in amongst your own source files and parse away!\n-- Paul\n",
"\"Just do it\"!-) Get pyparsing.py, e.g. from here, and put it in your app engine app's directory; now you can just import pyparsing in your app code and use it.\nFor example, tweak the greeting.py from here to be:\nfrom pyparsing import Word, alphas\ngreet = Word( alphas ) + \",\" + Word( alphas ) + \"!\" # <-- grammar defined here\nhello = \"Hello, World!\"\nprint \"Content-type: text/plain\\n\"\nprint hello, \"->\", greet.parseString( hello )\n\nadd to your app.yaml right under handlers: the two lines:\n- url: /parshello\n script: greeting.py\n\nstart your app, visit http://localhost:8083/parshello (or whatever port you're running on;-), and you'll see in your browser the plain text output:\nHello, World! -> ['Hello', ',', 'World', '!']\n\n"
] |
[
4,
1
] |
[] |
[] |
[
"google_app_engine",
"pyparsing",
"python"
] |
stackoverflow_0001341137_google_app_engine_pyparsing_python.txt
|
Q:
Parsing an existing config file
I have a config file that is in the following form:
protocol sample_thread {
{ AUTOSTART 0 }
{ BITMAP thread.gif }
{ COORDS {0 0} }
{ DATAFORMAT {
{ TYPE hl7 }
{ PREPROCS {
{ ARGS {{}} }
{ PROCS sample_proc }
} }
} }
}
The real file may not have these exact fields, and I'd rather not have to describe the structure of the data to the parser before it parses.
I've looked for other configuration file parsers, but none that I've found seem to be able to accept a file of this syntax.
I'm looking for a module that can parse a file like this. Any suggestions?
If anyone is curious, the file in question was generated by Quovadx Cloverleaf.
A:
pyparsing is pretty handy for quick and simple parsing like this. A bare minimum would be something like:
import pyparsing
string = pyparsing.CharsNotIn("{} \t\r\n")
group = pyparsing.Forward()
group << ( pyparsing.Group(pyparsing.Literal("{").suppress() +
           pyparsing.ZeroOrMore(group) +
           pyparsing.Literal("}").suppress())
         | string )
toplevel = pyparsing.OneOrMore(group)
Then use it as:
>>> toplevel.parseString(text)
['protocol', 'sample_thread', [['AUTOSTART', '0'], ['BITMAP', 'thread.gif'],
['COORDS', ['0', '0']], ['DATAFORMAT', [['TYPE', 'hl7'], ['PREPROCS',
[['ARGS', [[]]], ['PROCS', 'sample_proc']]]]]]]
From there you can get more sophisticated as you want (parse numbers seperately from strings, look for specific field names etc). The above is pretty general, just looking for strings (defined as any non-whitespace character except "{" and "}") and {} delimited lists of strings.
A:
Taking Brian's pyparsing solution another step, you can create a quasi-deserializer for this format by using the Dict class:
import pyparsing
string = pyparsing.CharsNotIn("{} \t\r\n")
# use Word instead of CharsNotIn, to do whitespace skipping
stringchars = pyparsing.printables.replace("{","").replace("}","")
string = pyparsing.Word( stringchars )
# define a simple integer, plus auto-converting parse action
integer = pyparsing.Word("0123456789").setParseAction(lambda t : int(t[0]))
group = pyparsing.Forward()
group << ( pyparsing.Group(pyparsing.Literal("{").suppress() +
pyparsing.ZeroOrMore(group) +
pyparsing.Literal("}").suppress())
| integer | string )
toplevel = pyparsing.OneOrMore(group)
sample = """
protocol sample_thread {
{ AUTOSTART 0 }
{ BITMAP thread.gif }
{ COORDS {0 0} }
{ DATAFORMAT {
{ TYPE hl7 }
{ PREPROCS {
{ ARGS {{}} }
{ PROCS sample_proc }
} }
} }
}
"""
print toplevel.parseString(sample).asList()
# Now define something a little more meaningful for a protocol structure,
# and use Dict to auto-assign results names
LBRACE,RBRACE = map(pyparsing.Suppress,"{}")
protocol = ( pyparsing.Keyword("protocol") +
string("name") +
LBRACE +
pyparsing.Dict(pyparsing.OneOrMore(
pyparsing.Group(LBRACE + string + group + RBRACE)
) )("parameters") +
RBRACE )
results = protocol.parseString(sample)
print results.name
print results.parameters.BITMAP
print results.parameters.keys()
print results.dump()
Prints
['protocol', 'sample_thread', [['AUTOSTART', 0], ['BITMAP', 'thread.gif'], ['COORDS',
[0, 0]], ['DATAFORMAT', [['TYPE', 'hl7'], ['PREPROCS', [['ARGS', [[]]], ['PROCS', 'sample_proc']]]]]]]
sample_thread
thread.gif
['DATAFORMAT', 'COORDS', 'AUTOSTART', 'BITMAP']
['protocol', 'sample_thread', [['AUTOSTART', 0], ['BITMAP', 'thread.gif'], ['COORDS', [0, 0]], ['DATAFORMAT', [['TYPE', 'hl7'], ['PREPROCS', [['ARGS', [[]]], ['PROCS', 'sample_proc']]]]]]]
- name: sample_thread
- parameters: [['AUTOSTART', 0], ['BITMAP', 'thread.gif'], ['COORDS', [0, 0]], ['DATAFORMAT', [['TYPE', 'hl7'], ['PREPROCS', [['ARGS', [[]]], ['PROCS', 'sample_proc']]]]]]
- AUTOSTART: 0
- BITMAP: thread.gif
- COORDS: [0, 0]
- DATAFORMAT: [['TYPE', 'hl7'], ['PREPROCS', [['ARGS', [[]]], ['PROCS', 'sample_proc']]]]
I think you will get further faster with pyparsing.
-- Paul
A:
I'll try and answer what I think is the missing question(s)...
Configuration files come in many formats. There are well known formats such as *.ini or apache config - these tend to have many parsers available.
Then there are custom formats. That is what yours appears to be (it could be some well-defined format you and I have never seen before - but until you know what that is it doesn't really matter).
I would start with the software this came from and see if they have a programming API that can load/produce these files. If nothing is obvious give Quovadx a call. Chances are someone has already solved this problem.
Otherwise you're probably on your own to create your own parser.
Writing a parser for this format would not be terribly difficult assuming that your sample is representative of a complete example. It's a hierarchy of values where each node can contain either a value or a child hierarchy of values. Once you've defined the basic types that the values can contain the parser is a very simple structure.
You could write this reasonably quickly using something like Lex/Flex or just a straight-forward parser in the language of your choosing.
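For instance, a bare-bones recursive-descent version in Python, just as a sketch with no error handling:
import re

def tokenize(text):
    # braces are their own tokens; everything else is whitespace-separated
    return re.findall(r'[{}]|[^{}\s]+', text)

def parse(tokens):
    out = []
    while tokens:
        tok = tokens.pop(0)
        if tok == '{':
            out.append(parse(tokens))  # recurse into the nested group
        elif tok == '}':
            return out                 # end of this group
        else:
            out.append(tok)
    return out

# parse(tokenize(open('sample.cfg').read())) yields nested lists of strings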
A:
You can easily write a script in Python which will convert it to a Python dict; the format looks almost like hierarchical name-value pairs. The only problem seems to be
COORDS {0 0}, where {0 0} isn't a name-value pair but a list,
so who knows what other such cases are in the format.
I think your best bet is to have a spec for that format and write a simple Python script to read it.
A:
Your config file is very similar to JSON (pretty much, replace all your "{" and "}" with "[" and "]"). Most languages have a built in JSON parser (PHP, Ruby, Python, etc), and if not, there are libraries available to handle it for you.
If you can not change the format of the configuration file, you can read all file contents as a string, and replace all the "{" and "}" characters via whatever means you prefer. Then you can parse the string as JSON, and you're set.
A:
I searched a little on the Cheese Shop, but I didn't find anything helpful for your example. Check the Examples page, and this specific parser (its syntax resembles yours a bit). I think this should help you write your own.
A:
Look into LEX and YACC. A bit of a learning curve, but they can generate parsers for any language.
|
Parsing an existing config file
|
I have a config file that is in the following form:
protocol sample_thread {
{ AUTOSTART 0 }
{ BITMAP thread.gif }
{ COORDS {0 0} }
{ DATAFORMAT {
{ TYPE hl7 }
{ PREPROCS {
{ ARGS {{}} }
{ PROCS sample_proc }
} }
} }
}
The real file may not have these exact fields, and I'd rather not have to describe the structure of the data to the parser before it parses.
I've looked for other configuration file parsers, but none that I've found seem to be able to accept a file of this syntax.
I'm looking for a module that can parse a file like this. Any suggestions?
If anyone is curious, the file in question was generated by Quovadx Cloverleaf.
|
[
"pyparsing is pretty handy for quick and simple parsing like this. A bare minimum would be something like:\nimport pyparsing\nstring = pyparsing.CharsNotIn(\"{} \\t\\r\\n\")\ngroup = pyparsing.Forward()\ngroup << pyparsing.Group(pyparsing.Literal(\"{\").suppress() + \n pyparsing.ZeroOrMore(group) + \n pyparsing.Literal(\"}\").suppress()) \n | string\n\ntoplevel = pyparsing.OneOrMore(group)\n\nThe use it as:\n>>> toplevel.parseString(text)\n['protocol', 'sample_thread', [['AUTOSTART', '0'], ['BITMAP', 'thread.gif'], \n['COORDS', ['0', '0']], ['DATAFORMAT', [['TYPE', 'hl7'], ['PREPROCS', \n[['ARGS', [[]]], ['PROCS', 'sample_proc']]]]]]]\n\nFrom there you can get more sophisticated as you want (parse numbers seperately from strings, look for specific field names etc). The above is pretty general, just looking for strings (defined as any non-whitespace character except \"{\" and \"}\") and {} delimited lists of strings.\n",
"Taking Brian's pyparsing solution another step, you can create a quasi-deserializer for this format by using the Dict class:\nimport pyparsing\n\nstring = pyparsing.CharsNotIn(\"{} \\t\\r\\n\")\n# use Word instead of CharsNotIn, to do whitespace skipping\nstringchars = pyparsing.printables.replace(\"{\",\"\").replace(\"}\",\"\")\nstring = pyparsing.Word( stringchars )\n# define a simple integer, plus auto-converting parse action\ninteger = pyparsing.Word(\"0123456789\").setParseAction(lambda t : int(t[0]))\ngroup = pyparsing.Forward()\ngroup << ( pyparsing.Group(pyparsing.Literal(\"{\").suppress() +\n pyparsing.ZeroOrMore(group) +\n pyparsing.Literal(\"}\").suppress())\n | integer | string )\n\ntoplevel = pyparsing.OneOrMore(group)\n\nsample = \"\"\"\nprotocol sample_thread {\n { AUTOSTART 0 }\n { BITMAP thread.gif }\n { COORDS {0 0} }\n { DATAFORMAT {\n { TYPE hl7 }\n { PREPROCS {\n { ARGS {{}} }\n { PROCS sample_proc }\n } }\n } } \n }\n\"\"\"\n\nprint toplevel.parseString(sample).asList()\n\n# Now define something a little more meaningful for a protocol structure, \n# and use Dict to auto-assign results names\nLBRACE,RBRACE = map(pyparsing.Suppress,\"{}\")\nprotocol = ( pyparsing.Keyword(\"protocol\") + \n string(\"name\") + \n LBRACE + \n pyparsing.Dict(pyparsing.OneOrMore(\n pyparsing.Group(LBRACE + string + group + RBRACE)\n ) )(\"parameters\") + \n RBRACE )\n\nresults = protocol.parseString(sample)\nprint results.name\nprint results.parameters.BITMAP\nprint results.parameters.keys()\nprint results.dump()\n\nPrints\n['protocol', 'sample_thread', [['AUTOSTART', 0], ['BITMAP', 'thread.gif'], ['COORDS', \n\n[0, 0]], ['DATAFORMAT', [['TYPE', 'hl7'], ['PREPROCS', [['ARGS', [[]]], ['PROCS', 'sample_proc']]]]]]]\nsample_thread\nthread.gif\n['DATAFORMAT', 'COORDS', 'AUTOSTART', 'BITMAP']\n['protocol', 'sample_thread', [['AUTOSTART', 0], ['BITMAP', 'thread.gif'], ['COORDS', [0, 0]], ['DATAFORMAT', [['TYPE', 'hl7'], ['PREPROCS', [['ARGS', [[]]], ['PROCS', 'sample_proc']]]]]]]\n- name: sample_thread\n- parameters: [['AUTOSTART', 0], ['BITMAP', 'thread.gif'], ['COORDS', [0, 0]], ['DATAFORMAT', [['TYPE', 'hl7'], ['PREPROCS', [['ARGS', [[]]], ['PROCS', 'sample_proc']]]]]]\n - AUTOSTART: 0\n - BITMAP: thread.gif\n - COORDS: [0, 0]\n - DATAFORMAT: [['TYPE', 'hl7'], ['PREPROCS', [['ARGS', [[]]], ['PROCS', 'sample_proc']]]]\n\nI think you will get further faster with pyparsing.\n-- Paul\n",
"I'll try and answer what I think is the missing question(s)...\nConfiguration files come in many formats. There are well known formats such as *.ini or apache config - these tend to have many parsers available.\nThen there are custom formats. That is what yours appears to be (it could be some well-defined format you and I have never seen before - but until you know what that is it doesn't really matter).\nI would start with the software this came from and see if they have a programming API that can load/produce these files. If nothing is obvious give Quovadx a call. Chances are someone has already solved this problem.\nOtherwise you're probably on your own to create your own parser.\nWriting a parser for this format would not be terribly difficult assuming that your sample is representative of a complete example. It's a hierarchy of values where each node can contain either a value or a child hierarchy of values. Once you've defined the basic types that the values can contain the parser is a very simple structure.\nYou could write this reasonably quickly using something like Lex/Flex or just a straight-forward parser in the language of your choosing.\n",
"You can easily write a script in python which will convert it to python dict, format looks almost like hierarchical name value pairs, only problem seems to be\nCoards {0 0}, where {0 0} isn't a name value pair, but a list\nso who know what other such cases are in the format\nI think your best bet is to have spec for that format and write a simple python script to read it.\n",
"Your config file is very similar to JSON (pretty much, replace all your \"{\" and \"}\" with \"[\" and \"]\"). Most languages have a built in JSON parser (PHP, Ruby, Python, etc), and if not, there are libraries available to handle it for you.\nIf you can not change the format of the configuration file, you can read all file contents as a string, and replace all the \"{\" and \"}\" characters via whatever means you prefer. Then you can parse the string as JSON, and you're set.\n",
"I searched a little on the Cheese Shop, but I didn't find anything helpful for your example. Check the Examples page, and this specific parser ( it's syntax resembles yours a bit ). I think this should help you write your own.\n",
"Look into LEX and YACC. A bit of a learning curve, but they can generate parsers for any language.\n"
] |
[
10,
2,
1,
1,
1,
0,
0
] |
[
"Maybe you could write a simple script that will convert your config into xml file and then read it just using lxml, Beatuful Soup or anything else? And your converter could use PyParsing or regular expressions for example.\n"
] |
[
-2
] |
[
"config",
"parsing",
"python"
] |
stackoverflow_0000996183_config_parsing_python.txt
|
Q:
IBOutlet in Python
Is there a way to use a Cocoa IBOutlet in Python? Or do I need to do this in ObjC? Thanks in advance.
A:
This article (found by searching Google for “pyobjc iboutlet”) has an example. Basically, you create objc.IBOutlet objects and set them as the values of class variables.
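Roughly, in PyObjC that looks like the following (class and outlet names are illustrative):
import objc
from Foundation import NSObject

class MyController(NSObject):
    # class variable that Interface Builder can see and connect to
    textField = objc.IBOutlet()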
|
IBOutlet in Python
|
Is there a way to use a Cocoa IBOutlet in Python? Or do I need to do this in ObjC? Thanks in advance.
|
[
"This article (found by searching Google for “pyobjc iboutlet”) has an example. Basically, you create objc.IBOutlet objects and set them as the values of class variables.\n"
] |
[
1
] |
[] |
[] |
[
"cocoa",
"objective_c",
"python"
] |
stackoverflow_0001351480_cocoa_objective_c_python.txt
|
Q:
Strategy for maintaining complex filter states?
I need to maintain a list of filtered and sorted objects, preferably in a generic manner, that can be used in multiple views. This is necessary so I can generate next, prev links, along with some other very useful things for the user.
Examples of filters:
field__isnull=True
field__exact="so"
field__field__isnull=False
Additionally, after the filtered query set is built, ordering may be applied by any of the fields.
My current solution is to use a FilterSpec class containing a collection of filters, along with an initial query set. This class is then serialized and passed to a view.
Consider a view with 25 dynamically filtered items. Each item in the view has a link to get a detailed view of the item. To each of these links, the serialized FilterSpec object of the current list is appended. So you end up with huge urls. Worse, the same huge filter is appended to all 25 links!
Another option is to store the FilterSpec in the session, but then you run into problems of when to delete the FilterSpec. Next you find all your views getting cluttered with code trying to determine if the filter should be deleted in preparation for a new list of objects.
I'm sure this problem has been solved before, so I'd love to hear other solutions that you guys have come up with.
A:
You've identified the two options for maintaining user-specific state in a web application: store it in cookies/session, or pass it around on URLs. I don't believe there's a third "silver bullet" waiting in the wings to solve your problem.
The URL query-string option has the advantage that a particular view state can be bookmarked, sent as an emailed URL, &c. It also may keep your view code a bit simpler, but at the cost of some extra template code to ensure the proper query-string always gets passed along on links.
In part your preferred solution may depend on the behavior you want. For instance, if a user bookmarks (or emails to a friend) the URL for a detail view of an item, do you want that URL to simply refer to the item itself, or to always carry along information about what list that item came out of? If the former, use session data. If the latter, use URLs with query strings.
In either case, I'm confident that the code that you find "cluttering all your views" can be refactored to be elegant, DRY, and as invisible as you want it to be. Decorators and/or class-based views might help.
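As a rough illustration of the query-string route (build_filters and Item are hypothetical stand-ins for your own filter-parsing helper and model):
from django.shortcuts import render_to_response

def item_list(request):
    items = Item.objects.filter(**build_filters(request.GET))
    return render_to_response('items.html', {
        'items': items,
        'filter_qs': request.GET.urlencode(),  # carry the current filters forward
    })

# template: <a href="/items/{{ item.id }}/?{{ filter_qs }}">{{ item }}</a>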
A:
Depending on what you want to do, you'll want to either create a custom manager or add a new manager method.
In this example, you add a new manager that selects blog posts that are marked as published with a date after the current datetime.
from django.db import models
from datetime import datetime
class PublishedPostManager(models.Manager):
def get_query_set(self):
return super(PublishedPostManager, self).get_query_set().filter(published=True, time__lt=datetime.now())
class Post(models.Model):
title = models.CharField(max_length=128)
body = models.TextField()
published = models.BooleanField(default=False)
time = models.DateTimeField()
objects = models.Manager() # Needed to ensure that the default manager is still available
published = PublishedPostManager()
Then, instead of Post.objects.all(), you can use Post.published.all() to fetch all records. The normal QuerySet methods are available as well:
Post.published.count()
Post.published.select_related().filter(spam__iexact='eggs')
# etc
And of course, you can still use the default manager:
Post.objects.all()
|
Strategy for maintaining complex filter states?
|
I need to maintain a list of filtered and sorted objects, preferably in a generic manner, that can be used in multiple views. This is necessary so I can generate next, prev links, along with some other very useful things for the user.
Examples of filters:
field__isnull=True
field__exact="so"
field__field__isnull=False
Additionally, after the filtered query set is built, ordering may be applied by any of the fields.
My current solution is to use a FilterSpec class containing a collection of filters, along with an initial query set. This class is then serialized and passed to a view.
Consider a view with 25 dynamically filtered items. Each item in the view has a link to get a detailed view of the item. To each of these links, the serialized FilterSpec object of the current list is appended. So you end up with huge urls. Worse, the same huge filter is appended to all 25 links!
Another option is to store the FilterSpec in the session, but then you run into problems of when to delete the FilterSpec. Next you find all your views getting cluttered with code trying to determine if the filter should be deleted in preparation for a new list of objects.
I'm sure this problem has been solved before, so I'd love to hear other solutions that you guys have come up with.
|
[
"You've identified the two options for maintaining user-specific state in a web application: store it in cookies/session, or pass it around on URLs. I don't believe there's a third \"silver bullet\" waiting in the wings to solve your problem.\nThe URL query-string option has the advantage that a particular view state can be bookmarked, sent as an emailed URL, &c. It also may keep your view code a bit simpler, but at the cost of some extra template code to ensure the proper query-string always gets passed along on links.\nIn part your preferred solution may depend on the behavior you want. For instance, if a user bookmarks (or emails to a friend) the URL for a detail view of an item, do you want that URL to simply refer to the item itself, or to always carry along information about what list that item came out of? If the former, use session data. If the latter, use URLs with query strings.\nIn either case, I'm confident that the code that you find \"cluttering all your views\" can be refactored to be elegant, DRY, and as invisible as you want it to be. Decorators and/or class-based views might help.\n",
"Depending on what you want to do, you'll want to either create a custom manager or add a new manager method.\nIn this example, you add a new manager that selects blog posts that are marked as published with a date after the current datetime.\nfrom django.db import models\nfrom datetime import datetime\n\nclass PublishedPostManager(models.Manager):\n def get_query_set(self):\n return super(PublishedPostManager, self).get_query_set().filter(published=True, time__lt=datetime.now())\n\nclass Post(models.Model):\n title = models.CharField(max_length=128)\n body = models.TextField()\n published = models.BooleanField(default=False)\n time = models.DateTimeField()\n\n objects = models.Manager() # Needed to ensure that the default manager is still available\n published = PublishedPostManager()\n\nThen, instead of Post.objects.all(), you can use Post.published.all() to fetch all records. The normal QuerySet methods are available as well:\nPost.published.count()\nPost.published.select_related().filter(spam__iexact='eggs')\n# etc\n\nAnd of course, you can still use the default manager:\nPost.objects.all()\n\n"
] |
[
2,
1
] |
[] |
[] |
[
"django",
"http",
"python"
] |
stackoverflow_0001349840_django_http_python.txt
|
Q:
What are some of the core conceptual differences between C# and Python?
I'm new to Python, coming from a C# background and I'm trying to get up to speed. I understand that Python is dynamically typed, whereas C# is strongly-typed. -> see comments. What conceptual obstacles should I watch out for when attempting to learn Python? Are there concepts for which no analog exists in Python? How important is object-oriented analysis?
I believe answers to these, and any other questions you might be able to think of, would speed up my understanding of Python beyond the Nike mentality ("just do it").
A little more context: My company is moving from ASP.NET C# Web Forms to Django. I've gone through the Django tutorial and it was truly great. I need to get up to speed in about 2 weeks time (ridiculous maybe? LOL)
Thank you all for your time and efforts to respond to a realllly broad question(s).
A:
" I understand that Python is dynamically typed, whereas C# is strongly-typed. "
This is weirdly wrong.
Python is strongly typed. A list or integer or dictionary is always of the given type. The object's type cannot be changed.
Python variables are not strongly typed. Indeed, Python variables are just labels on objects. Variables are not declared; hence the description of Python as "dynamic".
C# is statically typed. The variables are declared to the compiler to be of a specific type. The code is generated based on certain knowledge about the variables' use at run-time.
Python is "interpreted" -- things are done at run-time -- little is assumed. [Technically, the Python source is compiled into byte code and the byte code is interpreted. Some folks think this is an important distinction.]
C# is compiled -- the compiler generates code based on the declared assumptions.
What conceptual obstacles should I watch out for when attempting to learn Python?
None. If you insist that Python should be like something else; or you insist that something else is more intuitive then you've polluted your own thinking with inappropriate concepts.
No programming language has obstacles. We bring our own obstacles when we impose things on the language.
Are there concepts for which no analog exists in Python?
Since Python has object-oriented, procedural and functional elements, you'd be hard-pressed to find something missing from Python.
How important is object-oriented analysis?
OO analysis helps all phases of software development -- even if you aren't doing an OO implementation. This is unrelated to Python and should be a separate question.
I need to get up to speed in about 2 weeks time (ridiculous maybe?)
Perhaps not. If you start with a fresh, open mind, then Python can be learned in a week or so of diligent work.
If, on the other hand, you compare and contrast Python with C#, it can take you years to get past your C# bias and learn Python. Don't translate C# to Python. Don't translate Python to C#.
Don't go to the well with a full bucket.
A:
duck typing
I think the main thing that sets c#/java apart from python is that there is often no need for interfaces. This is because python has duck typing.
class Duck(object):
def quack(self):
print "quack"
class Cat(object):
"""Cat that specializes in hunting ducks"""
def quack(self):
print "quack"
duck = Duck()
cat = Cat()
def quacker(something_that_quacks):
something_that_quacks.quack()
quacker(cat) #quack
quacker(duck) #quack
As long as an object has the method quack, it's OK to use it to call quacker. Duck typing also makes design patterns easier to implement, because you don't need to write interfaces and make sure objects are of the same type.
A:
There are a lot of differences between C# and Python; rather than dwell on the individual differences, it's probably better just to look at how Python works using a guide such as Dive Into Python. And remember, while Python allows you to do OOP very well, it doesn't constrain you to OOP. There are times when just plain functions are good enough (Django views being a good example).
There are also numerous conceptual differences between WebForms and Django. Django is more in tune with HTTP - there's no muddling of what happens client-side and what happens server-side. With a typical WebForms application, client side events often trigger server-side code using postbacks. Even with the ASP.NET Ajax framework, it's an environment which offers less control than you sometimes need. In Django, you achieve the same effect using client-side libraries using e.g. YUI or jQuery and make Ajax calls yourself. Even though that kind of approach doesn't hold your hand as much as say the ASP.NET approach, you should be more productive with Django and Python to make the latter an overall net positive. ASP.NET aims to make things more familiar for developers accustomed to WinForms and other desktop development environments; while this is a perfectly reasonable approach for Microsoft to have taken (and they're not the only ones - for example, Java has JSF), it's not really in tune with HTTP and REST to the same extent. For an example of this, just take a look at how constraining ASP.NET URLs are (pre ASP.NET MVC) as compared to Django URLs.
Just my 2 cents' worth :-)
A:
You said that Python is dynamically typed and C# is strongly typed, but this isn't true. Strong vs. weak typing and static vs. dynamic typing are orthogonal. Strong typing means str + int doesn't coerce one of the operands, so in this regard both Python and C# are strongly typed (whereas PHP or C is weakly typed). Python is dynamically typed, which means names don't have a defined type at compile time, whereas in C# they do.
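A quick interpreter session illustrating the two axes:
>>> "3" + 2          # strong typing: no silent coercion between types
Traceback (most recent call last):
  ...
TypeError: cannot concatenate 'str' and 'int' objects
>>> x = 2            # dynamic typing: the *name* x carries no type
>>> x = "two"        # rebinding it to a str is perfectly fine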
A:
The conceptual differences are important, but mostly in how they result in different attitudes.
Most important of those are "duck typing". Ie, forget what type things are, you don't need to care. You only need to care about what attributes and methods objects have. "If it looks like a duck and walks like a duck, it's a duck". Usually, these attitude changes come naturally after a while.
The biggest conceptual hurdles seems to be
The significant indenting. But the only ones who hate it are people who have, or are forced to work with, people who change their editor's tab expansion to something other than the default 8.
No compiler, and hence no type testing at the compile stage. Many people coming from statically typed languages believe that the type checking during compilation finds many bugs. It doesn't, in my experience.
|
What are some of the core conceptual differences between C# and Python?
|
I'm new to Python, coming from a C# background and I'm trying to get up to speed. I understand that Python is dynamically typed, whereas C# is strongly-typed. -> see comments. What conceptual obstacles should I watch out for when attempting to learn Python? Are there concepts for which no analog exists in Python? How important is object-oriented analysis?
I believe answers to these, and any other questions you might be able to think of, would speed up my understanding of Python beyond the Nike mentality ("just do it").
A little more context: My company is moving from ASP.NET C# Web Forms to Django. I've gone through the Django tutorial and it was truly great. I need to get up to speed in about 2 weeks time (ridiculous maybe? LOL)
Thank you all for your time and efforts to respond to a realllly broad question(s).
|
[
"\" I understand that Python is dynamically typed, whereas C# is strongly-typed. \"\nThis is weirdly wrong.\n\nPython is strongly typed. A list or integer or dictionary is always of the given type. The object's type cannot be changed.\nPython variables are not strongly typed. Indeed, Python variables are just labels on objects. Variables are not declared; hence the description of Python as \"dynamic\". \nC# is statically typed. The variables are declared to the compiler to be of a specific type. The code is generated based on certain knowledge about the variables use at run-time.\n\nPython is \"interpreted\" -- things are done at run-time -- little is assumed. [Technically, the Python source is compiled into byte code and the byte code is interpreted. Some folks think this is an important distinction.]\nC# is compiled -- the compiler generates code based on the declared assumptions.\n\nWhat conceptual obstacles should I watch out for when attempting to learn Python?\nNone. If you insist that Python should be like something else; or you insist that something else is more intuitive then you've polluted your own thinking with inappropriate concepts.\nNo programming language has obstacles. We bring our own obstacles when we impose things on the language.\nAre there concepts for which no analog exists in Python?\nSince Python has object-oriented, procedural and functional elements, you'd be hard-pressed to find something missing from Python.\nHow important is object-oriented analysis?\nOO analysis helps all phases of software development -- even if you aren't doing an OO implementation. This is unrelated to Python and should be a separate question.\nI need to get up to speed in about 2 weeks time (ridiculous maybe?)\nPerhaps not. If you start with a fresh, open mind, then Python can be learned in a week or so of diligent work.\nIf, on the other hand, you compare and contrast Python with C#, it can take you years to get past your C# bias and learn Python. Don't translate C# to Python. Don't translate Python to C#. \nDon't go to the well with a full bucket.\n",
"duck typing\nI think the main thing that sets c#/java from python is that there is often no need for interfaces. This is because python has ducktyping.\nclass Duck(object):\n def quack(self):\n print \"quack\"\n \nclass Cat(object):\n \"\"\"Cat that specializes in hunting ducks\"\"\"\n def quack(self):\n print \"quack\"\n \nduck = Duck()\ncat = Cat()\n\ndef quacker(something_that_quacks):\n something_that_quacks.quack()\n\nquacker(cat) #quack\nquacker(duck) #quack\n\nAs long as an object has the method quack its OK to use it to call quacker. Duck typing also makes design patterns more easy to implement. Because you don't need to write interfaces and make sure objects are of the same type.\n",
"There are a lot of differences between C# and Python; rather than dwell on the individual differences, it's probably better just to look at how Python works using a guide such as Dive Into Python. And remember, while Python allows you to do OOP very well, it doesn't constrain you to OOP. There are times when just plain functions are good enough (Django views being a good example).\nThere are also numerous conceptual differences between WebForms and Django. Django is more in tune with HTTP - there's no muddling of what happens client-side and what happens server-side. With a typical WebForms application, client side events often trigger server-side code using postbacks. Even with the ASP.NET Ajax framework, it's an environment which offers less control than you sometimes need. In Django, you achieve the same effect using client-side libraries using e.g. YUI or jQuery and make Ajax calls yourself. Even though that kind of approach doesn't hold your hand as much as say the ASP.NET approach, you should be more productive with Django and Python to make the latter an overall net positive. ASP.NET aims to make things more familiar for developers accustomed to WinForms and other desktop development environments; while this is a perfectly reasonable approach for Microsoft to have taken (and they're not the only ones - for example, Java has JSF), it's not really in tune with HTTP and REST to the same extent. For an example of this, just take a look at how constraining ASP.NET URLs are (pre ASP.NET MVC) as compared to Django URLs.\nJust my 2 cents' worth :-)\n",
"You said that Python is dynamically typed and C# is strongly typed but this isn't true. Strong vs. weak typing and static vs. dynamic typing are orthagonal. Strong typing means str + int doesn't coerce one of the opperands, so in this regard both Python and C# are strongly typed (whereas PHP or C is weakly typed). Python is dynamically typed which means names don't have a defined type at compile time, whereas in C# they do.\n",
"The conceptual differences are important, but mostly in how they result in different attitudes. \nMost important of those are \"duck typing\". Ie, forget what type things are, you don't need to care. You only need to care about what attributes and methods objects have. \"If it looks like a duck and walks like a duck, it's a duck\". Usually, these attitude changes come naturally after a while.\nThe biggest conceptual hurdles seems to be\n\nThe significant indenting. But the only ones who hate it are people who have, or are forced to work with, people who change their editors tab expansion from something other than the default 8.\nNo compiler, and hence no type testing at the compile stage. Many people coming from statically typed languages believe that the type checking during compilation finds many bugs. It doesn't, in my experience. \n\n"
] |
[
9,
4,
3,
2,
1
] |
[] |
[] |
[
"asp.net",
"c#",
"django",
"programming_languages",
"python"
] |
stackoverflow_0001351227_asp.net_c#_django_programming_languages_python.txt
|
Q:
Show/hide a plot's legend
I'm relatively new to python and am developing a pyqt GUI. I want to provide a checkbox option to show/hide a plot's legend. Is there a way to hide a legend?
I've tried using pyplot's '_nolegend_' and it appears to work on select legend entries but it creates a ValueError if applied to all entries.
I can brute force the legend to hide by clearing and redrawing the whole plot but... it's a terrible thing to do, especially with large data sets.
Appreciate any help with this.
A:
Here's something you can try on the command line:
plot([3,1,4,1],label='foo')
lgd=legend()
# when you want it to be invisible:
lgd.set_visible(False)
draw()
# when you want it to be visible:
lgd.set_visible(True)
draw()
In a GUI program it's best to avoid pyplot and use the object-oriented API, i.e., ax.legend and canvas.draw.
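A rough sketch of what that looks like, assuming the figure is embedded in a Qt canvas (the checkbox wiring is illustrative):
from matplotlib.figure import Figure
from matplotlib.backends.backend_qt4agg import FigureCanvasQTAgg as FigureCanvas

fig = Figure()
canvas = FigureCanvas(fig)   # the widget you'd place in your layout
ax = fig.add_subplot(111)
ax.plot([3, 1, 4, 1], label='foo')
leg = ax.legend()

def toggle_legend(checked):  # slot for a QCheckBox's toggled signal
    leg.set_visible(checked)
    canvas.draw()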
|
Show/hide a plot's legend
|
I'm relatively new to python and am developing a pyqt GUI. I want to provide a checkbox option to show/hide a plot's legend. Is there a way to hide a legend?
I've tried using pyplot's '_nolegend_' and it appears to work on select legend entries but it creates a ValueError if applied to all entries.
I can brute force the legend to hide by clearing and redrawing the whole plot but... it's a terrible thing to do, especially with large data sets.
Appreciate any help with this.
|
[
"Here's something you can try on the command line:\nplot([3,1,4,1],label='foo')\nlgd=legend()\n\n# when you want it to be invisible:\nlgd.set_visible(False)\ndraw()\n\n# when you want it to be visible:\nlgd.set_visible(True)\ndraw()\n\nIn a GUI program it's best to avoid pyplot and use the object-oriented API, i.e., ax.legend and canvas.draw.\n"
] |
[
10
] |
[] |
[] |
[
"matplotlib",
"python"
] |
stackoverflow_0001349202_matplotlib_python.txt
|
Q:
What does matrix**2 mean in python/numpy?
I have a python ndarray temp in some code I'm reading that suffers this:
x = temp**2
Is this the dot square (ie, equivalent to m.*m) or the matrix square (ie m must be a square matrix)? In particular, I'd like to know whether I can get rid of the transpose in this code:
temp = num.transpose(whatever)
num.sum(temp**2,axis=1))
and turn it into this:
num.sum(whatever**2,axis=0)
That will save me at least 0.1ms, and is clearly worth my time.
Thanks! The ** operator is ungooglable and I know nothing!
a
A:
It's just the square of each element.
from numpy import *
a = arange(4).reshape((2,2))
print a**2
prints
[[0 1]
[4 9]]
A:
You should read NumPy for Matlab Users. The elementwise power operation is mentioned there, and you can also see that in numpy, some operators apply differently to array and matrix.
>>> from numpy import *
>>> a = arange(4).reshape((2,2))
>>> print a**2
[[0 1]
[4 9]]
>>> print matrix(a)**2
[[ 2 3]
[ 6 11]]
A:
** is the raise-to-power operator in Python, so x**2 means "x squared" in Python -- including numpy. Such operations in numpy always apply element by element, so x**2 squares each element of array x (whatever number of dimensions) just like, say, x*2 would double each element, or x+2 would increment each element by two (in each case, x proper is unaffected -- the result is a new temporary array of the same shape as x!).
Edit: as @kaizer.ze points out, while what I wrote holds for numpy.array objects, it doesn't apply to numpy.matrix objects, where multiplication means matrix multiplication rather than element by element operation like for array (and similarly for raising to power) -- indeed, that's the key difference between the two types. As the Scipy tutorial puts it, for example:
When we use numpy.array or
numpy.matrix there is a difference.
A*x will be in the latter case matrix
product, not elementwise product as
with array.
i.e., as the numpy reference puts it:
A matrix is a specialized 2-d array
that retains its 2-d nature through
operations. It has certain special
operators, such as * (matrix
multiplication) and ** (matrix power).
|
What does matrix**2 mean in python/numpy?
|
I have a python ndarray temp in some code I'm reading that suffers this:
x = temp**2
Is this the dot square (ie, equivalent to m.*m) or the matrix square (ie m must be a square matrix)? In particular, I'd like to know whether I can get rid of the transpose in this code:
temp = num.transpose(whatever)
num.sum(temp**2,axis=1))
and turn it into this:
num.sum(whatever**2,axis=0)
That will save me at least 0.1ms, and is clearly worth my time.
Thanks! The ** operator is ungooglable and I know nothing!
a
|
[
"It's just the square of each element.\nfrom numpy import *\na = arange(4).reshape((2,2))\nprint a**2\n\nprints\n[[0 1]\n [4 9]]\n\n",
"You should read NumPy for Matlab Users. The elementwise power operation is mentioned there, and you can also see that in numpy, some operators apply differently to array and matrix.\n>>> from numpy import *\n>>> a = arange(4).reshape((2,2))\n>>> print a**2\n[[0 1]\n [4 9]]\n>>> print matrix(a)**2\n[[ 2 3]\n [ 6 11]]\n\n",
"** is the raise-to-power operator in Python, so x**2 means \"x squared\" in Python -- including numpy. Such operations in numpy always apply element by element, so x**2 squares each element of array x (whatever number of dimensions) just like, say, x*2 would double each element, or x+2 would increment each element by two (in each case, x proper is unaffected -- the result is a new temporary array of the same shape as x!).\nEdit: as @kaizer.ze points out, while what I wrote holds for numpy.array objects, it doesn't apply to numpy.matrix objects, where multiplication means matrix multiplication rather than element by element operation like for array (and similarly for raising to power) -- indeed, that's the key difference between the two types. As the Scipy tutorial puts it, for example:\n\nWhen we use numpy.array or\n numpy.matrix there is a difference.\n A*x will be in the latter case matrix\n product, not elementwise product as\n with array.\n\ni.e., as the numpy reference puts it:\n\nA matrix is a specialized 2-d array\n that retains its 2-d nature through\n operations. It has certain special\n operators, such as * (matrix\n multiplication) and ** (matrix power).\n\n"
] |
[
15,
6,
5
] |
[] |
[] |
[
"numpy",
"python"
] |
stackoverflow_0001350174_numpy_python.txt
|
Q:
Running doctests through iPython and pseudo-consoles
I've got a fairly basic doctestable file:
class Foo():
"""
>>> 3+2
5
"""
if __name__ in ("__main__", "__console__"):
import doctest
doctest.testmod(verbose=True)
which works as expected when run directly through python.
However, in iPython, I get
1 items had no tests:
__main__
0 tests in 1 items.
0 passed and 0 failed.
Test passed.
Since this is part of a Django project and will need access to all of the appropriate variables and such that manage.py sets up, I can also run it through a modified command, which uses code.InteractiveConsole, one result of which is __name__ gets set to '__console__'.
With the code above, I get the same result as with iPython. I tried changing the last line to this:
this = __import__(__name__)
doctest.testmod(this, verbose=True)
and I get an ImportError on __console__, which makes sense, I guess. This has no effect on either python or ipython.
So, I'd like to be able to run doctests successfully through all three of these methods, especially the InteractiveConsole one, since I expect to be needing Django pony magic fairly soon.
Just for clarification, this is what I'm expecting:
Trying:
3+2
Expecting:
5
ok
1 items had no tests:
__main__
1 items passed all tests:
1 tests in __main__.Foo
1 tests in 2 items.
1 passed and 0 failed.
Test passed.
A:
The root problem is that ipython plays weird tricks with __main__ (through its own FakeModule module) so that, by the time doctest is introspecting that "alleged module" through its __dict__, Foo is NOT there -- so doctest doesn't recurse into it.
Here's one solution:
class Foo():
"""
>>> 3+2
5
"""
if __name__ in ("__main__", "__console__"):
import doctest, inspect, sys
m = sys.modules['__main__']
m.__test__ = dict((n,v) for (n,v) in globals().items()
if inspect.isclass(v))
doctest.testmod(verbose=True)
This DOES produce, as requested:
$ ipython dot.py
Trying:
3+2
Expecting:
5
ok
1 items had no tests:
__main__
1 items passed all tests:
1 tests in __main__.__test__.Foo
1 tests in 2 items.
1 passed and 0 failed.
Test passed.
Python 2.5.1 (r251:54863, Feb 6 2009, 19:02:12)
[[ snip snip ]]
In [1]:
Just setting global __test__ doesn't work, again because setting it as a global of what you're thinking of as __main__ does NOT actually place it in the __dict__ of the actual object that gets recovered by m = sys.modules['__main__'], and the latter is exactly the expression doctest is using internally (actually it uses sys.modules.get, but the extra precaution is not necessary here since we do know that __main__ exists in sys.modules... it's just NOT the object you expect it to be!-).
Also, just setting m.__test__ = globals() directly does not work either, for a different reason: doctest checks that the values in __test__ are strings, functions, classes, or modules, and without some selection you cannot guarantee that globals() will satisfy that condition (in fact it won't). Here I'm selecting just classes, if you also want functions or whatnot you can use an or in the if clause in the genexp within the dict call.
I don't know exactly how you're running a Django shell that's able to execute your script (as I believe python manage.py shell doesn't accept arguments, you must be doing something else, and I can't guess exactly what!-), but a similar approach should help (whether your Django shell is using ipython, the default when available, or plain Python): appropriately setting __test__ in the object you obtain as sys.modules['__main__'] (or __console__, if that's what you're then passing on to doctest.testmod, I guess) should work, as it mimics what doctest will then be doing internally to locate your test strings.
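As a side note, __test__ also works on its own, without any of the ipython complications. A minimal sketch (the function and test names here are made up):
import doctest

def add(a, b):
    return a + b

# doctest.testmod() also collects doctests from a module-level dict
# named __test__; values must be strings, functions, classes or modules.
__test__ = {
    'add_works': """
    >>> add(2, 3)
    5
    """,
}

if __name__ == '__main__':
    doctest.testmod(verbose=True)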
And, to conclude, a philosophical reflection on design, architecture, simplicity, transparency, and "black magic"...:
All of this effort is basically what's needed to defeat the "black magic" that ipython (and maybe Django, though it may be simply delegating that part to ipython) is doing on your behalf for your "convenience"... any time at which two frameworks (or more;-) are independently doing each its own brand of black magic, interoperability may suddenly require substantial effort and become anything BUT convenient;-).
I'm not saying that the same convenience could have been provided (by any one or more of ipython, django and/or doctests) without black magic, introspection, fake modules, and so on; the designers and maintainers of each of those frameworks are superb engineers, and I expect they've done their homework thoroughly, and are performing only the minimum amount of black magic that's indispensable to deliver the amount of user convenience they decided they needed. Nevertheless, even in such a situation, "black magic" suddenly turns from a dream of convenience to a nightmare of debugging as soon as you want to do something even marginally outside what the framework's author had conceived.
OK, maybe in this case not quite a nightmare, but I do notice that this question has been open a while and even with the lure of the bounty it didn't get many answers yet -- though you now do have two answers to pick from, mine using the __test__ special feature of doctest, @codeape's using the peculiar __IP.magic_run feature of IPython. I prefer mine because it does not rely on anything internal or undocumented -- __test__ IS a documented feature of doctest, while __IP, with those two looming leading underscores, screams "deep internals, don't touch" to me;-)... if it breaks at the next point release I wouldn't be at all surprised. Still, matter of taste -- that answer may arguably be considered more "convenient".
But, this is exactly my point: convenience may come at an enormous price in terms of giving up simplicity, transparency, and/or avoidance of internal/undocumented/unstable features; so, as a lesson for all of us, the least black magic &c we can get away with (even at the price of giving up an epsilon of convenience here and there), the happier we'll all be in the long run (and the happier we'll make other developers that need to leverage our current efforts in the future).
A:
The following works:
$ ipython
...
In [1]: %run file.py
Trying:
3+2
Expecting:
5
ok
1 items had no tests:
__main__
1 items passed all tests:
1 tests in __main__.Foo
1 tests in 2 items.
1 passed and 0 failed.
Test passed.
In [2]:
I have no idea why ipython file.py does not work. But the above is at least a workaround.
EDIT:
I found the reason why it does not work. It is quite simple:
If you do not specify the module to test in doctest.testmod(), it assumes that you want to test the __main__ module.
When IPython executes the file passed to it on the command line, the __main__ module is IPython's __main__, not your module. So doctest tries to execute doctests in IPython's entry script.
The following works, but feels a bit weird:
if __name__ == '__main__':
import doctest
import the_current_module
doctest.testmod(the_current_module)
So basically the module imports itself (that's the "feels a bit weird" part). But it works. Something I do not like about this approach is that every module needs to include its own name in the source.
EDIT 2:
The following script, ipython_doctest, makes ipython behave the way you want:
#! /usr/bin/env bash
echo "__IP.magic_run(\"$1\")" > __ipython_run.py
ipython __ipython_run.py
The script creates a python script that will execute %run argname in IPython.
Example:
$ ./ipython_doctest file.py
Trying:
3+2
Expecting:
5
ok
1 items had no tests:
__main__
1 items passed all tests:
1 tests in __main__.Foo
1 tests in 2 items.
1 passed and 0 failed.
Test passed.
Python 2.5 (r25:51908, Mar 7 2008, 03:27:42)
Type "copyright", "credits" or "license" for more information.
IPython 0.9.1 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object'. ?object also works, ?? prints more.
In [1]:
|
Running doctests through iPython and pseudo-consoles
|
I've got a fairly basic doctestable file:
class Foo():
"""
>>> 3+2
5
"""
if __name__ in ("__main__", "__console__"):
import doctest
doctest.testmod(verbose=True)
which works as expected when run directly through python.
However, in iPython, I get
1 items had no tests:
__main__
0 tests in 1 items.
0 passed and 0 failed.
Test passed.
Since this is part of a Django project and will need access to all of the appropriate variables and such that manage.py sets up, I can also run it through a modified command, which uses code.InteractiveConsole, one result of which is __name__ gets set to '__console__'.
With the code above, I get the same result as with iPython. I tried changing the last line to this:
this = __import__(__name__)
doctest.testmod(this, verbose=True)
and I get an ImportError on __console__, which makes sense, I guess. This has no effect on either python or ipython.
So, I'd like to be able to run doctests successfully through all three of these methods, especially the InteractiveConsole one, since I expect to be needing Django pony magic fairly soon.
Just for clarification, this is what I'm expecting:
Trying:
3+2
Expecting:
5
ok
1 items had no tests:
__main__
1 items passed all tests:
1 tests in __main__.Foo
1 tests in 2 items.
1 passed and 0 failed.
Test passed.
|
[
"The root problem is that ipython plays weird tricks with __main__ (through its own FakeModule module) so that, by the time doctest is introspecting that \"alleged module\" through its __dict__, Foo is NOT there -- so doctest doesn't recurse into it.\nHere's one solution:\nclass Foo():\n \"\"\"\n >>> 3+2\n 5\n \"\"\"\n\nif __name__ in (\"__main__\", \"__console__\"):\n import doctest, inspect, sys\n m = sys.modules['__main__']\n m.__test__ = dict((n,v) for (n,v) in globals().items()\n if inspect.isclass(v))\n doctest.testmod(verbose=True)\n\nThis DOES produce, as requested:\n$ ipython dot.py \nTrying:\n 3+2\nExpecting:\n 5\nok\n1 items had no tests:\n __main__\n1 items passed all tests:\n 1 tests in __main__.__test__.Foo\n1 tests in 2 items.\n1 passed and 0 failed.\nTest passed.\nPython 2.5.1 (r251:54863, Feb 6 2009, 19:02:12) \n [[ snip snip ]]\nIn [1]: \n\nJust setting global __test__ doesn't work, again because setting it as a global of what you're thinking of as __main__ does NOT actually place it in the __dict__ of the actual object that gets recovered by m = sys.modules['__main__'], and the latter is exactly the expression doctest is using internally (actually it uses sys.modules.get, but the extra precaution is not necessary here since we do know that __main__ exists in sys.modules... it's just NOT the object you expect it to be!-).\nAlso, just setting m.__test__ = globals() directly does not work either, for a different reason: doctest checks that the values in __test__ are strings, functions, classes, or modules, and without some selection you cannot guarantee that globals() will satisfy that condition (in fact it won't). Here I'm selecting just classes, if you also want functions or whatnot you can use an or in the if clause in the genexp within the dict call.\nI don't know exactly how you're running a Django shell that's able to execute your script (as I believe python manage.py shell doesn't accept arguments, you must be doing something else, and I can't guess exactly what!-), but a similar approach should help (whether your Django shell is using ipython, the default when available, or plain Python): appropriately setting __test__ in the object you obtain as sys.modules['__main__'] (or __console__, if that's what you're then passing on to doctest.testmod, I guess) should work, as it mimics what doctest will then be doing internally to locate your test strings.\nAnd, to conclude, a philosophical reflection on design, architecture, simplicity, transparency, and \"black magic\"...:\nAll of this effort is basically what's needed to defeat the \"black magic\" that ipython (and maybe Django, though it may be simply delegating that part to ipython) is doing on your behalf for your \"convenience\"... any time at which two frameworks (or more;-) are independently doing each its own brand of black magic, interoperability may suddenly require substantial effort and become anything BUT convenient;-).\nI'm not saying that the same convenience could have been provided (by any one or more of ipython, django and/or doctests) without black magic, introspection, fake modules, and so on; the designers and maintainers of each of those frameworks are superb engineers, and I expect they've done their homework thoroughly, and are performing only the minimum amount of black magic that's indispensable to deliver the amount of user convenience they decided they needed. 
Nevertheless, even in such a situation, \"black magic\" suddenly turns from a dream of convenience to a nightmare of debugging as soon as you want to do something even marginally outside what the framework's author had conceived.\nOK, maybe in this case not quite a nightmare, but I do notice that this question has been open a while and even with the lure of the bounty it didn't get many answers yet -- though you now do have two answers to pick from, mine using the __test__ special feature of doctest, @codeape's using the peculiar __IP.magic_run feature of ironpython. I prefer mine because it does not rely on anything internal or undocumented -- __test__ IS a documented feature of doctest, while __IP, with those two looming leading underscores, scream \"deep internals, don't touch\" to me;-)... if it breaks at the next point release I wouldn't be at all surprised. Still, matter of taste -- that answer may arguably be considered more \"convenient\".\nBut, this is exactly my point: convenience may come at an enormous price in terms of giving up simplicity, transparency, and/or avoidance of internal/undocumented/unstable features; so, as a lesson for all of us, the least black magic &c we can get away with (even at the price of giving up an epsilon of convenience here and there), the happier we'll all be in the long run (and the happier we'll make other developers that need to leverage our current efforts in the future).\n",
"The following works:\n$ ipython\n...\nIn [1]: %run file.py\n\nTrying:\n 3+2\nExpecting:\n 5\nok\n1 items had no tests:\n __main__\n1 items passed all tests:\n 1 tests in __main__.Foo\n1 tests in 2 items.\n1 passed and 0 failed.\nTest passed.\n\nIn [2]: \n\nI have no idea why ipython file.py does not work. But the above is at least a workaround.\nEDIT:\nI found the reason why it does not work. It is quite simple:\n\nIf you do not specify the module to test in doctest.testmod(), it assumes that you want to test the __main__ module.\nWhen IPython executes the file passed to it on the command line, the __main__ module is IPython's __main__, not your module. So doctest tries to execute doctests in IPython's entry script.\n\nThe following works, but feels a bit weird:\nif __name__ == '__main__':\n import doctest\n import the_current_module\n doctest.testmod(the_current_module)\n\nSo basically the module imports itself (that's the \"feels a bit weird\" part). But it works. Something I do not like abt. this approach is that every module needs to include its own name in the source.\nEDIT 2:\nThe following script, ipython_doctest, makes ipython behave the way you want:\n#! /usr/bin/env bash\n\necho \"__IP.magic_run(\\\"$1\\\")\" > __ipython_run.py\nipython __ipython_run.py\n\nThe script creates a python script that will execute %run argname in IPython.\nExample:\n$ ./ipython_doctest file.py\nTrying:\n 3+2\nExpecting:\n 5\nok\n1 items had no tests:\n __main__\n1 items passed all tests:\n 1 tests in __main__.Foo\n1 tests in 2 items.\n1 passed and 0 failed.\nTest passed.\nPython 2.5 (r25:51908, Mar 7 2008, 03:27:42) \nType \"copyright\", \"credits\" or \"license\" for more information.\n\nIPython 0.9.1 -- An enhanced Interactive Python.\n? -> Introduction and overview of IPython's features.\n%quickref -> Quick reference.\nhelp -> Python's own help system.\nobject? -> Details about 'object'. ?object also works, ?? prints more.\n\nIn [1]:\n\n"
] |
[
8,
2
] |
[] |
[] |
[
"django",
"ipython",
"python"
] |
stackoverflow_0001336980_django_ipython_python.txt
|
Q:
How do I form a URL in Django for what I'm doing
Desperate, please help. Will work for food :)
I want to be able to have pages at the following URLs, and I want to be able to look them up by their URL (ie, If somebody goes to a certain URL, I want to be able to check for a page there).
mysite.com/somepage/somesubpage/somesubsubpage/
mysite.com/somepage/somesubpage/anothersubpage/
mysite.com/somepage/somesubpage/somesubpage/
mysite.com/somepage/somepage/
Notice I want to be able to reuse each page's slug (ie, somepage/somepage). Of course each slug will be unique for its level (ie, cannot have two pages with mysite.com/somepage/other/ and mysite.com/somepage/other/ because they would in essence be the same page). What is a good way to do this? I've tried to store the slug for a page ('somesubpage') in a field called 'slug', and make each slug unique for its parent page so that the above circumstance can't happen. The problem with this is that if I try to look up a page by its slug (ie, 'somepage'), and there happens to be a page at mysite.com/other/somepage/ and mysite.com/page/somepage/, how would my application know which one to get (they both have the same slug 'somepage')?
A:
You need to also store level and parent attributes, so that you can always get the right object.
The requirement to store hierarchical data comes up very frequently, and I always recommend django-mptt. It's the Django implementation of an efficient algorithm for storing hierarchical data in a database. I've used it on several projects. Basically, as well as storing level and parent, it also stores a left and right for each object, so that it can describe the tree and all its sub-elements uniquely. There are some explanatory links on the project's home page.
A:
It sounds like you're looking for a CMS app. There's a comparison of several Django-based CMS. If you want a full-featured CMS at the center of your project, DjangoCMS 2 or django-page-cms might be the right fit. If you prefer a CMS that supports the basic CMS use cases but stays out of your way most of the time, feincms could be something to look at.
edit: incidentally, most of the CMS on the comparison page use django-mptt, which Daniel mentions.
|
How do I form a URL in Django for what I'm doing
|
Desperate, please help. Will work for food :)
I want to be able to have pages at the following URLs, and I want to be able to look them up by their URL (ie, If somebody goes to a certain URL, I want to be able to check for a page there).
mysite.com/somepage/somesubpage/somesubsubpage/
mysite.com/somepage/somesubpage/anothersubpage/
mysite.com/somepage/somesubpage/somesubpage/
mysite.com/somepage/somepage/
Notice I want to be able to reuse each page's slug (ie, somepage/somepage). Of course each slug will be unique for its level (ie, cannot have two pages with mysite.com/somepage/other/ and mysite.com/somepage/other/ because they would in essence be the same page). What is a good way to do this? I've tried to store the slug for a page ('somesubpage') in a field called 'slug', and make each slug unique for its parent page so that the above circumstance can't happen. The problem with this is that if I try to look up a page by its slug (ie, 'somepage'), and there happens to be a page at mysite.com/other/somepage/ and mysite.com/page/somepage/, how would my application know which one to get (they both have the same slug 'somepage')?
|
[
"You need to also store level and parent attributes, so that you can always get the right object.\nThe requirement to store hierarchical data comes up very frequently, and I always recommend django-mptt. It's the Django implementation of an efficient algorithm for storing hierarchical data in a database. I've used it on several projects. Basically, as well as storing level and parent, it also stores a left and right for each object, so that it can describe the tree and all its sub-elements uniquely. There are some explanatory links on the project's home page.\n",
"It sounds like you're looking for a CMS app. There's a comparison of several Django-based CMS. If you want a full-featured CMS at the center of your project, DjangoCMS 2 or django-page-cms might be the right fit. If you prefer a CMS that supports the basic CMS use cases but goes out of your way most of the time feincms could be something to look at.\nedit: incidentally, most of the CMS on the comparision page use django-mptt that Daniel mentions.\n"
] |
[
2,
0
] |
[] |
[] |
[
"django",
"django_urls",
"python",
"regex"
] |
stackoverflow_0001352073_django_django_urls_python_regex.txt
|
Q:
Object store for objects in Django between requests
I had the following idea: Say we have a webapp written using django which models some kind of bulletin board. This board has many threads but a few of them get the most posts/views per hour.
The thread pages look a little different for each user, so you can't cache the rendered page as a whole, and caching only some parts of the rendered page is also not an option.
My idea was: I create an object structure of the thread in memory (with every post and other data that is needed to display it). If a new message is posted the structure is updated and every X posts (or every Y minutes, whatever comes first) the new messages are written back to the database. If the app crashes, some posts are lost, but this is definitely okay (for users and admins).
The question: Can I create such a persistent in-memory storage without serialization (so no serialize->memcached)? As I understand it, WSGI applications (like Django) run in a continuous process without shutting down between requests, so it should be possible in theory. Is there any API I could use? If not, any pointers on where to look?
/edit1: I know that "persistent" usually has a different meaning, but in this case I strictly mean "in between request".
A:
In a production WSGI environment, you would probably have multiple worker processes serving requests at the same time. These worker processes would be recycled from time to time, meaning local memory objects would be lost.
But if you really need this (and make sure you do), I suggest you look into Django's caching framework, check out local-memory caching. Also, have a look at sessions.
But even the local-memory caching uses serialization (with pickle). It is easy to implement local-memory cache without serialization by implementing a custom cache back-end (see the docs). You could use the code in locmem.py as a starting point to create a cache without serialization.
But I suspect you are doing a bit of premature optimization here?
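If you do go down this road anyway, a per-process store is essentially just a module-level dict. A minimal sketch (all names here are illustrative), with the caveat above that each worker process keeps its own copy:
import threading
import time

_store = {}            # thread_id -> {'posts': [...], 'last_flush': ts}
_lock = threading.Lock()

FLUSH_EVERY_N = 50     # write back every X posts...
FLUSH_EVERY_SECS = 60  # ...or every Y seconds, whichever comes first

def add_post(thread_id, post, flush_to_db):
    _lock.acquire()
    try:
        entry = _store.setdefault(
            thread_id, {'posts': [], 'last_flush': time.time()})
        entry['posts'].append(post)
        if (len(entry['posts']) >= FLUSH_EVERY_N or
                time.time() - entry['last_flush'] >= FLUSH_EVERY_SECS):
            flush_to_db(thread_id, entry['posts'])
            entry['posts'] = []
            entry['last_flush'] = time.time()
    finally:
        _lock.release()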
A:
An in-memory storage is not persistent, so no.
I think you mean that you only want to write the objects to the database every X new posts. I guess this is for speedup reasons. But since you need to serialize them sooner or later anyway, you don't actually save any time that way. However, you will save time by not flushing the new objects to disk, but most databases already support that.
But you also talk about caching the rendered page, which is read caching. There, you say you can't cache the finished result, but you can cache the result of the database query. That means new messages will not show up immediately, but will take a minute or so; I think most people will see this as acceptable.
Update: In this case not, then. But you should still easily be able to cache the query results, but invalidate that cache when new responses are added. That should help.
|
Object store for objects in Django between requests
|
I had the following idea: Say we have a webapp written using django which models some kind of bulletin board. This board has many threads but a few of them get the most posts/views per hour.
The thread pages look a little different for each user, so you can't cache the rendered page as a whole, and caching only some parts of the rendered page is also not an option.
My idea was: I create an object structure of the thread in memory (with every post and other data that is needed to display it). If a new message is posted the structure is updated and every X posts (or every Y minutes, whatever comes first) the new messages are written back to the database. If the app crashes, some posts are lost, but this is definitely okay (for users and admins).
The question: Can I create such a persistent in-memory storage without serialization (so no serialize->memcached)? As I understand it, WSGI applications (like Django) run in a continuous process without shutting down between requests, so it should be possible in theory. Is there any API I could use? If not, any pointers on where to look?
/edit1: I know that "persistent" usually has a different meaning, but in this case I strictly mean "in between request".
|
[
"In a production WSGI environment, you would probably have multiple worker processes serving requests at the same time. These worker processes would be recycled from time to time, meaning local memory objects would be lost.\nBut if you really need this (and make sure you do), I suggest you look into Django's caching framework, check out local-memory caching. Also, have a look at sessions.\nBut even the local-memory caching uses serialization (with pickle). It is easy to implement local-memory cache without serialization by implementing a custom cache back-end (see the docs). You could use the code in locmem.py as a starting point to create a cache without serialization.\nBut I suspect you are doing a bit of premature optimization here?\n",
"An in memory storage is not persistent, so no. \nI think you mean that you only want to write to the database ever X new posts of objects. I guess this is for speedup reasons. But since you need to serialize them sooner or later anyway, you don't actually save any time that way. However, you will save time by not flushing the new objects to disk, but most databases already support that.\nBut you also talk about caching the rendered page, which is read caching. There you can't cache the finished result you say, but you can cache the result of the database query. That means that new message will not be immediately updated, but take a minute or so to show up, but I think most people will see this as acceptable.\nUpdate: In this case not, then. But you should still easily be able to cache the query results, but invalidate that cache when new responses are added. That should help.\n"
] |
[
7,
0
] |
[] |
[] |
[
"django",
"python"
] |
stackoverflow_0001351323_django_python.txt
|
Q:
suppress/redirect stderr when calling python webrowser
I have a Python program that opens several URLs in separate tabs in a new browser window; however, when I run the program from the command line and open the browser using
webbrowser.open_new(url)
The stderr from Firefox prints to bash. Looking at the docs, I can't seem to find a way to redirect or suppress it.
I have resorted to using
browserInstance = subprocess.Popen(['firefox'], stdout=log, stderr=log)
where log is a tempfile, and then opening the other tabs with webbrowser.open_new.
Is there a way to do this within the webbrowser module?
A:
What is webbrowser.get() giving you?
If you do
webbrowser.get('firefox').open(url)
then you shouldn't see any output. The webbrowser module chooses to leave stderr for some browsers - in particular the text browsers, and the ones where it isn't certain. For all UnixBrowsers that have background set to True, no output should be visible.
A:
What about sending the output to /dev/null instead of a temporary file?
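Combined with the subprocess workaround from the question, that looks roughly like this (a sketch; the URL is a placeholder):
import os
import subprocess

# os.devnull is '/dev/null' on Unix and 'nul' on Windows.
devnull = open(os.devnull, 'w')
browserInstance = subprocess.Popen(['firefox', 'http://example.com'],
                                   stdout=devnull, stderr=devnull)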
A:
I think Martin is right about Unix systems, but it looks like things are different on Windows. Is this on a Windows system?
On Windows it looks like webbrowser.py is either going to give you a webbrowser.WindowsDefault browser, which opens the url using
os.startfile(url)
or if Firefox is present it's going to give you a webbrowser.BackgroundBrowser, which starts the browser on Windows using:
p = subprocess.Popen(cmdline)
It looks like only Unix browsers have the ability to redirect stderr in the webbrowser module. You should be able to find out what browser type you're getting by doing
>>> webbrowser.get('firefox')
In a Python interactive console.
|
suppress/redirect stderr when calling python webrowser
|
I have a Python program that opens several URLs in separate tabs in a new browser window; however, when I run the program from the command line and open the browser using
webbrowser.open_new(url)
The stderr from Firefox prints to bash. Looking at the docs, I can't seem to find a way to redirect or suppress it.
I have resorted to using
browserInstance = subprocess.Popen(['firefox'], stdout=log, stderr=log)
where log is a tempfile, and then opening the other tabs with webbrowser.open_new.
Is there a way to do this within the webbrowser module?
|
[
"What is webbrowser.get() giving you?\nIf you do\n webbrowser.get('firefox').open(url)\n\nthen you shouldn't see any output. The webbrowser module choses to leave stderr for some browsers - in particular the text browsers, and then ones where it isn't certain. For all UnixBrowsers that have set background to True, no output should be visible.\n",
"What about sending the output to /dev/null instead of a temporary file?\n",
"I think Martin is right about Unix systems, but it looks like things are different on Windows. Is this on a Windows system?\nOn Windows it looks like webbrowser.py is either going to give you a webbrowser.WindowsDefault browser, which opens the url using\nos.startfile(url)\n\nor if Firefox is present it's going to give you a webbrowser.BackgroundBrowser, which starts the browser on Windows using:\np = subprocess.Popen(cmdline)\n\nIt looks like only Unix browsers have the ability to redirect stderr in the webbrowser module. You should be able to find out what browser type you're getting by doing\n>>> webbrowser.get('firefox')\n\nIn a Python interactive console.\n"
] |
[
6,
0,
0
] |
[] |
[] |
[
"browser",
"python",
"stderr"
] |
stackoverflow_0001352361_browser_python_stderr.txt
|
Q:
Why is Python's enumerate so slow?
Why is "enumerate" slower than "xrange + lst[i]"?
>>> from timeit import Timer
>>> lst = [1,2,3,0,1,2]*1000
>>> setup = 'from __main__ import lst'
>>> s1 = """
for i in range(len(lst)):
elem = lst[i]
"""
>>> s2 = """
for i in xrange(len(lst)):
elem = lst[i]
"""
>>> s3 = """
for i, v in enumerate(lst):
elem = v
"""
>>> t1 = Timer(s1, setup); t2 = Timer(s2, setup); t3 = Timer(s3, setup)
>>> t1.timeit(3000), t2.timeit(3000), t3.timeit(3000)
(1.9263118636586494, 1.6119261665937992, 1.9606022553145719)
>>> t1.timeit(3000), t2.timeit(3000), t3.timeit(3000)
(1.93520258859715, 1.6145745478824836, 1.9529405971988041)
EDIT:
I'd also like to know why
for i, v in enumerate(lst):
    elem = i, v

is slower than
for i in xrange(len(lst)):
    elem = i, lst[i]
A:
If you measure properly you'll see there's essentially no difference (enumerate is microscopically faster than xrange in this example, but well within noise):
$ python -mtimeit -s'lst=[1,2,3,0,1,2]*1000' 'for i in xrange(len(lst)): elem=lst[i]'
1000 loops, best of 3: 480 usec per loop
$ python -mtimeit -s'lst=[1,2,3,0,1,2]*1000' 'for i, elem in enumerate(lst): pass'
1000 loops, best of 3: 473 usec per loop
(BTW, I always recommend using timeit at the shell prompt like this, not within code or at the interpreter prompt as you're doing, just because the output is so nicely formatted and usable, with units of measure of time and everything).
In your code, you have an extra assignment in the enumerate case: you assign the list item to v in the for header clause, then again assign v to elem; while in the xrange case you only assign the item once, to elem. In my case I'm also assigning only once in either case, of course; why would you WANT to assign multiple times anyway?! Whatever you're doing with elem and i in the body of the loop you can do it identically in the two forms I'm measuring, just without the redundancy that your enumerate case has.
A:
Possibly because you have hobbled enumerate. Try this:
>>> s3 = """
for i, elem in enumerate(lst):
pass
"""
Update Two extra reasons for using timeit at the shell prompt that Alex didn't mention:
(1) It does "best of N" for you.
(2) It works out for you how many iterations are necessary to get a meaningful result.
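For completeness, here is the original in-code benchmark with the handicap removed, a sketch you can paste at the prompt; both loops now do exactly one assignment per item:
from timeit import Timer

lst = [1, 2, 3, 0, 1, 2] * 1000
setup = 'from __main__ import lst'

s_xrange = 'for i in xrange(len(lst)): elem = lst[i]'
s_enum = 'for i, elem in enumerate(lst): pass'

print Timer(s_xrange, setup).timeit(3000)
print Timer(s_enum, setup).timeit(3000)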
|
Why is Python's enumerate so slow?
|
Why is "enumerate" slower than "xrange + lst[i]"?
>>> from timeit import Timer
>>> lst = [1,2,3,0,1,2]*1000
>>> setup = 'from __main__ import lst'
>>> s1 = """
for i in range(len(lst)):
elem = lst[i]
"""
>>> s2 = """
for i in xrange(len(lst)):
elem = lst[i]
"""
>>> s3 = """
for i, v in enumerate(lst):
elem = v
"""
>>> t1 = Timer(s1, setup); t2 = Timer(s2, setup); t3 = Timer(s3, setup)
>>> t1.timeit(3000), t2.timeit(3000), t3.timeit(3000)
(1.9263118636586494, 1.6119261665937992, 1.9606022553145719)
>>> t1.timeit(3000), t2.timeit(3000), t3.timeit(3000)
(1.93520258859715, 1.6145745478824836, 1.9529405971988041)
EDIT:
I'd also like to know why
for i, v in enumerate(lst):
    elem = i, v

is slower than
for i in xrange(len(lst)):
    elem = i, lst[i]
|
[
"If you measure properly you'll see there's essentially no difference (enumerate is microscopically faster than xrange in this example, but well within noise):\n$ python -mtimeit -s'lst=[1,2,3,0,1,2]*1000' 'for i in xrange(len(lst)): elem=lst[i]'\n1000 loops, best of 3: 480 usec per loop\n$ python -mtimeit -s'lst=[1,2,3,0,1,2]*1000' 'for i, elem in enumerate(lst): pass'\n1000 loops, best of 3: 473 usec per loop\n\n(BTW, I always recommend using timeit at the shell prompt like this, not within code or at the interpreter prompt as you're doing, just because the output is so nicely formatted and usable, with units of measure of time and everything).\nIn your code, you have an extra assignment in the enumerate case: you assign the list item to v in the for header clause, then again assign v to elem; while in the xrange case you only assign the item once, to elem. In my case I'm also assigning only once in either case, of course; why would you WANT to assign multiple times anyway?! Whatever you're doing with elem and i in the body of the loop you can do it identically in the two forms I'm measuring, just without the redundancy that your enumerate case has.\n",
"Possibly because you have hobbled enumerate. Try this:\n>>> s3 = \"\"\"\nfor i, elem in enumerate(lst):\n pass\n\"\"\"\n\nUpdate Two extra reasons for using timeit at the shell prompt that Alex didn't mention:\n(1) It does \"best of N\" for you.\n(2) It works out for you how many iterations are necessary to get a meaningful result.\n"
] |
[
18,
6
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001352497_python.txt
|
Q:
What classes of applications or problems do you prefer Python to strictly OO Languages?
I've got a pretty strong background in C-style languages, and have worked on several different types of projects. I have just started taking a serious look at Python after reading Programming Collective Intelligence. I understand that Python can solve any problem that C# can, and vice versa. But I am curious to know, from those who use both regularly, when they choose one over the other, removing other factors like coworkers' experience, etc.
When do you choose to create an application in Python instead of a static typed, purely OO language like C# or Java?
Edit:
I was afraid we were going to get off topic a bit with this question. Python is an object oriented language. But, as is stated below it may not be the preferred language when your application will have a very heavy business domain, etc. I am aware that Python uses objects extensively, and that even functions are objects, something that is not true in all of the "OO" languages I brought up earlier.
Despite my poor choice of words in the question (almost no languages fit nicely into two or three word descriptions, and it is really difficult to point out differences in languages without it appearing that you are talking down to a certain class of developer.), I am still very interested in what you have to say about when you prefer Python to other languages.
A:
My motto is (and has long been) "Python where I can, C++ where I must" (one day I'll find opportunity to actually use Java, C#, &c &C, in a real-world project, but I haven't yet, except for a pilot project in Java 1.1, more than ten years ago...;-) -- Javascript (with dojo) when code has to run in the client's browser, and SQL when it has to run in the DB server, of course, but C++ and Python are my daily bread on the "normal" servers and clients I develop, and that's the case in all parts of Google I've been working in in 4+ years (there are many parts using Java, too, I just never happened to work there;-). Hmmm, there's pure C when I'm working on the Python core and related extensions, too, of course;-).
Neither Python nor C++ are "strictly OO" -- they're multi-paradigm, and therein lies a good part of their strength in the hands of programmers who are highly skilled at OO and other paradigms, such as functional, generic, declarative, and so forth. I gather C# has pulled in some of each of these too (sometimes surpassing C++, e.g. by offering lambdas), and even Java has had to succumb to some (at least generic) to a tiny extent, so surely it's clear that "one size fits all" doesn't -- multi-paradigm programming is alive and well!-)
C++ (like C) forces me to control all memory carefully (our internal c++ style guide forbids the use of "smart pointers" that amount to poor implementations of garbage collection!-), which multiplies my work a lot, but helps ensure I'm not using one bit of memory more than strictly needed at any time: so, C++ (or C when needed) is the choice when memory is tight and precious. Otherwise, the extremely high productivity of Python (and Ruby or Javascript aren't all that different, if that's what you are used to) makes it preferable.
I'm sure there IS a niche in-between for a language that's garbage collected but mostly static, like Java (or C# before it started piling on more and more features, including dynamic ones in 4.0, I hear), or else those languages and cognate ones wouldn't be so widespread -- I've just never found myself inhabiting that peculiar niche, as yet.
A:
I select Python as often as possible. It is the most useful and productive programming environment that I know of.
If I run into projects where Python cannot be used directly or for the entire project (for instance a .NET-based app. server) my approach is usually to do as much with Python as possible. Depending on the situation that might mean:
Embed a python interpreter
Use Jython
Use IronPython
Use some IPC mechanism (usually http or sockets) to call an external python process
Export data - process using python - import data
Generate code using Python
From my answer to a previous question: I know C#. Will I be more productive with Python?
In my experience, what makes me more productive in Python vs. C#, is:
It is a dynamic language. Using a dynamic language often allows you to remove whole architectural layers from your app. Pythons dynamic nature allows you to create reusable high-level abstractions in more natural and flexible (syntax-wise) way than you can in C#.
Libraries. The standard libraries and a lot of the open-source libraries provided by the community are of high quality. The range of applications that Python is used for means that the range of libraries is wide.
Faster development cycle. No compile step means I can test changes faster. For instance, when developing a web app, the dev server detects changes and reloads the app when the files are saved. Running a unit test from within my editor is just a keystroke away, and executes instantaneously.
'Easy access' to often-used features: lists, list comprehensions, generators, tuples etc.
Less verbose syntax. You can create a WSGI-based Python web framework in fewer lines of code than your typical .NET web.config file :-)
Good documentation. Good books.
A:
I almost exclusively use Python to support my development of software in other languages. I should stress, this is not a result of some failing in Python, rather that the software domains I am working in tend to have other languages/frameworks that are more appropriate or simply the only option:
Web Development : I would love to check out Python on Google App Engine, but at the moment I am doing all my personal web development in PHP.
Desktop Application Development : I use the Ogre SDK for developing Windows screensavers and use C++/Win32 for that.
Server Application Development : Writing server side software for Windows professionally is almost always in C++.
However, for all of these application domains I use Python regularly to write tools, process data, and generally streamline my development efforts. A few examples here are probably the best way to describe how I tend to use Python:
To scrape data from existing websites.
Generate reports based on XML data.
Generate sets of SQL queries to populate databases based on other data formats.
Parse entire C++ projects and pull out a distinct set of error messages and their corresponding error codes.
Compare data sets to find data that I have inadvertantly lost.
Image processing to generate data for other software.
Python is such an empowering and useful language that although I have never used it as the primary language for software development, I would like to.
A:
JavaScript and Python have affected how I program even in C now. I think the best thing about knowing multiple languages is that you get more tools to use mentally, because you often don't have a choice of which language to use.
A:
Python is much more strictly OO than Java and C#.
But if your question is when to use Python and when Java or C#, I find Python useful for small programs that build on existing libraries and don't involve much domain modelling. For example, little desktop utilities written with the Python Gtk bindings or website maintenance scripts written with lxml and elementtree.
When there is a lot of application domain modelling to do, especially if the domain is poorly understood or changing rapidly, I find Python's limited tooling makes changing the code very arduous compared to Java (not so relevant for C# because .NET tool support trails Java by a few years). So for projects like that I'll use Java and IntelliJ.
A:
Both C and python are my languages of choice, but I almost always start doing something in python for correctness, and then dive into C when needed. I am mostly using programming for research/numerical code, where the specifications keep changing, and C is an awful language for prototyping (this is true for most statically typed languages in my experience). When you have something working in C, you rarely change it significantly so that it is 'better', because you don't have the time. But sometimes, C is easier than python when you need to control resources (be it CPU, memory, etc...).
So the question really is "when is python not enough for the task", rather than the contrary.
A:
Generally the language is dictated by the job, who wants the stuff done, and who you are working with. I only use Java and C/C++ for my programming needs, mainly because the people I work with use them. That being said, I've used Python for fast prototyping and such.
A:
All of them.
Except for code that is already written in a different language, obviously.
Even if something seems too big for Python to handle, I usually make a Python prototype anyway, mainly because it goes so smoothly. Often I'll stick with Python either way and just use the C API or ctypes to tackle bottlenecks (after I rewrite the prototype in a nice clean manner, that is).
|
What classes of applications or problems do you prefer Python to strictly OO Languages?
|
I've got a pretty strong background in C-style languages, and have worked on several different types of projects. I have just started taking a serious look at Python after reading Programming Collective Intelligence. I understand that Python can solve any problem that C# can, and vice versa. But I am curious to know, from those who use both regularly, when they choose one over the other, removing other factors like coworkers' experience, etc.
When do you choose to create an application in Python instead of a static typed, purely OO language like C# or Java?
Edit:
I was afraid we were going to get off topic a bit with this question. Python is an object oriented language. But, as is stated below it may not be the preferred language when your application will have a very heavy business domain, etc. I am aware that Python uses objects extensively, and that even functions are objects, something that is not true in all of the "OO" languages I brought up earlier.
Despite my poor choice of words in the question (almost no languages fit nicely into two or three word descriptions, and it is really difficult to point out differences in languages without it appearing that you are talking down to a certain class of developer.), I am still very interested in what you have to say about when you prefer Python to other languages.
|
[
"My motto is (and has long been) \"Python where I can, C++ where I must\" (one day I'll find opportunity to actually use Java, C#, &c &C, in a real-world project, but I haven't yet, except for a pilot project in Java 1.1, more tha ten years ago...;-) -- Javascript (with dojo) when code has to run in the client's browser, and SQL when it has to run in the DB server, of course, but C++ and Python are my daily bread on the \"normal\" servers and clients I develop, and that's the case in all parts of Google I've been working in in 4+ years (there are many parts using Java, too, I just never happened to work there;-). Hmmm, there's pure C when I'm working on the Python core and related extensions, too, of course;-).\nNeither Python nor C++ are \"strictly OO\" -- they're multi-paradigm, and therein lies a good part of their strength in the hands of programmers who are highly skilled at OO and other paradigms, such as functional, generic, declarative, and so forth. I gather C# has pulled in some of each of these too (sometimes surpassing C++, e.g. by offering lambdas), and even Java has had to succumb to some (at least generic) to a tiny extent, so surely it's clear that \"one size fits all\" doesn't -- multi-paradigm programming is alive and well!-)\nC++ (like C) forces me to control all memory carefully (our internal c++ style guide forbids the use of \"smart pointers\" that amount to poor implementations of garbage collection!-), which multiplies my work a lot, but helps ensure I'm not using one bit of memory more than strictly needed at any time: so, C++ (or C when needed) is the choice when memory is tight and precious. Otherwise, the extremely high productivity of Python (and Ruby or Javascript aren't all that different, if that's what you are used to) makes it preferable.\nI'm sure there IS a niche in-between for a language that's garbage collected but mostly static, like Java (or C# before it started piling on more and more features, including dynamic ones in 4.0, I hear), or else those languages and cognate ones wouldn't be so widespread -- I've just never found myself inhabiting that peculiar niche, as yet.\n",
"I select Python as often as possible. It is the most useful and productive programming environment that I know of.\nIf I run into projects where Python cannot be used directly or for the entire project (for instance a .NET-based app. server) my approach is usually to do as much with Python as possible. Depending on the situation that might mean:\n\nEmbed a python interpreter\nUse Jython\nUse IronPython\nUse some IPC mechanism (usually http or sockets) to call an external python process\nExport data - process using python - import data\nGenerate code using Python\n\nFrom my answer to a previous question: I know C#. Will I be more productive with Python?\n\nIn my experience, what makes me more productive in Python vs. C#, is:\n\nIt is a dynamic language. Using a dynamic language often allows you to remove whole architectural layers from your app. Pythons dynamic nature allows you to create reusable high-level abstractions in more natural and flexible (syntax-wise) way than you can in C#.\nLibraries. The standard libraries and a lot of the open-source libraries provided by the community are of high quality. The range of applications that Python is used for means that the range of libraries is wide.\nFaster development cycle. No compile step means I can test changes faster. For instance, when developing a web app, the dev server detects changes and reloads the app when the files are saved. Running a unit test from within my editor is just a keystroke away, and executes instantaneously.\n'Easy access' to often-used features: lists, list comprehensions, generators, tuples etc.\nLess verbose syntax. You can create a WSGI-based Python web framework in fewer lines of code than your typical .NET web.config file :-)\nGood documentation. Good books.\n\n\n",
"I almost exclusively use Python to support my development of software in other languages. I should stress, this is not a result of some failing in Python, rather that the software domains I am working in tend to have other languages/frameworks that are more appropriate or simply the only option:\n\nWeb Development : I would love to check out Python on Google App Engine, but at the moment I am doing all my personal web development in PHP.\nDesktop Application Development : I use the Ogre SDK for developing windows screensavers and use C++/Win32 for that.\nServer Application Development : Writing server side software for Windows professionally is almost always in C++.\n\nHowever, for all of these application domains I use Python regularly write tools, process data and generally to streamline my development efforts. A few examples here are probably the best way to describe how I tend to use Python:\n\nTo scape data from existing websites.\nGenerate reports based on XML data.\nGenerate sets of SQL queries to populate databases based on other data formats.\nParse entire C++ projects and pull out a distinct set of error messages and their corresponding error codes.\nCompare data sets to find data that I have inadvertantly lost.\nImage processing to generate data for other sotware.\n\nPython is such an empowering and useful language that although I have never used it as the primary language for software development, I would like to.\n",
"JavaScript and Python have affected how I program even in C now. I think the best thing about knowing multiple languages is that you get more tools to use mentally, because you often don't have a choice of which language to use.\n",
"Python is much more strictly OO than Java and C#.\nBut if your question is when to use Python and when Java or C#, I find Python useful for small programs that build on existing libraries and don't involve much domain modelling. For example, little desktop utilities written with the Python Gtk bindings or website maintenance scripts written with lxml and elementtree.\nWhen there is a lot of application domain modelling to do, especially if the domain is poorly understood or changing rapidly, I find Python's limited tooling makes changing the code very arduous compared to Java (not so relevant for C# because .NET tool support trails Java by a few years). So for projects like that I'll use Java and IntelliJ. \n",
"Both C and python are my languages of choice, but I almost always start doing something in python for correctness, and then dive into C when needed. I am mostly using programming for research/numerical code, where the specifications keep changing, and C is an awful language for prototyping (this is true for most statically typed languages in my experience). When you have something working in C, you rarely change it significantly so that it is 'better', because you don't have the time. But sometimes, C is easier than python when you need to control resources (be it CPU, memory, etc...).\nSo the question really is \"when is python not enough for the task\", rather than the contrary.\n",
"Generally the language is dictated by the job, who wants the stuff done and who you are working with. I only use java and c/c++ for my programming needs, mainly because the people i work with use it. That being said ive used python for fast prototyping and such.\n",
"All of them. \nExcept for code that is already written in different language, obviously.\nEven if something seems too big for python to handle, I usually make Python prototype anyway, mainly because it's goes so smooth. Often I'll stick with Python eitherway and just use C API or ctypes to tackle bottlenecks (after I rewrite the prototype in a nice clean manner, that is).\n"
] |
[
10,
4,
3,
2,
1,
0,
0,
0
] |
[] |
[] |
[
"c#",
"c++",
"java",
"programming_languages",
"python"
] |
stackoverflow_0001022971_c#_c++_java_programming_languages_python.txt
|
Q:
Tool for analysing and stepping through code?
Recently I came across a tool which could analyse running Python code and produce a visual representation similar to a code editor to allow one to step through the different parts of the code, seeing how many times each part was called, execution time, etc.
I can't find the reference to it again. Would anyone know what it might be?
A:
cProfile or Hotshot.
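For instance, cProfile can be used from the shell or from within the code. A quick sketch (work and myscript.py are placeholders):
# From the shell, sorted by cumulative time:
#     python -m cProfile -s cumulative myscript.py
import cProfile

def work():
    return sum(i * i for i in xrange(100000))

cProfile.run('work()', sort='cumulative')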
A:
RunSnakeRun is user interface for cProfile/Hotshot (see James' answer), which also provides a visualization of the profiling data.
Another useful link might be the link to the PyCon2009 Talk Introduction to Python Profiling (#65)
A:
Found what I was looking for: Code Investigator
CodeInvestigator is a tracing tool for Python programs. All run time information is recorded. Read your code together with its run time details in a Firefox browser. See what your program did when it ran.
A:
NetBeans with python plug-in?
A:
Maybe Python Call Graph?
|
Tool for analysing and stepping through code?
|
Recently I came across a tool which could analyse running Python code and produce a visual representation similar to a code editor to allow one to step through the different parts of the code, seeing how many times each part was called, execution time, etc.
I can't find the reference to it again. Would anyone know what it might be?
|
[
"cProfile or Hotshot.\n",
"RunSnakeRun is user interface for cProfile/Hotshot (see James' answer), which also provides a visualization of the profiling data.\nAnother useful link might be the link to the PyCon2009 Talk Introduction to Python Profiling (#65)\n",
"Found what I was looking for: Code Investigator\n\nCodeInvestigator is a tracing tool for Python programs. All run time information is recorded. Read your code together with its run time details in a Firefox browser. See what your program did when it ran.\n\n",
"NetBeans with python plug-in?\n",
"Maybe Python Call Graph?\n"
] |
[
2,
1,
1,
0,
0
] |
[] |
[] |
[
"code_analysis",
"profiling",
"python"
] |
stackoverflow_0001350864_code_analysis_profiling_python.txt
|
Q:
Python script performance as a background process
I'm in the process of writing a Python script to act as "glue" between an application and some external devices. The script itself is quite straightforward and has three distinct steps:
Request data (from a socket connection, via UDP)
Receive response (from a socket connection, via UDP)
Process response and make data available to 3rd party application
However, this will be done repetitively, and for several (+/-200 different) devices. So once it's reached device #200, it would start requesting data from device #001 again. My main concern here is not to bog down the processor whilst executing the script.
UPDATE:
I am using three threads to do the above, one thread for each of the above processes. The request/response is asynchronous as each response contains everything I need to be able to process it (including the sender's details).
Is there any way to allow the script to run in the background and consume as little system resources as possible while doing its thing? This will be running on a windows 2003 machine.
Any advice would be appreciated.
A:
If you are using blocking I/O to your devices, then the script won't consume any processor while waiting for the data. How much processor you use depends on what sorts of computation you are doing with the data.
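For instance, a minimal sketch of one blocking UDP exchange with a timeout -- the device address is a made-up placeholder:

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(5.0)                             # don't hang forever on a dead device
sock.sendto('REQUEST', ('192.168.0.10', 9999))   # hypothetical device address
try:
    data, addr = sock.recvfrom(1024)             # blocks without spinning the CPU
except socket.timeout:
    data = None                                  # device did not answer in time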
A:
Twisted -- the best async framework for Python -- would allow you do perform these tasks with the minimal hogging of system resources, most especially though not exclusively if you want to process several devices "at once" rather than just round-robin among the several hundreds (the latter might result in too long a cycle time, especially if there's a risk that some device will have very delayed answer or even fail to answer once in a while and result in a "timeout"; as a rule of thumb I'd suggest having at least half a dozens devices "in play" at any given time to avoid this excessive-delay risk).
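A rough sketch of what one device exchange might look like with Twisted's UDP support; the payload and device address here are assumptions, not part of any real device API:

from twisted.internet import reactor
from twisted.internet.protocol import DatagramProtocol

class DeviceProtocol(DatagramProtocol):
    def startProtocol(self):
        # fire the first request; a real script would cycle through ~200 devices
        self.transport.write('REQUEST', ('192.168.0.10', 9999))

    def datagramReceived(self, data, (host, port)):
        print 'response from %s:%d -> %r' % (host, port, data)

reactor.listenUDP(0, DeviceProtocol())
reactor.run()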
|
Python script performance as a background process
|
I'm in the process of writing a python script to act as a "glue" between an application and some external devices. The script itself is quite straightforward and has three distinct processes:
Request data (from a socket connection, via UDP)
Receive response (from a socket connection, via UDP)
Process response and make data available to 3rd party application
However, this will be done repetitively, and for several (+/-200 different) devices. So once it's reached device #200, it would start requesting data from device #001 again. My main concern here is not to bog down the processor whilst executing the script.
UPDATE:
I am using three threads to do the above, one thread for each of the above processes. The request/response is asynchronous as each response contains everything I need to be able to process it (including the sender's details).
Is there any way to allow the script to run in the background and consume as little system resources as possible while doing its thing? This will be running on a windows 2003 machine.
Any advice would be appreciated.
|
[
"If you are using blocking I/O to your devices, then the script won't consume any processor while waiting for the data. How much processor you use depends on what sorts of computation you are doing with the data.\n",
"Twisted -- the best async framework for Python -- would allow you do perform these tasks with the minimal hogging of system resources, most especially though not exclusively if you want to process several devices \"at once\" rather than just round-robin among the several hundreds (the latter might result in too long a cycle time, especially if there's a risk that some device will have very delayed answer or even fail to answer once in a while and result in a \"timeout\"; as a rule of thumb I'd suggest having at least half a dozens devices \"in play\" at any given time to avoid this excessive-delay risk).\n"
] |
[
5,
4
] |
[] |
[] |
[
"background",
"performance",
"process",
"python"
] |
stackoverflow_0001352760_background_performance_process_python.txt
|
Q:
Why is '#!/usr/bin/env python' supposedly more correct than just '#!/usr/bin/python'?
Anyone know this? I've never been able to find an answer.
A:
If you're prone to installing python in various and interesting places on your PATH (as in $PATH in typical Unix shells, %PATH on typical Windows ones), using /usr/bin/env will accommodate your whim (well, in Unix-like environments at least) while going directly to /usr/bin/python won't. But losing control of what version of Python your scripts run under is no unalloyed bargain... if you look at my code you're more likely to see it start with, e.g., #!/usr/local/bin/python2.5 rather than with an open and accepting #!/usr/bin/env python -- assuming the script is important I like to ensure it's run with the specific version I have tested and developed it with, NOT a semi-random one;-).
A:
From wikipedia
Shebangs specify absolute paths to system executables; this can cause
problems on systems which have non-standard file system layouts
Often, the program /usr/bin/env can be used to circumvent this
limitation
A:
It finds the python executable in your environment and uses that. It's more portable because python may not always be in /usr/bin/python. env is always located in /usr/bin.
A:
It finds 'python' also in /usr/local/bin, ~/bin, /opt/bin, ... or wherever it may hide.
A:
You may find this post to be of interest:
http://mail.python.org/pipermail/python-list/2008-May/661514.html
This may be a better explanation:
http://mail.python.org/pipermail/tutor/2007-June/054816.html
|
Why is '#!/usr/bin/env python' supposedly more correct than just '#!/usr/bin/python'?
|
Anyone know this? I've never been able to find an answer.
|
[
"If you're prone to installing python in various and interesting places on your PATH (as in $PATH in typical Unix shells, %PATH on typical Windows ones), using /usr/bin/env will accomodate your whim (well, in Unix-like environments at least) while going directly to /usr/bin/python won't. But losing control of what version of Python your scripts run under is no unalloyed bargain... if you look at my code you're more likely to see it start with, e.g., #!/usr/local/bin/python2.5 rather than with an open and accepting #!/usr/bin/env python -- assuming the script is important I like to ensure it's run with the specific version I have tested and developed it with, NOT a semi-random one;-).\n",
"From wikipedia\n\nShebangs specify absolute paths to system executables; this can cause\n problems on systems which have non-standard file system layouts\nOften, the program /usr/bin/env can be used to circumvent this\n limitation\n\n",
"it finds the python executable in your environment and uses that. it's more portable because python may not always be in /usr/bin/python. env is always located in /usr/bin.\n",
"It finds 'python' also in /usr/local/bin, ~/bin, /opt/bin, ... or wherever it may hide.\n",
"You may find this post to be of interest:\nhttp://mail.python.org/pipermail/python-list/2008-May/661514.html\nThis may be a better explanation:\nhttp://mail.python.org/pipermail/tutor/2007-June/054816.html\n"
] |
[
67,
24,
10,
5,
3
] |
[] |
[] |
[
"bash",
"python"
] |
stackoverflow_0001352922_bash_python.txt
|
Q:
Nice copying from Python Interpreter
When I am working with a Python Interpreter, I always find it a pain to try and copy code from it because it inserts all of these >>> and ...
Is there a Python interpreter that will let me copy code, without having to deal with this? Or alternatively, is there a way to clean the output?
Additionally, sometimes I would like to paste code in, but the code is indented. Is there any console that can automatically indent it instead of throwing an error?
Related
Why can I not paste the output of Pythons REPL without manual-editing?
A:
IPython lets you show, save and edit your command history, for example to show the first three commands of your session without line numbers you'd type %hist -n 1 4.
A:
WingIDE from Wingware will let you evaluate any chunk of code in a separate interpreter window.
A:
IPython will let you paste Python code with leading indents without giving you an IndentationError. You can also change your prompts to remove >>> and ... if you wish.
A:
I have a vim macro to "paste while cleaning interpreter prompts and sample output [[==stuff NOT preceded by prompts" and I'll be happy to share it if vim is what you're using. Any editor or IDE worth that name will of course be similarly easy to program for such purposes!
A:
Decent text editors such as Notepad++ can make global search and replace operations that can replace >>> with nothing.
|
Nice copying from Python Interpreter
|
When I am working with a Python Interpreter, I always find it a pain to try and copy code from it because it inserts all of these >>> and ...
Is there a Python interpreter that will let me copy code, without having to deal with this? Or alternatively, is there a way to clean the output?
Additionally, sometimes I would like to paste code in, but the code is indented. Is there any console that can automatically indent it instead of throwing an error?
Related
Why can I not paste the output of Pythons REPL without manual-editing?
|
[
"IPython lets you show, save and edit your command history, for example to show the first three commands of your session without line numbers you'd type %hist -n 1 4.\n",
"WingIDE from Wingware will let you evaluate any chunk of code in a separate interpreter window.\n",
"IPython will let you paste Python code with leading indents without giving you an IndentationError. You can also change your prompts to remove >>> and ... if you wish.\n",
"I have a vim macro to \"paste while cleaning interpreter prompts and sample output [[==stuff NOT preceded by prompts\" and I'll be happy to share it if vim is what you're using. Any editor or IDE worth that name will of course be similarly easy to program for such purposes!\n",
"Decent text editors such as Notepad++ can make global search and replace operations that can replace >>> with nothing.\n"
] |
[
4,
3,
1,
1,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001352886_python.txt
|
Q:
Remove elements as you traverse a list in Python
In Java I can do this by using an Iterator and then using the .remove() method of the iterator to remove the last element returned by the iterator, like this:
import java.util.*;
public class ConcurrentMod {
    public static void main(String[] args) {
        List<String> colors = new ArrayList<String>(Arrays.asList("red", "green", "blue", "purple"));
        for (Iterator<String> it = colors.iterator(); it.hasNext(); ) {
            String color = it.next();
            System.out.println(color);
            if (color.equals("green"))
                it.remove();
        }
        System.out.println("At the end, colors = " + colors);
    }
}
/* Outputs:
red
green
blue
purple
At the end, colors = [red, blue, purple]
*/
How would I do this in Python? I can't modify the list while I iterate over it in a for loop because it causes stuff to be skipped (see here). And there doesn't seem to be an equivalent of the Iterator interface of Java.
A:
Best approach in Python is to make a new list, ideally in a listcomp, setting it as the [:] of the old one, e.g.:
colors[:] = [c for c in colors if c != 'green']
NOT colors = as some answers may suggest -- that only rebinds the name and will eventually leave some references to the old "body" dangling; colors[:] = is MUCH better on all counts;-).
A:
Iterate over a copy of the list:
for c in colors[:]:
    if c == 'green':
        colors.remove(c)
A:
You could use filter function:
>>> colors=['red', 'green', 'blue', 'purple']
>>> filter(lambda color: color != 'green', colors)
['red', 'blue', 'purple']
>>>
A:
Or you can also do it like this:
>>> colors = ['red', 'green', 'blue', 'purple']
>>> if 'green' in colors:
... colors.remove('green')
|
Remove elements as you traverse a list in Python
|
In Java I can do this by using an Iterator and then using the .remove() method of the iterator to remove the last element returned by the iterator, like this:
import java.util.*;
public class ConcurrentMod {
    public static void main(String[] args) {
        List<String> colors = new ArrayList<String>(Arrays.asList("red", "green", "blue", "purple"));
        for (Iterator<String> it = colors.iterator(); it.hasNext(); ) {
            String color = it.next();
            System.out.println(color);
            if (color.equals("green"))
                it.remove();
        }
        System.out.println("At the end, colors = " + colors);
    }
}
/* Outputs:
red
green
blue
purple
At the end, colors = [red, blue, purple]
*/
How would I do this in Python? I can't modify the list while I iterate over it in a for loop because it causes stuff to be skipped (see here). And there doesn't seem to be an equivalent of the Iterator interface of Java.
|
[
"Best approach in Python is to make a new list, ideally in a listcomp, setting it as the [:] of the old one, e.g.:\ncolors[:] = [c for c in colors if c != 'green']\n\nNOT colors = as some answers may suggest -- that only rebinds the name and will eventually leave some references to the old \"body\" dangling; colors[:] = is MUCH better on all counts;-).\n",
"Iterate over a copy of the list:\nfor c in colors[:]:\n if c == 'green':\n colors.remove(c)\n\n",
"You could use filter function:\n>>> colors=['red', 'green', 'blue', 'purple']\n>>> filter(lambda color: color != 'green', colors)\n['red', 'blue', 'purple']\n>>>\n\n",
"or you also can do like this\n>>> colors = ['red', 'green', 'blue', 'purple']\n>>> if colors.__contains__('green'):\n... colors.remove('green')\n\n"
] |
[
30,
24,
4,
0
] |
[] |
[] |
[
"iterator",
"list",
"loops",
"python",
"python_datamodel"
] |
stackoverflow_0001352885_iterator_list_loops_python_python_datamodel.txt
|
Q:
Issues with scoped_session in sqlalchemy - how does it work?
I'm not really sure how scoped_session works, other than it seems to be a wrapper that hides several real sessions, keeping them separate for different requests. Does it do this with thread locals?
Anyway the trouble is as follows:
S = elixir.session # = scoped_session(...)
f = Foo(bar=1)
S.add(f) # ERROR, f is already attached to session (different session)
Not sure how f ended up in a different session, I've not had problems with that before. Elsewhere I have code that looks just like that, but actually works. As you can imagine I find that very confusing.
I just don't know anything here, f seems to be magically added to a session in the constructor, but I don't seem to have any references to the session it uses. Why would it end up in a different session? How can I get it to end up in the right session? How does this scoped_session thing work anyway? It just seems to work sometimes, and other times it just doesn't.
I'm definitely very confused.
A:
Scoped session creates a proxy object that keeps a registry of (by default) per thread session objects created on demand from the passed session factory. When you access a session method such as ScopedSession.add it finds the session corresponding to the current thread and returns the add method bound to that session. The active session can be removed using the ScopedSession.remove() method.
ScopedSession has a few convenience methods, one is query_property that creates a property that returns a query object bound to the scoped session it was created on and the class it was accessed through. The other is ScopedSession.mapper that adds a default __init__(**kwargs) constructor and by default adds created objects to the scoped session the mapper was created from. This behavior can be controlled by the save_on_init keyword argument to the mapper. ScopedSession.mapper is deprecated because of exactly the problem that is in the question. This is one case where the Python "explicit is better than implicit" philosophy really applies. Unfortunately Elixir still by default uses ScopedSession.mapper.
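To make the thread-local dispatch concrete, here is a minimal sketch (Foo stands in for any mapped class, as in the question):

from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

engine = create_engine('sqlite://')
Session = scoped_session(sessionmaker(bind=engine))

f = Foo(bar=1)        # assumes Foo is a mapped class
Session.add(f)        # proxied to the session owned by the current thread
Session.commit()
Session.remove()      # discard this thread's session when the request ends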
A:
It turns out elixir sets save-on-init=True on the created mappers. This can be disabled by:
using_mapper_options(save_on_init=False)
This solves the problem. Kudos to stepz on #sqlalchemy for figuring out what was going on immediately. Although I am still curious how scoped_session really works, so if someone answers that, they'll get credit for answering the question.
|
Issues with scoped_session in sqlalchemy - how does it work?
|
I'm not really sure how scoped_session works, other than it seems to be a wrapper that hides several real sessions, keeping them separate for different requests. Does it do this with thread locals?
Anyway the trouble is as follows:
S = elixir.session # = scoped_session(...)
f = Foo(bar=1)
S.add(f) # ERROR, f is already attached to session (different session)
Not sure how f ended up in a different session, I've not had problems with that before. Elsewhere I have code that looks just like that, but actually works. As you can imagine I find that very confusing.
I just don't know anything here, f seems to be magically added to a session in the constructor, but I don't seem to have any references to the session it uses. Why would it end up in a different session? How can I get it to end up in the right session? How does this scoped_session thing work anyway? It just seems to work sometimes, and other times it just doesn't.
I'm definitely very confused.
|
[
"Scoped session creates a proxy object that keeps a registry of (by default) per thread session objects created on demand from the passed session factory. When you access a session method such as ScopedSession.add it finds the session corresponding to the current thread and returns the add method bound to that session. The active session can be removed using the ScopedSession.remove() method.\nScopedSession has a few convenience methods, one is query_property that creates a property that returns a query object bound to the scoped session it was created on and the class it was accessed. The other is ScopedSession.mapper that adds a default __init__(**kwargs) constructor and by default adds created objects to the scoped session the mapper was created off. This behavior can be controlled by the save_on_init keyword argument to the mapper. ScopedSession.mapper is deprecated because of exactly the problem that is in the question. This is one case where the Python \"explicit is better than implicit\" philosophy really applies. Unfortunately Elixir still by default uses ScopedSession.mapper. \n",
"It turns out elixir sets save-on-init=True on the created mappers. This can be disabled by:\nusing_mapper_options(save_on_init=False)\n\nThis solves the problem. Kudos to stepz on #sqlalchemy for figuring out what was going on immediately. Although I am still curious how scoped_session really works, so if someone answers that, they'll get credit for answering the question. \n"
] |
[
7,
2
] |
[] |
[] |
[
"python",
"python_elixir",
"sqlalchemy"
] |
stackoverflow_0001353131_python_python_elixir_sqlalchemy.txt
|
Q:
I'm a .NET Programmer. What are specific uses of Python and/or Ruby for that will make me more productive?
I recall when I first read Pragmatic Programmer that they suggested using scripting languages to make you a more productive programmer.
I am in a quandary putting this into practice.
I want to know specific ways that using Python or Ruby can make me a more productive .NET developer.
One specific way per answer, and even better if you can say whether I could use Python or Ruby or Both for it.
See standard format below.
A:
IronPython / IronRuby
IronPython in Action will do a better job explaining this (and exactly how best to use IronPython) than can possibly be accommodated in a SO answer. I'm biased -- I was a tech reviewer and am a friend of one of the authors -- but objectively think it's a great book. (No idea if IronRuby is blessed with a similarly wonderful book, yet).
As you want "one specific way per answer" (incompatible with SO, which STRONGLY discourages a poster posting 25 different answers if they have 25 "specific ways" to indicate...!-): prototyping in order to explore some specific assembly or collection thereof that you're unfamiliar with (to check if you've understood their docs right and how to perform certain tasks) is an order of magnitude more productive in IronPython than in C#, as you can explore interactively and compilation is instantaneous and as-needed. (Have not tried IronRuby but I'll assume it can work in a roughly equivalent way and speed).
A:
Less Code
I think productivity is a direct result of how proficient you are in a specific language. That said, the terseness of a language like Python might save some time on getting certain things done.
If I compare how much less code I have to write for simple administration scripts (e.g. clean-up of old files) compared to .NET code there is a certain amount of productivity gain. (Plus it is more fun which also helps getting the job done)
A:
Advanced Text Processing
Traditional strengths of awk and perl. You can just glue together a bunch of regular expressions to create a simple data-mining system on the go.
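As a tiny illustration (the log format here is invented):

import re

log = 'GET /index.html 200\nGET /missing 404\n'
# one regular expression pulls out (path, status) pairs
hits = re.findall(r'GET (\S+) (\d{3})', log)
bad = [path for path, status in hits if status.startswith('4')]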
A:
Learning a new language gives you knowledge that you can bring back to any programming language. Here are some things you'd learn.
Add functionality to your objects on the fly.
Mix in modules.
Pass a chunk of code around.
Figure out how to do more with less code: ruby -e "puts 'hello world'"
C# can do some of these things, but a fresh perspective might bring you one step closer to automating your breakfast.
A:
Embedding a script engine
Use of IronPython for a scripting engine inside your .NET application. For example, enabling end-users of your application to change customizable parts with a full-fledged language such as Python.
A possible example might be to expose custom logic to end-users for a work flow engine.
A:
Quick Prototyping - Both
In the simplest cases, firing up a python interpreter and writing a line or two is way faster than creating a new project in Visual Studio.
And you can use Ruby too. Or Lua, or even Perl, whatever. The point is implicit typing and a light-weight feel.
A:
Cross platform
Compared to .NET, a simple Python script is more easily ported to other platforms such as Linux. Although it is possible to achieve the same with the likes of Mono, it is simpler to run a Python script file on different platforms.
A:
Processing received Email
Python has built-in support for POP3 and IMAP where the standard .NET framework doesn't. Useful for automating email triggered tasks.
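A minimal sketch with poplib -- the server name and credentials are placeholders, and process() is your own handler:

import poplib

conn = poplib.POP3_SSL('pop.example.com')   # hypothetical server
conn.user('me@example.com')
conn.pass_('secret')
count, size = conn.stat()                   # message count and mailbox size
for i in range(1, count + 1):
    response, lines, octets = conn.retr(i)
    process('\n'.join(lines))               # hand the raw message to your code
conn.quit()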
|
I'm a .NET Programmer. What are specific uses of Python and/or Ruby for that will make me more productive?
|
I recall when I first read Pragmatic Programmer that they suggested using scripting languages to make you a more productive programmer.
I am in a quandary putting this into practice.
I want to know specific ways that using Python or Ruby can make me a more productive .NET developer.
One specific way per answer, and even better if you can say whether I could use Python or Ruby or Both for it.
See standard format below.
|
[
"IronPython / IronRuby\nIronPython in Action will do a better job explaining this (and exactly how best to use IronPython) that can possibly be accommodated in a SO answer. I'm biased -- I was a tech reviewer and am a friend of one of the authors -- but objectively think it's a great book. (No idea if IronRuby is blessed with a similarly wonderful book, yet).\nAs you want \"one specific way per answer\" (incompatible with SO, which STRONGLY discourages a poster posting 25 different answers if they have 25 \"specific ways\" to indicate...!-): prototyping in order to explore some specific assembly or collection thereof that you're unfamiliar with (to check if you've understood their docs right and how to perform certain tasks) is an order of magnitude more productive in IronPython than in C#, as you can explore interactively and compilation is instantaneous and as-needed. (Have not tried IronRuby but I'll assume it can work in a roughly equivalent way and speed).\n",
"Less Code\nI think productivity is direct result on how proficient you are in a specific language. That said the terseness of a language like Python might save some time on getting certain things done.\nIf I compare how much less code I have to write for simple administration scripts (e.g. clean-up of old files) compared to .NET code there is certain amount of productivity gain. (Plus it is more fun which also helps getting the job done)\n",
"Advanced Text Processing\nTraditional strengths of awk and perl. You can just glue together a bunch of regular expressions to create a simple data-mining system on the go.\n",
"Learning a new language gives you knowledge that you can bring back to any programming language. Here are some things you'd learn.\nAdd functionality to your objects on the fly. \nMix in modules.\nPass a chunk of code around.\nFigure out how to do more with less code: ruby -e \"puts 'hello world'\"\nC# can do some of these things, but a fresh perspective might bring you one step closer to automating your breakfast.\n",
"Embedding a script engine\nUse of IronPython for a scripting engine inside your .NET application. For example enabling end-users of your application to change customizable parts with a full fledge language such as Python.\nA possible example might be to expose custom logic to end-users for a work flow engine.\n",
"Quick Prototyping - Both\nIn the simplest cases when firing a python interpreter and writing a line or two is way faster than creating a new project in visual studio.\nAnd you can use ruby to. Or lua, or evel perl, whatever. The point is implicit typing and light-weight feel.\n",
"Cross platform\nCompared to .NET a simple script Python is more easily ported to other platforms such as Linux. Although possible to achieve the same with the likes of Mono it simpler to run a Python script file on different platforms.\n",
"Processing received Email\nPython has built-in support for POP3 and IMAP where the standard .NET framework doesn't. Useful for automating email triggered tasks.\n"
] |
[
5,
4,
2,
2,
2,
1,
1,
1
] |
[] |
[] |
[
".net",
"python",
"ruby"
] |
stackoverflow_0001353211_.net_python_ruby.txt
|
Q:
Multiprocessing debug techniques
I'm having trouble debugging a multi-process application (specifically using a process pool in python's multiprocessing module). I have an apparent deadlock and I do not know what is causing it. The stack trace is not sufficient to describe the issue, as it only displays code in the multiprocessing module.
Are there any python tools, or otherwise general techniques used to debug deadlocks?
A:
Yah, debugging deadlocks is fun. You can set the logging level to be higher -- see the Python documentation for a description of it, but really quickly:
import multiprocessing, logging
logger = multiprocessing.log_to_stderr()
logger.setLevel(multiprocessing.SUBDEBUG)
Also, add logging for anything in your code that deals with a resource or whatnot that might be in contention. Finally, shot in the dark: spawning off child processes during an import might cause a problem.
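On that last point: on Windows, multiprocessing re-imports your module in each child process, so the pool setup must be guarded. A minimal sketch:

import multiprocessing

def worker(x):
    return x * x

if __name__ == '__main__':
    # without this guard, each child would re-run the pool creation on import
    pool = multiprocessing.Pool(4)
    print pool.map(worker, range(10))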
A:
In order to avoid deadlocks in the first place, learning good practices is useful, as parallel processing is indeed quite subtle. The (free) Little Book of Semaphores can be a very enjoyable read!
|
Multiprocessing debug techniques
|
I'm having trouble debugging a multi-process application (specifically using a process pool in python's multiprocessing module). I have an apparent deadlock and I do not know what is causing it. The stack trace is not sufficient to describe the issue, as it only displays code in the multiprocessing module.
Are there any python tools, or otherwise general techniques used to debug deadlocks?
|
[
"Yah, debugging deadlocks is fun. You can set the logging level to be higher -- see the Python documentation for a description of it, but really quickly:\nimport multiprocessing, logging\nlogger = multiprocessing.log_to_stderr()\nlogger.setLevel(multiprocessing.SUBDEBUG)\n\nAlso, add logging for anything in your code that deals with a resource or whatnot that might be in contention. Finally, shot in the dark: spawning off child processes during an import might cause a problem.\n",
"In order to avoid deadlocks in the first place, learning good practices is useful, as parallel processing is indeed quite subtle. The (free) Little Book of Semaphores can be a very enjoyable read!\n"
] |
[
45,
12
] |
[] |
[] |
[
"deadlock",
"debugging",
"multiprocessing",
"python"
] |
stackoverflow_0001352980_deadlock_debugging_multiprocessing_python.txt
|
Q:
How to put Google login box inside flash in GAE?
I am putting my old flash site into GAE. I want to use Google's user authentication too. Now, I want to put Google's login box inside the flash instead of redirecting to Google's login page. I want the same thing for forgotten passwords.
Is it possible to do this? How to do this?
A:
Of course this is possible.
You just need to use Flash to post an HTTP request to your server,
and your server can communicate with Flash in several ways: XML, HTML, AMF, or even JSON (I am not sure).
I recommend you use PyAMF on the server side to build native support for Flash.
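A minimal sketch of exposing a service to Flash with PyAMF's WSGI gateway -- the service name and echo function are made up, and a real GAE app would plug the gateway into its request handler instead of wsgiref:

from wsgiref import simple_server
from pyamf.remoting.gateway.wsgi import WSGIGateway

def echo(data):
    return data

gateway = WSGIGateway({'myservice.echo': echo})
simple_server.make_server('localhost', 8080, gateway).serve_forever()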
|
How to put Google login box inside flash in GAE?
|
I am putting my old flash site into GAE. I want to use Google's user authentication too. Now, I want to put Google's login box inside the flash instead of redirecting to Google's login page. I want the same thing for forgotten passwords.
Is it possible to do this? How to do this?
|
[
"Of course this is possible\nyou just need to use flash to post http request to your server\nand your server could communicate to flash through several ways like: xml , html, and AMF or even johnson( I am not sure).\nI recommend you use pyamf at server side to build a native support for flash at server side\n"
] |
[
1
] |
[] |
[] |
[
"authentication",
"flash",
"google_app_engine",
"python"
] |
stackoverflow_0001079022_authentication_flash_google_app_engine_python.txt
|
Q:
Python SOAP clients will not work with this WSDL
Thus far I've tried to access this WSDL:
https://login.azoogleads.com/affiliate/tool/soap_api
from the two common Python SOAP clients that I'm aware of: SOAPpy and ZSI.client.Binding. SOAPpy raises an exception in PyXML (xml.parsers.expat.ExpatError: not well-formed (invalid token)) and ZSI raises an exception in the urlparse library.
What I'm hoping is:
1.) I'm using these libraries incorrectly (usage below)
or
2.) There is another SOAP library I don't know about that will be able to handle this
Here's my usage of the libraries:
from ZSI.client import Binding
b = Binding('https://login.azoogleads.com/affiliate/tool/soap_api/')
hash = b.authenticate('should', 'get', 'authenticationfailurefromthis')
and
import SOAPpy
b = SOAPpy.WSDL.Proxy('https://login.azoogleads.com/affiliate/tool/soap_api/')
hash = b.authenticate('any', 'info', 'shoulddo')
A:
You're not actually giving it a valid WSDL endpoint; try explicitly giving it the WSDL location rather than the directory it is in. Remember, computers are exceptionally stupid things!
|
Python SOAP clients will not work with this WSDL
|
Thus far I've tried to access this WSDL:
https://login.azoogleads.com/affiliate/tool/soap_api
from the two common Python SOAP clients that I'm aware of: SOAPpy and ZSI.client.Binding. SOAPpy raises an exception in PyXML (xml.parsers.expat.ExpatError: not well-formed (invalid token)) and ZSI raises an exception in the urlparse library.
What I'm hoping is:
1.) I'm using these libraries incorrectly (usage below)
or
2.) There is another SOAP library I don't know about that will be able to handle this
Here's my usage of the libraries:
from ZSI.client import Binding
b = Binding('https://login.azoogleads.com/affiliate/tool/soap_api/')
hash = b.authenticate('should', 'get', 'authenticationfailurefromthis')
and
import SOAPpy
b = SOAPpy.WSDL.Proxy('https://login.azoogleads.com/affiliate/tool/soap_api/')
hash = b.authenticate('any', 'info', 'shoulddo')
|
[
"your not actually giving it a valid WSDL endpoint try explicilty giving it the WSDL location rather than the directory it is in. Remember computer are exceptually stupid things!\n"
] |
[
2
] |
[] |
[] |
[
"python",
"soap",
"soappy",
"zsi"
] |
stackoverflow_0001349325_python_soap_soappy_zsi.txt
|
Q:
Summing up two columns the Unix way
# To fix the symptom
How can you sum up the following columns effectively?
Column 1
1
3
3
...
Column 2
2323
343
232
...
This should give me
Expected result
2324
346
235
...
I have the columns in two files.
# Initial situation
I sometimes use too many curly brackets, such that I have used one more { than } in my files.
I am trying to find where I have used the one unnecessary curly bracket.
I have used the following steps in getting the data
Find commands
find . * -exec grep '{' {} + > /tmp/1
find . * -exec grep '}' {} + > /tmp/2
AWK commands
awk -F: '{ print $2 }' /tmp/1 > /tmp/11
awk -F: '{ print $2 }' /tmp/2 > /tmp/22
The columns are in the files /tmp/11 and /tmp/22.
I repeat a lot of similar commands in my procedure.
This suggests to me that this is not the right way.
Please suggest any way, such as Python, Perl, or any Unix tool, which can decrease the number of steps.
A:
Using python:
totals = [ int(i)+int(j) for i, j in zip ( open(fname1), open(fname2) ) ]
A:
If c1 and c2 are your files, you can do this:
$ paste c1 c2 | awk '{print $1 + $2}'
Or (without AWK):
$ paste c1 c2 | while read i j; do echo $(($i+$j)); done
A:
You can avoid the intermediate steps by just using a command that do the counts and the comparison at the same time:
find . -type f -exec perl -nle 'END { print $ARGV if $h{"{"} != $h{"}"} } $h{$_}++ for /([}{])/g' {} \;
This calls the Perl program once per file, the Perl program counts the number of each type curly brace and prints the name of the file if they counts don't match.
You must be careful with the /([}{])/ section; find will think it needs to do the replacement on {} if you say /([{}])/.
WARNING: this code will have false positives and negatives if you are trying to run it against source code. Consider the following cases:
balanced, but curlies in strings:
if ($s eq '{') {
print "I saw a {\n"
}
unbalanced, but curlies in strings:
while (1) {
print "}";
You can expand the Perl command by using B::Deparse:
perl -MO=Deparse -nle 'END { print $ARGV if $h{"{"} != $h{"}"} } $h{$_}++ for /([}{])/g'
Which results in:
BEGIN { $/ = "\n"; $\ = "\n"; }
LINE: while (defined($_ = <ARGV>)) {
    chomp $_;
    sub END {
        print $ARGV if $h{'{'} != $h{'}'};
    }
    ;
    ++$h{$_} foreach (/([}{])/g);
}
We can now look at each piece of the program:
BEGIN { $/ = "\n"; $\ = "\n"; }
This is caused by the -l option. It sets both the input and output record separators to "\n". This means anything read in will be broken into records based "\n" and any print statement will have "\n" appended to it.
LINE: while (defined($_ = <ARGV>)) {
}
This is created by the -n option. It loops over every file passed in via the commandline (or STDIN if no files are passed) reading each line of those files. This also happens to set $ARGV to the last file read by <ARGV>.
chomp $_;
This removes whatever is in the $/ variable from the line that was just read ($_), it does nothing useful here. It was caused by the -l option.
sub END {
print $ARGV if $h{'{'} != $h{'}'};
}
This is an END block, this code will run at the end of the program. It prints $ARGV (the name of the file last read from, see above) if the values stored in %h associated with the keys '{' and '}' are equal.
++$h{$_} foreach (/([}{])/g);
This needs to be broken down further:
/
    ( #begin capture
        [}{] #match any of the '}' or '{' characters
    ) #end capture
/gx
Is a regex that returns a list of '{' and '}' characters that are in the string being matched. Since no string was specified the $_ variable (which holds the line last read from the file, see above) will be matched against. That list is fed into the foreach statement which then runs the statement it is in front of for each item (hence the name) in the list. It also sets $_ (as you can see $_ is a popular variable in Perl) to be the item from the list.
++$h{$_}
This line increments the value in $h that is associated with $_ (which will be either '{' or '}', see above) by one.
A:
In Python (or Perl, Awk, &c) you can reasonably do it in a single stand-alone "pass" -- I'm not sure what you mean by "too many curly brackets", but you can surely count curly use per file. For example (unless you have to worry about multi-GB files), the 10 files using most curly braces:
import heapq
import os
import re
curliest = dict()
for path, dirs, files in os.walk('.'):
    for afile in files:
        fn = os.path.join(path, afile)
        with open(fn) as f:
            data = f.read()
            braces = data.count('{') + data.count('}')
            curliest[fn] = braces

top10 = heapq.nlargest(10, curliest, curliest.get)
top10.sort(key=curliest.get)
for fn in top10:
    print '%6d %s' % (curliest[fn], fn)
A:
Reply to Lutz's answer
My problem was finally solved by this command
paste -d: /tmp/1 /tmp/2 | awk -F: '{ print $1 "\t" $2 - $4 }'
A:
Your problem can be solved with just one awk command...
awk '{getline i<"file1";print i+$0}' file2
|
Summing up two columns the Unix way
|
# To fix the symptom
How can you sum up the following columns effectively?
Column 1
1
3
3
...
Column 2
2323
343
232
...
This should give me
Expected result
2324
346
235
...
I have the columns in two files.
# Initial situation
I sometimes use too many curly brackets, such that I have used one more { than } in my files.
I am trying to find where I have used the one unnecessary curly bracket.
I have used the following steps in getting the data
Find commands
find . * -exec grep '{' {} + > /tmp/1
find . * -exec grep '}' {} + > /tmp/2
AWK commands
awk -F: '{ print $2 }' /tmp/1 > /tmp/11
awk -F: '{ print $2 }' /tmp/2 > /tmp/22
The columns are in the files /tmp/11 and /tmp/22.
I repeat a lot of similar commands in my procedure.
This suggests to me that this is not the right way.
Please suggest any way, such as Python, Perl, or any Unix tool, which can decrease the number of steps.
|
[
"Using python:\ntotals = [ int(i)+int(j) for i, j in zip ( open(fname1), open(fname2) ) ]\n\n",
"If c1 and c2 are youre files, you can do this:\n$ paste c1 c2 | awk '{print $1 + $2}'\n\nOr (without AWK):\n$ paste c1 c2 | while read i j; do echo $(($i+$j)); done\n\n",
"You can avoid the intermediate steps by just using a command that do the counts and the comparison at the same time:\nfind . -type f -exec perl -nle 'END { print $ARGV if $h{\"{\"} != $h{\"}\"} } $h{$_}++ for /([}{])/g' {}\\;\n\nThis calls the Perl program once per file, the Perl program counts the number of each type curly brace and prints the name of the file if they counts don't match.\nYou must be careful with the /([}{]])/ section, find will think it needs to do the replacement on {} if you say /([{}]])/.\nWARNING: this code will have false positives and negatives if you are trying to run it against source code. Consider the following cases:\nbalanced, but curlies in strings:\nif ($s eq '{') {\n print \"I saw a {\\n\"\n}\n\nunbalanced, but curlies in strings:\nwhile (1) {\n print \"}\";\n\nYou can expand the Perl command by using B::Deparse:\nperl -MO=Deparse -nle 'END { print $ARGV if $h{\"{\"} != $h{\"}\"} } $h{$_}++ for /([}{])/g'\nWhich results in:\nBEGIN { $/ = \"\\n\"; $\\ = \"\\n\"; }\nLINE: while (defined($_ = <ARGV>)) {\n chomp $_;\n sub END {\n print $ARGV if $h{'{'} != $h{'}'};\n }\n ;\n ++$h{$_} foreach (/([}{])/g);\n}\n\nWe can now look at each piece of the program:\nBEGIN { $/ = \"\\n\"; $\\ = \"\\n\"; }\n\nThis is caused by the -l option. It sets both the input and output record separators to \"\\n\". This means anything read in will be broken into records based \"\\n\" and any print statement will have \"\\n\" appended to it.\nLINE: while (defined($_ = <ARGV>)) {\n}\n\nThis is created by the -n option. It loops over every file passed in via the commandline (or STDIN if no files are passed) reading each line of those files. This also happens to set $ARGV to the last file read by <ARGV>.\nchomp $_;\n\nThis removes whatever is in the $/ variable from the line that was just read ($_), it does nothing useful here. It was caused by the -l option.\nsub END {\n print $ARGV if $h{'{'} != $h{'}'};\n}\n\nThis is an END block, this code will run at the end of the program. It prints $ARGV (the name of the file last read from, see above) if the values stored in %h associated with the keys '{' and '}' are equal.\n++$h{$_} foreach (/([}{])/g);\n\nThis needs to be broken down further:\n/\n ( #begin capture\n [}{] #match any of the '}' or '{' characters\n ) #end capture\n/gx\n\nIs a regex that returns a list of '{' and '}' characters that are in the string being matched. Since no string was specified the $_ variable (which holds the line last read from the file, see above) will be matched against. That list is fed into the foreach statement which then runs the statement it is in front of for each item (hence the name) in the list. It also sets $_ (as you can see $_ is a popular variable in Perl) to be the item from the list.\n++h{$_}\n\nThis line increments the value in $h that is associated with $_ (which will be either '{' or '}', see above) by one.\n",
"In Python (or Perl, Awk, &c) you can reasonably do it in a single stand-alone \"pass\" -- I'm not sure what you mean by \"too many curly brackets\", but you can surely count curly use per file. For example (unless you have to worry about multi-GB files), the 10 files using most curly braces:\nimport heapq\nimport os\nimport re\n\ncurliest = dict()\n\nfor path, dirs, files in os.walk('.'):\n for afile in files:\n fn = os.path.join(path, afile)\n with open(fn) as f:\n data = f.read()\n braces = data.count('{') + data.count('}')\n curliest[fn] = bracs\n\ntop10 = heapq.nlargest(10, curlies, curliest.get)\ntop10.sort(key=curliest.get)\nfor fn in top10:\n print '%6d %s' % (curliest[fn], fn)\n\n",
"Reply to Lutz'n answer\nMy problem was finally solved by this commnad\npaste -d: /tmp/1 /tmp/2 | awk -F: '{ print $1 \"\\t\" $2 - $4 }'\n\n",
"your problem can be solved with just 1 awk command...\nawk '{getline i<\"file1\";print i+$0}' file2\n\n"
] |
[
11,
11,
3,
1,
0,
0
] |
[] |
[] |
[
"awk",
"brackets",
"perl",
"python",
"unix"
] |
stackoverflow_0001347457_awk_brackets_perl_python_unix.txt
|
Q:
Creating SVGs using Python
I'm building a set of SVG files that include an unfortunate number of hardcoded values (they must print with some elements sized in mm, while others must be scaled as a percent, and most of the values are defined relative to each other). Rather than managing those numbers by hand (heaven forbid I want to change something), I thought I might use my trusty hammer python for the task.
SVG 1.1 doesn't natively support any kind of variable scheme that would let me do what I want, and I'm not interested in introducing javascript or unstable w3c draft specs into the mix. One obvious solution is to use string formatting to read, parse, and replace variables in my SVG file. This seems like a bad idea for a larger document, but has the advantage of being simple and portable.
My second thought was to investigate the available python->svg libraries. Unfortunately, it seems that the few options tend to be either too new (pySVG still has an unstable interface), too old (not updated since 2005), or abandoned. I haven't looked closely, but my sense is that the charting applications are not flexible enough to generate my documents.
The third option I came across was that of using some other drawing tool (cairo, for instance) that can be convinced to put out svg. This has the (potential) disadvantage of not natively supporting the absolute element sizes that are so important to me, but might include the ability to output PDF, which would be convenient.
I've already done the googling, so I'm looking for input from people who have used any of the methods mentioned, or who might know of some other approach. Long-term stability of whatever solution is chosen is important to me (it was the original reason for the hand-coding instead of just using illustrator).
At this point, I'm leaning towards the first solution, so recommendations on best practices for using python to parse and replace variables in XML files are welcome.
A:
A markup based templating engine, such as genshi might be useful. It would let you do most of the authoring using a SVG tool and do the customization in the template. I'd definitely prefer it to XSLT.
A:
Since SVG is XML, maybe you could use XSLT to transform a source XML file containing your variables to SVG. In your XSLT style sheet, you would have templates corresponding to various elements of your SVG illustration, that change their output depending on the values found in the source XML file.
Or you could use a template SVG as the source and transform it into the final one, with the values passed as parameters to the XSLT processor.
You'd either use XSLT directly, or via Python if you need some logic that's easier to perform in a traditional language.
|
Creating SVGs using Python
|
I'm building a set of SVG files that include an unfortunate number of hardcoded values (they must print with some elements sized in mm, while others must be scaled as a percent, and most of the values are defined relative to each other). Rather than managing those numbers by hand (heaven forbid I want to change something), I thought I might use my trusty hammer python for the task.
SVG 1.1 doesn't natively support any kind of variable scheme that would let me do what I want, and I'm not interested in introducing javascript or unstable w3c draft specs into the mix. One obvious solution is to use string formatting to read, parse, and replace variables in my SVG file. This seems like a bad idea for a larger document, but has the advantage of being simple and portable.
My second thought was to investigate the available python->svg libraries. Unfortunately, it seems that the few options tend to be either too new (pySVG still has an unstable interface), too old (not updated since 2005), or abandoned. I haven't looked closely, but my sense is that the charting applications are not flexible enough to generate my documents.
The third option I came across was that of using some other drawing tool (cairo, for instance) that can be convinced to put out svg. This has the (potential) disadvantage of not natively supporting the absolute element sizes that are so important to me, but might include the ability to output PDF, which would be convenient.
I've already done the googling, so I'm looking for input from people who have used any of the methods mentioned, or who might know of some other approach. Long-term stability of whatever solution is chosen is important to me (it was the original reason for the hand-coding instead of just using illustrator).
At this point, I'm leaning towards the first solution, so recommendations on best practices for using python to parse and replace variables in XML files are welcome.
|
[
"A markup based templating engine, such as genshi might be useful. It would let you do most of the authoring using a SVG tool and do the customization in the template. I'd definitely prefer it to XSLT.\n",
"Since SVG is XML, maybe you could use XSLT to transform a source XML file containing your variables to SVG. In your XSLT style sheet, you would have templates corresponding to various elements of your SVG illustration, that change their output depending on the values found in the source XML file.\nOr you could use a template SVG as the source and transform it into the final one, with the values passed as parameters to the XSLT processor.\nYou's either use XSLT directly, or via Python if you need some logic that's easier to perform in a traditional language.\n"
] |
[
4,
0
] |
[] |
[] |
[
"graphics",
"python",
"svg",
"xml"
] |
stackoverflow_0001353976_graphics_python_svg_xml.txt
|
Q:
Is Python the right hammer for this nail? (build script)
Currently I'm using a Windows batch file to build my software. It does things like running MSBuild, copying files, creating a ZIP file, running some tests, including the subversion revision number, etc.
But the problem is, batch files are evil. So I would like to change to something better. I was planning to recreate my build script in Python. Is that a smart choice? What about all those build systems, like Ant, SCons, Maven, Rake, etc. Would using any of those be a better choice?
Note: I'm not planning to replace my Visual Studio solution/project files. I only want to script everything else that's required to create a release of my software.
Edit: I have good reasons to move away from batch, that's not what my question is about. I would like to know (for example) what SCons gives me, over a normal Python script.
A:
For a tool that is scripted with Python, I happen to think Paver is a more easily-managed and more flexible build automator than SCons. Unlike SCons, Paver is designed for the plethora of not-compiling-programs tasks that go along with managing and distributing a software project.
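A minimal pavement.py sketch of the kind of task Paver is built for -- the commands and file names are assumptions:

from paver.easy import task, needs, sh

@task
def build():
    sh('msbuild MySolution.sln /p:Configuration=Release')

@task
@needs('build')
def package():
    sh('zip -r release.zip bin/Release')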
A:
Batch files aren't evil - they've actually come quite a long way from the brain-dead days of command.com. The command language can be pretty expressive nowadays, it just requires a bit of effort on your part to learn it.
Unless there's an actual problem with your build script that you can't fix (and, if that's the case, that's the question you should be asking rather than some wishy-washy "What's the best replacement?" :-), my approach would be to stick with what you've got.
A vague feeling of evilness would not be reason for me to waste effort 'fixing' something that isn't broken. And it would be wasted effort unless there's a clear advantage to changing ("less evil" is not something I'd consider a clear advantage).
A:
As you're mentioning Python and SCons, I'd say go for SCons. It is Python after all. And yes, any of the above would be a better choice than hand-rolled build scripts.
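For example, a minimal SConstruct sketch (file names are assumptions; SCons injects Environment, so no imports are needed):

env = Environment()
program = env.Program('hello', ['hello.c'])
# the zip is rebuilt only when its inputs actually change
env.Zip('release.zip', [program, 'README.txt'])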
A:
I've seen python scripts used for building releases elsewhere so it can't be bad. Actually, I've personally used perl scripts to automate release building. I guess any scripting language could easily automate that procedure. If it's gonna be easy to do (and probably better than batch scripts), why not try it?
A:
I would suggest using NAnt for your build script instead of python.
My reasons for this are:
It has the tasks defined already, all you need to do is write the XML and point it to the right places. If you are working with people who do not know python, XML may be a little less scary than learning a new language.
NAnt is designed to work in the windows .Net environment, so it can already do MSBuild and NUnit tasks.
If you are already writing in C#, if you need to extend NAnt to do new tasks you are not adding another language to the mix of your project.
You can hook into Cruise Control .Net (for continuous builds). Which I think is the main reason why you would use NAnt.
A:
Why should you use Python? If your build script isn't broke, don't fix it. If you're having issues updating it to deal with new additions to the project then you may want to look at rewriting it. I wouldn't use Python though; tools like NAnt or MSBuild do the job. I don't see the point in using a general-purpose programming language to do something that tools have already been written to do, unless you have a lot of obscure requirements existing tools can't deal with. Second, what happens if you get hit by a bus or win the lotto? If you are determined to script everything I'd use PowerShell or some other Microsoft-specific technology, since you're already wedded to Microsoft. If you leave, will there be enough Python programmers to maintain the build scripts?
A:
I would strongly suggest taking a look at waf. It's kind of what you want: "a Python-based framework for configuring, compiling and installing applications"
A:
Personally I would use scripting as a last resort given that
With a bit of work you can get MSBuild to do all those things for you by extending it with additional components
There are third party equivalents to MSBuild like NANT that can do the same thing
There are entire tools like FinalBuilder that also do the same thing, and are easier to configure and extend
However, if I had to go the scripting route I would use Powershell for a couple of reasons:
Complete access to file system
You can easily access .NET objects
You can easily access COM objects
A:
You can create custom makefiles for Microsoft nmake tool which you already have installed. Using a tool like that (SCons, Maven, etc. fall into the same category) gives you much more than regular scripts.
The main benefit is that dependencies between files are tracked and also the timestamps of changes. For example, you can make your .zip file depend on some other files, so .zip only gets repacked if some of those files have changed in the meantime. Just like with source code and its compiled form.
A:
Python is very portable. SCons is field tested and reliable. Given what you know (from what you explained), why even ask the question?
If you're maintaining something, it's not just about getting it to build, it's also about explaining to the user why it can NOT build, which saves you a ton of very frustrating questions while helping users to help themselves.
I can not think of a modern, production operating system that lacks Python, unless you get into the embedded / research arena.
So, I'm answering to say, you answered your own question :)
A:
It depends on what technology your software uses. If you're building C++ programs, I'd probably say go for scons without question (unless you have weird requirements scons can't meet). On the other hand, consider the instructions for building C#: CSharpBuilder.
I would like to know (for example) what SCons gives me, over a normal Python script.
Think of scons as being more of a library than a program. It provides you with code that will prevent a lot of tedium that you will have to deal with without it. In my opinion, vanilla Python isn't the best option for any kind of shell scripting stuff (not that it can't do it).
But the problem is, batch files are evil.
Lastly, batch files are evil if they're used for a project they're not suited to handle. For the one or two file project, batch files do just fine.
A:
It does things like running MSBuild, copying files, creating a ZIP file, running some tests, including the subversion revision number, etc.
MSBuild and PowerShell can easily do all of this with reasonably clean, succinct code. You're then sticking to purely M$ products, which managers tend to like. Otherwise I would suggest looking into Rake, if only for its large community. It has a nice syntax and IronRuby support (irake).
To be honest all but the last task you have mentioned are easily done in MSBuild alone. I would suggest learning the tools you have before going elsewhere.
Check out http://msbuildtasks.tigris.org/ for some good add ons to MSBuild
|
Is Python the right hammer for this nail? (build script)
|
Currently I'm using a Windows batch file to build my software. It does things like running MSBuild, copying files, creating a ZIP file, running some tests, including the subversion revision number, etc.
But the problem is, batch files are evil. So I would like to change to something better. I was planning to recreate my build script in Python. Is that a smart choice? What about all those build systems, like Ant, SCons, Maven, Rake, etc. Would using any of those be a better choice?
Note: I'm not planning to replace my Visual Studio solution/project files. I only want to script everything else that's required to create a release of my software.
Edit: I have good reasons to move away from batch, that's not what my question is about. I would like to know (for example) what SCons gives me, over a normal Python script.
|
[
"For a tool that is scripted with Python, I happen to think Paver is a more easily-managed and more flexible build automator than SCons. Unlike SCons, Paver is designed for the plethora of not-compiling-programs tasks that go along with managing and distributing a software project.\n",
"Batch files aren't evil - they've actually come quite a long way from the brain-dead days of command.com. The command language can be pretty expressive nowadays, it just requires a bit of effort on your part to learn it.\nUnless there's an actual problem with your build script that you can't fix (and, if that's the case, that's the question you should be asking rather than some wishy-washy \"What's the best replacement?\" :-), my approach would be to stick with what you've got.\nA vague feeling of evilness would not be reason for me to waste effort 'fixing' something that isn't broken. And it would be wasted effort unless there's a clear advantage to changing (\"less evil\" is not something I'd consider a clear advantage).\n",
"As you're mentioning Python and SCons, I'd say go for SCons. It is Python after all. And yes, any of the above would be a better choice than hand-rolled build scripts.\n",
"I've seen python scripts used for building releases elsewhere so it can't be bad. Actually, I've personally used perl scripts to automate release building. I guess any scripting language could easily automate that procedure. If it's gonna be easy to do (and probably better than batch scripts), why not try it?\n",
"I would suggest using NAnt for your build script instead of python.\nMy reasons for this are:\n\nIt has the tasks defined already, all you need to do is write the XML and point it to the right places. If you are working with people who do not know python, XML may be a little less scary than learning a new language. \nNAnt is designed to work in the windows .Net environment, so it can already do MSBuild and NUnit tasks. \nIf you are already writing in C#, if you need to extend NAnt to do new tasks you are not adding another language to the mix of your project.\nYou can hook into Cruise Control .Net (for continuous builds). Which I think is the main reason why you would use NAnt. \n\n",
"Why should you use python? If your build script isn't broke don't fix it. If your having issues updating it to deal with new aditions to the project then you may want to look at rewriting it. I wouldn't use Python though tools like NANT or MSBuild do the job. I don't see the point in using a general purpis programming language to do something that tools have already been written to do unless you have a lot of obscure requirements existing tools can't deal with. Second what happens if you get hit by a bus or win the lotto? If you are determined to script everything I'd use powershell or some other Microsoft specific technology since your already wedded to Microsoft. If you leave will there be enough Python programmers to maintain the build scripts?\n",
"I would strongly suggest to take a look at waf. It's kind of what you want: \"a Python-based framework for configuring, compiling and installing applications\"\n",
"Personally I would use scripting as a last resort given that\n\nWith a bit of work you can get MSBuild to do all those things for you by extending it with additional components\nThere are third party equivalents to MSBuild like NANT that can do the same thing\nThere are entire tools like FinalBuilder that also do the same thing, and are easier to configure and extend\n\nHowever, if I had to go the scripting route I would use Powershell for a couple of reasons:\n\nComplete access to file system\nYou can easily access .NET objects\nYou can easily access COM objects\n\n",
"You can create custom makefiles for Microsoft nmake tool which you already have installed. Using a tool like that (SCons, Maven, etc. fall into the same category) gives you much more than regular scripts. \nThe main benefit is that dependencies between files are tracked and also the timestamps of changes. For example, you can make your .zip file depend on some other files, so .zip only gets repacked if some of those files have changed in the meantime. Just like with source code and its compiled form.\n",
"Python is very portable. SCons is field tested and reliable. Given what you know (from what you explained), why even ask the question?\nIf your maintaining something, its not just about getting it to build, its also about explaining to the user why it can NOT build, which saves you a ton of very frustrating questions while helping users to help themselves.\nI can not think of a modern, production operating system that lacks Python, unless you get into the embedded / research arena.\nSo, I'm answering to say, you answered your own question :)\n",
"It depends on what technology your software uses. If you're building C++ programs, I'd probably say go for scons without question (unless you have weird requirements scons can't meet). On the other hand, consider the instructions for building C#: CSharpBuilder.\n\nI would like to know (for example) what SCons gives me, over a normal Python script.\n\nThink of scons as being more of a library than a program. It provides you with code that will prevent a lot of tedium that you will have to deal with without it. In my opinion, vanilla Python isn't the best option for any kind of shell scripting stuff (not that it can't do it).\n\nBut the problem is, batch files are evil.\n\nLastly, batch files are evil if they're used for a project they're not suited to handle. For the one or two file project, batch files do just fine.\n",
"\nIt does things like running MSBuild, copying files, creating a ZIP file, running some tests, including the subversion revision number, etc.\n\nMSBuild and PowerShell can easily do all of this with reasonably clean succinct code. You're then sticking to purely M$ products which managers tend to like. Otherwise I would suggest you could look into Rake if not only for its large community. It has a nice syntax and iron ruby support (irake).\nTo be honest all but the last task you have mentioned are easily done in MSBuild alone. I would suggest learning the tools you have before going elsewhere.\nCheck out http://msbuildtasks.tigris.org/ for some good add ons to MSBuild\n"
] |
[
16,
9,
7,
4,
4,
3,
2,
1,
1,
1,
0,
0
] |
[] |
[] |
[
"build_automation",
"build_process",
"python"
] |
stackoverflow_0000792629_build_automation_build_process_python.txt
|
Q:
Using HTML Parser with HTTPResponse in Python 3.1
The response data from HTTPResponse object is of type bytes.
conn = http.client.HTTPConnection("www.yahoo.com")
conn.request("GET", "/")
response = conn.getresponse()
data = response.read()
type(data)
The data is of type bytes.
I would like to use the response along with the built-in HTML parser of Python 3.1. However I find that HTMLParser.feed() requires a string (of type str). And this method does not accept data as the argument. To circumvent this problem, I have used data.decode() to continue with the parsing.
Question:
Is there a better way to accomplish this?
Is there a reason why the HTTP response does not return a string?
I guess the reason is this: The response of the server could be in any character set. So, the library cannot assume that it would be ASCII. But then, strings in Python are Unicode. The HTTP library could just as well return a string. HTML tags are definitely in ASCII.
A:
Is there a reason why HTTP response
does not return string?
You nailed it yourself. A HTTP response isn't necessarily a string.
It can be an image, for example, and even when it is a string it can't know the encoding.
If you know the encoding (or have an encoding detection library) then it's very easy to convert a series of bytes to a string. In fact, the byte type is often used synonymously with the char type in C-based languages.
HTML tags are definitely in ASCII.
And if HTML tags were always ASCII, XHTML (which is recommended to be delivered as UTF-8) would have serious issues!
Besides, HTTP != HTML.
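For completeness, a minimal sketch of the decode-then-feed approach (the charset sniffing here is hand-rolled; a real program might use an encoding-detection library as noted above):
import http.client
from html.parser import HTMLParser

class LinkParser(HTMLParser):
    # Collect the href attribute of every <a> tag.
    def __init__(self):
        HTMLParser.__init__(self)
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(v for k, v in attrs if k == "href")

conn = http.client.HTTPConnection("www.yahoo.com")
conn.request("GET", "/")
response = conn.getresponse()
data = response.read()                        # bytes, not str

# Honour the charset the server declared; fall back to UTF-8.
charset = "utf-8"
for part in (response.getheader("Content-Type") or "").split(";"):
    if part.strip().lower().startswith("charset="):
        charset = part.strip().split("=", 1)[1]

parser = LinkParser()
parser.feed(data.decode(charset, "replace"))
print(parser.links)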
|
Using HTML Parser with HTTPResponse in Python 3.1
|
The response data from HTTPResponse object is of type bytes.
conn = http.client.HTTPConnection("www.yahoo.com")
conn.request("GET", "/")
response = conn.getresponse()
data = response.read()
type(data)
The data is of type bytes.
I would like to use the response along with the built-in HTML parser of Python 3.1. However I find that HTMLParser.feed() requires a string (of type str). And this method does not accept data as the argument. To circumvent this problem, I have used data.decode() to continue with the parsing.
Question:
Is there a better way to accomplish this?
Is there a reason why the HTTP response does not return a string?
I guess the reason is this: The response of the server could be in any character set. So, the library cannot assume that it would be ASCII. But then, strings in Python are Unicode. The HTTP library could just as well return a string. HTML tags are definitely in ASCII.
|
[
"\nIs there a reason why HTTP response\n does not return string?\n\nYou nailed it yourself. A HTTP response isn't necessarily a string.\nIt can be an image, for example, and even when it is a string it can't know the encoding.\nIf you know the encoding (or have an encoding detection library) then it's very easy to convert a series of bytes to a string. In fact, the byte type is often used synonymously with the char type in C-based languages.\n\nHTML tags are definitely in ASCII.\n\nAnd if HTML tags were always ASCII, XHTML (which is recommended to be delivered as UTF-8) would have serious issues!\nBesides, HTTP != HTML.\n"
] |
[
2
] |
[] |
[] |
[
"html",
"http",
"python"
] |
stackoverflow_0001354338_html_http_python.txt
|
Q:
Should I optimise my python code like C++? Does it matter?
I had an argument with a colleague about writing python efficiently. He claimed that though you are programming python you still have to optimise the little bits of your software as much as possible, as if you are writing an efficient algorithm in C++.
Things like:
In an if statement with an or always put the condition most likely to fail first, so the second will not be checked.
Use the most efficient functions for manipulating strings in common use. Not code that grinds strings, but simple things like doing joins and splits, and finding substrings.
Call as few functions as possible, even if it comes at the expense of readability, because of the overhead this creates.
I say, that in most cases it doesn't matter. I should also say that context of the code is not a super-efficient NOC or missile-guidance systems. We're mostly writing tests in python.
What's your view of the matter?
A:
My answer to that would be :
We should forget about small
efficiencies, say about 97% of the
time: premature optimization is the
root of all evil.
(Quoting Knuth, Donald. Structured Programming with go to Statements, ACM Journal Computing Surveys, Vol 6, No. 4, Dec. 1974. p.268)
If your application is doing anything like a query to the database, that one query will take more time than anything you can gain with those kind of small optimizations, anyway...
And if you're chasing performance like that, why not code in assembly language, after all? Because Python is easier/faster to write and maintain? Well, if so, you are right :-)
The most important thing is that your code is easy to maintain; not a couple of microseconds of CPU time!
Well, maybe except if you have thousands of servers -- but is it your case ?
A:
The answer is really simple :
Follow Python best practices, not C++ best practices.
Readability in Python is more important than speed.
If performance becomes an issue, measure, then start optimizing.
A:
This sort of premature micro-optimisation is usually a waste of time in my experience, even in C and C++. Write readable code first. If it's running too slowly, run it through a profiler, and if necessary, fix the hot-spots.
Fundamentally, you need to think about return on investment. Is it worth the extra effort in reading and maintaining "optimised" code for the couple of microseconds it saves you? In most cases it isn't.
(Also, compilers and runtimes are getting cleverer. Some micro-optimisations may become micro-pessimisations over time.)
A:
I agree with others: readable code first ("Performance is not a problem until performance is a problem.").
I only want to add that when you absolutely need to write some unreadable and/or non-intuitive code, you can generally isolate it in few specific methods, for which you can write detailed comments, and keep the rest of your code highly readable. If you do so, you'll end up having easy to maintain code, and you'll only have to go through the unreadable parts when you really need to.
A:
I should also say that context of the code is not a super-efficient NOC or missile-guidance systems. We're mostly writing tests in python.
Given this, I'd say that you should take your colleague's advice about writing efficient Python but ignore anything he says that goes against prioritizing readability and maintainability of the code, which will probably be more important than the speed at which it'll execute.
A:
In an if statement with an or always
put the condition most likely to fail
first, so the second will not be
checked.
This is generally good advice, and it also depends on the logic of your program. If it makes sense that the second statement is not evaluated when the first returns false, then do so. Doing the opposite could otherwise be a bug.
Use the most efficient functions for
manipulating strings in common use.
Not code that grinds strings, but
simple things like doing joins and
splits, and finding substrings.
I don't really get this point. Of course you should use the library provided functions, because they are probably implemented in C, and a pure python implementation is most likely to be slower. In any case, no need to reinvent the wheel.
Call as few functions as possible,
even if it comes at the expense of
readability, because of the overhead
this creates.
$ cat withcall.py
def square(a):
return a*a
for i in xrange(1,100000):
i_square = square(i)
$ cat withoutcall.py
for i in xrange(1,100000):
i_square = i*i
$ time python2.3 withcall.py
real 0m5.769s
user 0m4.304s
sys 0m0.215s
$ time python2.3 withcall.py
real 0m5.884s
user 0m4.315s
sys 0m0.206s
$ time python2.3 withoutcall.py
real 0m5.806s
user 0m4.172s
sys 0m0.209s
$ time python2.3 withoutcall.py
real 0m5.613s
user 0m4.171s
sys 0m0.216s
I mean... come on... please.
A:
I think there are several related 'urban legends' here.
False Putting the more often-checked condition first in a conditional and similar optimizations save enough time for a typical program that it is worthy for a typical programmer.
True Some, but not many, people are using such styles in Python in the incorrect belief outlined above.
True Many people use such style in Python when they think that it improves readability of a Python program.
About readability: I think it's indeed useful when you give the most useful conditional first, since this is what people notice first anyway. You should also use ''.join() if you mean concatenation of strings since it's the most direct way to do it (the s += x operation could mean something different).
"Call as less functions as possible" decreases readability and goes against Pythonic principle of code reuse. And so it's not a style people use in Python.
A:
Before introducing performance optimizations at the expense of readability, look into modules like psyco that will do some JIT-ish compiling of distinct functions, often with striking results, with no impairment of readability.
Then if you really want to embark on the optimization path, you must first learn to measure and profile. Optimization MUST BE QUANTITATIVE - do not go with your gut. The hotspot profiler will show you the functions where your program is burning up the most time.
If optimization turns up a function like this is being frequently called:
def get_order_qty(ordernumber):
# look up order in database and return quantity
If there is any repetition of ordernumbers, then memoization would be a good optimization technique to learn, and it is easily packaged in an @memoize decorator so that there is little impact to program readability. The effect of memoizing is that values returned for a given set of input arguments are cached, so that the expensive function can be called only once, with subsequent calls resolved against the cache.
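A bare-bones version of such a decorator (a sketch: it assumes the function's arguments are hashable, and lookup_in_database is a hypothetical stand-in for the expensive call):
import functools

def memoize(func):
    cache = {}
    @functools.wraps(func)
    def wrapper(*args):
        if args not in cache:
            cache[args] = func(*args)   # the expensive call runs once per args
        return cache[args]
    return wrapper

@memoize
def get_order_qty(ordernumber):
    # look up order in database and return quantity
    return lookup_in_database(ordernumber)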
Lastly, consider lifting invariants out of loops. For large multi-dimensional structures, this can save a lot of time - in fact in this case, I would argue that this optimization improves readability, as it often serves to make clear that some expression can be computed at a high-level dimension in the nested logic.
(BTW, is this really what you meant?
•In an if statement with an or always put the condition most likely to fail first, so the second will not be checked.
I should think this might be the case for "and", but an "or" will short-circuit if the first value is True, saving the evaluation of the second term of the conditional. So I would change this optimization "rule" to:
If testing "A and B", put A first if
it is more likely to evaluate to
False.
If testing "A or B", put A first if
it is more likely to evaluate to
True.
But often, the sequence of conditions is driven by the tests themselves:
if obj is not None and hasattr(obj,"name") and obj.name.startswith("X"):
You can't reorder these for optimization - they have to be in this order (or just let the exceptions fly and catch them later:
if obj.name.startswith("X"):
A:
Sure follow Python best-practices (and in fact I agree with the first two recommendations), but maintainability and efficiency are not opposites, they are mostly togethers (if that's a word).
Statements like "always write your IF statements a certain way for performance" are a-priori, i.e. not based on knowledge of what your program spends time on, and are therefore guesses. The first (or second, or third, whatever) rule of performance tuning is don't guess.
If after you measure, profile, or in my case do this, you actually know that you can save much time by re-ordering tests, by all means, do. My money says that's at the 1% level or less.
A:
My visceral reaction is this:
I've worked with guys like your colleague and in general I wouldn't take advice from them.
Ask him if he's ever even used a profiler.
|
Should I optimise my python code like C++? Does it matter?
|
I had an argument with a colleague about writing python efficiently. He claimed that though you are programming python you still have to optimise the little bits of your software as much as possible, as if you are writing an efficient algorithm in C++.
Things like:
In an if statement with an or always put the condition most likely to fail first, so the second will not be checked.
Use the most efficient functions for manipulating strings in common use. Not code that grinds strings, but simple things like doing joins and splits, and finding substrings.
Call as few functions as possible, even if it comes at the expense of readability, because of the overhead this creates.
I say, that in most cases it doesn't matter. I should also say that context of the code is not a super-efficient NOC or missile-guidance systems. We're mostly writing tests in python.
What's your view of the matter?
|
[
"My answer to that would be :\n\nWe should forget about small\n efficiencies, say about 97% of the\n time: premature optimization is the\n root of all evil.\n\n(Quoting Knuth, Donald. Structured Programming with go to Statements, ACM Journal Computing Surveys, Vol 6, No. 4, Dec. 1974. p.268)\n\nIf your application is doing anything like a query to the database, that one query will take more time than anything you can gain with those kind of small optimizations, anyway...\nAnd if running after performances like that, why not code in assembly language, afterall ? Because Python is easier/faster to write and maintain ? Well, if so, you are right :-)\nThe most important thing is that your code is easy to maintain ; not a couple micro-seconds of CPU-time !\nWell, maybe except if you have thousands of servers -- but is it your case ?\n",
"The answer is really simple :\n\nFollow Python best practices, not C++ best practices.\nReadability in Python is more important that speed.\nIf performance becomes an issue, measure, then start optimizing.\n\n",
"This sort of premature micro-optimisation is usually a waste of time in my experience, even in C and C++. Write readable code first. If it's running too slowly, run it through a profiler, and if necessary, fix the hot-spots.\nFundamentally, you need to think about return on investment. Is it worth the extra effort in reading and maintaining \"optimised\" code for the couple of microseconds it saves you? In most cases it isn't.\n(Also, compilers and runtimes are getting cleverer. Some micro-optimisations may become micro-pessimisations over time.)\n",
"I agree with others: readable code first (\"Performance is not a problem until performance is a problem.\").\nI only want to add that when you absolutely need to write some unreadable and/or non-intuitive code, you can generally isolate it in few specific methods, for which you can write detailed comments, and keep the rest of your code highly readable. If you do so, you'll end up having easy to maintain code, and you'll only have to go through the unreadable parts when you really need to.\n",
"\nI should also say that context of the code is not a super-efficient NOC or missile-guidance systems. We're mostly writing tests in python.\n\nGiven this, I'd say that you should take your colleague's advice about writing efficient Python but ignore anything he says that goes against prioritizing readability and maintainability of the code, which will probably be more important than the speed at which it'll execute.\n",
"\nIn an if statement with an or always\n put the condition most likely to fail\n first, so the second will not be\n checked.\n\nThis is generally a good advice, and also depends on the logic of your program. If it makes sense that the second statement is not evaluated if the first returns false, then do so. Doing the opposite could be a bug otherwise.\n\nUse the most efficient functions for\n manipulating strings in common use.\n Not code that grinds strings, but\n simple things like doing joins and\n splits, and finding substrings.\n\nI don't really get this point. Of course you should use the library provided functions, because they are probably implemented in C, and a pure python implementation is most likely to be slower. In any case, no need to reinvent the wheel.\n\nCall as less functions as possible,\n even if it comes on the expense of\n readability, because of the overhead\n this creates.\n\n$ cat withcall.py\ndef square(a):\n return a*a\n\nfor i in xrange(1,100000):\n i_square = square(i)\n\n$ cat withoutcall.py\nfor i in xrange(1,100000):\n i_square = i*i\n\n$ time python2.3 withcall.py\nreal 0m5.769s\nuser 0m4.304s\nsys 0m0.215s\n$ time python2.3 withcall.py\nreal 0m5.884s\nuser 0m4.315s\nsys 0m0.206s\n\n$ time python2.3 withoutcall.py\nreal 0m5.806s\nuser 0m4.172s\nsys 0m0.209s\n$ time python2.3 withoutcall.py\nreal 0m5.613s\nuser 0m4.171s\nsys 0m0.216s\n\nI mean... come on... please.\n",
"I think there are several related 'urban legends' here.\n\nFalse Putting the more often-checked condition first in a conditional and similar optimizations save enough time for a typical program that it is worthy for a typical programmer.\nTrue Some, but not many, people are using such styles in Python in the incorrect belief outlined above.\nTrue Many people use such style in Python when they think that it improves readability of a Python program.\n\nAbout readability: I think it's indeed useful when you give the most useful conditional first, since this is what people notice first anyway. You should also use ''.join() if you mean concatenation of strings since it's the most direct way to do it (the s += x operation could mean something different). \n\"Call as less functions as possible\" decreases readability and goes against Pythonic principle of code reuse. And so it's not a style people use in Python.\n",
"Before introducing performance optimizations at the expense of readability, look into modules like psyco that will do some JIT-ish compiling of distinct functions, often with striking results, with no impairment of readability.\nThen if you really want to embark on the optimization path, you must first learn to measure and profile. Optimization MUST BE QUANTITATIVE - do not go with your gut. The hotspot profiler will show you the functions where your program is burning up the most time.\nIf optimization turns up a function like this is being frequently called:\ndef get_order_qty(ordernumber):\n # look up order in database and return quantity\n\nIf there is any repetition of ordernumbers, then memoization would be a good optimization technique to learn, and it is easily packaged in an @memoize decorator so that there is little impact to program readability. The effect of memoizing is that values returned for a given set of input arguments are cached, so that the expensive function can be called only once, with subseqent calls resolved against the cache.\nLastly, consider lifting invariants out of loops. For large multi-dimensional structures, this can save a lot of time - in fact in this case, I would argue that this optimization improves readability, as it often serves to make clear that some expression can be computed at a high-level dimension in the nested logic.\n(BTW, is this really what you meant?\n•In an if statement with an or always put the condition most likely to fail first, so the second will not be checked.\nI should think this might be the case for \"and\", but an \"or\" will short-circuit if the first value is True, saving the evaluation of the second term of the conditional. So I would change this optimization \"rule\" to:\n\nIf testing \"A and B\", put A first if \nit is more likely to evaluate to\nFalse.\nIf testing \"A or B\", put A first if\nit is more likely to evaluate to\nTrue.\n\nBut often, the sequence of conditions is driven by the tests themselves:\nif obj is not None and hasattr(obj,\"name\") and obj.name.startswith(\"X\"):\n\nYou can't reorder these for optimization - they have to be in this order (or just let the exceptions fly and catch them later:\nif obj.name.startswith(\"X\"):\n\n",
"Sure follow Python best-practices (and in fact I agree with the first two recommendations), but maintainability and efficiency are not opposites, they are mostly togethers (if that's a word).\nStatements like \"always write your IF statements a certain way for performance\" are a-priori, i.e. not based on knowledge of what your program spends time on, and are therefore guesses. The first (or second, or third, whatever) rule of performance tuning is don't guess.\nIf after you measure, profile, or in my case do this, you actually know that you can save much time by re-ordering tests, by all means, do. My money says that's at the 1% level or less.\n",
"My visceral reaction is this:\nI've worked with guys like your colleague and in general I wouldn't take advice from them.\nAsk him if he's ever even used a profiler.\n"
] |
[
14,
13,
10,
4,
3,
2,
2,
2,
1,
1
] |
[] |
[] |
[
"performance",
"python"
] |
stackoverflow_0001353715_performance_python.txt
|
Q:
Storing huge hash table in a file in Python
Hey. I have a function I want to memoize; however, it has too many possible values. Is there any convenient way to store the values in a text file and make it read from them? For example, something like storing a pre-computed list of primes up to 10^9 in a text file? I know it's slow to read from a text file, but there's no other option if the amount of data is really huge. Thanks!
A:
For a list of primes up to 10**9, why do you need a hash? What would the KEYS be?! Sounds like a perfect opportunity for a simple, straightforward binary file! By the Prime Number Theorem, there are about 10**9/ln(10**9) such primes -- i.e. 50 million or a bit less. At 4 bytes per prime, that's only 200 MB or less -- perfect for an array.array("L") and its methods such as fromfile, etc (see the docs). In many cases you could actually suck all of the 200 MB into memory, but, worst case, you can get a slice of those (e.g. via mmap and the fromstring method of array.array), do binary searches there (e.g. via bisect), etc, etc.
When you DO need a huge key-values store -- gigabytes, not a paltry 200 MB!-) -- I used to recommend shelve but after unpleasant real-life experience with huge shelves (performance, reliability, etc), I currently recommend a database engine instead -- sqlite is good and comes with Python, PostgreSQL is even better, non-relational ones such as CouchDB can be better still, and so forth.
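A small sketch of the binary-file approach (toy data; note that the item size of type code "L" is platform-dependent, so pick the type code that matches your range):
import array
from bisect import bisect_left

primes = array.array("L", [2, 3, 5, 7, 11, 13])
with open("primes.bin", "wb") as f:
    primes.tofile(f)                  # raw machine-format dump, no text parsing

loaded = array.array("L")
with open("primes.bin", "rb") as f:
    loaded.fromfile(f, len(primes))   # read back the known number of items

def is_prime(n):
    i = bisect_left(loaded, n)        # binary search, O(log n)
    return i < len(loaded) and loaded[i] == n

print(is_prime(11), is_prime(12))     # True False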
A:
You can use the shelve module to store a dictionary like structure in a file. From the Python documentation:
import shelve
d = shelve.open(filename) # open -- file may get suffix added by low-level
# library
d[key] = data # store data at key (overwrites old data if
# using an existing key)
data = d[key] # retrieve a COPY of data at key (raise KeyError
# if no such key)
del d[key] # delete data stored at key (raises KeyError
# if no such key)
flag = key in d # true if the key exists
klist = list(d.keys()) # a list of all existing keys (slow!)
# as d was opened WITHOUT writeback=True, beware:
d['xx'] = [0, 1, 2] # this works as expected, but...
d['xx'].append(3) # *this doesn't!* -- d['xx'] is STILL [0, 1, 2]!
# having opened d without writeback=True, you need to code carefully:
temp = d['xx'] # extracts the copy
temp.append(5) # mutates the copy
d['xx'] = temp # stores the copy right back, to persist it
# or, d=shelve.open(filename,writeback=True) would let you just code
# d['xx'].append(5) and have it work as expected, BUT it would also
# consume more memory and make the d.close() operation slower.
d.close() # close it
A:
You could also just go with the ultimate brute force, and create a Python file with just a single statement in it:
seedprimes = [3,5,7,11,13,17,19,23,29,31,37,41,43,47,53,59,61,67,71,73,
79,83,89,97,101,103,107,109,113,127,131,137,139,149,151,157,163,167,173, ...
and then just import it. (Here is file with the primes up to 1e5: http://python.pastebin.com/f177ec30).
from primes_up_to_1e9 import seedprimes
A:
For Project Euler, I stored a precomputed list of primes up to 10**8 in a text file just by writing them in comma separated format. It worked well for that size, but it doesn't scale well to going much larger.
If your "huge" is not really that huge, I would use something simple like mine; otherwise I would go with shelve as the others have said.
A:
Just naively sticking a hash table onto disk will result in about 5 orders of magnitude performance loss compared to an in-memory implementation (or at least 3 if you have an SSD). When dealing with hard disks you'll want to extract every bit of data-locality and caching you can get.
The correct choice will depend on details of your use case. How much performance do you need? What kind of operations do you need on data-structure? Do you need to only check if the table contains a key or do you need to fetch a value based on the key? Can you precompute the table or do you need to be able to modify it on the fly? What kind of hit rate are you expecting? Can you filter out a significant amount of the operations using a bloom filter? Are the requests uniformly distributed or do you expect some kind of temporal locality? Can you predict the locality clusters ahead of time?
If you don't need ultimate performance or can parallelize and throw hardware at the problem check out some distributed key-value stores.
A:
You can also go one step down the ladder and use pickle. Shelve imports from pickle (link), so if you don't need the added functionality of shelve, this may spare you some clock cycles (although they really don't matter to you, as you have chosen Python to do large-number storing).
A:
Let's see where the bottleneck is. When you're going to read a file, the hard drive has to turn enough to be able to read from it; then it reads a big block and caches the results.
So you want some method that will know exactly what position in the file you're going to read from, and then do it exactly once. I'm pretty sure the standard DB modules will work for you, but you can do it yourself -- just open the file in binary mode for reading/writing and store your values as, say, 30-digit (= 100-bit = 13-byte) numbers.
Then use the standard file methods.
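A sketch of that idea with fixed-width binary records (8-byte integers via struct here, instead of the 13-byte packing suggested above, just to keep the code trivial):
import struct

RECORD = struct.calcsize("q")         # one 8-byte signed integer per record

def write_values(path, values):
    with open(path, "wb") as f:
        for v in values:
            f.write(struct.pack("q", v))

def read_nth(path, n):
    # Seek straight to the n-th record instead of scanning the whole file.
    with open(path, "rb") as f:
        f.seek(n * RECORD)
        return struct.unpack("q", f.read(RECORD))[0]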
|
Storing huge hash table in a file in Python
|
Hey. I have a function I want to memoize; however, it has too many possible values. Is there any convenient way to store the values in a text file and make it read from them? For example, something like storing a pre-computed list of primes up to 10^9 in a text file? I know it's slow to read from a text file, but there's no other option if the amount of data is really huge. Thanks!
|
[
"For a list of primes up to 10**9, why do you need a hash? What would the KEYS be?! Sounds like a perfect opportunity for a simple, straightforward binary file! By the Prime Number Theorem, there's about 10**9/ln(10**9) such primes -- i.e. 50 millions or a bit less. At 4 bytes per prime, that's only 200 MB or less -- perfect for an array.array(\"L\") and its methods such as fromfile, etc (see the docs). In many cases you could actually suck all of the 200 MB into memory, but, worst case, you can get a slice of those (e.g. via mmap and the fromstring method of array.array), do binary searches there (e.g. via bisect), etc, etc.\nWhen you DO need a huge key-values store -- gigabytes, not a paltry 200 MB!-) -- I used to recommend shelve but after unpleasant real-life experience with huge shelves (performance, reliability, etc), I currently recommend a database engine instead -- sqlite is good and comes with Python, PostgreSQL is even better, non-relational ones such as CouchDB can be better still, and so forth.\n",
"You can use the shelve module to store a dictionary like structure in a file. From the Python documentation:\nimport shelve\n\nd = shelve.open(filename) # open -- file may get suffix added by low-level\n # library\n\nd[key] = data # store data at key (overwrites old data if\n # using an existing key)\ndata = d[key] # retrieve a COPY of data at key (raise KeyError\n # if no such key)\ndel d[key] # delete data stored at key (raises KeyError\n # if no such key)\n\nflag = key in d # true if the key exists\nklist = list(d.keys()) # a list of all existing keys (slow!)\n\n# as d was opened WITHOUT writeback=True, beware:\nd['xx'] = [0, 1, 2] # this works as expected, but...\nd['xx'].append(3) # *this doesn't!* -- d['xx'] is STILL [0, 1, 2]!\n\n# having opened d without writeback=True, you need to code carefully:\ntemp = d['xx'] # extracts the copy\ntemp.append(5) # mutates the copy\nd['xx'] = temp # stores the copy right back, to persist it\n\n# or, d=shelve.open(filename,writeback=True) would let you just code\n# d['xx'].append(5) and have it work as expected, BUT it would also\n# consume more memory and make the d.close() operation slower.\n\nd.close() # close it\n\n",
"You could also just go with the ultimate brute force, and create a Python file with just a single statement in it:\nseedprimes = [3,5,7,11,13,17,19,23,29,31,37,41,43,47,53,59,61,67,71,73,\n79,83,89,97,101,103,107,109,113,127,131,137,139,149,151,157,163,167,173, ...\n\nand then just import it. (Here is file with the primes up to 1e5: http://python.pastebin.com/f177ec30).\nfrom primes_up_to_1e9 import seedprimes\n\n",
"For Project Euler, I stored a precomputed list of primes up to 10**8 in a text file just by writing them in comma separated format. It worked well for that size, but it doesn't scale well to going much larger.\nIf your huge is not really that huge, I would use something simple like me, otherwise I would go with shelve as the others have said.\n",
"Just naively sticking a hash table onto disk will result in about 5 orders of magnitude performance loss compared to an in memory implementation (or at least 3 if you have a SSD). When dealing with hard disks you'll want to extract every bit of data-locality and caching you can get.\nThe correct choice will depend on details of your use case. How much performance do you need? What kind of operations do you need on data-structure? Do you need to only check if the table contains a key or do you need to fetch a value based on the key? Can you precompute the table or do you need to be able to modify it on the fly? What kind of hit rate are you expecting? Can you filter out a significant amount of the operations using a bloom filter? Are the requests uniformly distributed or do you expect some kind of temporal locality? Can you predict the locality clusters ahead of time?\nIf you don't need ultimate performance or can parallelize and throw hardware at the problem check out some distributed key-value stores.\n",
"You can also go one step down the ladder and use pickle. Shelve imports from pickle (link), so if you don't need the added functionality of shelve, this may spare you some clock cycles (although, they really don't matter to you, as you have choosen python to do large number storing)\n",
"Let's see where the bottleneck is. When you're going to read a file, the hard drive has to turn enough to be able to read from it; then it reads a big block and caches the results.\nSo you want some method that will guess exactly what position in file you're going to read from and then do it exactly once. I'm pretty much sure standard DB modules will work for you, but you can do it yourself -- just open the file in binary mode for reading/writing and store your values as, say, 30-digits (=100-bit = 13-byte) numbers.\nThen use standard file methods .\n"
] |
[
11,
6,
3,
1,
1,
0,
0
] |
[] |
[] |
[
"file",
"hashtable",
"python"
] |
stackoverflow_0001354520_file_hashtable_python.txt
|
Q:
Split Twitter RSS string using Python
I am trying to parse Twitter RSS feeds and put the information in a sqlite database, using Python. Here's an example:
MiamiPete: today's "Last Call" is now up http://bit.ly/MGDzu #stocks #stockmarket #finance #money
What I want to do is create one column for the main content (Miami Pete…now up), one column for the URL (http://bit.ly/MGDzu), and four separate columns for the hashtags (stocks, stockmarket, finance, money). I've been playing around with how to do this.
Any advice would be greatly appreciated!
P.S. Some code I've been playing around with is below--you can see I tried initially creating a variable called "tiny_url" and splitting it, which it does seem to do, but this feeble attempt is not anywhere close to solving the problem noted above. :)
def store_feed_items(id, items):
""" Takes a feed_id and a list of items and stored them in the DB """
for entry in items:
c.execute('SELECT entry_id from RSSEntries WHERE url=?', (entry.link,))
tinyurl = entry.summary ### I added this in
print tinyurl.split('http') ### I added this in
if len(c.fetchall()) == 0:
c.execute('INSERT INTO RSSEntries (id, url, title, content, tinyurl, date, tiny) VALUES (?,?,?,?,?,?,?)', (id, entry.link, entry.title, entry.summary, tinyurl, strftime("%Y-%m-%d %H:%M:%S",entry.updated_parsed), tiny ))
A:
It seems like your data-driven design is rather flawed. Unless all your entries have a text part, a URL and up to 4 tags, it's not going to work.
You also need to separate saving to the db from parsing. Parsing could easily be done with a regexp (or even string methods):
>>> s = your_string
>>> s.split()
['MiamiPete:', "today's", '"Last', 'Call"', 'is', 'now', 'up', 'http://bit.ly/MGDzu', '#stocks', '#stockmarket', '#finance', '#money']
>>> url = [i for i in s.split() if i.startswith('http://')]
>>> url
['http://bit.ly/MGDzu']
>>> tags = [i for i in s.split() if i.startswith('#')]
>>> tags
['#stocks', '#stockmarket', '#finance', '#money']
>>> ' '.join(i for i in s.split() if i not in url+tags)
'MiamiPete: today\'s "Last Call" is now up'
Single-table db design would probably have to go, though.
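Putting the pieces above into one hypothetical helper that store_feed_items could call:
def parse_tweet(summary):
    # Assumes whitespace-separated tokens and at most one URL per tweet.
    words = summary.split()
    url = next((w for w in words if w.startswith("http://")), None)
    tags = [w[1:] for w in words if w.startswith("#")]
    text = " ".join(w for w in words if w != url and not w.startswith("#"))
    return text, url, tags

text, url, tags = parse_tweet(entry.summary)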
A:
Also, you can parse your strings using regexps:
>>> s = (u'MiamiPete: today\'s "Last Call" is now up http://bit.ly/MGDzu '
'#stocks #stockmarket #finance #money')
>>> re.match(r'(.*) (http://[^ ]+)', s).groups()
(u'MiamiPete: today\'s "Last Call" is now up', u'http://bit.ly/MGDzu')
>>> re.findall(r'(#\w+)', s)
[u'#stocks', u'#stockmarket', u'#finance', u'#money']
A:
Twitter has an API that may be easier for you to use: http://apiwiki.twitter.com/Twitter-API-Documentation.
You can get the results as JSON or XML and use one of the many Python libraries to parse the results.
Or if you must use the RSS, there are Python feed parsers like http://www.feedparser.org/.
A:
I would highly recommend using the Twitter API. There are actually two APIs, one for the main twitter server and one for the search server. They are used for different things.
You can find sample code, pytwitter on svn. Add simplejson and you can be doing very powerful things in a matter of minutes.
Good luck
|
Split Twitter RSS string using Python
|
I am trying to parse Twitter RSS feeds and put the information in a sqlite database, using Python. Here's an example:
MiamiPete: today's "Last Call" is now up http://bit.ly/MGDzu #stocks #stockmarket #finance #money
What I want to do is create one column for the main content (Miami Pete…now up), one column for the URL (http://bit.ly/MGDzu), and four separate columns for the hashtags (stocks, stockmarket, finance, money). I've been playing around with how to do this.
Any advice would be greatly appreciated!
P.S. Some code I've been playing around with is below--you can see I tried initially creating a variable called "tiny_url" and splitting it, which it does seem to do, but this feeble attempt is not anywhere close to solving the problem noted above. :)
def store_feed_items(id, items):
""" Takes a feed_id and a list of items and stored them in the DB """
for entry in items:
c.execute('SELECT entry_id from RSSEntries WHERE url=?', (entry.link,))
tinyurl = entry.summary ### I added this in
print tinyurl.split('http') ### I added this in
if len(c.fetchall()) == 0:
c.execute('INSERT INTO RSSEntries (id, url, title, content, tinyurl, date, tiny) VALUES (?,?,?,?,?,?,?)', (id, entry.link, entry.title, entry.summary, tinyurl, strftime("%Y-%m-%d %H:%M:%S",entry.updated_parsed), tiny ))
|
[
"It seems like your data-driven design is rather flawed. Unless all your entries have a text part, an url and up to 4 tags, it's not going to work.\nYou also need to separate saving to db from parsing. Parsing could be easily done with a regexep (or even string methods):\n>>> s = your_string\n>>> s.split()\n['MiamiPete:', \"today's\", '\"Last', 'Call\"', 'is', 'now', 'up', 'http://bit.ly/MGDzu', '#stocks', '#stockmarket', '#finance', '#money']\n>>> url = [i for i in s.split() if i.startswith('http://')]\n>>> url\n['http://bit.ly/MGDzu']\n>>> tags = [i for i in s.split() if i.startswith('#')]\n>>> tags\n['#stocks', '#stockmarket', '#finance', '#money']\n>>> ' '.join(i for i in s.split() if i not in url+tags)\n'MiamiPete: today\\'s \"Last Call\" is now up'\n\nSingle-table db design would probably have to go, though.\n",
"Also, you can parse your strings using regexps:\n>>> s = (u'MiamiPete: today\\'s \"Last Call\" is now up http://bit.ly/MGDzu '\n '#stocks #stockmarket #finance #money')\n>>> re.match(r'(.*) (http://[^ ]+)', s).groups()\n(u'MiamiPete: today\\'s \"Last Call\" is now up', u'http://bit.ly/MGDzu')\n>>> re.findall(r'(#\\w+)', s)\n[u'#stocks', u'#stockmarket', u'#finance', u'#money']\n\n",
"Twitter has an api that may be easier for you to use here, http://apiwiki.twitter.com/Twitter-API-Documentation.\nYou can get the results as JSON or XML and use one of the many Python libraries to parse the results.\nOr if you must your the RSS there are Python feed parsers like, http://www.feedparser.org/.\n",
"I would highly recommend using the Twitter API. There are actually two APIs, one for the main twitter server and one for the search server. They are used for different things.\nYou can find sample code, pytwitter on svn. Add simplejson and you can be doing very powerful things in a matter of minutes.\nGood luck\n"
] |
[
4,
2,
1,
1
] |
[] |
[] |
[
"bit.ly",
"python",
"split",
"sqlite",
"string"
] |
stackoverflow_0001354415_bit.ly_python_split_sqlite_string.txt
|
Q:
Python code to download a webpage using JavaScript
I'm trying to download share data from a stock exchange using Python. The problem is that there is no direct download link, but rather a JavaScript to export the data.
The data page url:
http://tase.co.il/TASE/Templates/Company/CompanyHistory.aspx?NRMODE=Published&NRORIGINALURL=%2fTASEEng%2fGeneral%2fCompany%2fcompanyHistoryData.htm%3fcompanyID%3d001216%26ShareID%3d01091248%26subDataType%3d0%26&NRNODEGUID={045D6005-5C86-4A8E-ADD4-C151A77EC14B}&NRCACHEHINT=Guest&shareID=01820083&companyID=000182&subDataType=0
When I open the data page in a browser and then the download page, it works like a charm.
When I just open the download page, it doesn't download anything. I guess this is because the data page injects the actual data into the variables Columns, Titles etc.
I've tried to mimic this behaviour in a python script, but without success.
def download_CSV (shareID, compID):
data_url ="http://tase.co.il/TASE/Templates/Company/CompanyHistory.aspx?NRMODE=Published&NRORIGINALURL=%2fTASEEng%2fGeneral%2fCompany%2fcompanyHistoryData.htm%3fsubDataType%3d0%26shareID%3d00759019&NRNODEGUID={045D6005-5C86-4A8E-ADD4-C151A77EC14B}&NRCACHEHINT=Guest&shareID="+shareID+"&companyID="+compID+"&subDataType=0"
import urllib2
response = urllib2.urlopen(data_url)
html = response.read()
down_url ="http://tase.co.il/TASE/Pages/Export.aspx?tbl=0&Columns=AddColColumnsHistory&Titles=AddColTitlesHistory&sn=dsHistory&enumTblType=GridHistorydaily&ExportType=3"
import urllib
urllib.urlretrieve (down_url, "test.csv")
Thanks a lot
A:
You could use Selenium or other ways to automate a browser to take advantage of the browser's built-in Javascript interpreter -- to control Selenium with Python, see e.g. here.
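A rough sketch with Selenium's WebDriver API (this assumes the selenium package and a local Firefox install; triggering the export itself would still mean driving the page's own controls):
from selenium import webdriver

driver = webdriver.Firefox()       # a real browser, with a real JS engine
driver.get(data_url)               # data_url as built in the question
html = driver.page_source          # the page after its JavaScript has run
driver.quit()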
|
Python code to download a webpage using JavaScript
|
I'm trying to download share data from a stock exchange using Python. The problem is that there is no direct download link, but rather a JavaScript to export the data.
The data page url:
http://tase.co.il/TASE/Templates/Company/CompanyHistory.aspx?NRMODE=Published&NRORIGINALURL=%2fTASEEng%2fGeneral%2fCompany%2fcompanyHistoryData.htm%3fcompanyID%3d001216%26ShareID%3d01091248%26subDataType%3d0%26&NRNODEGUID={045D6005-5C86-4A8E-ADD4-C151A77EC14B}&NRCACHEHINT=Guest&shareID=01820083&companyID=000182&subDataType=0
When I open the data page in a browser and then the download page, it works like a charm.
When I just open the download page, it doesn't download anything. I guess this is because the data page injects the actual data into the variables Columns, Titles etc.
I've tried to mimic this behaviour in a python script, but without success.
def download_CSV (shareID, compID):
data_url ="http://tase.co.il/TASE/Templates/Company/CompanyHistory.aspx?NRMODE=Published&NRORIGINALURL=%2fTASEEng%2fGeneral%2fCompany%2fcompanyHistoryData.htm%3fsubDataType%3d0%26shareID%3d00759019&NRNODEGUID={045D6005-5C86-4A8E-ADD4-C151A77EC14B}&NRCACHEHINT=Guest&shareID="+shareID+"&companyID="+compID+"&subDataType=0"
import urllib2
response = urllib2.urlopen(data_url)
html = response.read()
down_url ="http://tase.co.il/TASE/Pages/Export.aspx?tbl=0&Columns=AddColColumnsHistory&Titles=AddColTitlesHistory&sn=dsHistory&enumTblType=GridHistorydaily&ExportType=3"
import urllib
urllib.urlretrieve (down_url, "test.csv")
Thanks a lot
|
[
"You could use Selenium or other ways to automate a browser to take advantage of the browser's built-in Javascript interpreter -- to control Selenium with Python, see e.g. here.\n"
] |
[
1
] |
[] |
[] |
[
"javascript",
"python"
] |
stackoverflow_0001355244_javascript_python.txt
|
Q:
Latin letters with acute : DjangoUnicodeDecodeError
I have a problem reading a txt file to insert into the MySQL db table; the snippet of the code:
the file contains in its first line: "aclaración"
archivo = open('file.txt',"r")
for line in archivo.readlines():
    body = body + line
model = MyModel(body=body)
model.save()
I get a DjangoUnicodeDecodeError:
'utf8' codec can't decode bytes in position 8: invalid data. You passed in 'aclaraci\xf3n' (type 'str')
Unicode error hint
The string that could not be encoded/decoded was: araci�n.
I tried body.decode('utf-8'), body.decode('latin-1') and body.decode('iso-8859-1'), without success.
Can you help me please? Any hint is appreciated :)
A:
Judging from the \xf3 code for 'ó', it does look like the data is encoded in ISO-8859-1 (or some close relative). So body.decode('iso-8859-1') should be a valid Unicode string (you don't specify what "without solution" means -- what error message do you get, and where?); if what you need is a utf-8 encoded bytestring instead, body.decode('iso-8859-1').encode('utf-8') should give you one!
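A minimal sketch of that fix applied to the snippet from the question (Python 2, where open() returns byte strings; MyModel is the model from the question):
raw = open('file.txt', 'r').read()   # byte string, e.g. 'aclaraci\xf3n'
body = raw.decode('iso-8859-1')      # unicode string: u'aclaraci\xf3n'
model = MyModel(body=body)           # Django handles unicode from here
model.save()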
|
Latin letters with acute : DjangoUnicodeDecodeError
|
I have a problem reading a txt file to insert into the MySQL db table; the snippet of the code:
the file contains in its first line: "aclaración"
archivo = open('file.txt',"r")
for line in archivo.readlines():
    body = body + line
model = MyModel(body=body)
model.save()
I get a DjangoUnicodeDecodeError:
'utf8' codec can't decode bytes in position 8: invalid data. You passed in 'aclaraci\xf3n' (type 'str')
Unicode error hint
The string that could not be encoded/decoded was: araci�n.
I tried body.decode('utf-8'), body.decode('latin-1') and body.decode('iso-8859-1'), without success.
Can you help me please? Any hint is appreciated :)
|
[
"Judging from the \\xf3 code for 'ó', it does look like the data is encoded in ISO-8859-1 (or some close relative). So body.decode('iso-8859-1') should be a valid Unicode string (you don't specify what \"without solution\" means -- what error message do you get, and where?); if what you need is a utf-8 encoded bytestring instead, body.decode('iso-8859-1').encode('utf-8') should give you one!\n"
] |
[
5
] |
[] |
[] |
[
"character_encoding",
"django",
"python",
"utf_8"
] |
stackoverflow_0001355285_character_encoding_django_python_utf_8.txt
|
Q:
Python classes -- mutability
I'm having a problem with Python. I have a binary tree node type:
class NODE:
element = 0
leftchild = None
rightchild = None
And I had to implement a function deletemin:
def DELETEMIN( A ):
if A.leftchild == None:
retval = A.element
A = A.rightchild
return retval
else:
return DELETEMIN( A.leftchild )
Yet, when I try to test this on the binary tree:
1
/ \
0 2
It should delete 0 by just setting it to None, but instead I get this:
0
/ \
0 2
Why can I not nullify a node within a function in Python? Is there a way to do this?
A:
Python passes arguments by object-reference, just like java, not by variable-reference. When you assign to a local variable (including an argument) to a new value, you're changing only the local variable, nothing else (don't confuse that with calling mutators or assigning to ATTRIBUTES of objects: we're talking about assignments to barenames).
The preferred solution in Python is generally to return multiple values, as many as you need, and assign them appropriately in the caller. So deletemin would return two values, the current returnval and the modified node, and the caller would assign the latter as needed. I.e.:
def DELETEMIN( A ):
if A.leftchild is None:
return A.element, A.rightchild
else:
return DELETEMIN( A.leftchild )
and in the caller, where you previously had foo = DELETEMIN( bar ), you'd use instead
foo, bar = DELETEMIN( bar )
Peculiar capitalization and spacing within parentheses, BTW, but that's another issue;-).
There is no way to get "a pointer or reference to a caller's barename" (in either Python or Java) in the way you could, e.g., in C or C++. There are other alternative approaches, but they require different arrangements than you appear to prefer, so I recommend the multiple return values approach as here indicated.
A:
class Node:
element = 0
left_child = None
right_child = None
def delete_min( A ):
if A.left_child is None:
return A.right_child
else:
A.left_child = delete_min(A.left_child)
return A
tree = delete_min(tree)
|
Python classes -- mutability
|
I'm having a problem with Python. I have a binary tree node type:
class NODE:
element = 0
leftchild = None
rightchild = None
And I had to implement a function deletemin:
def DELETEMIN( A ):
if A.leftchild == None:
retval = A.element
A = A.rightchild
return retval
else:
return DELETEMIN( A.leftchild )
Yet, when I try to test this on the binary tree:
1
/ \
0 2
It should delete 0 by just setting it to None, but instead I get this:
0
/ \
0 2
Why can I not nullify a node within a function in Python? Is there a way to do this?
|
[
"Python passes arguments by object-reference, just like java, not by variable-reference. When you assign to a local variable (including an argument) to a new value, you're changing only the local variable, nothing else (don't confuse that with calling mutators or assigning to ATTRIBUTES of objects: we're talking about assignments to barenames).\nThe preferred solution in Python is generally to return multiple values, as many as you need, and assign them appropriately in the caller. So deletemin would return two values, the current returnval and the modified node, and the caller would assign the latter as needed. I.e.:\ndef DELETEMIN( A ):\n if A.leftchild is None:\n return A.element, A.rightchild\n else:\n return DELETEMIN( A.leftchild )\n\nand in the caller, where you previously had foo = DELETEMIN( bar ), you'd use instead\nfoo, bar = DELETEMIN( bar )\n\nPeculiar capitalization and spacing within parentheses, BTW, but that's another issue;-).\nThere is no way to get \"a pointer or reference to a caller's barename\" (in either Python or Java) in the way you could, e.g., in C or C++. There are other alternative approaches, but they require different arrangements than you appear to prefer, so I recommend the multiple return values approach as here indicated.\n",
"class Node:\n element = 0;\n left_child = None\n right_child = None\n\ndef delete_min( A ):\n if A.left_child is None:\n return A.right_child\n else:\n A.left_child = delete_min(A.left_child)\n return A\n\ntree = delete_min(tree)\n\n"
] |
[
2,
0
] |
[] |
[] |
[
"class",
"python"
] |
stackoverflow_0001355555_class_python.txt
|
Q:
How can python call a class that is never defined in the code?
I don't know if it is feasible to paste all of the code here, but I am looking at the code in this git repo.
If you look at the example they do:
ec2 = EC2('access key id', 'secret key')
...but there is no EC2 class. However, it looks like in libcloud\providers.py there is a dict that maps the EC2 to the EC2NodeDriver found in libcloud\drivers\ec2.py. The correct mapping is calculated by get_driver(provider), but that method doesn't appear to be called anywhere.
I am new to python, obviously, but not to programming. I'm not even sure what I should be looking up in the docs to figure this out.
A:
example.py includes an import statement that reads:
from libcloud.drivers import EC2, Slicehost, Rackspace
This means that the EC2 class is imported from the libcloud.drivers module. However, in this case, libcloud.drivers is actually a package (a Python package contains modules), which means that EC2 should be defined in a file __init__.py in libcloud/drivers/, but it's not. Which means that in this specific case, their example code is actually wrong. (I downloaded the code and got an import error when running example.py, and as you can see, the file libcloud/drivers/__init__.py does not contain any definitions at all, least of all an EC2 definition.)
A:
Checking out the libcloud\examples.py might be helpful. I saw this:
from libcloud.drivers import EC2, Slicehost, Rackspace
The python 'import' statement brings in the class from other python module, in this case from the libcloud.drivers module.
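Two ways the name could be made importable (the first edits the package itself -- a hypothetical fix, not the project's actual code; the second goes through the mapping the question mentions, and the exact argument get_driver takes is a guess):
# libcloud/drivers/__init__.py -- hypothetical re-export
from libcloud.drivers.ec2 import EC2NodeDriver as EC2

# or, in client code, resolve the driver via providers.py:
from libcloud.providers import get_driver
EC2 = get_driver('EC2')   # argument form is an assumption; check providers.py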
|
How can python call a class that is never defined in the code?
|
I don't know if it is feasible to paste all of the code here, but I am looking at the code in this git repo.
If you look at the example they do:
ec2 = EC2('access key id', 'secret key')
...but there is no EC2 class. However, it looks like in libcloud\providers.py there is a dict that maps the EC2 to the EC2NodeDriver found in libcloud\drivers\ec2.py. The correct mapping is calculated by get_driver(provider), but that method doesn't appear to be called anywhere.
I am new to python, obviously, but not to programming. I'm not even sure what I should be looking up in the docs to figure this out.
|
[
"example.py includes an import statement that reads:\nfrom libcloud.drivers import EC2, Slicehost, Rackspace\n\nThis means that the EC2 class is imported from the libcloud.drivers module. However, in this case, libcloud.drivers is actually a package (a Python package contains modules), which means that EC2 should be defined in a file __init__.py in libcloud/drivers/, but it's not. Which means that in this specific case, their example code is actually wrong. (I downloaded the code and got an import error when running example.py, and as you can see, the file libcloud/drivers/__init__.py does not contain any definitions at all, least of all an EC2 definition.)\n",
"Checking out the libcloud\\examples.py might be helpful. I saw this:\nfrom libcloud.drivers import EC2, Slicehost, Rackspace\n\nThe python 'import' statement brings in the class from other python module, in this case from the libcloud.drivers module.\n"
] |
[
5,
0
] |
[] |
[] |
[
"python",
"python_import"
] |
stackoverflow_0001355710_python_python_import.txt
|
Q:
Is there a stable integration procedure/pluggin for Django 1.1 and Google app engine?
What is the best procedure/plugin to use to integrate Django and Google App Engine? I have read many articles on the internet and seen videos on how to go about this. I am still left wondering which is the best procedure to use.
Is there an official procedure documented in Django or Google App Engine? Examples and site references will really help.
I am using Python 2.6, Django 1.1
Gath
A:
There is NO way you can run Python 2.6 on App Engine: it's 2.5 only.
If you're rarin' to have Django 1.1 (with Python 2.5), I suggest app-engine patch which now supports it (it's a release candidate, not a final release, but close). I find their docs good and thorough, and their code well written and solid.
|
Is there a stable integration procedure/plugin for Django 1.1 and Google app engine?
|
What is the best procedure/plugin to use to integrate Django and Google App Engine? I have read many articles on the internet and seen videos on how to go about this. I am still left wondering which is the best procedure to use.
Is there an official procedure documented in Django or Google App Engine? Examples and site references will really help.
I am using Python 2.6, Django 1.1
Gath
|
[
"There is NO way you can run Python 2.6 on App Engine: it's 2.5 only.\nIf you're rarin' to have Django 1.1 (with Python 2.5), I suggest app-engine patch which now supports it (it's a release candidate, not a final release, but close). I find their docs good and thorough, and their code well written and solid.\n"
] |
[
3
] |
[] |
[] |
[
"django",
"google_app_engine",
"python"
] |
stackoverflow_0001355813_django_google_app_engine_python.txt
|
Q:
How to call super() in Python 3.0?
I have the strangest error I have seen for a while in Python (version 3.0).
Changing the signature of the function affects whether super() works, despite the fact that it takes no arguments. Can you explain why this occurs?
Thanks,
Chris
>>> class tmp:
... def __new__(*args):
... super()
...
>>> tmp()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 3, in __new__
SystemError: super(): no arguments
>>> class tmp:
... def __new__(mcl,*args):
... super()
...
>>> tmp()
>>>
A:
As the docs say, "The zero argument form automatically searches the stack frame for the class (__class__) and the first argument." Your first example of __new__ doesn't HAVE a first argument - it claims it can be called with zero or more arguments, so argumentless super is stumped. Your second example DOES have an explicit first argument, so the search in the stack frame succeeds.
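For instance, a minimal fixed version (a sketch; giving __new__ an explicit first argument is the only change that matters here):
class tmp:
    def __new__(cls, *args):
        # zero-argument super() can now find the first argument
        return super().__new__(cls)

tmp()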
A:
Python 3.0's new super is trying to make this choice for you dynamically here; read the PEP here, which should explain everything.
|
How to call super() in Python 3.0?
|
I have the strangest error I have seen for a while in Python (version 3.0).
Changing the signature of the function affects whether super() works, despite the fact that it takes no arguments. Can you explain why this occurs?
Thanks,
Chris
>>> class tmp:
... def __new__(*args):
... super()
...
>>> tmp()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 3, in __new__
SystemError: super(): no arguments
>>> class tmp:
... def __new__(mcl,*args):
... super()
...
>>> tmp()
>>>
|
[
"As the docs say, \"The zero argument form automatically searches the stack frame for the class (__class__) and the first argument.\" Your first example of __new__ doesn't HAVE a first argument - it claims it can be called with zero or more arguments, so argumentless super is stumped. Your second example DOES have an explicit first argument, so the search in the stack frame succeeds.\n",
"python 3.0 new super is trying to dynamically make a choice for you here, read this PEP here that should explain everything.\n"
] |
[
6,
1
] |
[] |
[] |
[
"python",
"python_3.x"
] |
stackoverflow_0001355931_python_python_3.x.txt
|
Q:
Why is the C++ syntax so complicated?
I'm a novice at programming although I've been teaching myself Python for about a year and I studied C# some time ago.
This month I started C++ programming courses at my university and I just have to ask; "why is the C++ code so complicated?"
Writing "Hello world." in Python is as simple as "print 'Hello world.'" but in C++ it's:
# include <iostream>
using namespace std;
int main ()
{
cout << "Hello world.";
return 0;
}
I know there is probably a good reason for all of this but, why...
... do you have to include the <iostream> everytime? Do you ever not need it?
... same question for the standard library, when do you not need std::*?
... is the "main" part a function? Do you ever call the main function? Why is it an integer? Why does C++ need to have a main function but Python doesn't?
... do you need "std::cout << "? Isn't that needlessly long and complicated compared to Python?
... do you need to return 0 even when you are never going to use it?
This is probably because I'm learning such basic C++ but every program I've made so far looks like this, so I have to retype the same code over and over again. Isn't that redundant? Couldn't the compiler just input this code itself, since it's always the same (i.e. afaik you always include <iostream>, std, int main, return 0)
A:
C++ is a more low-level language that executes without the context of an interpreter. As such, it has many different design choices than does Python, because C++ has no environment which it can rely on to manage information like types and memory. C++ can be used to write an operating system kernel where there is no code running on the machine except for the program itself, which means that the language (some library facilities are not available for so-called freestanding implementations) must be self-contained. This is why C++ has no equivalent to Python's eval, nor a means of determining members, etc. of a class, nor other features that require an execution environment (or a massive overhead in the program itself instead of such an environment)
For your individual questions:
do you have to include the <iostream> everytime? Do you ever not need it?
#include <iostream> is the directive that imports the <iostream> header into your program. <iostream> contains the standard input/output objects - in particular, cout. If you aren't using standard I/O objects (for instance, you use only file I/O, or your program uses a GUI library, or are writing an operating system kernel), you do not need <iostream>
same question for the standard library, when do you not need std::*?
std is the namespace containing all of the standard library. using namespace std; is sort of like from std import *, whereas a #include directive is (in this regard) more like a barebones import std statement. (in actual fact, the mechanism is rather different, because C++ does not use using namespace std; to automatically lookup objects in std; the using-directive only imports the names into the global namespace.)
I'll note here that using-directives (using namespace) are frequently frowned upon in C++ code, as they import a lot of names and can cause name clashes. using-declarations (using std::cout;) are preferred when possible, as is limiting the scope of a using-directive (for instance, to one function or to one source file). Don't ever put using namespace in a header without good reason.
is the "main" part a function? Do you ever call the main function? Why is it an integer? Why does C++ need to have a main function but Python doesn't?
main is the entry point to the program - where execution starts. In Python, the __main__ module serves the same purpose. C++ does not execute code outside a defined function like Python does, so its entry point is a function rather than a module.
do you need "std::cout << "? Isn't that needlessly long and complicated compared to Python?
std::cout is only needed if you don't import the cout name into the global namespace, either by a using-directive (using namespace std;) or by a using-declaration (using std::cout). In this regard, it is once again much like the distinction between Python's import std and from std import * or from std import cout.
The << is an overloaded operator for standard stream objects. cout << value calls cout's function to output value. Python needs no such extra code because print is built into the language; this does not make sense for C++, where there may not even be an operating system, much less an I/O library.
do you need to return 0 even when you are never going to use it?
No. main (and no other function) has an implicit return 0; at the end. The return value of main (or, if the exit function is called, the value passed to it) is passed back to the operating system as the exit code. 0 indicates the program successfully executed - that it encountered no errors, etc. If an error is encountered, a non-zero value should be returned (or passed to exit).
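For comparison, here is a minimal Python sketch (the program name './hello' is a placeholder) of how a caller observes that exit code, using the standard subprocess module:
import subprocess

status = subprocess.call(['./hello'])   # runs the compiled C++ program
print "exit status:", status            # 0 means it reported success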
A:
In response to your questions at the end of the post, it can be summed up with the philosophy of C++:
You don't pay for what you don't use.
You don't always need to use stdin or stdout (Windows/GUI apps?), nor will you always be using the STL, nor will everything you write necessarily use the standard main (winAPI) etc. As a previous poster said, C++ is lower level than Python. You will be exposed to more of the details, which offers you more control over what you're doing.
A:
... do you have to include the
 <iostream> everytime? Do you ever not
 need it?
You don't need it if you're not going to use iostreams in that module. In larger programs, few modules do any actual IO directly, and so few actually need to use iostreams.
Turning the question around: in python you need to import sys and/or os in most non-trivial programs. Why?
... same question for the standard
library, when do you not need std::*?
You can have the using line or you can use the std:: prefix. This is very similar to the choice python gives you of either saying "from sys import *" or "import sys" and then having to prefix things with "sys.". In python you have to say "sys.stdout". Is "std::cout" really any worse?
... is the "main" part a function? Do
you ever call the main function? Why
is it an integer? Why does C++ need to
have a main function but Python
doesn't?
Yes, main is a function. Typically you wouldn't call main yourself. The name "main" is reserved for the entry-point of your program. It returns an integer because the value returned is used as the status code of your program. In Python you can use sys.exit if you want to return a non-zero status code.
Python doesn't have the same convention because with Python you can have code in a module not in a function. This code is executed when you load the module. Interestingly, many people feel it is bad style to have code at the top-level of a module and will instead create a main function by doing something like this:
def main(argv):
# program goes here
return 0
if __name__ == '__main__':
sys.exit(main(sys.argv))
Also, in Python you tell the interpreter which module is the "main" module when you run it. eg: "python foo.py". In C, the "main" module is (effectively) the one with a function called main. (If there are multiple modules with a main function, it's a linker error.)
... do you need "std::cout << "? Isn't
that needlessly long and complicated
compared to Python?
The equivalent in Python is actually "sys.stdout.write(...)". Python's print statement is a special-case short-hand.
That said, many people do feel the iostreams convention of using bit-shifting operators for IO was a bad idea. Ironically, Python seems to have been "inspired" by this syntax. If you want to use print to write to somewhere other than stdout you can say:
print >>file, "Hello"
... do you need to return 0 even when
you are never going to use it?
You aren't going to use it, but your program will. As mentioned earlier, the value you return is the status code of your program.
Aside: I actually do feel that C++ is overcomplicated, but not because of any of the points you mention. All of the differences you mention go away (in the sense that you need just as much complexity in Python) once you start writing non-trivial programs that have multiple modules and do more than just writing to stdout.
A:
You include <iostream> when you want to output things to the console. Since printing "Hello world" involves console output, you need iostream.
The main function is called by the operating system, basically. It gets called with the command-line arguments passed to the program. It returns an integer because the program must return an error code to the operating system (this is the standard way for determining if the last command was successful).
You can always use printf("hello world"); instead of std::cout << "hello world"; if you want to go C style. It's a bit quicker to write and lets you do formatted output.
You return 0 from main to indicate that the program executed successfully.
The compiler does not automatically include all the standard libraries and use namespace std because sometimes name collisions can result between your code and library code that you may not actually need at all. You don't always need all the libraries. Likewise, sometimes you are using a different main routine (Windows development comes to mind with its own, different WinMain starting function). The compiler also does not automatically return 0 because sometimes the program needs to indicate that it completed unsuccessfully.
A:
There are good reasons for all these things. C++ is a very broad language: it is used for everything from small embedded systems to giant applications built by hundreds of programmers. The use case of a guy building a small program to run on a desktop is by no means the only one. So sometimes you are building library components. In that case no main(). Sometimes you are working on a tiny system with no standard library. In that case no std. Sometimes you want to build a Unix tool that works with other Unix text tools and signals its completion status with an int returned from main().
In other words the things you complain about are boilerplate to you. But they are vital details that vary to other users of the language.
A:
This reminds me of The Evolution of a Programmer. Some of the languages and technologies demonstrated are a bit dated now, but you should get the general idea. :)
A:
One of the reasons C++ is rather complicated is because it was designed to address problems that crop up in large programs. At the time C++ was created at AT&T, their biggest C program was about 10 million lines of code. At that scale, C doesn't function very well. C++ addresses many of the problems you get with that kind of program.
With that said, it's also possible to answer the original questions:
You would include <iostream> where it's needed. If you've got 10.000 C++ files, it's quite common that less than 1000, sometimes less than 100 will produce user-visible output.
A statement like print "Hello, world" assumes that there is a default output, but makes it hard to generalize. The cout << "Hello, world" form makes it explicit where the output goes, but the same form also allows cerr << "Goodbye, world" and MyTmpFile << "Starting phase #" << i
The standard library is in the std:: namespace. My 10.000 files will be in an additional 25 namespaces.
main is an oddity in many ways, being the startup function.
A:
Baldur:
You don't always need <iostream>. The only things that you will always need are:
A main function (or a WinMain, if you're writing Win32 apps).
Variables, functions, operators, language constructs (if, while, etc.).
The ability to include functionality from libraries into your program.
Everything else is application-specific.
As other posters say, the return value of the main function is an error code [1]. If main returns 0, be happy: everything worked OK!
[1] This is useful when you write programs that "communicate" with other programs. The simplest way that a program can "tell" another whether it executed properly is using an error code.
A:
As people have said, the simple answer is that they're different languages, with different goals. To answer your specific questions...
... do you have to include the <iostream> everytime? Do you ever not need it?
<iostream> is one of the header files for iostreams, the part of the C++ standard library responsible for input/output; in this instance, you need it to gain access to std::cout. If you're not doing I/O operations in a source file, you don't need to include it -- for example, most files containing class definitions probably won't need <iostream>.
... same question for the standard library, when do you not need std::*?
std is the name of namespace containing classes in the standard library; it's there to avoid name collisions. Python has packages and modules to do this.
You can use the using statement to bring items from another namespace into your current scope, see this FAQ entry for an example (and an explanation of why it's bad to blindly bring all of std into scope!).
... why is the "main" part a function? Do you ever call the main function? Why is it an integer? Why does C++ need to have a main function but Python doesn't?
Executable statements in C++ have to be contained within a function, and the main function is defined as where execution begins. In Python, executable statements can be placed at the top-level of a file, and execution is defined to start at the top of the module that is run.
You can call main() if you wish -- it's just a function, after all -- but there's not often a reason to do this. Behind the scenes, most implementations of C++ call main() for you once some startup housekeeping has been done by the runtime library.
The return value of main() is returned back to the operating system. This stems from C and UNIX, in which application programs are required to provide a 1-byte exit status code, and returning that value from main() is a clear way of expressing this.
... why do you need "std::cout << "? Isn't that needlessly long and complicated compared to Python?
This is just a design difference. iostreams is a fairly complex beast with lots of features, and one of the side-effects of this is that the syntax is a bit ugly for simple tasks at times.
... why do you need to return 0 even when you are never going to use it?
You do use it; this is the value returned to the operating system as the exit status of the program.
A:
Python is a high-level language. C++ is a middle-level language.
|
Why is the C++ syntax so complicated?
|
I'm a novice at programming although I've been teaching myself Python for about a year and I studied C# some time ago.
This month I started C++ programming courses at my university and I just have to ask; "why is the C++ code so complicated?"
Writing "Hello world." in Python is as simple as "print 'Hello world.'" but in C++ it's:
# include <iostream>
using namespace std;
int main ()
{
cout << "Hello world.";
return 0;
}
I know there is probably a good reason for all of this but, why...
... do you have to include the <iostream> everytime? Do you ever not need it?
... same question for the standard library, when do you not need std::*?
... is the "main" part a function? Do you ever call the main function? Why is it an integer? Why does C++ need to have a main function but Python doesn't?
... do you need "std::cout << "? Isn't that needlessly long and complicated compared to Python?
... do you need to return 0 even when you are never going to use it?
This is probably because I'm learning such basic C++ but every program I've made so far looks like this, so I have to retype the same code over and over again. Isn't that redundant? Couldn't the compiler just input this code itself, since it's always the same (i.e. afaik you always include <iostream>, std, int main, return 0)
|
[
"C++ is a more low-level language that executes without the context of an interpreter. As such, it has many different design choices than does Python, because C++ has no environment which it can rely on to manage information like types and memory. C++ can be used to write an operating system kernel where there is no code running on the machine except for the program itself, which means that the language (some library facilities are not available for so-called freestanding implementations) must be self-contained. This is why C++ has no equivalent to Python's eval, nor a means of determining members, etc. of a class, nor other features that require an execution environment (or a massive overhead in the program itself instead of such an environment)\nFor your individual questions:\n\ndo you have to include the <iostream> everytime? Do you ever not need it?\n\n#include <iostream> is the directive that imports the <iostream> header into your program. <iostream> contains the standard input/output objects - in particular, cout. If you aren't using standard I/O objects (for instance, you use only file I/O, or your program uses a GUI library, or are writing an operating system kernel), you do not need <iostream>\n\nsame question for the standard library, when do you not need std::*?\n\nstd is the namespace containing all of the standard library. using namespace std; is sort of like from std import *, whereas a #include directive is (in this regard) more like a barebones import std statement. (in actual fact, the mechanism is rather different, because C++ does not use using namespace std; to automatically lookup objects in std; the using-directive only imports the names into the global namespace.)\nI'll note here that using-directives (using namespace) are frequently frowned upon in C++ code, as they import a lot of names and can cause name clashes. using-declarations (using std::cout;) are preferred when possible, as is limiting the scope of a using-directive (for instance, to one function or to one source file). Don't ever put using namespace in a header without good reason.\n\nis the \"main\" part a function? Do you ever call the main function? Why is it an integer? Why does C++ need to have a main function but Python doesn't?\n\nmain is the entry point to the program - where execution starts. In Python, the __main__ module serves the same purpose. C++ does not execute code outside a defined function like Python does, so its entry point is a function rather than a module.\n\ndo you need \"std::cout << \"? Isn't that needlessly long and complicated compared to Python?\n\nstd::cout is only needed if you don't import the cout name into the global namespace, either by a using-directive (using namespace std;) or by a using-declaration (using std::cout). In this regard, it is once again much like the distinction between Python's import std and from std import * or from std import cout.\nThe << is an overloaded operator for standard stream objects. cout << value calls cout's function to output value. Python needs no such extra code because print is built into the language; this does not make sense for C++, where there may not even be an operating system, much less an I/O library.\n\ndo you need to return 0 even when you are never going to use it?\n\nNo. main (and no other function) has an implicit return 0; at the end. The return value of main (or, if the exit function is called, the value passed to it) is passed back to the operating system as the exit code. 
0 indicates the program successfully executed - that it encountered no errors, etc. If an error is encountered, a non-zero value should be returned (or passed to exit).\n",
"In response to your questions at the end of the post, it can be summed up with the philosophy of C++:\nYou don't pay for what you don't use.\nYou don't always need to use stdin or stdout (Windows/GUI apps?), nor will you always be using the STL, nor will everything you write necessarily use the standard main (winAPI) etc. As a previous poster said, C++ is lower level than Python. You will be exposed to more of the details, which offers you more control over what you're doing.\n",
"\n... do you have to include the\n everytime? Do you ever not\n need it?\n\nYou don't need it if you're not going to use iostreams in that module. In larger programs, few modules do any actual IO directly, and so few actually need to use iostreams.\nTurning the question around: in python you need to import sys and/or os in most non-trivial programs. Why?\n\n... same question for the standard\n library, when do you not need std::*?\n\nYou can have the using line or you can use the std:: prefix. This is very similar to the choice python gives you of either saying \"from sys import *\" or \"import sys\" and then having to prefix things with \"sys.\". In python you have to say \"sys.stdout\". Is \"std::cout\" really any worse?\n\n... is the \"main\" part a function? Do\n you ever call the main function? Why\n is it an integer? Why does C++ need to\n have a main function but Python\n doesn't?\n\nYes, main is a function. Typically you wouldn't call main yourself. The name \"main\" is reserved for the entry-point of your program. It returns an integer because the value returned is used as the status code of your program. In Python you can use sys.exit if you want to return a non-zero status code.\nPython doesn't have the same convention because with Python you can have code in a module not in a function. This code is executed when you load the module. Interestingly, many people feel it is bad style to have code at the top-level of a module and will instead create a main function by doing something like this:\ndef main(argv):\n # program goes here\n\n return 0\n\nif __name__ == '__main__':\n sys.exit(main(sys.argv))\n\nAlso, in Python you tell the interpreter with module is the \"main\" module when you run it. eg: \"python foo.py\". In C, the \"main\" module is (effectively) the one with a function called main. (If there are multiple modules with a main function, it's a linker error.)\n\n... do you need \"std::cout << \"? Isn't\n that needlessly long and complicated\n compared to Python?\n\nThe equivalent in Python is actually \"sys.stdout.write(...)\". Python's print statement is a special-case short-hand.\nThat said, many people do feel the iostreams convention of using bit-shifting operators for IO was a bad idea. Ironically, Python seems to have been \"inspired\" by this syntax. If you want to use print to write to somewhere other than stdout you can say:\nprint >>file, \"Hello\"\n\n\n... do you need to return 0 even when\n you are never going to use it?\n\nYou aren't going to use it, but your program will. As mentioned earlier, the value you return is the status code of your program.\nAside: I actually do feel that C++ is overcomplicated, but not because of any of the points you mention. All of the differences you mention go away (in the sense that you need just as much complexity in Python) once you start writing non-trivial programs that have multiple modules and do more than just writing to stdout.\n",
"You include <iostream> when you want to output things to the console. Since printing \"Hello world\" involves console output, you need iostream.\nThe main function is called by the operating system, basically. It gets called with the command-line arguments passed to the program. It returns an integer because the program must return an error code to the operating system (this is the standard way for determining if the last command was successful).\nYou can always use printf(\"hello world\"); instead of std::cout << \"hello world\"; if you want to go C style. It's a bit quicker to write and lets you do formatted output.\nYou return 0 from main to indicate that the program executed successfully.\nThe compiler does not automatically include all the standard libraries and use namespace std because sometimes name collisions can result between your code and library code that you may not actually need at all. You don't always need all the libraries. Likewise, sometimes you are using a different main routine (Windows development comes to mind with its own, different WinMain starting function). The compiler also does not automatically return 0 because sometimes the program needs to indicate that it completed unsuccessfully.\n",
"There are good reasons for all these things. C++ is a very broad language it is used for everything from small embedded systems to giant applications built by 100s of programmers. The use case of a guy building a small program to run on a desktop is by no means the only one. So sometimes you are building library components. In that case no main(). Sometimes you are working on a tiny system with no standard library. In that case no std. Sometimes you want to build a Unix tool that works with other Unix text tools and signals its completion status with an int returned from main().\nIn other words the things you complain about are boilerplate to you. But they are vital details that vary to other users of the language.\n",
"This reminds me of The Evolution of a Programmer. Some of the languages and technologies demonstrated are a bit dated now, but you should get the general idea. :)\n",
"One of the reasons C++ is rather complicated is because it was designed to address problems that crop up in large programs. At the time C++ was created as AT&T, their biggest C program was about 10 million lines of code. At that scale, C doesn't function very well. C++ addresses many of the problems you get with that kind of program. \nWith that said, it's also possible to answer the original questions:\n\nYou would include <iostream> where it's needed. If you've got 10.000 C++ files, it's quite common that less than 1000, sometimes less than 100 will produce user-visible output.\nA statement like print \"Hello, world\" assumes that there is a default output, but makes it hard to generalize. The cout << \"Hello, world\" form makes it explicit where the output goes, but the same form also allows cerr << \"Goodbye, world\" and MyTmpFile << \"Starting phase #\" << i\nThe standard library is in the std:: namespace. My 10.000 files will be in an additional 25 namespaces.\nmain is an oddity in many ways, being the startup function. \n\n",
"Baldur:\nYou don't always need <iostream>. The only things that you will always need are:\n\nA main function (or a WinMain, if you're writing Win32 apps).\nVariables, functions, operators, language constructs (if, while, etc.).\nThe ability to include functionality from libraries into your program.\n\nEverything else is application-specific.\nAs other posters say, the return value of the main function is an error code1. If main returns 0, be happy: everything worked OK!\n1This is useful when you write programs that \"communicate\" with other programs. The most simple way that a program can \"tell\" another whether it executed properly is using an error code.\n",
"As people have said, the simple answer is that they're different languages, with different goals. To answer your specific questions...\n\n... do you have to include the <iostream> everytime? Do you ever not need it?\n\n<iostream> is one of the header files for iostreams, the part of the C++ standard library responsible for input/output; in this instance, you need it to gain access to std::cout. If you're not doing I/O operations in a source file, you don't need to include it -- for example, most files containing class definitions probably won't need <iostream>.\n\n... same question for the standard library, when do you not need std::*?\n\nstd is the name of namespace containing classes in the standard library; it's there to avoid name collisions. Python has packages and modules to do this. \nYou can use the using statement to bring items from another namespace into your current scope, see this FAQ entry for an example (and an explanation of why it's bad to blindly bring all of std into scope!). \n\n... why is the \"main\" part a function? Do you ever call the main function? Why is it an integer? Why does C++ need to have a main function but Python doesn't?\n\nExecutable statements in C++ have to be contained within a function, and the main function is defined as where execution begins. In Python, executable statements can be placed at the top-level of a file, and execution is defined to .\nYou can call main() if you wish -- it's just a function, after all -- but there's not often a reason to do this. Behind the scenes, most implementations of C++ call main() for you once some startup housekeeping has been done by the runtime library.\nThe return value of main() is returned back to the operating system. This stems from C and UNIX, in which application programs are required to provide a 1-byte exit status code, and returning that value from main() is a clear way of expressing this. \n\n... why do you need \"std::cout << \"? Isn't that needlessly long and complicated compared to Python?\n\nThis is just a design difference. iostreams is a fairly complex beast with lots of features, and one of the side-effects of this is that the syntax is a bit ugly for simple tasks at times. \n\n... why do you need to return 0 even when you are never going to use it?\n\nYou do use it; this is the value returned to the operating system as the exit status of the program. \n",
"Python is high-level language. C++ is middle-level language.\n"
] |
[
91,
14,
7,
6,
6,
5,
5,
3,
2,
0
] |
[] |
[] |
[
"c++",
"python",
"syntax"
] |
stackoverflow_0001355803_c++_python_syntax.txt
|
Q:
How to Extract the key from unnecessary Html Wrapping in python
The HTML page contains the key and some \n characters. I need to use only the key block, i.e. from -----BEGIN PGP PUBLIC KEY BLOCK----- to -----END PGP PUBLIC KEY BLOCK-----
and, after putting the extracted key in a file, can I pass it to any function?
A:
in its simplest form
import re
clean = re.sub("</?[^\W].{0,10}?>|\n|\r\n", "", your_html) #remove tags and newlines
key = re.search(r'BEGIN PGP PUBLIC KEY BLOCK.+?END PGP PUBLIC KEY BLOCK', clean)
or if you don't need BEGIN PGP ... BLOCK and END PGP ... BLOCK:
key = re.search(r'BEGIN PGP PUBLIC KEY BLOCK----(.+?)----END PGP PUBLIC KEY BLOCK',clean)
is this what you're after?
(I don't have python right here to check it, but I hope it's OK)
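One caveat (my own addition, not part of the answer above): armored PGP keys are newline-sensitive, so if the extracted block must stay usable by GnuPG, a safer sketch strips only the tags and keeps the line breaks, using re.DOTALL so . matches across lines; 'key.asc' is a placeholder file name:
import re

clean = re.sub(r'</?[^>]+>', '', your_html)   # strip tags, keep newlines
m = re.search(r'-----BEGIN PGP PUBLIC KEY BLOCK-----.+?'
              r'-----END PGP PUBLIC KEY BLOCK-----', clean, re.DOTALL)
if m:
    open('key.asc', 'w').write(m.group(0))    # pass this file on as needed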
|
How to Extract the key from unnecessary Html Wrapping in python
|
The HTML page contains the key and some \n characters. I need to use only the key block, i.e. from -----BEGIN PGP PUBLIC KEY BLOCK----- to -----END PGP PUBLIC KEY BLOCK-----
and, after putting the extracted key in a file, can I pass it to any function?
|
[
"in it's simpliest form\nimport re\nclean = re.sub(\"</?[^\\W].{0,10}?>|\\n|\\r\\n\", \"\", your_html) #remove tags and newlines\nkey = re.search(r'BEGIN PGP PUBLIC KEY BLOCK.+?END PGP PUBLIC KEY BLOCK', clean)\n\nor if you don't need BEGIN PGP ... BLOCK and END PGP ... BLOCK:\nkey = re.search(r'BEGIN PGP PUBLIC KEY BLOCK----(.+?)----END PGP PUBLIC KEY BLOCK',clean)\n\nis this what you're after?\n(I don't have python right here to check it, but I hope it's OK)\n"
] |
[
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001356705_python.txt
|
Q:
Display jpg images in python
I am creating a simple tool to add album cover images to mp3 files in python. So far I am just working on sending a request to amazon with artist and album title, and get the resulting list, as well as finding the actual images for each result. What I want to do is to display a simple frame with a button/link for each image, and a skip/cancel button.
I have done some googling, but I can't find examples that I can use as a base.
I want to display the images directly from the web. Ie. using urllib to open and read the bytes into memory, rather than go via a file on disk
I want to display the images as buttons preferably
All examples seem to focus on working with files on disk rather than with just a buffer. The Tk documentation in the Python standard library doesn't seem to cover the basic Button widget. This seems like an easy task; I have just not had any luck finding the proper documentation yet.
A:
You can modify this using urllib.urlopen(). I don't know (as I haven't tested it) if you can make this step without saving the (image) file locally, but IMHO urlopen returns a file handle that is usable in tk.PhotoImage().
For jpg files in PhotoImage you need PIL:
from PIL import Image, ImageTk
image = Image.open("test.jpg")
photo = ImageTk.PhotoImage(image)
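Putting the pieces together for the in-memory case, a minimal untested sketch (the URL is a placeholder; keeping a reference to the PhotoImage is needed so Tkinter doesn't garbage-collect it):
import urllib
import Tkinter as tk
from StringIO import StringIO
from PIL import Image, ImageTk

root = tk.Tk()
url = 'http://example.com/cover.jpg'      # placeholder
data = urllib.urlopen(url).read()         # bytes straight into memory, no file on disk
photo = ImageTk.PhotoImage(Image.open(StringIO(data)))
button = tk.Button(root, image=photo, command=root.quit)
button.image = photo                      # keep a reference or Tk drops the image
button.pack()
root.mainloop()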
|
Display jpg images in python
|
I am creating a simple tool to add album cover images to mp3 files in python. So far I am just working on sending a request to amazon with artist and album title, and get the resulting list, as well as finding the actual images for each result. What I want to do is to display a simple frame with a button/link for each image, and a skip/cancel button.
I have done some googling, but I can't find examples that I can use as a base.
I want to display the images directly from the web. Ie. using urllib to open and read the bytes into memory, rather than go via a file on disk
I want to display the images as buttons preferably
All examples seem to focus on working with files on disk rather than with just a buffer. The Tk documentation in the Python standard library doesn't seem to cover the basic Button widget. This seems like an easy task; I have just not had any luck finding the proper documentation yet.
|
[
"you can modify this using urllib.urlopen(). But I don't know (as I haven't tested it) if you can make this step without saving the (image) file locally. But IMHO urlopen returns a file handle that is usable in tk.PhotoImage().\nFor jpg files in PhotoImage you need PIL:\nfrom PIL import Image, ImageTk\nimage = Image.open(\"test.jpg\")\nphoto = ImageTk.PhotoImage(image)\n\n"
] |
[
3
] |
[
"For displaying jpgs in Python check out PIL \n"
] |
[
-1
] |
[
"python",
"tkinter"
] |
stackoverflow_0001356255_python_tkinter.txt
|
Q:
file system performance testing
I am writing a Python script that will perform performance tests on a Linux file system. So besides deadlocks, race conditions and the time taken to perform an action (delete, read, write and create), what other variables/parameters should the test contain?
A:
File system performance testing is a very complex topic. You can easily make a lot of mistakes that basically make your whole tests worthless.
Stony Brook University and IBM Watson Labs have published a highly recommended journal paper in "Transactions on Storage" about file system benchmarking, in which they present different benchmarks and their strong and weak points: A nine year study of file system and storage benchmarking.
They give lots of advice on how to design and implement a good filesystem benchmark. As I said: it is not an easy task.
A:
Can you be a little more clear?
I tried doing such once before using Python itself. I need time to try it out myself. I tried using time.time() to get the time since epoch. I think the time difference can suffice for file operations.
Update:
Check this GSOC Idea, PSF had pledged to sponsor it
http://allmydata.org/trac/tahoe/wiki/GSoCIdeas
I am trying to read up that page to get more information.
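For illustration, a minimal sketch of that time.time() approach (file name and size are arbitrary):
import os, time

start = time.time()
f = open('testfile', 'w')
f.write('x' * 1024 * 1024)   # 1 MB write
f.close()
elapsed = time.time() - start
os.remove('testfile')
print 'create+write took %.4f s' % elapsed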
A:
You might be interested in looking at tools like collectd and iotop. Then again, you might also be interested in just using them instead of reinventing the wheel - as far as I see, such performance analysis is not learned in a day, and these guys invested significant amounts of time and knowledge in building these tools.
A:
You should try to use software that already exists. You can use iozone for this.
For a tutorial, you should refer to this blog post on nixcraft
|
file system performance testing
|
I am writing a Python script that will perform performance tests on a Linux file system. So besides deadlocks, race conditions and the time taken to perform an action (delete, read, write and create), what other variables/parameters should the test contain?
|
[
"File system performance testing is a very complex topic. You can easily make a lots of mistakes that basically make your whole tests worthless.\nStony Brook University and IBM Watson Labs have published an highly recommended journal paper in the \"Transaction of Storage\" about file system benchmarking, in which they present different benchmarks and their strong and weak points: A nine year study of file system and storage benchmarking.\nThey give lots of advise how to design and implement a good filesystem benchmark. As I said: It is not an easy task.\n",
"Can you be a little more clear?\nI tried doing such once before using Python itself. I need time to try it out myself. I tried using time.time() to get the time since epoch. I think the time difference can suffice for file operations.\nUpdate:\nCheck this GSOC Idea, PSF had pledged to sponsor it\nhttp://allmydata.org/trac/tahoe/wiki/GSoCIdeas\nI am trying to read up that page to get more information.\n",
"You might be inetersting in looking at tools like caollectd and iotop. Then again, yopu mightalso by interested in just using them instead of reinventing the wheel - as far as I see, such performance analysis is not learned in a day, and these guys invested significant amounts of time and knowledge in building these tools.\n",
"You should try to use the softwares already present. You can use iozone for the same. \nFor tutorial, you should refer to this blog post on nixcraft\n"
] |
[
3,
0,
0,
0
] |
[] |
[] |
[
"filesystems",
"performance_testing",
"python"
] |
stackoverflow_0001356240_filesystems_performance_testing_python.txt
|
Q:
Handling KeyboardInterrupt in a KDE Python application?
I'm working on a PyKDE4/PyQt4 application, Autokey, and I noticed that when I send the program a CTRL+C, the keyboard interrupt is not processed until I interact with the application, e.g. by clicking on a menu item or changing a checkbox.
lfaraone@stone:~$ /usr/bin/autokey
^C^C^C
Traceback (most recent call last):
File "/usr/lib/python2.6/dist-packages/autokey/ui/popupmenu.py", line 113, in on_triggered
def on_triggered(self):
KeyboardInterrupt
^C^C^C
Traceback (most recent call last):
File "/usr/lib/python2.6/dist-packages/autokey/ui/configwindow.py", line 423, in mousePressEvent
def mousePressEvent(self, event):
KeyboardInterrupt
This is despite having the following in /usr/bin/autokey:
#!/usr/bin/python
# -*- coding: utf-8 -*-
import sys
from autokey.autokey import Application
a = Application()
try:
a.main()
except KeyboardInterrupt:
a.shutdown()
sys.exit(0)
Why isn't the KeyboardInterrupt caught:
when I issue it, rather than when I next take an action in the GUI
by the initial try/except clause?
Running Ubuntu 9.04 with Python 2.6.
A:
Try doing this:
import signal
signal.signal(signal.SIGINT, signal.SIG_DFL)
before invoking a.main().
Update: Remember, Ctrl-C can be used for Copy in GUI applications. It's better to use Ctrl+\ in Qt, which will cause the event loop to terminate and the application to close.
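A related workaround sometimes used with PyQt (my own suggestion, not part of the answer above) is to keep a no-op timer running so the Python interpreter wakes up regularly and gets a chance to process pending signals:
from PyQt4.QtCore import QTimer

# create this after the QApplication exists, before entering the event loop
timer = QTimer()
timer.timeout.connect(lambda: None)   # no-op; just hands control back to Python
timer.start(500)                      # every 500 ms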
|
Handling KeyboardInterrupt in a KDE Python application?
|
I'm working on a PyKDE4/PyQt4 application, Autokey, and I noticed that when I send the program a CTRL+C, the keyboard interrupt is not processed until I interact with the application, e.g. by clicking on a menu item or changing a checkbox.
lfaraone@stone:~$ /usr/bin/autokey
^C^C^C
Traceback (most recent call last):
File "/usr/lib/python2.6/dist-packages/autokey/ui/popupmenu.py", line 113, in on_triggered
def on_triggered(self):
KeyboardInterrupt
^C^C^C
Traceback (most recent call last):
File "/usr/lib/python2.6/dist-packages/autokey/ui/configwindow.py", line 423, in mousePressEvent
def mousePressEvent(self, event):
KeyboardInterrupt
This is despite having the following in /usr/bin/autokey:
#!/usr/bin/python
# -*- coding: utf-8 -*-
import sys
from autokey.autokey import Application
a = Application()
try:
a.main()
except KeyboardInterrupt:
a.shutdown()
sys.exit(0)
Why isn't the KeyboardInterrupt caught:
when I issue it, rather than when I next take an action in the GUI
by the initial try/except clause?
Running Ubuntu 9.04 with Python 2.6.
|
[
"Try doing this:\nimport signal\nsignal.signal(signal.SIGINT, signal.SIG_DFL)\n\nbefore invoking a.main().\nUpdate: Remember, Ctrl-C can be used for Copy in GUI applications. It's better to use Ctrl+\\ in Qt, which will cause the event loop to terminate and the application to close.\n"
] |
[
8
] |
[] |
[] |
[
"autokey",
"keyboardinterrupt",
"pykde",
"pyqt",
"python"
] |
stackoverflow_0001353823_autokey_keyboardinterrupt_pykde_pyqt_python.txt
|
Q:
python object to native c++ pointer
I'm toying around with the idea of using Python as an embedded scripting language for a project I'm working on, and have got most things working. However, I can't seem to convert a Python-extended object back into a native C++ pointer.
So this is my class:
class CGEGameModeBase
{
public:
virtual void FunctionCall()=0;
virtual const char* StringReturn()=0;
};
class CGEPYGameMode : public CGEGameModeBase, public boost::python::wrapper<CGEPYGameMode>
{
public:
virtual void FunctionCall()
{
if (override f = this->get_override("FunctionCall"))
f();
}
virtual const char* StringReturn()
{
if (override f = this->get_override("StringReturn"))
return f();
return "FAILED TO CALL";
}
};
Boost wrapping:
BOOST_PYTHON_MODULE(GEGameMode)
{
class_<CGEGameModeBase, boost::noncopyable>("CGEGameModeBase", no_init);
class_<CGEPYGameMode, bases<CGEGameModeBase> >("CGEPYGameMode", no_init)
.def("FunctionCall", &CGEPYGameMode::FunctionCall)
.def("StringReturn", &CGEPYGameMode::StringReturn);
}
and the python code:
import GEGameMode
def Ident():
return "Alpha"
def NewGamePlay():
return "NewAlpha"
def NewAlpha():
import GEGameMode
import GEUtil
class Alpha(GEGameMode.CGEPYGameMode):
def __init__(self):
print "Made new Alpha!"
def FunctionCall(self):
GEUtil.Msg("This is function test Alpha!")
def StringReturn(self):
return "This is return test Alpha!"
return Alpha()
Now I can call the first two functions fine by doing this:
const char* ident = extract< const char* >( GetLocalDict()["Ident"]() );
const char* newgameplay = extract< const char* >( GetLocalDict()["NewGamePlay"]() );
printf("Loading Script: %s\n", ident);
CGEPYGameMode* m_pGameMode = extract< CGEPYGameMode* >( GetLocalDict()[newgameplay]() );
However, when I try to convert the Alpha class back to its base class (last line above) I get a Boost error:
TypeError: No registered converter was able to extract a C++ pointer to type class CGEPYGameMode from this Python object of type Alpha
I have done a lot of searching on the net but can't work out how to convert the Alpha object into its base-class pointer. I could leave it as an object, but I would rather have it as a pointer so some non-Python-aware code can use it. Any ideas?
A:
Thanks to Stefan from the Python C++ mailing list, I was missing
super(Alpha, self).__init__()
from the constructor call, meaning it never made the parent class. Thought this would have been automatic :D
The only other issue I had was saving the new class instance as a global var; otherwise it got cleaned up as it went out of scope.
So happy now
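For reference, a sketch of the corrected constructor from the script in the question (the super call is the only change):
class Alpha(GEGameMode.CGEPYGameMode):
    def __init__(self):
        super(Alpha, self).__init__()   # the missing call: builds the C++ base
        print "Made new Alpha!"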
A:
May not be the answer you are looking for, but take a look at ChaiScript for embedding in your C++ application.
According to their website,
ChaiScript is the first and only
scripting language designed from the
ground up with C++ compatibility in
mind. It is an ECMAScript-inspired,
embedded functional-like language.
ChaiScript has no meta-compiler, no
library dependencies, no build system
requirements and no legacy baggage of
any kind. At can work seamlessly with
any C++ functions you expose to it. It
does not have to be told explicitly
about any type, it is function
centric.
With ChaiScript you can literally
begin scripting your application by
adding three lines of code to your
program and not modifying your build
steps at all.
A:
Well, I am not sure whether it will help you, but I had the same problem with scripts in Lua. We created objects from Lua and wanted some c++ code to handle the objects via pointers. We did the following:
all object stuff was written in c++, including constructors, destructors and factory method;
lua code was calling a factory method to create an object. this factory method 1) gave the object a unique ID number and 2) registered it in the c++ map, that mapped ID numbers to native pointers;
so, whenever lua was going to pass a pointer to c++ code, it gave an object ID instead, and the c++ code looked up the map for finding the actual pointer by ID.
|
python object to native c++ pointer
|
I'm toying around with the idea of using Python as an embedded scripting language for a project I'm working on, and have got most things working. However, I can't seem to convert a Python-extended object back into a native C++ pointer.
So this is my class:
class CGEGameModeBase
{
public:
virtual void FunctionCall()=0;
virtual const char* StringReturn()=0;
};
class CGEPYGameMode : public CGEGameModeBase, public boost::python::wrapper<CGEPYGameMode>
{
public:
virtual void FunctionCall()
{
if (override f = this->get_override("FunctionCall"))
f();
}
virtual const char* StringReturn()
{
if (override f = this->get_override("StringReturn"))
return f();
return "FAILED TO CALL";
}
};
Boost wrapping:
BOOST_PYTHON_MODULE(GEGameMode)
{
class_<CGEGameModeBase, boost::noncopyable>("CGEGameModeBase", no_init);
class_<CGEPYGameMode, bases<CGEGameModeBase> >("CGEPYGameMode", no_init)
.def("FunctionCall", &CGEPYGameMode::FunctionCall)
.def("StringReturn", &CGEPYGameMode::StringReturn);
}
and the python code:
import GEGameMode
def Ident():
return "Alpha"
def NewGamePlay():
return "NewAlpha"
def NewAlpha():
import GEGameMode
import GEUtil
class Alpha(GEGameMode.CGEPYGameMode):
def __init__(self):
print "Made new Alpha!"
def FunctionCall(self):
GEUtil.Msg("This is function test Alpha!")
def StringReturn(self):
return "This is return test Alpha!"
return Alpha()
Now I can call the first two functions fine by doing this:
const char* ident = extract< const char* >( GetLocalDict()["Ident"]() );
const char* newgameplay = extract< const char* >( GetLocalDict()["NewGamePlay"]() );
printf("Loading Script: %s\n", ident);
CGEPYGameMode* m_pGameMode = extract< CGEPYGameMode* >( GetLocalDict()[newgameplay]() );
However, when I try to convert the Alpha class back to its base class (last line above) I get a Boost error:
TypeError: No registered converter was able to extract a C++ pointer to type class CGEPYGameMode from this Python object of type Alpha
I have done a lot of searching on the net but can't work out how to convert the Alpha object into its base-class pointer. I could leave it as an object, but I would rather have it as a pointer so some non-Python-aware code can use it. Any ideas?
|
[
"Thanks to Stefan from the python c++ mailling list, i was missing \nsuper(Alpha, self).__init__()\n\nfrom the constructor call meaning it never made the parent class. Thought this would of been automatic :D\nOnly other issue i had was saving the new class instance as a global var otherwise it got cleaned up as it went out of scope.\nSo happy now\n",
"May not be the answer you are looking for, but take a look at ChaiScript for embedding in your C++ application.\nAccording to their website,\n\nChaiScript is the first and only\n scripting language designed from the\n ground up with C++ compatibility in\n mind. It is an ECMAScript-inspired,\n embedded functional-like language.\nChaiScript has no meta-compiler, no\n library dependencies, no build system\n requirements and no legacy baggage of\n any kind. At can work seamlessly with\n any C++ functions you expose to it. It\n does not have to be told explicitly\n about any type, it is function\n centric.\nWith ChaiScript you can literally\n begin scripting your application by\n adding three lines of code to your\n program and not modifying your build\n steps at all.\n\n",
"Well, I am not sure whether it will help you, but I had the same problem with scripts in Lua. We created objects from Lua and wanted some c++ code to handle the objects via pointers. We did the following:\n\nall object stuff was written in c++, including constructors, destructors and factory method;\nlua code was calling a factory method to create an object. this factory method 1) gave the object a unique ID number and 2) registered it in the c++ map, that mapped ID numbers to native pointers;\nso, whenever lua was going to pass a pointer to c++ code, it gave an object ID instead, and the c++ code looked up the map for finding the actual pointer by ID.\n\n"
] |
[
2,
1,
0
] |
[] |
[] |
[
"boost",
"c++",
"embedded_language",
"python"
] |
stackoverflow_0001355187_boost_c++_embedded_language_python.txt
|
Q:
How do I line up text from python into columns in my terminal?
I'm printing out some values from a script in my terminal window like this:
for i in items:
print "Name: %s Price: %d" % (i.name, i.price)
How do I make these line up into columns?
A:
If you know the maximum lengths of data in the two columns, then you can use format qualifiers. For example if the name is at most 20 chars long and the price will fit into 10 chars, you could do
print "Name: %-20s Price: %10d" % (i.name, i.price)
This is better than using tabs as tabs won't line up in some circumstances (e.g. if one name is quite a bit longer than another).
If some names won't fit into usable space, then you can use the . format qualifier to truncate the data. For example, using "%-20.20s" for the name format will truncate any longer names to fit in the 20-character column.
A:
As Vinay said, use string format specifiers.
If you don't know the maximum lengths, you can find them by making an extra pass through the list:
maxn, maxp = 0, 0
for item in items:
maxn = max(maxn, len(item.name))
maxp = max(maxp, len(str(item.price)))
then use '*' instead of the number and provide the calculated width as an argument.
for item in items:
print "Name: %-*s Price: %*d" % (maxn, item.name, maxp, item.price)
A:
You can also use the rjust() / ljust() methods for str objects.
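For example, a quick sketch using the same 20/10 column widths as the answers above:
for i in items:
    print i.name.ljust(20) + str(i.price).rjust(10)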
|
How do I line up text from python into columns in my terminal?
|
I'm printing out some values from a script in my terminal window like this:
for i in items:
print "Name: %s Price: %d" % (i.name, i.price)
How do I make these line up into columns?
|
[
"If you know the maximum lengths of data in the two columns, then you can use format qualifiers. For example if the name is at most 20 chars long and the price will fit into 10 chars, you could do\nprint \"Name: %-20s Price: %10d\" % (i.name, i.price)\n\nThis is better than using tabs as tabs won't line up in some circumstances (e.g. if one name is quite a bit longer than another).\nIf some names won't fit into usable space, then you can use the . format qualifier to truncate the data. For example, using \"%-20.20s\" for the name format will truncate any longer names to fit in the 20-character column.\n",
"As Vinay said, use string format specifiers.\nIf you don't know the maximum lengths, you can find them by making an extra pass through the list:\nmaxn, maxp = 0, 0\nfor item in items:\n maxn = max(maxn, len(item.name))\n maxp = max(maxp, len(str(item.price)))\n\nthen use '*' instead of the number and provide the calculated width as an argument.\nfor item in items:\n print \"Name: %-*s Price: %*d\" % (maxn, item.name, maxp, item.price)\n\n",
"You can also use the rjust() / ljust() methods for str objects.\n"
] |
[
18,
9,
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001356029_python.txt
|
Q:
Detecting case mismatch on filename in Windows (preferably using python)?
I have some xml-configuration files that we create in a Windows environment but is deployed on Linux. These configuration files reference each other with filepaths. We've had problems with case-sensitivity and trailing spaces before, and I'd like to write a script that checks for these problems. We have Cygwin if that helps.
Example:
Let's say I have a reference to the file foo/bar/baz.xml, I'd do this
<someTag fileref="foo/bar/baz.xml" />
Now if we by mistake do this:
<someTag fileref="fOo/baR/baz.Xml " />
It will still work on Windows, but it will fail on Linux.
What I want to do is detect the cases where a file reference in these files doesn't match the real file with respect to case sensitivity.
A:
os.listdir on a directory, in all case-preserving filesystems (including those on Windows), returns the actual case for the filenames in the directory you're listing.
So you need to do this check at each level of the path:
def onelevelok(parent, thislevel):
for fn in os.listdir(parent):
if fn.lower() == thislevel.lower():
return fn == thislevel
raise ValueError('No %r in dir %r!' % (
thislevel, parent))
where I'm assuming that the complete absence of any case variation of a name is a different kind of error, and using an exception for that; and, for the whole path (assuming no drive letters or UNC that wouldn't translate to Linux anyway):
def allpathok(path):
    # os.path.split only peels off the last component, so split the
    # path into all of its components instead
    levels = [lev for lev in path.split(os.sep) if lev]
    if os.path.isabs(path):
        parent = '/'
    else:
        parent = '.'
    for lev in levels:
        if not onelevelok(parent, lev):
            return False
        parent = os.path.join(parent, lev)
    return True
You may need to adapt this if , e.g., foo/bar is not to be taken to mean that foo is in the current directory, but somewhere else; or, of course, if UNC or drive letters are in fact needed (but as I mentioned translating them to Linux is not trivial anyway;-).
Implementation notes: the loop threads the cumulative parent path along as it descends, so os.listdir is always handed a directory it can actually list; and it returns False as soon as it detects the first mismatch, so it short-circuits instead of needlessly checking deeper levels.
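A sketch of how this might be driven from the XML side (ElementTree's getiterator and the fileref attribute come from the question; the file name and the separate whitespace check are my own assumptions; this assumes the checker runs where os.sep is '/', e.g. under Cygwin or Linux):
import xml.etree.ElementTree as ET

tree = ET.parse('config.xml')          # placeholder file name
for elem in tree.getiterator():
    ref = elem.get('fileref')
    if ref is None:
        continue
    if ref != ref.strip():
        print 'whitespace problem: %r' % ref
    elif not allpathok(ref):           # raises ValueError if the name is absent entirely
        print 'case mismatch: %r' % ref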
A:
It's hard to judge what exactly your problem is, but if you apply os.path.normcase along with str.strip before saving your file name, it should solve all your problems.
As I said in a comment, it's not clear how you are ending up with such a mistake. However, it would be trivial to check for an existing file, as long as you have some sensible convention (all file names are lower case, for example):
try:
open(fname)
except IOError:
open(fname.lower())
|
Detecting case mismatch on filename in Windows (preferably using python)?
|
I have some xml-configuration files that we create in a Windows environment but are deployed on Linux. These configuration files reference each other with filepaths. We've had problems with case-sensitivity and trailing spaces before, and I'd like to write a script that checks for these problems. We have Cygwin if that helps.
Example:
Let's say I have a reference to the file foo/bar/baz.xml, I'd do this
<someTag fileref="foo/bar/baz.xml" />
Now if we by mistake do this:
<someTag fileref="fOo/baR/baz.Xml " />
It will still work on Windows, but it will fail on Linux.
What I want to do is detect these cases where the file reference in these files doesn't match the real file with respect to case sensitivity.
|
[
"os.listdir on a directory, in all case-preserving filesystems (including those on Windows), returns the actual case for the filenames in the directory you're listing.\nSo you need to do this check at each level of the path:\ndef onelevelok(parent, thislevel):\n for fn in os.listdir(parent):\n if fn.lower() == thislevel.lower():\n return fn == thislevel\n raise ValueError('No %r in dir %r!' % (\n thislevel, parent))\n\nwhere I'm assuming that the complete absence of any case variation of a name is a different kind of error, and using an exception for that; and, for the whole path (assuming no drive letters or UNC that wouldn't translate to Windows anyway):\ndef allpathok(path):\n levels = os.path.split(path)\n if os.path.isabs(path):\n top = ['/']\n else:\n top = ['.']\n return all(onelevelok(p, t)\n for p, t in zip(top+levels, levels))\n\nYou may need to adapt this if , e.g., foo/bar is not to be taken to mean that foo is in the current directory, but somewhere else; or, of course, if UNC or drive letters are in fact needed (but as I mentioned translating them to Linux is not trivial anyway;-).\nImplementation notes: I'm taking advantage of the fact that zip just drop \"extra entries\" beyond the length of the shortest of the sequences it's zipping; so I don't need to explicitly slice off the \"leaf\" (last entry) from levels in the first argument, zip does it for me. all will short circuit where it can, returning False as soon as it detects a false value, so it's just as good as an explicit loop but faster and more concise.\n",
"it's hard to judge what exactly your problem is, but if you apply os.path.normcase along with str.stript before saving your file name, it should solve all your problems.\nas I said in comment, it's not clear how are you ending up with such a mistake. However, it would be trivial to check for existing file, as long as you have some sensible convention (all file names are lower case, for example):\ntry:\n open(fname)\nexcept IOError:\n open(fname.lower())\n\n"
] |
[
3,
0
] |
[] |
[] |
[
"case_sensitive",
"python",
"windows"
] |
stackoverflow_0001356386_case_sensitive_python_windows.txt
|
Q:
Django ModelChoiceField initial data not working for ForeignKey
I am filling my form with initial data in the normal way:
form = somethingForm(initial = {
'title' : something.title,
'category' : something.category_id,
})
The title works fine, but if the category is a ModelChoiceField and a ForeignKey in the model, the initial data won't work. Nothing will be selected in the Select Box. If I change category to an IntegerField in the model it works fine.
I still want to use a ForeignKey for category though, so how do I fix this?
A:
Perhaps try using an instance of a category rather than its ID?
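For illustration, a minimal sketch of that suggestion (the app and Category model names are assumptions, since the question doesn't show the models):
from myapp.models import Category  # 'myapp' and 'Category' are assumed names

category = Category.objects.get(pk=something.category_id)
form = somethingForm(initial = {
    'title' : something.title,
    'category' : category,  # pass the instance instead of the raw id
    })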
A:
You need to do this
form = somethingForm(initial = {
'title' : something.title,
'category' : [("database value","display value")],
})
Why a list of tuples?
Because choice fields are rendered with a select widget (i.e. an HTML <select> element containing <option> elements).
For each option we need to specify two things: 1. the internal value, 2. the display value (each tuple in the list specifies this pair).
|
Django ModelChoiceField initial data not working for ForeignKey
|
I am filling my form with initial data in the normal way:
form = somethingForm(initial = {
'title' : something.title,
'category' : something.category_id,
})
The title works fine, but if the category is a ModelChoiceField and a ForeignKey in the model, the initial data won't work. Nothing will be selected in the Select Box. If I change category to an IntegerField in the model it works fine.
I still want to use a ForeignKey for category though, so how do I fix this?
|
[
"Perhaps try using an instance of a category rather than its ID?\n",
"You need to do this\nform = somethingForm(initial = {\n 'title' : something.title, \n 'category' : [(\"database value\",\"display value\")],\n })\n\nWhy list of tuples?\n\nBecause choice fields are associated with select widget ( i.e\n html ===> \n ..............) \nFor each option we need to specify two things 1.internal value\n 2.display value (each tuple in the list specifies this)\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"django",
"python"
] |
stackoverflow_0001296938_django_python.txt
|
Q:
Creating database schema for parsed feed
Additional questions regarding SilentGhost's initial answer to a problem I'm having parsing Twitter RSS feeds. See also partial code below.
First, could I insert tags[0], tags[1], etc., into the database, or is there a different/better way to do it?
Second, almost all of the entries have a url, but a few don't; likewise, many entries don't have the hashtags. So, would the thing to do be to create default values for url and tags? And if so, do you have any hints on how to do that? :)
Third, when you say the single-table db design is not optimal, do you mean I should create a separate table for tags? Right now, I have one table for the RSS feed urls and another table with all the rss entry data (summary, date, etc.).
I've pasted in a modified version of the code you posted. I had some success in getting a "tinyurl" variable to get into the sqlite database, but now it isn't working. Not sure why.
Lastly, assuming I can get the whole thing up and running (smile), is there a central site where people might appreciate seeing my solution? Or should I just post something on my own blog?
Best,
Greg
A:
I would suggest reading up on database normalisation, especially on 1st and 2nd normal forms. Once you're done with it, I hope there won't be a need for default values, and your db schema will evolve into something more appropriate.
There are plenty of options for sharing your source code on the web; depending on what versioning system you're most comfortable with, you might have a look at such well-known sites as Google Code, Bitbucket, GitHub and many others.
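As an illustration only (table and column names are assumptions, not taken from your code), a normalised layout with a separate tags table might look like this in sqlite3:
import sqlite3

conn = sqlite3.connect('feeds.db')
conn.executescript("""
CREATE TABLE IF NOT EXISTS entries (
    id INTEGER PRIMARY KEY,
    url TEXT,
    summary TEXT
);
CREATE TABLE IF NOT EXISTS tags (
    entry_id INTEGER REFERENCES entries(id),
    tag TEXT
);
""")

Each hashtag then becomes one row in tags, so entries with zero or many tags need no default values.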
|
Creating database schema for parsed feed
|
Additional questions regarding SilentGhost's initial answer to a problem I'm having parsing Twitter RSS feeds. See also partial code below.
First, could I insert tags[0], tags[1], etc., into the database, or is there a different/better way to do it?
Second, almost all of the entries have a url, but a few don't; likewise, many entries don't have the hashtags. So, would the thing to do be to create default values for url and tags? And if so, do you have any hints on how to do that? :)
Third, when you say the single-table db design is not optimal, do you mean I should create a separate table for tags? Right now, I have one table for the RSS feed urls and another table with all the rss entry data (summary, date, etc.).
I've pasted in a modified version of the code you posted. I had some success in getting a "tinyurl" variable to get into the sqlite database, but now it isn't working. Not sure why.
Lastly, assuming I can get the whole thing up and running (smile), is there a central site where people might appreciate seeing my solution? Or should I just post something on my own blog?
Best,
Greg
|
[
"I would suggest reading up on database normalisation, especially on 1st and 2nd normal forms. Once you're done with it, I hope there won't be need for default values, and your db schema evolves into something more appropriate.\nThere are plenty of options for sharing your source code on the web, depending on what versioning system you're most comfortable with you might have a look at such well know sites as google code, bitbucket, github and many other.\n"
] |
[
2
] |
[] |
[] |
[
"database_schema",
"python",
"rss"
] |
stackoverflow_0001358501_database_schema_python_rss.txt
|
Q:
Homework: Triangle angle calculation all sides known
I know I should do my homework on my own but I simply can't get my homework to work the way I want it to:
from __future__ import division
import turtle
import math
def triangle(c,a,b,beta,gamma):
turtle.forward(c)
turtle.right(180+beta)
turtle.forward(a)
turtle.right(beta)
turtle.left(beta+gamma)
turtle.forward(b)
turtle.left(beta+gamma)
def general_abc(a,b,c):
alpha = math.degrees(math.acos(a/c))
print alpha
beta = math.degrees(math.asin(b/c))
print beta
general_abc(50,60,90)
The function general_abc() is supposed to calculate the degrees of the angles when knowing all 3 sides. I am mainly searching for the math behind it. With lots of googling I just don't seem to find the right keywords to use. Please tell me the formulas I have to look into.
A:
I think what you're looking for is the Law of Cosines, using acos and asin like you are presumes a right triangle.
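A minimal sketch of the law-of-cosines version of general_abc (assuming the three sides really do form a triangle):
import math

def general_abc(a, b, c):
    # law of cosines, solved for the angle opposite each side
    alpha = math.degrees(math.acos((b**2 + c**2 - a**2) / (2.0 * b * c)))
    beta = math.degrees(math.acos((a**2 + c**2 - b**2) / (2.0 * a * c)))
    gamma = 180.0 - alpha - beta
    print alpha, beta, gamma

general_abc(50, 60, 90)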
A:
you can use the law of cosines: c² = a² + b² - 2ab·cos(gamma), where gamma is the angle opposite side c
A:
Old Indian Chief (as I was taught):
SohCahToa
Sine = Opposite/Hypotenuse
Cosine = Adjacent/Hypotenuse
Tangent = Opposite/Adjacent
|
Homework: Triangle angle calculation all sides known
|
I know I should do my homework on my own but I simply can't get my homework to work the way I want it to:
from __future__ import division
import turtle
import math
def triangle(c,a,b,beta,gamma):
turtle.forward(c)
turtle.right(180+beta)
turtle.forward(a)
turtle.right(beta)
turtle.left(beta+gamma)
turtle.forward(b)
turtle.left(beta+gamma)
def general_abc(a,b,c):
alpha = math.degrees(math.acos(a/c))
print alpha
beta = math.degrees(math.asin(b/c))
print beta
general_abc(50,60,90)
The function general_abc() is supposed to calculate the degrees of the angles when knowing all 3 sides. I am mainly searching for the math behind it. With lots of googling I just don't seem to find the right keywords to use. Please tell me the formulas I have to look into.
|
[
"I think what you're looking for is the Law of Cosines, using acos and asin like you are presumes a right triangle.\n",
"you can use law of cosines: c² = a² + b² - 2abcos(alpha)\n",
"Old Indian Chief (as I was taught):\nSohCahToa\nSine = Opposite/Hypoteneuse\nCosine = Adjacent/Hypoteneuse\nTangent = Opposite/Adjacent\n"
] |
[
7,
1,
1
] |
[] |
[] |
[
"math",
"python",
"trigonometry"
] |
stackoverflow_0001358584_math_python_trigonometry.txt
|
Q:
Raising an exception on updating a 'constant' attribute in python
As python does not have the concept of constants, would it be possible to raise an exception if a 'constant' attribute is updated? How?
class MyClass():
CLASS_CONSTANT = 'This is a constant'
var = 'This is a not a constant, can be updated'
#this should raise an exception
MyClass.CLASS_CONSTANT = 'No, this cannot be updated, will raise an exception'
#this should not raise an exception
MyClass.var = 'updating this is fine'
#this also should raise an exception
MyClass().CLASS_CONSTANT = 'No, this cannot be updated, will raise an exception'
#this should not raise an exception
MyClass().var = 'updating this is fine'
Any attempt to change CLASS_CONSTANT as a class attribute or as an instance attribute should raise an exception.
Changing var as a class attribute or as an instance attribute should not raise an exception.
A:
Customizing __setattr__ in every class (e.g. as exemplified in my old recipe that @ainab's answer is pointing to, and other answers), only works to stop assignment to INSTANCE attributes and not to CLASS attributes. So, none of the existing answers would actually satisfy your requirement as stated.
If what you asked for IS actually exactly what you want, you could resort to some mix of custom metaclasses and descriptors, such as:
class const(object):
def __init__(self, val): self.val = val
def __get__(self, *_): return self.val
def __set__(self, *_): raise TypeError("Can't reset const!")
class mcl(type):
def __init__(cls, *a, **k):
mkl = cls.__class__
class spec(mkl): pass
for n, v in vars(cls).items():
if isinstance(v, const):
setattr(spec, n, v)
spec.__name__ = mkl.__name__
cls.__class__ = spec
class with_const:
__metaclass__ = mcl
class foo(with_const):
CLASS_CONSTANT = const('this is a constant')
print foo().CLASS_CONSTANT
print foo.CLASS_CONSTANT
foo.CLASS_CONSTANT = 'Oops!'
print foo.CLASS_CONSTANT
This is pretty advanced stuff, so you might prefer the simpler __setattr__ approach suggested in other answers, despite it NOT meeting your requirements as stated (i.e., you might reasonably choose to weaken your requirements in order to gain simplicity;-). But the techniques here might still be interesting: the custom descriptor type const is another way (IMHO far nicer than overriding __setattr__ in each and every class that needs some constants AND making all attributes constants rather than picking and choosing...) to block assignment to an instance attribute; the rest of the code is about a custom metaclass creating unique per-class sub-metaclasses of itself, in order to exploit said custom descriptor to the fullest and achieving the exact functionality you specifically asked for.
A:
You could do something like this:
(from http://www.siafoo.net/snippet/108)
class Constants:
# A constant variable
foo = 1337
def __setattr__(self, attr, value):
if hasattr(self, attr):
raise ValueError, 'Attribute %s already has a value and so cannot be written to' % attr
self.__dict__[attr] = value
Then use it like this:
>>> const = Constants()
>>> const.test1 = 42
>>> const.test1
42
>>> const.test1 = 43
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 4, in __setattr__
ValueError: Attribute test1 already has a value and so cannot be written to
>>> const.test1
42
A:
You can use a metaclass to achieve this:
class ImmutableConstants(type):
def __init__(cls, name, bases, dct):
type.__init__(cls, name, bases, dct)
old_setattr = cls.__setattr__
def __setattr__(self, key, value):
cls.assert_attribute_mutable(key)
old_setattr(self, key, value)
cls.__setattr__ = __setattr__
def __setattr__(self, key, value):
self.assert_attribute_mutable(key)
type.__setattr__(self, key, value)
def assert_attribute_mutable(self, name):
if name.isupper():
raise AttributeError('Attribute %s is constant' % name)
class Foo(object):
__metaclass__ = ImmutableConstants
CONST = 5
class_var = 'foobar'
Foo.class_var = 'new value'
Foo.CONST = 42 # raises
But are you sure this is a real issue? Are you really accidentally setting constants all over the place? You can find most of these pretty easily with a grep -r '\.[A-Z][A-Z0-9_]*\s*=' src/.
A:
If you really want to have a constant that can't be changed then look at this: http://code.activestate.com/recipes/65207/
A:
Start reading this:
http://docs.python.org/reference/datamodel.html#customizing-attribute-access
You basically write your own version of __setattr__ that throws exceptions for some attributes, but not others.
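For example, a minimal sketch of that approach (note it only guards instance attributes, per the caveat in the first answer):
class MyClass(object):
    CLASS_CONSTANT = 'This is a constant'
    var = 'This is not a constant, can be updated'

    def __setattr__(self, name, value):
        # block rebinding of the designated constant on instances
        if name == 'CLASS_CONSTANT':
            raise AttributeError('CLASS_CONSTANT is read-only')
        object.__setattr__(self, name, value)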
|
Raising an exception on updating a 'constant' attribute in python
|
As python does not have the concept of constants, would it be possible to raise an exception if a 'constant' attribute is updated? How?
class MyClass():
CLASS_CONSTANT = 'This is a constant'
var = 'This is a not a constant, can be updated'
#this should raise an exception
MyClass.CLASS_CONSTANT = 'No, this cannot be updated, will raise an exception'
#this should not raise an exception
MyClass.var = 'updating this is fine'
#this also should raise an exception
MyClass().CLASS_CONSTANT = 'No, this cannot be updated, will raise an exception'
#this should not raise an exception
MyClass().var = 'updating this is fine'
Any attempt to change CLASS_CONSTANT as a class attribute or as an instance attribute should raise an exception.
Changing var as a class attribute or as an instance attribute should not raise an exception.
|
[
"Customizing __setattr__ in every class (e.g. as exemplified in my old recipe that @ainab's answer is pointing to, and other answers), only works to stop assignment to INSTANCE attributes and not to CLASS attributes. So, none of the existing answers would actually satisfy your requirement as stated.\nIf what you asked for IS actually exactly what you want, you could resort to some mix of custom metaclasses and descriptors, such as:\nclass const(object):\n def __init__(self, val): self.val = val\n def __get__(self, *_): return self.val\n def __set__(self, *_): raise TypeError(\"Can't reset const!\")\n\nclass mcl(type):\n def __init__(cls, *a, **k):\n mkl = cls.__class__\n class spec(mkl): pass\n for n, v in vars(cls).items():\n if isinstance(v, const):\n setattr(spec, n, v)\n spec.__name__ = mkl.__name__\n cls.__class__ = spec\n\nclass with_const:\n __metaclass__ = mcl\n\nclass foo(with_const):\n CLASS_CONSTANT = const('this is a constant')\n\nprint foo().CLASS_CONSTANT\nprint foo.CLASS_CONSTANT\nfoo.CLASS_CONSTANT = 'Oops!'\nprint foo.CLASS_CONSTANT\n\nThis is pretty advanced stuff, so you might prefer the simpler __setattr__ approach suggested in other answers, despite it NOT meeting your requirements as stated (i.e., you might reasonably choose to weaken your requirements in order to gain simplicity;-). But the techniques here might still be interesting: the custom descriptor type const is another way (IMHO far nicer than overriding __setattr__ in each and every class that needs some constants AND making all attributes constants rather than picking and choosing...) to block assignment to an instance attribute; the rest of the code is about a custom metaclass creating unique per-class sub-metaclasses of itself, in order to exploit said custom descriptor to the fullest and achieving the exact functionality you specifically asked for.\n",
"You could do something like this:\n(from http://www.siafoo.net/snippet/108)\nclass Constants:\n # A constant variable\n foo = 1337\n\n def __setattr__(self, attr, value):\n if hasattr(self, attr):\n raise ValueError, 'Attribute %s already has a value and so cannot be written to' % attr\n self.__dict__[attr] = value\n\nThen use it like this:\n>>> const = Constants()\n>>> const.test1 = 42\n>>> const.test1\n42\n>>> const.test1 = 43\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\n File \"<stdin>\", line 4, in __setattr__\nValueError: Attribute test1 already has a value and so cannot be written to\n>>> const.test1\n42\n\n",
"You can use a metaclass to achieve this:\nclass ImmutableConstants(type):\n def __init__(cls, name, bases, dct):\n type.__init__(cls, name, bases, dct)\n\n old_setattr = cls.__setattr__\n def __setattr__(self, key, value):\n cls.assert_attribute_mutable(key)\n old_setattr(self, key, value)\n cls.__setattr__ = __setattr__\n\n def __setattr__(self, key, value):\n self.assert_attribute_mutable(key)\n type.__setattr__(self, key, value)\n\n def assert_attribute_mutable(self, name):\n if name.isupper():\n raise AttributeError('Attribute %s is constant' % name)\n\nclass Foo(object):\n __metaclass__ = ImmutableConstants\n CONST = 5\n class_var = 'foobar'\n\nFoo.class_var = 'new value'\nFoo.CONST = 42 # raises\n\nBut are you sure this is a real issue? Are you really accidentally setting constants all over the place? You can find most of these pretty easily with a grep -r '\\.[A-Z][A-Z0-9_]*\\s*=' src/.\n",
"If you really want to have constant that can't be changed then look at this: http://code.activestate.com/recipes/65207/\n",
"Start reading this:\nhttp://docs.python.org/reference/datamodel.html#customizing-attribute-access\nYou basically write your own version of __setattr__ that throws exceptions for some attributes, but not others.\n"
] |
[
3,
2,
2,
1,
1
] |
[] |
[] |
[
"attributes",
"constants",
"exception",
"python"
] |
stackoverflow_0001358711_attributes_constants_exception_python.txt
|
Q:
Wildcard Downloads with Python
How can I download files from a website using wildcards in Python? I have a site that I need to download files from periodically. The problem is the filenames change each time. A portion of the filename stays the same though. How can I use a wildcard to specify the unknown portion of the filename in a URL?
A:
If the filename changes, there must still be a link to the file somewhere (otherwise nobody would ever guess the filename). A typical approach is to get the HTML page that contains a link to the file, search through that looking for the link target, and then send a second request to get the actual file you're after.
Web servers do not generally implement such a "wildcard" facility as you describe, so you must use other techniques.
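A minimal sketch of that two-request approach (the URL and the filename pattern are assumptions for illustration):
import re
import urllib2

base = 'http://example.com/downloads/'
page = urllib2.urlopen(base).read()
# suppose the stable portion of the name is 'report_' and it ends in .zip
match = re.search(r'href="(report_[^"]*\.zip)"', page)
if match:
    filename = match.group(1)
    open(filename, 'wb').write(urllib2.urlopen(base + filename).read())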
A:
You could try logging into the ftp server using ftplib.
From the python docs:
from ftplib import FTP
ftp = FTP('ftp.cwi.nl') # connect to host, default port
ftp.login() # user anonymous, passwd anonymous@
The ftp object has a dir method that lists the contents of a directory.
You could use this listing to find the name of the file you want.
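For example, a sketch that filters the listing with fnmatch (the server address and name pattern are assumptions):
import fnmatch
from ftplib import FTP

ftp = FTP('ftp.example.com')
ftp.login()
names = ftp.nlst()  # plain listing of file names
wanted = fnmatch.filter(names, 'report_*.zip')  # match on the stable portion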
|
Wildcard Downloads with Python
|
How can I download files from a website using wildcards in Python? I have a site that I need to download files from periodically. The problem is the filenames change each time. A portion of the filename stays the same though. How can I use a wildcard to specify the unknown portion of the filename in a URL?
|
[
"If the filename changes, there must still be a link to the file somewhere (otherwise nobody would ever guess the filename). A typical approach is to get the HTML page that contains a link to the file, search through that looking for the link target, and then send a second request to get the actual file you're after.\nWeb servers do not generally implement such a \"wildcard\" facility as you describe, so you must use other techniques.\n",
"You could try logging into the ftp server using ftplib.\nFrom the python docs:\nfrom ftplib import FTP\nftp = FTP('ftp.cwi.nl') # connect to host, default port\nftp.login() # user anonymous, passwd anonymous@\n\nThe ftp object has a dir method that lists the contents of a directory.\nYou could use this listing to find the name of the file you want.\n"
] |
[
7,
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001359090_python.txt
|
Q:
Generate two lists at once
Background
The algorithm manipulates financial analytics. There are multiple lists of the same size and they are filtered into other lists for analysis. I am doing the same filtering on different but parallel lists. I could set it up so that a1,b1,c1 occur as a tuple in a list but then the analytics have to stripe the tuples the other way to do analysis (regression of one list against the other, beta, etc.).
What I want to do
I want to generate two different lists based on a third list:
>>> a = list(range(10))
>>> b = list(range(10,20))
>>> c = list(i & 1 for i in range(10))
>>>
>>> aprime = [a1 for a1, c1 in zip(a,c) if c1 == 0]
>>> bprime = [b1 for b1, c1 in zip(b,c) if c1 == 0]
>>> aprime
[0, 2, 4, 6, 8]
>>> bprime
[10, 12, 14, 16, 18]
It seems there should be a pythonic/functional programming/itertools way to create the two lists and iterate over the three lists only once. Something like:
aprime, bprime = [a1, b1 for a1, b1, c1 in zip(a,b,c) if c1 == 0]
But of course this generates a syntax error.
The question
Is there a pythonic way?
Micro-optimization shootout
The ugly but pythonic-to-the-max one-liner edges out the "just use a for-loop" solution and my original code in the ever popular timeit cage match:
>>> import timeit
>>> timeit.timeit("z2(a,b,c)", "n=100;a = list(range(n)); b = list(range(10,10+n)); c = list(i & 1 for i in range(n));\ndef z2(a,b,c):\n\treturn zip(*[(a1,b1) for a1,b1,c1 in zip(a,b,c) if c1==0])\n")
26.977873025761482
>>> timeit.timeit("z2(a,b,c)", "n=100;a = list(range(n)); b = list(range(10,10+n)); c = list(i & 1 for i in range(n));\ndef z2(a,b,c):\n\taprime, bprime = [], [];\n\tfor a1, b1, c1 in zip(a, b, c):\n\t\tif c1 == 0:\n\t\t\taprime.append(a1); bprime.append(b1);\n\treturn aprime, bprime\n")
32.232914169258947
>>> timeit.timeit("z2(a,b,c)", "n=100;a = list(range(n)); b = list(range(10,10+n)); c = list(i & 1 for i in range(n));\ndef z2(a,b,c):\n\treturn [a1 for a1, c1 in zip(a,c) if c1 == 0], [b1 for b1, c1 in zip(b,c) if c1 == 0]\n")
32.37302275847901
A:
Just use a for loop:
aprime = []
bprime = []
for a1, b1, c1 in zip(a, b, c):
if c1 == 0:
aprime.append(a1)
bprime.append(b1)
A:
This might win the ugliest code award, but it works in one line:
aprime, bprime = zip(*[(a1,b1) for a1,b1,c1 in zip(a,b,c) if c1==0])
A:
There's no way to create multiple lists at a time with list comprehensions--if you only want to iterate once you're going to need to do it some other way--possibly with a loop.
You could use a list comprehension to create a list of tuples, with the first element belonging to one list, the second to the other. But if you do want them as separate lists, you're going to have to use another operation to split it, anyway.
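For example, a sketch of that two-step version:
pairs = [(a1, b1) for a1, b1, c1 in zip(a, b, c) if c1 == 0]
aprime = [p[0] for p in pairs]
bprime = [p[1] for p in pairs]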
|
Generate two lists at once
|
Background
The algorithm manipulates financial analytics. There are multiple lists of the same size and they are filtered into other lists for analysis. I am doing the same filtering on different but parallel lists. I could set it up so that a1,b1,c1 occur as a tuple in a list but then the analytics have to stripe the tuples the other way to do analysis (regression of one list against the other, beta, etc.).
What I want to do
I want to generate two different lists based on a third list:
>>> a = list(range(10))
>>> b = list(range(10,20))
>>> c = list(i & 1 for i in range(10))
>>>
>>> aprime = [a1 for a1, c1 in zip(a,c) if c1 == 0]
>>> bprime = [b1 for b1, c1 in zip(b,c) if c1 == 0]
>>> aprime
[0, 2, 4, 6, 8]
>>> bprime
[10, 12, 14, 16, 18]
It seems there should be a pythonic/functional programming/itertools way to create the two lists and iterate over the three lists only once. Something like:
aprime, bprime = [a1, b1 for a1, b1, c1 in zip(a,b,c) if c1 == 0]
But of course this generates a syntax error.
The question
Is there a pythonic way?
Micro-optimization shootout
The ugly but pythonic-to-the-max one-liner edges out the "just use a for-loop" solution and my original code in the ever popular timeit cage match:
>>> import timeit
>>> timeit.timeit("z2(a,b,c)", "n=100;a = list(range(n)); b = list(range(10,10+n)); c = list(i & 1 for i in range(n));\ndef z2(a,b,c):\n\treturn zip(*[(a1,b1) for a1,b1,c1 in zip(a,b,c) if c1==0])\n")
26.977873025761482
>>> timeit.timeit("z2(a,b,c)", "n=100;a = list(range(n)); b = list(range(10,10+n)); c = list(i & 1 for i in range(n));\ndef z2(a,b,c):\n\taprime, bprime = [], [];\n\tfor a1, b1, c1 in zip(a, b, c):\n\t\tif c1 == 0:\n\t\t\taprime.append(a1); bprime.append(b1);\n\treturn aprime, bprime\n")
32.232914169258947
>>> timeit.timeit("z2(a,b,c)", "n=100;a = list(range(n)); b = list(range(10,10+n)); c = list(i & 1 for i in range(n));\ndef z2(a,b,c):\n\treturn [a1 for a1, c1 in zip(a,c) if c1 == 0], [b1 for b1, c1 in zip(b,c) if c1 == 0]\n")
32.37302275847901
|
[
"Just use a for loop:\naprime = []\nbprime = []\nfor a1, b1, c1 in zip(a, b, c):\n if c1 == 0:\n aprime.append(a1) \n bprime.append(b1) \n\n",
"This might win the ugliest code award, but it works in one line:\naprime, bprime = zip(*[(a1,b1) for a1,b1,c1 in zip(a,b,c) if c1==0])\n\n",
"There's no way to create multiple lists at a time with list comprehensions--if you only want to iterate once you're going to need to do it some other way--possible with a loop.\nYou could use a list comprehension to create a list of tuples, with the first element belonging to one list, the second to the other. But if you do want them as separate lists, you're going to have to use another operation to split it, anyway.\n"
] |
[
4,
4,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001359232_python.txt
|
Q:
Global hotkey for Python application in Gnome
I would like to assign a global hotkey to my Python application, running in Gnome. How do I do that? All I can find are two-year-old posts saying, well, pretty much nothing :-)
A:
There is python-keybinder which is that same code, but packaged standalone. Also available in debian and ubuntu repositories now.
https://github.com/engla/keybinder
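A minimal sketch of binding a hotkey with it (untested here; assumes the module is installed and a GTK main loop is running):
import gtk
import keybinder

def callback():
    print 'hotkey pressed'

keybinder.bind('<Ctrl><Alt>H', callback)
gtk.main()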
A:
Check out the Deskbar source code - they do this; afaik, they call out to a C library that interacts with X11 to do the job
|
Global hotkey for Python application in Gnome
|
I would like to assign a global hotkey to my Python application, running in Gnome. How do I do that? All I can find are two-year-old posts saying, well, pretty much nothing :-)
|
[
"There is python-keybinder which is that same code, but packaged standalone. Also available in debian and ubuntu repositories now.\nhttps://github.com/engla/keybinder\n",
"Check out the Deskbar source code - they do this; afaik, they call out a C library that interacts with X11 to do the job\n"
] |
[
9,
2
] |
[] |
[] |
[
"gnome",
"python"
] |
stackoverflow_0000302163_gnome_python.txt
|
Q:
Killing the background window when running a .exe from a Python program
The following is a line from a Python program that calls the "demo.exe" file. A window for demo.exe opens when it is called; is there any way for demo.exe to run in the "background"? That is, I don't want the window for it to show, I just want demo.exe to run.
p = subprocess.Popen(args = "demo.exe", stdout = subprocess.PIPE)
The output of demo.exe is used by the Python program in real time, so demo.exe is not something that I can run in advance of the Python program. demo.exe handles a lot of on-the-fly back-end calculations. I'm using Windows XP.
Thanks in advance!
A:
Thanks to another StackOverflow thread, I think this is what you need:
startupinfo = subprocess.STARTUPINFO()
startupinfo.dwFlags |= subprocess.STARTF_USESHOWWINDOW
p = subprocess.Popen(args = "demo.exe", stdout=subprocess.PIPE, startupinfo=startupinfo)
I tested on my Python 2.6 on XP and it does indeed hide the window.
A:
Try this:
from subprocess import Popen, PIPE, STARTUPINFO, STARTF_USESHOWWINDOW
startupinfo = STARTUPINFO()
startupinfo.dwFlags |= STARTF_USESHOWWINDOW
p = Popen(cmdlist, startupinfo=startupinfo, ...)
|
Killing the background window when running a .exe from a Python program
|
The following is a line from a Python program that calls the "demo.exe" file. A window for demo.exe opens when it is called; is there any way for demo.exe to run in the "background"? That is, I don't want the window for it to show, I just want demo.exe to run.
p = subprocess.Popen(args = "demo.exe", stdout = subprocess.PIPE)
The output of demo.exe is used by the Python program in real time, so demo.exe is not something that I can run in advance of the Python program. demo.exe handles a lot of on-the-fly back-end calculations. I'm using Windows XP.
Thanks in advance!
|
[
"Thanks to another StackOverflow thread, I think this is what you need:\nstartupinfo = subprocess.STARTUPINFO()\nstartupinfo.dwFlags |= subprocess.STARTF_USESHOWWINDOW\np = subprocess.Popen(args = \"demo.exe\", stdout=subprocess.PIPE, startupinfo=startupinfo)\n\nI tested on my Python 2.6 on XP and it does indeed hide the window.\n",
"Try this:\nfrom subprocess import Popen, PIPE, STARTUPINFO, STARTF_USESHOWWINDOW\nstartupinfo = STARTUPINFO()\nstartupinfo.dwFlags |= STARTF_USESHOWWINDOW\np = Popen(cmdlist, startupinfo=startupinfo, ...)\n\n"
] |
[
4,
3
] |
[] |
[] |
[
"executable",
"python"
] |
stackoverflow_0001360066_executable_python.txt
|
Q:
Python/Suds: Type not found: 'xs:complexType'
I have the following simple python test script that uses Suds to call a SOAP web service (the service is written in ASP.net):
from suds.client import Client
url = 'http://someURL.asmx?WSDL'
client = Client( url )
result = client.service.GetPackageDetails( "MyPackage" )
print result
When I run this test script I am getting the following error (used code markup as it doesn't wrap):
No handlers could be found for logger "suds.bindings.unmarshaller"
Traceback (most recent call last):
File "sudsTest.py", line 9, in <module>
result = client.service.GetPackageDetails( "t3db" )
File "build/bdist.cygwin-1.5.25-i686/egg/suds/client.py", line 240, in __call__
File "build/bdist.cygwin-1.5.25-i686/egg/suds/client.py", line 379, in call
File "build/bdist.cygwin-1.5.25-i686/egg/suds/client.py", line 240, in __call__
File "build/bdist.cygwin-1.5.25-i686/egg/suds/client.py", line 422, in call
File "build/bdist.cygwin-1.5.25-i686/egg/suds/client.py", line 480, in invoke
File "build/bdist.cygwin-1.5.25-i686/egg/suds/client.py", line 505, in send
File "build/bdist.cygwin-1.5.25-i686/egg/suds/client.py", line 537, in succeeded
File "build/bdist.cygwin-1.5.25-i686/egg/suds/bindings/binding.py", line 149, in get_reply
File "build/bdist.cygwin-1.5.25-i686/egg/suds/bindings/unmarshaller.py", line 303, in process
File "build/bdist.cygwin-1.5.25-i686/egg/suds/bindings/unmarshaller.py", line 88, in process
File "build/bdist.cygwin-1.5.25-i686/egg/suds/bindings/unmarshaller.py", line 104, in append
File "build/bdist.cygwin-1.5.25-i686/egg/suds/bindings/unmarshaller.py", line 181, in append_children
File "build/bdist.cygwin-1.5.25-i686/egg/suds/bindings/unmarshaller.py", line 104, in append
File "build/bdist.cygwin-1.5.25-i686/egg/suds/bindings/unmarshaller.py", line 181, in append_children
File "build/bdist.cygwin-1.5.25-i686/egg/suds/bindings/unmarshaller.py", line 104, in append
File "build/bdist.cygwin-1.5.25-i686/egg/suds/bindings/unmarshaller.py", line 181, in append_children
File "build/bdist.cygwin-1.5.25-i686/egg/suds/bindings/unmarshaller.py", line 102, in append
File "build/bdist.cygwin-1.5.25-i686/egg/suds/bindings/unmarshaller.py", line 324, in start
suds.TypeNotFound: Type not found: 'xs:complexType'
Looking at the source for the WSDL file's header (reformatted to fit):
<?xml version="1.0" encoding="utf-8" ?>
<wsdl:definitions xmlns:http="http://schemas.xmlsoap.org/wsdl/http/"
xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
xmlns:s="http://www.w3.org/2001/XMLSchema"
xmlns:soapenc="http://schemas.xmlsoap.org/soap/encoding/"
xmlns:tns="http://http://someInternalURL/webservices.asmx"
xmlns:tm="http://microsoft.com/wsdl/mime/textMatching/"
xmlns:mime="http://schemas.xmlsoap.org/wsdl/mime/"
targetNamespace="http://someURL.asmx"
xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/">
I am guessing based on the last line of output:
suds.TypeNotFound: Type not found: 'xs:complexType'
That I need to use Suds' doctor class to fix the schema, but being a SOAP newbie I don't know what exactly needs to be fixed in my case. Does anyone here have any experience using Suds to fix/correct schema?
A:
Ewall's resource is a good one. If you try to search in suds trac tickets, you could see that other people have problems similar to yours, but with different object types. It can be a good way to learn from its examples and how they import their namespaces.
The problem is that your wsdl contains
a schema definition that references
the (...) but fails to import
the
"http://schemas.xmlsoap.org/soap/encoding/"
namespace (and associated schema)
properly. The schema can be patched at
runtime using the schema ImportDoctor
as discussed here:
https://fedorahosted.org/suds/wiki/Documentation#FIXINGBROKENSCHEMAs.
This is a fairly common problem.
A commonly referenced schema (that is
not imported) is the SOAP section 5
encoding schema. This can now be fixed
as follows:
(all emphasis was mine).
You could try the lines that these documents provide, adding the namespaces presented in your WSDL. This can be a trial-and-error approach.
imp = Import('http://schemas.xmlsoap.org/soap/encoding/')
# Below is your targetNamespace presented in WSDL. Remember
# that you can add more namespaces by appending more imp.filter.add
imp.filter.add('http://someURL.asmx')
doctor = ImportDoctor(imp)
client = Client(url, doctor=doctor)
You didn't provide the WSDL you're working with; I suppose you have reasons not to show it to us, so I think you have to try these possibilities by yourself. Good luck!
|
Python/Suds: Type not found: 'xs:complexType'
|
I have the following simple python test script that uses Suds to call a SOAP web service (the service is written in ASP.net):
from suds.client import Client
url = 'http://someURL.asmx?WSDL'
client = Client( url )
result = client.service.GetPackageDetails( "MyPackage" )
print result
When I run this test script I am getting the following error (used code markup as it doesn't wrap):
No handlers could be found for logger "suds.bindings.unmarshaller"
Traceback (most recent call last):
File "sudsTest.py", line 9, in <module>
result = client.service.GetPackageDetails( "t3db" )
File "build/bdist.cygwin-1.5.25-i686/egg/suds/client.py", line 240, in __call__
File "build/bdist.cygwin-1.5.25-i686/egg/suds/client.py", line 379, in call
File "build/bdist.cygwin-1.5.25-i686/egg/suds/client.py", line 240, in __call__
File "build/bdist.cygwin-1.5.25-i686/egg/suds/client.py", line 422, in call
File "build/bdist.cygwin-1.5.25-i686/egg/suds/client.py", line 480, in invoke
File "build/bdist.cygwin-1.5.25-i686/egg/suds/client.py", line 505, in send
File "build/bdist.cygwin-1.5.25-i686/egg/suds/client.py", line 537, in succeeded
File "build/bdist.cygwin-1.5.25-i686/egg/suds/bindings/binding.py", line 149, in get_reply
File "build/bdist.cygwin-1.5.25-i686/egg/suds/bindings/unmarshaller.py", line 303, in process
File "build/bdist.cygwin-1.5.25-i686/egg/suds/bindings/unmarshaller.py", line 88, in process
File "build/bdist.cygwin-1.5.25-i686/egg/suds/bindings/unmarshaller.py", line 104, in append
File "build/bdist.cygwin-1.5.25-i686/egg/suds/bindings/unmarshaller.py", line 181, in append_children
File "build/bdist.cygwin-1.5.25-i686/egg/suds/bindings/unmarshaller.py", line 104, in append
File "build/bdist.cygwin-1.5.25-i686/egg/suds/bindings/unmarshaller.py", line 181, in append_children
File "build/bdist.cygwin-1.5.25-i686/egg/suds/bindings/unmarshaller.py", line 104, in append
File "build/bdist.cygwin-1.5.25-i686/egg/suds/bindings/unmarshaller.py", line 181, in append_children
File "build/bdist.cygwin-1.5.25-i686/egg/suds/bindings/unmarshaller.py", line 102, in append
File "build/bdist.cygwin-1.5.25-i686/egg/suds/bindings/unmarshaller.py", line 324, in start
suds.TypeNotFound: Type not found: 'xs:complexType'
Looking at the source for the WSDL file's header (reformatted to fit):
<?xml version="1.0" encoding="utf-8" ?>
<wsdl:definitions xmlns:http="http://schemas.xmlsoap.org/wsdl/http/"
xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
xmlns:s="http://www.w3.org/2001/XMLSchema"
xmlns:soapenc="http://schemas.xmlsoap.org/soap/encoding/"
xmlns:tns="http://http://someInternalURL/webservices.asmx"
xmlns:tm="http://microsoft.com/wsdl/mime/textMatching/"
xmlns:mime="http://schemas.xmlsoap.org/wsdl/mime/"
targetNamespace="http://someURL.asmx"
xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/">
I am guessing based on the last line of output:
suds.TypeNotFound: Type not found: 'xs:complexType'
That I need to use Suds' doctor class to fix the schema, but being a SOAP newbie I don't know what exactly needs to be fixed in my case. Does anyone here have any experience using Suds to fix/correct schema?
|
[
"Ewall's resource is a good one. If you try to search in suds trac tickets, you could see that other people have problems similar to yours, but with different object types. It can be a good way to learn from it's examples and how they import their namespaces.\n\nThe problem is that your wsdl contains\n a schema definition that references\n the (...) but fails to import\n the\n \"http://schemas.xmlsoap.org/soap/encoding/\"\n namespace (and associated schema)\n properly. The schema can be patched at\n runtime using the schema ImportDoctor\n as discussed here:\n https://fedorahosted.org/suds/wiki/Documentation#FIXINGBROKENSCHEMAs.\nThis is a fairly common problem.\nA commonly referenced schema (that is\n not imported) is the SOAP section 5\n encoding schema. This can now be fixed\n as follows:\n\n(all emphasis were mine).\nYou could try the lines that these documentations provide adding the namespaces presented in your WSDL. This can be a try-and-error aproach.\nimp = Import('http://schemas.xmlsoap.org/soap/encoding/')\n# Below is your targetNamespace presented in WSDL. Remember\n# that you can add more namespaces by appending more imp.filter.add\nimp.filter.add('http://someURL.asmx') \ndoctor = ImportDoctor(imp) \nclient = Client(url, doctor=doctor)\n\nYou didn't provide the WSDL you're working with, I suppose you have reasons to not showing to us... so I think you have to try these possibilities by yourself. Good luck!\n"
] |
[
14
] |
[] |
[] |
[
"python",
"soap",
"suds"
] |
stackoverflow_0001329190_python_soap_suds.txt
|
Q:
Can Selenium RC tests written in Python be integrated into PHPUnit?
I'm working on a large project in PHP and I'm running phpundercontrol with PHPUnit for my unit tests. I would like to use Selenium RC for running acceptance tests. Unfortunately the only person I have left to write tests only knows Python. Can Selenium tests written in Python be integrated into PHPUnit?
Thanks!
A:
The only thing that comes to my mind is running them through the shell.
It would be:
<?php
$output = shell_exec('python testScript.py');
echo $output;
?>
It's not too integrated with phpunit, but once you get the output in a variable ($output), you can then parse the text inside it to see if you have "E" or "." ("E" stands for errors in pyunit and "." stands for pass).
This is the best thing I could think of, hope it helps.
|
Can Selenium RC tests written in Python be integrated into PHPUnit?
|
I'm working on a large project in PHP and I'm running phpundercontrol with PHPUnit for my unit tests. I would like to use Selenium RC for running acceptance tests. Unfortunately the only person I have left to write tests only knows Python. Can Selenium tests written in Python be integrated into PHPUnit?
Thanks!
|
[
"The only thing that comes to my mind is running them through the shell.\nIt would be:\n<?php\n$output = shell_exec('python testScript.py');\necho $output;\n?>\n\nIt's not too integrated with phpunit, but once you get the output in a variable ($output), you can then parse the text inside it to see if you have \"E\" or \".\" (\"E\" states for errors in pyunit and \".\" states for pass).\nThis is the best thing I could think of, hope it helps.\n"
] |
[
1
] |
[] |
[] |
[
"phpunit",
"python",
"selenium"
] |
stackoverflow_0001350114_phpunit_python_selenium.txt
|
Q:
Generating a list from complex dictionary
I have a dictionary dict1['a'] = [ [1,2], [3,4] ] and need to generate a list out of it as l1 = [2, 4]. That is, a list out of the second element of each inner list. It can be a separate list or even the dictionary can be modified as dict1['a'] = [2,4].
A:
Given a list:
>>> lst = [ [1,2], [3,4] ]
You can extract the second element of each sublist with a simple list comprehension:
>>> [x[1] for x in lst]
[2, 4]
If you want to do this for every value in a dictionary, you can iterate over the dictionary. I'm not sure exactly what you want your final data to look like, but something like this may help:
>>> dict1 = {}
>>> dict1['a'] = [ [1,2], [3,4] ]
>>> [(k, [x[1] for x in v]) for k, v in dict1.items()]
[('a', [2, 4])]
dict.items() returns (key, value) pairs from the dictionary, as a list. So this code will extract each key in your dictionary and pair it with a list generated as above.
A:
Assuming that each value in the dictionary is a list of pairs, then this should do it for you:
[pair[1] for pairlist in dict1.values() for pair in pairlist]
As you can see:
dict1.values() gets you just the values in your dict,
for pairlist in dict1.values() gets you all the lists of pairs,
for pair in pairlist gets you all the pairs in each of those lists,
and pair[1] gets you the second value in each pair.
Try it out. The Python shell is your friend!...
>>> dict1 = {}
>>> dict1['a'] = [[1,2], [3,4]]
>>> dict1['b'] = [[5, 6], [42, 69], [220, 284]]
>>>
>>> dict1.values()
[[[1, 2], [3, 4]], [[5, 6], [42, 69], [220, 284]]]
>>>
>>> [pairlist for pairlist in dict1.values()]
[[[1, 2], [3, 4]], [[5, 6], [42, 69], [220, 284]]]
>>> # No real difference here, but we can refer to each list now.
>>>
>>> [pair for pairlist in dict1.values() for pair in pairlist]
[[1, 2], [3, 4], [5, 6], [42, 69], [220, 284]]
>>>
>>> # Finally...
>>> [pair[1] for pairlist in dict1.values() for pair in pairlist]
[2, 4, 6, 69, 284]
While I'm at it, I'll just say: ipython loves you!
A:
a list out of the second element of
each inner list
that sounds like [sl[1] for sl in dict1['a']] -- so what's the QUESTION?!-)
|
Generating a list from complex dictionary
|
I have a dictionary dict1['a'] = [ [1,2], [3,4] ] and need to generate a list out of it as l1 = [2, 4]. That is, a list out of the second element of each inner list. It can be a separate list or even the dictionary can be modified as dict1['a'] = [2,4].
|
[
"Given a list:\n>>> lst = [ [1,2], [3,4] ]\n\nYou can extract the second element of each sublist with a simple list comprehension:\n>>> [x[1] for x in lst]\n[2, 4]\n\nIf you want to do this for every value in a dictionary, you can iterate over the dictionary. I'm not sure exactly what you want your final data to look like, but something like this may help:\n>>> dict1 = {}\n>>> dict1['a'] = [ [1,2], [3,4] ]\n>>> [(k, [x[1] for x in v]) for k, v in dict1.items()] \n[('a', [2, 4])]\n\ndict.items() returns (key, value) pairs from the dictionary, as a list. So this code will extract each key in your dictionary and pair it with a list generated as above.\n",
"Assuming that each value in the dictionary is a list of pairs, then this should do it for you:\n[pair[1] for pairlist in dict1.values() for pair in pairlist]\n\nAs you can see:\n\ndict1.values() gets you just the values in your dict,\nfor pairlist in dict1.values() gets you all the lists of pairs,\nfor pair in pairlist gets you all the pairs in each of those lists,\nand pair[1] gets you the second value in each pair.\n\nTry it out. The Python shell is your friend!...\n>>> dict1 = {}\n>>> dict1['a'] = [[1,2], [3,4]]\n>>> dict1['b'] = [[5, 6], [42, 69], [220, 284]]\n>>> \n>>> dict1.values()\n[[[1, 2], [3, 4]], [[5, 6], [42, 69], [220, 284]]]\n>>> \n>>> [pairlist for pairlist in dict1.values()]\n[[[1, 2], [3, 4]], [[5, 6], [42, 69], [220, 284]]]\n>>> # No real difference here, but we can refer to each list now.\n>>> \n>>> [pair for pairlist in dict1.values() for pair in pairlist]\n[[1, 2], [3, 4], [5, 6], [42, 69], [220, 284]]\n>>> \n>>> # Finally...\n>>> [pair[1] for pairlist in dict1.values() for pair in pairlist]\n[2, 4, 6, 69, 284]\n\nWhile I'm at it, I'll just say: ipython loves you!\n",
"\na list out of the second element of\n each inner list\n\nthat sounds like [sl[1] for sl in dict1['a']] -- so what's the QUESTION?!-)\n"
] |
[
8,
2,
0
] |
[] |
[] |
[
"python",
"python_3.x"
] |
stackoverflow_0001360507_python_python_3.x.txt
|
Q:
How to make custom buttons in wx?
I'd like to make a custom button in wxPython. Where should I start, how should I do it?
A:
Here is a skeleton which you can use to draw a totally custom button; it's up to your imagination how it looks or behaves
class MyButton(wx.PyControl):
def __init__(self, parent, id, bmp, text, **kwargs):
wx.PyControl.__init__(self,parent, id, **kwargs)
self.Bind(wx.EVT_LEFT_DOWN, self._onMouseDown)
self.Bind(wx.EVT_LEFT_UP, self._onMouseUp)
self.Bind(wx.EVT_LEAVE_WINDOW, self._onMouseLeave)
self.Bind(wx.EVT_ENTER_WINDOW, self._onMouseEnter)
self.Bind(wx.EVT_ERASE_BACKGROUND,self._onEraseBackground)
self.Bind(wx.EVT_PAINT,self._onPaint)
self._mouseIn = self._mouseDown = False
def _onMouseEnter(self, event):
self._mouseIn = True
def _onMouseLeave(self, event):
self._mouseIn = False
def _onMouseDown(self, event):
self._mouseDown = True
def _onMouseUp(self, event):
self._mouseDown = False
self.sendButtonEvent()
def sendButtonEvent(self):
event = wx.CommandEvent(wx.wxEVT_COMMAND_BUTTON_CLICKED, self.GetId())
event.SetInt(0)
event.SetEventObject(self)
self.GetEventHandler().ProcessEvent(event)
def _onEraseBackground(self,event):
# reduce flicker
pass
def _onPaint(self, event):
dc = wx.BufferedPaintDC(self)
dc.SetFont(self.GetFont())
dc.SetBackground(wx.Brush(self.GetBackgroundColour()))
dc.Clear()
# draw whatever you want to draw
# draw glossy bitmaps e.g. dc.DrawBitmap
if self._mouseIn:
pass# on mouserover may be draw different bitmap
if self._mouseDown:
pass # draw different image text
A:
When I wanted to learn how to make custom widgets (buttons included) I referenced Andrea Gavana's page (full working example there) on the wxPyWiki and Cody Precord's platebutton (the source is in wx.lib.platebtn, also here in svn). Look at both of those and you should be able to build most any custom widget you would like.
A:
You can extend the default button class, like this for example:
class RedButton(wx.Button):
def __init__(self, *a, **k):
wx.Button.__init__(self, *a, **k)
self.SetBackgroundColour('RED')
# more customization here
Every time you put a RedButton into your layout, it should appear red (haven't tested it though).
A:
Try using a Generic Button or a Bitmap Button.
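For example, a minimal sketch using the generic buttons from wx.lib.buttons (the bitmap path is an assumption):
import wx
import wx.lib.buttons as buttons

app = wx.App(False)
frame = wx.Frame(None, title='demo')
bmp = wx.Bitmap('icon.png')  # assumed image file
btn = buttons.GenBitmapTextButton(frame, -1, bmp, 'Click me')
frame.Show()
app.MainLoop()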
|
How to make custom buttons in wx?
|
I'd like to make a custom button in wxPython. Where should I start, how should I do it?
|
[
"Here is a skeleton which you can use to draw totally custom button, its up to your imagination how it looks or behaves\nclass MyButton(wx.PyControl):\n\n def __init__(self, parent, id, bmp, text, **kwargs):\n wx.PyControl.__init__(self,parent, id, **kwargs)\n\n self.Bind(wx.EVT_LEFT_DOWN, self._onMouseDown)\n self.Bind(wx.EVT_LEFT_UP, self._onMouseUp)\n self.Bind(wx.EVT_LEAVE_WINDOW, self._onMouseLeave)\n self.Bind(wx.EVT_ENTER_WINDOW, self._onMouseEnter)\n self.Bind(wx.EVT_ERASE_BACKGROUND,self._onEraseBackground)\n self.Bind(wx.EVT_PAINT,self._onPaint)\n\n self._mouseIn = self._mouseDown = False\n\n def _onMouseEnter(self, event):\n self._mouseIn = True\n\n def _onMouseLeave(self, event):\n self._mouseIn = False\n\n def _onMouseDown(self, event):\n self._mouseDown = True\n\n def _onMouseUp(self, event):\n self._mouseDown = False\n self.sendButtonEvent()\n\n def sendButtonEvent(self):\n event = wx.CommandEvent(wx.wxEVT_COMMAND_BUTTON_CLICKED, self.GetId())\n event.SetInt(0)\n event.SetEventObject(self)\n self.GetEventHandler().ProcessEvent(event)\n\n def _onEraseBackground(self,event):\n # reduce flicker\n pass\n\n def _onPaint(self, event):\n dc = wx.BufferedPaintDC(self)\n dc.SetFont(self.GetFont())\n dc.SetBackground(wx.Brush(self.GetBackgroundColour()))\n dc.Clear()\n # draw whatever you want to draw\n # draw glossy bitmaps e.g. dc.DrawBitmap\n if self._mouseIn:\n pass# on mouserover may be draw different bitmap\n if self._mouseDown:\n pass # draw different image text \n\n",
"When I wanted to learn how to make custom widgets (buttons included) I referenced Andrea Gavana's page (full working example there) on the wxPyWiki and Cody Precord's platebutton (the source is in wx.lib.platebtn, also here in svn). Look at both of those and you should be able to build most any custom widget you would like.\n",
"You can extend the default button class, like this for example:\nclass RedButton(wx.Button):\n def __init__(self, *a, **k):\n wx.Button.__init__(self, *a, **k)\n self.SetBackgroundColour('RED')\n # more customization here\n\nEvery time you put a RedButton into your layout, it should appear red (haven't tested it though). \n",
"Try using a Generic Button or a Bitmap Button.\n"
] |
[
8,
5,
3,
2
] |
[] |
[] |
[
"button",
"python",
"wxpython"
] |
stackoverflow_0001351448_button_python_wxpython.txt
|
Q:
Why does namespace after method call changes?
I'm creating a class, but having some trouble with the namespacing in python.
You can see the code below, and it mostly works ok, but after the call to guiFrame._stateMachine() the time module is somehow not defined anymore.
If I re-import the time module in _stateMachine() it works. But why is the time module not in the namespace when I import it in the head?
Am I missing something?
The error message:
File "C:\Scripts\Python\GUI.py", line 106, in <module>
guiFrame._stateMachine()
File "C:\Scripts\Python\GUI.py", line 74, in _stateMachine
self.tempfile.write('%s cpuUMTS %s\n' % (time.asctime(time.localt
f.load.cpuThreadsValue['10094']))
UnboundLocalError: local variable 'time' referenced before assignment
The code:
import os
import cpu_load_internal
import throughput_internal
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
from Tkinter import *
import tkMessageBox
import time
class GUIFramework(Frame):
"""This is the GUI"""
def __init__(self,master=None):
"""Initialize yourself"""
"""Initialise the base class"""
Frame.__init__(self,master)
"""Set the Window Title"""
self.master.title("Type Some Text")
"""Display the main window
with a little bit of padding"""
self.grid(padx=10,pady=10)
self.CreateWidgets()
plt.figure(1)
def _setup_parsing(self):
self.load = cpu_load_internal.CPULoad('C:\Templogs')
self.throughput = throughput_internal.MACThroughput('C:\Templogs')
self.tempfile = open('output.txt','w')
self.state = 0
def _parsing(self):
self.load.read_lines()
self.throughput.read_lines()
self.cpuLoad.set(self.load.cpuThreadsValue['10094'])
self.macThroughput.set(self.throughput.macULThroughput)
def __change_state1(self):
self.state = 2
def __change_state3(self):
self.state = 3
def CreateWidgets(self):
"""Create all the widgets that we need"""
"""Create the Text"""
self.cpuLoad = StringVar()
self.lbText1 = Label(self, textvariable=self.cpuLoad)
self.lbText1.grid(row=0, column=0)
self.macThroughput = StringVar()
self.lbText2 = Label(self, textvariable=self.macThroughput)
self.lbText2.grid(row=0, column=1)
self.butStart = Button(self, text = 'Start', command = self.__change_state1)
self.butStart.grid(row=1, column=0)
self.butStop = Button(self, text = 'Stop', command = self.__change_state3)
self.butStop.grid(row=1, column=1)
def _stateMachine(self):
if (self.state == 2):
print self.throughput.macULUpdate
print self.load.cpuUpdate
if self.load.cpuUpdate:
self.load.cpuUpdate = 0
print 'cpuUMTS %s\n' % (self.load.cpuThreadsValue['10094'])
self.tempfile.write('%s cpuUMTS %s\n' % (time.asctime(time.localtime()), self.load.cpuThreadsValue['10094']))
if self.throughput.macULUpdate:
self.throughput.macULUpdate = 0
print 'macUL %s %s\n' % (self.throughput.macULThroughput, self.throughput.macULThroughputUnit)
self.tempfile.write('%s macUL %s %s\n' % (time.asctime(time.localtime()), self.throughput.macULThroughput, self.throughput.macULThroughputUnit))
if (self.state == 3):
self.tempfile.seek(0)
plt.plot([1,2,3],[1,4,6])
plt.savefig('test.png')
self.state == 0
while 1:
try:
line = (self.tempfile.next())
except:
break
if 'cpuUMTS' in line:
line.split
time = 4
if __name__ == "__main__":
guiFrame = GUIFramework()
print dir(guiFrame)
guiFrame._setup_parsing()
guiFrame.state = 2
while(1):
guiFrame._parsing()
guiFrame._stateMachine()
guiFrame.update()
time.sleep(0.1)
A:
Why do you assign to time? You can't use it as a local variable; it will shadow the module! If you look closely, it complains that you use time before you assign to it, since the assignment below makes it a local variable in _stateMachine.
time = 4
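A minimal demonstration of the same pitfall:
import time

def demo():
    print time.asctime()  # raises UnboundLocalError: 'time' is local here...
    time = 4              # ...because of this later assignment

demo()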
A:
You seem to use time as a variable. What happens here:
"C:\Scripts\Python\GUI.py", line 74
A:
You try to assign to the variable time in this method:
time = 4
Therefore the compiler treats time as a local variable throughout the method, which isn't what you intended. And that is the reason why you get this error when you use the module time, even though the use comes before the assignment.
|
Why does namespace after method call changes?
|
I'm creating a class, but having some trouble with the namespacing in python.
You can see the code below, and it mostly works ok, but after the call to guiFrame._stateMachine() the time module is somehow not defined anymore.
If I re-import the time module in _stateMachine() it works. But why is the time module not in the namespace when I import it in the head?
Am I missing something?
The error message:
File "C:\Scripts\Python\GUI.py", line 106, in <module>
guiFrame._stateMachine()
File "C:\Scripts\Python\GUI.py", line 74, in _stateMachine
self.tempfile.write('%s cpuUMTS %s\n' % (time.asctime(time.localtime()), self.load.cpuThreadsValue['10094']))
UnboundLocalError: local variable 'time' referenced before assignment
The code:
import os
import cpu_load_internal
import throughput_internal
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
from Tkinter import *
import tkMessageBox
import time
class GUIFramework(Frame):
"""This is the GUI"""
def __init__(self,master=None):
"""Initialize yourself"""
"""Initialise the base class"""
Frame.__init__(self,master)
"""Set the Window Title"""
self.master.title("Type Some Text")
"""Display the main window
with a little bit of padding"""
self.grid(padx=10,pady=10)
self.CreateWidgets()
plt.figure(1)
def _setup_parsing(self):
self.load = cpu_load_internal.CPULoad('C:\Templogs')
self.throughput = throughput_internal.MACThroughput('C:\Templogs')
self.tempfile = open('output.txt','w')
self.state = 0
def _parsing(self):
self.load.read_lines()
self.throughput.read_lines()
self.cpuLoad.set(self.load.cpuThreadsValue['10094'])
self.macThroughput.set(self.throughput.macULThroughput)
def __change_state1(self):
self.state = 2
def __change_state3(self):
self.state = 3
def CreateWidgets(self):
"""Create all the widgets that we need"""
"""Create the Text"""
self.cpuLoad = StringVar()
self.lbText1 = Label(self, textvariable=self.cpuLoad)
self.lbText1.grid(row=0, column=0)
self.macThroughput = StringVar()
self.lbText2 = Label(self, textvariable=self.macThroughput)
self.lbText2.grid(row=0, column=1)
self.butStart = Button(self, text = 'Start', command = self.__change_state1)
self.butStart.grid(row=1, column=0)
self.butStop = Button(self, text = 'Stop', command = self.__change_state3)
self.butStop.grid(row=1, column=1)
def _stateMachine(self):
if (self.state == 2):
print self.throughput.macULUpdate
print self.load.cpuUpdate
if self.load.cpuUpdate:
self.load.cpuUpdate = 0
print 'cpuUMTS %s\n' % (self.load.cpuThreadsValue['10094'])
self.tempfile.write('%s cpuUMTS %s\n' % (time.asctime(time.localtime()), self.load.cpuThreadsValue['10094']))
if self.throughput.macULUpdate:
self.throughput.macULUpdate = 0
print 'macUL %s %s\n' % (self.throughput.macULThroughput, self.throughput.macULThroughputUnit)
self.tempfile.write('%s macUL %s %s\n' % (time.asctime(time.localtime()), self.throughput.macULThroughput, self.throughput.macULThroughputUnit))
if (self.state == 3):
self.tempfile.seek(0)
plt.plot([1,2,3],[1,4,6])
plt.savefig('test.png')
self.state == 0
while 1:
try:
line = (self.tempfile.next())
except:
break
if 'cpuUMTS' in line:
line.split
time = 4
if __name__ == "__main__":
guiFrame = GUIFramework()
print dir(guiFrame)
guiFrame._setup_parsing()
guiFrame.state = 2
while(1):
guiFrame._parsing()
guiFrame._stateMachine()
guiFrame.update()
time.sleep(0.1)
|
[
"Why do you assign to time? You can't use it as local variable, it will overshadow the module! If you look closely it complains that you use time before you assign to it -- since to use it as a local variable in _stateMachine.\ntime = 4\n\n",
"You seem to use time as a variable. What happens here:\n\n\"C:\\Scripts\\Python\\GUI.py\", line 74\n\n",
"You try to assing to the variable time in this method:\ntime = 4\n\nTherefore the compiler assumes that time must be a local variable, which isn't true. And that is the reason why you get this error when you want to use the module time, even if you try to use the module before you assign to time.\n"
] |
[
7,
2,
2
] |
[] |
[] |
[
"namespaces",
"python"
] |
stackoverflow_0001361393_namespaces_python.txt
|
Q:
KeyboardInterrupt in Windows?
How to generate a KeyboardInterrupt in Windows?
while True:
try:
print 'running'
except KeyboardInterrupt:
break
I expected CTRL+C to stop this program but it doesn't work.
A:
Your code works fine when run in a Windows console.
Ctrl+C generating a KeyboardInterrupt is a console feature. If you run it from a text editor like SciTE, it will not work.
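If you need to trigger the exception without a console at all -- for instance while testing from an editor -- here is a minimal sketch, assuming CPython 2, where the thread module provides interrupt_main():
import thread, threading, time

def fire():
    time.sleep(2)
    thread.interrupt_main()  # raises KeyboardInterrupt in the main thread

threading.Thread(target=fire).start()
try:
    while True:
        time.sleep(0.1)
except KeyboardInterrupt:
    print 'interrupted'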
|
KeyboardInterrupt in Windows?
|
How to generate a KeyboardInterrupt in Windows?
while True:
try:
print 'running'
except KeyboardInterrupt:
break
I expected CTRL+C to stop this program but it doesn't work.
|
[
"Your code is working ok when ran into a windows console.\nCtrl+C generating a KeyboardInterrupt is a console feature. If you run it from a text editor like SciTE, it will not work.\n"
] |
[
2
] |
[] |
[] |
[
"keyboardinterrupt",
"python",
"windows"
] |
stackoverflow_0001361217_keyboardinterrupt_python_windows.txt
|
Q:
stop python object going out of scope in c++
Is there a way to transfer a new class instance (a Python class that inherits a C++ class) into C++ without having to hold on to the returned object, and just treat it as a C++ pointer?
For example:
C++
object pyInstance = GetLocalDict()["makeNewGamePlay"]();
CGEPYGameMode* m_pGameMode = extract< CGEPYGameMode* >( pyInstance );
Python:
class Alpha(CGEPYGameMode):
def someFunct(self):
pass
def makeNewGamePlay():
return Alpha()
pyInstance is the Python class instance and m_pGameMode is a pointer to the C++ base class of the same instance. However, if I store the pointer and let the object go out of scope, the Python object is cleaned up. Is there any way to keep only the C++ pointer without the object getting cleaned up?
More info: python object to native c++ pointer
A:
You must increment the reference count of the pyInstance. That will prevent Python from deleting it. When you are ready to delete it, you can simply decrement the reference count and Python will clean it up for you.
|
stop python object going out of scope in c++
|
Is there a way to transfer a new class instance (a Python class that inherits a C++ class) into C++ without having to hold on to the returned object, and just treat it as a C++ pointer?
For example:
C++
object pyInstance = GetLocalDict()["makeNewGamePlay"]();
CGEPYGameMode* m_pGameMode = extract< CGEPYGameMode* >( pyInstance );
Python:
class Alpha(CGEPYGameMode):
def someFunct(self):
pass
def makeNewGamePlay():
return Alpha()
pyInstance is the Python class instance and m_pGameMode is a pointer to the C++ base class of the same instance. However, if I store the pointer and let the object go out of scope, the Python object is cleaned up. Is there any way to keep only the C++ pointer without the object getting cleaned up?
More info: python object to native c++ pointer
|
[
"You must increment the reference count of the pyInstance. That will prevent Python from deleting it. When you are ready to delete it, you can simply decrement the reference count and Python will clean it up for you.\n"
] |
[
2
] |
[] |
[] |
[
"boost",
"c++",
"object",
"python"
] |
stackoverflow_0001361028_boost_c++_object_python.txt
|
Q:
IMAP interface access to existing user messaging system in Python
I am running a site where users can private message each other. As with any other such website, to read and mark their messages, users must log on to the site.
I wish to expose an IMAP interface so that users may read their site messages using their standard email client. There would be a few complications with such an approach, such as the userid to email-address mapping and what should happen if the user replies to a mail, but for the time being I'm not too concerned about these issues.
Is there any lightweight raw IMAP server in Python to which I could just add few rules or logic to expose an IMAP interface to user's messages?
A:
Twisted Mail project:
Twisted Mail contains high-level, efficient protocol implementations for both clients and servers of SMTP, POP3, and IMAP4. Additionally, it contains an "out of the box" combination SMTP/POP3 virtual-hosting mail server. Also included is a read/write Maildir implementation and a basic Mail Exchange calculator (depends on Twisted Names).
The examples for IMAP4 contain only a client. Look into the source for more information.
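As a rough illustration, here is a minimal (untested) sketch of wiring up Twisted's IMAP4 server class; the subclass body is a hypothetical placeholder where you would plug in authentication and a mailbox implementation backed by your site's message store:
from twisted.internet import protocol, reactor
from twisted.mail import imap4

class SiteIMAP4Server(imap4.IMAP4Server):
    pass  # override authentication / mailbox lookup to expose site messages

factory = protocol.ServerFactory()
factory.protocol = SiteIMAP4Server
reactor.listenTCP(143, factory)
reactor.run()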
|
IMAP interface access to existing user messaging system in Python
|
I am running a site where users can private message each other. As with any other such website, to read and mark their messages, users must log on to the site.
I wish to expose an IMAP interface so that users may read their site messages using their standard email client. There would be a few complications with such an approach, such as the userid to email-address mapping and what should happen if the user replies to a mail, but for the time being I'm not too concerned about these issues.
Is there any lightweight raw IMAP server in Python to which I could just add few rules or logic to expose an IMAP interface to user's messages?
|
[
"Twisted Mail project:\n\nTwisted Mail contains high-level, efficient protocol implementations for both clients and servers of SMTP, POP3, and IMAP4. Additionally, it contains an \"out of the box\" combination SMTP/POP3 virtual-hosting mail server. Also included is a read/write Maildir implementation and a basic Mail Exchange calculator (depends on Twisted Names). \n\nThe examples for IMAP4 contain only a client. Look into the source for more information.\n"
] |
[
1
] |
[] |
[] |
[
"imap",
"interface",
"python"
] |
stackoverflow_0001361671_imap_interface_python.txt
|
Q:
Match multiple patterns in a multiline string
I have some data which looks like this:
PMID- 19587274
OWN - NLM
DP - 2009 Jul 8
TI - Domain general mechanisms of perceptual decision making in human cortex.
PG - 8675-87
AB - To successfully interact with objects in the environment, sensory evidence must
be continuously acquired, interpreted, and used to guide appropriate motor
responses. For example, when driving, a red
AD - Perception and Cognition Laboratory, Department of Psychology, University of
California, San Diego, La Jolla, California 92093, USA.
PMID- 19583148
OWN - NLM
DP - 2009 Jun
TI - Ursodeoxycholic acid for treatment of cholestasis in patients with hepatic
amyloidosis.
PG - 482-6
AB - BACKGROUND: Amyloidosis represents a group of different diseases characterized by
extracellular accumulation of pathologic fibrillar proteins in various tissues
AD - Asklepios Hospital, Department of Medicine, Langen, Germany.
[email protected]
I want to write a regex which can match the sentences which follow PMID, TI and AB.
Is it possible to get these in a one-shot regex?
I have spent nearly the whole day trying to figure out a regex, and the closest I could get is this:
reg4 = r'PMID- (?P<pmid>[0-9]*).*TI.*- (?P<title>.*)PG.*AB.*- (?P<abstract>.*)AD'
for i in re.finditer(reg4, data, re.S | re.M): print i.groupdict()
This will return the matches only in the second "set" of data, and not all of them.
Any idea? Thank you!
A:
How about:
import re
reg4 = re.compile(r'^(?:PMID- (?P<pmid>[0-9]+)|TI - (?P<title>.*?)^PG|AB - (?P<abstract>.*?)^AD)', re.MULTILINE | re.DOTALL)
for i in reg4.finditer(data):
print i.groupdict()
Output:
{'pmid': '19587274', 'abstract': None, 'title': None}
{'pmid': None, 'abstract': None, 'title': 'Domain general mechanisms of perceptual decision making in human cortex.\n'}
{'pmid': None, 'abstract': 'To successfully interact with objects in the environment, sensory evidence must\n be continuously acquired, interpreted, and used to guide appropriate motor\n responses. For example, when driving, a red \n', 'title': None}
{'pmid': '19583148', 'abstract': None, 'title': None}
{'pmid': None, 'abstract': None, 'title': 'Ursodeoxycholic acid for treatment of cholestasis in patients with hepatic\n amyloidosis.\n'}
{'pmid': None, 'abstract': 'BACKGROUND: Amyloidosis represents a group of different diseases characterized by\n extracellular accumulation of pathologic fibrillar proteins in various tissues\n', 'title': None}
Edit
As a verbose RE to make it more understandable (I think verbose REs should be used for anything but the simplest of expressions, but that's just my opinion!):
#!/usr/bin/python
import re
reg4 = re.compile(r'''
^ # Start of a line (due to re.MULTILINE, this may match at the start of any line)
(?: # Non capturing group with multiple options, first option:
PMID-\s # Literal "PMID-" followed by a space
(?P<pmid>[0-9]+) # Then a string of one or more digits, group as 'pmid'
| # Next option:
TI\s{2}-\s # "TI", two spaces, a hyphen and a space
(?P<title>.*?) # The title, a non greedy match that will capture everything up to...
^PG # The characters PG at the start of a line
| # Next option
AB\s{2}-\s # "AB - "
(?P<abstract>.*?) # The abstract, a non greedy match that will capture everything up to...
^AD # "AD" at the start of a line
)
''', re.MULTILINE | re.DOTALL | re.VERBOSE)
for i in reg4.finditer(data):
print i.groupdict()
Note that you could replace the ^PG and ^AD with ^\S to make it more general (you want to match everything up until the first non-space at the start of a line).
Edit 2
If you want to catch the whole thing in one regexp, get rid of the starting (?:, the ending ) and change the | characters to .*?:
#!/usr/bin/python
import re
reg4 = re.compile(r'''
^ # Start of a line (due to re.MULTILINE, this may match at the start of any line)
PMID-\s # Literal "PMID-" followed by a space
(?P<pmid>[0-9]+) # Then a string of one or more digits, group as 'pmid'
.*? # Next part:
TI\s{2}-\s # "TI", two spaces, a hyphen and a space
(?P<title>.*?) # The title, a non greedy match that will capture everything up to...
^PG # The characters PG at the start of a line
.*? # Next option
AB\s{2}-\s # "AB - "
(?P<abstract>.*?) # The abstract, a non greedy match that will capture everything up to...
^AD # "AD" at the start of a line
''', re.MULTILINE | re.DOTALL | re.VERBOSE)
for i in reg4.finditer(data):
print i.groupdict()
This gives:
{'pmid': '19587274', 'abstract': 'To successfully interact with objects in the environment, sensory evidence must\n be continuously acquired, interpreted, and used to guide appropriate motor\n responses. For example, when driving, a red \n', 'title': 'Domain general mechanisms of perceptual decision making in human cortex.\n'}
{'pmid': '19583148', 'abstract': 'BACKGROUND: Amyloidosis represents a group of different diseases characterized by\n extracellular accumulation of pathologic fibrillar proteins in various tissues\n', 'title': 'Ursodeoxycholic acid for treatment of cholestasis in patients with hepatic\n amyloidosis.\n'}
A:
How about not using regexps for this task, but instead using programmatic code that splits by newlines, looks at prefix codes using .startswith() etc?
The code would be longer that way but everyone would be able to understand it, without having to come to stackoverflow for help.
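For example, here is a minimal sketch of that approach, assuming the field codes and leading-whitespace continuation lines shown in the sample data:
def parse_records(text):
    records, current, field = [], None, None
    for line in text.splitlines():
        if line.startswith('PMID'):
            current = {'pmid': line.split('-', 1)[1].strip()}
            records.append(current)
            field = None
        elif line.startswith('TI '):
            field = 'title'
            current[field] = line.split('-', 1)[1].strip()
        elif line.startswith('AB '):
            field = 'abstract'
            current[field] = line.split('-', 1)[1].strip()
        elif line.startswith(' ') and field:
            current[field] += ' ' + line.strip()  # continuation line
        else:
            field = None
    return records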
A:
Another regex:
reg4 = r'(?<=PMID- )(?P<pmid>.*?)(?=OWN - ).*?(?<=TI - )(?P<title>.*?)(?=PG - ).*?(?<=AB - )(?P<abstract>.*?)(?=AD - )'
A:
If the order of the lines can change, you could use this pattern:
reg4 = re.compile(r"""
^
(?: PMID \s*-\s* (?P<pmid> [0-9]+ ) \n
| TI \s*-\s* (?P<title> .* (?:\n[^\S\n].*)* ) \n
| AB \s*-\s* (?P<abstract> .* (?:\n[^\S\n].*)* ) \n
| .+\n
)+
""", re.MULTILINE | re.VERBOSE)
It will match consecutive non-empty lines, and capture the PMID, TI and AB items.
The item values can span multiple lines, as long as the lines following the first start with a whitespace character.
"[^\S\n]" matches any whitespace character (\s), except newline (\n).
".* (?:\n[^\S\n].*)*" matches consecutive lines that start with a whitespace character.
".+\n" matches any other non-empty line.
|
Match multiple patterns in a multiline string
|
I have some data which looks like this:
PMID- 19587274
OWN - NLM
DP - 2009 Jul 8
TI - Domain general mechanisms of perceptual decision making in human cortex.
PG - 8675-87
AB - To successfully interact with objects in the environment, sensory evidence must
be continuously acquired, interpreted, and used to guide appropriate motor
responses. For example, when driving, a red
AD - Perception and Cognition Laboratory, Department of Psychology, University of
California, San Diego, La Jolla, California 92093, USA.
PMID- 19583148
OWN - NLM
DP - 2009 Jun
TI - Ursodeoxycholic acid for treatment of cholestasis in patients with hepatic
amyloidosis.
PG - 482-6
AB - BACKGROUND: Amyloidosis represents a group of different diseases characterized by
extracellular accumulation of pathologic fibrillar proteins in various tissues
AD - Asklepios Hospital, Department of Medicine, Langen, Germany.
[email protected]
I want to write a regex which can match the sentences which follow PMID, TI and AB.
Is it possible to get these in a one-shot regex?
I have spent nearly the whole day trying to figure out a regex, and the closest I could get is this:
reg4 = r'PMID- (?P<pmid>[0-9]*).*TI.*- (?P<title>.*)PG.*AB.*- (?P<abstract>.*)AD'
for i in re.finditer(reg4, data, re.S | re.M): print i.groupdict()
This will return the matches only in the second "set" of data, and not all of them.
Any idea? Thank you!
|
[
"How about:\nimport re\nreg4 = re.compile(r'^(?:PMID- (?P<pmid>[0-9]+)|TI - (?P<title>.*?)^PG|AB - (?P<abstract>.*?)^AD)', re.MULTILINE | re.DOTALL)\nfor i in reg4.finditer(data):\n print i.groupdict()\n\nOutput:\n{'pmid': '19587274', 'abstract': None, 'title': None}\n{'pmid': None, 'abstract': None, 'title': 'Domain general mechanisms of perceptual decision making in human cortex.\\n'}\n{'pmid': None, 'abstract': 'To successfully interact with objects in the environment, sensory evidence must\\n be continuously acquired, interpreted, and used to guide appropriate motor\\n responses. For example, when driving, a red \\n', 'title': None}\n{'pmid': '19583148', 'abstract': None, 'title': None}\n{'pmid': None, 'abstract': None, 'title': 'Ursodeoxycholic acid for treatment of cholestasis in patients with hepatic\\n amyloidosis.\\n'}\n{'pmid': None, 'abstract': 'BACKGROUND: Amyloidosis represents a group of different diseases characterized by\\n extracellular accumulation of pathologic fibrillar proteins in various tissues\\n', 'title': None}\n\nEdit\nAs a verbose RE to make it more understandable (I think verbose REs should be used for anything but the simplest of expressions, but that's just my opinion!):\n#!/usr/bin/python\nimport re\nreg4 = re.compile(r'''\n ^ # Start of a line (due to re.MULTILINE, this may match at the start of any line)\n (?: # Non capturing group with multiple options, first option:\n PMID-\\s # Literal \"PMID-\" followed by a space\n (?P<pmid>[0-9]+) # Then a string of one or more digits, group as 'pmid'\n | # Next option:\n TI\\s{2}-\\s # \"TI\", two spaces, a hyphen and a space\n (?P<title>.*?) # The title, a non greedy match that will capture everything up to...\n ^PG # The characters PG at the start of a line\n | # Next option\n AB\\s{2}-\\s # \"AB - \"\n (?P<abstract>.*?) # The abstract, a non greedy match that will capture everything up to...\n ^AD # \"AD\" at the start of a line\n )\n ''', re.MULTILINE | re.DOTALL | re.VERBOSE)\nfor i in reg4.finditer(data):\n print i.groupdict()\n\nNote that you could replace the ^PG and ^AD with ^\\S to make it more general (you want to match everything up until the first non-space at the start of a line).\nEdit 2\nIf you want to catch the whole thing in one regexp, get rid of the starting (?:, the ending ) and change the | characters to .*?:\n#!/usr/bin/python\nimport re\nreg4 = re.compile(r'''\n ^ # Start of a line (due to re.MULTILINE, this may match at the start of any line)\n PMID-\\s # Literal \"PMID-\" followed by a space\n (?P<pmid>[0-9]+) # Then a string of one or more digits, group as 'pmid'\n .*? # Next part:\n TI\\s{2}-\\s # \"TI\", two spaces, a hyphen and a space\n (?P<title>.*?) # The title, a non greedy match that will capture everything up to...\n ^PG # The characters PG at the start of a line\n .*? # Next option\n AB\\s{2}-\\s # \"AB - \"\n (?P<abstract>.*?) # The abstract, a non greedy match that will capture everything up to...\n ^AD # \"AD\" at the start of a line\n ''', re.MULTILINE | re.DOTALL | re.VERBOSE)\nfor i in reg4.finditer(data):\n print i.groupdict()\n\nThis gives:\n{'pmid': '19587274', 'abstract': 'To successfully interact with objects in the environment, sensory evidence must\\n be continuously acquired, interpreted, and used to guide appropriate motor\\n responses. 
For example, when driving, a red \\n', 'title': 'Domain general mechanisms of perceptual decision making in human cortex.\\n'}\n{'pmid': '19583148', 'abstract': 'BACKGROUND: Amyloidosis represents a group of different diseases characterized by\\n extracellular accumulation of pathologic fibrillar proteins in various tissues\\n', 'title': 'Ursodeoxycholic acid for treatment of cholestasis in patients with hepatic\\n amyloidosis.\\n'}\n\n",
"How about not using regexps for this task, but instead using programmatic code that splits by newlines, looks at prefix codes using .startswith() etc? \nThe code would be longer that way but everyone would be able to understand it, without having to come to stackoverflow for help.\n",
"Another regex:\nreg4 = r'(?<=PMID- )(?P<pmid>.*?)(?=OWN - ).*?(?<=TI - )(?P<title>.*?)(?=PG - ).*?(?<=AB - )(?P<abstract>.*?)(?=AD - )'\n\n",
"If the order of the lines can change, you could use this pattern:\nreg4 = re.compile(r\"\"\"\n ^\n (?: PMID \\s*-\\s* (?P<pmid> [0-9]+ ) \\n\n | TI \\s*-\\s* (?P<title> .* (?:\\n[^\\S\\n].*)* ) \\n\n | AB \\s*-\\s* (?P<abstract> .* (?:\\n[^\\S\\n].*)* ) \\n\n | .+\\n\n )+\n\"\"\", re.MULTILINE | re.VERBOSE)\n\nIt will match consecutive non-empty lines, and capture the PMID, TI and AB items.\nThe item values can span multiple lines, as long as the lines following the first start with a whitespace character.\n\n\"[^\\S\\n]\" matches any whitespace character (\\s), except newline (\\n).\n\".* (?:\\n[^\\S\\n].*)*\" matches consecutive lines that start with a whitespace character.\n\".+\\n\" matches any other non-empty line.\n\n"
] |
[
2,
2,
0,
0
] |
[
"The problem were the greedy qualifiers. Here's a regex that is more specific, and non-greedy:\n#!/usr/bin/python\nimport re\nfrom pprint import pprint\ndata = open(\"testdata.txt\").read()\n\nreg4 = r'''\n ^PMID # Start matching at the string PMID\n \\s*?- # As little whitespace as possible up to the next '-'\n \\s*? # As little whitespcase as possible\n (?P<pmid>[0-9]+) # Capture the field \"pmid\", accepting only numeric characters\n .*?TI # next, match any character up to the first occurrence of 'TI'\n \\s*?- # as little whitespace as possible up to the next '-'\n \\s*? # as little whitespace as possible\n (?P<title>.*?)PG # capture the field <title> accepting any character up the the next occurrence of 'PG'\n .*?AB # match any character up to the following occurrence of 'AB'\n \\s*?- # As little whitespace as possible up to the next '-'\n \\s*? # As little whitespcase as possible\n (?P<abstract>.*?)AD # capture the fiels <abstract> accepting any character up to the next occurrence of 'AD'\n'''\nfor i in re.finditer(reg4, data, re.S | re.M | re.VERBOSE):\n print 78*\"-\"\n pprint(i.groupdict())\n\nOutput:\n------------------------------------------------------------------------------\n{'abstract': ' To successfully interact with objects in the environment,\n sensory evidence must\\n be continuously acquired, interpreted, and\n used to guide appropriate motor\\n responses. For example, when\n driving, a red \\n',\n 'pmid': '19587274',\n 'title': ' Domain general mechanisms of perceptual decision making in\n human cortex.\\n'}\n------------------------------------------------------------------------------\n{'abstract': ' BACKGROUND: Amyloidosis represents a group of different\n diseases characterized by\\n extracellular accumulation of pathologic\n fibrillar proteins in various tissues\\n',\n 'pmid': '19583148',\n 'title': ' Ursodeoxycholic acid for treatment of cholestasis in patients\n with hepatic\\n amyloidosis.\\n'}\n\nYou may want to strip the whitespace of each field after scanning.\n"
] |
[
-1
] |
[
"python",
"regex"
] |
stackoverflow_0001361373_python_regex.txt
|
Q:
Why can saving an MSWord document silently fail?
I need to change some custom properties values in many files. Here is an example of code - how I do it for a single file:
import win32com.client
MSWord = win32com.client.Dispatch("Word.Application")
MSWord.Visible = False
doc = MSWord.Documents.Open(file)
doc.CustomDocumentProperties('Some Property').Value = 'Some New Value'
doc.Save()
doc.Close()
MSWord.Quit()
Running the same code for "Excel.Application" (with minor changes - just to make it work) gives me an excellent result. However, when I'm using doc.Save() or doc.SaveAs(same_file) for MSWord, it silently fails. I don't know why, but changes are not saved.
Now my workaround is to use SaveAs with a different file name, which also works well. But I want to understand why I get such strange behaviour for MSWord files and how it can be fixed.
Edit: I changed my code so as not to misdirect people about the silent failure caused by try/except.
However, thanks to everyone for finding that defect in my code :)
A:
You were using the CustomDocumentProperties in the wrong way, and as other people pointed out, you could not see it, because you were swallowing the exception.
Moreover - and here I could not find anything in the documentation - the Saved property was not reset while changing properties, and for this reason the file was not changed.
This is the correct code:
msoPropertyTypeBoolean = 0
msoPropertyTypeDate = 1
msoPropertyTypeFloat = 2
msoPropertyTypeNumber = 3
msoPropertyTypeString = 4
import win32com.client
MSWord = win32com.client.Dispatch("Word.Application")
MSWord.Visible = False
doc = MSWord.Documents.Open(file)
csp = doc.CustomDocumentProperties
csp.Add('Some Property', False, msoPropertyTypeString, 'Some New Value')
doc.Saved = False
doc.Save()
doc.Close()
MSWord.Quit()
Note: there is no error handling, and it is definitely not of production quality, but it should be enough for you to implement your functionality.
Finally, I am guessing the values of the property types (and for the string type the guess is correct) but for the others there could be some issue.
A:
You're saving the file only if Value was successfully changed. Maybe you could try to remove the try-except clause and see what is actually happening when your file is not saved. And, btw, using a bare except is not good practice.
A:
(a) Check to see if you have file write access
(b) Make sure your code catches the COMException
(c) Are you gracefully terminating Excel/Word when creating multiple documents?
Darknight
A:
It fails silently since you ignore errors (except: pass).
The most common reason why saving a Word file usually fails is that it's open in Word.
|
Why can saving an MSWord document silently fail?
|
I need to change some custom properties values in many files. Here is an example of code - how I do it for a single file:
import win32com.client
MSWord = win32com.client.Dispatch("Word.Application")
MSWord.Visible = False
doc = MSWord.Documents.Open(file)
doc.CustomDocumentProperties('Some Property').Value = 'Some New Value'
doc.Save()
doc.Close()
MSWord.Quit()
Running the same code for "Excel.Application" (with minor changes - just to make it work) gives me an excellent result. However, when I'm using doc.Save() or doc.SaveAs(same_file) for MSWord, it silently fails. I don't know why, but changes are not saved.
Now my workaround is to use SaveAs with a different file name, which also works well. But I want to understand why I get such strange behaviour for MSWord files and how it can be fixed.
Edit: I changed my code so as not to misdirect people about the silent failure caused by try/except.
However, thanks to everyone for finding that defect in my code :)
|
[
"You were using the CustomDocumentProperties in the wrong way, and as other people pointed out, you could not see it, because you were swallowing the exception.\nMoreover - and here I could not find anything in the documentation - the Saved property was not reset while changing properties, and for this reason the file was not changed.\nThis is the correct code:\nmsoPropertyTypeBoolean = 0\nmsoPropertyTypeDate = 1\nmsoPropertyTypeFloat = 2\nmsoPropertyTypeNumber = 3\nmsoPropertyTypeString = 4\n\nimport win32com.client\n\nMSWord = win32com.client.Dispatch(\"Word.Application\")\nMSWord.Visible = False\n\ndoc = MSWord.Documents.Open(file)\ncsp = doc.CustomDocumentProperties\ncsp.Add('Some Property', False, msoPropertyTypeString, 'Some New Value')\ndoc.Saved = False\ndoc.Save()\ndoc.Close()\n\nMSWord.Quit()\n\nNote: there is no error handling, and it is definitely not of production quality, but it should be enough for you to implement your functionality.\nFinally, I am guessing the values of the property types (and for the string type the guess is correct) but for the others there could be some issue.\n",
"you're saving file only if Value was successfully changed. May be you could try to remove try-except clause and see what is actually happening when you're file is not saved. And, btw, using bare except is not a good practice.\n",
"\n(a) Check to see if you have file write access\n(b) Make sure you code catches using the COMException\n(C) are you gracefully terminating excel/words when creating multiple documents\n\nDarknight\n",
"It fails silently since you ignore errors (except: pass).\nThe most common reason why saving a Word file usually fails is that it's open in Word.\n"
] |
[
3,
1,
0,
0
] |
[] |
[] |
[
"automation",
"ms_word",
"ole",
"python"
] |
stackoverflow_0001340950_automation_ms_word_ole_python.txt
|
Q:
Is it safe to rely on condition evaluation order in if statements?
Is it bad practice to use the following format when my_var can be None?
if my_var and 'something' in my_var:
#do something
The issue is that 'something' in my_var will throw a TypeError if my_var is None.
Or should I use:
if my_var:
if 'something' in my_var:
#do something
or
try:
if 'something' in my_var:
#do something
except TypeError:
pass
To rephrase the question, which of the above is the best practice in Python (if any)?
Alternatives are welcome!
A:
It's safe to depend on the order of conditionals (Python reference here), specifically because of the problem you point out - it's very useful to be able to short-circuit evaluation that could cause problems in a string of conditionals.
This sort of code pops up in most languages:
IF exists(variable) AND variable.doSomething()
THEN ...
A:
Yes it is safe, it's explicitly and very clearly defined in the language reference:
The expression x and y first evaluates
x; if x is false, its value is
returned; otherwise, y is evaluated
and the resulting value is returned.
The expression x or y first evaluates
x; if x is true, its value is
returned; otherwise, y is evaluated
and the resulting value is returned.
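A tiny demonstration of that rule -- the right-hand operand is never evaluated once the left-hand side settles the result:
def boom():
    raise RuntimeError('never reached')

my_var = None
print my_var is not None and boom()  # prints False; boom() is never called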
A:
I may be being a little pedantic here, but I would say the best answer is
if my_var is not None and 'something' in my_var:
#do something
The difference being the explicit check for None, rather than the implicit conversion of my_var to True or False.
While I'm sure in your case the distinction isn't important, in the more general case it would be quite possible for the variable to not be None but still evaluate to False, for example an integer value of 0 or an empty list.
So contrary to most of the other posters' assertions that it's safe, I'd say that it's safe as long as you're explicit. If you're not convinced then consider this very contrived class:
class Contrived(object):
def __contains__(self, s):
return True
def __nonzero__(self):
return False
my_var = Contrived()
if 'something' in my_var:
print "Yes the condition is true"
if my_var and 'something' in my_var:
print "But this statement won't get reached."
if my_var is not None and 'something' in my_var:
print "Whereas this one will."
Yes I know that's not a realistic example, but variations do happen in real code, especially when None is used to indicate a default function argument.
A:
It's not that simple. As a C# dude I am very used to doing something like:
if(x != null && ! string.isnullorempty(x.Name))
{
//do something
}
The above works great and is evaluated as expected. However in VB.Net the following would produce a result you were NOT expecting:
If Not x Is Nothing **And** Not String.IsNullOrEmpty(x.Name) Then
'do something
End If
The above will generate an exception. The correct syntax should be
If Not x Is Nothing **AndAlso** Not String.IsNullOrEmpty(x.Name) Then
'do something
End If
Note the very subtle difference. This had me confused for about 10 minutes (way too long) and is why C# (and other) dudes need to be very careful when coding in other languages.
A:
I would go with the try/except, but it depends on what you know about the variable.
If you are expecting that the variable will exist most of the time, then a try/except is fewer operations. If you are expecting the variable to be None most of the time, then an IF statement will be fewer operations.
A:
It's perfectly safe and I do it all the time.
|
Is it safe to rely on condition evaluation order in if statements?
|
Is it bad practice to use the following format when my_var can be None?
if my_var and 'something' in my_var:
#do something
The issue is that 'something' in my_var will throw a TypeError if my_var is None.
Or should I use:
if my_var:
if 'something' in my_var:
#do something
or
try:
if 'something' in my_var:
#do something
except TypeError:
pass
To rephrase the question, which of the above is the best practice in Python (if any)?
Alternatives are welcome!
|
[
"It's safe to depend on the order of conditionals (Python reference here), specifically because of the problem you point out - it's very useful to be able to short-circuit evaluation that could cause problems in a string of conditionals.\nThis sort of code pops up in most languages:\nIF exists(variable) AND variable.doSomething()\n THEN ...\n\n",
"Yes it is safe, it's explicitly and very clearly defined in the language reference:\n\nThe expression x and y first evaluates\n x; if x is false, its value is\n returned; otherwise, y is evaluated\n and the resulting value is returned.\nThe expression x or y first evaluates\n x; if x is true, its value is\n returned; otherwise, y is evaluated\n and the resulting value is returned.\n\n",
"I may be being a little pedantic here, but I would say the best answer is\nif my_var is not None and 'something' in my_var:\n #do something\n\nThe difference being the explicit check for None, rather than the implicit conversion of my_var to True or False.\nWhile I'm sure in your case the distinction isn't important, in the more general case it would be quite possible for the variable to not be None but still evaluate to False, for example an integer value of 0 or an empty list.\nSo contrary to most of the other posters' assertions that it's safe, I'd say that it's safe as long as you're explicit. If you're not convinced then consider this very contrived class:\nclass Contrived(object):\n def __contains__(self, s):\n return True\n def __nonzero__(self):\n return False\n\nmy_var = Contrived()\nif 'something' in my_var:\n print \"Yes the condition is true\"\nif my_var and 'something' in my_var:\n print \"But this statement won't get reached.\"\nif my_var is not None and 'something' in my_var:\n print \"Whereas this one will.\"\n\nYes I know that's not a realistic example, but variations do happen in real code, especially when None is used to indicate a default function argument.\n",
"It's not that simple. As a C# dude I am very used to doing something like:\nif(x != null && ! string.isnullorempty(x.Name))\n{\n //do something\n}\n\nThe above works great and is evaluated as expected. However in VB.Net the following would produce a result you were NOT expecting:\nIf Not x Is Nothing **And** Not String.IsNullOrEmpty(x.Name) Then\n\n 'do something\n\nEnd If\n\nThe above will generate an exception. The correct syntax should be\nIf Not x Is Nothing **AndAlso** Not String.IsNullOrEmpty(x.Name) Then\n\n 'do something\n\nEnd If\n\nNote the very subtle difference. This had me confused for about 10 minutes (way too long) and is why C# (and other) dudes needs to be very careful when coding in other languages.\n",
"I would go with the try/except, but it depends on what you know about the variable.\nIf you are expecting that the variable will exist most of the time, then a try/except is less operations. If you are expecting the variable to be None most of the time, then an IF statement will be less operations.\n",
"It's perfectly safe and I do it all the time.\n"
] |
[
104,
41,
4,
2,
1,
0
] |
[] |
[] |
[
"if_statement",
"python"
] |
stackoverflow_0000752373_if_statement_python.txt
|
Q:
How to unload a .NET assembly reference in IronPython
After loading a reference to an assembly with something like:
import clr
clr.AddRferenceToFileAndPath(r'C:\foo.dll')
How can I unload the assembly again?
Why would anyone ever want to do this? Because I'm recompiling foo.dll and want to reload it, but the compiler is giving me a fuss, since IronPython is allready accessing foo.dll.
A:
.NET itself doesn't support unloading just a single assembly. Instead, you need to unload a whole AppDomain. I don't know exactly how IronPython works with AppDomains, but that's the normal .NET way of doing things. (Load the assembly into a new AppDomain, use it, discard the AppDomain, create a new AppDomain with the new version of the file etc.)
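For reference, here is a minimal sketch of that pattern from IronPython; it only shows creating and unloading the domain, and real code would also need a MarshalByRefObject-derived proxy to call into foo.dll across the boundary, which is omitted here:
from System import AppDomain

domain = AppDomain.CreateDomain('scratch')
# load foo.dll and use its types inside 'domain' via a remoting proxy ...
AppDomain.Unload(domain)  # releases foo.dll so the compiler can overwrite it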
|
How to unload a .NET assembly reference in IronPython
|
After loading a reference to an assembly with something like:
import clr
clr.AddReferenceToFileAndPath(r'C:\foo.dll')
How can I unload the assembly again?
Why would anyone ever want to do this? Because I'm recompiling foo.dll and want to reload it, but the compiler is giving me a fuss, since IronPython is already accessing foo.dll.
|
[
".NET itself doesn't support unloading just a single assembly. Instead, you need to unload a whole AppDomain. I don't know exactly how IronPython works with AppDomains, but that's the normal .NET way of doing things. (Load the assembly into a new AppDomain, use it, discard the AppDomain, create a new AppDomain with the new version of the file etc.)\n"
] |
[
6
] |
[] |
[] |
[
".net",
"ironpython",
"python",
"python.net"
] |
stackoverflow_0001362114_.net_ironpython_python_python.net.txt
|
Q:
Django: save a new value in a ManyToManyField
I gave details on my code: I don't know why my table is empty (it seems that it was emptied out after calling save_model, but I'm not sure).
class PostAdmin(admin.ModelAdmin):
def save_model(self, request, post, form, change):
post.save()
# Authors must be saved after saving post
print form.cleaned_data['authors'] # []
print request.user # pg
authors = form.cleaned_data['authors'] or request.user
print authors # pg
post.authors.add(authors)
print post.authors.all() # [<User: pg>]
# But on a shell, table is empty. WTF ?! :
# select * from journal_post_authors;
# Empty set (0.00 sec)
A:
You need to save the post again, after the post.authors.add(authors).
A:
I found the solution. I've just changed the value in cleaned_data and it works:
if not form.cleaned_data['authors']:
form.cleaned_data['authors'] = [request.user]
Thanks for helping me. :)
A:
I don't know what kind of field you're using, but shouldn't there be one of these in there somewhere? (or something similar)
author = form.cleaned_data['authors']
User.objects.get(id=author)
|
Django: save a new value in a ManyToManyField
|
I gave details on my code: I don't know why my table is empty (it seems that it was emptied out after calling save_model, but I'm not sure).
class PostAdmin(admin.ModelAdmin):
def save_model(self, request, post, form, change):
post.save()
# Authors must be saved after saving post
print form.cleaned_data['authors'] # []
print request.user # pg
authors = form.cleaned_data['authors'] or request.user
print authors # pg
post.authors.add(authors)
print post.authors.all() # [<User: pg>]
# But on a shell, table is empty. WTF ?! :
# select * from journal_post_authors;
# Empty set (0.00 sec)
|
[
"You need to save the post again, after the post.authors.add(authors). \n",
"I found the solution. I've just changed the value in cleaned_data and it works :\nif not form.cleaned_data['authors']:\n form.cleaned_data['authors'] = [request.user]\n\nThank for helping me. :)\n",
"I don't know what kind of field you're using, but shouldn't there be one of these in there somewhere? (or something similar)\nauthor = form.cleaned_data['authors']\nUser.objects.get(id=author)\n\n"
] |
[
2,
1,
0
] |
[] |
[] |
[
"add",
"django",
"manytomanyfield",
"python",
"save"
] |
stackoverflow_0001356761_add_django_manytomanyfield_python_save.txt
|
Q:
Django: how to retrieve an object selected by the ``object_detail`` generic view?
Hi (sorry for my ugly English)
I wonder if it is possible to retrieve the object that was selected with the object_detail generic view. For example:
from django.views.generic.list_detail import object_detail
def my_view(request, slug):
response = object_detail(request, MyModel.objects.all(),
slug=slug,
slug_field='slug',
template_object_name='object')
# Here I need my object in ``response`` to do something after.
I don't know where the object is.
A:
You can't get the object this way, since object_detail simply returns a rendered response. If you need it, you'll just have to get it manually:
object = MyModel.objects.get(slug=slug)
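For instance, here is a short sketch combining the manual fetch with the generic view, reusing the question's MyModel/slug setup:
from django.shortcuts import get_object_or_404
from django.views.generic.list_detail import object_detail

def my_view(request, slug):
    obj = get_object_or_404(MyModel, slug=slug)
    # ... do something with obj here ...
    return object_detail(request, MyModel.objects.all(), slug=slug,
                         slug_field='slug', template_object_name='object')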
|
Django: how to retrieve an object selected by the ``object_detail`` generic view?
|
Hi (sorry for my ugly English)
I wonder if it is possible to retrieve the object that was selected with the object_detail generic view. For example:
from django.views.generic.list_detail import object_detail
def my_view(request, slug):
response = object_detail(request, MyModel.objects.all(),
slug=slug,
slug_field='slug',
template_object_name='object')
# Here I need my object in ``response`` to do something after.
I don't know where the object is.
|
[
"You can't get the object this way, since object_detail simply returns a rendered response. If you need it, you'll just have to get it manually:\nobject = MyModel.objects.get(slug=slug)\n\n"
] |
[
5
] |
[] |
[] |
[
"django",
"generics",
"python",
"view"
] |
stackoverflow_0001362782_django_generics_python_view.txt
|
Q:
Encapsulation severely hurts performance?
I know this question is kind of stupid, and maybe it's just a part of writing code, but it seems defining simple functions can really hurt performance severely... I've tried this simple test:
def make_legal_foo_string(x):
return "This is a foo string: " + str(x)
def sum_up_to(x):
return x*(x+1)/2
def foo(x):
return [make_legal_foo_string(x),sum_up_to(x),x+1]
def bar(x):
return ''.join([str(foo(x))," -- bar !! "])
it's very good style and makes code clear, but it can be three times as slow as just writing it out literally. That is inescapable for functions that can have side effects, but it's actually almost trivial to define some functions that should literally be replaced with lines of code every time they appear: translate the source code into that and only then compile. The same goes, I think, for magic numbers; it doesn't take a lot of time to read them from memory, but if they're not supposed to be changed, then why not just replace every instance of 'magic' with a literal before the code compiles?
A:
Function call overheads are not big; you won't normally notice them. You only see them in this case because your actual code (x*x) is itself so completely trivial. In any real program that does real work, the amount of time spent in function-calling overhead will be negligibly small.
(Not that I'd really recommend using foo, identity and square in the example, in any case; they're so trivial it's just as readable to have them inline and they don't really encapsulate or abstract anything.)
if they're not supposed to be changed then why not just replace every instance of 'magic' with a literal before the code compiles?
Because programs are written to be easy for you to read and maintain. You could replace constants with their literal values, but it'd make the program harder to work with, for a benefit that is so tiny you'll probably never even be able to measure it: the height of premature optimisation.
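If you want to see the size of the overhead for yourself, here is a quick sketch using timeit (the module-level timeit() helper exists since Python 2.6; exact numbers will vary by machine):
import timeit

print timeit.timeit('f(10)', setup='def f(x): return x*x', number=1000000)  # with a call
print timeit.timeit('10*10', number=1000000)                                # inlined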
A:
I don't know how good python compilers are, but the answer to this question for many languages is that the compiler will optimize calls to small procedures / functions / methods by inlining them. In fact, in some language implementations you generally get better performance by NOT trying to "micro-optimize" the code yourself.
A:
Encapsulation is about one thing and one thing only: Readability. If you're really so worried about performance that you're willing to start stripping out encapsulated logic, you may as well just start coding in assembly.
Encapsulation also assists in debugging and feature adding. Consider the following: lets say you have a simple game and need to add code that depletes the players health under some circumstances. Easy, yes?
def DamagePlayer(dmg):
    player.health -= dmg
This IS very trivial code, so it's very tempting to simply scatter "player.health -=" everywhere. But what if later you want to add a powerup that halves damage done to the player while active? If the logic is still encapsulated, it's easy:
def DamagePlayer(dmg):
if player.hasCoolPowerUp:
player.health -= dmg / 2
    else:
player.health -= dmg
Now, consider if you had neglected to encapsulate that bit of logic because of its simplicity. Now you're looking at coding the same logic into 50 different places, at least one of which you are almost certain to forget, which leads to weird bugs like: "When player has powerup all damage is halved except when hit by AlienSheep enemies..."
Do you want to have problems with Alien Sheep? I don't think so. :)
In all seriousness, the point I'm trying to make is that encapsulation is a very good thing in the right circumstances. Of course, it's also easy to over-encapsulate, which can be just as problematic. Also, there are situations where the speed really truly does matter (though they are rare), and that extra few clock cycles is worth it. About the only way to find the right balance is practice. Don't shun encapsulation because it's slower, though. The benefits usually far outweigh the costs.
A:
What you are talking about is the effect of inlining functions for gaining efficiency.
It is certainly true in your Python example that encapsulation hurts performance. But there are some counter-examples to it:
In Java, defining getters & setters instead of defining public member variables does not result in performance degradation, as the JIT inlines the getters & setters.
Sometimes calling a function repeatedly may be better than inlining, as the code executed may then fit in the cache. Inlining may cause code explosion...
A:
Figuring out what to make into a function and what to just include inline is something of an art. Many factors (performance, readability, maintainability) feed into the equation.
I actually find your example kind of silly in many ways - a function that just returns its argument? Unless it's an overload that's changing the rules, it's stupid. A function to square things? Again, why bother. Your function 'foo' should probably return a string, so that it can be used directly:
''.join([foo(x), " -- bar !! "])
That's probably a more correct level of encapsulation in this example.
As I say, it really depends on the circumstances. Unfortunately, this is the sort of thing that doesn't lend itself well to examples.
A:
IMO, this is related to Function Call Costs. Which are negligible usually, but not zero. Splitting the code in a lot of very small functions may hurt. Especially in interpreted languages where full optimization is not available.
Inlining functions may improve performance but it may also deteriorate. See, for example, C++ FQA Lite for explanations (“Inlining can make code faster by eliminating function call overhead, or slower by generating too much code, causing instruction cache misses”). This is not C++ specific. Better leave optimizations for compiler/interpreter unless they are really necessary.
By the way, I don't see a huge difference between two versions:
$ python bench.py
fine-grained function decomposition: 5.46632194519
one-liner: 4.46827578545
$ python --version
Python 2.5.2
I think this result is acceptable. See bench.py in the pastebin.
A:
There is a performance hit for using functions, since there is the overhead of jumping to a new address, pushing registers on the stack, and returning at the end. This overhead is very small, however, and even in performance-critical systems worrying about such overhead is most likely premature optimization.
Many languages avoid these issues in small, frequently called functions by using inlining, which is essentially what you do above.
Python does not do inlining. The closest thing you can do is use macros to replace the function calls.
This sort of performance issue is better served by another language; if you need the sort of speed gained by inlining (mostly marginal, and sometimes detrimental), then you need to consider not using Python for whatever you are working on.
A:
There is a good technical reason why what you suggested is impossible. In Python, functions, constants, and everything else are accessible at runtime and can be changed at any time if necessary; they could also be changed by outside modules. This is an explicit promise of Python and one would need some extremely important reasons to break it.
For example, here's the common logging idiom:
# beginning of the file xxx.py
log = lambda *x: None
def something():
...
log(...)
...
(here log does nothing), and then at some other module or at the interactive prompt:
import xxx
xxx.log = print
xxx.something()
As you see, here log is modified by completely different module --- or by the user --- so that logging now works. That would be impossible if log was optimized away.
Similarly, if an exception was to happen in make_legal_foo_string (this is possible, e.g. if x.__str__() is broken and returns None) you'd be hit with a source quote from a wrong line and even perhaps from a wrong file in your scenario.
There are some tools that in fact apply some optimizations to Python code, but I don't think of the kind you suggested.
|
Encapsulation severely hurts performance?
|
I know this question is kind of stupid, and maybe it's just a part of writing code, but it seems defining simple functions can really hurt performance severely... I've tried this simple test:
def make_legal_foo_string(x):
return "This is a foo string: " + str(x)
def sum_up_to(x):
return x*(x+1)/2
def foo(x):
return [make_legal_foo_string(x),sum_up_to(x),x+1]
def bar(x):
return ''.join([str(foo(x))," -- bar !! "])
it's very good style and makes code clear, but it can be three times as slow as just writing it out literally. That is inescapable for functions that can have side effects, but it's actually almost trivial to define some functions that should literally be replaced with lines of code every time they appear: translate the source code into that and only then compile. The same goes, I think, for magic numbers; it doesn't take a lot of time to read them from memory, but if they're not supposed to be changed, then why not just replace every instance of 'magic' with a literal before the code compiles?
|
[
"Function call overheads are not big; you won't normally notice them. You only see them in this case because your actual code (x*x) is itself so completely trivial. In any real program that does real work, the amount of time spent in function-calling overhead will be negligably small.\n(Not that I'd really recommend using foo, identity and square in the example, in any case; they're so trivial it's just as readable to have them inline and they don't really encapsulate or abstract anything.)\n\nif they're not supposed to be changed then why not just replace every instance of 'magic' with a literal before the code compiles?\n\nBecause programs are written for to be easy for you to read and maintain. You could replace constants with their literal values, but it'd make the program harder to work with, for a benefit that is so tiny you'll probably never even be able to measure it: the height of premature optimisation.\n",
"I don't know how good python compilers are, but the answer to this question for many languages is that the compiler will optimize calls to small procedures / functions / methods by inlining them. In fact, in some language implementations you generally get better performance by NOT trying to \"micro-optimize\" the code yourself.\n",
"Encapsulation is about one thing and one thing only: Readability. If you're really so worried about performance that you're willing to start stripping out encapsulated logic, you may as well just start coding in assembly.\nEncapsulation also assists in debugging and feature adding. Consider the following: lets say you have a simple game and need to add code that depletes the players health under some circumstances. Easy, yes?\ndef DamagePlayer(dmg):\n player.health -= dmg;\n\nThis IS very trivial code, so it's very tempting to simply scatter \"player.health -=\" everywhere. But what if later you want to add a powerup that halves damage done to the player while active? If the logic is still encapsulated, it's easy:\ndef DamagePlayer(dmg):\n if player.hasCoolPowerUp:\n player.health -= dmg / 2\n else\n player.health -= dmg\n\nNow, consider if you had neglected to encapulate that bit of logic because of it's simplicity. Now you're looking at coding the same logic into 50 different places, at least one of which you are almost certain to forget, which leads to weird bugs like: \"When player has powerup all damage is halved except when hit by AlienSheep enemies...\"\nDo you want to have problems with Alien Sheep? I don't think so. :)\nIn all seriousness, the point I'm trying to make is that encapsulation is a very good thing in the right circumstances. Of course, it's also easy to over-encapsulate, which can be just as problematic. Also, there are situations where the speed really truly does matter (though they are rare), and that extra few clock cycles is worth it. About the only way to find the right balance is practice. Don't shun encapsulation because it's slower, though. The benefits usually far outweigh the costs.\n",
"What you are talking about is the effect of inlining functions for gaining efficiency. \nIt is certainly true in your Python example, that encapsulation hurts performance. But there are some counter example to it:\n\nIn Java, defining getter&setter instead of defining public member variables does not result in performance degradation as the JIT inline the getters&setters.\nsometimes calling a function repetedly may be better than performing inlining as the code executed may then fit in the cache. Inlining may cause code explosion...\n\n",
"Figuring out what to make into a function and what to just include inline is something of an art. Many factors (performance, readability, maintainability) feed into the equation.\nI actually find your example kind of silly in many ways - a function that just returns it's argument? Unless it's an overload that's changing the rules, it's stupid. A function to square things? Again, why bother. Your function 'foo' should probably return a string, so that it can be used directly:\n''.join(foo(x),\" -- bar !! \"])\n\nThat's probably a more correct level of encapsulation in this example.\nAs I say, it really depends on the circumstances. Unfortunately, this is the sort of thing that doesn't lend itself well to examples.\n",
"IMO, this is related to Function Call Costs. Which are negligible usually, but not zero. Splitting the code in a lot of very small functions may hurt. Especially in interpreted languages where full optimization is not available.\nInlining functions may improve performance but it may also deteriorate. See, for example, C++ FQA Lite for explanations (“Inlining can make code faster by eliminating function call overhead, or slower by generating too much code, causing instruction cache misses”). This is not C++ specific. Better leave optimizations for compiler/interpreter unless they are really necessary.\nBy the way, I don't see a huge difference between two versions:\n$ python bench.py \nfine-grained function decomposition: 5.46632194519\none-liner: 4.46827578545\n$ python --version\nPython 2.5.2\n\nI think this result is acceptable. See bench.py in the pastebin.\n",
"There is a performance hit for using functions, since there is the overhead of jumping to a new address, pushing registers on the stack, and returning at the end. This overhead is very small, however, and even in peformance critial systems worrying about such overhead is most likely premature optimization.\nMany languages avoid these issues in small frequenty called functions by using inlining, which is essentially what you do above.\nPython does not do inlining. The closest thing you can do is use macros to replace the function calls. \nThis sort of performance issue is better served by another language, if you need the sort of speed gained by inlining (mostly marginal, and sometimes detremental) then you need to consider not using python for whatever you are working on. \n",
"There is a good technical reason why what you suggested is impossible. In Python, functions, constants, and everything else are accessible at runtime and can be changed at any time if necessary; they could also be changed by outside modules. This is an explicit promise of Python and one would need some extremely important reasons to break it.\nFor example, here's the common logging idiom:\n# beginning of the file xxx.py\nlog = lambda *x: None \n\ndef something():\n ...\n log(...)\n ...\n\n(here log does nothing), and then at some other module or at the interactive prompt:\nimport xxx\nxxx.log = print\nxxx.something()\n\nAs you see, here log is modified by completely different module --- or by the user --- so that logging now works. That would be impossible if log was optimized away.\nSimilarly, if an exception was to happen in make_legal_foo_string (this is possible, e.g. if x.__str__() is broken and returns None) you'd be hit with a source quote from a wrong line and even perhaps from a wrong file in your scenario. \nThere are some tools that in fact apply some optimizations to Python code, but I don't think of the kind you suggested.\n"
] |
[
6,
2,
2,
1,
0,
0,
0,
0
] |
[] |
[] |
[
"performance",
"python"
] |
stackoverflow_0001362997_performance_python.txt
|
Q:
Python version shipping with Mac OS X Snow Leopard?
I would appreciate it if somebody running the final version of Snow Leopard could post what version of Python is included with the OS (on a Terminal, just type "python --version")
Thanks!
A:
It ships with both python 2.6.1 and 2.5.4.
$ python2.5
Python 2.5.4 (r254:67916, Jul 7 2009, 23:51:24)
$ python
Python 2.6.1 (r261:67515, Jul 7 2009, 23:51:51)
A:
bot:nasuni jesse$ python
Python 2.6.1 (r261:67515, Jul 7 2009, 23:51:51)
[GCC 4.2.1 (Apple Inc. build 5646)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>>
Probably the biggest reason I went and upgraded this morning, it's not 2.6.2, but it's close enough.
A:
Python 2.6.1
(according to the web)
Really good to know :)
A:
http://developer.apple.com/mac/library/documentation/Darwin/Reference/ManPages/man1/python.1.html
A:
You can get an installer for 2.6.2 from python.org, no reason to go without.
|
Python version shipping with Mac OS X Snow Leopard?
|
I would appreciate it if somebody running the final version of Snow Leopard could post what version of Python is included with the OS (on a Terminal, just type "python --version")
Thanks!
|
[
"It ships with both python 2.6.1 and 2.5.4.\n\n$ python2.5\nPython 2.5.4 (r254:67916, Jul 7 2009, 23:51:24)\n$ python\nPython 2.6.1 (r261:67515, Jul 7 2009, 23:51:51)\n\n",
"bot:nasuni jesse$ python\nPython 2.6.1 (r261:67515, Jul 7 2009, 23:51:51) \n[GCC 4.2.1 (Apple Inc. build 5646)] on darwin\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> \n\nProbably the biggest reason I went and upgraded this morning, it's not 2.6.2, but it's close enough.\n",
"Python 2.6.1\n(according to the web)\nReally good to know :)\n",
"http://developer.apple.com/mac/library/documentation/Darwin/Reference/ManPages/man1/python.1.html\n",
"You can get an installer for 2.6.2 from python.org, no reason to go without.\n"
] |
[
12,
5,
3,
3,
1
] |
[] |
[] |
[
"macos",
"osx_snow_leopard",
"python"
] |
stackoverflow_0001347376_macos_osx_snow_leopard_python.txt
|
Q:
calling vb dll in python
So I have a function in VB that has been converted to a DLL, which I want to use in Python. However, when trying to use it, I get an error message.
This is the VB function:
Function DISPLAYNAME(Name)
MsgBox ("Hello " & Name & "!")
End Function
and this is how I call it in python
from ctypes import *
test = windll.TestDLL
print test
print test.DISPLAYNAME("one")
But I get errors, so what is the right way of calling the DLL?
Traceback (most recent call last):
File "C:\Test\testdll.py", line 4, in <module>
print test.DISPLAYNAME("one")
File "C:\Python26\lib\ctypes\__init__.py", line 366, in __getattr__
func = self.__getitem__(name)
File "C:\Python26\lib\ctypes\__init__.py", line 371, in __getitem__
func = self._FuncPtr((name_or_ordinal, self))
AttributeError: function 'DISPLAYNAME' not found
I have been looking around online, but no solution so far. I can't use cdll since that is for C programs.
I have looked at some of the Python- and DLL-related questions, but no solution so far works for my issue.
A:
I dunno the answer to your specific question, but if it's VB.NET, you can natively call it in IronPython.
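For instance, here is a minimal sketch of the IronPython route — the assembly name ("TestDLL") comes from the question, but the root namespace and module name are assumptions, since VB.NET functions live inside a Module or Class:
import clr
clr.AddReference("TestDLL")            # .NET assembly name, assumed from the question
import TestDLL                         # root namespace, often the project name
TestDLL.TestModule.DISPLAYNAME("one")  # "TestModule" is a hypothetical VB.NET Module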
A:
It might be a scoping issue; without the Public access modifier, the function may not be visible to external callers. Try
Public Function DISPLAYNAME(Name)
MsgBox ("Hello " & Name & "!")
End Function
in your dll
|
calling vb dll in python
|
So I have a function in VB that has been converted to a DLL, which I want to use in Python. However, when trying to use it, I get an error message.
This is the VB function:
Function DISPLAYNAME(Name)
MsgBox ("Hello " & Name & "!")
End Function
and this is how I call it in python
from ctypes import *
test = windll.TestDLL
print test
print test.DISPLAYNAME("one")
But I get errors, so what is the right way of calling the DLL?
Traceback (most recent call last):
File "C:\Test\testdll.py", line 4, in <module>
print test.DISPLAYNAME("one")
File "C:\Python26\lib\ctypes\__init__.py", line 366, in __getattr__
func = self.__getitem__(name)
File "C:\Python26\lib\ctypes\__init__.py", line 371, in __getitem__
func = self._FuncPtr((name_or_ordinal, self))
AttributeError: function 'DISPLAYNAME' not found
I have been looking around online, but no solution so far. I can't use cdll since that is for C programs.
I have looked at some of the Python- and DLL-related questions, but no solution so far works for my issue.
|
[
"I dunno the answer to your specific question, but if it's VB.NET, you can natively call it in IronPython.\n",
"It might be a scoping issue, with out the Public access modifier, the function may not be visible to external callers. Try\nPublic Function DISPLAYNAME(Name)\nMsgBox (\"Hello \" & Name & \"!\")\nEnd Function\n\nin your dll \n"
] |
[
0,
0
] |
[] |
[] |
[
"dll",
"python",
"vb.net",
"vb6"
] |
stackoverflow_0001363305_dll_python_vb.net_vb6.txt
|
Q:
Django newbie deployment question - ImportError: Could not import settings 'settings'
The app runs fine using Django's internal server; however, when I use Apache + mod_python I get the error below.
File "/usr/local/lib/python2.6/dist-packages/django/conf/__init__.py", line 75, in __init__
raise ImportError, "Could not import settings '%s' (Is it on sys.path? Does it have syntax errors?): %s" % (self.SETTINGS_MODULE, e)
ImportError: Could not import settings 'settings' (Is it on sys.path? Does it have syntax errors?): No module named settings
Here is the needed information
1) Project directory: /root/djangoprojects/mysite
2) directory listing of /root/djangoprojects/mysite
ls -ltr
total 28
-rw-r--r-- 1 root root 546 Aug 1 08:34 manage.py
-rw-r--r-- 1 root root 0 Aug 1 08:34 __init__.py
-rw-r--r-- 1 root root 136 Aug 1 08:35 __init__.pyc
-rw-r--r-- 1 root root 2773 Aug 1 08:39 settings.py
-rw-r--r-- 1 root root 1660 Aug 1 08:53 settings.pyc
drwxr-xr-x 2 root root 4096 Aug 1 09:04 polls
-rw-r--r-- 1 root root 581 Aug 1 10:06 urls.py
-rw-r--r-- 1 root root 314 Aug 1 10:07 urls.pyc
3) App directory : /root/djangoprojects/mysite/polls
4) directory listing of /root/djangoprojects/mysite/polls
ls -ltr
total 20
-rw-r--r-- 1 root root 514 Aug 1 08:53 tests.py
-rw-r--r-- 1 root root 57 Aug 1 08:53 models.py
-rw-r--r-- 1 root root 0 Aug 1 08:53 __init__.py
-rw-r--r-- 1 root root 128 Aug 1 09:02 views.py
-rw-r--r-- 1 root root 375 Aug 1 09:04 views.pyc
-rw-r--r-- 1 root root 132 Aug 1 09:04 __init__.pyc
5) Anywhere in the filesystem running import django in python interpreter works fine
6) content of httpd.conf
<Location "/mysite">
SetHandler python-program
PythonHandler django.core.handlers.modpython
SetEnv DJANGO_SETTINGS_MODULE settings
PythonOption django.root /mysite
PythonPath "['/root/djangoprojects/', '/root/djangoprojects/mysite','/root/djangoprojects/mysite/polls', '/var/www'] + sys.path"
PythonDebug On
</Location>
7) PYTHONPATH variable is set to
echo $PYTHONPATH
/root/djangoprojects/mysite
8) DJANGO_SETTINGS_MODULE is set to
echo $DJANGO_SETTINGS_MODULE
mysite.settings
9) content of sys.path is
import sys
>>> sys.path
['', '/root/djangoprojects/mysite', '/usr/lib/python2.6', '/usr/lib/python2.6/plat-linux2', '/usr/lib/python2.6/lib-tk', '/usr/lib/python2.6/lib-old', '/usr/lib/python2.6/lib-dynload', '/usr/lib/python2.6/dist-packages', '/usr/local/lib/python2.6/dist-packages']
How do I add the settings location to sys.path such that it is persistent across sessions?
I have read umpteen posts from people having the same issue, and although I have tried a lot, it completely beats me as to what I need to do.
Looking for some help.
Thanks in advance
Ankur Gupta
A:
Your apache configuration should look like this:
<Location "/mysite">
SetHandler python-program
PythonHandler django.core.handlers.modpython
SetEnv DJANGO_SETTINGS_MODULE mysite.settings
PythonOption django.root /mysite
PythonPath "['/root/djangoprojects/', '/root/djangoprojects/mysite','/root/djangoprojects/mysite/polls', '/var/www'] + sys.path"
PythonDebug On
</Location>
Note that the sole difference is the "mysite.settings". Don't forget to restart apache once the config has changed (apache2ctl restart). See http://docs.djangoproject.com/en/dev/howto/deployment/modpython/ for more info.
A:
Try changing to the following:
<Location "/mysite">
SetHandler python-program
PythonHandler django.core.handlers.modpython
SetEnv DJANGO_SETTINGS_MODULE mysite.settings
PythonOption django.root /mysite
PythonPath "['/root/djangoprojects', '/var/www'] + sys.path"
PythonDebug On
</Location>
Use no "/" at the end of the PythonPath entries (that may be an issue, I had problems with that at least on Windows).
The entries '/root/djangoprojects/mysite','/root/djangoprojects/mysite/polls' are not needed since you will be referring to your modules by full name (i.e. mysite.settings means the settings module inside the mysite package, which is the correct way to access it in the PythonPath /root/djangoprojects).
|
Django newbie deployment question - ImportError: Could not import settings 'settings'
|
The app runs fine using Django's internal server; however, when I use Apache + mod_python I get the error below.
File "/usr/local/lib/python2.6/dist-packages/django/conf/__init__.py", line 75, in __init__
raise ImportError, "Could not import settings '%s' (Is it on sys.path? Does it have syntax errors?): %s" % (self.SETTINGS_MODULE, e)
ImportError: Could not import settings 'settings' (Is it on sys.path? Does it have syntax errors?): No module named settings
Here is the needed information
1) Project directory: /root/djangoprojects/mysite
2) directory listing of /root/djangoprojects/mysite
ls -ltr
total 28
-rw-r--r-- 1 root root 546 Aug 1 08:34 manage.py
-rw-r--r-- 1 root root 0 Aug 1 08:34 __init__.py
-rw-r--r-- 1 root root 136 Aug 1 08:35 __init__.pyc
-rw-r--r-- 1 root root 2773 Aug 1 08:39 settings.py
-rw-r--r-- 1 root root 1660 Aug 1 08:53 settings.pyc
drwxr-xr-x 2 root root 4096 Aug 1 09:04 polls
-rw-r--r-- 1 root root 581 Aug 1 10:06 urls.py
-rw-r--r-- 1 root root 314 Aug 1 10:07 urls.pyc
3) App directory : /root/djangoprojects/mysite/polls
4) directory listing of /root/djangoprojects/mysite/polls
ls -ltr
total 20
-rw-r--r-- 1 root root 514 Aug 1 08:53 tests.py
-rw-r--r-- 1 root root 57 Aug 1 08:53 models.py
-rw-r--r-- 1 root root 0 Aug 1 08:53 __init__.py
-rw-r--r-- 1 root root 128 Aug 1 09:02 views.py
-rw-r--r-- 1 root root 375 Aug 1 09:04 views.pyc
-rw-r--r-- 1 root root 132 Aug 1 09:04 __init__.pyc
5) Anywhere in the filesystem running import django in python interpreter works fine
6) content of httpd.conf
<Location "/mysite">
SetHandler python-program
PythonHandler django.core.handlers.modpython
SetEnv DJANGO_SETTINGS_MODULE settings
PythonOption django.root /mysite
PythonPath "['/root/djangoprojects/', '/root/djangoprojects/mysite','/root/djangoprojects/mysite/polls', '/var/www'] + sys.path"
PythonDebug On
</Location>
7) PYTHONPATH variable is set to
echo $PYTHONPATH
/root/djangoprojects/mysite
8) DJANGO_SETTINGS_MODULE is set to
echo $DJANGO_SETTINGS_MODULE
mysite.settings
9) content of sys.path is
import sys
>>> sys.path
['', '/root/djangoprojects/mysite', '/usr/lib/python2.6', '/usr/lib/python2.6/plat-linux2', '/usr/lib/python2.6/lib-tk', '/usr/lib/python2.6/lib-old', '/usr/lib/python2.6/lib-dynload', '/usr/lib/python2.6/dist-packages', '/usr/local/lib/python2.6/dist-packages']
How do I add the settings location to sys.path such that it is persistent across sessions?
I have read umpteen posts from people having the same issue, and although I have tried a lot, it completely beats me as to what I need to do.
Looking for some help.
Thanks in advance
Ankur Gupta
|
[
"Your apache configuration should look like this:\n<Location \"/mysite\">\n SetHandler python-program\n PythonHandler django.core.handlers.modpython\n SetEnv DJANGO_SETTINGS_MODULE mysite.settings\n PythonOption django.root /mysite\n PythonPath \"['/root/djangoprojects/', '/root/djangoprojects/mysite','/root/djangoprojects/mysite/polls', '/var/www'] + sys.path\"\n PythonDebug On\n</Location>\n\nNote that the sole difference is the \"mysite.settings\". Don't forget to restart apache once the config has changed (apache2ctl restart). See http://docs.djangoproject.com/en/dev/howto/deployment/modpython/ for more info.\n",
"Try changing to the following:\n<Location \"/mysite\">\n SetHandler python-program\n PythonHandler django.core.handlers.modpython\n SetEnv DJANGO_SETTINGS_MODULE mysite.settings\n PythonOption django.root /mysite\n PythonPath \"['/root/djangoprojects', '/var/www'] + sys.path\"\n PythonDebug On\n</Location>\n\nUse no \"/\" at the end of the PythonPath entries (that may be an issue, I had problems with that at least on Windows). \nThe entries '/root/djangoprojects/mysite','/root/djangoprojects/mysite/polls' are not needed since you will be referring to your modules by full name (i.e. mysite.sttings means the settings module inside the mysite package, which is the correct way to access it in the PythonPath /root/djangoprojects).\n"
] |
[
5,
0
] |
[] |
[] |
[
"deployment",
"django",
"mod_python",
"python"
] |
stackoverflow_0001216340_deployment_django_mod_python_python.txt
|
Q:
Python singleton / object instantiation
I'm learning Python and I've been trying to implement a Singleton-type class as a test. The code I have is as follows:
_Singleton__instance = None
class Singleton:
def __init__(self):
global __instance
if __instance == None:
self.name = "The one"
__instance = self
else:
self = __instance
This works in part but the self = __instance part seems to be failing. I've included some output from the interpreter to demonstrate (the code above is saved in singleton.py):
>>> import singleton
>>> x = singleton.Singleton()
>>> x.name
'The one'
>>> singleton._Singleton__instance.name
'The one'
>>> y = singleton.Singleton()
>>> y.name
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: Singleton instance has no attribute 'name'
>>> type(y)
<type 'instance'>
>>> dir(y)
['__doc__', '__init__', '__module__']
Is it possible to do what i'm trying? If not is there another way of doing this?
Any suggestions welcome.
Cheers.
A:
Assigning to an argument or any other local variable (barename) cannot ever, possibly have ANY effect outside the function; that applies to your self = whatever as it would to ANY other assignment to a (barename) argument or other local variable.
Rather, override __new__:
class Singleton(object):
__instance = None
def __new__(cls):
if cls.__instance == None:
cls.__instance = object.__new__(cls)
cls.__instance.name = "The one"
return cls.__instance
I've done other enhancements here, such as uprooting the global, the old-style class, etc.
MUCH better is to use Borg (aka monostate) instead of your chosen Highlander (aka singleton), but that's a different issue from the one you're asking about;-).
A:
Bruce Eckel's code snippet from Design Pattern: I'm confused on how it works
class Borg:
_shared_state = {}
def __init__(self):
self.__dict__ = self._shared_state
class MySingleton(Borg):
def __init__(self, arg):
Borg.__init__(self)
self.val = arg
def __str__(self): return self.val
x = MySingleton('sausage')
print x
y = MySingleton('eggs')
print y
z = MySingleton('spam')
print z
print x
print y
print `x`
print `y`
print `z`
output = '''
sausage
eggs
spam
spam
spam
<__main__. MySingleton instance at 0079EF2C>
<__main__. MySingleton instance at 0079E10C>
<__main__. MySingleton instance at 00798F9C>
'''
A:
From Singleton Pattern (Python):
class Singleton(type):
def __init__(self, name, bases, dict):
super(Singleton, self).__init__(name, bases, dict)
self.instance = None
def __call__(self, *args, **kw):
if self.instance is None:
self.instance = super(Singleton, self).__call__(*args, **kw)
return self.instance
class MyClass(object):
__metaclass__ = Singleton
print MyClass()
print MyClass()
A:
This is about the most basic Singleton you can make. It uses a class method to check whether the singleton has been created and makes a new one if it hasn't. There are more advanced ways of going about this, such as overriding the __new__ method.
class Singleton:
instance = None
@classmethod
def get(cls):
if cls.instance is None:
cls.instance = cls()
return cls.instance
def __init__(self):
self.x = 5 # or whatever you want to do
sing = Singleton.get()
print sing.x # prints 5
As for why your code fails, there are several reasons. First, by the time __init__ is called, a new object has already been created, defeating the purpose of the singleton pattern. Second, when you say self = __instance, that simply resets the local variable self; this would be akin to saying
def f(x):
x = 7 # changes the value of our local variable
y = 5
f(y)
print y # this is still 5
Since variables in Python are passed by value and not reference, you can't say self = blah and have it be meaningful in the way you want. The above Singleton class is more what you want, unless you want to get fancy and look into overriding the __new__ operator.
A:
self = _instance
This won't do what you are expecting it to do. Read about how Python treats names.
|
Python singleton / object instantiation
|
I'm learning Python and I've been trying to implement a Singleton-type class as a test. The code I have is as follows:
_Singleton__instance = None
class Singleton:
def __init__(self):
global __instance
if __instance == None:
self.name = "The one"
__instance = self
else:
self = __instance
This works in part but the self = __instance part seems to be failing. I've included some output from the interpreter to demonstrate (the code above is saved in singleton.py):
>>> import singleton
>>> x = singleton.Singleton()
>>> x.name
'The one'
>>> singleton._Singleton__instance.name
'The one'
>>> y = singleton.Singleton()
>>> y.name
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: Singleton instance has no attribute 'name'
>>> type(y)
<type 'instance'>
>>> dir(y)
['__doc__', '__init__', '__module__']
Is it possible to do what i'm trying? If not is there another way of doing this?
Any suggestions welcome.
Cheers.
|
[
"Assigning to an argument or any other local variable (barename) cannot ever, possibly have ANY effect outside the function; that applies to your self = whatever as it would to ANY other assignment to a (barename) argument or other local variable.\nRather, override __new__:\nclass Singleton(object):\n\n __instance = None\n\n def __new__(cls):\n if cls.__instance == None:\n cls.__instance = object.__new__(cls)\n cls.__instance.name = \"The one\"\n return cls.__instance\n\nI've done other enhancements here, such as uprooting the global, the old-style class, etc.\nMUCH better is to use Borg (aka monostate) instead of your chosen Highlander (aka singleton), but that's a different issue from the one you're asking about;-).\n",
"Bruce Eckel's code snippet from Design Pattern: I'm confused on how it works\nclass Borg:\n _shared_state = {}\n def __init__(self):\n self.__dict__ = self._shared_state\n\nclass MySingleton(Borg):\n def __init__(self, arg):\n Borg.__init__(self)\n self.val = arg\n def __str__(self): return self.val\n\nx = MySingleton('sausage')\nprint x\ny = MySingleton('eggs')\nprint y\nz = MySingleton('spam')\nprint z\nprint x\nprint y\nprint ´x´\nprint ´y´\nprint ´z´\noutput = '''\nsausage\neggs\nspam\nspam\nspam\n<__main__. MySingleton instance at 0079EF2C>\n<__main__. MySingleton instance at 0079E10C>\n<__main__. MySingleton instance at 00798F9C>\n'''\n\n",
"From Singleton Pattern (Python):\nclass Singleton(type):\n def __init__(self, name, bases, dict):\n super(Singleton, self).__init__(name, bases, dict)\n self.instance = None\n\n def __call__(self, *args, **kw):\n if self.instance is None:\n self.instance = super(Singleton, self).__call__(*args, **kw)\n\n return self.instance\n\nclass MyClass(object):\n __metaclass__ = Singleton\n\nprint MyClass()\nprint MyClass()\n\n",
"This is about the most basic Singleton you can make. It uses a class method to check whether the singleton has been created and makes a new one if it hasn't. There are more advanced ways of going about this, such as overriding the __new__ method.\nclass Singleton:\n instance = None\n\n @classmethod\n def get(cls):\n if cls.instance is None:\n cls.instance = cls()\n return cls.instance\n\n def __init__(self):\n self.x = 5 # or whatever you want to do\n\nsing = Singleton.get()\nprint sing.x # prints 5\n\nAs for why your code fails, there are several reasons. First, by the time __init__ is called, a new object has already been created, defeating the purpose of the singleton pattern. Second, when you say self = __instance, that simply resets the local variable self; this would be akin to saying\ndef f(x):\n x = 7 # changes the value of our local variable\n\ny = 5\nf(y)\nprint y # this is still 5\n\nSince variables in Python are passed by value and not reference, you can't say self = blah and have it be meaningful in the way you want. The above Singleton class is more what you want, unless you want to get fancy and look into overriding the __new__ operator.\n",
"self = _instance\n\nThis wont do what you are expecting it to do. Read about how Python treats names.\n"
] |
[
22,
6,
4,
3,
0
] |
[] |
[] |
[
"python",
"singleton"
] |
stackoverflow_0001363839_python_singleton.txt
|
Q:
Invoking a method on an object
Given a PyObject* pointing to a python object, how do I invoke one of the object methods? The documentation never gives an example of this:
PyObject* obj = ....
PyObject* args = Py_BuildValue("(s)", "An arg");
PyObject* method = PyWHATGOESHERE(obj, "foo");
PyObject* ret = PyWHATGOESHERE(obj, method, args);
if (!ret) {
// check error...
}
This would be the equivalent of
>>> ret = obj.foo("An arg")
A:
PyObject* obj = ....
PyObject *ret = PyObject_CallMethod(obj, "foo", "(s)", "An arg");
if (!ret) {
// check error...
}
Read up on the Python C API documentation. In this case, you want the object protocol.
PyObject* PyObject_CallMethod(PyObject *o, char *method, char *format, ...)
Return value: New reference.
Call the method named method of object o with a variable number of C
arguments. The C arguments are
described by a Py_BuildValue() format
string that should produce a tuple.
The format may be NULL, indicating
that no arguments are provided.
Returns the result of the call on
success, or NULL on failure. This is
the equivalent of the Python
expression o.method(args). Note that
if you only pass PyObject * args,
PyObject_CallMethodObjArgs() is a
faster alternative.
And
PyObject* PyObject_CallMethodObjArgs(PyObject *o, PyObject *name, ..., NULL)
Return value: New reference.
Calls a method of the object o, where the name of the method is given
as a Python string object in name. It
is called with a variable number of
PyObject* arguments. The arguments are
provided as a variable number of
parameters followed by NULL. Returns
the result of the call on success, or
NULL on failure.
A:
Your example would be:
PyObject* ret = PyObject_CallMethod(obj, "foo", "(s)", "An arg");
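For reference, the two-step lookup the question sketches is just attribute access followed by a call; the pure-Python equivalent of the C line above is:
method = getattr(obj, "foo")  # what PyObject_GetAttrString does
ret = method("An arg")        # the call itself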
|
Invoking a method on an object
|
Given a PyObject* pointing to a python object, how do I invoke one of the object methods? The documentation never gives an example of this:
PyObject* obj = ....
PyObject* args = Py_BuildValue("(s)", "An arg");
PyObject* method = PyWHATGOESHERE(obj, "foo");
PyObject* ret = PyWHATGOESHERE(obj, method, args);
if (!ret) {
// check error...
}
This would be the equivalent of
>>> ret = obj.foo("An arg")
|
[
"PyObject* obj = ....\nPyObject *ret = PyObject_CallMethod(obj, \"foo\", \"(s)\", \"An arg\");\nif (!ret) {\n // check error...\n}\n\nRead up on the Python C API documentation. In this case, you want the object protocol.\n\nPyObject* PyObject_CallMethod(PyObject *o, char *method, char *format, ...)\n\nReturn value: New reference.\nCall the method named method of object o with a variable number of C\n arguments. The C arguments are\n described by a Py_BuildValue() format\n string that should produce a tuple.\n The format may be NULL, indicating\n that no arguments are provided.\n Returns the result of the call on\n success, or NULL on failure. This is\n the equivalent of the Python\n expression o.method(args). Note that\n if you only pass PyObject * args,\n PyObject_CallMethodObjArgs() is a\n faster alternative.\n\nAnd\n\nPyObject* PyObject_CallMethodObjArgs(PyObject *o, PyObject *name, ..., NULL)\n\nReturn value: New reference.\nCalls a method of the object o, where the name of the method is given\n as a Python string object in name. It\n is called with a variable number of\n PyObject* arguments. The arguments are\n provided as a variable number of\n parameters followed by NULL. Returns\n the result of the call on success, or\n NULL on failure.\n\n",
"Your example would be:\nPyObject* ret = PyObject_CallMethod(obj, \"foo\", \"(s)\", \"An arg\");\n\n"
] |
[
9,
3
] |
[] |
[] |
[
"c",
"embedded_language",
"python"
] |
stackoverflow_0001364117_c_embedded_language_python.txt
|
Q:
Python 3.1 inline division override
I don't know if this is a bug in 3.1, but if I remember correctly "inline" division worked like this in pre-3k versions:
Python 3.1 (r31:73574, Jun 26 2009, 20:21:35) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> class A:
... def __init__(self, x):
... self.x = x
... def __idiv__(self, y):
... self.x /= y
...
>>> a = A(5)
>>> a /= 2
However, 3.1 gives me this:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for /=: 'A' and 'int'
... or am I missing something?
A:
Gaaah! Found __floordiv__ and __truediv__. Sorry!
If you'd like to tell me why 2to3 doesn't translate __idiv__ into a __truediv__ with a __floordiv__(self, y): self.__truediv__(y), please go ahead!
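For completeness, a minimal sketch of the Python 3 spelling: the in-place hooks are __itruediv__ and __ifloordiv__, and they must return the object, because a /= 2 rebinds a to whatever the hook returns:
class A:
    def __init__(self, x):
        self.x = x
    def __itruediv__(self, y):   # invoked by a /= y
        self.x /= y
        return self              # keep a bound to this object
    def __ifloordiv__(self, y):  # invoked by a //= y
        self.x //= y
        return self

a = A(5)
a /= 2   # a.x is now 2.5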
|
Python 3.1 inline division override
|
I don't know if this is a bug in 3.1, but if I remember correctly "inline" division worked like this in pre-3k versions:
Python 3.1 (r31:73574, Jun 26 2009, 20:21:35) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> class A:
... def __init__(self, x):
... self.x = x
... def __idiv__(self, y):
... self.x /= y
...
>>> a = A(5)
>>> a /= 2
However, 3.1 gives me this:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for /=: 'A' and 'int'
... or am I missing something?
|
[
"Gaaah! Found __floordiv__ and __truediv__. Sorry!\nIf you'd like to tell me why 2to3 doesn't translate __idiv__ into a __truediv__ with a __floordiv__(self, y): self.__truediv__(y), please go ahead!\n"
] |
[
6
] |
[] |
[] |
[
"python",
"python_3.x"
] |
stackoverflow_0001364583_python_python_3.x.txt
|
Q:
Python on Snow Leopard, how to open >255 sockets?
Consider this code:
import socket
store = []
scount = 0
while True:
scount+=1
print "Creating socket %d" % (scount)
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
store.append(s)
Gives the following result:
Creating socket 1
Creating socket 2
...
Creating socket 253
Creating socket 254
Traceback (most recent call last):
File "test_sockets.py", line 9, in <module>
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/socket.py", line 159, in __init__
socket.error: (24, 'Too many open files')
Checking sysctl for the allowed number of open files gives:
$ sysctl -A |grep maxfiles
kern.maxfiles = 12288
kern.maxfilesperproc = 10240
kern.maxfiles: 12288
kern.maxfilesperproc: 10240
Which is way more than the 253 sockets I could successfully open...
Could someone please help me in getting this number up to over 500? I am trying to simulate a peer-to-peer network using real sockets (a requirement); even with only 50 simulated nodes and 5 outgoing and 5 incoming connections each, that gives 500 needed sockets.
By the way, running this same code under Linux gives me about 1020 sockets, which is more the way I like it.
A:
You can increase available sockets with ulimit. Looks like 1200 is the max for non-root users in bash. I can get up to 10240 with zsh.
$ ulimit -n 1200
$ python sockets
....
Creating socket 1197
Creating socket 1198
Traceback (most recent call last):
File "sockets", line 7, in <module>
File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/socket.py", line 182, in __init__
socket.error: [Errno 24] Too many open files
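The same limit can also be raised from inside the process, assuming the hard limit permits it — a minimal sketch using the standard-library resource module (Unix only):
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
# raise the soft limit toward the hard limit; non-root users can't exceed it
resource.setrlimit(resource.RLIMIT_NOFILE, (1024, hard))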
A:
Did you install XCode and the developer tools off the Snow Leopard install disk? I'm able to open way more ports than you're able to:
Creating socket 1
Creating socket 2
...
Creating socket 7161
Creating socket 7162
Creating socket 7163
Creating socket 7164
Creating socket 7165
Creating socket 7166
Traceback (most recent call last):
File "socket-test.py", line 7, in <module>
File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/socket.py", line 159, in __init__
socket.error: (24, 'Too many open files')
sysctl shows me a lot more info than your output shows (even with the grep) but the four lines you have match mine exactly, so all I can think of is needing something from the dev tools on the disk.
|
Python on Snow Leopard, how to open >255 sockets?
|
Consider this code:
import socket
store = []
scount = 0
while True:
scount+=1
print "Creating socket %d" % (scount)
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
store.append(s)
Gives the following result:
Creating socket 1
Creating socket 2
...
Creating socket 253
Creating socket 254
Traceback (most recent call last):
File "test_sockets.py", line 9, in <module>
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/socket.py", line 159, in __init__
socket.error: (24, 'Too many open files')
Checking sysctl for the allowed number of open files gives:
$ sysctl -A |grep maxfiles
kern.maxfiles = 12288
kern.maxfilesperproc = 10240
kern.maxfiles: 12288
kern.maxfilesperproc: 10240
Which is way more than the 253 sockets I could successfully open...
Could someone please help me in getting this number up to over 500? I am trying to simulate a peer-to-peer network using real sockets (a requirement); even with only 50 simulated nodes and 5 outgoing and 5 incoming connections each, that gives 500 needed sockets.
By the way, running this same code under Linux gives me about 1020 sockets, which is more the way I like it.
|
[
"You can increase available sockets with ulimit. Looks like 1200 is the max for non-root users in bash. I can get up to 10240 with zsh.\n$ ulimit -n 1200\n$ python sockets\n....\nCreating socket 1197\nCreating socket 1198\nTraceback (most recent call last):\n File \"sockets\", line 7, in <module>\n File \"/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/socket.py\", line 182, in __init__\nsocket.error: [Errno 24] Too many open files\n\n",
"Did you install XCode and the developer tools off the Snow Leopard install disk? I'm able to open way more ports than you're able to:\nCreating socket 1\nCreating socket 2\n...\nCreating socket 7161\nCreating socket 7162\nCreating socket 7163\nCreating socket 7164\nCreating socket 7165\nCreating socket 7166\nTraceback (most recent call last):\n File \"socket-test.py\", line 7, in <module>\n File \"/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/socket.py\", line 159, in __init__\nsocket.error: (24, 'Too many open files')\n\nsysctl shows me a lot more info then your output shows (even with the grep) but the four lines you have match mine exactly, so all I can think of is needing something from the dev tools on the disk.\n"
] |
[
17,
1
] |
[] |
[] |
[
"macos",
"python",
"sockets"
] |
stackoverflow_0001364955_macos_python_sockets.txt
|
Q:
Python packages depending on libxml2 and libxslt
Apart from lxml, is anyone aware of Python packages that depend on libxml2 and libxslt?
A:
See e.g. the list here -- not exhaustive, as the page itself says, but a start.
|
Python packages depending on libxml2 and libxslt
|
Apart from lxml, is anyone aware of Python packages that depend on libxml2 and libxslt?
|
[
"See e.g. the list here -- not exhaustive, as the page itself says, but a start.\n"
] |
[
1
] |
[] |
[] |
[
"dependencies",
"libxml2",
"libxslt",
"lxml",
"python"
] |
stackoverflow_0001365075_dependencies_libxml2_libxslt_lxml_python.txt
|
Q:
Python generating Python
I have a group of objects, for which I am creating a class, and I want to store each object as its own text file. I would really like to store it as a Python class definition which subclasses the main class I am creating. So, I did some poking around and found a Python Code Generator on effbot.org. I did some experimenting with it and here's what I came up with:
#
# a Python code generator backend
#
# fredrik lundh, march 1998
#
# [email protected]
# http://www.pythonware.com
#
# Code taken from http://effbot.org/zone/python-code-generator.htm
import sys, string
class CodeGeneratorBackend:
def begin(self, tab="\t"):
self.code = []
self.tab = tab
self.level = 0
def end(self):
return string.join(self.code, "")
def write(self, string):
self.code.append(self.tab * self.level + string)
def indent(self):
self.level = self.level + 1
def dedent(self):
if self.level == 0:
raise SyntaxError, "internal error in code generator"
self.level = self.level - 1
class Point():
"""Defines a Point. Has x and y."""
def __init__(self, x, y):
self.x = x
self.y = y
def dump_self(self, filename):
self.c = CodeGeneratorBackend()
self.c.begin(tab=" ")
self.c.write("class {0}{1}Point()\n".format(self.x,self.y))
self.c.indent()
self.c.write('"""Defines a Point. Has x and y"""\n')
self.c.write('def __init__(self, x={0}, y={1}):\n'.format(self.x, self.y))
self.c.indent()
self.c.write('self.x = {0}\n'.format(self.x))
self.c.write('self.y = {0}\n'.format(self.y))
self.c.dedent()
self.c.dedent()
f = open(filename,'w')
f.write(self.c.end())
f.close()
if __name__ == "__main__":
p = Point(3,4)
p.dump_self('demo.py')
That feels really ugly; is there a cleaner/better/more Pythonic way to do this? Please note, this is not the class I actually intend to do this with; it is a small class I can easily mock up in not too many lines. Also, the subclasses don't need to have the generating function in them; if I need that again, I can just call the code generator from the superclass.
A:
We use Jinja2 to fill in a template. It's much simpler.
The template looks a lot like Python code with a few {{something}} replacements in it.
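A minimal sketch of that approach — the template text and variable names here are only illustrative:
from jinja2 import Template

template = Template('''\
class Point{{ x }}{{ y }}(Point):
    """Defines a Point. Has x and y."""
    def __init__(self, x={{ x }}, y={{ y }}):
        self.x = x
        self.y = y
''')

open('demo.py', 'w').write(template.render(x=3, y=4))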
A:
This is pretty much the best way to generate Python source code. However, you can also generate Python executable code at runtime using the ast library. You can build code using the abstract syntax tree, then pass it to compile() to compile it into executable code. Then you can use eval() to run the code.
I'm not sure whether there is a convenient way to save the compiled code for use later though (ie. in a .pyc file).
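A minimal sketch of the compile/run half of that (Python 2 syntax, to match the rest of this question); compile() takes source text here, but it also accepts an ast.Module:
source = "class Demo(object):\n    x = 3\n"
code = compile(source, '<generated>', 'exec')
namespace = {}
exec code in namespace     # run the compiled statements
print namespace['Demo'].x  # prints 3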
A:
Just read your comment to wintermute - ie:
What I have is a bunch of planets that
I want to store each as their own text
files. I'm not particularly attached
to storing them as python source code,
but I am attached to making them
human-readable.
If that's the case, then it seems like you shouldn't need subclasses but should be able to use the same class and distinguish the planets via data alone. And in that case, why not just write the data to files and, when you need the planet objects in your program, read in the data to initialize the objects?
If you needed to do stuff like overriding methods, I could see writing out code - but shouldn't you just be able to have the same methods for all planets, just using different variables?
The advantage of just writing out the data (it can include label type info for readability that you'd skip when you read it in) is that non-Python programmers won't get distracted when reading them, you could use the same files with some other language if necessary, etc.
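A minimal sketch of that data-only idea, with purely illustrative field names:
planet = {'name': 'Mars', 'x': 3, 'y': 4}

# human-readable, line-oriented "name = value" format
with open('mars.txt', 'w') as f:
    for key in sorted(planet):
        f.write('%s = %r\n' % (key, planet[key]))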
A:
From what I understand you are trying to do, I would consider using reflection to dynamically examine a class at runtime and generate output based on that. There is a good tutorial on reflection (A.K.A. introspection) at http://diveintopython3.ep.io/.
You can use the dir() function to get a list of names of the attributes of a given object. The doc string of an object is accessible via the __doc__ attribute. That is, if you want to look at the doc string of a function or class you can do the following:
>>> def foo():
... """A doc string comment."""
... pass
...
>>> print foo.__doc__
A doc string comment.
A:
I'm not sure whether this is especially Pythonic, but you could use operator overloading:
class CodeGenerator:
def __init__(self, indentation='\t'):
self.indentation = indentation
self.level = 0
self.code = ''
def indent(self):
self.level += 1
def dedent(self):
if self.level > 0:
self.level -= 1
def __add__(self, value):
temp = CodeGenerator(indentation=self.indentation)
temp.level = self.level
temp.code = str(self) + ''.join([self.indentation for i in range(0, self.level)]) + str(value)
return temp
def __str__(self):
return str(self.code)
a = CodeGenerator()
a += 'for a in range(1, 3):\n'
a.indent()
a += 'for b in range(4, 6):\n'
a.indent()
a += 'print(a * b)\n'
a.dedent()
a += '# pointless comment\n'
print(a)
This is, of course, far more expensive to implement than your example, and I would be wary of too much meta-programming, but it was a fun exercise. You can extend or use this as you see fit; how about:
adding a write method and redirecting stdout to an object of this to print straight to a script file
inheriting from it to customise output
adding attribute getters and setters
Would be great to hear about whatever you go with :)
|
Python generating Python
|
I have a group of objects, for which I am creating a class, and I want to store each object as its own text file. I would really like to store it as a Python class definition which subclasses the main class I am creating. So, I did some poking around and found a Python Code Generator on effbot.org. I did some experimenting with it and here's what I came up with:
#
# a Python code generator backend
#
# fredrik lundh, march 1998
#
# [email protected]
# http://www.pythonware.com
#
# Code taken from http://effbot.org/zone/python-code-generator.htm
import sys, string
class CodeGeneratorBackend:
def begin(self, tab="\t"):
self.code = []
self.tab = tab
self.level = 0
def end(self):
return string.join(self.code, "")
def write(self, string):
self.code.append(self.tab * self.level + string)
def indent(self):
self.level = self.level + 1
def dedent(self):
if self.level == 0:
raise SyntaxError, "internal error in code generator"
self.level = self.level - 1
class Point():
"""Defines a Point. Has x and y."""
def __init__(self, x, y):
self.x = x
self.y = y
def dump_self(self, filename):
self.c = CodeGeneratorBackend()
self.c.begin(tab=" ")
self.c.write("class {0}{1}Point()\n".format(self.x,self.y))
self.c.indent()
self.c.write('"""Defines a Point. Has x and y"""\n')
self.c.write('def __init__(self, x={0}, y={1}):\n'.format(self.x, self.y))
self.c.indent()
self.c.write('self.x = {0}\n'.format(self.x))
self.c.write('self.y = {0}\n'.format(self.y))
self.c.dedent()
self.c.dedent()
f = open(filename,'w')
f.write(self.c.end())
f.close()
if __name__ == "__main__":
p = Point(3,4)
p.dump_self('demo.py')
That feels really ugly; is there a cleaner/better/more Pythonic way to do this? Please note, this is not the class I actually intend to do this with; it is a small class I can easily mock up in not too many lines. Also, the subclasses don't need to have the generating function in them; if I need that again, I can just call the code generator from the superclass.
|
[
"We use Jinja2 to fill in a template. It's much simpler.\nThe template looks a lot like Python code with a few {{something}} replacements in it.\n",
"This is pretty much the best way to generate Python source code. However, you can also generate Python executable code at runtime using the ast library. You can build code using the abstract syntax tree, then pass it to compile() to compile it into executable code. Then you can use eval() to run the code.\nI'm not sure whether there is a convenient way to save the compiled code for use later though (ie. in a .pyc file).\n",
"Just read your comment to wintermute - ie:\n\nWhat I have is a bunch of planets that\n I want to store each as their own text\n files. I'm not particularly attached\n to storing them as python source code,\n but I am attached to making them\n human-readable.\n\nIf that's the case, then it seems like you shouldn't need subclasses but should be able to use the same class and distinguish the planets via data alone. And in that case, why not just write the data to files and, when you need the planet objects in your program, read in the data to initialize the objects?\nIf you needed to do stuff like overriding methods, I could see writing out code - but shouldn't you just be able to have the same methods for all planets, just using different variables?\nThe advantage of just writing out the data (it can include label type info for readability that you'd skip when you read it in) is that non-Python programmers won't get distracted when reading them, you could use the same files with some other language if necessary, etc.\n",
"From what I understand you are trying to do, I would consider using reflection to dynamically examine a class at runtime and generate output based on that. There is a good tutorial on reflection (A.K.A. introspection) at http://diveintopython3.ep.io/.\nYou can use the dir() function to get a list of names of the attributes of a given object. The doc string of an object is accessible via the __doc__ attribute. That is, if you want to look at the doc string of a function or class you can do the following:\n>>> def foo():\n... \"\"\"A doc string comment.\"\"\"\n... pass\n...\n>>> print foo.__doc__\nA doc string comment.\n\n",
"I'm not sure whether this is especially Pythonic, but you could use operator overloading:\nclass CodeGenerator:\n def __init__(self, indentation='\\t'):\n self.indentation = indentation\n self.level = 0\n self.code = ''\n\n def indent(self):\n self.level += 1\n\n def dedent(self):\n if self.level > 0:\n self.level -= 1\n\n def __add__(self, value):\n temp = CodeGenerator(indentation=self.indentation)\n temp.level = self.level\n temp.code = str(self) + ''.join([self.indentation for i in range(0, self.level)]) + str(value)\n return temp\n\n def __str__(self):\n return str(self.code)\n\na = CodeGenerator()\na += 'for a in range(1, 3):\\n'\na.indent()\na += 'for b in range(4, 6):\\n'\na.indent()\na += 'print(a * b)\\n'\na.dedent()\na += '# pointless comment\\n'\nprint(a)\n\nThis is, of course, far more expensive to implement than your example, and I would be wary of too much meta-programming, but it was a fun exercise. You can extend or use this as you see fit; how about:\n\nadding a write method and redirecting stdout to an object of this to print straight to a script file\ninheriting from it to customise output\nadding attribute getters and setters\n\nWould be great to hear about whatever you go with :)\n"
] |
[
35,
10,
7,
1,
1
] |
[] |
[] |
[
"code_generation",
"python"
] |
stackoverflow_0001364640_code_generation_python.txt
|
Q:
Is storing user configuration settings on database OK?
I'm building a fairly large enterprise application, written in Python, that in its first version will require a network connection.
I've been thinking of keeping some user settings stored in the database, instead of in a file in the user's home folder.
Some of the advantages I've thought of are:
the user can change computers while keeping all their settings
settings can be backed up along with the rest of the systems data (not a big concern)
What would be some of the caveats of this approach?
A:
This is pretty standard. Go for it.
The caveat is that when you take the database down for maintenance, no one can use the app because their profile is inaccessible. You can either solve that by making a 100%-on db solution, or, more easily, through some form of caching of profiles locally (an "offline" mode of operations). That would allow your app to function whether the user or the db are off the network.
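A minimal sketch of that local-cache fallback — the file name, fetch function and exception type are illustrative:
import json, os

CACHE = os.path.expanduser('~/.myapp_settings.json')

def load_settings(fetch_from_db):
    try:
        settings = fetch_from_db()             # authoritative copy from the db
        json.dump(settings, open(CACHE, 'w'))  # refresh the local cache
        return settings
    except IOError:                            # db/network down; driver-specific in practice
        return json.load(open(CACHE))          # fall back to the cached copy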
A:
One caveat might depend on where the user is using the application from. For example, if they use two computers with different screen resolutions, and 'selected zoom/text size' is one of the things you associate with the user, it might not always be suitable. It depends what kind of settings you intend to allow the user to customize. My workplace still has some users trapped on tiny LCD screens with a max res of 800x600, and we have to account for those when developing.
A:
Do you need the database to run any part of the application? If that's the case there are no reasons not to store the config inside the DB. You already mentioned the benefits and there are no downsides.
A:
It's perfectly reasonable to keep user settings in the database, as long as the settings pertain to the application independent of user location. One possible advantage of a file in the user's home folder is that users can send settings to one another. You may of course regard this as an advantage or a disadvantage :-)
|
Is storing user configuration settings on database OK?
|
I'm building a fairly large enterprise application, written in Python, that in its first version will require a network connection.
I've been thinking of keeping some user settings stored in the database, instead of in a file in the user's home folder.
Some of the advantages I've thought of are:
the user can change computers while keeping all their settings
settings can be backed up along with the rest of the systems data (not a big concern)
What would be some of the caveats of this approach?
|
[
"This is pretty standard. Go for it.\nThe caveat is that when you take the database down for maintenance, no one can use the app because their profile is inaccessible. You can either solve that by making a 100%-on db solution, or, more easily, through some form of caching of profiles locally (an \"offline\" mode of operations). That would allow your app to function whether the user or the db are off the network.\n",
"One caveat might depend on where the user is using the application from. For example, if they use two computers with different screen resolutions, and 'selected zoom/text size' is one of the things you associate with the user, it might not always be suitable. It depends what kind of settings you intend to allow the user to customize. My workplace still has some users trapped on tiny LCD screens with a max res of 800x600, and we have to account for those when developing.\n",
"Do you need the database to run any part of the application? If that's the case there are no reasons not to store the config inside the DB. You already mentioned the benefits and there are no downsides.\n",
"It's perfectly reasonable to keep user settings in the database, as long as the settings pertain to the application independent of user location. One possible advantage of a file in the user's home folder is that users can send settings to one another. You may of course regard this as an advantage or a disadvantage :-)\n"
] |
[
8,
5,
3,
3
] |
[] |
[] |
[
"database",
"python",
"settings"
] |
stackoverflow_0001365164_database_python_settings.txt
|
Q:
How to deal with user authentication and wrongful modification in scripting languages?
I'm building a centralized desktop application using Python/wxPython. One of the requirements is User authentication, which I'm trying to implement using LDAP (although this is not mandatory).
Users of the system will be mechanical and electrical engineers making budgets, and the biggest problem would be industrial espionage. It's a common problem that leaks occur from the bottom in informal ways, and this could pose problems. The system is set up in such a way that every user has access to all and only the information they need, so that no one but the people at the top has monetary information on the whole project.
The problem is that, for every way I can think of to implement the authentication system, Python's openness makes me think of at least one way of bypassing it or getting sensitive information from the system, because "compiling" with py2exe is the closest I can get to obfuscation of the code on Windows.
I'm not really trying to hide the code, but rather make the authentication routine secure by itself, make it in such a way that access to the code doesn't mean capability to access the application. One thing I wanted to add, was some sort of code signing to the access routine, so the user can be sure that he is not running a modified client app.
One of the ways I've thought to avoid this is making a C module for the authentication, but I would rather not have to do that.
Of course this question is changing now and is not just "Could anyone point me in the right direction as to how to build a secure authentication system running on Python? Does something like this already exist?", but "How do you harden a scripting-language (Python) app against wrongful modification?"
A:
How malicious are your users? Really.
Exactly how malicious?
If your users are evil sociopaths and can't be trusted with a desktop solution, then don't build a desktop solution. Build a web site.
If your users are ordinary users, they'll screw the environment up by installing viruses, malware and keyloggers from porn sites before they try to (a) learn Python (b) learn how your security works and (c) make a sincere effort at breaking it.
If you actually have desktop security issues (i.e., public safety, military, etc.) then rethink using the desktop.
Otherwise, relax, do the right thing, and don't worry about "scripting".
C++ programs are easier to hack because people are lazy and permit SQL injection.
A:
Possibly:
The user enters their credentials into the desktop client.
The client says to the server: "Hi, my name is username and my password is password".
The server checks these.
The server says to the client: "Hi, username. Here is your secret token: ..."
Subsequently the client uses the secret token together with the username to "sign" communications with the server.
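A minimal sketch of that last "signing" step, using the standard-library hmac module; the message layout is only illustrative:
import hmac, hashlib

def sign(token, username, payload):
    # both client and server compute this and compare the results
    return hmac.new(token, '%s|%s' % (username, payload), hashlib.sha256).hexdigest()

print sign('secret-token', 'alice', 'GET /budget/42')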
|
How to deal with user authentication and wrongful modification in scripting languages?
|
I'm building a centralized desktop application using Python/wxPython. One of the requirements is User authentication, which I'm trying to implement using LDAP (although this is not mandatory).
Users of the system will be mechanical and electrical engineers making budgets, and the biggest problem would be industrial espionage. It's a common problem that leaks occur from the bottom in informal ways, and this could pose problems. The system is set up in such a way that every user has access to all and only the information they need, so that no one but the people at the top has monetary information on the whole project.
The problem is that, for every way I can think of to implement the authentication system, Python's openness makes me think of at least one way of bypassing it or getting sensitive information from the system, because "compiling" with py2exe is the closest I can get to obfuscation of the code on Windows.
I'm not really trying to hide the code, but rather make the authentication routine secure by itself, make it in such a way that access to the code doesn't mean capability to access the application. One thing I wanted to add, was some sort of code signing to the access routine, so the user can be sure that he is not running a modified client app.
One of the ways I've thought to avoid this is making a C module for the authentication, but I would rather not have to do that.
Of course this question is changing now and is not just "Could anyone point me in the right direction as to how to build a secure authentication system running on Python? Does something like this already exist?", but "How do you harden a scripting-language (Python) app against wrongful modification?"
|
[
"How malicious are your users? Really.\nExactly how malicious?\nIf your users are evil sociopaths and can't be trusted with a desktop solution, then don't build a desktop solution. Build a web site.\nIf your users are ordinary users, they'll screw the environment up by installing viruses, malware and keyloggers from porn sites before they try to (a) learn Python (b) learn how your security works and (c) make a sincere effort at breaking it.\nIf you actually have desktop security issues (i.e., public safety, military, etc.) then rethink using the desktop.\nOtherwise, relax, do the right thing, and don't worry about \"scripting\".\nC++ programs are easier to hack because people are lazy and permit SQL injection. \n",
"Possibly:\n\nThe user enters their credentials into the desktop client.\nThe client says to the server: \"Hi, my name username and my password is password\".\nThe server checks these.\nThe server says to the client: \"Hi, username. Here is your secret token: ...\"\nSubsequently the client uses the secret token together with the username to \"sign\" communications with the server.\n\n"
] |
[
3,
1
] |
[] |
[] |
[
"authentication",
"cracking",
"design_patterns",
"python",
"security"
] |
stackoverflow_0001365254_authentication_cracking_design_patterns_python_security.txt
|
Q:
python DST and GMT management into a scheduler
I'm planning to write a scheduler app in Python and I don't want to
get in trouble with DST and GMT handling.
As an example, see also PHP-related question 563053.
Has anyone already worked on something similar?
Is anyone already experienced with pytz - the Python Time Zone Library?
A:
Sure, many of us have worked on calendars / schedulers and are familiar with pytz. What's your specific question, that's not already well answered in the SO question you point to and ITS answers / comments...?
Edit: so there are no special, particular pitfalls if you do things as recommended in the best answers to the other question. In particular, do standardize on UTC internally ("GMT" is an antediluvian term and concept) and convert to/from timezones (w/DST &c) on I/O (just as you should standardize on Unicode internally and encode/decode to bytes, if you must, only on I/O!-).
There's a simple and flexible module in the Python standard library called sched which provides a configurable "event scheduler" and might be at the core of your app, with help from calendar, datetime etc. Some of the recipes in the "Time and Money" chapter of the Python Cookbook, 2nd ed, may help (it's widely available as online pirate copies, though as a co-author I don't necessarily LIKE that fact;-).
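A minimal sketch of sched at work:
import sched, time

def fire():
    print 'event fired at', time.ctime()

s = sched.scheduler(time.time, time.sleep)
s.enter(2, 1, fire, ())   # run fire() two seconds from now
s.run()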
It's hard to say much more without having any idea of what you're writing -- web service, web app, desktop app, or whatever else. Do you want to support vCal, iCalendar, vCalendar, other forms of interop/sync/mashup and if so with what other apps, services and/or de facto standards? Etc, etc -- like all apps, it can grow and grow if it proves successful, of course;-).
A:
pytz works great. Be sure to convert and store your times as UTC and use the pytz/datetime conversion routines to convert to local time. There's an example of usage and timezone conversion here, basically:
import datetime
import pytz

utcdate = datetime.datetime(2008, 1, 31, 22, 56, 13, tzinfo=pytz.utc)
utcdate.astimezone(pytz.timezone('US/Pacific'))
# result:
# datetime.datetime(2008, 1, 31, 14, 56, 13, tzinfo=<DstTzInfo 'US/Pacific' PST-1 day, 16:00:00 STD>)
|
python DST and GMT management into a scheduler
|
I'm planning to write a scheduler app in Python and I don't want to
get in trouble with DST and GMT handling.
As an example, see also PHP-related question 563053.
Has anyone already worked on something similar?
Is anyone already experienced with pytz - the Python Time Zone Library?
|
[
"Sure, many of us have worked on calendars / schedulers and are familiar with pytz. What's your specific question, that's not already well answered in the SO question you point to and ITS answers / comments...?\nEdit: so there are no special, particular pitfalls if you do things as recommended in the best answers to the other question. In particular, do standardize on UTC internally (\"GMT\" is an antediluvian term and concept) and convert to/from timzones (w/DST &c) on I/O (just as you should standardize on Unicode internally and encode/decode to bytes, if you must, only on I/O!-).\nThere's a simple and flexible module in the Python standard library called sched which provides a configurable \"event scheduler\" and might be at the core of your app, with help from calendar, datetime etc. Some of the recipes in the \"Time and Money\" chapter of the Python Cookbook, 2nd ed, may help (it's widely available as online pirate copies, though as a co-author I don't necessarily LIKE that fact;-).\nIt's hard to say much more without having any idea of what you're writing -- web service, web app, desktop app, or whatever else. Do you want to support vCal, iCalendar, vCalendar, other forms of interop/sync/mashup and if so with what other apps, services and/or de facto standards? Etc, etc -- like all apps, it can grow and grow if it proves successful, of course;-).\n",
"pytz works great. Be sure to convert and store your times as UTC and use the pytz/datetime conversion routines to convert to local time. There's an example of usage and timezone conversion here, basically:\nimport datetime\nimport pytz\ndatetime.datetime(2008, 1, 31, 22, 56, 13, tzinfo=<UTC>)\nutcdate.astimezone(pytz.timezone('US/Pacific'))\n# result: \n# datetime.datetime(2008, 1, 31, 14, 56, 13, tzinfo=<DstTzInfo 'US/Pacific' PST-1 day, 16:00:00 STD>)\n\n"
] |
[
3,
3
] |
[] |
[] |
[
"calendar",
"python",
"time",
"timezone",
"utc"
] |
stackoverflow_0001363692_calendar_python_time_timezone_utc.txt
|
Q:
Did anyone get python26 to install in Snow Leopard via MacPorts?
I got a build error after running the following in Snow Leopard (MacPorts v1.8.0):
sudo port install python26
Any workaround, please?
Error: Target org.macports.build returned: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_lang_python26/work/Python-2.6.2" && /usr/bin/make all MAKE="/usr/bin/make CC=/usr/bin/gcc-4.2" " returned error 2
Command output: if test ""; then \
/usr/bin/gcc-4.2 -o Python.framework/Versions/2.6/Python -dynamiclib \
-isysroot "" \
-all_load libpython2.6.a -Wl,-single_module \
-install_name /opt/local/Library/Frameworks/Python.framework/Versions/2.6/Python \
-compatibility_version 2.6 \
-current_version 2.6; \
else \
/usr/bin/libtool -o Python.framework/Versions/2.6/Python -dynamic libpython2.6.a \
-lSystem -lSystemStubs -arch_only i386 -install_name /opt/local/Library/Frameworks/Python.framework/Versions/2.6/Python -compatibility_version 2.6 -current_version 2.6 ;\
fi
/usr/bin/install -c -d -m 755 \
Python.framework/Versions/2.6/Resources/English.lproj
/usr/bin/install -c -m 644 Mac/Resources/framework/Info.plist \
Python.framework/Versions/2.6/Resources/Info.plist
ln -fsn 2.6 Python.framework/Versions/Current
ln -fsn Versions/Current/Python Python.framework/Python
ln -fsn Versions/Current/Headers Python.framework/Headers
ln -fsn Versions/Current/Resources Python.framework/Resources
/usr/bin/gcc-4.2 -L/opt/local/lib -u _PyMac_Error Python.framework/Versions/2.6/Python -o python.exe \
Modules/python.o \
-ldl
ld: warning: in Python.framework/Versions/2.6/Python, file is not of required architecture
Undefined symbols:
"_PyMac_Error", referenced from:
"_Py_Main", referenced from:
_main in python.o
ld: symbol(s) not found
collect2: ld returned 1 exit status
make: *** [python.exe] Error 1
A:
There are apparently problems with Python via MacPorts on Snow Leopard; see this thread. From there, here's an entry suggesting a way to get it working.
|
Did anyone get python26 to install in Snow Leopard via MacPorts?
|
I got a build error after running the following in Snow Leopard (MacPorts v1.8.0):
sudo port install python26
Any workaround, please?
Error: Target org.macports.build returned: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_lang_python26/work/Python-2.6.2" && /usr/bin/make all MAKE="/usr/bin/make CC=/usr/bin/gcc-4.2" " returned error 2
Command output: if test ""; then \
/usr/bin/gcc-4.2 -o Python.framework/Versions/2.6/Python -dynamiclib \
-isysroot "" \
-all_load libpython2.6.a -Wl,-single_module \
-install_name /opt/local/Library/Frameworks/Python.framework/Versions/2.6/Python \
-compatibility_version 2.6 \
-current_version 2.6; \
else \
/usr/bin/libtool -o Python.framework/Versions/2.6/Python -dynamic libpython2.6.a \
-lSystem -lSystemStubs -arch_only i386 -install_name /opt/local/Library/Frameworks/Python.framework/Versions/2.6/Python -compatibility_version 2.6 -current_version 2.6 ;\
fi
/usr/bin/install -c -d -m 755 \
Python.framework/Versions/2.6/Resources/English.lproj
/usr/bin/install -c -m 644 Mac/Resources/framework/Info.plist \
Python.framework/Versions/2.6/Resources/Info.plist
ln -fsn 2.6 Python.framework/Versions/Current
ln -fsn Versions/Current/Python Python.framework/Python
ln -fsn Versions/Current/Headers Python.framework/Headers
ln -fsn Versions/Current/Resources Python.framework/Resources
/usr/bin/gcc-4.2 -L/opt/local/lib -u _PyMac_Error Python.framework/Versions/2.6/Python -o python.exe \
Modules/python.o \
-ldl
ld: warning: in Python.framework/Versions/2.6/Python, file is not of required architecture
Undefined symbols:
"_PyMac_Error", referenced from:
"_Py_Main", referenced from:
_main in python.o
ld: symbol(s) not found
collect2: ld returned 1 exit status
make: *** [python.exe] Error 1
|
[
"There are apparently problems with Python via Macports on Snow Leopard, see this thread. From there, here's an entry suggesting a way to get it working.\n"
] |
[
4
] |
[] |
[] |
[
"macports",
"osx_snow_leopard",
"python"
] |
stackoverflow_0001366542_macports_osx_snow_leopard_python.txt
|
Q:
Replace AppEngine Devserver With Spawning (BaseHTTPRequestHandler as WSGI)
I'm looking to replace AppEngine's devserver with spawning. Spawning handles standard wsgi handlers, just like appengine, so running your app on it is easy.
But the devserver takes into account your app.yaml file that has url redirects etc. I've been going through the devserver code and it is pretty easy to get the BaseHTTPRequestHandler like this:
from google.appengine.tools.dev_appserver import CreateRequestHandler
dev = CreateRequestHandler(os.path.dirname(__file__), '', require_indexes=False, static_caching=True)
But the BaseHTTPRequestHandler is not a WSGI app, so my guess is I need to put something around it to make it work. Any hints?
A:
I don't think you're going to be able to pull out a part of the dev_appserver and use it in a custom WSGI server quite so easily. The dev_appserver does a lot of 'magic', and it isn't really structured to be pulled out and used as a WSGI wrapper in another server (more's the pity).
You may want to check out TwistedAE, which is working on creating an alternate serving environment; if you really want to use spawning, you can probably use TwistedAE's work as a basis.
That said, if you do want to do it yourself, there's a couple of options:
You can write your own shim to interface WSGI with the class returned by CreateRequestHandler. In that case, you need to replicate the interface in BaseHTTPServer.BaseHTTPRequestHandler from the Python SDK. Converting WSGI to that, just so the dev_appserver code can convert it back seems a bit perverse, though.
You can rip out the code from the _HandleRequest method of DevAppServerRequestHandler, modify it to work with WSGI, and create a WSGI app from that (probably your best bet if you want to DIY).
You can start from scratch, which I believe is the approach taken by TwistedAE.
One thing to bear in mind whatever you do: App Engine explicitly expects a single-threaded environment for its apps. Don't use a multithreaded approach if you want apps to work the same locally as they do in production or on the dev_appserver!
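For reference, the WSGI interface any such shim ultimately has to expose to spawning is just a callable of this shape; this is a generic, minimal example, not dev_appserver code:
def application(environ, start_response):
    # environ is a dict of CGI-style request variables;
    # start_response takes a status string and a list of header tuples
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return ['Hello from a plain WSGI app\n']
Whatever you wrap around the dev_appserver's request handler has to end up looking like this to the WSGI server.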
|
Replace AppEngine Devserver With Spawning (BaseHTTPRequestHandler as WSGI)
|
I'm looking to replace AppEngine's devserver with spawning. Spawning handles standard wsgi handlers, just like appengine, so running your app on it is easy.
But the devserver takes into account your app.yaml file that has url redirects etc. I've been going through the devserver code and it is pretty easy to get the BaseHTTPRequestHandler like this:
from google.appengine.tools.dev_appserver import CreateRequestHandler
dev = CreateRequestHandler(os.path.dirname(__file__), '', require_indexes=False, static_caching=True)
But the BaseHTTPRequestHandler is not a WSGI app, so my guess is I need to put something around it to make it work. Any hints?
|
[
"I don't think you're going to be able to pull out a part of the dev_appserver and use it in a custom WSGI server quite so easily. The dev_appserver does a lot of 'magic', and it isn't really structured to be pulled out and used as a WSGI wrapper in another server (more's the pity).\nYou may want to check out TwistedAE, which is working on creating an alternate serving environment; if you really want to use spawning, you can probably use TwistedAE's work as a basis.\nThat said, if you do want to do it yourself, there's a couple of options: \n\nYou can write your own shim to interface WSGI with the class returned by CreateRequestHandler. In that case, you need to replicate the interface in BaseHTTPServer.BaseHTTPRequestHandler from the Python SDK. Converting WSGI to that, just so the dev_appserver code can convert it back seems a bit perverse, though.\nYou can rip out the code from the _HandleRequest method of DevAppServerRequestHandler, modify it to work with WSGI, and create a WSGI app from that (probably your best bet if you want to DIY).\nYou can start from scratch, which I believe is the approach taken by TwistedAE.\n\nOne thing to bear in mind whatever you do: App Engine explicitly expects a single-threaded environment for its apps. Don't use a multithreaded approach if you want apps to work the same locally as they do in production or on the dev_appserver!\n"
] |
[
2
] |
[] |
[] |
[
"google_app_engine",
"python",
"wsgi"
] |
stackoverflow_0001293249_google_app_engine_python_wsgi.txt
|
Q:
What would cause a zip file to not be recognized on Google App Engine when it reads properly in my local GAE SDK?
My code executes successfully when I run it locally, but when I upload it to GAE and attempt to run it, it throws BadZipfile: File is not a zip file, or ends with a comment
raw_file = urllib2.urlopen(url)
buffer = cStringIO.StringIO(raw_file.read())
z = zipfile.ZipFile(buffer)
The zipped file size is 2.5 MB; the unzipped size is 14 MB.
What is the difference in the two environments that is causing this error?
A:
The maximum size you can fetch using urlfetch (App Engine's API for making HTTP requests to other sites) is 1MB, so your file is getting truncated. The dev_appserver doesn't enforce the 1MB limit.
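One possible workaround, sketched under the assumption that the remote server honors HTTP Range requests (fetch_in_chunks is a made-up helper name, and each chunk stays under the 1MB limit):
import urllib2
import cStringIO

CHUNK = 1000000  # keep each request under the 1MB urlfetch limit

def fetch_in_chunks(url):
    buf = cStringIO.StringIO()
    offset = 0
    while True:
        req = urllib2.Request(url)
        req.add_header('Range', 'bytes=%d-%d' % (offset, offset + CHUNK - 1))
        try:
            data = urllib2.urlopen(req).read()
        except urllib2.HTTPError, e:
            if e.code == 416:  # requested range is past the end of the file
                break
            raise
        buf.write(data)
        if len(data) < CHUNK:
            break  # short read means this was the last piece
        offset += CHUNK
    buf.seek(0)
    return buf
The returned buffer can then be handed to zipfile.ZipFile as in the question.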
|
What would cause a zip file to not be recognized on Google App Engine when it reads properly in my local GAE SDK?
|
My code executes successfully when I run it locally, but when I upload it to GAE and attempt to run it, it throws BadZipfile: File is not a zip file, or ends with a comment
raw_file = urllib2.urlopen(url)
buffer = cStringIO.StringIO(raw_file.read())
z = zipfile.ZipFile(buffer)
The zipped file size is 2.5 MB; the unzipped size is 14 MB.
What is the difference in the two environments that is causing this error?
|
[
"The maximum size you can fetch using urlfetch (App Engine's API for making HTTP requests to other sites) is 1MB, so your file is getting truncated. The dev_appserver doesn't enforce the 1MB limit.\n"
] |
[
2
] |
[] |
[] |
[
"google_app_engine",
"python",
"zip"
] |
stackoverflow_0001366274_google_app_engine_python_zip.txt
|
Q:
How to download a webpage every five minutes?
I want to download a list of web pages. I know wget can do this. However, downloading every URL every five minutes and saving the pages to a folder seems beyond the capability of wget.
Does anyone know of a tool in Java, Python, or Perl that accomplishes this task?
Thanks in advance.
A:
Sounds like you'd want to use cron with wget
But if you're set on using python:
import time
import os
wget_command_string = "wget ..."
while True:
    os.system(wget_command_string)
    time.sleep(5 * 60)
A:
Write a bash script that uses wget and put it in your crontab to run every 5 minutes. (*/5 * * * *)
If you need to keep a history of all these web pages, set a variable at the beginning of your script with the current unixtime and append it to the output filenames.
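If you'd rather stay in pure Python, here is a minimal sketch along the same lines; the URL list and the filename pattern are made up for illustration:
import time
import urllib2

urls = ["http://example.com/a", "http://example.com/b"]  # illustrative list

while True:
    stamp = int(time.time())  # unixtime suffix keeps a history of snapshots
    for i, url in enumerate(urls):
        data = urllib2.urlopen(url).read()
        out = open("page%d_%d.html" % (i, stamp), "wb")
        out.write(data)
        out.close()
    time.sleep(5 * 60)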
|
How to download a webpage every five minutes?
|
I want to download a list of web pages. I know wget can do this. However, downloading every URL every five minutes and saving the pages to a folder seems beyond the capability of wget.
Does anyone know of a tool in Java, Python, or Perl that accomplishes this task?
Thanks in advance.
|
[
"Sounds like you'd want to use cron with wget\n\nBut if you're set on using python:\nimport time\nimport os\n\nwget_command_string = \"wget ...\"\n\nwhile true:\n os.system(wget_command_string)\n time.sleep(5*60)\n\n",
"Write a bash script that uses wget and put it in your crontab to run every 5 minutes. (*/5 * * * *)\nIf you need to keep a history of all these web pages, set a variable at the beginning of your script with the current unixtime and append it to the output filenames.\n"
] |
[
7,
5
] |
[] |
[] |
[
"download",
"python",
"web_crawler",
"webpage",
"wget"
] |
stackoverflow_0001367189_download_python_web_crawler_webpage_wget.txt
|
Q:
Diff django model objects with ManyToMany fields
I have a situation where I need to notify some users when something in the DB changes. My idea is to catch the pre_save and post_save signals, make some kind of diff, and mail it. Generally it works well, but I don't know how to get a diff for m2m fields.
At the moment I have something like this:
def pre_save(sender, **kwargs):
pk = kwargs['instance'].pk
instance = copy.deepcopy(sender.objects.get(pk=pk))
tracking[sender] = instance
def post_save(sender, **kwargs):
instance = copy.deepcopy(kwargs['instance'])
print diff(instance, (tracking[sender])) # TODO: don't print, save diff somewhere
The diff function should work for every model (at the moment I have four model classes). With deepcopy I can save the old model, but I don't know how to save m2m fields because they are in a separate table (yes, I know I can get this data, but at the moment of execution I don't know which fields are m2m, and I wouldn't like to create a different slot for every model). What I would like is a generic solution, so I can just add models later without thinking about the notification part.
My plan is to call the get_data() and clear_data() functions after save() in the view, to clean up the diff that the slots have generated.
Is this a good way of doing this? Is there a better way? Is there a Django application that can do this job for me?
Excuse my English, it's not my native language.
A:
First of all, you don't need to use deepcopy for this. Re-querying the sender from the database returns a "fresh" object.
def pre_save(sender, **kwargs):
pk = kwargs['instance'].pk
instance = sender.objects.get(pk=pk)
tracking[sender] = instance
You can get a list of all the many-to-many fields for a class, and check the values related to the current instance:
for field in sender._meta.local_many_to_many:
    values = field.value_from_object(instance)
    # Now values is the list of related objects, which you can diff
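For the plain (non-m2m) fields, the generic diff the question asks about could be sketched like this; diff_instances is a made-up name, using the same _meta introspection:
def diff_instances(old, new):
    """Return {field_name: (old_value, new_value)} for changed plain fields."""
    changes = {}
    for field in new._meta.fields:
        old_value = field.value_from_object(old)
        new_value = field.value_from_object(new)
        if old_value != new_value:
            changes[field.name] = (old_value, new_value)
    return changes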
|
Diff django model objects with ManyToMany fields
|
I have a situation where I need to notify some users when something in the DB changes. My idea is to catch the pre_save and post_save signals, make some kind of diff, and mail it. Generally it works well, but I don't know how to get a diff for m2m fields.
At the moment I have something like this:
def pre_save(sender, **kwargs):
pk = kwargs['instance'].pk
instance = copy.deepcopy(sender.objects.get(pk=pk))
tracking[sender] = instance
def post_save(sender, **kwargs):
instance = copy.deepcopy(kwargs['instance'])
print diff(instance, (tracking[sender])) # TODO: don't print, save diff somewhere
The diff function should work for every model (at the moment I have four model classes). With deepcopy I can save the old model, but I don't know how to save m2m fields because they are in a separate table (yes, I know I can get this data, but at the moment of execution I don't know which fields are m2m, and I wouldn't like to create a different slot for every model). What I would like is a generic solution, so I can just add models later without thinking about the notification part.
My plan is to call the get_data() and clear_data() functions after save() in the view, to clean up the diff that the slots have generated.
Is this a good way of doing this? Is there a better way? Is there a Django application that can do this job for me?
Excuse my English, it's not my native language.
|
[
"First of all, you don't need to use deepcopy for this. Re-querying the sender from the database returns a \"fresh\" object.\ndef pre_save(sender, **kwargs):\n pk = kwargs['instance'].pk\n instance = sender.objects.get(pk=pk)\n tracking[sender] = instance\n\nYou can get a list of all the many-to-many fields for a class, and check the values related to the current instance:\nfor field in sender._meta.local_many:\n values = field.value_from_object(instance).objects.all()\n # Now values is a list of related objects, which you can diff\n\n"
] |
[
6
] |
[] |
[] |
[
"diff",
"django",
"models",
"python"
] |
stackoverflow_0001365963_diff_django_models_python.txt
|