Dataset fields:
content: string (length 85 to 101k)
title: string (length 0 to 150)
question: string (length 15 to 48k)
answers: list
answers_scores: list
non_answers: list
non_answers_scores: list
tags: list
name: string (length 35 to 137)
Q: How to read a structure containing an array using Python's ctypes and readinto? We have some binary files created by a C program. One type of file is created by calling fwrite to write the following C structure to file: typedef struct { unsigned long int foo; unsigned short int bar; unsigned short int bow; } easyStruc; In Python, I read the structs of this file as follows: class easyStruc(Structure): _fields_ = [ ("foo", c_ulong), ("bar", c_ushort), ("bow", c_ushort) ] f = open (filestring, 'rb') record = censusRecord() while (f.readinto(record) != 0): ##do stuff f.close() That works fine. Our other type of file is created using the following structure: typedef struct { // bin file (one file per year) unsigned long int foo; float barFloat[4]; float bowFloat[17]; } strucWithArrays; I'm not sure how to create the structure in Python. A: According to this documentation page (section: 15.15.1.13. Arrays), it should be something like: class strucWithArrays(Structure): _fields_ = [ ("foo", c_ulong), ("barFloat", c_float * 4), ("bowFloat", c_float * 17)] Check that documentation page for other examples. A: There's a section about arrays in ctypes in the documentation. Basically this means: class structWithArray(Structure): _fields_ = [ ("foo", c_ulong), ("barFloat", c_float * 4), ("bowFloat", c_float * 17) ]
How to read a structure containing an array using Python's ctypes and readinto?
We have some binary files created by a C program. One type of file is created by calling fwrite to write the following C structure to file: typedef struct { unsigned long int foo; unsigned short int bar; unsigned short int bow; } easyStruc; In Python, I read the structs of this file as follows: class easyStruc(Structure): _fields_ = [ ("foo", c_ulong), ("bar", c_ushort), ("bow", c_ushort) ] f = open (filestring, 'rb') record = easyStruc() while (f.readinto(record) != 0): ##do stuff f.close() That works fine. Our other type of file is created using the following structure: typedef struct { // bin file (one file per year) unsigned long int foo; float barFloat[4]; float bowFloat[17]; } strucWithArrays; I'm not sure how to create the structure in Python.
[ "According to this documentation page (section: 15.15.1.13. Arrays), it should be something like:\nclass strucWithArrays(Structure):\n _fields_ = [\n (\"foo\", c_ulong),\n (\"barFloat\", c_float * 4),\n (\"bowFloat\", c_float * 17)]\n\nCheck that documentation page for other examples.\n", "There's a section about arrays in ctypes in the documentation.\nBasically this means:\nclass structWithArray(Structure):\n _fields_ = [\n (\"foo\", c_ulong),\n (\"barFloat\", c_float * 4),\n (\"bowFloat\", c_float * 17)\n ]\n\n" ]
[ 10, 2 ]
[]
[]
[ "ctypes", "python" ]
stackoverflow_0001444159_ctypes_python.txt
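A minimal runnable sketch combining the question's read loop with the answers' array fields (the file name is illustrative, and it assumes the C compiler added no struct padding; otherwise a _pack_ attribute would be needed):

from ctypes import Structure, c_ulong, c_float

class StrucWithArrays(Structure):
    _fields_ = [
        ("foo", c_ulong),
        ("barFloat", c_float * 4),   # same layout as float barFloat[4] in C
        ("bowFloat", c_float * 17),
    ]

record = StrucWithArrays()
with open("data.bin", "rb") as f:       # hypothetical file name
    while f.readinto(record):           # readinto returns 0 at end of file
        print(record.foo, list(record.barFloat))  # ctypes arrays support indexing and iteration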
Q: Example code for a python server wrapper I have a command line server for which I want to create a wrapper in python. The idea is that the wrapper receives commands like: my_wrapper start my_wrapper stop my_wrapper restart my_wrapper status And handles the server in the background, detached from the terminal that launched it from the wrapper. I was about to start thinking about how to do it and thought of the golden rule, DRY. Do you know of any example code I should start reading before starting my first line? Update: I noticed I didn't include that the server is a jar file, so I'll have to run it using subprocess or something similar. I'd prefer not to use modules that are not included in python's standard lib. A: You could use an implementation of PEP 3143 - Standard daemon process library. One existing implementation is python-daemon.
Example code for a python server wrapper
I have a command line server for which I want to create a wrapper in python. The idea is that the wrapper receives commands like: my_wrapper start my_wrapper stop my_wrapper restart my_wrapper status And handles the server in the background, detached from the terminal that launched it from the wrapper. I was about to start thinking about how to do it and thought of the golden rule, DRY. Do you know of any example code I should start reading before starting my first line? Update: I noticed I didn't include that the server is a jar file, so I'll have to run it using subprocess or something similar. I'd prefer not to use modules that are not included in python's standard lib.
[ "You could use an implementation of PEP 3143 - Standard daemon process library. One existing implementation is python-daemon.\n" ]
[ 1 ]
[]
[]
[ "python", "wrapper" ]
stackoverflow_0001444358_python_wrapper.txt
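A stdlib-only, POSIX-only sketch of the wrapper's skeleton (the paths and jar name are placeholders, and unlike a real PEP 3143 daemon it does not fully detach from the controlling terminal):

import os, signal, subprocess, sys

PIDFILE = "/tmp/my_server.pid"   # hypothetical pid-file location
JAR = "/opt/server/server.jar"   # hypothetical jar path

def start():
    p = subprocess.Popen(["java", "-jar", JAR],
                         stdout=open(os.devnull, "w"),
                         stderr=subprocess.STDOUT)
    open(PIDFILE, "w").write(str(p.pid))

def stop():
    os.kill(int(open(PIDFILE).read()), signal.SIGTERM)
    os.remove(PIDFILE)

def status():
    try:
        os.kill(int(open(PIDFILE).read()), 0)  # signal 0 only probes that the pid exists
        print("running")
    except (IOError, OSError):
        print("stopped")

if __name__ == "__main__":
    command = sys.argv[1]
    if command == "restart":
        stop()
        start()
    else:
        {"start": start, "stop": stop, "status": status}[command]()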
Q: Django Models internal methods I'm new to Django so I just made up a project to get to know it, but I'm having a little problem with this code. I want to be able to ask the car object if it is available, so I do a: >>>cars = Car.objects.all() >>>print cars[0].category >>>'A' >>>cars[0].available(fr, to) that results in a: >>>global name 'category' is not defined So it seems that I don't have access to the self.category within the class, any ideas? from django.db import models class Car(models.Model): TRANSMISSION_CHOICES = ( ('M', 'Manual'), ('A', 'Automatic'), ) category = models.CharField("Category",max_length=1,primary_key=True) description = models.CharField("Description",max_length=200) numberOfCars = models.IntegerField("Number of cars") numberOfDoors = models.IntegerField("Number of doors") transmission = models.CharField("Transmission", max_length=1, choices=TRANSMISSION_CHOICES) passengers = models.IntegerField("Number of passengers") image = models.ImageField("Image", upload_to="photos/%Y/%m/%d") def available(self, fr, to): rents = Rent.objects.filter(car=self.category) rents = rents.excludes(start < fr) rents = rents.exclude(end > to) return cont(rents) def __unicode__(self): return "Class " + self.category class Rent(models.Model): car = models.ForeignKey(Car) start = models.DateTimeField() end = models.DateTimeField() childrenSeat = models.BooleanField() extraDriver = models.BooleanField() def __unicode__(self): return str(self.car) + " From: " + str(self.start) + " To: " + str(self.end) A: Although I can't see how the error you are getting relates to it, the filter you are using doesn't look correct. You define category as a string in the Car model: category = models.CharField("Category",max_length=1,primary_key=True) And define car as a foreignkey in the Rent model: car = models.ForeignKey(Car) And then you try and filter Rents: rents = Rent.objects.filter(car=self.category) But car should be a model here, not a charfield. Perhaps you meant to say car=self?
Django Models internal methods
I'm new to Django so I just made up a project to get to know it, but I'm having a little problem with this code. I want to be able to ask the car object if it is available, so I do a: >>>cars = Car.objects.all() >>>print cars[0].category >>>'A' >>>cars[0].available(fr, to) that results in a: >>>global name 'category' is not defined So it seems that I don't have access to the self.category within the class, any ideas? from django.db import models class Car(models.Model): TRANSMISSION_CHOICES = ( ('M', 'Manual'), ('A', 'Automatic'), ) category = models.CharField("Category",max_length=1,primary_key=True) description = models.CharField("Description",max_length=200) numberOfCars = models.IntegerField("Number of cars") numberOfDoors = models.IntegerField("Number of doors") transmission = models.CharField("Transmission", max_length=1, choices=TRANSMISSION_CHOICES) passengers = models.IntegerField("Number of passengers") image = models.ImageField("Image", upload_to="photos/%Y/%m/%d") def available(self, fr, to): rents = Rent.objects.filter(car=self.category) rents = rents.excludes(start < fr) rents = rents.exclude(end > to) return cont(rents) def __unicode__(self): return "Class " + self.category class Rent(models.Model): car = models.ForeignKey(Car) start = models.DateTimeField() end = models.DateTimeField() childrenSeat = models.BooleanField() extraDriver = models.BooleanField() def __unicode__(self): return str(self.car) + " From: " + str(self.start) + " To: " + str(self.end)
[ "Although I can't see how the error you are getting relates to it, the filter you are using doesn't look correct.\nYou define category as a string in the Car model:\ncategory = models.CharField(\"Category\",max_length=1,primary_key=True)\n\nAnd define car as a foreignkey in the Rent model:\ncar = models.ForeignKey(Car)\n\nAnd then you try and filter Rents:\nrents = Rent.objects.filter(car=self.category)\n\nBut car should be a model here, not a charfield. Perhaps you meant to say car=self?\n" ]
[ 0 ]
[]
[]
[ "django", "django_models", "python" ]
stackoverflow_0001444222_django_django_models_python.txt
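Putting the answer's correction together with working query syntax, the method would look roughly like this (untested sketch; exclude() takes field lookups rather than bare comparisons, counting uses .count() rather than the undefined cont(), and the availability rule at the end is a guess):

def available(self, fr, to):
    # rents of this car category that overlap the period [fr, to]
    rents = Rent.objects.filter(car=self)      # car=self, not car=self.category
    rents = rents.exclude(start__gt=to)        # started after the window ends
    rents = rents.exclude(end__lt=fr)          # ended before the window starts
    return rents.count() < self.numberOfCars   # hypothetical rule: free if not all cars are rented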
Q: Werkzeug and SQLAlchemy 0.5x session Updated: Going through the Werkzeug link text tutorial, I got stuck creating the SQLAlchemy session using sessionmaker() instead of create_session() as recommended. Note: it is not about SA, it is about Werkzeug. Werkzeug tutorial: session = scoped_session(lambda: create_session(bind=application.database_engine, autoflush=True, autocommit=False), local_manager.get_ident) I asked how to achieve the same using sessionmaker(): As a result, guys from the #pocoo IRC channel helped me with this: session = scoped_session(lambda: sessionmaker(bind=application.database_engine)(), local_manager.get_ident) Without the () at the end of sessionmaker(**args) it kept giving me an error: RuntimeError: no object bound to application P.S. If the lambda is deleted, it will not work. A: sessionmaker() returns a session factory, not a session itself. scoped_session() takes a session factory as argument. So just omit the lambda: and pass the result of sessionmaker() directly to scoped_session().
Werkzeug and SQLAlchemy 0.5x session
Updated: Going through the Werkzeug link text tutorial, I got stuck creating the SQLAlchemy session using sessionmaker() instead of create_session() as recommended. Note: it is not about SA, it is about Werkzeug. Werkzeug tutorial: session = scoped_session(lambda: create_session(bind=application.database_engine, autoflush=True, autocommit=False), local_manager.get_ident) I asked how to achieve the same using sessionmaker(): As a result, guys from the #pocoo IRC channel helped me with this: session = scoped_session(lambda: sessionmaker(bind=application.database_engine)(), local_manager.get_ident) Without the () at the end of sessionmaker(**args) it kept giving me an error: RuntimeError: no object bound to application P.S. If the lambda is deleted, it will not work.
[ "sessionmaker() returns a session factory, not a session itself. scoped_session() takes a session factory as argument. So just omit the lambda: and pass the result of sessionmaker() directly to scoped_session().\n" ]
[ 4 ]
[]
[]
[ "python", "sqlalchemy", "werkzeug" ]
stackoverflow_0001444735_python_sqlalchemy_werkzeug.txt
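Spelled out, the lambda-free version the answer describes would be (sketch against the SQLAlchemy 0.5 API the question uses):

from sqlalchemy.orm import scoped_session, sessionmaker

session = scoped_session(
    sessionmaker(bind=application.database_engine,  # pass the factory itself, uncalled
                 autoflush=True, autocommit=False),
    local_manager.get_ident)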
Q: What is the best-maintained generic functions implementation for Python? A generic function is dispatched based on the type of all its arguments. The programmer defines several implementations of a function. The correct one is chosen at call time based on the types of its arguments. This is useful for object adaptation among other things. Python has a few generic functions including len(). These packages tend to allow code that looks like this: @when(int) def dumbexample(a): return a * 2 @when(list) def dumbexample(a): return [("%s" % i) for i in a] dumbexample(1) # calls first implementation dumbexample([1,2,3]) # calls second implementation A less dumb example I've been thinking about lately would be a web component that requires a User. Instead of requiring a particular web framework, the integrator would just need to write something like: class WebComponentUserAdapter(object): def __init__(self, guest): self.guest = guest def canDoSomething(self): return self.guest.member_of("something_group") @when(my.webframework.User) def componentNeedsAUser(user): return WebComponentUserAdapter(user) Python has a few generic functions implementations. Why would I choose one over the others? How is that implementation being used in applications? I'm familiar with Zope's zope.component.queryAdapter(object, ISomething). The programmer registers a callable adapter that takes a particular class of object as its argument and returns something compatible with the interface. It's a useful way to allow plugins. Unlike monkey patching, it works even if an object needs to adapt to multiple interfaces with the same method names. A: I'd recommend the PEAK-Rules library by P. Eby. By the same author (deprecated though) is the RuleDispatch package (the predecessor of PEAK-Rules). The latter is no longer maintained, IIRC. PEAK-Rules has a lot of nice features, one being that it is (well, not easily, but) extensible. Besides "classic" dispatch on types only, it features dispatch on arbitrary expressions as "guardians". The len() function is not a true generic function (at least in the sense of the packages mentioned above, and also in the sense this term is used in languages like Common Lisp, Dylan or Cecil), as it is simply a convenient syntax for a call to a specially named (but otherwise regular) method: len(s) == s.__len__() Also note that this is single-dispatch only, that is, the actual receiver (s in the code above) determines the method implementation called. And even a hypothetical def call_special(receiver, *args, **keys): return receiver.__call_special__(*args, **keys) is still a single-dispatch function, as only the receiver is used when the method to be called is resolved. The remaining arguments are simply passed on, but they don't affect the method selection. This is different from multiple-dispatch, where there is no dedicated receiver, and all arguments are used in order to find the actual method implementation to call. This is what actually makes the whole thing worthwhile. If it were only some odd kind of syntactic sugar, nobody would bother with using it, IMHO.
from peak.rules import abstract, when @abstract def serialize_object(object, target): pass @when(serialize_object, (MyStuff, BinaryStream)) def serialize_object(object, target): target.writeUInt32(object.identifier) target.writeString(object.payload) @when(serialize_object, (MyStuff, XMLStream)) def serialize_object(object, target): target.openElement("my-stuff") target.writeAttribute("id", str(object.identifier)) target.writeText(object.payload) target.closeElement() In this example, a call like serialize_object(MyStuff(10, "hello world"), XMLStream()) considers both arguments in order to decide which method must actually be called. For a nice usage scenario of generic functions in Python I'd recommend reading the refactored code of peak.security, which gives a very elegant solution to access permission checking using generic functions (using RuleDispatch). A: You can use a construction like this: def my_func(*args, **kwargs): pass In this case args will be a list of any unnamed arguments, and kwargs will be a dictionary of the named ones. From here you can detect their types and act as appropriate. A: I'm unable to see the point in these "generic" functions. It sounds like simple polymorphism. Your "generic" features can be implemented like this without resorting to any run-time type identification. class intWithDumbExample( int ): def dumbexample( self ): return self*2 class listWithDumbExample( list ): def dumbexample( self ): return [("%s" % i) for i in self] def dumbexample( a ): return a.dumbexample()
What is the best-maintained generic functions implementation for Python?
A generic function is dispatched based on the type of all its arguments. The programmer defines several implementations of a function. The correct one is chosen at call time based on the types of its arguments. This is useful for object adaptation among other things. Python has a few generic functions including len(). These packages tend to allow code that looks like this: @when(int) def dumbexample(a): return a * 2 @when(list) def dumbexample(a): return [("%s" % i) for i in a] dumbexample(1) # calls first implementation dumbexample([1,2,3]) # calls second implementation A less dumb example I've been thinking about lately would be a web component that requires a User. Instead of requiring a particular web framework, the integrator would just need to write something like: class WebComponentUserAdapter(object): def __init__(self, guest): self.guest = guest def canDoSomething(self): return self.guest.member_of("something_group") @when(my.webframework.User) def componentNeedsAUser(user): return WebComponentUserAdapter(user) Python has a few generic functions implementations. Why would I choose one over the others? How is that implementation being used in applications? I'm familiar with Zope's zope.component.queryAdapter(object, ISomething). The programmer registers a callable adapter that takes a particular class of object as its argument and returns something compatible with the interface. It's a useful way to allow plugins. Unlike monkey patching, it works even if an object needs to adapt to multiple interfaces with the same method names.
[ "I'd recommend the PEAK-Rules library by P. Eby. By the same author (deprecated though) is the RuleDispatch package (the predecessor of PEAK-Rules). The latter is no longer maintained, IIRC. \nPEAK-Rules has a lot of nice features, one being that it is (well, not easily, but) extensible. Besides \"classic\" dispatch on types only, it features dispatch on arbitrary expressions as \"guardians\".\nThe len() function is not a true generic function (at least in the sense of the packages mentioned above, and also in the sense this term is used in languages like Common Lisp, Dylan or Cecil), as it is simply a convenient syntax for a call to a specially named (but otherwise regular) method:\nlen(s) == s.__len__()\n\nAlso note that this is single-dispatch only, that is, the actual receiver (s in the code above) determines the method implementation called. And even a hypothetical\ndef call_special(receiver, *args, **keys):\n return receiver.__call_special__(*args, **keys)\n\nis still a single-dispatch function, as only the receiver is used when the method to be called is resolved. The remaining arguments are simply passed on, but they don't affect the method selection. \nThis is different from multiple-dispatch, where there is no dedicated receiver, and all arguments are used in order to find the actual method implementation to call. This is what actually makes the whole thing worthwhile. If it were only some odd kind of syntactic sugar, nobody would bother with using it, IMHO.\nfrom peak.rules import abstract, when\n\n@abstract\ndef serialize_object(object, target):\n pass\n\n@when(serialize_object, (MyStuff, BinaryStream))\ndef serialize_object(object, target):\n target.writeUInt32(object.identifier)\n target.writeString(object.payload)\n\n@when(serialize_object, (MyStuff, XMLStream))\ndef serialize_object(object, target):\n target.openElement(\"my-stuff\")\n target.writeAttribute(\"id\", str(object.identifier))\n target.writeText(object.payload)\n target.closeElement()\n\nIn this example, a call like\nserialize_object(MyStuff(10, \"hello world\"), XMLStream())\n\nconsiders both arguments in order to decide which method must actually be called.\nFor a nice usage scenario of generic functions in Python I'd recommend reading the refactored code of peak.security, which gives a very elegant solution to access permission checking using generic functions (using RuleDispatch).\n", "You can use a construction like this:\ndef my_func(*args, **kwargs):\n pass\n\nIn this case args will be a list of any unnamed arguments, and kwargs will be a dictionary of the named ones. From here you can detect their types and act as appropriate.\n", "I'm unable to see the point in these \"generic\" functions. It sounds like simple polymorphism.\nYour \"generic\" features can be implemented like this without resorting to any run-time type identification. \nclass intWithDumbExample( int ):\n def dumbexample( self ):\n return self*2\n\nclass listWithDumbExample( list ):\n def dumbexample( self ):\n return [(\"%s\" % i) for i in self]\n\ndef dumbexample( a ):\n return a.dumbexample()\n\n" ]
[ 7, 0, 0 ]
[]
[]
[ "generics", "python" ]
stackoverflow_0001445065_generics_python.txt
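For flavor, a toy multiple-dispatch decorator in the @when style fits in a few lines (exact-type matching only, with no predicates or subclass handling — the things libraries like PEAK-Rules actually earn their keep on):

_registry = {}

def when(*types):
    def register(func):
        # record this implementation under (function name, argument types)
        _registry[(func.__name__,) + types] = func
        def dispatch(*args):
            key = (func.__name__,) + tuple(type(a) for a in args)
            return _registry[key](*args)
        return dispatch
    return register

@when(int)
def dumbexample(a):
    return a * 2

@when(list)
def dumbexample(a):
    return ["%s" % i for i in a]

print(dumbexample(1))          # 2
print(dumbexample([1, 2, 3]))  # ['1', '2', '3']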
Q: How can I find out why subprocess.Popen wait() waits forever if stdout=PIPE? I have a program that writes to stdout and possibly stderr. I want to run it from python, capturing the stdout and stderr. My code looks like: from subprocess import * p = Popen( exe, shell=True, stdout=PIPE, stderr=PIPE ) rtrncode = p.wait() For a couple of programs, this works fine, but when I added a new one, the new one hangs forever. If I remove stdout=PIPE, the program writes its output to the console and finishes and everything is fine. How can I determine what's causing the hang? Using python 2.5 on Windows XP. The program does not read from stdin nor does it have any kind of user input (i.e. "hit a key"). A: When a pipe's buffer fills up (typically 4KB or so), the writing process stops until a reading process has read some of the data in question; but here you're reading nothing until the subprocess is done, hence the deadlock. The docs on wait put it very clearly indeed: Warning This will deadlock if the child process generates enough output to a stdout or stderr pipe such that it blocks waiting for the OS pipe buffer to accept more data. Use communicate() to avoid that. If you can't use communicate for some reason, have the subprocess write to a temporary file, and then you can wait and read that file when it's ready -- writing to a file, instead of to a pipe, does not risk deadlock. A: Take a look at the docs. It states that you shouldn't use wait as it can cause a deadlock. Try using communicate.
How can I find out why subprocess.Popen wait() waits forever if stdout=PIPE?
I have a program that writes to stdout and possibly stderr. I want to run it from python, capturing the stdout and stderr. My code looks like: from subprocess import * p = Popen( exe, shell=True, stdout=PIPE, stderr=PIPE ) rtrncode = p.wait() For a couple of programs, this works fine, but when I added a new one, the new one hangs forever. If I remove stdout=PIPE, the program writes its output to the console and finishes and everything is fine. How can I determine what's causing the hang? Using python 2.5 on Windows XP. The program does not read from stdin nor does it have any kind of user input (i.e. "hit a key").
[ "When a pipe's buffer fills up (typically 4KB or so), the writing process stops until a reading process has read some of the data in question; but here you're reading nothing until the subprocess is done, hence the deadlock. The docs on wait put it very clearly indeed:\n\nWarning This will deadlock if the\n child process generates enough output\n to a stdout or stderr pipe such that\n it blocks waiting for the OS pipe\n buffer to accept more data. Use\n communicate() to avoid that.\n\nIf you can't use communicate for some reason, have the subprocess write to a temporary file, and then you can wait and read that file when it's ready -- writing to a file, instead of to a pipe, does not risk deadlock.\n", "Take a look at the docs. It states that you shouldn't use wait as it can cause a deadlock. Try using communicate.\n" ]
[ 52, 3 ]
[]
[]
[ "python", "subprocess" ]
stackoverflow_0001445627_python_subprocess.txt
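The corrected version of the question's snippet, per both answers (sketch):

from subprocess import Popen, PIPE

exe = "some_command"         # placeholder for the program from the question
p = Popen(exe, shell=True, stdout=PIPE, stderr=PIPE)
out, err = p.communicate()   # drains both pipes while waiting, so neither buffer can fill
rtrncode = p.returncode      # set once communicate() has returned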
Q: Introduction to the Python Clutter bindings? I've had a search around but I haven't been able to find decent online tutorials for the recent clutter bindings. There are guides for 0.4 and 0.6 around but 0.8 is supposed to be very different making these guides kind of useless. Links or examples greatly appreciated! A: these docs seem to be pretty up to date.
Introduction to the Python Clutter bindings?
I've had a search around but I haven't been able to find decent online tutorials for the recent clutter bindings. There are guides for 0.4 and 0.6 around but 0.8 is supposed to be very different making these guides kind of useless. Links or examples greatly appreciated!
[ "these docs seem to be pretty up to date.\n" ]
[ 6 ]
[]
[]
[ "binding", "clutter_gui", "python", "user_interface" ]
stackoverflow_0001445633_binding_clutter_gui_python_user_interface.txt
Q: In Django, how to call a subprocess with a slow start-up time Suppose you're running Django on Linux, and you've got a view, and you want that view to return the data from a subprocess called cmd that operates on a file that the view creates, for example like so: def call_subprocess(request): response = HttpResponse() with tempfile.NamedTemporaryFile("W") as f: f.write(request.GET['data']) # i.e. some data # cmd operates on fname and returns output p = subprocess.Popen(["cmd", f.name], stdout=subprocess.PIPE, stderr=subprocess.PIPE) out, err = p.communicate() response.write(out) # would be text/plain... return response Now, suppose cmd has a very slow start-up time, but a very fast operating time, and it does not natively have a daemon mode. I would like to improve the response-time of this view. I would like to make the whole system run much faster by starting up a number of instances of cmd in a worker-pool, having them wait for input, and having call_process ask one of those worker pool processes to handle the data. This is really 2 parts: Part 1. A function that calls cmd and cmd waits for input. This could be done with pipes, i.e. def _run_subcmd(): p = subprocess.Popen(["cmd", fname], stdout=subprocess.PIPE, stderr=subprocess.PIPE) out, err = p.communicate() # write 'out' to a tmp file o = open("out.txt", "W") o.write(out) o.close() p.close() exit() def _run_cmd(data): f = tempfile.NamedTemporaryFile("W") pipe = os.mkfifo(f.name) if os.fork() == 0: _run_subcmd(fname) else: f.write(data) r = open("out.txt", "r") out = r.read() # read 'out' from a tmp file return out def call_process(request): response = HttpResponse() out = _run_cmd(request.GET['data']) response.write(out) # would be text/plain... return response Part 2. A set of workers running in the background that are waiting on the data. i.e. We want to extend the above so that the subprocess is already running, e.g. when the Django instance initializes, or this call_process is first called, a set of these workers is created WORKER_COUNT = 6 WORKERS = [] class Worker(object): def __init__(index): self.tmp_file = tempfile.NamedTemporaryFile("W") # get a tmp file name os.mkfifo(self.tmp_file.name) self.p = subprocess.Popen(["cmd", self.tmp_file], stdout=subprocess.PIPE, stderr=subprocess.PIPE) self.index = index def run(out_filename, data): WORKERS[self.index] = Null # qua-mutex?? self.tmp_file.write(data) if (os.fork() == 0): # does the child have access to self.p?? out, err = self.p.communicate() o = open(out_filename, "w") o.write(out) exit() self.p.close() self.o.close() self.tmp_file.close() WORKERS[self.index] = Worker(index) # replace this one return out_file @classmethod def get_worker() # get the next worker # ... static, incrementing index There should be some initialization of workers somewhere, like this: def init_workers(): # create WORKERS_COUNT workers for i in xrange(0, WORKERS_COUNT): tmp_file = tempfile.NamedTemporaryFile() WORKERS.push(Worker(i)) Now, what I have above becomes something like so: def _run_cmd(data): Worker.get_worker() # this needs to be atomic & lock worker at Worker.index fifo = open(tempfile.NamedTemporaryFile("r")) # this stores output of cmd Worker.run(fifo.name, data) # please ignore the fact that everything will be # appended to out.txt ... these will be tmp files, too, but named elsewhere. out = fifo.read() # read 'out' from a tmp file return out def call_process(request): response = HttpResponse() out = _run_cmd(request.GET['data']) response.write(out) # would be text/plain...
return response Now, the questions: Will this work? (I've just typed this off the top of my head into StackOverflow, so I'm sure there are problems, but conceptually, will it work) What are the problems to look for? Are there better alternatives to this? e.g. Could threads work just as well (it's Debian Lenny Linux)? Are there any libraries that handle parallel process worker-pools like this? Are there interactions with Django that I ought to be conscious of? Thanks for reading! I hope you find this as interesting a problem as I do. Brian A: It may seem like I am punting this product as this is the second time I have responded with a recommendation of this. But it seems like you need a Message Queuing service, in particular a distributed message queue. Here is how it will work: Your Django App requests CMD CMD gets added to a queue CMD gets pushed to several workers It is executed and results returned upstream Most of this code exists, and you don't have to go about building your own system. Have a look at Celery which was initially built with Django. http://www.celeryq.org/ http://robertpogorzelski.com/blog/2009/09/10/rabbitmq-celery-and-django/ A: Issy already mentioned Celery, but since comments don't work well with code samples, I'll reply as an answer instead. You should try to use Celery synchronously with the AMQP result store. You could distribute the actual execution to another process or even another machine. Executing synchronously in celery is easy, e.g.: >>> from celery.task import Task >>> from celery.registry import tasks >>> class MyTask(Task): ... ... def run(self, x, y): ... return x * y >>> tasks.register(MyTask) >>> async_result = MyTask.delay(2, 2) >>> retval = async_result.get() # Now synchronous >>> retval 4 The AMQP result store makes sending back the result very fast, but it's only available in the current development version (in code-freeze to become 0.8.0) A: How about "daemonizing" the subprocess call using python-daemon or its successor, grizzled.
In Django, how to call a subprocess with a slow start-up time
Suppose you're running Django on Linux, and you've got a view, and you want that view to return the data from a subprocess called cmd that operates on a file that the view creates, for example like so: def call_subprocess(request): response = HttpResponse() with tempfile.NamedTemporaryFile("W") as f: f.write(request.GET['data']) # i.e. some data # cmd operates on fname and returns output p = subprocess.Popen(["cmd", f.name], stdout=subprocess.PIPE, stderr=subprocess.PIPE) out, err = p.communicate() response.write(out) # would be text/plain... return response Now, suppose cmd has a very slow start-up time, but a very fast operating time, and it does not natively have a daemon mode. I would like to improve the response-time of this view. I would like to make the whole system run much faster by starting up a number of instances of cmd in a worker-pool, having them wait for input, and having call_process ask one of those worker pool processes to handle the data. This is really 2 parts: Part 1. A function that calls cmd and cmd waits for input. This could be done with pipes, i.e. def _run_subcmd(): p = subprocess.Popen(["cmd", fname], stdout=subprocess.PIPE, stderr=subprocess.PIPE) out, err = p.communicate() # write 'out' to a tmp file o = open("out.txt", "W") o.write(out) o.close() p.close() exit() def _run_cmd(data): f = tempfile.NamedTemporaryFile("W") pipe = os.mkfifo(f.name) if os.fork() == 0: _run_subcmd(fname) else: f.write(data) r = open("out.txt", "r") out = r.read() # read 'out' from a tmp file return out def call_process(request): response = HttpResponse() out = _run_cmd(request.GET['data']) response.write(out) # would be text/plain... return response Part 2. A set of workers running in the background that are waiting on the data. i.e. We want to extend the above so that the subprocess is already running, e.g. when the Django instance initializes, or this call_process is first called, a set of these workers is created WORKER_COUNT = 6 WORKERS = [] class Worker(object): def __init__(index): self.tmp_file = tempfile.NamedTemporaryFile("W") # get a tmp file name os.mkfifo(self.tmp_file.name) self.p = subprocess.Popen(["cmd", self.tmp_file], stdout=subprocess.PIPE, stderr=subprocess.PIPE) self.index = index def run(out_filename, data): WORKERS[self.index] = Null # qua-mutex?? self.tmp_file.write(data) if (os.fork() == 0): # does the child have access to self.p?? out, err = self.p.communicate() o = open(out_filename, "w") o.write(out) exit() self.p.close() self.o.close() self.tmp_file.close() WORKERS[self.index] = Worker(index) # replace this one return out_file @classmethod def get_worker() # get the next worker # ... static, incrementing index There should be some initialization of workers somewhere, like this: def init_workers(): # create WORKERS_COUNT workers for i in xrange(0, WORKERS_COUNT): tmp_file = tempfile.NamedTemporaryFile() WORKERS.push(Worker(i)) Now, what I have above becomes something like so: def _run_cmd(data): Worker.get_worker() # this needs to be atomic & lock worker at Worker.index fifo = open(tempfile.NamedTemporaryFile("r")) # this stores output of cmd Worker.run(fifo.name, data) # please ignore the fact that everything will be # appended to out.txt ... these will be tmp files, too, but named elsewhere. out = fifo.read() # read 'out' from a tmp file return out def call_process(request): response = HttpResponse() out = _run_cmd(request.GET['data']) response.write(out) # would be text/plain... return response Now, the questions: Will this work?
(I've just typed this off the top of my head into StackOverflow, so I'm sure there are problems, but conceptually, will it work) What are the problems to look for? Are there better alternatives to this? e.g. Could threads work just as well (it's Debian Lenny Linux)? Are there any libraries that handle parallel process worker-pools like this? Are there interactions with Django that I ought to be conscious of? Thanks for reading! I hope you find this as interesting a problem as I do. Brian
[ "It may seem like I am punting this product as this is the second time I have responded with a recommendation of this.\nBut it seems like you need a Message Queuing service, in particular a distributed message queue.\nHere is how it will work:\n\nYour Django App requests CMD\nCMD gets added to a queue\nCMD gets pushed to several workers\nIt is executed and results returned upstream\n\nMost of this code exists, and you don't have to go about building your own system.\nHave a look at Celery which was initially built with Django. \nhttp://www.celeryq.org/\nhttp://robertpogorzelski.com/blog/2009/09/10/rabbitmq-celery-and-django/\n", "Issy already mentioned Celery, but since comments don't work well\nwith code samples, I'll reply as an answer instead.\nYou should try to use Celery synchronously with the AMQP result store.\nYou could distribute the actual execution to another process or even another machine. Executing synchronously in celery is easy, e.g.:\n>>> from celery.task import Task\n>>> from celery.registry import tasks\n\n>>> class MyTask(Task):\n...\n... def run(self, x, y):\n... return x * y \n>>> tasks.register(MyTask)\n\n>>> async_result = MyTask.delay(2, 2)\n>>> retval = async_result.get() # Now synchronous\n>>> retval 4\n\nThe AMQP result store makes sending back the result very fast,\nbut it's only available in the current development version (in code-freeze to become\n0.8.0)\n", "How about \"daemonizing\" the subprocess call using python-daemon or its successor, grizzled. \n" ]
[ 3, 3, 0 ]
[]
[]
[ "django", "fork", "multithreading", "python", "subprocess" ]
stackoverflow_0001428900_django_fork_multithreading_python_subprocess.txt
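If an external queue is too heavy, the persistent-worker idea from the question can also be sketched with a thread-safe pool of long-lived subprocesses (this assumes cmd can be driven line-by-line over stdin/stdout, which its fifo-based design implies):

import subprocess
from Queue import Queue   # 'queue' on Python 3

POOL = Queue()

def init_workers(n=6):
    for _ in range(n):
        # each instance pays cmd's slow start-up cost exactly once, here
        POOL.put(subprocess.Popen(["cmd"], stdin=subprocess.PIPE,
                                  stdout=subprocess.PIPE))

def run_cmd(data):
    p = POOL.get()   # blocks until a worker is free; Queue handles the locking
    try:
        p.stdin.write(data + "\n")
        p.stdin.flush()
        return p.stdout.readline()   # assumes one line of output per request
    finally:
        POOL.put(p)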
Q: Class for pickle- and copy-persistent object? I'm trying to write a class for a read-only object which will not be really copied with the copy module, and when it will be pickled to be transferred between processes each process will maintain no more than one copy of it, no matter how many times it will be passed around as a "new" object. Is there already something like that? A: I made an attempt to implement this. @Alex Martelli and anyone else, please give me comments/improvements. I think this will eventually end up on GitHub. """ todo: need to lock library to avoid thread trouble? todo: need to raise an exception if we're getting pickled with an old protocol? todo: make it polite to other classes that use __new__. Therefore, should probably work not only when there is only one item in the *args passed to new. """ import uuid import weakref library = weakref.WeakValueDictionary() class UuidToken(object): def __init__(self, uuid): self.uuid = uuid class PersistentReadOnlyObject(object): def __new__(cls, *args, **kwargs): if len(args)==1 and len(kwargs)==0 and isinstance(args[0], UuidToken): received_uuid = args[0].uuid else: received_uuid = None if received_uuid: # This section is for when we are called at unpickling time thing = library.pop(received_uuid, None) if thing: thing._PersistentReadOnlyObject__skip_setstate = True return thing else: # This object does not exist in our library yet; Let's add it new_args = args[1:] thing = super(PersistentReadOnlyObject, cls).__new__(cls, *new_args, **kwargs) thing._PersistentReadOnlyObject__uuid = received_uuid library[received_uuid] = thing return thing else: # This section is for when we are called at normal creation time thing = super(PersistentReadOnlyObject, cls).__new__(cls, *args, **kwargs) new_uuid = uuid.uuid4() thing._PersistentReadOnlyObject__uuid = new_uuid library[new_uuid] = thing return thing def __getstate__(self): my_dict = dict(self.__dict__) del my_dict["_PersistentReadOnlyObject__uuid"] return my_dict def __getnewargs__(self): return (UuidToken(self._PersistentReadOnlyObject__uuid),) def __setstate__(self, state): if self.__dict__.pop("_PersistentReadOnlyObject__skip_setstate", None): return else: self.__dict__.update(state) def __deepcopy__(self, memo): return self def __copy__(self): return self # -------------------------------------------------------------- """ From here on it's just testing stuff; will be moved to another file. """ def play_around(queue, thing): import copy queue.put((thing, copy.deepcopy(thing),)) class Booboo(PersistentReadOnlyObject): def __init__(self): self.number = random.random() if __name__ == "__main__": import multiprocessing import random import copy def same(a, b): return (a is b) and (a == b) and (id(a) == id(b)) and \ (a.number == b.number) a = Booboo() b = copy.copy(a) c = copy.deepcopy(a) assert same(a, b) and same(b, c) my_queue = multiprocessing.Queue() process = multiprocessing.Process(target = play_around, args=(my_queue, a,)) process.start() process.join() things = my_queue.get() for thing in things: assert same(thing, a) and same(thing, b) and same(thing, c) print("all cool!") A: I don't know of any such functionality already implemented. 
The interesting problem is as follows, and needs precise specs as to what's to happen in this case...: process A makes the obj and sends it to B which unpickles it, so far so good A makes change X to the obj, meanwhile B makes change Y to ITS copy of the obj now either process sends its obj to the other, which unpickles it: what changes to the object need to be visible at this time in each process? does it matter whether A's sending to B or vice versa, i.e. does A "own" the object? or what? If you don't care, say because only A OWNS the obj -- only A is ever allowed to make changes and send the obj to others, others can't and won't change -- then the problems boil down to identifying obj uniquely -- a GUID will do. The class can maintain a class attribute dict mapping GUIDs to existing instances (probably as a weak-value dict to avoid keeping instances needlessly alive, but that's a side issue) and ensure the existing instance is returned when appropriate. But if changes need to be synchronized to any finer granularity, then suddenly it's a REALLY difficult problem of distributed computing and the specs of what happens in what cases really need to be nailed down with the utmost care (and more paranoia than is present in most of us -- distributed programming is VERY tricky unless a few simple and provably correct patterns and idioms are followed fanatically!-). If you can nail down the specs for us, I can offer a sketch of how I would go about trying to meet them. But I won't presume to guess the specs on your behalf;-). Edit: the OP has clarified, and it seems all he needs is a better understanding of how to control __new__. That's easy: see __getnewargs__ -- you'll need a new-style class and pickling with protocol 2 or better (but those are advisable anyway for other reasons!-), then __getnewargs__ in an existing object can simply return the object's GUID (which __new__ must receive as an optional parameter). So __new__ can check if the GUID is present in the class's memo [[weakvalue;-)]]dict (and if so return the corresponding object value) -- if not (or if the GUID is not passed, implying it's not an unpickling, so a fresh GUID must be generated), then make a truly-new object (setting its GUID;-) and also record it in the class-level memo. BTW, to make GUIDs, consider using the uuid module in the standard library. A: You could simply use a dictionary with the same keys and values in the receiver. And to avoid a memory leak, use a WeakKeyDictionary.
Class for pickle- and copy-persistent object?
I'm trying to write a class for a read-only object which will not be really copied with the copy module, and when it will be pickled to be transferred between processes each process will maintain no more than one copy of it, no matter how many times it will be passed around as a "new" object. Is there already something like that?
[ "I made an attempt to implement this. @Alex Martelli and anyone else, please give me comments/improvements. I think this will eventually end up on GitHub.\n\"\"\"\ntodo: need to lock library to avoid thread trouble?\n\ntodo: need to raise an exception if we're getting pickled with\nan old protocol?\n\ntodo: make it polite to other classes that use __new__. Therefore, should\nprobably work not only when there is only one item in the *args passed to new.\n\n\"\"\"\n\nimport uuid\nimport weakref\n\nlibrary = weakref.WeakValueDictionary()\n\nclass UuidToken(object):\n def __init__(self, uuid):\n self.uuid = uuid\n\n\nclass PersistentReadOnlyObject(object):\n def __new__(cls, *args, **kwargs):\n if len(args)==1 and len(kwargs)==0 and isinstance(args[0], UuidToken):\n received_uuid = args[0].uuid\n else:\n received_uuid = None\n\n if received_uuid:\n # This section is for when we are called at unpickling time\n thing = library.pop(received_uuid, None)\n if thing:\n thing._PersistentReadOnlyObject__skip_setstate = True\n return thing\n else: # This object does not exist in our library yet; Let's add it\n new_args = args[1:]\n thing = super(PersistentReadOnlyObject, cls).__new__(cls,\n *new_args,\n **kwargs)\n thing._PersistentReadOnlyObject__uuid = received_uuid\n library[received_uuid] = thing\n return thing\n\n else:\n # This section is for when we are called at normal creation time\n thing = super(PersistentReadOnlyObject, cls).__new__(cls, *args,\n **kwargs)\n new_uuid = uuid.uuid4()\n thing._PersistentReadOnlyObject__uuid = new_uuid\n library[new_uuid] = thing\n return thing\n\n def __getstate__(self):\n my_dict = dict(self.__dict__)\n del my_dict[\"_PersistentReadOnlyObject__uuid\"]\n return my_dict\n\n def __getnewargs__(self):\n return (UuidToken(self._PersistentReadOnlyObject__uuid),)\n\n def __setstate__(self, state):\n if self.__dict__.pop(\"_PersistentReadOnlyObject__skip_setstate\", None):\n return\n else:\n self.__dict__.update(state)\n\n def __deepcopy__(self, memo):\n return self\n\n def __copy__(self):\n return self\n\n# --------------------------------------------------------------\n\"\"\"\nFrom here on it's just testing stuff; will be moved to another file.\n\"\"\"\n\n\ndef play_around(queue, thing):\n import copy\n queue.put((thing, copy.deepcopy(thing),))\n\nclass Booboo(PersistentReadOnlyObject):\n def __init__(self):\n self.number = random.random()\n\nif __name__ == \"__main__\":\n\n import multiprocessing\n import random\n import copy\n\n def same(a, b):\n return (a is b) and (a == b) and (id(a) == id(b)) and \\\n (a.number == b.number)\n\n a = Booboo()\n b = copy.copy(a)\n c = copy.deepcopy(a)\n assert same(a, b) and same(b, c)\n\n my_queue = multiprocessing.Queue()\n process = multiprocessing.Process(target = play_around,\n args=(my_queue, a,))\n process.start()\n process.join()\n things = my_queue.get()\n for thing in things:\n assert same(thing, a) and same(thing, b) and same(thing, c)\n print(\"all cool!\")\n\n", "I don't know of any such functionality already implemented. The interesting problem is as follows, and needs precise specs as to what's to happen in this case...:\n\nprocess A makes the obj and sends it to B which unpickles it, so far so good\nA makes change X to the obj, meanwhile B makes change Y to ITS copy of the obj\nnow either process sends its obj to the other, which unpickles it: what changes\nto the object need to be visible at this time in each process? does it matter\nwhether A's sending to B or vice versa, i.e. does A \"own\" the object? 
or what?\n\nIf you don't care, say because only A OWNS the obj -- only A is ever allowed to make changes and send the obj to others, others can't and won't change -- then the problems boil down to identifying obj uniquely -- a GUID will do. The class can maintain a class attribute dict mapping GUIDs to existing instances (probably as a weak-value dict to avoid keeping instances needlessly alive, but that's a side issue) and ensure the existing instance is returned when appropriate.\nBut if changes need to be synchronized to any finer granularity, then suddenly it's a REALLY difficult problem of distributed computing and the specs of what happens in what cases really need to be nailed down with the utmost care (and more paranoia than is present in most of us -- distributed programming is VERY tricky unless a few simple and provably correct patterns and idioms are followed fanatically!-).\nIf you can nail down the specs for us, I can offer a sketch of how I would go about trying to meet them. But I won't presume to guess the specs on your behalf;-).\nEdit: the OP has clarified, and it seems all he needs is a better understanding of how to control __new__. That's easy: see __getnewargs__ -- you'll need a new-style class and pickling with protocol 2 or better (but those are advisable anyway for other reasons!-), then __getnewargs__ in an existing object can simply return the object's GUID (which __new__ must receive as an optional parameter). So __new__ can check if the GUID is present in the class's memo [[weakvalue;-)]]dict (and if so return the corresponding object value) -- if not (or if the GUID is not passed, implying it's not an unpickling, so a fresh GUID must be generated), then make a truly-new object (setting its GUID;-) and also record it in the class-level memo.\nBTW, to make GUIDs, consider using the uuid module in the standard library.\n", "You could simply use a dictionary with the same keys and values in the receiver. And to avoid a memory leak, use a WeakKeyDictionary. \n" ]
[ 2, 1, 0 ]
[]
[]
[ "persistence", "pickle", "python" ]
stackoverflow_0001400295_persistence_pickle_python.txt
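Stripped of the locking and error-handling questions, the core of the pattern is small (minimal sketch; it needs pickle protocol 2, and subclasses that take their own constructor arguments need the fuller treatment above):

import uuid, weakref

class Shared(object):
    _library = weakref.WeakValueDictionary()

    def __new__(cls, _uid=None):
        if _uid is not None and _uid in cls._library:
            return cls._library[_uid]      # unpickling a known object: reuse it
        obj = object.__new__(cls)
        obj._uid = _uid or uuid.uuid4()
        cls._library[obj._uid] = obj
        return obj

    def __getnewargs__(self):              # pickle hands the uid back to __new__
        return (self._uid,)

    def __copy__(self):
        return self

    def __deepcopy__(self, memo):
        return self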
Q: Pydev code browsing? I've been -trying- to use pydev to do some python (can't say I'm having good times so far). I finally got code completion working for the libraries I'm using, but I'm still wondering about a couple of things... So the library I'm using is called orange. Say I call the function orange.MakeRandomIndices2, but I'm not sure how it works... I want to see the source code of this function, or at least some useful information as to its usage.. Is there any way to do this from my IDE? Also while debugging, I might want to do just the same.. step into that function and debug it internally... I cannot seem to be able to do this and I just don't understand WHY, seeing as I have the source code on my hard disk. Thanks. JC A: When you hover your cursor over a function or class, Pydev should show you the docstring. Click on the function/class, then press F3, and it will take you to the definition of that function/class. If that is not happening, you probably have not configured Pydev correctly. Look over the documentation again, making sure you have done all the steps. If that still does not work, the Pydev author is very active and helpful on the pydev mailinglists. When you are debugging code, F5 will step into a function. F6 will step over.
Pydev code browsing?
I've been -trying- to use pydev to do some python (can't say I'm having good times so far). I finally got code completion working for the libraries I'm using, but I'm still wondering about a couple of things... So the library I'm using is called orange. Say I call the function orange.MakeRandomIndices2, but I'm not sure how it works... I want to see the source code of this function, or at least some useful information as to its usage.. Is there any way to do this from my IDE? Also while debugging, I might want to do just the same.. step into that function and debug it internally... I cannot seem to be able to do this and I just don't understand WHY, seeing as I have the source code on my hard disk. Thanks. JC
[ "When you hover your cursor over a function or class, Pydev should show you the docstring. Click on the function/class, then press F3, and it will take you to the definition of that function/class. If that is not happening, you probably have not configured Pydev correctly. Look over the documentation again, making sure you have done all the steps. If that still does not work, the Pydev author is very active and helpful on the pydev mailinglists.\nWhen you are debugging code, F5 will step into a function. F6 will step over.\n" ]
[ 2 ]
[]
[]
[ "eclipse", "pydev", "python" ]
stackoverflow_0001444651_eclipse_pydev_python.txt
Q: What's the proper idiom for naming Django model fields that are Python reserved names? I have a model that needs to have a field named complex and another one named type. Those are both Python reserved names. According to PEP 8, I should name them complex_ and type_ respectively, but Django won't allow me to have fields named with a trailing underscore. What's the proper way to handle this? A: There's no problem with those examples. Just use complex and type. You are only shadowing in a very limited scope (the class definition itself). After that, you'll be accessing them using dot notation (self.type), so there's no ambiguity: Python 2.6.2 (release26-maint, Apr 19 2009, 01:58:18) [GCC 4.3.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> class Foo(object): ... type = 'abc' ... >>> f = Foo() >>> f.type 'abc' >>> class Bar(object): ... complex = 123+4j ... >>> bar = Bar() >>> bar.complex (123+4j) >>> A: Do you really want to use the db_column="complex" argument and call your field something else?
What's the proper idiom for naming Django model fields that are Python reserved names?
I have a model that needs to have a field named complex and another one named type. Those are both Python reserved names. According to PEP 8, I should name them complex_ and type_ respectively, but Django won't allow me to have fields named with a trailing underscore. What's the proper way to handle this?
[ "There's no problem with those examples. Just use complex and type. You are only shadowing in a very limited scope (the class definition itself). After that, you'll be accessing them using dot notation (self.type), so there's no ambiguity:\nPython 2.6.2 (release26-maint, Apr 19 2009, 01:58:18) \n[GCC 4.3.3] on linux2\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> class Foo(object):\n... type = 'abc'\n... \n>>> f = Foo()\n>>> f.type\n'abc'\n>>> class Bar(object):\n... complex = 123+4j\n... \n>>> bar = Bar()\n>>> bar.complex\n(123+4j)\n>>> \n\n", "Do you really want to use the db_column=\"complex\" argument and call your field something else?\n" ]
[ 4, 1 ]
[]
[]
[ "django", "idioms", "python" ]
stackoverflow_0001445971_django_idioms_python.txt
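If shadowing still feels wrong, the db_column route from the second answer looks like this (the Python-side field names here are illustrative):

from django.db import models

class Widget(models.Model):
    kind = models.CharField(max_length=1, db_column="type")  # SQL column is still named "type"
    complexity = models.FloatField(db_column="complex")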
Q: Python cmd module command aliases I am making a command line interface in Python 3.1.1 using the cmd module. Is there a way to create a command with more than one name, e.g. "quit" and "exit"? Or would it just be a case of making a number of commands that all reference the same function? A: Yes, it would just be a case of making a number of commands that all reference the same function. This is common. It often helps to provide multiple common aliases for a command. It makes the user's life simpler because the odds of them guessing correctly are improved.
Python cmd module command aliases
I am making a command line interface in Python 3.1.1 using the cmd module. Is there a way to create a command with more than one name, e.g. "quit" and "exit"? Or would it just be a case of making a number of commands that all reference the same function?
[ "Yes, it would just be a case of making a number of commands that all reference the same function.\nThis is common. It often helps to provide multiple common aliases for a command. It makes the user's life simpler because the odds of them guessing correctly are improved.\n" ]
[ 4 ]
[]
[]
[ "cmd", "command_line_interface", "python" ]
stackoverflow_0001446137_cmd_command_line_interface_python.txt
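In a cmd.Cmd subclass the aliasing is a one-line assignment per extra name (sketch):

import cmd

class Shell(cmd.Cmd):
    def do_quit(self, line):
        """Exit the shell."""
        return True        # a true return value stops cmdloop()

    do_exit = do_quit      # 'exit' now dispatches to the same function (and shares its help)

if __name__ == "__main__":
    Shell().cmdloop()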
Q: Python decorator to ensure that kwargs are correct I have written a decorator that I use to ensure that the keyword arguments passed to a constructor are the correct/expected ones. The code is the following: from functools import wraps def keyargs_check(keywords): """ This decorator ensures that the keys passed in kwargs are the ones that are specified in the passed tuple. When applied, this decorator will check the keywords and will throw an exception if the developer used one that is not recognized. @type keywords: tuple @param keywords: A tuple with all the keywords recognized by the function. """ def wrap(f): @wraps(f) def newFunction(*args, **kw): # we are going to add an extra check in kw for current_key in kw.keys(): if not current_key in keywords: raise ValueError( "The key {0} is not a recognized parameter of {1}.".format( current_key, f.__name__)) return f(*args, **kw) return newFunction return wrap An example use of this decorator would be the following: class Person(object): @keyargs_check(("name", "surname", "age")) def __init__(self, **kwargs): # perform init according to args Using the above code, if the developer passes a keyword arg like "blah" it will throw an exception. Unfortunately my implementation has a major problem with inheritance. If I define the following: class PersonTest(Person): @keyargs_check(("test")) def __init__(self, **kwargs): Person.__init__(self,**kwargs) Because I'm passing kwargs to the super class init method, I'm going to get an exception because "test" is not in the tuple passed to the decorator of the super class. Is there a way to let the decorator used in the super class know about the extra keywords? Or even better, is there a standard way to achieve what I want? Update: I am more interested in automating the way I throw an exception when a developer passes the wrong kwarg than in the fact that I use kwargs instead of args. What I mean is, I do not want to have to write the code that checks the args passed to the method in every class. A: Your decorator is not necessary. The only thing the decorator does that can't be done with the standard syntax is prevent keyword args from absorbing positional arguments. Thus class Base(object): def __init__(self, name=None, surname=None, age=None): #some code class Child(Base): def __init__(self, test=None, **kwargs): Base.__init__(self,**kwargs) The advantage of this is that kwargs in Child will not contain test. The problem is that you can muck it up with a call like c = Child('red herring'). This is fixed in python 3.0. The problem with your approach is that you're trying to use a decorator to do a macro's job, which is unpythonic. The only thing that will get you what you want is something that modifies the locals of the innermost function (f in your code, specifically the kwargs variable). How should your decorator know the wrapper's insides, how would it know that it calls a superclass?
Python decorator to ensure that kwargs are correct
I have written a decorator that I use to ensure that the keyword arguments passed to a constructor are the correct/expected ones. The code is the following: from functools import wraps def keyargs_check(keywords): """ This decorator ensures that the keys passed in kwargs are the ones that are specified in the passed tuple. When applied, this decorator will check the keywords and will throw an exception if the developer used one that is not recognized. @type keywords: tuple @param keywords: A tuple with all the keywords recognized by the function. """ def wrap(f): @wraps(f) def newFunction(*args, **kw): # we are going to add an extra check in kw for current_key in kw.keys(): if not current_key in keywords: raise ValueError( "The key {0} is not a recognized parameter of {1}.".format( current_key, f.__name__)) return f(*args, **kw) return newFunction return wrap An example use of this decorator would be the following: class Person(object): @keyargs_check(("name", "surname", "age")) def __init__(self, **kwargs): # perform init according to args Using the above code, if the developer passes a keyword arg like "blah" it will throw an exception. Unfortunately my implementation has a major problem with inheritance. If I define the following: class PersonTest(Person): @keyargs_check(("test")) def __init__(self, **kwargs): Person.__init__(self,**kwargs) Because I'm passing kwargs to the super class init method, I'm going to get an exception because "test" is not in the tuple passed to the decorator of the super class. Is there a way to let the decorator used in the super class know about the extra keywords? Or even better, is there a standard way to achieve what I want? Update: I am more interested in automating the way I throw an exception when a developer passes the wrong kwarg than in the fact that I use kwargs instead of args. What I mean is, I do not want to have to write the code that checks the args passed to the method in every class.
[ "Your decorator is not necessary. The only thing the decorator does that can't be done with the standard syntax is prevent keyword args from absorbing positional arguments. Thus\nclass Base(object):\n def __init__(name=None,surname=None,age=None):\n #some code\n\nclass Child(Base):\n def __init__(test=None,**kwargs):\n Base.__init__(self,**kwargs)\n\nThe advantage of this is that kwargs in Child will not contain test. The problem is that you can muck it up with a call like c = Child('red herring'). This is fixed in python 3.0.\nThe problem with your approach is that you're trying to use a decorator to do a macro's job, which is unpythonic. The only thing that will get you what you want is something that modifies the locals of the innermost function (f in your code, specifically the kwargs variable). How should your decorator know the wrapper's insides, how would it know that it calls a superclass?\n" ]
[ 4 ]
[]
[]
[ "python" ]
stackoverflow_0001446555_python.txt
Q: What are good names for user defined exceptions? This question covers a broad range of programming languages; however, I am specifically working with Python in this case. I would like to create some user defined exceptions, but I'm not sure how fine-grained they should be. For example, if I have the following class:
class Race(object):
    def __init__(self, start_time, end_time):
        if end_time < start_time:
            raise WhatToNameThisError
        self._start_time = start_time
        self._finish_time = end_time

I would like an exception to be raised if the finish time occurs before the start time, but what could I call it?

RaceError (all exceptions related to the Race class can use this, and the message can distinguish between them)
RaceFinishTimeBeforeStartTime (more specific, but means I know exactly what I'm catching)

I'm sure there are other ways of looking at this, and thus more options for naming an exception. Are there any standards or guidelines which outline this?
A: I think the built-in ValueError may be appropriate in this case. From the Python docs:

exception ValueError
Raised when a built-in operation or function receives an argument that has the right type but an inappropriate value, and the situation is not described by a more precise exception such as IndexError.

A: I try to use appropriate built-in exceptions as much as possible. In languages like Java and C# the exception list is so robust that you'll rarely find one you need to define yourself. I'm not super-familiar with Python's exception list, but IIRC there are at least a few pretty general good ones.
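A minimal sketch of the accepted suggestion applied to the Race class from the question — reuse the built-in ValueError and put the specifics in the message:
class Race(object):
    def __init__(self, start_time, end_time):
        if end_time < start_time:
            # inappropriate value of the right type -> ValueError
            raise ValueError(
                "end_time (%s) is earlier than start_time (%s)"
                % (end_time, start_time))
        self._start_time = start_time
        self._finish_time = end_time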
What are good names for user defined exceptions?
This question covers a broad range of programming languages; however, I am specifically working with Python in this case. I would like to create some user defined exceptions, but I'm not sure how fine-grained they should be. For example, if I have the following class:
class Race(object):
    def __init__(self, start_time, end_time):
        if end_time < start_time:
            raise WhatToNameThisError
        self._start_time = start_time
        self._finish_time = end_time

I would like an exception to be raised if the finish time occurs before the start time, but what could I call it?

RaceError (all exceptions related to the Race class can use this, and the message can distinguish between them)
RaceFinishTimeBeforeStartTime (more specific, but means I know exactly what I'm catching)

I'm sure there are other ways of looking at this, and thus more options for naming an exception. Are there any standards or guidelines which outline this?
[ "I think the built-in ValueError may be appropriate in this case. From the Python docs:\n\nexception ValueError\nRaised when a\n built-in operation or function\n receives an argument that has the\n right type but an inappropriate value,\n and the situation is not described by\n a more precise exception such as\n IndexError.\n\n", "I try to use appropriate built-in exceptions as much as possible. In languages like Java and C# the exception list is so robust that you'll rarely find one you need to define yourself. I'm not super-familiar with Python's exception list, but IIRC there are at least a few pretty general good ones.\n" ]
[ 6, 1 ]
[]
[]
[ "exception", "exception_handling", "python" ]
stackoverflow_0001446789_exception_exception_handling_python.txt
Q: How to identify binary and text files using Python? I need to identify which files in a directory are binary and which are text. I tried using mimetypes, but it isn't a good idea in my case because it can't identify all the files' MIME types, and I have strange ones here... I just need to know: binary or text. Simple? But I couldn't find a solution... Thanks
A: Thanks everybody, I found a solution that suited my problem. I found this code at http://code.activestate.com/recipes/173220/ and I changed just a little piece to suit me.
It works fine.
from __future__ import division
import string

def istext(filename):
    s = open(filename).read(512)
    text_characters = "".join(map(chr, range(32, 127)) + list("\n\r\t\b"))
    _null_trans = string.maketrans("", "")
    if not s:
        # Empty files are considered text
        return True
    if "\0" in s:
        # Files with null bytes are likely binary
        return False
    # Get the non-text characters (maps a character to itself then
    # use the 'remove' option to get rid of the text characters.)
    t = s.translate(_null_trans, text_characters)
    # If more than 30% non-text characters, then
    # this is considered a binary file
    if float(len(t))/float(len(s)) > 0.30:
        return False
    return True

A: It's inherently not simple. There's no way of knowing for sure, although you can take a reasonably good guess in most cases.
Things you might like to do:

Look for known magic numbers in binary signatures
Look for the Unicode byte-order-mark at the start of the file
If the file is regularly 00 xx 00 xx 00 xx (for arbitrary xx) or vice versa, that's quite possibly UTF-16
Otherwise, look for 0s in the file; a file with a 0 in it is unlikely to be a single-byte-encoding text file.

But it's all heuristic - it's quite possible to have a file which is a valid text file and a valid image file, for example. It would probably be nonsense as a text file, but legitimate in some encoding or other...
A: It might be possible to use libmagic to guess the MIME type of the file using python-magic. If you get back something in the "text/*" namespace, it is likely a text file, while anything else is likely a binary file.
A: If your script is running on *nix, you could use something like this:
import subprocess
import re

def is_text(fn):
    msg = subprocess.Popen(["file", fn], stdout=subprocess.PIPE).communicate()[0]
    return re.search('text', msg) != None
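For reference, a rough Python 3 port of the accepted recipe (string.maketrans is gone there, so this sketch reads bytes and uses bytes.translate instead; the 30% threshold is kept as-is):
def istext(filename):
    with open(filename, 'rb') as f:
        s = f.read(512)
    text_chars = bytes(range(32, 127)) + b'\n\r\t\b'
    if not s:
        return True           # empty files are considered text
    if b'\0' in s:
        return False          # null bytes usually mean binary
    # delete the printable characters; whatever remains is "non-text"
    t = s.translate(None, text_chars)
    return len(t) / len(s) <= 0.30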
How to identify binary and text files using Python?
I need to identify which files in a directory are binary and which are text. I tried using mimetypes, but it isn't a good idea in my case because it can't identify all the files' MIME types, and I have strange ones here... I just need to know: binary or text. Simple? But I couldn't find a solution... Thanks
[ "Thanks everybody, I found a solution that suited my problem. I found this code at http://code.activestate.com/recipes/173220/ and I changed just a little piece to suit me.\nIt works fine. \nfrom __future__ import division\nimport string \n\ndef istext(filename):\n s=open(filename).read(512)\n text_characters = \"\".join(map(chr, range(32, 127)) + list(\"\\n\\r\\t\\b\"))\n _null_trans = string.maketrans(\"\", \"\")\n if not s:\n # Empty files are considered text\n return True\n if \"\\0\" in s:\n # Files with null bytes are likely binary\n return False\n # Get the non-text characters (maps a character to itself then\n # use the 'remove' option to get rid of the text characters.)\n t = s.translate(_null_trans, text_characters)\n # If more than 30% non-text characters, then\n # this is considered a binary file\n if float(len(t))/float(len(s)) > 0.30:\n return False\n return True\n\n", "It's inherently not simple. There's no way of knowing for sure, although you can take a reasonably good guess in most cases.\nThings you might like to do:\n\nLook for known magic numbers in binary signatures\nLook for the Unicode byte-order-mark at the start of the file\nIf the file is regularly 00 xx 00 xx 00 xx (for arbitrary xx) or vice versa, that's quite possibly UTF-16\nOtherwise, look for 0s in the file; a file with a 0 in is unlikely to be a single-byte-encoding text file.\n\nBut it's all heuristic - it's quite possible to have a file which is a valid text file and a valid image file, for example. It would probably be nonsense as a text file, but legitimate in some encoding or other...\n", "It might be possible to use libmagic to guess the MIME type of the file using python-magic. If you get back something in the \"text/*\" namespace, it is likely a text file, while anything else is likely a binary file.\n", "If your script is running on *nix, you could use something like this:\nimport subprocess\nimport re\n\ndef is_text(fn):\n msg = subprocess.Popen([\"file\", fn], stdout=subprocess.PIPE).communicate()[0]\n return re.search('text', msg) != None\n\n" ]
[ 11, 8, 6, 5 ]
[]
[]
[ "binary", "file_type", "python", "text" ]
stackoverflow_0001446549_binary_file_type_python_text.txt
Q: apt like column output - python library Debian's apt tool outputs results in uniform width columns. For instance, try running "aptitude search svn" .. and all names appear in the first column of the same width. Now if you resize the terminal, the column width is adjusted accordingly. Is there a Python library that enables one to do this? Note that the library has to be aware of the terminal width and take a table as input - which could be, for instance, [('rapidsvn', 'A GUI client for subversion'), ...] .. and you may also specify a max-width for the first column (or any column). Also note how the string in the second column below is trimmed if it exceeds the terminal width .. thus not introducing the undesired second line.
$ aptitude search svn
[...]
p   python-svn-dbg       - A(nother) Python interface to Subversion (d
v   python2.5-svn        -
v   python2.6-svn        -
p   rapidsvn             - A GUI client for subversion
p   statsvn              - SVN repository statistics
p   svn-arch-mirror      - one-way mirroring from Subversion to Arch r
p   svn-autoreleasedeb   - Automatically release/upload debian package
p   svn-buildpackage     - helper programs to maintain Debian packages
p   svn-load             - An enhanced import facility for Subversion
p   svn-workbench        - A Workbench for Subversion
p   svnmailer            - extensible Subversion commit notification t
p   websvn               - interface for subversion repositories writt
$

EDIT: (in response to Alex's answer below) ... the output will be similar to 'aptitude search' in that 1) only the last column (which is the only column with the longest string in a row) is to be trimmed, 2) there are typically 2-4 columns only, but the last column ("description") is expected to take at least half the terminal width, 3) all rows contain an equal number of columns, 4) all entries are strings only
A: Update: The colprint routine is now available in the applib Python library hosted on GitHub.
Here's the complete program for those of you interested:
# This function was written by Alex Martelli
# http://stackoverflow.com/questions/1396820/
import sys

def colprint(table, totwidth=None):
    """Print the table in terminal taking care of wrapping/alignment

    - `table`: A table of strings. Elements must not be `None`
    - `totwidth`: If None, console width is used
    """
    if not table: return
    if totwidth is None:
        totwidth = find_console_width()
        totwidth -= 1 # for not printing an extra empty line on windows
    numcols = max(len(row) for row in table)
    # ensure all rows have >= numcols columns, maybe empty
    padded = [row+numcols*('',) for row in table]
    # compute col widths, including separating space (except for last one)
    widths = [ 1 + max(len(x) for x in column) for column in zip(*padded)]
    widths[-1] -= 1
    # drop or truncate columns from the right in order to fit
    while sum(widths) > totwidth:
        mustlose = sum(widths) - totwidth
        if widths[-1] <= mustlose:
            del widths[-1]
        else:
            widths[-1] -= mustlose
            break
    # and finally, the output phase!
    for row in padded:
        print(''.join([u'%*s' % (-w, i[:w])
                       for w, i in zip(widths, row)]))

def find_console_width():
    if sys.platform.startswith('win'):
        return _find_windows_console_width()
    else:
        return _find_unix_console_width()

def _find_unix_console_width():
    """Return the width of the Unix terminal

    If `stdout` is not a real terminal, return the default value (80)
    """
    import termios, fcntl, struct, sys

    # fcntl.ioctl will fail if stdout is not a tty
    if not sys.stdout.isatty():
        return 80

    s = struct.pack("HHHH", 0, 0, 0, 0)
    fd_stdout = sys.stdout.fileno()
    size = fcntl.ioctl(fd_stdout, termios.TIOCGWINSZ, s)
    height, width = struct.unpack("HHHH", size)[:2]
    return width

def _find_windows_console_width():
    """Return the width of the Windows console

    If the width cannot be determined, return the default value (80)
    """
    # http://code.activestate.com/recipes/440694/
    from ctypes import windll, create_string_buffer
    STDIN, STDOUT, STDERR = -10, -11, -12

    h = windll.kernel32.GetStdHandle(STDERR)
    csbi = create_string_buffer(22)
    res = windll.kernel32.GetConsoleScreenBufferInfo(h, csbi)

    if res:
        import struct
        (bufx, bufy, curx, cury, wattr,
         left, top, right, bottom,
         maxx, maxy) = struct.unpack("hhhhHhhhhhh", csbi.raw)
        sizex = right - left + 1
        sizey = bottom - top + 1
    else:
        sizex, sizey = 80, 25

    return sizex

A: Well, aptitude uses cwidget to format the columns in the text-only display. You could call into cwidget writing a python extension for it, but I don't think it is worth the trouble...
You can use your preferred method of getting the actual horizontal size in chars and calculate yourself.
A: First, use ioctl to get the size of the TTY:
import termios, fcntl, struct, sys

def get_tty_size():
    s = struct.pack("HHHH", 0, 0, 0, 0)
    fd_stdout = sys.stdout.fileno()
    size = fcntl.ioctl(fd_stdout, termios.TIOCGWINSZ, s)
    return struct.unpack("HHHH", size)[:2]

print get_tty_size()

Then use a function like this to make columns:
pad = lambda s, n=20: "%s%s" % (s,' '*(n-len(s)))

Put those together and you've got resizing columns for the console!
A: I don't think there's a general, cross-platform way to "get the width of the terminal" -- most definitely NOT "look at the COLUMNS environment variable" (see my comment on the question). On Linux and Mac OS X (and I expect all modern Unix versions),
curses.wrapper(lambda _: curses.tigetnum('cols'))

returns the number of columns; but I don't know if wcurses supports this in Windows.
Once you do have (from os.environ['COLUMNS'] if you insist, or via curses, or from an oracle, or defaulted to 80, or any other way you like) the desired output width, the rest is quite feasible. It's finicky work, with many chances for off-by-one kinds of errors, and very vulnerable to a lot of detailed specs that you don't make entirely clear, such as: which column gets cut to avoid wrapping -- is it always the last one, or...? How come you're showing 3 columns in the sample output when according to your question only two are passed in...? What is supposed to happen if not all rows have the same number of columns? Must all entries in table be strings? And many, many other mysteries of this ilk.
So, taking somewhat-arbitrary guesses for all the specs that you don't express, one approach might be something like...:
import sys

def colprint(totwidth, table):
    numcols = max(len(row) for row in table)
    # ensure all rows have >= numcols columns, maybe empty
    padded = [row+numcols*('',) for row in table]
    # compute col widths, including separating space (except for last one)
    widths = [ 1 + max(len(x) for x in column) for column in zip(*padded)]
    widths[-1] -= 1
    # drop or truncate columns from the right in order to fit
    while sum(widths) > totwidth:
        mustlose = sum(widths) - totwidth
        if widths[-1] <= mustlose:
            del widths[-1]
        else:
            widths[-1] -= mustlose
            break
    # and finally, the output phase!
    for row in padded:
        for w, i in zip(widths, row):
            sys.stdout.write('%*s' % (-w, i[:w]))
        sys.stdout.write('\n')
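A quick way to exercise the colprint routine from the first answer (the sample rows are invented; totwidth is left to default so the console width is auto-detected):
table = [
    ('rapidsvn', 'A GUI client for subversion'),
    ('statsvn', 'SVN repository statistics'),
    ('svn-load', 'An enhanced import facility for Subversion'),
]
colprint(table)   # columns are sized to the longest entry, last one trimmed to fit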
apt like column output - python library
Debian's apt tool outputs results in uniform width columns. For instance, try running "aptitude search svn" .. and all names appear in the first column of the same width. Now if you resize the terminal, the column width is adjusted accordingly. Is there a Python library that enables one to do this? Note that the library has to be aware of the terminal width and take a table as input - which could be, for instance, [('rapidsvn', 'A GUI client for subversion'), ...] .. and you may also specify a max-width for the first column (or any column). Also note how the string in the second column below is trimmed if it exceeds the terminal width .. thus not introducing the undesired second line.
$ aptitude search svn
[...]
p   python-svn-dbg       - A(nother) Python interface to Subversion (d
v   python2.5-svn        -
v   python2.6-svn        -
p   rapidsvn             - A GUI client for subversion
p   statsvn              - SVN repository statistics
p   svn-arch-mirror      - one-way mirroring from Subversion to Arch r
p   svn-autoreleasedeb   - Automatically release/upload debian package
p   svn-buildpackage     - helper programs to maintain Debian packages
p   svn-load             - An enhanced import facility for Subversion
p   svn-workbench        - A Workbench for Subversion
p   svnmailer            - extensible Subversion commit notification t
p   websvn               - interface for subversion repositories writt
$

EDIT: (in response to Alex's answer below) ... the output will be similar to 'aptitude search' in that 1) only the last column (which is the only column with the longest string in a row) is to be trimmed, 2) there are typically 2-4 columns only, but the last column ("description") is expected to take at least half the terminal width, 3) all rows contain an equal number of columns, 4) all entries are strings only
[ "Update: The colprint routine is now available in the applib Python library hosted in GitHub. \nHere's the complete program for those of you interested:\n# This function was written by Alex Martelli\n# http://stackoverflow.com/questions/1396820/\ndef colprint(table, totwidth=None):\n \"\"\"Print the table in terminal taking care of wrapping/alignment\n\n - `table`: A table of strings. Elements must not be `None`\n - `totwidth`: If None, console width is used\n \"\"\"\n if not table: return\n if totwidth is None:\n totwidth = find_console_width()\n totwidth -= 1 # for not printing an extra empty line on windows\n numcols = max(len(row) for row in table)\n # ensure all rows have >= numcols columns, maybe empty\n padded = [row+numcols*('',) for row in table]\n # compute col widths, including separating space (except for last one)\n widths = [ 1 + max(len(x) for x in column) for column in zip(*padded)]\n widths[-1] -= 1\n # drop or truncate columns from the right in order to fit\n while sum(widths) > totwidth:\n mustlose = sum(widths) - totwidth\n if widths[-1] <= mustlose:\n del widths[-1]\n else:\n widths[-1] -= mustlose\n break\n # and finally, the output phase!\n for row in padded:\n print(''.join([u'%*s' % (-w, i[:w])\n for w, i in zip(widths, row)]))\n\ndef find_console_width():\n if sys.platform.startswith('win'):\n return _find_windows_console_width()\n else:\n return _find_unix_console_width()\ndef _find_unix_console_width():\n \"\"\"Return the width of the Unix terminal\n\n If `stdout` is not a real terminal, return the default value (80)\n \"\"\"\n import termios, fcntl, struct, sys\n\n # fcntl.ioctl will fail if stdout is not a tty\n if not sys.stdout.isatty():\n return 80\n\n s = struct.pack(\"HHHH\", 0, 0, 0, 0)\n fd_stdout = sys.stdout.fileno()\n size = fcntl.ioctl(fd_stdout, termios.TIOCGWINSZ, s)\n height, width = struct.unpack(\"HHHH\", size)[:2]\n return width\ndef _find_windows_console_width():\n \"\"\"Return the width of the Windows console\n\n If the width cannot be determined, return the default value (80)\n \"\"\"\n # http://code.activestate.com/recipes/440694/\n from ctypes import windll, create_string_buffer\n STDIN, STDOUT, STDERR = -10, -11, -12\n\n h = windll.kernel32.GetStdHandle(STDERR)\n csbi = create_string_buffer(22)\n res = windll.kernel32.GetConsoleScreenBufferInfo(h, csbi)\n\n if res:\n import struct\n (bufx, bufy, curx, cury, wattr,\n left, top, right, bottom,\n maxx, maxy) = struct.unpack(\"hhhhHhhhhhh\", csbi.raw)\n sizex = right - left + 1\n sizey = bottom - top + 1\n else:\n sizex, sizey = 80, 25\n\n return sizex\n\n", "Well, aptitude uses cwidget to format the columns in the text-only display. You could call into cwidget writing a python extension for it, but I don't think it is worth the trouble... 
You can use your preferred method of getting the actual horizontal size in chars and calculate yourself.\n", "First, use ioctl to get the size of the TTY:\nimport termios, fcntl, struct, sys\n\ndef get_tty_size():\n s = struct.pack(\"HHHH\", 0, 0, 0, 0)\n fd_stdout = sys.stdout.fileno()\n size = fcntl.ioctl(fd_stdout, termios.TIOCGWINSZ, s)\n return struct.unpack(\"HHHH\", size)[:2]\n\nprint get_tty_size()\n\nThen use a function like this to make columns:\npad = lambda s, n=20: \"%s%s\" % (s,' '*(n-len(s)))\n\nPut those together and you've got resizing columns for the console!\n", "I don't think there's a general, cross-platform way to \"get the width of the terminal\" -- most definitely NOT \"look at the COLUMNS environment variable\" (see my comment on the question). On Linux and Mac OS X (and I expect all modern Unix versions),\ncurses.wrapper(lambda _: curses.tigetnum('cols'))\n\nreturns the number of columns; but I don't know if wcurses supports this in Windows.\nOnce you do have (from os.environ['COLUMNS'] if you insist, or via curses, or from an oracle, or defaulted to 80, or any other way you like) the desired output width, the rest is \nquite feasible. It's finnicky work, with many chances for off-by-one kinds of errors, and very vulnerable to a lot of detailed specs that you don't make entirely clear, such as: which column gets cut to avoid wrapping -- it it always the last one, or...? How come you're showing 3 columns in the sample output when according to your question only two are passed in...? what is supposed to happen if not all rows have the same number of columns? must all entries in table be strings? and many, many other mysteries of this ilk.\nSo, taking somewhat-arbitrary guesses for all the specs that you don't express, one approach might be something like...:\nimport sys\n\ndef colprint(totwidth, table):\n numcols = max(len(row) for row in table)\n # ensure all rows have >= numcols columns, maybe empty\n padded = [row+numcols*('',) for row in table]\n # compute col widths, including separating space (except for last one)\n widths = [ 1 + max(len(x) for x in column) for column in zip(*padded)]\n widths[-1] -= 1\n # drop or truncate columns from the right in order to fit\n while sum(widths) > totwidth:\n mustlose = sum(widths) - totwidth\n if widths[-1] <= mustlose:\n del widths[-1]\n else:\n widths[-1] -= mustlose\n break\n # and finally, the output phase!\n for row in padded:\n for w, i in zip(widths, row):\n sys.stdout.write('%*s' % (-w, i[:w]))\n sys.stdout.write('\\n')\n\n" ]
[ 4, 2, 2, 2 ]
[]
[]
[ "apt", "formatting", "python", "terminal" ]
stackoverflow_0001396820_apt_formatting_python_terminal.txt
Q: How to encapsulate python modules? Is it possible to encapsulate the python modules 'mechanize' and 'BeautifulSoup' into a single .py file? My problem is the following: I have a python script that requires mechanize and BeautifulSoup. I am calling it from a php page. The webhost server supports python, but doesn't have the modules installed. That's why I would like to do this. Sorry if this is a dumb question.
edit: Thanks all for your answers. I think the solution for this revolves around virtualenv. Could someone be kind enough to explain it to me in very simple terms? I have read the virtualenv page, but am still very confused. Thanks once again.
A: You don't actually have to combine the files, or install them in a system-wide location. Just make sure the libraries are in a directory readable by your Python script (and therefore, by your PHP app) and added to the Python load path.
The Python runtime searches for libraries in the directories in the sys.path array. You can add directories to that array at runtime using the PYTHONPATH environment variable, or by explicitly appending items to sys.path. The code snippet below, added to a Python script, adds the directory in which the script is located to the search path, implicitly making any libraries located in the same place available for import:
import os, sys
sys.path.append(os.path.dirname(__file__))

A: Check out virtualenv, to create a local python installation, where you can install BeautifulSoup and all other libraries you want.
http://pypi.python.org/pypi/virtualenv
A: No, you can't (in general), because doing so would destroy the module searching method that python uses (files and directories). I guess that you could, with some hack, generate a module hierarchy without having the actual filesystem layout, but this is a huge hack, probably something like
>>> sha=types.ModuleType("sha")
>>> sha
<module 'sha' (built-in)>
['__doc__', '__name__']
>>> sha.foo=3
>>> sha.zam=types.ModuleType("zam")
>>> sha.foo
3
>>> dir(sha)
['__doc__', '__name__', 'foo', 'zam']

Ultimately you would have to generate the code that creates this layout and stores the stuff into properly named identifiers.
What you need is probably to learn to use .egg files. They are unique agglomerated entities containing your library, like .jar files.
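Building on the first answer, if you ship the pure-Python sources of the two libraries next to your script — say in a libs/ subdirectory (the directory name here is arbitrary) — the script can make them importable without any system-wide install:
import os, sys
# put our bundled libraries ahead of anything the host may have
sys.path.insert(0, os.path.join(
    os.path.dirname(os.path.abspath(__file__)), 'libs'))

import mechanize                         # found in libs/mechanize/
from BeautifulSoup import BeautifulSoup  # found in libs/BeautifulSoup.py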
How to encapsulate python modules?
Is it possible to encapsulate the python modules 'mechanize' and 'BeautifulSoup' into a single .py file? My problem is the following: I have a python script that requires mechanize and BeautifulSoup. I am calling it from a php page. The webhost server supports python, but doesn't have the modules installed. That's why I would like to do this. Sorry if this is a dumb question.
edit: Thanks all for your answers. I think the solution for this revolves around virtualenv. Could someone be kind enough to explain it to me in very simple terms? I have read the virtualenv page, but am still very confused. Thanks once again.
[ "You don't actually have to combine the files, or install them in a system-wide location. Just make sure the libraries are in a directory readable by your Python script (and therefore, by your PHP app) and added to the Python load path.\nThe Python runtime searches for libraries in the directories in the sys.path array. You can add directories to that array at runtime using the PYTHONPATH environment variable, or by explicitly appending items to sys.path. The code snippet below, added to a Python script, adds the directory in which the script is located to the search path, implicitly making any libraries located in the same place available for import:\nimport os, sys\nsys.path.append(os.path.dirname(__file__))\n\n", "Check out virtualenv, to create a local python installation, where you can install BeautifulSoup and all other libraries you want.\nhttp://pypi.python.org/pypi/virtualenv\n", "No, you can't (in general), because doing so would destroy the module searching method that python uses (files and directories). I guess that you could, with some hack, generate a module hierarchy without having the actual filesystem layout, but this is a huge hack, probably something like\n>>> sha=types.ModuleType(\"sha\")\n>>> sha\n<module 'sha' (built-in)>\n['__doc__', '__name__']\n>>> sha.foo=3\n>>> sha.zam=types.ModuleType(\"zam\")\n>>> sha.foo\n3\n>>> dir(sha)\n['__doc__', '__name__', 'foo', 'zam']\n\nBut in any case you would have to generate the code that creates this layout and stores the stuff into properly named identifiers.\nWhat you need is probably to learn to use .egg files. They are unique agglomerated entities containing your library, like .jar files.\n" ]
[ 4, 2, 0 ]
[]
[]
[ "encapsulation", "python" ]
stackoverflow_0001446852_encapsulation_python.txt
Q: In Django Admin, I want to change how foreign keys are displayed in a Many-Many Relationship admin widget I have a ManyToMany relationship:
class Book(models.Model):
    title = models.CharField(...)
    isbn = models.CharField(...)
    def __unicode__(self):
        return self.title
    def ISBN(self):
        return self.isbn

class Author(models.Model):
    name = models.CharField(...)
    books = models.ManyToManyField(Book...)

In the admin interface for Author I get a multiple select list that uses the __unicode__ display for books. I want to change the list in two ways:
1) Only for the admin interface I want to display the ISBN number; everywhere else I just print out a "Book" object and want the title displayed.
2) How could I use a better widget than MultipleSelectList for the ManyToMany? How could I specify a CheckBoxSelectList to use instead?
A: To display the ISBN you could make a custom field like this:
class BooksField(forms.ModelMultipleChoiceField):
    def label_from_instance(self, obj):
        return obj.isbn

There's a CheckboxSelectMultiple widget for the ManyToManyField, but it doesn't display correctly in the admin, so you could also write some css to fix that.
You need to create a form for the model, and use that in your admin class:
class AuthorForm(forms.ModelForm):
    books = BooksField(Book.objects.all(), widget=forms.CheckboxSelectMultiple)

    class Meta:
        model = Author

    class Media:
        css = {
            'all': ('booksfield.css',)
        }

class AuthorAdmin(admin.ModelAdmin):
    form = AuthorForm

A: For 2), use this in your AuthorAdmin class:
raw_id_fields = ['books']

Check here: http://docs.djangoproject.com/en/dev/ref/contrib/admin/#ref-contrib-admin for instructions on creating a custom ModelAdmin class. I've thought about this a lot myself for my own Django project, and I think 1) would require modifying the admin template for viewing Author objects.
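If you later want the label to carry both values, a small variation on the custom field from the first answer works too (the exact formatting is a matter of taste):
class BooksField(forms.ModelMultipleChoiceField):
    def label_from_instance(self, obj):
        # e.g. 'Some Title (978-0132350884)'
        return u'%s (%s)' % (obj.title, obj.isbn)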
In Django Admin, I want to change how foreign keys are displayed in a Many-Many Relationship admin widget
I have a ManyToMany relationship:
class Book(models.Model):
    title = models.CharField(...)
    isbn = models.CharField(...)
    def __unicode__(self):
        return self.title
    def ISBN(self):
        return self.isbn

class Author(models.Model):
    name = models.CharField(...)
    books = models.ManyToManyField(Book...)

In the admin interface for Author I get a multiple select list that uses the __unicode__ display for books. I want to change the list in two ways:
1) Only for the admin interface I want to display the ISBN number; everywhere else I just print out a "Book" object and want the title displayed.
2) How could I use a better widget than MultipleSelectList for the ManyToMany? How could I specify a CheckBoxSelectList to use instead?
[ "To display the ISBN you could make a custom field like this:\n\nclass BooksField(forms.ModelMultipleChoiceField):\n def label_from_instance(self, obj):\n return obj.isbn\n\nThere's a CheckboxSelectMultiple for the ManyToManyField but it doesn't display correctly on the admin, so you could also write some css to fix that.\nYou need to create a form for the model, and use that in your admin class:\n\nclass AuthorForm(forms.ModelForm):\n books = BooksField(Book.objects.all(), widget=forms.CheckboxSelectMultiple)\n\n class Meta:\n model = Author\n\n class Media:\n css = {\n 'all': ('booksfield.css',)\n }\n\nclass AuthorAdmin(admin.ModelAdmin):\n form = AuthorForm\n\n", "For 2), use this in your AuthorAdmin class:\nraw_id_fields = ['books']\n\nCheck here: http://docs.djangoproject.com/en/dev/ref/contrib/admin/#ref-contrib-admin for instructions on creating a custom ModelAdmin class. I've thought about this a lot myself for my own Django project, and I think 1) would require modifying the admin template for viewing Author objects.\n" ]
[ 3, 2 ]
[]
[]
[ "django", "django_admin", "django_widget", "python" ]
stackoverflow_0001444912_django_django_admin_django_widget_python.txt
Q: How to send a file via HTTP, the good way, using Python? If a would-be-HTTP-server written in Python2.6 has local access to a file, what would be the most correct way for that server to return the file to a client, on request? Let's say this is the current situation:
header('Content-Type', file.mimetype)
header('Content-Length', file.size) # file size in bytes
header('Content-MD5', file.hash) # an md5 hash of the entire file
return open(file.path).read()

All the files are .zip or .rar archives no bigger than a couple of megabytes. With the current situation, browsers handle the incoming download weirdly. No browser knows the file's name, for example, so they use a random or default one. (Firefox even saved the file with a .part extension, even though it was complete and completely usable.) What would be the best way to fix this and other errors I may not even be aware of, yet? What headers am I not sending? Thanks!
A: This is how I send a ZIP file,
req.send_response(200)
req.send_header('Content-Type', 'application/zip')
req.send_header('Content-Disposition', 'attachment;'
                'filename=%s' % filename)

Most browsers handle it correctly.
A: If you don't have to return the response body (that is, if you are given a stream for the response body by your framework) you can avoid holding the file in memory with something like this:
fp = file(path_to_the_file, 'rb')
while True:
    bytes = fp.read(8192)
    if bytes:
        response.write(bytes)
    else:
        return

What web framework are you using?
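Putting the two answers together for a stdlib BaseHTTPRequestHandler (self is the handler instance; the file object with .mimetype/.size/.hash/.path is the one from the question, and .name is assumed to exist to supply the download filename):
self.send_response(200)
self.send_header('Content-Type', file.mimetype)
self.send_header('Content-Length', str(file.size))
self.send_header('Content-MD5', file.hash)
# Content-Disposition is what gives the browser a proper filename
self.send_header('Content-Disposition',
                 'attachment; filename=%s' % file.name)
self.end_headers()
with open(file.path, 'rb') as fp:
    while True:
        chunk = fp.read(8192)    # stream instead of reading it all at once
        if not chunk:
            break
        self.wfile.write(chunk)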
How to send a file via HTTP, the good way, using Python?
If a would-be-HTTP-server written in Python2.6 has local access to a file, what would be the most correct way for that server to return the file to a client, on request? Let's say this is the current situation:
header('Content-Type', file.mimetype)
header('Content-Length', file.size) # file size in bytes
header('Content-MD5', file.hash) # an md5 hash of the entire file
return open(file.path).read()

All the files are .zip or .rar archives no bigger than a couple of megabytes. With the current situation, browsers handle the incoming download weirdly. No browser knows the file's name, for example, so they use a random or default one. (Firefox even saved the file with a .part extension, even though it was complete and completely usable.) What would be the best way to fix this and other errors I may not even be aware of, yet? What headers am I not sending? Thanks!
[ "This is how I send ZIP file,\n req.send_response(200)\n req.send_header('Content-Type', 'application/zip')\n req.send_header('Content-Disposition', 'attachment;'\n 'filename=%s' % filename)\n\nMost browsers handle it correctly.\n", "If you don't have to return the response body (that is, if you are given a stream for the response body by your framework) you can avoid holding the file in memory with something like this:\nfp = file(path_to_the_file, 'rb')\nwhile True:\n bytes = fp.read(8192)\n if bytes:\n response.write(bytes)\n else:\n return\n\nWhat web framework are you using?\n" ]
[ 6, 1 ]
[]
[]
[ "http", "http_headers", "python" ]
stackoverflow_0001447353_http_http_headers_python.txt
Q: Installed Python to portable python I have installed Python from the .msi installer on Windows and installed a lot of other modules. I would like to have all of these available on a portable thumbdrive, but I don't want to redownload all the extra modules. Is there a way I can convert my C:\python26 installation to a portable python installation?
A: Python is pretty smart about knowing where it is run from. What happens if you just copy the whole directory tree to the thumb drive?
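One quick sanity check after copying the tree: run the copied interpreter from the thumb drive and confirm it resolves everything relative to itself rather than C:\python26 (a small throwaway script):
import sys
print sys.executable   # should be the copy on the thumb drive
print sys.prefix       # should point at the copied tree
print sys.path[:3]     # module search path should follow that prefix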
Installed Python to portable python
I have installed Python from the .msi installer on Windows and installed a lot of other modules. I would like to have all of these available on a portable thumbdrive, but I don't want to redownload all the extra modules. Is there a way I can convert my C:\python26 installation to a portable python installation?
[ "Python is pretty smart about knowing where it is run from. What happens if you just copy the whole directory tree to the thumb drive?\n" ]
[ 1 ]
[]
[]
[ "python" ]
stackoverflow_0001447422_python.txt
Q: Class usage in Python I write a lot of scripts in Python to analyze and plot experimental data as well as write simple simulations to test how theories fit the data. The scripts tend to be very procedural; calculate some property, calculate some other property, plot properties, analyze plot... Rather than just writing a procedure, would there be any benefit in using a Class? I can bury the actual analysis into functions so I can pass the data to the function and let it do its thing, but the functions are not contained in a Class. What sort of drawbacks would a Class overcome, and what would be the purpose of using a Class if it can be written procedurally? If this has been posted before, my apologies; just point me in that direction.
A: By using Object Oriented Programming, you will have objects that have associated functions, which are (should be) the only way to modify their properties (internal variables).
It was common to have functions called trim_string(string), while with a string class you could do string.trim(). The difference is noticeable mainly when doing big complex modules, where you need to do all you can to minimize the coupling between individual components.
There are other concepts that encompass OOP, like inheritance, but the really important thing to know is that OOP is about making you think about objects that have operations and message passing (methods/verbs), instead of thinking in terms of operations (functions/verbs) and basic elements (variables)

The importance of the object oriented paradigm is not as much in the language mechanism as it is in the thinking and design process.

Also take a look at this question.
There is nothing inherently wrong with Structured Programming; it's just that some problems map better to an Object Oriented design.
For example you could have in a SP language:
#Pseudocode!!!

function talk(dog):
    if dog is aDog:
        print "bark!"
    raise "IS NOT A SUPPORTED ANIMAL!!!"

>>var dog as aDog
>>talk(dog)
"bark!"
>>var cat as aCat
>>talk(cat)
EXCEPTION: IS NOT A SUPPORTED ANIMAL!!!

# Lets add the cat
function talk(animal):
    if animal is aDog:
        print "bark!"
    if animal is aCat:
        print "miau!"
    raise "IS NOT A SUPPORTED ANIMAL!!!"

While on an OOP you'd have:
class Animal:
    def __init__(self, name="skippy"):
        self.name = name
    def talk(self):
        raise "MUTE ANIMAL"

class Dog(Animal):
    def talk(self):
        print "bark!"

class Cat(Animal):
    def talk(self):
        print "miau!"

>>dog = new Dog()
>>dog.talk()
"bark!"
>>cat = new Cat()
>>cat.talk()
"miau!"

You can see that with SP, for every animal that you add, you'd have to add another if to talk, add another variable to store the name of the animal, and touch potentially every function in the module, while in OOP you can consider your class as independent of the rest. When there is a global change, you change the Animal; when it's a narrow change, you just have to look at the class definition.
For simple, sequential, and possibly throwaway code, it's ok to use structured programming.
A: You don't need to use classes in Python - it doesn't force you to do OOP. If you're more comfortable with the functional style, that's fine. I use classes when I want to model some abstraction which has variations, and I want to model those variations using classes. As the word "class" implies, they're useful mainly when the stuff you are working with falls naturally into various classes. When just manipulating large datasets, I've not found an overarching need to follow an OOP paradigm just for the sake of it.
A: "but the functions are not contained in a Class."
They could be.
class Linear( object ):
    a = 2.
    b = 3.
    def calculate( self, somePoint ):
        somePoint['line'] = self.b + somePoint['x']*self.a

class Exponential( object ):
    a = 1.05
    b = 3.2
    def calculate( self, somePoint ):
        somePoint['exp'] = self.b * somePoint['x']**self.a

class Mapping( object ):
    def __init__( self ):
        self.funcs = ( Linear(), Exponential() )
    def apply( self, someData ):
        for row in someData:
            for f in self.funcs:
                f.calculate( row )

Now your calculations are wrapped in classes.
You can use design patterns like Delegation, Composition and Command to simplify your scripts.
A: OOP lends itself well to complex programs. It's great for capturing the state and behavior of real world concepts and orchestrating the interplay between them. Good OO code is easy to read/understand, protects your data's integrity, and maximizes code reuse. I'd say code reuse is one big advantage to keeping your frequently used calculations in a class.
A: 
Object-oriented programming isn't the solution to every coding problem.
In Python, functions are objects. You can mix as many objects and functions as you want.
Modules with functions are already objects with properties.
If you find yourself passing a lot of the same variables around — state — an object is probably better suited. If you have a lot of classes with class methods, or methods that don't use self very much, then functions are probably better.
Class usage in Python
I write a lot of scripts in Python to analyze and plot experimental data as well as write simple simulations to test how theories fit the data. The scripts tend to be very procedural; calculate some property, calculate some other property, plot properties, analyze plot... Rather than just writing a procedure, would there be any benefit in using a Class? I can bury the actual analysis into functions so I can pass the data to the function and let it do its thing, but the functions are not contained in a Class. What sort of drawbacks would a Class overcome, and what would be the purpose of using a Class if it can be written procedurally? If this has been posted before, my apologies; just point me in that direction.
[ "By using Object Oriented Programming, you will have objects, that have associated functions, that are (should) be the only way to modify its properties (internal variables).\nIt was common to have functions called trim_string(string), while with a string class you could do string.trim(). The difference is noticeable mainly when doing big complex modules, where you need to do all you can to minify the coupling between individual components.\nThere are other concepts that encompass OOP, like inheritance, but the real important thing to know, is that OOP is about making you think about objects that have operations and message passing (methods/verbs), instead of thinking in term of operations (functions/verbs) and basic elements (variables)\n\nThe importance of the object oriented paradigm is not as much in the language mechanism as it is in the thinking and design process.\n\nAlso take a look at this question.\nThere is nothing inherently wrong about Structured Programming, it's just that some problems map better to an Object Oriented design.\nFor example you could have in a SP language:\n#Pseudocode!!!\n\nfunction talk(dog):\n if dog is aDog:\n print \"bark!\"\n raise \"IS NOT A SUPPORTED ANIMAL!!!\"\n\n>>var dog as aDog\n>>talk(dog)\n\"bark!\"\n>>var cat as aCat\n>>talk(cat)\nEXCEPTION: IS NOT A SUPPORTED ANIMAL!!!\n\n# Lets add the cat\nfunction talk(animal):\n if animal is aDog:\n print \"bark!\"\n if animal is aCat:\n print \"miau!\"\n raise \"IS NOT A SUPPORTED ANIMAL!!!\"\n\nWhile on an OOP you'd have:\nclass Animal:\n def __init__(self, name=\"skippy\"):\n self.name = name\n def talk(self):\n raise \"MUTE ANIMAL\"\n\nclass Dog(Animal):\n def talk(self):\n print \"bark!\"\n\nclass Cat(Animal):\n def talk(self):\n print \"miau!\"\n\n>>dog = new Dog()\n>>dog.talk()\n\"bark!\"\n>>cat = new Cat()\n>>cat.talk()\n\"miau!\"\n\nYou can see that with SP, every animal that you add, you'd have to add another if to talk, add another variable to store the name of the animal, touch potentially every function in the module, while on OOP, you can consider your class as independent to the rest. When there is a global change, you change the Animal, when it's a narrow change, you just have to look at the class definition.\nFor simple, sequential, and possibly throwaway code, it's ok to use structured programming.\n", "You don't need to use classes in Python - it doesn't force you to do OOP. If you're more comfortable with the functional style, that's fine. I use classes when I want to model some abstraction which has variations, and I want to model those variations using classes. As the word \"class\" implies, they're useful mainly when the stuff you are working with falls naturally into various classes. When just manipulating large datasets, I've not found an overarching need to follow an OOP paradigm just for the sake of it.\n", "\"but the functions are not contained in a Class.\"\nThey could be.\nclass Linear( object ):\n a= 2.\n b= 3.\n def calculate( self, somePoint ):\n somePoint['line']= b + somePoint['x']*a\n\nclass Exponential( object ):\n a = 1.05\n b = 3.2\n def calculate( self, somePoint ):\n somePoint['exp']= b * somePoint['x']**a\n\nclass Mapping( object ):\n def __init__( self ):\n self.funcs = ( Linear(), Exponential() )\n def apply( self, someData ):\n for row in someData:\n for f in self.funcs:\n f.calculate( row )\n\nNow your calculations are wrapped in classes. 
You can use design patterns like Delegation, Composition and Command to simplify your scripts.\n", "OOP lends itself well to complex programs. It's great for capturing the state and behavior of real world concepts and orchestrating the interplay between them. Good OO code is easy to read/understand, protects your data's integrity, and maximizes code reuse. I'd say code reuse is one big advantage to keeping your frequently used calculations in a class.\n", "\nObject-oriented programming isn't the solution to every coding problem.\nIn Python, functions are objects. You can mix as many objects and functions as you want.\nModules with functions are already objects with properties.\nIf you find yourself passing a lot of the same variables around — state — an object is probably better suited. If you have a lot of classes with class methods, or methods that don't use self very much, then functions are probably better.\n\n" ]
[ 15, 4, 1, 1, 1 ]
[]
[]
[ "class_design", "oop", "procedural_programming", "python" ]
stackoverflow_0001440434_class_design_oop_procedural_programming_python.txt
Q: How do I see the results of my class on a file? I found this class that takes a space delimited file; if there are multiple spaces, they will be treated as a single separator. How do I see the effects of this on a file?
class FH:
    def __init__(self, fh):
        self.fh = fh
    def close(self):
        self.fh.close()
    def seek(self, arg):
        self.fh.seek(arg)
    def fix(self, s):
        return ' '.join(s.split())
    def next(self):
        return self.fix(self.fh.next())
    def __iter__(self):
        for line in self.fh:
            yield self.fix(line)

So how do I see this work on a file? I've created a file with multiple spaces to see it in action. I've done this:
In [31]: FH('classfhtry.csv')
Out[31]:
In [32]: r = FH('classfhtry.csv')
In [33]: r
Out[33]:
In [34]: print r
-------> print(r)
In [35]: f = open(r)
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
/home/blahblahblah/Documents/Programming/EXERCISES/piece.py in ()
----> 1 2 3 4 5
TypeError: coercing to Unicode: need string or buffer, instance found

I want to see my class in action! Thanks for any 2cents!
A: Looks like this class takes a file (not a file name) in the initializer. Try:
r = FH(file('classfhtry.csv', 'r'))
for line in r:
    print line

A: dcrosta is correct. The class expects space delimited contents in the file.
Have a file like:
somefile.txt
one two
three four
five six

And follow the method as suggested above. The class is an iterator (it yields the lines from the file obj), so the way to access the file is to iterate through the contents.
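Following the accepted answer, a complete round trip over the test file from the question looks something like this:
fh = FH(open('classfhtry.csv'))
for line in fh:
    print line          # runs of spaces collapsed to single ones
fh.close()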
How do I see the results of my class on a file?
I found this class that takes a space delimited file; if there are multiple spaces, they will be treated as a single separator. How do I see the effects of this on a file?
class FH:
    def __init__(self, fh):
        self.fh = fh
    def close(self):
        self.fh.close()
    def seek(self, arg):
        self.fh.seek(arg)
    def fix(self, s):
        return ' '.join(s.split())
    def next(self):
        return self.fix(self.fh.next())
    def __iter__(self):
        for line in self.fh:
            yield self.fix(line)

So how do I see this work on a file? I've created a file with multiple spaces to see it in action. I've done this:
In [31]: FH('classfhtry.csv')
Out[31]:
In [32]: r = FH('classfhtry.csv')
In [33]: r
Out[33]:
In [34]: print r
-------> print(r)
In [35]: f = open(r)
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
/home/blahblahblah/Documents/Programming/EXERCISES/piece.py in ()
----> 1 2 3 4 5
TypeError: coercing to Unicode: need string or buffer, instance found

I want to see my class in action! Thanks for any 2cents!
[ "Looks like this class takes a file (not a file name) in the initializer. Try:\nr = FH(file('classfhtry.csv', 'r'))\nfor line in r:\n print line\n\n", "dcrosta is correct. The class expects a space delimited contents in the file.\nHave a file like:\nsomefile.txt\none two\nthree four\nfive six\n\nAnd follow the method as suggested above. The class is an iterator (it yields the lines from the file obj). So, the way to access to the file is the iterate through the contents.\n" ]
[ 5, 0 ]
[]
[]
[ "class", "python" ]
stackoverflow_0001447487_class_python.txt
Q: Writing a function to display current day using time_t? I've been writing a time converter to take the system's time_t and convert it into a human readable date/time. Oh, and this is my second python script ever. We'll leave that fact aside and move on. The full script is hosted here. Writing converters for the year and month were fairly easy, but I've hit a serious brick wall trying to get the day working. As you can see, I've been trying to brute-force my way all the way from 1970 to today. Unfortunately, the day comes out as -105. Does anyone know of a better way to do it, or a way to fix up what I have attempted here? It's currently 3:30 AM, so it's quite possible I'm missing something obvious. Sorry, I forgot to note that I'm doing this manually in order to learn python. Doing it via date functions defeats the purpose, unfortunately.
A: Why not use:
from datetime import datetime
the_date = datetime.fromtimestamp(the_time)
print(the_date.strftime('%Y %B %d'))

The datetime module handles all the edge cases -- leap years, leap seconds, leap days -- as well as time zone conversion (with optional second argument)
A: You could do it either with time.strftime:
>>> import time
>>> time.strftime('%Y %B %d')
'2009 September 18'

or with datetime.date.strftime:
>>> import datetime
>>> datetime.date.today().strftime('%Y %B %d')
'2009 September 18'

A: (I'm assuming you do this to learn Python, so I'll point out the errors in your code).
>>> years = SecondsSinceEpoch / 31540000

Nonononono. You can't do that. Some years have 31536000 seconds, others have 31622400 seconds.
>>> if calendar.isleap(yeariterator) == True:

You don't need to test if a true value is true. :-) Do:
>>> if calendar.isleap(yeariterator):

Instead.
Also change:
>>> yeariterator = 1969
>>> iterator = 0
>>> while yeariterator < yearsfordayfunction:
>>>     yeariterator = yeariterator + 1

To:
for yeariterator in range(1970, yearsfordayfunction):

That will also fix your error: You don't stop until AFTER 2009, so you get the answer -105, because there are 105 days left of the year.
And also, there's not much point in calculating month by month. Year by year works fine.
    for yeariterator in range(1970, yearsfordayfunction):
        if calendar.isleap(yeariterator) == True:
            days = days - 366
        else:
            days = days - 365

And an indent of 8 spaces is a lot. 4 is more common.
Also, I'd calculate year and day of year in one method, instead of doing it twice.
def YearDay():
    SecondsSinceEpoch = int(time.time())
    days = SecondsSinceEpoch // 86400 # Double slash means floored int.
    year = 1970
    while True:
        if calendar.isleap(year):
            days -= 366
        else:
            days -= 365
        year += 1
        if calendar.isleap(year):
            if days <= 366:
                return year, days
        else:
            if days <= 365:
                return year, days


def MonthDay(year, day):
    if calendar.isleap(year):
        monthstarts = [0, 31, 60, 91, 121, 152, 182, 213, 244, 274, 305, 335, 366]
    else:
        monthstarts = [0, 31, 59, 90, 120, 151, 181, 212, 243, 273, 304, 334, 365]

    month = 0
    for start in monthstarts:
        if start > day:
            return month, day - monthstarts[month-1] + 1
        month += 1

A: I'll also point out something odd in the code you posted:
 try:
     SecondsSinceEpoch = time.time()
 except IOError:
     Print("Unable to get your system time!")

1.) Why would time.time() raise an IOError? As far as I know it's impossible for that function to raise an error, it should always return a value.
2.) Print should be print.
3.) Even if time.time did raise an IOError exception, you are swallowing the exception, which you probably don't want to do.
The next line requires SecondsSinceEpoch to be defined, so that will just raise another (more confusing) exception.
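Since the goal is to learn by doing the epoch arithmetic manually, it still helps to keep a reference answer around to test each step against; a throwaway cross-check with the stdlib might look like this (time.gmtime does the same conversion internally):
import time

t = time.gmtime(int(time.time()))
print t.tm_year, t.tm_yday   # year and day-of-year to compare your results with
print t.tm_mon, t.tm_mday    # month and day-of-month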
Writing a function to display current day using time_t?
I've been writing a time converter to take the system's time_t and convert it into a human readable date/time. Oh, and this is my second python script ever. We'll leave that fact aside and move on. The full script is hosted here. Writing converters for the year and month were fairly easy, but I've hit a serious brick wall trying to get the day working. As you can see, I've been trying to brute-force my way all the way from 1970 to today. Unfortunately, the day comes out as -105. Does anyone know of a better way to do it, or a way to fix up what I have attempted here? It's currently 3:30 AM, so it's quite possible I'm missing something obvious. Sorry, I forgot to note that I'm doing this manually in order to learn python. Doing it via date functions defeats the purpose, unfortunately.
[ "Why not use:\nfrom datetime import datetime\nthe_date = datetime.fromtimestamp(the_time)\nprint(the_date.strftime('%Y %B %d'))\n\nThe datetime module handles all the edge cases -- leap years, leap seconds, leap days -- as well as time zone conversion (with optional second argument)\n", "You could do it either with time.strftime:\n>>> import time\n>>> time.strftime('%Y %B %d')\n'2009 September 18'\n\nor with datetime.date.strftime:\n>>> import datetime\n>>> datetime.date.today().strftime('%Y %B %d')\n'2009 September 18'\n\n", "(I'm assuming you do this to learn Python, so I'll point out the errors in your code).\n>>> years = SecondsSinceEpoch / 31540000\n\nNonononono. You can't do that. Some years have 31536000 seconds, others have 31622400 seconds.\n>>> if calendar.isleap(yeariterator) == True:\n\nYou don't need to test if a true value is true. :-) Do:\n>>> if calendar.isleap(yeariterator):\n\nInstead.\nAlso change:\n>>> yeariterator = 1969\n>>> iterator = 0\n>>> while yeariterator < yearsfordayfunction:\n>>> yeariterator = yeariterator + 1\n\nTo:\n\n\n\nfor yeariterator in range(1970, yearsfordayfunction):\n\n\n\nThat will also fix your error: You don't stop until AFTER 2009, so you get the answer -105, because there is 105 days left of the year.\nAnd also, there's not much point in calculating month by month. Year by year works fine.\n for yeariterator in range(1970, yearsfordayfunction):\n if calendar.isleap(yeariterator) == True:\n days = days - 366\n else:\n days = days - 365\n\nAnd an indent of 8 spaces is a lot. 4 is more common.\nAlso, I'd calculate year and day of year in one method, instead of doing it twice.\ndef YearDay():\n SecondsSinceEpoch = int(time.time())\n days = SecondsSinceEpoch // 86400 # Double slash means floored int.\n year = 1970\n while True:\n if calendar.isleap(year):\n days -= 366\n else:\n days -= 365\n year += 1\n if calendar.isleap(year):\n if days <= 366:\n return year, days\n else:\n if days <= 365:\n return year, days\n\n\ndef MonthDay(year, day):\n if calendar.isleap(year):\n monthstarts = [0, 31, 60, 91, 121, 152, 182, 213, 244, 274, 305, 335, 366]\n else:\n monthstarts = [0, 31, 59, 90, 120, 151, 181, 212, 243, 273, 304, 334, 365]\n\n month = 0\n for start in monthstarts:\n if start > day:\n return month, day - monthstarts[month-1] + 1\n month += 1\n\n", "I'll also point out something odd in the code you posted:\n try:\n SecondsSinceEpoch = time.time()\n except IOError:\n Print(\"Unable to get your system time!\")\n\n1.) Why would time.time() raise an IOError? As far as I know it's impossible for that function to raise an error, it should always return a value.\n2.) Print should be print.\n3.) Even if time.time did raise an IOError exception, you are swallowing the exception, which you probably don't want to do. The next line requires SecondsSinceEpoch to be defined, so that will just raise another (more confusing) exception.\n" ]
[ 8, 3, 1, 0 ]
[]
[]
[ "python", "time", "time_t" ]
stackoverflow_0001445236_python_time_time_t.txt
Q: Help with Python in the web I've been using Werkzeug to make WSGI compliant applications. I'm trying to modify the code in the front page. Its basic idea is that you go to the /hello URL and you get a "Hello World!" message. You go to /hello/<name> and you get "Hello <name>!". For example, /hello/jeff yields "Hello Jeff!". Anyway, what I'm trying to do is put a form on the front page with a text box where you can enter your name, and it will submit it to /hello. So if you enter "Jeff" in the form and submit, you get the "Hello Jeff!" message. However, I have no idea how to do this. I need to pass the "name" variable to the hello template, but I don't know how. Here's my index.html:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html>
 <head>
  <title>Index page</title>
 </head>
 <body>
  <h1>Go to the <a href="${url_for('say_hello')}">default</a></h1>
  <form name="helloform" action="${url_for('say_hello')}" method="post">
   <input type="text" name="name">
   <input type="submit">
  </form>
 </body>
</html>

method="get" doesn't work either, predictably.
A: Do it the right way: go to /hello?name=joe to say hello to joe, and so forth. That's how HTML/HTTP is designed to work! Your code behind the /hello URL just needs to get the name parameter from the request, if present, and respond accordingly.
A: HTML Forms have a static target address, action="/something", but you want to change the address depending on user input.
You have two options:

Add javascript to the html form to change the target address (by appending the name) before the form is submitted.
Write a new method in your web framework that reads GET or POST variables and point the form there.
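Server-side, the first answer's suggestion amounts to reading the query parameter in the /hello view; in Werkzeug that is request.args (a sketch — render_template here stands for whatever template helper your app already defines, and for method="post" forms the value shows up in request.form instead):
def say_hello(request):
    name = request.args.get('name')   # filled in by the GET form at /hello?name=...
    if name:
        return render_template('hello.html', name=name)
    return render_template('index.html')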
Help with Python in the web
I've been using Werkzeug to make WSGI compliant applications. I'm trying to modify the code in the front page. Its basic idea is that you go to the /hello URL and you get a "Hello World!" message. You go to /hello/<name> and you get "hello <name>!". For example, /hello/jeff yields "Hello Jeff!". Anyway, what I'm trying to do is putting a form in the front page with a text box where you can enter your name, and it will submit it to /hello. So if you enter "Jeff" in the form and submit, you get the "Hello Jeff!" message. However, I have no idea how to do this. I need to pass the "name" variable to the hello template, but I don't know how. Here's my index.html: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd"> <html> <head> <title>Index page</title> </head> <body> <h1>Go to the <a href="${url_for('say_hello')}">default</a></h1> <form name="helloform" action="${url_for('say_hello')}" method="post"> <input type="text" name="name"> <input type="submit"> </form> </body> </html> method="get" doesn't work either, predictably.
[ "Do it the right way: go to /hello?name=joe to say hello to joe, and so forth. That's how HTML/HTTP is designed to work! Your code behind the /hello URL just needs to get the name parameter from the request, if present, and respond accordingly.\n", "HTML Forms have a static target address, action=\"/something\", but you want to change the address depending user input.\nYou have two options: \n\nAdd javascript to the html form to\nchange the target address (by\nappending the name) before the form\nis submitted.\nWrite a new method in your web framework that reads GET or POST\nvariables and point the form there.\n\n" ]
[ 1, 0 ]
[ "Directly on the page link to which you provide (http://werkzeug.pocoo.org/) when clicking on 'Click here', you get a code for the hello X example. What you seem to be missing is:\nHello ${url_values['name']|h}!\n\nsomewhere in your html template (assuming it is the template for response as well as for request)\n", "HTML is based on the REST principle... each url should be an object, like a person; NOT an action, like saying hello to a person. Have the URL identify who you want to greet only if your web application knows who that is internally, by looking at its database.\nIf your webapplication has no object called Joe, then designing URLs with Joe in them is not the right approach. Your particular web application is about finding out who people are and sending a hello messages to them. So, you should probably have one URL: /greeter, which looks for information sent in GET or POST requests (from your form), and displays a greeting. If it doesn't know who to greet, it can display the form to find out.\nAlways think in terms of the objects you're actually working with --- the components that make up the system --- when building software.\n" ]
[ -1, -1 ]
[ "python", "werkzeug", "wsgi" ]
stackoverflow_0001447010_python_werkzeug_wsgi.txt
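A minimal sketch of the first answer's advice for the Werkzeug entry above: read the name from the query string (or the POST body) inside the /hello view. The URL layout, port, and template-free response are illustrative assumptions, not the poster's actual app:

    from werkzeug.wrappers import Request, Response

    @Request.application
    def application(request):
        # GET form data arrives in request.args, POST data in request.form
        name = request.args.get('name') or request.form.get('name') or 'World'
        return Response('Hello %s!' % name, mimetype='text/plain')

    if __name__ == '__main__':
        from werkzeug.serving import run_simple
        run_simple('localhost', 5000, application)  # try http://localhost:5000/?name=Jeff

With this shape, the form in index.html can keep method="get"; the view no longer cares whether the name arrived via the query string or a POST.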
Q: How do I partition datetime intervals which overlap (Org Mode clocked time)? I have related tasks from two Org files/subtrees where some of the clocked time overlaps. These are a manual worklog and a generated git commit log, see below. One subtree's CLOCK: entries needs to be adjusted to remove overlapping time. The other subtree is considered complete, and it's CLOCK: entries should not be adjusted. EDIT: This question is about calculating new time intervals to remove any overlaps. Any suggestions don't need to parse the Org mode file format. Python datetime.datetime algorithms are helpful, as are Emacs Lisp with or without the use of Org mode functions. In Python (more familiar) or Emacs Lisp (Org functions could help) I would like to: Identify time overlaps where they occur. file1.org will be mutable, file2.org time intervals should be considered fixed/correct. Calculate new time intervals for the CLOCK: lines in file1.org to remove any overlap with file2.org CLOCK: lines. write resulting new CLOCK: lines out, or at least the pertinent datetimes. The python convenience function tsparse converts an Org Mode timestamp to a python datetime.datetime object: >>> from datetime import datetime, timedelta >>> def tsparse(timestring): return datetime.strptime(timestring,'%Y-%m-%d %a %H:%M') >>> tsparse('2008-10-15 Wed 00:45') datetime.datetime(2008, 10, 15, 0, 45) Test cases can be found below. Thanks for any algorithm or implementation suggestions for Python or Emacs Lisp. Jeff file1.org, prior to adjustments: * Manually Edited Worklog ** DONE Onsite CLOSED: [2009-09-09 Wed 15:00] :LOGBOOK: CLOCK: [2009-09-09 Wed 07:00]--[2009-09-09 Wed 15:00] => 8:00 :END: ** DONE Onsite CLOSED: [2009-09-10 Wed 15:00] :LOGBOOK: CLOCK: [2009-09-10 Thu 08:00]--[2009-09-10 Thu 15:00] => 7:00 :END: file2.org: * Generated commit log ** DONE Commit 1 :partial:overlap:leading:contained: CLOSED: [2009-09-09 Tue 10:18] :LOGBOOK: CLOCK: [2009-09-09 Wed 06:40]--[2009-09-09 Wed 07:18] => 0:38 CLOCK: [2009-09-09 Wed 10:12]--[2009-09-09 Wed 10:18] => 0:06 :END: ** DONE Commit 2 :contained:overlap:contiguous: CLOSED: [2009-09-09 Wed 10:20] :LOGBOOK: CLOCK: [2009-09-09 Wed 10:18]--[2009-09-09 Wed 10:20] => 0:02 :END: ** DONE Commit 4 :contained:overlap: CLOSED: [2009-09-10 Wed 09:53] :LOGBOOK: CLOCK: [2009-09-10 Wed 09:49]--[2009-09-10 Wed 09:53] => 0:04 :END: ** DONE Commit 5 :partial:overlap:trailing: CLOSED: [2009-09-10 Wed 15:12] :LOGBOOK: CLOCK: [2009-09-10 Wed 14:45]--[2009-09-10 Wed 15:12] => 0:27 :END: ** DONE Commit 6 :partial:overlap:leading: CLOSED: [2009-09-11 Fri 08:05] :LOGBOOK: CLOCK: [2009-09-11 Fri 07:50]--[2009-09-11 Fri 08:05] => 0:15 :END: ** DONE Commit 7 :nonoverlap: CLOSED: [2009-09-11 Fri 15:55] :LOGBOOK: CLOCK: [2009-09-11 Fri 15:25]--[2009-09-11 Fri 15:55] => 0:30 :END: file1.org, after adjustments: * Manually Edited Worklog ** DONE Onsite CLOSED: [2009-09-09 Wed 15:00] :LOGBOOK: CLOCK: [2009-09-09 Wed 10:20]--[2009-09-09 Wed 14:45] => 4:25 CLOCK: [2009-09-09 Wed 07:18]--[2009-09-09 Wed 10:12] => 2:54 :END: ** DONE Onsite CLOSED: [2009-09-10 Wed 15:00] :LOGBOOK: CLOCK: [2009-09-10 Thu 08:05]--[2009-09-10 Thu 15:00] => 6:55 :END: A: Do you want help parsing the file format? Or just on figuring out the overlapping times? datetime objects are comparable in Python, so you can do something like this: >>> (a,b) = (datetime(2009, 9, 15, 8, 30), datetime(2009, 9, 15, 8, 45)) >>> (c,d) = (datetime(2009, 9, 15, 8, 40), datetime(2009, 9, 15, 8, 50)) >>> a <= b True >>> if c <= b <= d: ... 
print "overlap, merge these two ranges" ... else: ... print "separate ranges, leave them alone" ... overlap, merge these two ranges If the end of the first range (b) is within the second range (c and d), then there is an overlap and you can merge those two pairs into one range (a,d). Since your set of data looks pretty small you can probably just do this comparison and merge between all time ranges (N**2) and get an acceptable result.
How do I partition datetime intervals which overlap (Org Mode clocked time)?
I have related tasks from two Org files/subtrees where some of the clocked time overlaps. These are a manual worklog and a generated git commit log, see below. One subtree's CLOCK: entries needs to be adjusted to remove overlapping time. The other subtree is considered complete, and it's CLOCK: entries should not be adjusted. EDIT: This question is about calculating new time intervals to remove any overlaps. Any suggestions don't need to parse the Org mode file format. Python datetime.datetime algorithms are helpful, as are Emacs Lisp with or without the use of Org mode functions. In Python (more familiar) or Emacs Lisp (Org functions could help) I would like to: Identify time overlaps where they occur. file1.org will be mutable, file2.org time intervals should be considered fixed/correct. Calculate new time intervals for the CLOCK: lines in file1.org to remove any overlap with file2.org CLOCK: lines. write resulting new CLOCK: lines out, or at least the pertinent datetimes. The python convenience function tsparse converts an Org Mode timestamp to a python datetime.datetime object: >>> from datetime import datetime, timedelta >>> def tsparse(timestring): return datetime.strptime(timestring,'%Y-%m-%d %a %H:%M') >>> tsparse('2008-10-15 Wed 00:45') datetime.datetime(2008, 10, 15, 0, 45) Test cases can be found below. Thanks for any algorithm or implementation suggestions for Python or Emacs Lisp. Jeff file1.org, prior to adjustments: * Manually Edited Worklog ** DONE Onsite CLOSED: [2009-09-09 Wed 15:00] :LOGBOOK: CLOCK: [2009-09-09 Wed 07:00]--[2009-09-09 Wed 15:00] => 8:00 :END: ** DONE Onsite CLOSED: [2009-09-10 Wed 15:00] :LOGBOOK: CLOCK: [2009-09-10 Thu 08:00]--[2009-09-10 Thu 15:00] => 7:00 :END: file2.org: * Generated commit log ** DONE Commit 1 :partial:overlap:leading:contained: CLOSED: [2009-09-09 Tue 10:18] :LOGBOOK: CLOCK: [2009-09-09 Wed 06:40]--[2009-09-09 Wed 07:18] => 0:38 CLOCK: [2009-09-09 Wed 10:12]--[2009-09-09 Wed 10:18] => 0:06 :END: ** DONE Commit 2 :contained:overlap:contiguous: CLOSED: [2009-09-09 Wed 10:20] :LOGBOOK: CLOCK: [2009-09-09 Wed 10:18]--[2009-09-09 Wed 10:20] => 0:02 :END: ** DONE Commit 4 :contained:overlap: CLOSED: [2009-09-10 Wed 09:53] :LOGBOOK: CLOCK: [2009-09-10 Wed 09:49]--[2009-09-10 Wed 09:53] => 0:04 :END: ** DONE Commit 5 :partial:overlap:trailing: CLOSED: [2009-09-10 Wed 15:12] :LOGBOOK: CLOCK: [2009-09-10 Wed 14:45]--[2009-09-10 Wed 15:12] => 0:27 :END: ** DONE Commit 6 :partial:overlap:leading: CLOSED: [2009-09-11 Fri 08:05] :LOGBOOK: CLOCK: [2009-09-11 Fri 07:50]--[2009-09-11 Fri 08:05] => 0:15 :END: ** DONE Commit 7 :nonoverlap: CLOSED: [2009-09-11 Fri 15:55] :LOGBOOK: CLOCK: [2009-09-11 Fri 15:25]--[2009-09-11 Fri 15:55] => 0:30 :END: file1.org, after adjustments: * Manually Edited Worklog ** DONE Onsite CLOSED: [2009-09-09 Wed 15:00] :LOGBOOK: CLOCK: [2009-09-09 Wed 10:20]--[2009-09-09 Wed 14:45] => 4:25 CLOCK: [2009-09-09 Wed 07:18]--[2009-09-09 Wed 10:12] => 2:54 :END: ** DONE Onsite CLOSED: [2009-09-10 Wed 15:00] :LOGBOOK: CLOCK: [2009-09-10 Thu 08:05]--[2009-09-10 Thu 15:00] => 6:55 :END:
[ "Do you want help parsing the file format? Or just on figuring out the overlapping times?\ndatetime objects are comparable in Python, so you can do something like this:\n>>> (a,b) = (datetime(2009, 9, 15, 8, 30), datetime(2009, 9, 15, 8, 45))\n>>> (c,d) = (datetime(2009, 9, 15, 8, 40), datetime(2009, 9, 15, 8, 50))\n>>> a <= b\nTrue\n>>> if c <= b <= d:\n... print \"overlap, merge these two ranges\"\n... else:\n... print \"separate ranges, leave them alone\"\n...\noverlap, merge these two ranges\n\nIf the end of the first range (b) is within the second range (c and d), then there is an overlap and you can merge those two pairs into one range (a,d).\nSince your set of data looks pretty small you can probably just do this comparison and merge between all time ranges (N**2) and get an acceptable result.\n" ]
[ 2 ]
[]
[]
[ "datetime", "emacs", "org_mode", "overlap", "python" ]
stackoverflow_0001447257_datetime_emacs_org_mode_overlap_python.txt
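The answer above only detects an overlap; the question also asks how to trim the mutable intervals. Here is a sketch of the subtraction step with plain datetime pairs (the function names are mine). Clamping file1's spans against file2's fixed spans this way reproduces the "after adjustments" output:

    from datetime import datetime

    def subtract(span, hole):
        # Return the parts of span = (start, end) not covered by hole = (hstart, hend).
        start, end = span
        hstart, hend = hole
        if hend <= start or hstart >= end:  # disjoint: keep span whole
            return [span]
        pieces = []
        if hstart > start:                  # leading remainder survives
            pieces.append((start, hstart))
        if hend < end:                      # trailing remainder survives
            pieces.append((hend, end))
        return pieces

    def remove_overlaps(mutable_spans, fixed_spans):
        result = list(mutable_spans)
        for hole in fixed_spans:
            result = [piece for span in result for piece in subtract(span, hole)]
        return result

For example, subtracting the Commit 1 clocks (built with the question's own tsparse helper) from the first Onsite clock splits [07:00, 15:00] into [07:18, 10:12] and [10:18, 15:00], which matches the adjusted worklog.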
Q: Error trying to pass (large) image over socket in python I am trying to pass an image over python socket for smaller images it works fine but for larger images it gives error as socket.error: [Errno 10040] A message sent on a datagram socket was larger than the internal message buffer or some other network limit, or the buffer used to receive a datagram into was smaller than the datagram itself I am using socket.socket(socket.AF_INET, socket.SOCK_DGRAM) Thanks for any clue . I tried using SOCK_STREAM, it does not work .. It just says me starting ... and hangs out . with no output .. Its not coming out of send function import thread import socket import ImageGrab class p2p: def __init__(self): socket.setdefaulttimeout(50) #send port self.send_port = 3000 #receive port self.recv_port=2000 #OUR IP HERE self.peerid = '127.0.0.1:' #DESTINATION self.recv_peers = '127.0.0.1' #declaring sender socket self.socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM ) self.socket.bind(('127.0.0.1', self.send_port)) self.socket.settimeout(50) #receiver socket self.serverSocket=socket.socket(socket.AF_INET, socket.SOCK_STREAM ) self.serverSocket.bind(('127.0.0.1', self.recv_port)) self.serverSocket.settimeout(50) #starting thread for reception thread.start_new_thread(self.receiveData, ()) #grabbing screenshot image = ImageGrab.grab() image.save("c:\\test.jpg") f = open("c:\\ test.jpg", "rb") data = f.read() #sending self.sendData(data) print 'sent...' f.close() while 1: pass def receiveData(self): f = open("c:\\received.png","wb") while 1: data,address = self.serverSocket.recvfrom(1024) if not data: break f.write(data) try: f.close() except: print 'could not save' print "received" def sendData(self,data): self.socket.sendto(data, (self.recv_peers,self.recv_port)) if __name__=='__main__': print 'Started......' p2p() A: Your image is too big to be sent in one UDP packet. You need to split the image data into several packets that are sent individually. If you don't have a special reason to use UDP you could also use TCP by specifying socket.SOCK_STREAM instead of socket.SOCK_DGRAM. There you don't have to worry about packet sizes and ordering. A: The message you are sending is being truncated. Since you haven't shown the actual code that sends, I'm guessing you are trying to write the entire image to the socket. You'll have to break the image into several, smaller chunks.
Error trying to pass (large) image over socket in python
I am trying to pass an image over python socket for smaller images it works fine but for larger images it gives error as socket.error: [Errno 10040] A message sent on a datagram socket was larger than the internal message buffer or some other network limit, or the buffer used to receive a datagram into was smaller than the datagram itself I am using socket.socket(socket.AF_INET, socket.SOCK_DGRAM) Thanks for any clue . I tried using SOCK_STREAM, it does not work .. It just says me starting ... and hangs out . with no output .. Its not coming out of send function import thread import socket import ImageGrab class p2p: def __init__(self): socket.setdefaulttimeout(50) #send port self.send_port = 3000 #receive port self.recv_port=2000 #OUR IP HERE self.peerid = '127.0.0.1:' #DESTINATION self.recv_peers = '127.0.0.1' #declaring sender socket self.socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM ) self.socket.bind(('127.0.0.1', self.send_port)) self.socket.settimeout(50) #receiver socket self.serverSocket=socket.socket(socket.AF_INET, socket.SOCK_STREAM ) self.serverSocket.bind(('127.0.0.1', self.recv_port)) self.serverSocket.settimeout(50) #starting thread for reception thread.start_new_thread(self.receiveData, ()) #grabbing screenshot image = ImageGrab.grab() image.save("c:\\test.jpg") f = open("c:\\ test.jpg", "rb") data = f.read() #sending self.sendData(data) print 'sent...' f.close() while 1: pass def receiveData(self): f = open("c:\\received.png","wb") while 1: data,address = self.serverSocket.recvfrom(1024) if not data: break f.write(data) try: f.close() except: print 'could not save' print "received" def sendData(self,data): self.socket.sendto(data, (self.recv_peers,self.recv_port)) if __name__=='__main__': print 'Started......' p2p()
[ "Your image is too big to be sent in one UDP packet. You need to split the image data into several packets that are sent individually.\nIf you don't have a special reason to use UDP you could also use TCP by specifying socket.SOCK_STREAM instead of socket.SOCK_DGRAM. There you don't have to worry about packet sizes and ordering.\n", "The message you are sending is being truncated.\nSince you haven't shown the actual code that sends, I'm guessing you are trying to write the entire image to the socket. You'll have to break the image into several, smaller chunks.\n" ]
[ 5, 2 ]
[]
[]
[ "python", "sockets" ]
stackoverflow_0001447684_python_sockets.txt
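Expanding on the first answer of the socket entry above, a sketch of the TCP (SOCK_STREAM) transfer loop: read the file in chunks, push each chunk with sendall, and treat an empty recv as end-of-file. The chunk size and the shutdown call are my choices, not part of the original code:

    import socket

    CHUNK = 4096

    def send_file(sock, path):
        f = open(path, 'rb')
        try:
            while True:
                chunk = f.read(CHUNK)
                if not chunk:
                    break
                sock.sendall(chunk)  # sendall retries until every byte is queued
        finally:
            f.close()
        sock.shutdown(socket.SHUT_WR)  # signal end-of-stream to the receiver

    def recv_file(conn, path):
        f = open(path, 'wb')
        try:
            while True:
                data = conn.recv(CHUNK)
                if not data:  # empty read: the sender closed its side
                    break
                f.write(data)
        finally:
            f.close()

Unlike SOCK_DGRAM, nothing here depends on the image fitting in a single datagram, so the 10040 error cannot occur.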
Q: Modifying a GUI started with Glade I am just starting to learn Glade with pyGTK. Since Glade makes XML files instead of actual python code, is there a good way to start a project with Glade and then hand code more or tweak it? Are there times or reasons it would be preferable to hand code all of it instead of starting with glade? A: GUI's created with glade are accessible in the code in two ways: libglade or gtkbuilder. I cannot comment much on the differences between the two, other than that gtkbuilder is newer; there are a lot of pages on google that show how to migrate from libglade to gtkbuilder. Using gtkbuilder, you can create your GUI object by retrieving it from the XML file using gtkbuilder. This creates the object with all of the settings you set in glade. You now have a GUI object which you can manipulate via its regular interface. builder = gtk.Builder() builder.add_from_file(glade_path) builder.connect_signals(self) main_window = builder.get_object("main_window") main_window.show() text_box1 = builder.get_object("textbox1") text_box1.set_text("enter your name") Line 3 shows how signal handlers are attached when loaded from glade. Essentially, it looks for the function you specified for the signal in the glade interface and attaches to it; if the function isn't provided, you'll see a warning on the command line. A: How much do you know about glade and pygtk? Glade creates xml files but you load these using gtk.Builder in python. You can easily tweak any widgets you created with glade in python. Read these tutorials to understand how to do it better. You just need to learn more about pygtk and glade and it will be obvious.
Modifying a GUI started with Glade
I am just starting to learn Glade with pyGTK. Since Glade makes XML files instead of actual python code, is there a good way to start a project with Glade and then hand code more or tweak it? Are there times or reasons it would be preferable to hand code all of it instead of starting with glade?
[ "GUI's created with glade are accessible in the code in two way: libglade or gtkbuilder. I cannot comment much on the differences between the two, other than that gtkbuilder is newer; there are a lot of pages on google that show how to migrate from libglade to gtkbuilder.\nUsing gtkbuilder, you can create your GUI object by retrieving it from the the XML file using gtkbuilder. This creates the object with all of the settings you set in glade. You now have an GUI object which you can manipulate via it's regular interface.\nbuilder = gtk.Builder()\nbuilder.add_from_file(glade_path)\nbuilder.connect_signals(self)\n\nmain_window = builder.get_object(\"main_window\")\nmain_window.show()\n\ntext_box1 = builder.get_object(\"textbox1\")\ntext_box1.set_text(\"enter your name\")\n\nLine 3 shows how signal handlers are attached when loaded from glade. Essentially, it looks for the function you specified for the signal in the glade interface and attached to it; if the function isn't provided, you'll see a warning on the command line.\n", "How much do you know about glade and pygtk? Glade creates xml files but you load these using gtk.Builder in python. You can easily tweak any widgets you created with glade in python. Read these tutorials to understand how to do it better. You just need to learn more about pygtk and glade and it will be obvious.\n" ]
[ 4, 2 ]
[]
[]
[ "glade", "gtk", "pygtk", "python" ]
stackoverflow_0001412350_glade_gtk_pygtk_python.txt
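To make the connect_signals line from the Glade answers above concrete, a hedged sketch of a handler object; the .glade filename, widget id, and handler names are placeholders that must match whatever was entered in Glade:

    import gtk

    class Handlers(object):
        # Method names must match the handler names typed into Glade.
        def on_main_window_destroy(self, widget):
            gtk.main_quit()

        def on_button1_clicked(self, button):
            print 'button clicked'

    builder = gtk.Builder()
    builder.add_from_file('ui.glade')
    builder.connect_signals(Handlers())  # wires every signal named in the XML

    builder.get_object('main_window').show_all()
    gtk.main()

Passing an object (or a plain dict) to connect_signals is what lets you keep the layout in Glade while hand coding all of the behaviour in Python.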
Q: IDLE and Python have different path in Mac OS X I am running Mac OS X 10.5.8. I have installed Python 2.6 from the site. It's in my application directory. I have edited my .bash_profile to have: # Setting PATH for MacPython 2.6 # The orginal version is saved in .bash_profile.pysave PATH="/Library/Frameworks/Python.framework/Versions/2.6/bin:${PATH}" export PATH export PATH=/usr/local/mysql/bin:/Library/Frameworks/Python.framework/Versions/2.6/bin:/Library/Frameworks/Python.framework/Versions/2.6/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin export PYTHONPATH=/Users/blwatson/pythonpath:/Users/blwatson/pythonpath/django/bin:$PYTHONPATH when I run python from the command prompt, I can get the following: Python 2.6.2 (r262:71600, Apr 16 2009, 09:17:39) [GCC 4.0.1 (Apple Computer, Inc. build 5250)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import django >>> django.VERSION (1, 0, 4, 'alpha', 0) checking PATH >>> import sys >>> sys.path ['', '/Users/blwatson/pythonpath', '/Users/blwatson/pythonpath/django/bin', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python26.zip', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/plat-darwin', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/plat-mac', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/plat-mac/lib-scriptpackages', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-tk', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-old', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-dynload', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/PIL', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/wx-2.8-mac-unicode'] >>> When I am in IDLE, I get a different experience. Python 2.6.2 (r262:71600, Apr 16 2009, 09:17:39) [GCC 4.0.1 (Apple Computer, Inc. build 5250)] on darwin Type "copyright", "credits" or "license()" for more information. **************************************************************** Personal firewall software may warn about the connection IDLE makes to its subprocess using this computer's internal loopback interface. This connection is not visible on any external interface and no data is sent to or received from the Internet. 
**************************************************************** IDLE 2.6.2 >>> import django Traceback (most recent call last): File "<pyshell#0>", line 1, in <module> import django ImportError: No module named django >>> import sys >>> print sys.path ['/Users/blwatson/Documents', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python26.zip', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/plat-darwin', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/plat-mac', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/plat-mac/lib-scriptpackages', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-tk', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-old', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-dynload', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/PIL', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/wx-2.8-mac-unicode'] I have no idea what's going on. I moved from one laptop to another and did the whole TimeCapsule thing, so I know that there's some conflict because of that. Where is IDLE getting the PATH from? Why can't I import Django? A: Like all OS X application bundles, if you launch IDLE.app by double-clicking, a shell is not involved and thus .bash_profile or other shell initialization files are not invoked. There is a way to set user session environment variables through the use of a special property list file (~/.MacOSX/environment.plist) but it really is a bit of a kludge and not recommended. Fortunately, there is a simpler solution: on OS X, it is also possible to invoke IDLE from a shell command line in a terminal window. In this way, it will inherit exported environment variables from that shell as you would expect. So something like: $ export PYTHONPATH= ... $ /usr/local/bin/idle2.6 There were various inconsistencies and problems with IDLE on OS X prior to 2.6.2 depending on how it was invoked so I recommend using nothing older than the python.org 2.6.2 or 3.1 versions on OS X. EDIT: I see from the open(1) man page that, since 10.4, applications launched via open also inherit environment variables so that would work from the command line as well. If you want to avoid opening a terminal window, it is easy to create a simple launcher app using AppleScript or Automator (or even Python with py2app!). In this case, use the open command command so that the launcher app does not sit around. For example, in Automator, choose the Run Shell Script action and add: export PYTHONPATH= ... open -a "/Applications/Python 2.6/IDLE.app" Save it as File Format Application (in 10.5) and you should have a clickable way to launch a tailored IDLE. A: Seems to be a common problem: http://www.google.com/search?q=python+idle+pythonpath Unfortunately, no answers. The only suggestion seemed to be to edit the idle executable and adding a few sys.path.insert(...) lines.
IDLE and Python have different path in Mac OS X
I am running Mac OS X 10.5.8. I have installed Python 2.6 from the site. It's in my application directory. I have edited my .bash_profile to have: # Setting PATH for MacPython 2.6 # The orginal version is saved in .bash_profile.pysave PATH="/Library/Frameworks/Python.framework/Versions/2.6/bin:${PATH}" export PATH export PATH=/usr/local/mysql/bin:/Library/Frameworks/Python.framework/Versions/2.6/bin:/Library/Frameworks/Python.framework/Versions/2.6/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin export PYTHONPATH=/Users/blwatson/pythonpath:/Users/blwatson/pythonpath/django/bin:$PYTHONPATH when I run python from the command prompt, I can get the following: Python 2.6.2 (r262:71600, Apr 16 2009, 09:17:39) [GCC 4.0.1 (Apple Computer, Inc. build 5250)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import django >>> django.VERSION (1, 0, 4, 'alpha', 0) checking PATH >>> import sys >>> sys.path ['', '/Users/blwatson/pythonpath', '/Users/blwatson/pythonpath/django/bin', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python26.zip', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/plat-darwin', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/plat-mac', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/plat-mac/lib-scriptpackages', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-tk', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-old', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-dynload', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/PIL', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/wx-2.8-mac-unicode'] >>> When I am in IDLE, I get a different experience. Python 2.6.2 (r262:71600, Apr 16 2009, 09:17:39) [GCC 4.0.1 (Apple Computer, Inc. build 5250)] on darwin Type "copyright", "credits" or "license()" for more information. **************************************************************** Personal firewall software may warn about the connection IDLE makes to its subprocess using this computer's internal loopback interface. This connection is not visible on any external interface and no data is sent to or received from the Internet. 
**************************************************************** IDLE 2.6.2 >>> import django Traceback (most recent call last): File "<pyshell#0>", line 1, in <module> import django ImportError: No module named django >>> import sys >>> print sys.path ['/Users/blwatson/Documents', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python26.zip', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/plat-darwin', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/plat-mac', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/plat-mac/lib-scriptpackages', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-tk', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-old', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-dynload', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/PIL', '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/wx-2.8-mac-unicode'] I have no idea what's going on. I moved from one laptop to another and did the whole TimeCapsule thing, so I know that there's some conflict because of that. Where is IDLE getting the PATH from? Why can't I import Django?
[ "Like all OS X application bundles, if you launch IDLE.app by double-clicking, a shell is not involved and thus .bash_profile or other shell initialization files are not invoked. There is a way to set user session environment variables through the use of a special property list file (~/.MacOSX/environment.plist) but it really is a bit of a kludge and not recommended.\nFortunately, there is a simpler solution: on OS X, it is also possible to invoke IDLE from a shell command line in a terminal window. In this way, it will inherit exported environment variables from that shell as you would expect. So something like:\n$ export PYTHONPATH= ...\n$ /usr/local/bin/idle2.6\n\nThere were various inconsistencies and problems with IDLE on OS X prior to 2.6.2 depending on how it was invoked so I recommend using nothing older than the python.org 2.6.2 or 3.1 versions on OS X.\nEDIT: I see from the open(1) man page that, since 10.4, applications launched via open also inherit environment variables so that would work from the command line as well. If you want to avoid opening a terminal window, it is easy to create a simple launcher app using AppleScript or Automator (or even Python with py2app!). In this case, use the open command command so that the launcher app does not sit around. For example, in Automator, choose the Run Shell Script action and add:\nexport PYTHONPATH= ...\nopen -a \"/Applications/Python 2.6/IDLE.app\"\n\nSave it as File Format Application (in 10.5) and you should have a clickable way to launch a tailored IDLE.\n", "Seems to be a common problem:\nhttp://www.google.com/search?q=python+idle+pythonpath\nUnfortunately, no answers. The only suggestion seemed to be to edit the idle executable and adding a few sys.path.insert(...) lines.\n" ]
[ 3, 0 ]
[]
[]
[ "django", "python" ]
stackoverflow_0001447961_django_python.txt
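A sketch of the sys.path.insert workaround mentioned in the second answer above, reusing the directories from the question; it can be pasted into the IDLE shell or added near the top of the idle script:

    import sys

    for extra in ('/Users/blwatson/pythonpath',
                  '/Users/blwatson/pythonpath/django/bin'):
        if extra not in sys.path:
            sys.path.insert(0, extra)

    import django  # now importable in the IDLE subprocess
    print django.VERSION

This only patches the current session, though; launching IDLE from a shell that has already exported PYTHONPATH, as the first answer describes, is the cleaner fix.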
Q: Python SOCK_STREAM over internet I have simple programs for a socket client and server; it's not working over the internet # Echo server program import socket import ImageGrab HOST = '' # Symbolic name meaning all available interfaces PORT = 3000 # Arbitrary non-privileged port s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.bind((HOST, PORT)) s.listen(1) conn, addr = s.accept() print 'Connected by', addr data = conn.recv(1024) print data conn.close() # Echo client program import socket import ImageGrab #destination ip HOST = '127.0.0.1' # The remote host PORT = 3000 # The same port as used by the server s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.connect((HOST, PORT)) s.send('Hello rushikesh') s.close() print 'Received'#, repr(data) When we try to make it work over the internet it is not able to connect. The program is shown as above; the only thing is the destination ip is replaced by my friend's ip. When working over localhost it works perfectly fine but not working over the internet ... I have written a program using SOCK_DGRAM; it works over the internet only for small chunks of data. I want to transmit an image using it so I have written it using SOCK_STREAM for transmitting the image, which successfully worked on localhost and was not working over the internet. So I did write the simplest program but it is still showing the same problem. Can somebody please guide me through this... A: You've got the right approach, but you are probably running into networking or firewall problems. Depending on how your friend's networking is configured, he may be behind NAT or a firewall that prevents you from making a direct connection into his computer. To eliminate half the problem, you can use telnet as a client to make a simple connection to the server to see whether it is available: telnet 127.0.0.1 3000 If telnet connects successfully, then the networking works. If it fails, then there is something else wrong (and may give you information that might help discover what).
Python SOCK_STREAM over internet
I have simple programs for a socket client and server; it's not working over the internet # Echo server program import socket import ImageGrab HOST = '' # Symbolic name meaning all available interfaces PORT = 3000 # Arbitrary non-privileged port s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.bind((HOST, PORT)) s.listen(1) conn, addr = s.accept() print 'Connected by', addr data = conn.recv(1024) print data conn.close() # Echo client program import socket import ImageGrab #destination ip HOST = '127.0.0.1' # The remote host PORT = 3000 # The same port as used by the server s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.connect((HOST, PORT)) s.send('Hello rushikesh') s.close() print 'Received'#, repr(data) When we try to make it work over the internet it is not able to connect. The program is shown as above; the only thing is the destination ip is replaced by my friend's ip. When working over localhost it works perfectly fine but not working over the internet ... I have written a program using SOCK_DGRAM; it works over the internet only for small chunks of data. I want to transmit an image using it so I have written it using SOCK_STREAM for transmitting the image, which successfully worked on localhost and was not working over the internet. So I did write the simplest program but it is still showing the same problem. Can somebody please guide me through this...
[ "You've got the right approach, but you are probably running into networking or firewall problems. Depending on how your friend's networking is configured, he may be behind NAT or a firewall that prevents you from making a direct connection into his computer.\nTo eliminate half the problem, you can use telnet as a client to make a simple connection to the server to see whether it is available:\ntelnet 127.0.0.1 3000\n\nIf telnet connects successfully, then the networking works. If it fails, then there is something else wrong (and may give you information that might help discover what).\n" ]
[ 5 ]
[]
[]
[ "python", "sockets" ]
stackoverflow_0001448193_python_sockets.txt
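A rough Python equivalent of the telnet reachability test suggested above, handy when telnet is not installed on the client machine (host, port, and timeout are whatever you are testing):

    import socket

    def can_connect(host, port, timeout=5.0):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)
        try:
            s.connect((host, port))
        except socket.error, e:  # times out or is refused when NAT/firewall blocks it
            print 'connection failed:', e
            return False
        else:
            return True
        finally:
            s.close()

    print can_connect('127.0.0.1', 3000)

If this returns True locally but False against the friend's public IP, the code is fine and the problem is port forwarding or a firewall on the other end.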
Q: Python subprocess.Popen - adding GCC flags results in "no input files" error I'm building a Python script to automate my build process, which invokes GCC using subprocess.Popen. My initial attempt works fine. >>> import subprocess >>> p = Popen(['gcc', 'hello.c'], stdout=subprocess.PIPE, stderr=subprocess.STDOUT) >>> p.wait() 0 >>> p.communicate() ('', None) However, once I pass additional options to GCC I get the error "no input files", as demonstrated below: >>> import subprocess >>> p = Popen(['gcc', '-o hello hello.c'], stdout=subprocess.PIPE, stderr=subprocess.STDOUT) >>> p.wait() 1 >>> p.communicate() ('gcc: no input files\r\n', None) Any ideas what may be causing this issue? A: Shouldn't that be p = Popen(['gcc', '-o', 'hello', 'hello.c'], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
Python subprocess.Popen - adding GCC flags results in "no input files" error
I'm building a Python script to automate my build process, which invokes GCC using subprocess.Popen. My initial attempt works fine. >>> import subprocess >>> p = Popen(['gcc', 'hello.c'], stdout=subprocess.PIPE, stderr=subprocess.STDOUT) >>> p.wait() 0 >>> p.communicate() ('', None) However, once I pass additional options to GCC I get the error "no input files", as demonstrated below: >>> import subprocess >>> p = Popen(['gcc', '-o hello hello.c'], stdout=subprocess.PIPE, stderr=subprocess.STDOUT) >>> p.wait() 1 >>> p.communicate() ('gcc: no input files\r\n', None) Any ideas what may be causing this issue?
[ "Shouldn't that be\np = Popen(['gcc', '-o', 'hello', 'hello.c'], stdout=subprocess.PIPE, stderr=stderr=subprocess.STDOUT)\n\n" ]
[ 6 ]
[]
[]
[ "gcc", "popen", "python", "subprocess" ]
stackoverflow_0001448558_gcc_popen_python_subprocess.txt
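The answer above is right: each gcc argument has to be its own list element. As an illustrative follow-up, shlex.split turns a shell-style command string into exactly that list, which avoids this class of mistake:

    import shlex
    import subprocess

    cmd = shlex.split('gcc -o hello hello.c')
    print cmd  # ['gcc', '-o', 'hello', 'hello.c']

    p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    out, _ = p.communicate()
    print p.returncode, out

With '-o hello hello.c' passed as a single element, gcc receives it as one literal argument, finds no separate input file, and reports "no input files".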
Q: Python - I can't stop the program running I am completely new to python. I have installed it on windows. I am having a problem, I write: from pylab import* subplot(111,projection="hammer") show() After this it will not let me do anything else and ctrl-c does not work. I have looked at another post here and tried ctrl-break, ctrl-z and various other methods to no avail. Could anyone point me in the right direction. Many thanks A: I'd recommend to use IPython. It brings a matplotlib/pylab mode that handles all this for you. After you install IPython, you can start it with the pylab flag: $ ipython -pylab Then, in the interactive shell, you type your code: In [1]: from pylab import* In [2]: subplot(111,projection="hammer") Out[2]: <matplotlib.axes.HammerAxesSubplot object at 0x2241050> In [3]: IPython automatically shows the plot using a separate thread and returns control to the interactive shell. The documentation of matplotlib has a little more information on how all this works. A: If it's a simple matter of interrupting a running program, have you tried CTRL-D ? A: Try this: After all of your imports for pylab and what not.. add: import signal signal.signal(signal.SIGINT, signal.SIG_DFL) This will cause CTRL-C to not be caught by anything in your program, which should then cause it to kill the program. A: Try catching KeyboardInterrupt like so: try: show() except KeyboardInterrupt: print "Shutting down." import sys sys.exit()
Python - I can't stop the program running
I am completely new to python. I have installed it on windows. I am having a problem, I write: from pylab import* subplot(111,projection="hammer") show() After this it will not let me do anything else and ctrl-c does not work. I have looked at another post here and tried ctrl-break, ctrl-z and various other methods to no avail. Could anyone point me in the right direction. Many thanks
[ "I'd recommend to use IPython. It brings a matplotlib/pylab mode that handles all this for you. After you install IPython, you can start it with the pylab flag:\n$ ipython -pylab\n\nThen, in the interactive shell, you type your code:\nIn [1]: from pylab import*\n\nIn [2]: subplot(111,projection=\"hammer\")\nOut[2]: <matplotlib.axes.HammerAxesSubplot object at 0x2241050>\n\nIn [3]:\n\nIPython automatically shows the plot using a separate thread and returns control to the interactive shell.\nThe documentation of matplotlib has a little more information on how all this works.\n", "If it's a simple matter of interrupting a running program, have you tried CTRL-D ?\n", "Try this:\nAfter all of your imports for pylab and what not.. add:\nimport signal\nsignal.signal(signal.SIGINT, signal.SIG_DFL)\n\nThis will cause CTRL-C to not be caught by anything in your program, which should then cause it to kill the program.\n", "Try catching KeyboardInterrupt like so:\ntry:\n show()\nexcept KeyboardInterrupt:\n print \"Shutting down.\"\n import sys\n sys.exit()\n\n" ]
[ 1, 0, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001448505_python.txt
Q: Role-based security with Google App Engine and Python I would like to ask what is the common way for handling role-based security with Google App Engine, Python? In the app.yaml, there is the "login" section, but available values are only "admin" and "required". How do you normally handle role-based security? Create the model with two tables: Roles and UserRoles Import values for Roles table Manually add User to UserRoles Check if user is in the right Roles group Any other idea or any other method for role-based security, please let us know! A: I would do this by adding a ListProperty for roles to the model representing users. The list contains any roles a given user belongs to. This way if you want to know whether a given user belongs to a given role (I expect, the most common operation), it is a fast membership test. You could put the role names directly into the lists as strings or add a layer of indirection to another entity specifying the details about the role so it is easy to change the details later. But, this has a runtime cost of an additional RPC to fetch the details about the role. The downside to this method comes if you want to remove all users from a given role, or perform any other kind of global operation. I suppose you could mark a role 'deleted', but then you still have data cluttering up all your user models until you clean them up manually. So I am curious to hear what others suggest.
Role-based security with Google App Engine and Python
I would like to ask what is the common way for handling role-based security with Google App Engine, Python? In the app.yaml, there is the "login" section, but available values are only "admin" and "required". How do you normally handle role-based security? Create the model with two tables: Roles and UserRoles Import values for Roles table Manually add User to UserRoles Check if user is in the right Roles group Any other idea or any other method for role-based security, please let us know!
[ "I would do this by adding a ListProperty for roles to the model representing users. The list contains any roles a given user belongs to. This way if you want to know whether a given user belongs to a given role (I expect, the most common operation), it is a fast membership test.\nYou could put the role names directly into the lists as strings or add a layer of indirection to another entity specifying the details about the role so it is easy to change the details later. But, this has a runtime cost of an additional RPC to fetch the details about the role.\nThe downside to this method comes if you want to remove all users from a given role, or perform any other kind of global operation. I suppose you could mark a role 'deleted', but then you still have data cluttering up all your user models until you clean them up manually. So I am curious to hear what others suggest.\n" ]
[ 4 ]
[]
[]
[ "google_app_engine", "python", "role_based" ]
stackoverflow_0001448308_google_app_engine_python_role_based.txt
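A hedged sketch of the ListProperty idea from the answer above, using the google.appengine.ext.db API of that era (the model and role names are made up):

    from google.appengine.ext import db

    class Account(db.Model):
        user = db.UserProperty(required=True)
        roles = db.StringListProperty()  # e.g. ['editor', 'admin']

    def has_role(account, role):
        return role in account.roles  # fast in-memory membership test

    # A list property matches when any element equals the filter value,
    # so "everyone in a role" is a single query:
    editors = Account.all().filter('roles =', 'editor')

Removing a role globally still means touching every matching Account entity, which is the maintenance cost the answer warns about.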
Q: Apache: VirtualHost with [PHP|Python|Ruby] support I am experimenting with several languages (Python, Ruby...), and I would like to know if there is a way to optimize my Apache Server to load certain modules only in certain VirtualHost, for instance: http://myapp1 <- just with Ruby support http://myapp2 <- just with Python support http://myapp3 <- just with Php support ... Thanks. A: Each Apache worker loads every module, so it's not possible to do within Apache itself. What you need to do is move your language modules to processes external to Apache workers. This is done for your languages with the following modules: PHP: mod_fastcgi. More info: Apache+Chroot+FastCGI. Python: mod_wsgi in daemon mode. Ruby: passenger/mod_rack A: I don't think that's possible, as: The same thread/forked process might be serving pages from different Virtualhosts. So if it has loaded only python, what happens when it needs to serve ruby? For reason 1, certain directives are web server only, and not virtualhost specific. MaxRequestsPerChild, LoadModule etc are such. A: I think the only way is to have a "proxy" web server that dispatches requests to the real servers ... The proxy server has a list of domain names -> Server Side language, and does nothing else but transparently redirecting to the correct real server There are N real servers, each one with a specific configuration and a single language supported and loaded ... each server will listen on a different port of course and eventually only on the loopback device Apache mod_proxy should do the job My 2 cents A: My idea is several apache processes (each one with different config) listening on different addresses and/or ports and an http proxy (squid or apache) in the front redirecting to the respective server. This has a possible added advantage of caching.
Apache: VirtualHost with [PHP|Python|Ruby] support
I am experimenting with several languages (Python, Ruby...), and I would like to know if there is a way to optimize my Apache Server to load certain modules only in certain VirtualHost, for instance: http://myapp1 <- just with Ruby support http://myapp2 <- just with Python support http://myapp3 <- just with Php support ... Thanks.
[ "Each Apache worker loads every module, so it's not possible to do within Apache itself.\nWhat you need to do is move your language modules to processes external to Apache workers.\nThis is done for your languages with the following modules:\n\nPHP: mod_fastcgi. More info: Apache+Chroot+FastCGI.\nPython: mod_wsgi in daemon mode.\nRuby: passenger/mod_rack\n\n", "I dont think thats possible as,\n\nThe same thread/forked process might be serving pages from different Virtualhosts. So if it has loaded only python, what happens when it needs to serve ruby?\nFor reason 1, certain directives are web server only, and not virtualhost specific. MaxRequestsPerChild, LoadModule etc are such.\n\n", "I think the only way is to have a \"proxy\" web server that dispatches requests to the real servers ...\nThe proxy server has a list of domain names -> Server Side language, and does nothing else but transparently redirecting to the correct real server\nThere are N real server, each one with a specific configuration and a single language supported and loaded ... each server will listen on a different port of course and eventually only on the loopback device\nApache mod_proxy should do the job\nMy 2 cents\n", "My Idea is several apache processes (each one with different config) listening on different addresses and/or ports and a http proxy (squid or apache) in the front redirecting to respective server. This has a possible added advantage of caching.\n" ]
[ 3, 0, 0, 0 ]
[]
[]
[ "apache", "php", "python", "ruby", "virtualhost" ]
stackoverflow_0001082906_apache_php_python_ruby_virtualhost.txt
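To make the mod_wsgi-in-daemon-mode suggestion above a little more concrete, here is one possible vhost shape. The directive values (process group name, script path, process and thread counts) are purely illustrative:

    <VirtualHost *:80>
        ServerName myapp2
        # The Python interpreter lives in separate daemon processes,
        # so the shared Apache workers stay module-light:
        WSGIDaemonProcess myapp2 processes=2 threads=15
        WSGIProcessGroup myapp2
        WSGIScriptAlias / /srv/myapp2/app.wsgi
    </VirtualHost>

PHP via mod_fastcgi and Ruby via Passenger follow the same pattern: the language runtime runs outside the shared worker pool, which is the closest Apache gets to per-VirtualHost language support.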
Q: Decorators should not have side effects? Editing because the initial code was confusing. I would assume these two things to be the same, #I would use either of these #Option 1 def bar(*args): pass foo = deco(bar) #Option2 @deco def foo(*args): pass However if the decorator deco has side effects, this is not guaranteed. In particular, this was my expectation from a decorator (no side effects), and I came across one with a side effect and was bitten by it, #Option1 def bar(*args): pass foo = register.filter(bar) #Option 2 @register.filter def foo(val, arg): pass So is my expectation wrong, or is Django being inconsistent with the best practices? A: Actually, these both are exactly the same: def foo(*args): pass foo = deco(foo) @deco def foo(*args): pass If you want to decorate bar and call it foo, foo = deco(bar) is the right way. It says: "decorate this previously defined thing called bar and call it foo". The point of the decorator syntax is to state the wrapping function before the definition, not to rename it. Unless you need to use bar later, there is no reason to call the undecorated function with a different name. By doing this you lose precisely the ability to use the decorator syntax sugar. deco doesn't need to be a function. It can be an object with a __call__ method, which is useful precisely to encapsulate side effects. A: Your examples do not express the same things in every case! Why do you insist on using bar? Take your first example: #Option 1 def bar(*args): pass foo = deco(bar) #Option2 @deco def foo(*args): pass Option 1 does (literally) foo = deco(bar) but Option 2 is the equivalent of foo = deco(foo) Can't you see the difference there? So, in short, yes: your assumption and your expectations are wrong. If you need the undecorated version of your function, as well as the decorated one, just save it beforehand: def foo(*args): pass bar = foo foo = deco(foo)
Decorators should not have side effects?
Editing because the initial code was confusing. I would assume these two things to be the same, #I would use either of these #Option 1 def bar(*args): pass foo = deco(bar) #Option2 @deco def foo(*args): pass However if the decorator deco has side effects, this is not guaranteed. In particular, this was my expectation from a decorator (no side effects), and I came across one with a side effect and was bitten by it, #Option1 def bar(*args): pass foo = register.filter(bar) #Option 2 @register.filter def foo(val, arg): pass So is my expectation wrong, or is Django being inconsistent with the best practices?
[ "Actually, these both are exactly the same:\ndef foo(*args):\n pass\nfoo = deco(foo)\n\n@deco\ndef foo(*args):\n pass\n\nIf you want to decorate bar and call it foo, foo = deco(bar) is the right way. It says: \"decorate this previously defined thing called bar and call it foo\". The point of the decorator syntax is to state the wrapping function before the definition, not to rename it.\nUnless you need to use bar later, there is no reason to call the undecorated function with a different name. By doing this you lose precisely the ability to use the decorator syntax sugar.\ndeco doesn't need to be a function. It can be an object with a __call__ method, which is useful precisely to encapsulate side effects. \n", "Your examples do not express the same things in every case! Why do you insist on using bar? \nTake your first example:\n#Option 1\ndef bar(*args):\n pass\nfoo = deco(bar)\n\n#Option2\n@deco\ndef foo(*args):\n pass\n\nOption 1 does (literally)\nfoo = deco(bar)\n\nbut Option 2 is the equivalent of\nfoo = deco(foo)\n\nCan't you see the difference there?\nSo, in short, yes: your assumption and your expectations are wrong.\nIf you need the undecorated version of your function, as well as the decorated one, just save it beforehand:\ndef foo(*args):\n pass\nbar = foo\nfoo = deco(foo)\n\n" ]
[ 2, 0 ]
[]
[]
[ "decorator", "django", "python" ]
stackoverflow_0001447996_decorator_django_python.txt
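A tiny registry decorator, sketched to show the side effect the question above ran into: registration keys on the original function's __name__, so the two spellings are not interchangeable. REGISTRY here is a stand-in, not Django's actual register.filter internals:

    REGISTRY = {}

    def register(func):
        REGISTRY[func.__name__] = func  # side effect happens at decoration time
        return func

    @register
    def foo(val, arg):
        pass

    def bar(val, arg):
        pass

    foo2 = register(bar)

    print sorted(REGISTRY)  # ['bar', 'foo'] -- 'bar' got registered, not 'foo2'

With a pure wrapping decorator only the rebinding name matters; once the decorator records the function somewhere, the name the function was defined with matters too.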
Q: WSGI Authentication: Homegrown, Authkit, OpenID...? I want basic authentication for a very minimal site, all I personally need is a single superuser. While hard-coding a password and username in one of my source files is awfully tempting, especially since I'm hosting the site on my own server, I feel I'm breaking the law of the internets and I should just use a database (I'm using sqlite for blog posts and such). Which would be the easiest to setup, in terms of time and effort, out of OpenID or AuthKit (repoze just scares me.. it feels like too much overhead for what I'm trying to achieve), or should I roll my own? Why I brought up OpenID is, it might just solve my spam problem (I'm currently using Akismet), to just require all commentors to login with an OpenID. I have absolutely no idea how to go about integrating OpenID with my WSGI application though (it's probably dead simple, I've never actually looked into it yet). A: Also look at repoze.who http://static.repoze.org/whodocs/ A: AuthKit includes a built-in OpenID module, if that helps. The AuthKit cookbook includes a simple example here... http://wiki.pylonshq.com/display/authkitcookbook/OpenID+Passurl That said, if you only need a single login (so there's no complex user management going on), why not use Apache's built-in authentication features (AuthUserFile .htpasswd together with Require valid-user)? A: Opid is a very small and simple to use WSGI OpenID app: python-opid A: You can adapt this. http://code.activestate.com/recipes/302378/ Or, better, adapt this. http://devel.almad.net/trac/django-http-digest/ This is quite nice.
WSGI Authentication: Homegrown, Authkit, OpenID...?
I want basic authentication for a very minimal site, all I personally need is a single superuser. While hard-coding a password and username in one of my source files is awfully tempting, especially since I'm hosting the site on my own server, I feel I'm breaking the law of the internets and I should just use a database (I'm using sqlite for blog posts and such). Which would be the easiest to setup, in terms of time and effort, out of OpenID or AuthKit (repoze just scares me.. it feels like too much overhead for what I'm trying to achieve), or should I roll my own? Why I brought up OpenID is, it might just solve my spam problem (I'm currently using Akismet), to just require all commentors to login with an OpenID. I have absolutely no idea how to go about integrating OpenID with my WSGI application though (it's probably dead simple, I've never actually looked into it yet).
[ "also look at repose.who\nhttp://static.repoze.org/whodocs/\n", "AuthKit includes a built-in OpenID module, if that helps.\nThe AuthKit cookbook includes a simple example here... http://wiki.pylonshq.com/display/authkitcookbook/OpenID+Passurl \nThat said, if you only need a single login (so there's no complex user management going on), why not use Apache's built-in authentication features (AuthUserFile .htpasswd together with Require valid-user)?\n", "Opid is a very small and simple to use WSGI OpenID app: python-opid\n", "You can adapt this.\nhttp://code.activestate.com/recipes/302378/\nOr, better, adapt this.\nhttp://devel.almad.net/trac/django-http-digest/\nThis is quite nice.\n" ]
[ 4, 2, 1, 0 ]
[]
[]
[ "authentication", "python", "wsgi" ]
stackoverflow_0000723856_authentication_python_wsgi.txt
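For the "single superuser" case in the entry above, a minimal WSGI basic-auth middleware sketch (the realm, credential handling, and the missing timing-safe comparison are all simplifications; this is not a substitute for AuthKit or repoze.who):

    import base64

    def require_basic_auth(app, username, password):
        expected = 'Basic ' + base64.b64encode('%s:%s' % (username, password))

        def guarded(environ, start_response):
            if environ.get('HTTP_AUTHORIZATION') == expected:
                return app(environ, start_response)
            start_response('401 Unauthorized',
                           [('WWW-Authenticate', 'Basic realm="blog"'),
                            ('Content-Type', 'text/plain')])
            return ['authentication required']
        return guarded

Wrap the blog application once, e.g. application = require_basic_auth(application, 'admin', secret), and every request must carry the right Authorization header: roughly what Apache's AuthUserFile setup does, but inside the WSGI stack.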
Q: Use Python 2.6 subprocess module in Python 2.5 I would like to use Python 2.6's version of subprocess, because it allows the Popen.terminate() function, but I'm stuck with Python 2.5. Is there some reasonably clean way to use the newer version of the module in my 2.5 code? Some sort of from __future__ import subprocess_module? A: I know this question has already been answered, but for what it's worth, I've used the subprocess.py that ships with Python 2.6 in Python 2.3 and it's worked fine. If you read the comments at the top of the file it says: # This module should remain compatible with Python 2.2, see PEP 291. A: There isn't really a great way to do it. subprocess is implemented in python (as opposed to C) so you could conceivably copy the module somewhere and use it (hoping of course that it doesn't use any 2.6 goodness). On the other hand you could simply implement what subprocess claims to do and write a function that sends SIGTERM on *nix and calls TerminateProcess on Windows. The following implementation has been tested on linux and in a Win XP vm, you'll need the python Windows extensions: import sys def terminate(process): """ Kills a process, useful on 2.5 where subprocess.Popens don't have a terminate method. Used here because we're stuck on 2.5 and don't have Popen.terminate goodness. """ def terminate_win(process): import win32process return win32process.TerminateProcess(process._handle, -1) def terminate_nix(process): import os import signal return os.kill(process.pid, signal.SIGTERM) terminate_default = terminate_nix handlers = { "win32": terminate_win, "linux2": terminate_nix } return handlers.get(sys.platform, terminate_default)(process) That way you only have to maintain the terminate code rather than the entire module. A: While this doesn't directly answer your question, it may be worth knowing. Imports from __future__ actually only change compiler options, so while it can turn with into a statement or make string literals produce unicodes instead of strs, it can't change the capabilities and features of modules in the Python standard library. A: I followed Kamil Kisiel suggestion regarding using python 2.6 subprocess.py in python 2.5 and it worked perfectly. To make it easier, I created a distutils package that you can easy_install and/or include in buildout. 
To use subprocess from python 2.6 in python 2.5 project: easy_install taras.python26 in your code from taras.python26 import subprocess in buildout [buildout] parts = subprocess26 [subprocess26] recipe = zc.recipe.egg eggs = taras.python26 A: Here are some ways to end processes on Windows, taken directly from http://code.activestate.com/recipes/347462/ # Create a process that won't end on its own import subprocess process = subprocess.Popen(['python.exe', '-c', 'while 1: pass']) # Kill the process using pywin32 import win32api win32api.TerminateProcess(int(process._handle), -1) # Kill the process using ctypes import ctypes ctypes.windll.kernel32.TerminateProcess(int(process._handle), -1) # Kill the proces using pywin32 and pid import win32api PROCESS_TERMINATE = 1 handle = win32api.OpenProcess(PROCESS_TERMINATE, False, process.pid) win32api.TerminateProcess(handle, -1) win32api.CloseHandle(handle) # Kill the proces using ctypes and pid import ctypes PROCESS_TERMINATE = 1 handle = ctypes.windll.kernel32.OpenProcess(PROCESS_TERMINATE, False, process.pid) ctypes.windll.kernel32.TerminateProcess(handle, -1) ctypes.windll.kernel32.CloseHandle(handle) A: Well Python is open source, you are free to take that pthread function from 2.6 and move it into your own code or use it as a reference to implement your own. For reasons that should be obvious there's no way to have a hybrid of Python that can import portions of newer versions.
Use Python 2.6 subprocess module in Python 2.5
I would like to use Python 2.6's version of subprocess, because it allows the Popen.terminate() function, but I'm stuck with Python 2.5. Is there some reasonably clean way to use the newer version of the module in my 2.5 code? Some sort of from __future__ import subprocess_module?
[ "I know this question has already been answered, but for what it's worth, I've used the subprocess.py that ships with Python 2.6 in Python 2.3 and it's worked fine. If you read the comments at the top of the file it says:\n\n# This module should remain compatible with Python 2.2, see PEP 291.\n\n", "There isn't really a great way to do it. subprocess is implemented in python (as opposed to C) so you could conceivably copy the module somewhere and use it (hoping of course that it doesn't use any 2.6 goodness).\nOn the other hand you could simply implement what subprocess claims to do and write a function that sends SIGTERM on *nix and calls TerminateProcess on Windows. The following implementation has been tested on linux and in a Win XP vm, you'll need the python Windows extensions:\nimport sys\n\ndef terminate(process):\n \"\"\"\n Kills a process, useful on 2.5 where subprocess.Popens don't have a \n terminate method.\n\n\n Used here because we're stuck on 2.5 and don't have Popen.terminate \n goodness.\n \"\"\"\n\n def terminate_win(process):\n import win32process\n return win32process.TerminateProcess(process._handle, -1)\n\n def terminate_nix(process):\n import os\n import signal\n return os.kill(process.pid, signal.SIGTERM)\n\n terminate_default = terminate_nix\n\n handlers = {\n \"win32\": terminate_win, \n \"linux2\": terminate_nix\n }\n\n return handlers.get(sys.platform, terminate_default)(process)\n\nThat way you only have to maintain the terminate code rather than the entire module.\n", "While this doesn't directly answer your question, it may be worth knowing.\nImports from __future__ actually only change compiler options, so while it can turn with into a statement or make string literals produce unicodes instead of strs, it can't change the capabilities and features of modules in the Python standard library.\n", "I followed Kamil Kisiel suggestion regarding using python 2.6 subprocess.py in python 2.5 and it worked perfectly. 
To make it easier, I created a distutils package that you can easy_install and/or include in buildout.\nTo use subprocess from python 2.6 in python 2.5 project:\neasy_install taras.python26\n\nin your code\nfrom taras.python26 import subprocess\n\nin buildout\n[buildout]\nparts = subprocess26\n\n[subprocess26]\nrecipe = zc.recipe.egg\neggs = taras.python26\n\n", "Here are some ways to end processes on Windows, taken directly from\nhttp://code.activestate.com/recipes/347462/\n# Create a process that won't end on its own\nimport subprocess\nprocess = subprocess.Popen(['python.exe', '-c', 'while 1: pass'])\n\n\n# Kill the process using pywin32\nimport win32api\nwin32api.TerminateProcess(int(process._handle), -1)\n\n\n# Kill the process using ctypes\nimport ctypes\nctypes.windll.kernel32.TerminateProcess(int(process._handle), -1)\n\n\n# Kill the proces using pywin32 and pid\nimport win32api\nPROCESS_TERMINATE = 1\nhandle = win32api.OpenProcess(PROCESS_TERMINATE, False, process.pid)\nwin32api.TerminateProcess(handle, -1)\nwin32api.CloseHandle(handle)\n\n\n# Kill the proces using ctypes and pid\nimport ctypes\nPROCESS_TERMINATE = 1\nhandle = ctypes.windll.kernel32.OpenProcess(PROCESS_TERMINATE, False, process.pid)\nctypes.windll.kernel32.TerminateProcess(handle, -1)\nctypes.windll.kernel32.CloseHandle(handle)\n\n", "Well Python is open source, you are free to take that pthread function from 2.6 and move it into your own code or use it as a reference to implement your own.\nFor reasons that should be obvious there's no way to have a hybrid of Python that can import portions of newer versions.\n" ]
[ 9, 6, 2, 2, 1, 0 ]
[]
[]
[ "python", "python_2.5", "subprocess" ]
stackoverflow_0000552423_python_python_2.5_subprocess.txt
Q: why does this code break out of loop? import math t=raw_input() k=[] a=0 for i in range(0,int(t)): s=raw_input() b=1 c=1 a=int(s) if a==0: continue else: d=math.atan(float(1)/b) + math.atan(float(1)/c) v=math.atan(float(1)/a) print v print d print float(v) print float(d) while(): if float(v)== float(d): break b=b+1 c=c+1 d=math.atan(float(1)/float(b)) + math.atan(float(1)/float(c)) print d k.append(int(b)+int(c)) for i in range(0,int(t)): print k[i] as it's very evident float(v) != float(d) till b becomes 2 and c becomes 3. A: Your while loop tests on an empty tuple, which evaluates to False. Thus, the statements within the while loop will never execute: If you want your while loop to run until it encounters a break statement, do this: while True: if (some_condition): break else: # Do stuff... A: It is very dangerous to make comparisons like float(a)==float(b) since float variables have no exact representation. Due to rounding errors you may not have identical values. Even 2*0.5 may not equal 1. You may use the following: if abs(float(a)-float(b)) < verySmallValue: A: http://www.cygnus-software.com/papers/comparingfloats/comparingfloats.htm Floating point math is not exact. Simple values like 0.2 cannot be precisely represented using binary floating point numbers, and the limited precision of floating point numbers means that slight changes in the order of operations can change the result. Different compilers and CPU architectures store temporary results at different precisions, so results will differ depending on the details of your environment. If you do a calculation and then compare the results against some expected value it is highly unlikely that you will get exactly the result you intended. In other words, if you do a calculation and then do this comparison: if (result == expectedResult) then it is unlikely that the comparison will be true. If the comparison is true then it is probably unstable – tiny changes in the input values, compiler, or CPU may change the result and make the comparison be false. A: Well, it didn't reach the break point. The problem is that while() does not loop at all. To do an infinite loop, do while (1): (the while condition must evaluate to true). Here's a working (cleaned up) sample. import math t = raw_input() k = [] a = 0.0 for i in range(0,int(t)): s = float(raw_input()) b = 1.0 c = 1.0 a= float(s) if a == 0: continue else: d = math.atan(1.0/b) + math.atan(1.0/c) v = math.atan(1.0/a) print v print d while True: if v == d: print 'bar' break b += 1 c += 1 d = math.atan(1.0/b) + math.atan(1.0/c) print d k.append(int(b)+int(c)) for i in range(0,int(t)): print k[i]
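An editorial sketch of the tolerance idea from the answers above (the epsilon value is an arbitrary choice, and the exact == result may vary by platform):

import math

epsilon = 1e-9
v = math.atan(1.0)                          # pi/4
d = math.atan(0.5) + math.atan(1.0 / 3.0)   # mathematically also pi/4
print v == d                 # may be False because of rounding
print abs(v - d) < epsilon   # True: compare floats with a tolerance instead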
why does this code break out of loop?
import math t=raw_input() k=[] a=0 for i in range(0,int(t)): s=raw_input() b=1 c=1 a=int(s) if a==0: continue else: d=math.atan(float(1)/b) + math.atan(float(1)/c) v=math.atan(float(1)/a) print v print d print float(v) print float(d) while(): if float(v)== float(d): break b=b+1 c=c+1 d=math.atan(float(1)/float(b)) + math.atan(float(1)/float(c)) print d k.append(int(b)+int(c)) for i in range(0,int(t)): print k[i] as it's very evident float(v) != float(d) till b becomes 2 and c becomes 3.
[ "Your while loop tests on an empty tuple, which evaluates to False. Thus, the statements within the while loop will never execute:\nIf you want your while loop to run until it encounters a break statement, do this:\nwhile True:\n if (some_condition):\n break\n else:\n # Do stuff...\n\n", "If is very dangerous to make comparsisons like float(a)==float(b) since float variables have no exact representation. Due to rounding errors you may not have identic values.\nEven 2*0.5 may not be equal 1. You may use the following:\nif abs(float(a)-float(b)) < verySmallValue:\n\n", "http://www.cygnus-software.com/papers/comparingfloats/comparingfloats.htm\n\nFloating point math is not exact.\n Simple values like 0.2 cannot be\n precisely represented using binary\n floating point numbers, and the\n limited precision of floating point\n numbers means that slight changes in\n the order of operations can change the\n result. Different compilers and CPU\n architectures store temporary results\n at different precisions, so results\n will differ depending on the details\n of your environment. If you do a\n calculation and then compare the\n results against some expected value it\n is highly unlikely that you will get\n exactly the result you intended. In\n other words, if you do a calculation\n and then do this comparison: if\n (result == expectedResult)\nthen it is unlikely that the\n comparison will be true. If the\n comparison is true then it is probably\n unstable – tiny changes in the input\n values, compiler, or CPU may change\n the result and make the comparison be\n false.\n\n", "Well, it didn't reach the break point. The problem is that while() does not loop at all. To do an infinite loop, do while (1): (since the while condition must evaluate to true. Here's a working (cleaned up) sample.\nimport math\nt = raw_input()\nk = []\na = 0.0\nfor i in range(0,int(t)):\n s = float(raw_input())\n b = 1.0\n c = 1.0\n a= float(s)\n if a == 0:\n continue\n else:\n d = math.atan(1.0/b) + math.atan(1.0/c)\n v = math.atan(1.0/a)\n print v\n print d\n while True:\n if v == d:\n print 'bar'\n break\n b += 1\n c += 1\n d = math.atan(1.0/b) + math.atan(1.0/c)\n print d\n k.append(int(b)+int(c))\n\nfor i in range(0,int(t)):\n print k[i]\n\n" ]
[ 8, 2, 2, 0 ]
[]
[]
[ "python", "syntax_error" ]
stackoverflow_0000994729_python_syntax_error.txt
Q: How can I omit words in the middle of a regular expression in Python? I have a multi-line string like this: "...Togo...Togo...Togo...ACTIVE..." I want to get everything between the third 'Togo' and 'ACTIVE' and the remainder of the string. I am unable to create a regular expression that can do this. If I try something like reg = "(Togo^[Togo]*?)(ACTIVE.*)" nothing is captured (the first and last parentheses are needed for capturing groups). A: reg = "Togo.*Togo.*Togo(.*)ACTIVE" Alternatively, if you want to match the string between the last occurrence of Togo and the following occurence of ACTIVE, and the number of Togo occurences is not necessarily three, try this: reg = "Togo(([^T]|T[^o]|To[^g]|Tog[^o])*T?.?.?)ACTIVE" A: This matches just the desired parts: .*(Togo.*?)(ACTIVE.*) The leading .* is greedy, so the following Togo matches at the last possible place. The captured part starts at the last Togo. In your expression ^[Togo]*? doesn't do the right thing. ^ tries to match the beginning of a line and [Togo] matches any of the characters T, o or g. Even [^Togo] wouldn't work since this just matches any character that is not T, o or g. A: "(Togo(?:(?!Togo).)*)(ACTIVE.*)" The square brackets in your regex form a character class that matches one of the characters 'T', 'o', or 'g'. The caret ('^') matches the beginning of the input if it's not in a character class, and it can be used inside the square brackets to invert the character class. In my regex, after matching the word "Togo" I match one character at a time, but only after I check that it isn't the start of another instance of "Togo". (?!Togo) is called a negative lookahead.
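A runnable illustration of the pattern from the second answer (editorial addition; the sample string is made up):

import re

s = "aaTogobbTogoccTogo...ACTIVE and the rest"
m = re.search(r".*(Togo.*?)(ACTIVE.*)", s)
if m:
    print m.group(1)   # from the last 'Togo' up to 'ACTIVE'
    print m.group(2)   # 'ACTIVE' and the remainder of the string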
How can I omit words in the middle of a regular expression in Python?
I have a multi-line string like this: "...Togo...Togo...Togo...ACTIVE..." I want to get everything between the third 'Togo' and 'ACTIVE' and the remainder of the string. I am unable to create a regular expression that can do this. If I try something like reg = "(Togo^[Togo]*?)(ACTIVE.*)" nothing is captured (the first and last parentheses are needed for capturing groups).
[ "reg = \"Togo.*Togo.*Togo(.*)ACTIVE\"\n\nAlternatively, if you want to match the string between the last occurrence of Togo and the following occurence of ACTIVE, and the number of Togo occurences is not necessarily three, try this:\nreg = \"Togo(([^T]|T[^o]|To[^g]|Tog[^o])*T?.?.?)ACTIVE\"\n\n", "This matches just the desired parts:\n.*(Togo.*?)(ACTIVE.*)\n\nThe leading .* is greedy, so the following Togo matches at the last possible place. The captured part starts at the last Togo.\nIn your expression ^[Togo]*? doesn't do the right thing. ^ tries to match the beginning of a line and [Togo] matches any of the characters T, o or g. Even [^Togo] wouldn't work since this just matches any character that is not T, o or g.\n", "\"(Togo(?:(?!Togo).)*)(ACTIVE.*)\"\n\nThe square brackets in your regex form a character class that matches one of the characters 'T', 'o', or 'g'. The caret ('^') matches the beginning of the input if it's not in a character class, and it can be used inside the square brackets to invert the character class.\nIn my regex, after matching the word \"Togo\" I match one character at a time, but only after I check that it isn't the start of another instance of \"Togo\". (?!Togo) is called a negative lookahead. \n" ]
[ 1, 1, 1 ]
[]
[]
[ "python", "string" ]
stackoverflow_0001441866_python_string.txt
Q: PIL not rendering fonts uniformly across machines I wrote some code that spits out an image. The code run on my local machine yields this image: local http://img32.yfrog.com/img32/9476/local.png and on my webhost, it looks like this: host http://img32.imageshack.us/img32/858/hoste.png As you can see they are different. The top is much nicer. Both are using the same code, and the same font file (VeraMoBd.ttf), the same version of PIL (1.1.6), and the same Python version (2.6). I googled around and there doesn't seem to be any kind of global settings relating to how PIL renders fonts... What could be causing different results? A: I would guess that the top image was rendered with the TrueType hinting bytecode VM enabled, whereas the bottom was using only FreeType's auto-hinting. (Personally I prefer the bottom!) There are, unfortunately, software patent issues which mean the hinting bytecode feature is not available on all binary builds. This is why it's not a simple run-time feature you can enable and disable, but something that is decided at compile-time. If you compile your own copy of FreeType you can enable the feature by #define-ing the flag TT_CONFIG_OPTION_BYTECODE_INTERPRETER in config/ftoption.h — if your lawyer reckons it's a good idea.
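To compare the two machines directly, a minimal render script such as this sketch can help (editorial addition; the text, font size and image dimensions are arbitrary):

import Image, ImageDraw, ImageFont   # PIL 1.1.6 style imports

font = ImageFont.truetype("VeraMoBd.ttf", 12)
img = Image.new("RGB", (200, 20), "white")
draw = ImageDraw.Draw(img)
draw.text((2, 2), "0123456789 abcdef", font=font, fill="black")
img.save("sample.png")   # run on both machines and diff the PNGs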
PIL not rendering fonts uniformly across machines
I wrote some code that spits out an image. The code run on my local machine yields this image: local http://img32.yfrog.com/img32/9476/local.png and on my webhost, it looks like this: host http://img32.imageshack.us/img32/858/hoste.png As you can see they are different. The top is much nicer. Both are using the same code, and the same font file (VeraMoBd.ttf), the same version of PIL (1.1.6), and the same Python version (2.6). I googled around and there doesn't seem to be any kind of global settings relating to how PIL renders fonts... What could be causing different results?
[ "I would guess that the top image was rendered with the TrueType hinting bytecode VM enabled, where the bottom was using only FreeType's auto-hinting. (Personally I prefer the bottom!)\nThere are, unfortunately, software patent issues which mean the hinting bytecode feature is not available on all binary builds. This is why it's not a simple run-time feature you can enable and disable, but something that is decided at compile-time. If you compile your own copy of FreeType you can enable the feature by #define-ing the flag TT_CONFIG_OPTION_BYTECODE_INTERPRETER in config/ftoption.h — if your lawyer reckons it's a good idea.\n" ]
[ 4 ]
[]
[]
[ "python", "python_imaging_library" ]
stackoverflow_0001449663_python_python_imaging_library.txt
Q: Python TurtleGraphics - Making a randomly moving turtle? I'm trying to create a randomly moving turtle here by following these steps in a function I've called drunk_turtle(): Repeat the following as many times as you like: Randomly choose an integer, called rand_num, from -1 to 1 (i.e. randomly set rand_num to be -1, 0, or 1) Make the turtle turn right rand_num * 90 degrees; Go forward 5, 10, or 15 --- choose this value at random. 5 = 1*5, 10 = 2*5, 15 = 3*5, ... How do I make code that does this? I don't really get how to get my random integer or get it to pick randomly 5, 10, or 15. Any help is appreciated. Thanks! A: You can find all of that information in the Python random manual. random.randint(a, b) Return a random integer N such that a <= N <= b. So you would do random.randint(-1,1) to get a number from -1, 0, or 1. To get 5, 10, or 15, just do 5 * random.randint(1,3). If you had a more complicated set of numbers to choose from -- say (6, 25, or 33) -- you can do random.choice([6, 25, 33]).
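Putting the pieces of the answer together, a runnable sketch (editorial addition; the step count of 50 is arbitrary):

import random
import turtle

def tipsy_turtle(num_steps):
    turtle.reset()
    for step in range(num_steps):
        turtle.right(random.randint(-1, 1) * 90)    # turn -90, 0 or 90 degrees
        turtle.forward(random.choice([5, 10, 15]))  # move 5, 10 or 15 units

tipsy_turtle(50)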
Python TurtleGraphics - Making a randomly moving turtle?
I'm trying to create a randomly moving turtle here by following these steps in a function I've called drunk_turtle(): Repeat the following as many times as you like: Randomly choose an integer, called rand_num, from -1 to 1 (i.e. randomly set rand_num to be -1, 0, or 1) Make the turtle turn right rand_num * 90 degrees; Go forward 5, 10, or 15 --- choose this value at random. 5 = 1*5, 10 = 2*5, 15 = 3*5, ... How do I make code that does this? I don't really get how to get my random integer or get it to pick randomly 5, 10, or 15. Any help is appreciated. Thanks!
[ "You can find all of that information in the Python random manual.\n\nrandom.randint(a, b)\nReturn a random integer N such that a <= N <= b.\n\n\nSo you would do random.randint(-1,1) to get a number from -1, 0, or 1.\nTo get 5, 10, or 15, just do 5 * random.randint(1,3).\nIf you had a more complicated set of numbers to choose from -- say (6, 25, or 33) -- you can do random.choice([6, 25, 33]).\n" ]
[ 1 ]
[]
[]
[ "python" ]
stackoverflow_0001449826_python.txt
Q: Converting 2.5 byte comparisons to 3 I'm trying to convert a 2.5 program to 3. Is there a way in python 3 to change a byte string, such as b'\x01\x02' to a python 2.5 style string, such as '\x01\x02', so that string and byte-by-byte comparisons work similarly to 2.5? I'm reading the string from a binary file. I have a 2.5 program that reads bytes from a file, then compares or processes each byte or combination of bytes with specified constants. To run the program under 3, I'd like to avoid changing all my constants to bytes and byte strings ('\x01' to b'\x01'), then dealing with issues in 3 such as: a = b'\x01' b = b'\x02' results in (a+b)[0] != a even though similar operations work in 2.5. I have to do (a+b)[0] == ord(a), while a+b == b'\x01\x02' works fine. (By the way, what do I do to (a+b)[0] so it equals a?) Unpacking structures is also an issue. Am I missing something simple? A: Bytes is an immutable sequence of integers (in the range 0<= to <256), therefore when you're accessing (a+b)[0] you're getting back an integer, exactly the same one you'd get by accessing a[0]. So when you're comparing sequence a to an integer (a+b)[0], they're naturally different. Using the slice notation you could however get a sequence back: >>> (a+b)[:1] == a # 1 == len(a) ;) True because slicing returns a bytes object. I would also advise running the 2to3 utility (it needs to be run with py2k) to convert some code automatically. It won't solve all your problems, but it'll help a lot.
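A small Python 3 demonstration of the indexing-versus-slicing point above (editorial addition):

a = b'\x01'
b = b'\x02'
print((a + b)[0] == a)      # False: the left side is the integer 1
print((a + b)[0] == a[0])   # True: integer compared with integer
print((a + b)[:1] == a)     # True: bytes compared with bytes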
Converting 2.5 byte comparisons to 3
I'm trying to convert a 2.5 program to 3. Is there a way in python 3 to change a byte string, such as b'\x01\x02' to a python 2.5 style string, such as '\x01\x02', so that string and byte-by-byte comparisons work similarly to 2.5? I'm reading the string from a binary file. I have a 2.5 program that reads bytes from a file, then compares or processes each byte or combination of bytes with specified constants. To run the program under 3, I'd like to avoid changing all my constants to bytes and byte strings ('\x01' to b'\x01'), then dealing with issues in 3 such as: a = b'\x01' b = b'\x02' results in (a+b)[0] != a even though similar operations work in 2.5. I have to do (a+b)[0] == ord(a), while a+b == b'\x01\x02' works fine. (By the way, what do I do to (a+b)[0] so it equals a?) Unpacking structures is also an issue. Am I missing something simple?
[ "Bytes is an immutable sequence of integers (in the range 0<= to <256), therefore when you're accessing (a+b)[0] you're getting back an integer, exactly the same one you'd get by accessing a[0]. so when you're comparing sequence a to an integer (a+b)[0], they're naturally different.\nusing the slice notation you could however get a sequence back:\n>>> (a+b)[:1] == a # 1 == len(a) ;)\nTrue\n\nbecause slicing returns bytes object.\nI would also advised to run 2to3 utility (it needs to be run with py2k) to convert some code automatically. It won't solve all your problems, but it'll help a lot.\n" ]
[ 3 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0001449791_python_python_3.x.txt
Q: How to convert an NSDictionary to a Python dict? I have a plugin written entirely in Python using PyObjC whose core classes I need to convert to Objective-C. One of them basically just loads up a Python module and executes a specific function, passing it keyword arguments. In PyObjC, this was extremely easy. However, I'm having difficulty figuring out how to do the same thing using the Python C API. In particular, I'm unsure how best to convert an NSDictionary (which might hold integers, strings, booleans, or all of the above) into a format that I can then pass on to Python as keyword arguments. Anyone have pointers on how to accomplish something like this? Thanks in advance! Edit: just to clarify, I'm converting my existing class which was formerly Python into Objective-C, and am having trouble figuring out how to move from an NSDictionary in Objective-C to a Python dictionary I can pass on when I invoke the remaining Python scripts. The Objective-C class is basically just a Python loader, but I'm unfamiliar with the Python C API and am having trouble figuring out where to look for examples or functions that will help me. A: Oh, looks like I misunderstood your question. Well, going the other direction isn't terribly different. This should be (at least a start of) the function you're looking for (I haven't tested it thoroughly though, so beware of the bugs): // Returns a new reference PyObject *ObjcToPyObject(id object) { if (object == nil) { // This technically doesn't need to be an extra case, // but you may want to differentiate it for error checking return NULL; } else if ([object isKindOfClass:[NSString class]]) { return PyString_FromString([object UTF8String]); } else if ([object isKindOfClass:[NSNumber class]]) { // You could probably do some extra checking here if you need to // with the -objCType method. return PyLong_FromLong([object longValue]); } else if ([object isKindOfClass:[NSArray class]]) { // You may want to differentiate between NSArray (analogous to tuples) // and NSMutableArray (analogous to lists) here. Py_ssize_t i, len = [object count]; PyObject *list = PyList_New(len); for (i = 0; i < len; ++i) { PyObject *item = ObjcToPyObject([object objectAtIndex:i]); NSCAssert(item != NULL, @"Can't add NULL item to Python List"); // Note that PyList_SetItem() "steals" the reference to the passed item. // (i.e., you do not need to release it) PyList_SetItem(list, i, item); } return list; } else if ([object isKindOfClass:[NSDictionary class]]) { PyObject *dict = PyDict_New(); for (id key in object) { PyObject *pyKey = ObjcToPyObject(key); NSCAssert(pyKey != NULL, @"Can't add NULL key to Python Dictionary"); PyObject *pyItem = ObjcToPyObject([object objectForKey:key]); NSCAssert(pyItem != NULL, @"Can't add NULL item to Python Dictionary"); PyDict_SetItem(dict, pyKey, pyItem); Py_DECREF(pyKey); Py_DECREF(pyItem); } return dict; } else { NSLog(@"ObjcToPyObject() could not convert Obj-C object to PyObject."); return NULL; } } You may also want to take a look at the Python/C API Reference manual if you haven't already.
How to convert an NSDictionary to a Python dict?
I have a plugin written entirely in Python using PyObjC whose core classes I need to convert to Objective-C. One of them basically just loads up a Python module and executes a specific function, passing it keyword arguments. In PyObjC, this was extremely easy. However, I'm having difficulty figuring out how to do the same thing using the Python C API. In particular, I'm unsure how best to convert an NSDictionary (which might hold integers, strings, booleans, or all of the above) into a format that I can then pass on to Python as keyword arguments. Anyone have pointers on how to accomplish something like this? Thanks in advance! Edit: just to clarify, I'm converting my existing class which was formerly Python into Objective-C, and am having trouble figuring out how to move from an NSDictionary in Objective-C to a Python dictionary I can pass on when I invoke the remaining Python scripts. The Objective-C class is basically just a Python loader, but I'm unfamiliar with the Python C API and am having trouble figuring out where to look for examples or functions that will help me.
[ "Oh, looks like I misunderstood your question. Well, going the other direction isn't terribly different. This should be (as least a start of) the function you're looking for (I haven't tested it thoroughly though, so beware of the bugs):\n// Returns a new reference\nPyObject *ObjcToPyObject(id object)\n{\n if (object == nil) {\n // This technically doesn't need to be an extra case, \n // but you may want to differentiate it for error checking\n return NULL;\n } else if ([object isKindOfClass:[NSString class]]) {\n return PyString_FromString([object UTF8String]);\n } else if ([object isKindOfClass:[NSNumber class]]) {\n // You could probably do some extra checking here if you need to\n // with the -objCType method.\n return PyLong_FromLong([object longValue]);\n } else if ([object isKindOfClass:[NSArray class]]) {\n // You may want to differentiate between NSArray (analagous to tuples) \n // and NSMutableArray (analagous to lists) here.\n Py_ssize_t i, len = [object count];\n PyObject *list = PyList_New(len);\n for (i = 0; i < len; ++i) {\n PyObject *item = ObjcToPyObject([object objectAtIndex:i]);\n NSCAssert(item != NULL, @\"Can't add NULL item to Python List\");\n // Note that PyList_SetItem() \"steals\" the reference to the passed item.\n // (i.e., you do not need to release it)\n PyList_SetItem(list, i, item);\n }\n return list;\n } else if ([object isKindOfClass:[NSDictionary class]]) {\n PyObject *dict = PyDict_New();\n for (id key in object) {\n PyObject *pyKey = ObjcToPyObject(key);\n NSCAssert(pyKey != NULL, @\"Can't add NULL key to Python Dictionary\");\n PyObject *pyItem = ObjcToPyObject([object objectForKey:key]);\n NSCAssert(pyItem != NULL, @\"Can't add NULL item to Python Dictionary\");\n PyDict_SetItem(dict, pyKey, pyItem);\n Py_DECREF(pyKey);\n Py_DECREF(pyItem);\n }\n return dict;\n } else {\n NSLog(@\"ObjcToPyObject() could not convert Obj-C object to PyObject.\");\n return NULL;\n }\n}\n\nYou may also want to take a look at the Python/C API Reference manual if you haven't already.\n" ]
[ 2 ]
[]
[]
[ "objective_c", "pyobjc", "python" ]
stackoverflow_0001449620_objective_c_pyobjc_python.txt
Q: Which queue is most appropriate? I'm building a mobile photo sharing site in Python similar to TwitPic and have been exploring various queues to handle the image processing. I've looked into RabbitMQ and ActiveMQ but I'm thinking that there is a better solution for my use case. I'm looking for something a little more lightweight. I'm open to any suggestions. A: You could write a daemon that uses python's built-in multiprocessing library and its Queue. All you should have to do is set up a pool of workers, and have them wait on jobs from the Queue. Your main process can dump new jobs into the Queue, and you're good to go. A: Gearman is good in that it optionally allows you to synchronize multiple jobs executed on multiple workers. I've used beanstalkd successfully in a few high-volume applications. The latter is better-suited to async jobs, and the former gives you more flexibility when you'd like to block on job execution. A: Are you considering single machine architecture, or a cluster of machines? Forwarding the image to an available worker process on the same machine or a different machine isn't profoundly different, particularly if you use TCP sockets. Knowing what workers are available, spawning more if necessary and the resources are available, having a fail-safe mechanism if a worker crashes, etc, gradually make the problem more complicated. It could be something as simple as using httplib to push the image to a private server running Apache or twisted and a collection of cgi applications. When you add another server, round robin the request amongst them.
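A minimal sketch of the multiprocessing suggestion above (editorial addition; the job handler and file names are stand-ins):

from multiprocessing import Process, Queue

def resize_image(path):
    pass   # stand-in for the real image processing

def worker(queue):
    while True:
        path = queue.get()
        if path is None:   # sentinel: no more jobs
            break
        resize_image(path)

if __name__ == '__main__':
    queue = Queue()
    workers = [Process(target=worker, args=(queue,)) for i in range(4)]
    for w in workers:
        w.start()
    for path in ['1.jpg', '2.jpg']:   # made-up upload paths
        queue.put(path)
    for w in workers:
        queue.put(None)   # one sentinel per worker
    for w in workers:
        w.join()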
Which queue is most appropriate?
I'm building a mobile photo sharing site in Python similar to TwitPic and have been exploring various queues to handle the image processing. I've looked into RabbitMQ and ActiveMQ but I'm thinking that there is a better solution for my use case. I'm looking for something a little more lightweight. I'm open to any suggestions.
[ "You could write a daemon that uses python's built-in multiprocessing library and its Queue.\nAll you should have to do is set up a pool of workers, and have them wait on jobs from the Queue. Your main process can dump new jobs into the Queue, and you're good to go.\n", "Gearman is good in that it optionally allows you to synchronize multiple jobs executed on multiple workers.\nI've used beanstalkd successfully in a few high-volume applications.\nThe latter is better-suited to async jobs, and the former gives you more flexibility when you'd like to block on job execution.\n", "Are you considering single machine architecture, or a cluster of machines? Forwarding the image to an available worker process on the same machine or a different machine isn't profoundly different, particularly if you use TCP sockets. Knowing what workers are available, spawning more if necessary and the resources are available, having a fail-safe mechanism if a worker crashes, etc, gradually make the problem more complicated.\nIt could be something as simple as using httplib to push the image to a private server running Apache or twisted and a collection of cgi applications. When you add another server, round robin the request amongst them.\n" ]
[ 2, 1, 0 ]
[]
[]
[ "python", "queue" ]
stackoverflow_0001450038_python_queue.txt
Q: Python TurtleGraphics - Smoothing out random walks? I need some help with this question relating to TurtleGraphics in Python: A small detail of tipsy_turtle() is that when the turtle turns 90 degrees it immediately "jumps" to the new direction. This makes its movement seem jagged. It might look better if the turtle moved smoothly when turning. So, for this question, write a function called smooth_tipsy_turtle() that is the same as tipsy_turtle(), except instead of using the turtle.right(d) function, write a brand new function called smooth_right(d) that works as follows: - If d is negative then - repeat the following -d times: - turn left 1 using the ordinary turtle.left command - Otherwise, repeat the following d times: - turn right 1 using the ordinary turtle.right command Here is my original function to get the random turtle movement: def tipsy_turtle(num_steps): turtle.reset() for step in range(num_steps): rand_num = random.randint(-1, 1) turtle.right(rand_num * 90) turtle.forward(5 * random.randint(1, 3)) So, how would I go about making this work? I tried adding: if rand_num*90 < 0: for step in range(rand_num*90): turtle.left(rand_num*90) else: turtle.right(rand_num*90) But it didn't really work out and I don't know what I did wrong. Thanks! A: Hopefully this sample clears up what went wrong in your example -- you performed either rand_num*90*rand_num*90 left turns, or rand_num*90 right turns! if rand_num < 0: # don't need to multiply by 90 here - it's either +ve or -ve. for step in xrange(90): # xrange is preferred over range in situations like this turtle.left(rand_num) # net result is 90 left turns in rand_num direction else: for step in xrange(90): turtle.right(rand_num) Or you could write this as: for step in xrange(90): if rand_num < 0: turtle.left(rand_num) else: turtle.right(rand_num) For code like this, it's really a matter of preference. A: You might be able to do without the conditional for left-vs-right. I don't have python syntax down, so here's pseudocode turtle left randomly generated value 0 to 90 turtle right randomly generated value 0 to 90 turtle forward some amount I.e., generate a random angle and turn left that much, then generate another random angle and turn right by that much. This way you don't have to worry about generating or dealing with negative randoms. You can keep all the random angles positive, and the combination of a left followed by a right effectively does a subtraction for you which gives a nice gaussian distribution to the changes in direction. A: I guess I'll venture an answer even though I am not completely sure what you want (see my comment to the question, and don't be surprised if I edit this answer as appropriate!). Assuming you want the turtle to turn some number of degrees at each step, not necessarily 90, but not more than 90, then simply use rand_num = random.randint(-90, 90) and then turtle.right(rand_num).
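Following the assignment's pseudocode literally, a sketch of the two functions (editorial addition; the step count of 20 is arbitrary):

import random
import turtle

def smooth_right(d):
    if d < 0:
        for step in range(-d):   # -d is positive here
            turtle.left(1)
    else:
        for step in range(d):
            turtle.right(1)

def smooth_tipsy_turtle(num_steps):
    turtle.reset()
    for step in range(num_steps):
        rand_num = random.randint(-1, 1)
        smooth_right(rand_num * 90)   # ninety 1-degree turns instead of one jump
        turtle.forward(5 * random.randint(1, 3))

smooth_tipsy_turtle(20)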
Python TurtleGraphics - Smoothing out random walks?
I need some help with this question relating to TurtleGraphics in Python: A small detail of tipsy_turtle() is that when the turtle turns 90 degrees it immediately "jumps" to the new direction. This makes its movement seem jagged. It might look better if the turtle moved smoothly when turning. So, for this question, write a function called smooth_tipsy_turtle() that is the same as tipsy_turtle(), except instead of using the turtle.right(d) function, write a brand new function called smooth_right(d) that works as follows: - If d is negative then - repeat the following -d times: - turn left 1 using the ordinary turtle.left command - Otherwise, repeat the following d times: - turn right 1 using the ordinary turtle.right command Here is my original function to get the random turtle movement: def tipsy_turtle(num_steps): turtle.reset() for step in range(num_steps): rand_num = random.randint(-1, 1) turtle.right(rand_num * 90) turtle.forward(5 * random.randint(1, 3)) So, how would I go about making this work? I tried adding: if rand_num*90 < 0: for step in range(rand_num*90): turtle.left(rand_num*90) else: turtle.right(rand_num*90) But it didn't really work out and I don't know what I did wrong. Thanks!
[ "Hopefully this sample clears up what went wrong in your example -- you performed either rand_num*90*rand_num*90 left turns, or rand_num*90 right turns!\nif rand_num < 0: # don't need to multiply by 90 here - it's either +ve or -ve.\n for step in xrange(90): # xrange is preferred over range in situations like this\n turtle.left(rand_num) # net result is 90 left turns in rand_num direction\nelse:\n for step in xrange(90):\n turtle.right(rand_num)\n\nOr you could write this as:\nfor step in xrange(90):\n if rand_num < 0:\n turtle.left(rand_num)\n else:\n turtle.right(rand_num)\n\nFor code like this, it's really a matter of preference.\n", "You might be able to do without the conditional for left-vs-right. I don't have python syntax down, so here's pseudocode\nturtle left randomly generated value 0 to 90\nturtle right randomly generated value 0 to 90\nturtle forward some amount\n\nI.e., generate a random angle and turn left that much, then generate another random angle and turn right by that much. This way you don't have to worry about generating or dealing with negative randoms. You can keep all the random angles positive, and the combination of a left followed by a right effectively does a subtraction for you which gives a nice gaussian distribution to the changes in direction. \n", "I guess I'll venture an answer even though I am not completely sure what you want (see my comment to the question, and don't be surprised if I edit this answer as appropriate!).\nAssuming you want the turtle to turn some number of degrees at each step, not necessarily 90, but not more than 90, then simply use rand_num = random.randint(-90, 90) and then turtle.right(rand_num).\n" ]
[ 2, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001449990_python.txt
Q: Django FormPreview - What is it for? While looking across the Django documentation, I came across the FormPreview. The description says this: Django comes with an optional “form preview” application that helps automate the following workflow: “Display an HTML form, force a preview, then do something with the submission.” What is meant by "force a preview"? What would you use this feature for in an application? A: I think they mean (I use django but I didn't know of this until now..) that you can let people write, for example in a textarea box like I'm doing right now. After the user submits it the system would preview it to the user and give him the chance to read and edit what he submitted, before it is submitted again all the way to the database. A: To force a preview means the users are forced to see the value they have inserted on the form input fields, before django actually saves it to the database. One example is the django comment system, which forces the users to take a look at the comment they have written before django actually saves it to the database. You would see that the users are redirected to another page to take a look at their comment, and after that there is a submit button to actually save the comment.
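For reference, a minimal sketch of that workflow in code, based on the formtools API of this era (editorial addition; CommentForm, the redirect URL and save_comment are hypothetical):

from django.contrib.formtools.preview import FormPreview
from django.http import HttpResponseRedirect

def save_comment(cleaned_data):
    pass   # stand-in for whatever you do with the submission

class CommentPreview(FormPreview):
    def done(self, request, cleaned_data):
        # Reached only after the user has seen the preview and confirmed it.
        save_comment(cleaned_data)
        return HttpResponseRedirect('/done/')

# urls.py would route a URL to CommentPreview(CommentForm), where CommentForm
# is an ordinary django.forms.Form subclass.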
Django FormPreview - What is it for?
While looking across the Django documentation, I came across the FormPreview. The description says this: Django comes with an optional “form preview” application that helps automate the following workflow: “Display an HTML form, force a preview, then do something with the submission.” What is meant by "force a preview"? What would you use this feature for in an application?
[ "I think they mean (I use django but I didn't know of this until now..) that you can let people write, for example in a textarea box like I'm doing right now. After the user submits it the system would preview it to the user and give him the chance to read and edit what he submitted, before it being submitted again all the way to the database.\n", "To force a preview means the users are forced to see the value they have inserted on the form input fields, before django actually saves it to the database.\nOne example is django comment system, which enforce the users to take a look at the comment they have written before django actually saves it to the database. You would see that the users are redirected to another page to take a look at their comment, and after that there is a submit button to actually save the comment.\n" ]
[ 2, 2 ]
[]
[]
[ "django", "forms", "python" ]
stackoverflow_0001450295_django_forms_python.txt
Q: How to check form entry for special characters in python? Let's say I have a form field for "Name". I want to display an error message if it contains special characters such as $, #, etc. The only acceptable characters should be any alphanumeric, the hyphen "-", and the apostrophe "'". I am not sure how I should search the name for these non-acceptable characters, especially the apostrophe. So in the code it should look like the following: name = request.POST['name'] if name contains any non-acceptable characters then display error message. A: You can use regular expressions to validate your string, like this: import re if re.search(r"^[\w\d'-]+$", name): # success Another way: if set("#$").intersection(name): print "bad chars in the name" A: import re p = r"^[\w'-]+$" if re.search(p, name): # it's okay else: # display error
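A self-contained sketch combining the answers (editorial addition; note that \w also allows underscores, so an explicit character class is used here):

import re

NAME_RE = re.compile(r"^[A-Za-z0-9'-]+$")   # letters, digits, hyphen, apostrophe

def is_valid_name(name):
    return NAME_RE.match(name) is not None

print is_valid_name("O'Brien-Smith")   # True
print is_valid_name("Bad$Name")        # False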
How to check form entry for special characters in python?
Let's say I have a form field for "Name". I want to display an error message if it contains special characters such as $, #, etc. The only acceptable characters should be any alphanumeric, the hyphen "-", and the apostrophe "'". I am not sure how I should search the name for these non-acceptable characters, especially the apostrophe. So in the code it should look like the following: name = request.POST['name'] if name contains any non-acceptable characters then display error message.
[ "You can use regular expressions to validate your string, like this:\nimport re\nif re.search(r\"^[\\w\\d'-]+$\", name):\n # success\n\nAnother way:\nif set(\"#$\").intersection(name):\n print \"bad chars in the name\"\n\n", "import re\np = r\"^[\\w'-]+$\"\nif re.search(p, name):\n # it's okay\nelse:\n # display error\n\n" ]
[ 3, 1 ]
[]
[]
[ "forms", "python" ]
stackoverflow_0001450522_forms_python.txt
Q: Embed a spreadsheet/table in a PyGTK application? In my application, we want to present the user with a typical spreadsheet/table (OO.O/Excel-style), and then pull out the values and do something with them internally. Is there a preexisting widget for PyGTK that does this? The PyGTK FAQ mentions GtkGrid, but the link is dead and I can't find a tarball anywhere. A: GtkGrid is deprecated in favor of the more powerful and more customizable GtkTreeView. It can display trees and lists. To make it work like a table, you must define a ListStore where it will take the data from, and TreeViewColumns for each column you want to show, with CellRenderers to define how to show the column. This separation of data and render allows you to render other controls on the cell, like text boxes or images. GtkTreeView is very flexible, but it seems complex at first because of the many options. To help with that, there's the relevant section in the PyGTK tutorial (although you should read it entirely, not just this section). For completeness, here's an example code and a screenshot of it running in my system: #!/usr/bin/env python import pygtk pygtk.require('2.0') import gtk class TreeViewColumnExample(object): # close the window and quit def delete_event(self, widget, event, data=None): gtk.main_quit() return False def __init__(self): # Create a new window self.window = gtk.Window(gtk.WINDOW_TOPLEVEL) self.window.set_title("TreeViewColumn Example") self.window.connect("delete_event", self.delete_event) # create a liststore with one string column to use as the model self.liststore = gtk.ListStore(str, str, str, 'gboolean') # create the TreeView using liststore self.treeview = gtk.TreeView(self.liststore) # create the TreeViewColumns to display the data self.tvcolumn = gtk.TreeViewColumn('Pixbuf and Text') self.tvcolumn1 = gtk.TreeViewColumn('Text Only') # add a row with text and a stock item - color strings for # the background self.liststore.append(['Open', gtk.STOCK_OPEN, 'Open a File', True]) self.liststore.append(['New', gtk.STOCK_NEW, 'New File', True]) self.liststore.append(['Print', gtk.STOCK_PRINT, 'Print File', False]) # add columns to treeview self.treeview.append_column(self.tvcolumn) self.treeview.append_column(self.tvcolumn1) # create a CellRenderers to render the data self.cellpb = gtk.CellRendererPixbuf() self.cell = gtk.CellRendererText() self.cell1 = gtk.CellRendererText() # set background color property self.cellpb.set_property('cell-background', 'yellow') self.cell.set_property('cell-background', 'cyan') self.cell1.set_property('cell-background', 'pink') # add the cells to the columns - 2 in the first self.tvcolumn.pack_start(self.cellpb, False) self.tvcolumn.pack_start(self.cell, True) self.tvcolumn1.pack_start(self.cell1, True) self.tvcolumn.set_attributes(self.cellpb, stock_id=1) self.tvcolumn.set_attributes(self.cell, text=0) self.tvcolumn1.set_attributes(self.cell1, text=2, cell_background_set=3) # make treeview searchable self.treeview.set_search_column(0) # Allow sorting on the column self.tvcolumn.set_sort_column_id(0) # Allow drag and drop reordering of rows self.treeview.set_reorderable(True) self.window.add(self.treeview) self.window.show_all() def main(): gtk.main() if __name__ == "__main__": tvcexample = TreeViewColumnExample() main() A: It's perhaps a little more manual than, for instance, a GridView in .Net, but the Tree View widget will do what you need and much more. 
See PyGtk TreeView A: You might consider embedding a web page viewer - you can do a lot with that: http://blog.mypapit.net/2009/09/pymoembed-web-browser-in-python-gtk-application.html
Embed a spreadsheet/table in a PyGTK application?
In my application, we want to present the user with a typical spreadsheet/table (OO.O/Excel-style), and then pull out the values and do something with them internally. Is there a preexisting widget for PyGTK that does this? The PyGTK FAQ mentions GtkGrid, but the link is dead and I can't find a tarball anywhere.
[ "GtkGrid is deprecated in favor of the more powerful and more customizable GtkTreeView.\nIt can display trees and lists. To make it work like a table, you must define a ListStore where it will take the data from, and TreeViewColumns for each column you want to show, with CellRenderers to define how to show the column. This separation of data and render allows you to render other controls on the cell, like text boxes or images.\nGtkTreeView is very flexible, but it seems complex at first because of the many options. To help with that, there's the relevant section in the PyGTK tutorial (although you should read it entirely, not just this section).\nFor completeness, here's an example code and a screenshot of it running in my system:\n\n#!/usr/bin/env python\n\nimport pygtk\npygtk.require('2.0')\nimport gtk\n\nclass TreeViewColumnExample(object):\n\n # close the window and quit\n def delete_event(self, widget, event, data=None):\n gtk.main_quit()\n return False\n\n def __init__(self):\n # Create a new window\n self.window = gtk.Window(gtk.WINDOW_TOPLEVEL)\n self.window.set_title(\"TreeViewColumn Example\")\n self.window.connect(\"delete_event\", self.delete_event)\n\n # create a liststore with one string column to use as the model\n self.liststore = gtk.ListStore(str, str, str, 'gboolean')\n\n # create the TreeView using liststore\n self.treeview = gtk.TreeView(self.liststore)\n\n # create the TreeViewColumns to display the data\n self.tvcolumn = gtk.TreeViewColumn('Pixbuf and Text')\n self.tvcolumn1 = gtk.TreeViewColumn('Text Only')\n\n # add a row with text and a stock item - color strings for\n # the background\n self.liststore.append(['Open', gtk.STOCK_OPEN, 'Open a File', True])\n self.liststore.append(['New', gtk.STOCK_NEW, 'New File', True])\n self.liststore.append(['Print', gtk.STOCK_PRINT, 'Print File', False])\n\n # add columns to treeview\n self.treeview.append_column(self.tvcolumn)\n self.treeview.append_column(self.tvcolumn1)\n\n # create a CellRenderers to render the data\n self.cellpb = gtk.CellRendererPixbuf()\n self.cell = gtk.CellRendererText()\n self.cell1 = gtk.CellRendererText()\n\n # set background color property\n self.cellpb.set_property('cell-background', 'yellow')\n self.cell.set_property('cell-background', 'cyan')\n self.cell1.set_property('cell-background', 'pink')\n\n\n # add the cells to the columns - 2 in the first\n self.tvcolumn.pack_start(self.cellpb, False)\n self.tvcolumn.pack_start(self.cell, True)\n self.tvcolumn1.pack_start(self.cell1, True)\n\n self.tvcolumn.set_attributes(self.cellpb, stock_id=1)\n self.tvcolumn.set_attributes(self.cell, text=0)\n self.tvcolumn1.set_attributes(self.cell1, text=2,\n cell_background_set=3)\n\n # make treeview searchable\n self.treeview.set_search_column(0)\n\n # Allow sorting on the column\n self.tvcolumn.set_sort_column_id(0)\n\n # Allow drag and drop reordering of rows\n self.treeview.set_reorderable(True)\n\n self.window.add(self.treeview)\n\n self.window.show_all()\n\ndef main():\n gtk.main()\n\nif __name__ == \"__main__\":\n tvcexample = TreeViewColumnExample()\n main()\n\n", "It's perhaps a little more manual than, for instance, a GridView in .Net, but the Tree View widget will do what you need and much more. See PyGtk TreeView\n", "You might consider embedding a web page viewer - you can do a lot with that: http://blog.mypapit.net/2009/09/pymoembed-web-browser-in-python-gtk-application.html\n" ]
[ 12, 3, 0 ]
[]
[]
[ "pygtk", "python", "spreadsheet" ]
stackoverflow_0001447187_pygtk_python_spreadsheet.txt
Q: wx.StaticBitmap or wx.DC: Which is better to use for constantly changing images? I would like to have a python gui that loads different images from files. I've seen many examples of loading an image with some code like: img = wx.Image("1.jpg", wx.BITMAP_TYPE_ANY, -1) sb = wx.StaticBitmap(rightPanel, -1, wx.BitmapFromImage(img)) sizer.Add(sb) It seems to be suited for an image that will be there for the entire life of the program. I could not find an elegant way to delete/reload images with this. Would using a wx.DC be a better idea for my application? A: If you have rapidly changing big images, or you would like some custom effect in future, it is better to write your own control and do the painting using a paint DC, and it is not that hard. Doing your own drawing you can correctly scale, avoid flicker and maybe blend one image into another if you like :) A: You don't have to delete the StaticBitmap, you can just set another bitmap to it using its SetBitmap method. If the new image has different dimensions you will probably have to do an explicit Layout call on its parent so that the sizers would adjust. A: I read here: http://docs.wxwidgets.org/trunk/classwx_static_bitmap.html "Native implementations on some platforms are only meant for display of the small icons in the dialog boxes. In particular, under Windows 9x the size of bitmap is limited to 64*64 pixels." Which may be a problem. If you do use a DC, then you may have to "double buffer" it, or it may flicker on redraw, resize or update. Otherwise it seems to me that you should use a "normal" Bitmap if you are going to update it often.
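A sketch of the SetBitmap approach from the second answer, reusing the question's variables (editorial addition; the file name is made up):

img = wx.Image("2.jpg", wx.BITMAP_TYPE_ANY)
sb.SetBitmap(wx.BitmapFromImage(img))
rightPanel.Layout()   # re-run the sizer in case the new image size differs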
wx.StaticBitmap or wx.DC: Which is better to use for constantly changing images?
I would like to have a python gui that loads different images from files. I've seen many examples of loading an image with some code like: img = wx.Image("1.jpg", wx.BITMAP_TYPE_ANY, -1) sb = wx.StaticBitmap(rightPanel, -1, wx.BitmapFromImage(img)) sizer.Add(sb) It seems to be suited for an image that will be there for the entire life of the program. I could not find an elegant way to delete/reload images with this. Would using a wx.DC be a better idea for my application?
[ "If you have rapidly changing big images, or you would like some custom effect in future, it better to write your own control and doing painting using paintDC, and it is not that hard.\nDoing your own drawing you can correctly scale, avoid flicker and may be do blend of one image into other if you like :)\n", "You don't have to delete the StaticBitmap, you can just set another bitmap to it using it's SetBitmap method.\nIf the new image has different dimensions you will probably have to do an explicit Layout call on it's parent so that the sizers would adjust.\n", "I read here: http://docs.wxwidgets.org/trunk/classwx_static_bitmap.html\n\"Native implementations on some platforms are only meant for display of the small icons in the dialog boxes. In particular, under Windows 9x the size of bitmap is limited to 64*64 pixels.\"\nWhich may be a problem. If you do use a DC, then you may have to \"double buffer\" it, or it may flicker on redraw, resize or update.\nOtherwise it seems to me that you should use a \"normal\" Bitmap if you are oging to update it often.\n" ]
[ 1, 0, 0 ]
[]
[]
[ "image", "python", "user_interface", "wxpython" ]
stackoverflow_0001450639_image_python_user_interface_wxpython.txt
Q: Simple object recognition ===SOLVED=== Thanks for your suggestions and comments. By working on the flood_fill algorithm given in the Beginning Python Visualization book (Chapter 9 - Image Processing) I have implemented what I wanted. I can count the objects, get enclosing rectangles for each object (and therefore heights and widths), and lastly can construct NumPy arrays or matrices for each of them. Although it is not an optimized approach it does what I want. The source code (lab2.py) and the png file (lab2-particles.png) that I use have been put under http://code.google.com/p/ccnworks/source/browse/#svn/trunk/AtSc450. You need NumPy and PIL installed, and matplotlib to see the histogram. The core of the code lies within the objfind function where the main recursive object search action occurs. One further update: SciPy's ndimage.label() does exactly what I want, too. Cheers for David-Warde Farley and Zachary Pincus from the NumPy and SciPy mailing-lists for pointing this right into my eyes :) ============= Hello, I have an image that contains the shadows of ice particles measured by a particle spectrometer. I want to be able to identify each object, so that I can later classify and use them further in my calculations. In essence, what I want to do is to simply implement a fuzzy selection tool where I can simply select each entity. How could I easily solve this problem? (Preferably using Python) Thanks. NOTE: In my question I am referring to each specific group of connected pixels as an object or entity. My intention is to extract them and create NumPy array representations as shown below. (Here I am using the top-left object; if a pixel exists use 1's, if not use 0's. This object's shape is 3 by 3, which is correspondingly 3 pixels high by 3 pixels wide. These are projections of real ice-particles onto a 2D domain, under the assumption of their being spherical and the equivalent radius is (height+width)/2, and later some scalings --from pixels to actual sizes and volume calculations will follow) import numpy as np np.array([[1,1,1], [1,1,1], [0,0,1]]) array([[1, 1, 1], [1, 1, 1], [0, 0, 1]]) Here is a section from the image which I am going to use. screenshot http://img43.imageshack.us/img43/2327/particles.png A: Scan every square (e.g. from the top-left, left-to-right, top-to-bottom) When you hit a blue square then: a. Record this square as a location of a new object b. Find all the other contiguous blue squares (e.g. by looking at the neighbours of this square, and the neighbours of those neighbours, etc.) and mark them as being part of the same object Continue to scan When you find another blue square, test to see whether it's part of a known object before going to step 2; alternatively in step 2b, erase any square after you've associated it with an object A: Looking at the image you provided, all you need to do next is to apply a simple region growing algorithm. If I were using MATLAB, I would use the bwlabel/bwboundaries functions. I believe there's an equivalent function somewhere in Numpy, or use OpenCV with python wrappers as suggested by @kwatford A: I used to do this kind of analysis on micrographs and eventually put everything I needed into an image processing and analysis package written in C, driven via Tcl. (It worked with 512 x 512 images only, which explains why 512 crops up so often. There were images with pixels of various sizes allocated, but most of the work was done with 8-bit pixels, which explains why there is that business of 0xff and maximum meaningful count of 254 on an image.)
Briefly, the 'zz' at the beginning of the Tcl commands sends the remainder of the line to the package's parser which calls the appropriate C routine with the given arguments. Right after the 'zz' is an argument that indicates the input and output of the command. (There can be multiple inputs but only a single output.) 'r' indicates a 512 x 512 x 8-bit image. The third word is the name of the command to be invoked; 'graphs' marks up an image as described in the text below. So, 'zz rr graphs' means 'Call the ZZ parser; input an r image to the graphs command and get back an r image.' The rest of the Tcl command line specifies which of the pre-allocated images to use. (The 'g' image is an ROI, i.e., region-of-interest, image; almost all ZZ ops are done under ROI control.) So, 'r1 r1 g8' means 'Use r1 as input, use r1 as output (that is, mark up the input image itself), and do the operation wherever the corresponding pixel on image g8 --- that is, r8, used as an ROI --- is >0.' I don't think it is available online anywhere, but if you want to pick through the source code or even compile the whole shebang, I'll be happy to send it to you. Here's an excerpt from the manual (but I think I see some errors in the manual at this late date --- that's embarrassing ...): Example 6. Counting features. Problem Counting is a common task. The items counted are called “features”, and it is usually necessary to prepare images carefully so that features correspond in a one-to-one way with things that are the real objects to be counted. Here, however, we ignore image preparation and consider, instead, the mechanics of counting. The first counting exercise is to find out how many features are on the images in the directory ./cells. Approach First, let us define “feature”. A feature is the largest group of “set” (non-zero) pixels all of which can be reached by travelling from one set pixel to another along north-south-east-west (up-down-right-left) routes, starting from a given set pixel. The zz command that detects and marks such features on an image is “zz rr graphs R:src R:dest G:ROI”, so called because the mathematical term for such a feature is a “graph”. If all the pixels on an image are set, then there is only a single graph on the image, but it contains 262144 pixels (512 * 512). If pixels are set and clear (equal to zero) in a checkerboard pattern, then there will be 131072 (512 * 512 / 2) graphs, but each will contain only one pixel. Briefly explained, “zz rr graphs” starts in the upper-left corner of an image and scans each succeeding row left to right until it finds a set pixel, then finds all the set pixels attached to that through north, south, east, or west borders (“4-connected”). It then sets all pixels in that graph to 1 (0x01). After finding and marking graph 1, it starts scanning again at the pixel after the one where it first discovered graph 1, this time ignoring any pixels that already belong to a graph. The first 254 graphs that it finds will be marked uniquely; all graphs found after that, however, will be marked with the value 255 (0xff) and so cannot be distinguished from each other. The key to being able to count any number of graphs accurately is to process each image in stages, that is, find the number of graphs on an image and, if the number is greater than 254, erase the 254 graphs just found, repeating the process until 254 or fewer graphs are found. The Tcl language provides the means to set up control of this operation.
Let us begin to build the commands needed by reading a ZZ image file into an R image and detecting and marking the graphs. Before the processing loop, we declare and zero a variable to hold the total number of features in the image series. Within the processing loop, we begin by reading the image file into an R image and detecting and marking the graphs. zz ur to $inDir/$img r1 zz rr graphs r1 r1 g8 Next, we zero some variables to keep track of the counts, then use the “ra max” command to find out whether more than 254 graphs were detected. set nGraphs [ zz ra max r1 a1 g1 ] If nGraphs does equal 255, then the 254 accurately counted graphs should be added to the total, the graphs from 1 through 254 should be erased, and the count repeated for as many times as it takes to reduce the number of graphs below 255. while {$nGraphs == 255} { incr sumGraphs 254 zz rbr lt r1 155 r1 g1 0 255 set sumGraphs 0 zz rr graphs r1 r1 g8 set nGraphs [ zz ra max r1 a1 g8 ] } When the “while” loop exits, the variable nGraphs must hold a number less than 255, that is, a number of accurately counted graphs; this is added to the rising total of the number of features in the image series. incr sumGraphs $nGraphs After the processing loop, print out the total number of features found in the series. puts “Total number of features in $inDir \ images $beginImg through $endImg is $sumGraphs.” A: OpenCV has a Python interface that you might find useful. A: Connected component analysis may be what you are looking for.
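For reference, the ndimage.label() route mentioned in the update above boils down to a few lines. A minimal sketch, assuming img is a 2-D NumPy array in which non-zero pixels belong to particles (the small array below is just dummy data, not from the original lab2.py):

    import numpy as np
    from scipy import ndimage

    img = np.array([[1, 1, 1, 0, 0],
                    [1, 1, 1, 0, 1],
                    [0, 0, 1, 0, 1]])

    # Label 4-connected groups of non-zero pixels; n is the object count.
    labeled, n = ndimage.label(img)

    # find_objects gives one slice pair per object: its enclosing rectangle.
    for index, sl in enumerate(ndimage.find_objects(labeled), start=1):
        mask = (labeled[sl] == index).astype(np.uint8)  # 0/1 array for this object only
        height, width = mask.shape
        print(height, width)

The mask comparison against the label index keeps pixels of neighbouring objects out of the bounding rectangle, which matters when two particles' rectangles overlap.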
Simple object recognition
===SOLVED=== Thanks for your suggestions and comments. By working on the flood_fill algorithm given in the Beginning Python Visualization book (Chapter 9 - Image Processing) I have implemented what I wanted. I can count the objects, get enclosing rectangles for each object (and therefore heights and widths), and lastly can construct NumPy arrays or matrices for each of them. Although it is not an optimized approach, it does what I want. The source code (lab2.py) and the png file (lab2-particles.png) that I use have been put under http://code.google.com/p/ccnworks/source/browse/#svn/trunk/AtSc450. You need NumPy and PIL installed, and matplotlib to see the histogram. The core of the code lies within the objfind function, where the main recursive object search action occurs. One further update: SciPy's ndimage.label() does exactly what I want, too. Cheers to David-Warde Farley and Zachary Pincus from the NumPy and SciPy mailing lists for pointing this right into my eyes :) ============= Hello, I have an image that contains the shadows of ice particles measured by a particle spectrometer. I want to be able to identify each object, so that I can later classify the objects and use them further in my calculations. In essence, what I want to do is implement a simple fuzzy selection tool with which I can select each entity. How could I easily solve this problem? (Preferably using Python) Thanks. NOTE: In my question I am referring to each specific group of connected pixels as an object or entity. My intention is to extract them and create NumPy array representations as shown below. (Here I am using the top-left object; where a pixel exists use 1's, where not use 0's. This object's shape is 3 by 3, corresponding to a height of 3 pixels by a width of 3 pixels. These are projections of real ice particles onto a 2D domain, under the assumption that they are spherical with an equivalent radius of (height+width)/2; later some scalings --from pixels to actual sizes-- and volume calculations will follow.) import numpy as np np.array([[1,1,1], [1,1,1], [0,0,1]]) array([[1, 1, 1], [1, 1, 1], [0, 0, 1]]) Here is a section from the image which I am going to use. screenshot http://img43.imageshack.us/img43/2327/particles.png
[ "\nScan every square (e.g. from the top-left, left-to-right, top-to-bottom)\nWhen you hit a blue square then:\na. Record this square as a location of a new object\nb. Find all the other contiguous blue squares (e.g. by looking at the neighbours of this square, and the neighbours of those neighbours, etc.) and mark them as being part of the same object\nContinue to scan\nWhen you find another blue square, test to see whether it's part of a known object before going to step 2; alternatively in step 2b, erase any square after you've associated it with an object\n\n", "Looking at the image you provided, all you need to do next is to apply a simple region growing algorithm.\nIf I were using MATLAB, I would use bwlabel/bwboundaries functions. I believe there's an equivalent function somewhere in Numpy, or use OpenCV with python wrappers as suggested by @kwatford\n", "I used to do this kind of analysis on micrographs and eventually put everything I needed into an image processing and analysis package written in C, driven via Tcl. (It worked with 512 x 512 images only, which explains why 512 crops up so often. There were images with pixels of various sizes allocated, but most of the work was done with 8-bit pixels, which explains why there is that business of 0xff and maximum meaningful count of 254 on an image.) \nBriefly, the 'zz' at the begining of the Tcl commands sends the remainder of the line to the package's parser which calls the appropriate C routine with the given arguments. Right after the 'zz' is an argument that indicates the input and output of the command. (There can be multiple inputs but only a single output.) 'r' indicates a 512 x 512 x 8-bit image. The third word is the name of the command to be invoked; 'graphs' marks up an image as described in the text below. So, 'zz rr graphs' means 'Call the ZZ parser; input an r image to the graphs command and get back an r image.' The rest of the Tcl command line specifies which of the pre-allocated images to use. (The 'g' image is an ROI, i.e., region-of-interest, image; almost all ZZ ops are done under ROI control.) So, 'r1 r1 g8' means 'Use r1 as input, use r1 as output (that is, mark up the input image itself), and do the operation wherever the corresponding pixel on image g8 --- that is, r8, used as an ROI --- is >0.\nI don't think it is available online anywhere, but if you want to pick through the source code or even compile the whole shebang, I'll be happy to send it to you. Here's an excerpt from the manual (but I think I see some errors in the manual at this late date --- that's embarrassing ...):\nExample 6. Counting features.\nProblem\nCounting is a common task. The items counted are called “features”, and it is usually necessary to prepare images carefully so that features correspond in a one-to-one way with things that are the real objects to be counted. Here, however, we ignore image preparation and consider, instead, the mechanics of counting. The first counting exercise is to find out how many features are on the images in the directory ./cells?\nApproach\nFirst, let us define “feature”. A feature is the largest group of “set” (non-zero) pixels all of which can be reached by travelling from one set pixel to another along north-south-east-west (up-down-right-left) routes, starting from a given set pixel. The zz command that detects and marks such features on an image is “zz rr graphs R:src R:dest G:ROI”, so called because the mathematical term for such a feature is a “graph”. 
If all the pixels on an image are set, then there is only a single graph on the image, but it contains 262144 pixels (512 * 512). If pixels are set and clear (equal to zero) in a checkerboard pattern,\nthen there will be 131072 (512 * 512 / 2) graphs, but each will containing only one pixel.\nBriefly explained, “zz rr graphs” starts in the upper-left corner of an image and scans each\nsucceeding row left to right until it finds a set pixel, then finds all the set pixels attached to that through north, south, east, or west borders (“4-connected”). It then sets all pixels in that graph to 1 (0x01). After finding and marking graph 1, it starts scanning again at the pixel after the one where it first discovered graph 1, this time ignoring any pixels that already belong to a graph. The first 254 graphs that it finds will be marked uniquely; all graphs found after that, however, will be marked with the value 255 (0xff)\nand so cannot be distinguished from each other. The key to being able to count any number of graphs accurately is to process each image in stages, that is, find the number of graphs on an image and, if the number is greater than 254, erase the 254 graphs just found, repeating the process until 254 or fewer graphs are found. The Tcl language provides the means to set up control of this operation.\nLet us begin to build the commands needed by reading a ZZ image file into an R image and detecting and marking the graphs. Before the processing loop, we declare and zero a variable to hold the total number of features in the image series. Within the processing loop, we begin by reading the image file into an R image and detecting and marking the graphs.\nzz ur to $inDir/$img r1\nzz rr graphs r1 r1 g8\n\nNext, we zero some variables to keep track of the counts, then use the “ra max” command to find out whether more than 254 graphs were detected.\nset nGraphs [ zz ra max r1 a1 g1 ]\n\nIf nGraphs does equal 255, then the 254 accurately counted graphs should be added to the total, the graphs from 1 through 254 should be erased, and the count repeated for as many times as it takes to reduce the number of graphs below 255.\nwhile {$nGraphs == 255} {\n incr sumGraphs 254\n zz rbr lt r1 155 r1 g1 0 255 \n set sumGraphs 0\n zz rr graphs r1 r1 g8\n set nGraphs [ zz ra max r1 a1 g8 ]\n}\n\nWhen the “while” loop exits, the variable nGraphs must hold a number less than 255, that is, a number of accurately counted graphs; this is added to the rising total of the number of features in the image series.\nincr sumGraphs $nGraphs\n\nAfter the processing loop, print out the total number of features found in the series.\nputs “Total number of features in $inDir \\\nimages $beginImg through $endImg is $sumGraphs.”\n\nAfter the processing loop, print out the total number of features found in the series.\n", "OpenCV has a Python interface that you might find useful. \n", "Connected component analysis may be what you are looking for.\n" ]
[ 5, 3, 3, 2, 2 ]
[]
[]
[ "computer_vision", "image_processing", "pattern_recognition", "python" ]
stackoverflow_0001449139_computer_vision_image_processing_pattern_recognition_python.txt
Q: How to stream binary data in python I want to stream binary data using Python, but I have no idea how to achieve it. I created a Python socket program using SOCK_DGRAM. The problem with SOCK_STREAM is that it does not work over the internet, as our ISP doesn't allow a TCP server socket. I want to transmit screenshots periodically to a remote computer. My idea is to maintain a queue of binary data and have two threads write and read synchronously. I do not want to use VNC. How do I do it? I wrote a server socket and a client socket using SOCK_STREAM; they worked on localhost but did not work over the internet even when the respective IPs were used. We also tried running a Tomcat web server on one PC and accessing it from another PC over the internet, and it did not work. A: SOCK_STREAM is the correct way to stream data. What you're saying about ISPs makes very little sense; they don't control whether or not your machine listens on a certain port on an interface. Perhaps you're talking about firewall/addressing issues? If you insist on using UDP (and you shouldn't, because you'll have to worry about packets arriving out of order or not arriving at all) then you'll need to first use socket.bind and then socket.recvfrom in a loop to read data and keep track of open connections. It'll be hard work to do correctly. A: There are two problems here. First, you will need to be able to address the remote party. This is related to what you referred to as "does not work over the internet as most ISPs don't allow a TCP server socket". It is usually difficult because the other party could be placed behind a NAT or a firewall. As for the second problem, actually transmitting the data once you can make a TCP connection, a Python socket will work if you can address the remote party.
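For what it's worth, the sending side of such a stream needs only the standard socket and struct modules. A minimal sketch of a length-prefixed sender (host, port and the chunks iterable are placeholders, not from the original program); the 4-byte header lets the receiver know where each screenshot ends, something UDP would not give you for free:

    import socket
    import struct

    def send_chunks(host, port, chunks):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.connect((host, port))
        try:
            for chunk in chunks:
                # 4-byte big-endian length header, then the payload itself
                s.sendall(struct.pack('>I', len(chunk)) + chunk)
        finally:
            s.close()

As the answers point out, whether the listening side is reachable at all is a NAT/firewall question, not something the socket code itself can fix.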
How to stream binary data in python
I want to stream a binary data using python. I do not have any idea how to achieve it. I did created python socket program using SOCK_DGRAM. Problem with SOCK_STREAM is that it does not work over internet as our isp dont allow tcp server socket. I want to transmit screen shots periodically to remote computer. I have an idea of maintaining a Que of binary data and have two threads write and read synchronously. I do not want to use VNC . How do I do it? I did written server socket and client socket using SOCK_STREAM it was working on localhost and did not work over internet even if respective ip's are placed. We also did tried running tomcat web server on one pc and tried accessing via other pc on internet and was not working.
[ "SOCK_STREAM is the correct way to stream data.\nWhat you're saying about ISPs makes very little sense; they don't control whether or not your machine listens on a certain port on an interface. Perhaps you're talking about firewall/addressing issues?\nIf you insist on using UDP (and you shouldn't because you'll have to worry about packets arriving out of place or not arriving at all) then you'll need to first use socket.bind and then socket.recvfrom in a loop to read data and keep track of open connections. It'll be hard work to do correctly.\n", "There are two problems here.\nFirst problem, you will need to be able to address the remote party. This is related to what you referred to as \"does not work over Internet as most ISP don't allow TCP server socket\". It is usually difficult because the other party could be placed behind a NAT or a firewall.\nAs for the second problem, the problem of actual transmitting of data after you can make a TCP connection, python socket would work if you can address the remote party.\n" ]
[ 3, 2 ]
[]
[]
[ "python", "sockets" ]
stackoverflow_0001451349_python_sockets.txt
Q: Security concerns with a Python PAM module? I'm interested in writing a PAM module that would make use of a popular authentication mechanism for Unix logins. Most of my past programming experience has been in Python, and the system I'm interacting with already has a Python API. I googled around and found pam_python, which allows PAM modules to invoke the Python interpreter, thereby allowing PAM modules to be written essentially in Python. However, I've read that there are security risks when allowing a user to invoke Python code that runs with a higher access level than the user itself, such as SUID Python scripts. Are these concerns applicable to a Python PAM module as well? A: The security concerns that you mention aren't, per se, about "allowing the user to invoke Python code" which runs with high access levels, but allowing the user to exercise any form of control over the running of such code -- most obviously by injecting or altering the code itself, but, more subtly, also by controlling that process's environment (and therefore the path from which that code imports modules, for example). (Similar concerns would apply with C-written code if the user was able to control the path from which the latter loads .so's, for example -- though the OS may help more directly with such a concern.) It appears to me that pam_python is well armored against such risks, and so it should be safely usable for your purposes. However, do note that the docs point out...: Writing PAM modules from Python incurs a large performance penalty and requires Python to be installed, so it is not the best option for writing modules that will be used widely. So, if you're right that the mechanism you're supplying will be popular, you will probably want to code your module in C to avoid these issues. However, prototyping it in Python as a proof of concept with limited distribution, before firming it up in C, is a viable strategy.
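For a sense of the shape of such a module, here is a rough pam_python sketch; the entry points follow pam_python's documented interface, but verify the details against the docs for your version, and check_password() is purely a placeholder for the real mechanism's API:

    # Loaded by pam_python.so from the PAM stack, e.g.:
    #   auth required pam_python.so /lib/security/pam_my_auth.py

    def pam_sm_authenticate(pamh, flags, argv):
        try:
            user = pamh.get_user(None)
        except pamh.exception:
            return pamh.PAM_AUTH_ERR
        if user is None:
            return pamh.PAM_USER_UNKNOWN
        if check_password(user):      # placeholder for the real mechanism
            return pamh.PAM_SUCCESS
        return pamh.PAM_AUTH_ERR

    def pam_sm_setcred(pamh, flags, argv):
        return pamh.PAM_SUCCESS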
Security concerns with a Python PAM module?
I'm interested in writing a PAM module that would make use of a popular authentication mechanism for Unix logins. Most of my past programming experience has been in Python, and the system I'm interacting with already has a Python API. I googled around and found pam_python, which allows PAM modules to invoke the Python interpreter, thereby allowing PAM modules to be written essentially in Python. However, I've read that there are security risks when allowing a user to invoke Python code that runs with a higher access level than the user itself, such as SUID Python scripts. Are these concerns applicable to a Python PAM module as well?
[ "The security concerns that you mention aren't, per se, about \"allowing the user to invoke Python code\" which runs with high access levels, but allowing the user to exercise any form of control over the running of such code -- most obviously by injecting or altering the code itself, but, more subtly, also by controlling that process's environment (and therefore the path from which that code imports modules, for example). (Similar concerns would apply with C-written code if the user was able to control the path from which the latter loads .so's for example -- though the OS may help more directly with such a concern).\nIt appears to me that pam_python is well armored against such risks, and so it should be safely usable for your purposes. However, do note that the docs point out...:\n\nWriting PAM modules from Python incurs\n a large performance penalty and\n requires Python to be installed, so it\n is not the best option for writing\n modules that will be used widely.\n\nSo, if you're right that the mechanism you're supplying will be popular, you will probably want to code your module in C to avoid these issues. However, prototyping it in Python as a proof of concept with limited distribution, before firming it up in C, is a viable strategy.\n" ]
[ 17 ]
[]
[]
[ "pam", "python", "security", "suid" ]
stackoverflow_0001451224_pam_python_security_suid.txt
Q: Paste text to active window on Linux I want to write an application which pastes some text to the active window on some keystroke. How can I do this with Python or C++? I want to write an app which will work like a daemon and on some global keystroke paste some text to the current active application (text editor, browser, and jabber client). I think I will need to use some low-level X Window server API. A: Interacting between multiple applications interfaces can be tricky, so it may help to provide more information on specifically what you are trying to do. Nonetheless, you have a few options if you want to use the clipboard to accomplish this. On Windows, the Windows API provides GetClipboardData and SetClipboardData. To use these functions from Python you would want to take advantage of win32com. On Linux, you have two main options (that I know of) for interacting with the clipboard. PyGTK provides a gtk.Clipboard object. There is also a command line tool for setting the X "selection," XSel. You could interact with XSel using Python by means of os.popen or subprocess. See this guide for info on using gtk.Clipboard and xsel. In terms of how you actually use the clipboard. One application might poll the clipboard every so often looking for changes. If you want to get into real "enterprisey" architecture, you could use a message bus, like RabbitMQ, to communicate between the two applications. A: If you use Tkinter (a GUI library that works in Linux, Mac OS X, Windows, and everywhere), and make any widget (for example a text widget), the copy (Ctrl + C) and paste (Ctrl + V) commands automatically work. For example, the following code shows a Text widget, where you can type multi-line text, and copy and paste to other applications, or from other application (for example, OpenOffice). from Tkinter import * root = Tk() # Initialize GUI t = Text(root) # Create a text widget t.grid() # Show the widget root.mainloop() # Start the GUI I have tested the code with Windows and Linux/KDE 3.5.
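To make the xsel suggestion concrete: a small sketch that fills the X CLIPBOARD selection by piping text to the xsel binary (which must be installed; -b targets the clipboard and -i reads from stdin). Note this only loads the clipboard -- synthesizing the actual paste keystroke in the target window is a separate step (e.g. with a tool such as xdotool):

    import subprocess

    def set_clipboard(text):
        p = subprocess.Popen(['xsel', '-b', '-i'], stdin=subprocess.PIPE)
        p.communicate(text.encode('utf-8'))

    set_clipboard('text to paste into the active window')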
Paste text to active window on Linux
I want to write an application which pastes some text to the active window on some keystroke. How can I do this with Python or C++? I want to write an app which will work like a daemon and on some global keystroke paste some text to the current active application (text editor, browser, and jabber client). I think I will need to use some low-level X Window server API.
[ "Interacting between multiple applications interfaces can be tricky, so it may help to provide more information on specifically what you are trying to do. \nNonetheless, you have a few options if you want to use the clipboard to accomplish this. On Windows, the Windows API provides GetClipboardData and SetClipboardData. To use these functions from Python you would want to take advantage of win32com.\nOn Linux, you have two main options (that I know of) for interacting with the clipboard. PyGTK provides a gtk.Clipboard object. There is also a command line tool for setting the X \"selection,\" XSel. You could interact with XSel using Python by means of os.popen or subprocess. See this guide for info on using gtk.Clipboard and xsel.\nIn terms of how you actually use the clipboard. One application might poll the clipboard every so often looking for changes.\nIf you want to get into real \"enterprisey\" architecture, you could use a message bus, like RabbitMQ, to communicate between the two applications.\n", "If you use Tkinter (a GUI library that works in Linux, Mac OS X, Windows, and everywhere), and make any widget (for example a text widget), the copy (Ctrl + C) and paste (Ctrl + V) commands automatically work. For example, the following code shows a Text widget, where you can type multi-line text, and copy and paste to other applications, or from other application (for example, OpenOffice).\nfrom Tkinter import *\nroot = Tk() # Initialize GUI\nt = Text(root) # Create a text widget\nt.grid() # Show the widget\nroot.mainloop() # Start the GUI\n\nI have tested the code with Windows and Linux/KDE 3.5.\n" ]
[ 1, 0 ]
[]
[]
[ "c++", "linux", "python", "x11" ]
stackoverflow_0001450892_c++_linux_python_x11.txt
Q: Python: How to find presence of every list item in string What is the most pythonic way to find presence of every directory name ['spam', 'eggs'] in path e.g. "/home/user/spam/eggs" Usage example (doesn't work but explains my case): dirs = ['spam', 'eggs'] path = "/home/user/spam/eggs" if path.find(dirs): print "All dirs are present in the path" Thanks A: set.issubset: >>> set(['spam', 'eggs']).issubset('/home/user/spam/eggs'.split('/')) True A: Looks like you want something like...: if all(d in path.split('/') for d in dirs): ... This one-liner style is inefficient since it keeps splitting path for each d (and split makes a list, while a set is better for membership checking). Making it into a 2-liner: pathpieces = set(path.split('/')) if all(d in pathpieces for d in dirs): ... vastly improves performance. A: names = ['spam', 'eggs'] dir = "/home/user/spam/eggs" # Split into parts parts = [ part for part in dir.split('/') if part != '' ] # Rejoin found paths dirs = [ '/'.join(parts[0:n]) for (n, name) in enumerate(parts) if name in names ] Edit : If you just want to verify whether all dirs exist: parts = "/home/user/spam/eggs".split('/') print all(dir in parts for dir in ['spam', 'eggs']) A: Maybe this is what you want? dirs = ['spam', 'eggs'] path = "/home/user/spam/eggs" present = [dir for dir in dirs if dir in path] A: A one liner using generators (using textual lookup and not treating names as anything to do with the filesystem - your request is not totally clear to me) [x for x in dirs if x in path.split( os.sep )]
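Putting the pieces of the answers together into one portable helper (os.sep instead of a hard-coded '/'; shown here for POSIX paths):

    import os

    def dirs_in_path(dirs, path):
        return set(dirs).issubset(path.split(os.sep))

    print(dirs_in_path(['spam', 'eggs'], '/home/user/spam/eggs'))  # True
    print(dirs_in_path(['spam', 'ham'], '/home/user/spam/eggs'))   # False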
Python: How to find presence of every list item in string
What is the most pythonic way to find presence of every directory name ['spam', 'eggs'] in path e.g. "/home/user/spam/eggs" Usage example (doesn't work but explains my case): dirs = ['spam', 'eggs'] path = "/home/user/spam/eggs" if path.find(dirs): print "All dirs are present in the path" Thanks
[ "set.issubset:\n>>> set(['spam', 'eggs']).issubset('/home/user/spam/eggs'.split('/'))\nTrue\n\n", "Looks like you want something like...:\nif all(d in path.split('/') for d in dirs):\n ...\n\nThis one-liner style is inefficient since it keeps splitting path for each d (and split makes a list, while a set is better for membership checking). Making it into a 2-liner:\npathpieces = set(path.split('/'))\nif all(d in pathpieces for d in dirs):\n ...\n\nvastly improves performance.\n", "names = ['spam', 'eggs']\ndir = \"/home/user/spam/eggs\"\n\n# Split into parts\nparts = [ part for part in dir.split('/') if part != '' ]\n\n# Rejoin found paths\ndirs = [ '/'.join(parts[0:n]) for (n, name) in enumerate(parts) if name in names ]\n\nEdit : If you just want to verify whether all dirs exist:\nparts = \"/home/user/spam/eggs\".split('/')\n\nprint all(dir in parts for dir in ['spam', 'eggs'])\n\n", "Maybe this is what you want?\ndirs = ['spam', 'eggs']\npath = \"/home/user/spam/eggs\"\npresent = [dir for dir in dirs if dir in path]\n\n", "A one liner using generators (using textual lookup and not treating names as anything to do with the filesystem - your request is not totally clear to me)\n[x for x in dirs if x in path.split( os.sep )]\n\n" ]
[ 9, 5, 2, 1, 0 ]
[]
[]
[ "list", "path", "python" ]
stackoverflow_0001451390_list_path_python.txt
Q: Setting a colour scale in ipython I am new to Python and am having trouble finding the correct syntax to use. I want to plot some supernova data onto a hammer projection. The data has coordinates alpha and beta. For each data point there is also a value delta describing a property of the SN. I would like to create a colour scale that ranges from the min. value of delta to the max. value of delta and goes from red to violet. This would mean that when I came to plot the data I could simply write: subplot(111,projection="hammer") p=plot([alpha],[beta],'o',mfc='delta') where delta would represent a colour somewhere in the spectrum between red and violet. I.e. if delta = 0 the marker would be red, if delta = delta max the marker would be violet, and if delta = (delta max)/2 the marker would be yellow. Could anyone help me out with the syntax to do this? Many thanks Angela A: If you are thinking of a fixed color table, just map your delta values into the index range for that table. For example, you can construct a color table with color names recognized by your plot package: >>> colors = ['red', 'orange', 'yellow', 'green', 'blue', 'indigo', 'violet'] The range of possible delta values, from your example, is 0 to delta.max. Mapping that to the length of the color table gives the step: >>> step = delta.max / len(colors) And the computation required to get a color name matching a given data point is: >>> color = colors[math.trunc(data / step)] This method works for any set of pre-selected colors, for example RGB values expressed as hex numbers. A quick Google search discovered Johnny Lin's Python Library. It contains color maps, including Rainbow (red to violet, 790-380 nm). You also need his wavelen2rgb.py (Calculate RGB values given the wavelength of visible light). Note that this library generates the colors as RGB triplets - you have to figure out how your plotting library expects such color values. A: I'm not familiar with plotting, but a nice method for generating rainbow colors is using the HSV (hue, saturation, value) colorspace. Set saturation and value to the maximum values, and vary the hue. import colorsys def color(value): return colorsys.hsv_to_rgb(value / delta.max / (1.1), 1, 1) This will get you RGB triplets for the rainbow colors. The (1.1) is there to end at violet at delta.max instead of going all the way back to red. So, instead of selecting from a list, you call the function. You can also try experimenting with the saturation and value (the 1's in the function above) if the returned colors are too bright. A: Using the wavelen2rgb function of Johnny Lin's Python Library (as gimel suggested), the following code plots the SNs as filled circles. The code uses Tkinter, which is always available with Python. You can get wavelen2rgb.py here. def sn(): "Plot a diagram of supernovae, assuming wavelengths between 380 and 645nm." from Tkinter import * from random import Random root = Tk() # initialize gui dc = Canvas(root) # Create a canvas dc.grid() # Show canvas r = Random() # initialize random number generator for i in xrange(100): # plot 100 random SNs a = r.randint(10, 400) b = r.randint(10, 200) wav = r.uniform(380.0, 645.0) rgb = wavelen2rgb(wav, MaxIntensity=255) # Calculate color as RGB col = "#%02x%02x%02x" % tuple(rgb) # Calculate color in the format that Tkinter expects dc.create_oval(a-5, b-5, a+5, b+5, outline=col, fill=col) # Plot a filled circle root.mainloop() sn() Here's the output: alt text http://img38.imageshack.us/img38/3449/83921879.jpg
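As an aside, if matplotlib itself is an option, the usual way to get a value-driven colour scale is to hand delta to scatter's c argument together with a colormap, instead of computing the colours by hand. A sketch with dummy data (rainbow_r is used here on the assumption that low values should come out red and high values violet):

    import numpy as np
    import matplotlib.pyplot as plt

    alpha = np.random.uniform(-np.pi, np.pi, 50)        # dummy coordinates
    beta = np.random.uniform(-np.pi / 2, np.pi / 2, 50)
    delta = np.random.uniform(0.0, 1.0, 50)             # dummy SN property

    fig = plt.figure()
    ax = fig.add_subplot(111, projection='hammer')
    sc = ax.scatter(alpha, beta, c=delta, cmap='rainbow_r')  # delta drives the colour
    cb = fig.colorbar(sc, ax=ax)
    cb.set_label('delta')
    plt.show()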
Setting a colour scale in ipython
I am new to Python and am having trouble finding the correct syntax to use. I want to plot some supernova data onto a hammer projection. The data has coordinates alpha and beta. For each data point there is also a value delta describing a property of the SN. I would like to create a colour scale that ranges from the min. value of delta to the max. value of delta and goes from red to violet. This would mean that when I came to plot the data I could simply write: subplot(111,projection="hammer") p=plot([alpha],[beta],'o',mfc='delta') where delta would represent a colour somewhere in the spectrum between red and violet. I.e. if delta = 0 the marker would be red, if delta = delta max the marker would be violet, and if delta = (delta max)/2 the marker would be yellow. Could anyone help me out with the syntax to do this? Many thanks Angela
[ "If you are thinking of a fixed color table, just map your delta values into the index range for that table. For example, you can construct a color table with color names recognized by your plot package:\n>>> colors = ['red', 'orange', 'yellow', 'green', 'blue', 'indigo', 'violet']\n\nThe range of possible delta values, from your example, is 0 to delta.max. Mapping that to the length of the color tables, gives the step:\n>>> step = delta.max / len(colors)\n\nAnd the computation required to get a color name matching a given data point is:\n>>> color = colors[math.trunc(data / step)]\n\nThis method works for any set of pre-selected colors, for example RGB values expressed as hex numbers.\nA quick google search discovered Johnny Lin's Python Library. It contains color maps, including Rainbow (red to violet, 790-380 nm).\nYou also need his wavelen2rgb.py (Calculate RGB values given the wavelength of visible light). Note that this library generates the colors as RGB triplets - you have to figure how your plotting library expects such color values.\n", "I'm not familiar with plotting, but a nice method for generating rainbow colors is using the HSV (hue, saturation, value) colorspace. Set saturation and value to the maximum values, and vary the hue.\nimport colorsys\n\ndef color(value):\n return colorsys.hsv_to_rgb(value / delta.max / (1.1), 1, 1)\n\nThis wil get you RGB triplets for the rainbow colors. The (1.1) is there to end at violet at delta.max instead of going all the way back to red.\nSo, instead of selecting from a list, you call the function.\nYou can also try experimenting with the saturation and value (the 1's in the function above) if the returned colors are too bright.\n", "Using the wavelen2rgb function of Johnny Lin's Python Library (as gimel suggested), the following code plots the SNs as filled circles. The code uses Tkinter which is always available with Python. You can get wavelen2rgb.py here.\ndef sn():\n \"Plot a diagram of supernovae, assuming wavelengths between 380 and 645nm.\"\n from Tkinter import *\n from random import Random\n root = Tk() # initialize gui\n dc = Canvas(root) # Create a canvas\n dc.grid() # Show canvas\n r = Random() # intitialize random number generator\n for i in xrange(100): # plot 100 random SNs\n a = r.randint(10, 400)\n b = r.randint(10, 200)\n wav = r.uniform(380.0, 645.0)\n rgb = wavelen2rgb(wav, MaxIntensity=255) # Calculate color as RGB\n col = \"#%02x%02x%02x\" % tuple(rgb) # Calculate color in the fornat that Tkinter expects\n dc.create_oval(a-5, b-5, a+5, b+5, outline=col, fill=col) # Plot a filled circle\n root.mainloop()\nsn()\n\nHere's the outpout:\nalt text http://img38.imageshack.us/img38/3449/83921879.jpg\n" ]
[ 2, 1, 1 ]
[]
[]
[ "python" ]
stackoverflow_0001450874_python.txt
Q: How to add bi-directional manytomanyfields in django admin? In my models.py I have something like: class LocationGroup(models.Model): name = models.CharField(max_length=200) class Report(models.Model): name = models.CharField(max_length=200) locationgroups = models.ManyToManyField(LocationGroup) admin.py (standard): admin.site.register(LocationGroup) admin.site.register(Report) When I enter the admin page for Report, it shows a nice multiple choice field. How can I add the same multiple choice field in LocationGroup? I can access all Reports by calling LocationGroup.report_set.all() A: The workaround I found was to follow the instructions for ManyToManyFields with intermediary models. Even though you're not using the 'through' model feature, just pretend as if you were and create a stub model with the necessary ForeignKey. # models: make sure the naming convention matches what ManyToManyField would create class Report_LocationGroups(models.Model): locationgroup = models.ForeignKey(LocationGroup) report = models.ForeignKey(Report) # admin class ReportInline(admin.TabularInline): model = models.Report_LocationGroups class LocationGroupAdmin(admin.ModelAdmin): inlines = ReportInline, A: I think you can combine this sample code (source), which breaks sync_db, class ItemType(meta.Model): name = meta.CharField(maxlength=100) description = meta.CharField(maxlength=250) properties = meta.ManyToManyField('PropertyType', db_table='app_propertytype_itemtypes') class PropertyType(meta.Model): name = meta.CharField(maxlength=100) itemtypes = meta.ManyToManyField(ItemType) with this snippet class ManyToManyField_NoSyncdb(models.ManyToManyField): def __init__(self, *args, **kwargs): super(ManyToManyField_NoSyncdb, self).__init__(*args, **kwargs) self.creates_table = False to obtain something like class ItemType(meta.Model): name = meta.CharField(maxlength=100) description = meta.CharField(maxlength=250) properties = meta.ManyToManyField_NoSyncdb('PropertyType', db_table='app_propertytype_itemtypes') class PropertyType(meta.Model): name = meta.CharField(maxlength=100) itemtypes = meta.ManyToManyField(ItemType) Disclaimer: this is just a rough idea Edit: There is probably something to do with Django's 1.1 Proxy Models A: I think what you are looking for is admin inlines. In your admin.py you will want to add something like this: class LocationGroupInline(admin.TabularInline): model = LocationGroup class ReportAdmin(admin.ModelAdmin): inlines = [ LocationGroupInline, ] admin.site.register(Report, ReportAdmin) admin.site.register(LocationGroup) There are many options to include in LocationGroupInline if you want to further configure the inline display of the related model. Two of these options are form and formset, which will let you use custom Django Form and FormSet classes to further customize the look and feel of the inline model admin. Using this you can create a simple Form that displays just the multiple choice field you want (except that an M2M field cannot be displayed as a single drop down, but rather as a multiple select box). For example: class MyLocationGroupForm(forms.Form): location = forms.ModelMultipleChoiceField( queryset=LocationGroup.objects.all()) class LocationGroupInline(admin.TabularInline): model = LocationGroup form = MyLocationGroupForm
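For later readers: recent Django versions let you point an inline at the automatically created join table via the M2M field's through attribute, so the hand-written stub model in the first answer is no longer needed. Roughly (the import path is illustrative):

    from django.contrib import admin
    from myapp.models import LocationGroup, Report  # illustrative path

    class ReportInline(admin.TabularInline):
        model = Report.locationgroups.through  # the auto-generated join table

    class LocationGroupAdmin(admin.ModelAdmin):
        inlines = [ReportInline]

    admin.site.register(LocationGroup, LocationGroupAdmin)
    admin.site.register(Report)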
How to add bi-directional manytomanyfields in django admin?
In my models.py I have something like: class LocationGroup(models.Model): name = models.CharField(max_length=200) class Report(models.Model): name = models.CharField(max_length=200) locationgroups = models.ManyToManyField(LocationGroup) admin.py (standard): admin.site.register(LocationGroup) admin.site.register(Report) When I enter the admin page for Report, it shows a nice multiple choice field. How can I add the same multiple choice field in LocationGroup? I can access all Reports by calling LocationGroup.report_set.all()
[ "The workaround I found was to follow the instructions for ManyToManyFields with intermediary models. Even though you're not using the 'through' model feature, just pretend as if you were and create a stub model with the necessary ForeignKey.\n# models: make sure the naming convention matches what ManyToManyField would create\nclass Report_LocationGroups(models.Model):\n locationgroup = models.ForeignKey(LocationGroup)\n report = models.ForeignKey(Report)\n\n# admin\nclass ReportInline(admin.TabularInline):\n model = models.Report_LocationGroups\n\nclass LocationGroupAdmin(admin.ModelAdmin):\n inlines = ReportInline,\n\n", "I think yon can combine this sample code (source) wich breaks sync_db\nclass ItemType(meta.Model):\n name = meta.CharField(maxlength=100)\n description = meta.CharField(maxlength=250)\n properties = meta.ManyToManyField('PropertyType',\n db_table='app_propertytype_itemtypes')\n\nclass PropertyType(meta.Model):\n name = meta.CharField(maxlength=100)\n itemtypes = meta.ManyToManyField(ItemType)\n\nwith this snippet\nclass ManyToManyField_NoSyncdb(models.ManyToManyField):\n def __init__(self, *args, **kwargs):\n super(ManyToManyField_NoSyncdb, self).__init__(*args, **kwargs)\n self.creates_table = False\n\nto obtain something like\nclass ItemType(meta.Model):\n name = meta.CharField(maxlength=100)\n description = meta.CharField(maxlength=250)\n properties = meta.ManyToManyField_NoSyncdb('PropertyType',\n db_table='app_propertytype_itemtypes')\n\nclass PropertyType(meta.Model):\n name = meta.CharField(maxlength=100)\n itemtypes = meta.ManyToManyField(ItemType)\n\nDisclaimer : this is just a rough idea\nEdit: There is probably someting to do with Django's 1.1 Proxy Models\n", "I think what are you are looking for is admin inlines. In your admin.py you will want to add something like this:\nclass LocationGroupInline(admin.TabularInline):\n model = LocationGroup\n\nclass ReportAdmin(admin.ModelAdmin):\n inlines = [ LocationGroupInline, ]\nadmin.site.register(Report, ReportAdmin)\nadmin.site.register(LocationGroup)\n\nThere are many options to include in LocationGroupInline if you want to further configure the inline display of the related model. Two of these options are form and formset, which will let you use custom Django Form and FormSet classes to further customize the look and feel of the inline model admin. Using this you can create a simple Form that displays just the multiple choice field you want (except for a M2M field it will not be possible to display as a single drop down, but a multiple select box). For example:\nclass MyLocationGroupForm(forms.Form):\n location = forms.MultipleModelChoiceField(\n queryset=LocationGroup.objects.all())\n\nclass LocationGroupInline(admin.TabularInline):\n model = LocationGroup\n form = MyLocationGroupForm\n\n" ]
[ 8, 2, 1 ]
[]
[]
[ "django", "django_models", "python" ]
stackoverflow_0001339409_django_django_models_python.txt
Q: Simple scraping of youtube xml to get a Python list of videos I have an xml feed, say: http://gdata.youtube.com/feeds/api/videos/-/bass/fishing/ I want to get the list of hrefs for the videos: ['http://www.youtube.com/watch?v=aJvVkBcbFFY', 'ht....', ... ] A: from xml.etree import cElementTree as ET import urllib def get_bass_fishing_URLs(): results = [] data = urllib.urlopen( 'http://gdata.youtube.com/feeds/api/videos/-/bass/fishing/') tree = ET.parse(data) ns = '{http://www.w3.org/2005/Atom}' for entry in tree.findall(ns + 'entry'): for link in entry.findall(ns + 'link'): if link.get('rel') == 'alternate': results.append(link.get('href')) return results as it appears that what you get are the so-called "alternate" links. The many small, possible variations if you want something slightly different, I hope, should be clear from the above code (plus the standard Python library docs for ElementTree). A: Have a look at Universal Feed Parser, which is an open source RSS and Atom feed parser for Python. A: In such a simple case, this should be enough: import re, urllib2 request = urllib2.urlopen("http://gdata.youtube.com/feeds/api/videos/-/bass/fishing/") text = request.read() videos = re.findall("http:\/\/www\.youtube\.com\/watch\?v=[\w-]+", text) If you want to do more complicated stuff, parsing the XML will be better suited than regular expressions A: import urllib from xml.dom import minidom xmldoc = minidom.parse(urllib.urlopen('http://gdata.youtube.com/feeds/api/videos/-/bass/fishing/')) links = xmldoc.getElementsByTagName('link') hrefs = [] for link in links: if link.getAttribute('rel') == 'alternate': hrefs.append( link.getAttribute('href') ) hrefs
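To flesh out the Universal Feed Parser suggestion, the whole task reduces to a couple of lines (assuming the feedparser package is installed):

    import feedparser

    d = feedparser.parse(
        'http://gdata.youtube.com/feeds/api/videos/-/bass/fishing/')
    links = [entry.link for entry in d.entries]  # the 'alternate' hrefs
    print(links)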
Simple scraping of youtube xml to get a Python list of videos
I have an xml feed, say: http://gdata.youtube.com/feeds/api/videos/-/bass/fishing/ I want to get the list of hrefs for the videos: ['http://www.youtube.com/watch?v=aJvVkBcbFFY', 'ht....', ... ]
[ "from xml.etree import cElementTree as ET\nimport urllib\n\ndef get_bass_fishing_URLs():\n results = []\n data = urllib.urlopen(\n 'http://gdata.youtube.com/feeds/api/videos/-/bass/fishing/')\n tree = ET.parse(data)\n ns = '{http://www.w3.org/2005/Atom}'\n for entry in tree.findall(ns + 'entry'):\n for link in entry.findall(ns + 'link'):\n if link.get('rel') == 'alternate':\n results.append(link.get('href'))\n return results\n\nas it appears that what you get are the so-called \"alternate\" links. The many small, possible variations if you want something slightly different, I hope, should be clear from the above code (plus the standard Python library docs for ElementTree).\n", "Have a look at Universal Feed Parser, which is an open source RSS and Atom feed parser for Python.\n", "In such a simple case, this should be enough:\nimport re, urllib2\nrequest = urllib2.urlopen(\"http://gdata.youtube.com/feeds/api/videos/-/bass/fishing/\")\ntext = request.read()\nvideos = re.findall(\"http:\\/\\/www\\.youtube\\.com\\/watch\\?v=[\\w-]+\", text)\n\nIf you want to do more complicated stuff, parsing the XML will be better suited than regular expressions\n", "import urllib\nfrom xml.dom import minidom\nxmldoc = minidom.parse(urllib.urlopen('http://gdata.youtube.com/feeds/api/videos/-/bass/fishing/'))\n\nlinks = xmldoc.getElementsByTagName('link')\nhrefs = []\nfor link in links:\n if link.getAttribute('rel') == 'alternate':\n hrefs.append( link.getAttribute('href') )\n\nhrefs\n\n" ]
[ 7, 3, 3, 1 ]
[]
[]
[ "python", "xml", "youtube" ]
stackoverflow_0001452144_python_xml_youtube.txt
Q: How to write a large amount of data in a tarfile in python without using temporary file I've written a small cryptographic module in Python whose task is to cipher a file and put the result in a tarfile. The original file to encrypt can be quite large, but that's not a problem because my program only needs to work with a small block of data at a time, which can be encrypted on the fly and stored. I'm looking for a way to avoid doing it in two passes, first writing all the data to a temporary file and then inserting the result into a tarfile. Basically I do the following (where generator_encryptor is a simple generator that yields chunks of data read from sourcefile): t = tarfile.open("target.tar", "w") tmp = file('content', 'wb') for chunk in generator_encryptor("sourcefile"): tmp.write(chunk) tmp.close() t.add(content) t.close() I'm a bit annoyed at having to use a temporary file, as I feel it should be easy to write blocks directly into the tar file; but collecting every chunk in a single string and using something like t.addfile('content', StringIO(bigcipheredstring)) seems excluded because I can't guarantee that I have enough memory to hold bigcipheredstring. Any hint on how to do that? A: You can create your own file-like object and pass it to TarFile.addfile. Your file-like object will generate the encrypted contents on the fly in the fileobj.read() method. A: Huh? Can't you just use the subprocess module to run a pipe through to tar? That way, no temporary file should be needed. Of course, this won't work if you can't generate your data in small enough chunks to fit in RAM, but if you have that problem, then tar isn't the issue. A: Basically, using a file-like object and passing it to TarFile.addfile does the trick, but there are still some open issues. I need to know the full encrypted file size at the beginning. The way tarfile accesses the read method is such that the custom file-like object must always return full read buffers, or tarfile supposes it's the end of the file. It leads to some really inefficient buffer copying in the code of the read method, but it's either that or changing the tarfile module. The resulting code is below; basically I had to write a wrapper class that transforms my existing generator into a file-like object. I also added the GeneratorEncryptor class in my example to make the code complete. You can notice it has a __len__ method that returns the length of the written file (but understand that it's just a dummy placeholder that does nothing useful). import tarfile class GeneratorEncryptor(object): """Dummy class for testing purposes The real one performs on-the-fly encryption of the source file """ def __init__(self, source): self.source = source self.BLOCKSIZE = 1024 self.NBBLOCKS = 1000 def __call__(self): for c in range(0, self.NBBLOCKS): yield self.BLOCKSIZE * str(c%10) def __len__(self): return self.BLOCKSIZE * self.NBBLOCKS class GeneratorToFile(object): """Transform a data generator into a conventional file handle """ def __init__(self, generator): self.buf = '' self.generator = generator() def read(self, size): chunk = self.buf while len(chunk) < size: try: chunk = chunk + self.generator.next() except StopIteration: self.buf = '' return chunk self.buf = chunk[size:] return chunk[:size] t = tarfile.open("target.tar", "w") tmp = file('content', 'wb') generator = GeneratorEncryptor("source") ti = t.gettarinfo(name = "content") ti.size = len(generator) t.addfile(ti, fileobj = GeneratorToFile(generator)) t.close() A: I guess you need to understand how the tar format works, and handle the tar writing yourself. 
Maybe this can be helpful? http://mail.python.org/pipermail/python-list/2001-August/100796.html
How to write a large amount of data in a tarfile in python without using temporary file
I've written a small cryptographic module in Python whose task is to cipher a file and put the result in a tarfile. The original file to encrypt can be quite large, but that's not a problem because my program only needs to work with a small block of data at a time, which can be encrypted on the fly and stored. I'm looking for a way to avoid doing it in two passes, first writing all the data to a temporary file and then inserting the result into a tarfile. Basically I do the following (where generator_encryptor is a simple generator that yields chunks of data read from sourcefile): t = tarfile.open("target.tar", "w") tmp = file('content', 'wb') for chunk in generator_encryptor("sourcefile"): tmp.write(chunk) tmp.close() t.add(content) t.close() I'm a bit annoyed at having to use a temporary file, as I feel it should be easy to write blocks directly into the tar file; but collecting every chunk in a single string and using something like t.addfile('content', StringIO(bigcipheredstring)) seems excluded because I can't guarantee that I have enough memory to hold bigcipheredstring. Any hint on how to do that?
[ "You can create an own file-like object and pass to TarFile.addfile. Your file-like object will generate the encrypted contents on the fly in the fileobj.read() method.\n", "Huh? Can't you just use the subprocess module to run a pipe through to tar? That way, no temporary file should be needed. Of course, this won't work if you can't generate your data in small enough chunks to fit in RAM, but if you have that problem, then tar isn't the issue.\n", "Basically using a file-like object and passing it to TarFile.addfile do the trick, but there is still some issues open.\n\nI need to known the full encrypted file size at the beginning\nthe way tarfile access to read method is such that the custom file-like object must always return full read buffers, or tarfile suppose it's end of file. It leads to some really inefficient buffer copying in the code of read method, but it's either that or change tarfile module.\n\nThe resulting code is below, basically I had to write a wrapper class that transform my existing generator into a file-like object. I also added the GeneratorEncrypto class in my example to make code compleat. You can notice it has a len method that returns the length of the written file (but understand it's just a dummy placeholder that does nothing usefull).\nimport tarfile\n\nclass GeneratorEncryptor(object):\n \"\"\"Dummy class for testing purpose\n\n The real one perform on the fly encryption of source file\n \"\"\"\n def __init__(self, source):\n self.source = source\n self.BLOCKSIZE = 1024\n self.NBBLOCKS = 1000\n\n def __call__(self):\n for c in range(0, self.NBBLOCKS):\n yield self.BLOCKSIZE * str(c%10)\n\n def __len__(self):\n return self.BLOCKSIZE * self.NBBLOCKS\n\nclass GeneratorToFile(object):\n \"\"\"Transform a data generator into a conventional file handle\n \"\"\"\n def __init__(self, generator):\n self.buf = ''\n self.generator = generator()\n\n def read(self, size):\n chunk = self.buf\n while len(chunk) < size:\n try:\n chunk = chunk + self.generator.next()\n except StopIteration:\n self.buf = ''\n return chunk\n self.buf = chunk[size:]\n return chunk[:size]\n\nt = tarfile.open(\"target.tar\", \"w\")\ntmp = file('content', 'wb')\ngenerator = GeneratorEncryptor(\"source\")\nti = t.gettarinfo(name = \"content\")\nti.size = len(generator)\nt.addfile(ti, fileobj = GeneratorToFile(generator))\nt.close()\n\n", "I guess you need to understand how the tar format works, and handle the tar writing yourself. Maybe this can be helpful?\nhttp://mail.python.org/pipermail/python-list/2001-August/100796.html\n" ]
[ 4, 2, 2, 1 ]
[]
[]
[ "python", "tar" ]
stackoverflow_0001389681_python_tar.txt
Q: Python - Writing pseudocode? How would you write pseudocode for drawing an 8-by-8 checkerboard of squares, where none of the squares have to be full? (Can all be empty) I don't quite get the pseudocode concept. A: I would be even more generic, e.g. Loop with x from 1 to 8 Loop with y from 1 to 8 draw square at x, y A: Pseudocode is writing out the code in a form that is like code but not quite code. So for opening a file and printing out its lines of text: if file exists(path_to_file) then : open (path_to_file) for each line in file : print the line of the file All you should do is create the sequence of steps needed for your problem and write it out like that. Since you mention Python, just use a more Python-like syntax in your pseudocode. I suspect that your problem is meant to encourage you to consider how to make functions and classes, and writing the pseudocode first will help you do this. A: Wikipedia articles use pseudocode a lot, quite successfully. There is no standard for pseudocode on Wikipedia, and syntax varies, but here is some general information with examples: Algorithms on Wikipedia Here are two good examples of articles with pseudocode (more): Quicksort Description of SHA-1 Using a Wikipedia-like style, I'd do: for i from 0 to 7 for j from 0 to 7 if (i + j) is even then paint square (i, j) black else paint square (i, j) white (Marking the end of if or end of for with 'end if' or 'repeat'/'end for' is a matter of style, I guess). A: Just write something that looks like a hybrid between code and normal human explanation. for i from 1 to 8 for j from 1 to 8 print "[ ]" print "\n" A: I'm guessing this is a class assignment, right? In short, pseudocode is very similar to an outline. It's the structure of how you're going to go about solving the problem, without the specific details. In this case, you'd probably use a couple of for-loops, and sketch out the drawing in there... for x in range(0,10): for y in range(0,10): #print out the square (x,y)
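Once pseudocode like the above is settled, translating it into real Python is nearly mechanical; for an all-empty 8-by-8 board, for example:

    SIZE = 8
    for row in range(SIZE):
        line = ''
        for col in range(SIZE):
            line += '[ ]'   # every square empty, per the question
        print(line)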
Python - Writing pseudocode?
How would you write pseudocode for drawing an 8-by-8 checkerboard of squares, where none of the squares have to be full? (Can all be empty) I don't quite get the pseudocode concept.
[ "I would be even more generic eg.\nLoop with x from 1 to 8\n Loop with y from 1 to 8\n draw square at x, y\n\n", "Pseudo code is writing out the code in form that is like code but not quite code. So for opening a file and printing printing out its lines of text\nif file exists(path_to_file) then :\n open (path_to_file)\n for each line in file : print the line of the file\n\nAll you should do is create the sequence of steps needed for your problem and write it out like that. Since you mention python, just use use a more python like syntax in your pseudo code.\nI suspect that you problem will be to encourage you to consider how to make functions and classes, and writing the pseudo code first will help you do this.\n", "Wikipedia articles use Pseudocode a lot, quite successfully. There is no standard for Pseudocode on wikipedia, and syntax varies, but here is some general information with examples: Algorithms on Wikipedia\nHere are two good examples of articles with Pseudocode (more):\n\nQuicksort\nDescription of SHA-1\n\nUsing Wikipedia-like style, I'd do:\nfor i from 0 to 7\n for j from 0 to 7\n if (i + j) is even then\n paint square (i, j) black\n else\n paint square (i, j) white\n\n(Marking end of if or end of for with 'end if' or 'repeat'/'end for' is a matter of style I guess).\n", "Just write something that looks like a hybrid between code and normal human explanation.\nfor i from 1 to 8\n for j from 1 to 8\n print \"[ ]\"\n print \"\\n\"\n\n", "I'm guessing this is a class assignment, right? \nIn short, pseudocode is very similar to an outline. It's the structure of how you're going to go about solving the problem, without the specific details.\nIn this case, you'd probably use a couple for-loops, and sketch out the drawing and there...\nfor x in range(0,10):\n for y in range(0,10):\n #print out the square (x,y)\n\n" ]
[ 7, 5, 5, 2, 1 ]
[]
[]
[ "pseudocode", "python" ]
stackoverflow_0001452237_pseudocode_python.txt
Q: Evaluation of boolean expressions in Python What truth value do objects evaluate to in Python? Related Questions Boolean Value of Objects in Python: Discussion about overriding the way it is evaluated A: Any object can be tested for truth value, for use in an if or while condition or as operand of the Boolean operations below. The following values are considered false: None False zero of any numeric type, for example, 0, 0L, 0.0, 0j. any empty sequence, for example, '', (), []. any empty mapping, for example, {}. instances of user-defined classes, if the class defines a __nonzero__() or __len__() method, when that method returns the integer zero or bool value False. All other values are considered true -- so objects of many types are always true. Operations and built-in functions that have a Boolean result always return 0 or False for false and 1 or True for true, unless otherwise stated. (Important exception: the Boolean operations "or" and "and" always return one of their operands.) https://docs.python.org/2/library/stdtypes.html#truth-value-testing And as mentioned, you can override this for custom objects by defining __nonzero__. A: Update: Removed all information duplicated in Meder's post For custom objects in Python < 3.0, define __nonzero__ to change how it is evaluated. In Python 3.0 this is __bool__ (Reference by e-satis) It is important to understand what is meant by evaluate. One meaning is when an object is explicitly cast to a bool or implicitly cast by its location (in an if or while loop). Another is == evaluation. 1==True, 0==False, nothing else is equal via ==. >>> None==False False >>> 1==True True >>> 0==False True >>> 2==False False >>> 2==True False Finally, for is, only True or False are themselves.
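A small demonstration of the user-defined-class rule quoted above; __nonzero__ is the Python 2 hook and __bool__ its Python 3 rename, so the class below defines both and the sketch runs on either:

    class Box(object):
        def __init__(self, items):
            self.items = items
        def __bool__(self):              # Python 3 hook
            return len(self.items) > 0
        __nonzero__ = __bool__           # Python 2 hook

    print(bool(Box([])))        # False
    print(bool(Box([1, 2])))    # True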
Evaluation of boolean expressions in Python
What truth value do objects evaluate to in Python? Related Questions Boolean Value of Objects in Python: Discussion about overriding the way it is evaluated
[ "\nAny object can be tested for truth\n value, for use in an if or while\n condition or as operand of the Boolean\n operations below. The following values\n are considered false:\n\nNone\nFalse\nzero of any numeric type, for example, 0, 0L, 0.0, 0j.\nany empty sequence, for example, '', (), [].\nany empty mapping, for example, {}.\ninstances of user-defined classes, if the class defines a __nonzero__() or __len__() method, when that method returns the integer zero or bool value False.\n\nAll other values are considered true\n -- so objects of many types are always true.\n Operations and built-in functions that have a Boolean result always return 0 or False for false and 1 or True for true, unless otherwise stated. (Important exception: the Boolean operations \"or\" and \"and\" always return one of their operands.) \n\nhttps://docs.python.org/2/library/stdtypes.html#truth-value-testing\nAnd as mentioned, you can override with custom objects by modifying nonzero.\n", "Update: Removed all duplicate infomation with Meder's post \nFor custom objects in Python < 3.0 __nonzero__ to change how it is evaluated. In Python 3.0 this is __bool__ (Reference by e-satis)\nIt is important to understand what is meant by evaluate. One meaning is when an object is explicitly casting to a bool or implicitly cast by its location (in a if or while loop).\nAnother is == evalutation. 1==True, 0==False, nothing else is equal via ==.\n>>> None==False\nFalse\n>>> 1==True\nTrue\n>>> 0==False\nTrue\n>>> 2==False\nFalse\n>>> 2==True\nFalse\n\nFinally, for is, only True or False are themselves.\n" ]
[ 24, 9 ]
[]
[]
[ "boolean", "object", "python" ]
stackoverflow_0001452489_boolean_object_python.txt
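A minimal sketch (Python 2, to match the answers above) of how truth-value testing interacts with __nonzero__ and __len__; the class names are illustrative, not from the original question:

class AlwaysFalse(object):
    def __nonzero__(self):        # __bool__ in Python 3
        return False

class Box(object):
    def __init__(self, items):
        self.items = items
    def __len__(self):            # consulted when __nonzero__ is absent
        return len(self.items)

print bool(AlwaysFalse())         # False
print bool(Box([]))               # False: __len__ returned 0
print bool(Box([1, 2]))           # True
print AlwaysFalse() == False      # False: being falsy does not imply == equality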
Q: Creating Simultaneous Loops in Python I want to create a loop that behaves like this: for i in xrange(0,10): for k in xrange(0,10): z=k+i print z where the output should be 0 2 4 6 8 10 12 14 16 18 A: You can use zip to turn multiple lists (or iterables) into pairwise* tuples: >>> for a,b in zip(xrange(10), xrange(10)): ... print a+b ... 0 2 4 6 8 10 12 14 16 18 But zip will not scale as well as izip (that sth mentioned) on larger sets. zip's advantage is that it is a built-in and you don't have to import itertools -- and whether that is actually an advantage is subjective. *Not just pairwise, but n-wise. The tuples' length will be the same as the number of iterables you pass in to zip. A: The itertools module contains an izip function that combines iterators in the desired way: from itertools import izip for (i, k) in izip(xrange(0,10), xrange(0,10)): print i+k A: You can do this in Python - you just have to get the indentation right and use the step argument of xrange. for i in xrange(0, 20, 2): print i A: What about this? i = range(0,10) k = range(0,10) for x in range(0,10): z=k[x]+i[x] print z 0 2 4 6 8 10 12 14 16 18 A: What you want is two arrays and one loop; iterate over each array once, adding the results.
Creating Simultaneous Loops in Python
I want to create a loop that behaves like this: for i in xrange(0,10): for k in xrange(0,10): z=k+i print z where the output should be 0 2 4 6 8 10 12 14 16 18
[ "You can use zip to turn multiple lists (or iterables) into pairwise* tuples:\n>>> for a,b in zip(xrange(10), xrange(10)):\n... print a+b\n... \n0\n2\n4\n6\n8\n10\n12\n14\n16\n18\n\nBut zip will not scale as well as izip (that sth mentioned) on larger sets. zip's advantage is that it is a built-in and you don't have to import itertools -- and whether that is actually an advantage is subjective.\n*Not just pairwise, but n-wise. The tuples' length will be the same as the number of iterables you pass in to zip.\n", "The itertools module contains an izip function that combines iterators in the desired way:\nfrom itertools import izip\n\nfor (i, k) in izip(xrange(0,10), xrange(0,10)):\n print i+k\n\n", "You can do this in python - just have to make the tabs right and use the xrange argument for step.\nfor i in xrange(0, 20, 2);\n print i\n\n", "What about this?\ni = range(0,10)\nk = range(0,10)\nfor x in range(0,10):\n z=k[x]+i[x]\n print z\n\n0\n2\n4\n6\n8\n10\n12\n14\n16\n18\n", "What you want is two arrays and one loop, iterate over each array once, adding the results.\n" ]
[ 21, 11, 2, 2, 0 ]
[]
[]
[ "loops", "python" ]
stackoverflow_0001452694_loops_python.txt
Q: Why can't a Python class definition assign a closure variable to itself? Why doesn't the following work in Python? def make_class(a): class A(object): a=a return A A: works just fine: >>> def make_class(a): class A(object): _a=a return A >>> make_class('df') <class '__main__.A'> >>> make_class('df')._a 'df' btw, function is not a reserved keyword in Python. A: Let's use a simpler example for the same problem: a = 'something' def boo(): a = a boo() This fails because an assignment in Python, without an accompanying global or nonlocal statement, means that the assigned name is local to the current scope. This happens not just in functions but also in class definitions. This means that you can't use the same name for a global and local variable and use them both. You can use the workaround from Aaron Digulia's answer, or use a different name: def make_class(_a): class A(object): a = _a return A A: Both appear to work fine (in Python 2.5, at least): >>> def make_class(a): ... class A(object): ... _a = a ... return A ... >>> make_class(10)._a 10 >>> def make_class(b): ... class B(object): ... def get_b(self): ... return b ... return B ... >>> make_class(10)().get_b() 10 A: Try def make_class(a): class A(object): pass A.a=a return A The error you get (NameError: name 'a' is not defined) is because the name a in the class shadows the parameter a of the function; hence there is no a defined when you try "a=a" in your code. In other words, the right side a is not the a from the def; instead Python looks for it in the class A since a was already mentioned on the left side of the assignment. This becomes clearer with functions: x = 1 def a(x): print 'a:',x x = 3 def b(): print 'b:',x b() a(2) def c(): x = x Obviously, the first print should print 2, not 1, so the parameter x of a must shadow the global variable x. b is defined in a scope where x is known as a parameter of a, so the print works. If you try to call c, however, you get UnboundLocalError: local variable 'x' referenced before assignment since Python doesn't bind global variables automatically. To fix this, you must add global x before the assignment. Your case looks more like this: x = 1 def a(x): print 'a:',x x = 3 def b(): x = x print 'b:',x b() a(2) While printing x worked in the example above, assignment doesn't work. This is a safety measure to make sure that variables don't leak. The solution is to use a default parameter to "copy" the variable into b's scope: x = 1 def a(x): print 'a:',x x = 3 def b(x=x): x = x print 'b:',x b() a(2) To solve your problem, you would need to tell Python "make the parameter a of make_class visible in A" and you would need to do that before you try to assign the field a of the class. This is not possible in Python. If you could make a visible, the assignment would change the parameter, not the field, since Python has no way to distinguish the two. Since you can't make it visible, there is no a to read from, hence the NameError. See here for an explanation of the scope of a name in Python.
Why can't a Python class definition assign a closure variable to itself?
Why doesn't the following work in Python? def make_class(a): class A(object): a=a return A
[ "works just fine:\n>>> def make_class(a):\n class A(object):\n _a=a\n return A\n\n>>> make_class('df')\n<class '__main__.A'>\n>>> make_class('df')._a\n'df'\n\nbtw, function is not a reserved keyword in Python.\n", "Let's use a simpler example for the same problem:\na = 'something'\ndef boo():\n a = a\nboo()\n\nThis fails because assignments in python, without an accompanying global or nonlocal statement, means that the assigned name is local to the current scope. This happens not just in functions but also in class definitions.\nThis means that you can't use the same name for a global and local variable and use them both. You can use the workaround from Aaron Digulia's answer, or use a different name:\ndef make_class(_a):\n class A(object):\n a = _a\n return A\n\n", "Both appear to work fine (in Python 2.5, at least):\n>>> def make_class(a):\n... class A(object):\n... _a = a\n... return A\n... \n>>> make_class(10)._a\n10\n>>> def make_class(b):\n... class B(object):\n... def get_b(self):\n... return b\n... return B\n... \n>>> make_class(10)().get_b()\n10\n\n", "Try\ndef make_class(a):\n class A(object): pass\n A.a=a\n return A\n\nThe error you get (NameError: name 'a' is not defined) is because the name a in the class shadows the parameter a of the function; hence there is no a defined when you try \"a=a\" in your code. In other words, the right side a is not the a from the def; instead Python looks for it in the class A since a was already mentioned on the left side of the assignment.\nThis becomes more clean with functions:\nx = 1\ndef a(x):\n print 'a:',x\n x = 3\n def b():\n print 'b:',x\n b()\na(2)\ndef c():\n x = x\n\nObviously, the first print should print 2, not 1, so the parameter x of a must shadow the global variable x. b is defined in a scope where x is known as a parameter of a, so the print works.\nIf you try to call c, however, you get UnboundLocalError: local variable 'x' referenced before assignment since Python doesn't bind global variables automatically. To fix this, you must add global x before the assignment.\nYour case looks more like this:\nx = 1\ndef a(x):\n print 'a:',x\n x = 3\n def b():\n x = x\n print 'b:',x\n b()\na(2)\n\nWhile printing x worked in the example above, assignment doesn't work. This is a safety measure to make sure that variables don't leak. The solution is to use a default parameter to \"copy\" the variable into b's scope:\nx = 1\ndef a(x):\n print 'a:',x\n x = 3\n def b(x=x):\n x = x\n print 'b:',x\n b()\na(2)\n\nTo solve your problem, you would need to tell Python \"make the parameter a of make_class visible in A\" and you would need to do that before you try to assign the field a of the class. This is not possible in Python. If you could make a visible, the assignment would change the parameter, not the field, since Python has no way to distinguish the two.\nSince you can't make it visible, there is no a to read from, hence the NameError.\nSee here for an explanation of the scope of a name in Python.\n" ]
[ 9, 7, 2, 2 ]
[]
[]
[ "closures", "python" ]
stackoverflow_0001445207_closures_python.txt
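A condensed sketch (Python 2) of the scoping rule the answers describe: assigning to a name inside the class body makes that name local there, so a = a fails, while renaming the parameter or assigning after creation both work. Function names are illustrative:

def make_class_broken(a):
    class A(object):
        a = a  # NameError: the left-hand a shadows the parameter
    return A

def make_class_renamed(_a):
    class A(object):
        a = _a  # fine: no shadowing
    return A

def make_class_after(a):
    class A(object):
        pass
    A.a = a  # fine: plain attribute assignment on the class
    return A

print make_class_renamed(42).a  # 42
print make_class_after(42).a    # 42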
Q: RTSP library in Python or C/C++? I am trying to find an RTSP streaming library for Python or C/C++. If there is none, are there any other solutions for real-time streaming? How easy or difficult is it to implement RTSP in Python or C/C++, and where should I get started? A: Try live555. They have a lot of libraries and modules for implementing RTP and RTSP (as well as SIP) in your C and C++ programs. A: With Python and Twisted, you could use this module.
RTSP library in Python or C/C++?
I am trying to find an RTSP streaming library for Python or C/C++. If there is none, are there any other solutions for real-time streaming? How easy or difficult is it to implement RTSP in Python or C/C++, and where should I get started?
[ "try live555. They have a lots of libraries and modules for implementing rtp and rtsp (as well as sip) into your c and c++ programs\n", "With Python and Twisted, you could use this module.\n" ]
[ 4, 2 ]
[]
[]
[ "c", "c++", "python", "rtsp" ]
stackoverflow_0001452710_c_c++_python_rtsp.txt
Q: How to filter query in sqlalchemy by year (datetime column) I have a table in SQLAlchemy 0.4 with a types.DateTime column: Column("dfield", types.DateTime, index=True) I want to select records that have a specific year in this column, using the model. How can I do this? I thought it should be done like this: selected_year = 2009 my_session = model.Session() my_query = my_session.query(model.MyRecord).filter(model.dfield.??? == selected_year) # process data in my_query The part with ??? is unclear to me. A: sqlalchemy.extract('year', model.MyRecord.dfield) == selected_year For reference: https://docs.sqlalchemy.org/en/13/core/sqlelement.html#sqlalchemy.sql.expression.extract
How to filter query in sqlalchemy by year (datetime column)
I have a table in SQLAlchemy 0.4 with a types.DateTime column: Column("dfield", types.DateTime, index=True) I want to select records that have a specific year in this column, using the model. How can I do this? I thought it should be done like this: selected_year = 2009 my_session = model.Session() my_query = my_session.query(model.MyRecord).filter(model.dfield.??? == selected_year) # process data in my_query The part with ??? is unclear to me.
[ "sqlalchemy.extract('year', model.MyRecord.dfield) == selected_year\n\nFor referene: https://docs.sqlalchemy.org/en/13/core/sqlelement.html#sqlalchemy.sql.expression.extract\n" ]
[ 27 ]
[]
[]
[ "python", "sqlalchemy" ]
stackoverflow_0001453591_python_sqlalchemy.txt
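A minimal sketch of using the extract() expression from the answer in a complete query; the model and session names follow the question, and the surrounding setup (engine, mappers) is assumed to exist elsewhere:

import sqlalchemy

selected_year = 2009
my_session = model.Session()
my_query = my_session.query(model.MyRecord).filter(
    sqlalchemy.extract('year', model.MyRecord.dfield) == selected_year)
for record in my_query:
    print record.dfield  # every match falls in the selected year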
Q: Django: queryset filter for *all* values from a ManyToManyField Imagine these models: class Fruit(models.Model): # ... class Basket(models.Model): fruits = models.ManyToManyField(Fruit) Now I would like to retrieve Basket instances related to all fruits. The problem is that the code below returns Basket instances related to any fruit: baskets = Basket.objects.filter(fruits__in=Fruit.objects.all()) # This doesn't work: baskets = Basket.objects.filter(fruits=Fruit.objects.all()) Is there a solution to this problem? Thank you very much. A: I don't have a dataset handy to test this, but I think it should work: Basket.objects.annotate(num_fruits=Count('fruits')).filter(num_fruits=len(Fruit.objects.all())) It annotates every basket object with the count of related fruits and keeps only those baskets whose fruit count equals the total number of fruits. Note: you need Django 1.1 for this to work.
Django: queryset filter for *all* values from a ManyToManyField
Imagine these models: class Fruit(models.Model): # ... class Basket(models.Model): fruits = models.ManyToManyField(Fruit) Now I would like to retrieve Basket instances related to all fruits. The problem is that the code below returns Basket instances related to any fruit: baskets = Basket.objects.filter(fruits__in=Fruit.objects.all()) # This doesn't work: baskets = Basket.objects.filter(fruits=Fruit.objects.all()) Is there a solution to this problem? Thank you very much.
[ "I don't have a dataset handy to test this, but I think it should work:\nBasket.objects.annotate(num_fruits=Count('fruits')).filter(num_fruits=len(Fruit.objects.all()))\n\nIt annotates every basket object with the count of related fruits and filters out those baskets that have a fruit count that equals the total amount of fruits.\nNote: you need Django 1.1 for this to work.\n" ]
[ 6 ]
[]
[]
[ "django", "django_models", "django_queryset", "python", "sql" ]
stackoverflow_0001453662_django_django_models_django_queryset_python_sql.txt
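A runnable sketch of the annotate/Count approach from the answer, with the import the snippet leaves implicit; it assumes the Fruit and Basket models from the question are importable, and Django 1.1+ as the answer notes. Swapping len(Fruit.objects.all()) for Fruit.objects.count() is a small efficiency tweak, not part of the original answer:

from django.db.models import Count

total = Fruit.objects.count()  # counts in SQL instead of fetching every row
baskets = (Basket.objects
           .annotate(num_fruits=Count('fruits'))
           .filter(num_fruits=total))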
Q: ctypes memory management: how and when to free the allocated resources? I'm writing a small wrapper for a C library in Python with ctypes, and I don't know if the structures allocated from Python will be automatically freed when they're out of scope. Example: from ctypes import * mylib = cdll.LoadLibrary("mylib.so") class MyPoint(Structure): _fields_ = [("x", c_int), ("y", c_int)] def foo(): p = MyPoint() #do something with the point foo() Will that point still be "alive" after foo returns? Do I have to call clib.free(pointer(p))? Or does ctypes provide a function to free memory allocated for C structures? A: In this case your MyPoint instance is a Python object allocated on the Python heap, so there should be no need to treat it differently from any other Python object. If, on the other hand, you got the MyPoint instance by calling, say, allocate_point() in mylib.so, then you would need to free it using whatever function is provided for doing so, e.g. free_point(p) in mylib.so.
ctypes memory management: how and when to free the allocated resources?
I'm writing a small wrapper for a C library in Python with ctypes, and I don't know if the structures allocated from Python will be automatically freed when they're out of scope. Example: from ctypes import * mylib = cdll.LoadLibrary("mylib.so") class MyPoint(Structure): _fields_ = [("x", c_int), ("y", c_int)] def foo(): p = MyPoint() #do something with the point foo() Will that point still be "alive" after foo returns? Do I have to call clib.free(pointer(p))? Or does ctypes provide a function to free memory allocated for C structures?
[ "In this case your MyPoint instance is a Python object allocated on the Python heap, so there should be no need to treat it differently from any other Python object. If, on the other hand, you got the MyPoint instance by calling say allocate_point() in mylib.so, then you would need to free it using whatever function is provided for doing so, e.g. free_point(p) in mylib.so.\n" ]
[ 5 ]
[]
[]
[ "c", "ctypes", "memory", "python" ]
stackoverflow_0001453776_c_ctypes_memory_python.txt
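A sketch of the second case the answer describes: memory handed out by the C library must be released through the library's own function, since Python's garbage collector only manages the ctypes wrapper, not the C heap. The allocate_point/free_point names are hypothetical, taken from the answer's example rather than a real API:

from ctypes import CDLL, POINTER, Structure, c_int

class MyPoint(Structure):
    _fields_ = [("x", c_int), ("y", c_int)]

mylib = CDLL("mylib.so")
mylib.allocate_point.restype = POINTER(MyPoint)  # tell ctypes what comes back

p = mylib.allocate_point()  # allocated on the C heap; Python will not free it
try:
    print p.contents.x, p.contents.y
finally:
    mylib.free_point(p)  # must be paired with the allocation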
Q: Python bracket convention What do you think is the convention that is mostly used when writing dictionary literals in the code? I'll write one possible convention as an answer. A: my_dictionary = { 1: 'something', 2: 'some other thing', } A: I'd say there is almost no standard. I've seen two ways of indenting: Indent 1: my_dictionary = { 'uno': 'something', 'number two': 'some other thing', } Indent 2: my_dictionary = {'uno': 'something', 'number two': 'some other thing', } I've seen three places to have the end bracket: End 1: my_dictionary = {'uno': 'something', 'number two': 'some other thing', } End 2: my_dictionary = {'uno': 'something', 'number two': 'some other thing', } End 3: my_dictionary = {'uno': 'something', 'number two': 'some other thing',} And sometimes you justify the values: my_dictionary = {'uno': 'something', 'number two': 'some other thing', } And sometimes even the colons: my_dictionary = {'uno' : 'something', 'number two' : 'some other thing', } Which looks weird. And sometimes you have an end comma, and sometimes not: my_dictionary = {'uno': 'something', 'number two': 'some other thing'} And sometimes you stick it all on one row (if it fits). my_dictionary = {'uno': 'something', 2: 'some other thing'} Everyone seems to have their own combination of these styles. Personally I tend towards the style you use in your example, unless there is a reason not to. A common reason not to is when you have a dictionary as a part of a statement. Like so: amethodthattakesadict({'hey': 'this', 'looks': 'a', 'bit': 'shitty', }) I'd recommend that you adapt yourself to the style of the guy who wrote the code you are editing. If it's your code: Do as you like. :-) A: About the end brace: I prefer it like this: my_dictionary = { 'a': 'first value', 'b': 'second', } and I'll tell you why: because Python code indentation has no close token, code is indented like that: the first line (if, while, def, etc) is outdented from the rest of the clause, with all the other lines indented the same amount. The last line of the clause is indented along with everything else. The next line indented the same as the first line is the first line of the next clause, not the last line of this one. So I like to indent data structures using a similar convention to that of code clauses, even though data structures have an explicit closing token, and so could have more flexibility. A: Indentation style 1, ending style 3 (after Lennart's answer): my_dictionary = {1: 'thing_one', 2: 'thing_two', ... n: 'thing_n'} This might be the most seamless indentation style for bracketed entities in Python code, because it closely resembles Python's whitespace formatting. Dropping to C-style indentation in Python code always seemed a little awkward to me, and I suspect that it is mostly used because programmers accustomed to C-like languages (because of their ubiquity) have maybe not been exposed to different indentation styles. A drawback might be that insertion at the beginning or end is a little harder than in other styles. Considering a proper editor which supports writing Python code, it should not make that much of a difference. Try this indentation style in context, and compare it to C-style indentation side by side, then decide which one looks more pythonic and coherent. Maybe this could be called lisp-style indentation, because it is the way in which Lisp code has been indented for decades, but it is, for example, also often used in Smalltalk code. One thing you will often read in discussions about the placement of parenthesis (in Lisp-like languages) is: "Why give them extra lines? Are they that important?". At the end of the day though, it is mostly a taste thing.
Python bracket convention
What do you think is the convention that is mostly used when writing dictionary literals in the code? I'll write one possible convention as an answer.
[ "my_dictionary = {\n 1: 'something',\n 2: 'some other thing',\n}\n\n", "I'd say there is almost no standard.\nI've seen two ways of indenting:\nIndent 1:\nmy_dictionary = {\n 'uno': 'something',\n 'number two': 'some other thing',\n}\n\nIndent 2:\nmy_dictionary = {'uno': 'something',\n 'number two': 'some other thing',\n }\n\nI've seen three placed to have the end bracket:\nEnd 1:\nmy_dictionary = {'uno': 'something',\n 'number two': 'some other thing',\n}\n\nEnd 2:\nmy_dictionary = {'uno': 'something',\n 'number two': 'some other thing',\n }\n\nEnd 3:\nmy_dictionary = {'uno': 'something',\n 'number two': 'some other thing',}\n\nAnd sometimes you justify the values:\nmy_dictionary = {'uno': 'something',\n 'number two': 'some other thing',\n }\n\nAnd sometimes even the colons:\nmy_dictionary = {'uno' : 'something',\n 'number two' : 'some other thing',\n }\n\nWhich looks weird.\nAnd sometimes you have an end comma, and sometimes not:\nmy_dictionary = {'uno': 'something',\n 'number two': 'some other thing'}\n\nAnd sometimes you stick it all on one row (if it fits).\nmy_dictionary = {'uno': 'something', 2: 'some other thing'}\n\nEveryone seems to have their own combination of these styles. Personally I tend towards the style you use in your example, unless there is a reason not to. Common reasons not to is when you have a dictionary as a part of a statement. Like so:\namethodthattakesadict({'hey': 'this',\n 'looks': 'a',\n 'bit': 'shitty',\n })\n\nI'd recommend that you adapt yourself to the style of the guy who wrote the code you are editing. If it's your code: Do as you like. :-)\n", "About the end brace: I prefer it like this:\nmy_dictionary = {\n 'a': 'first value',\n 'b': 'second',\n }\n\nand I'll tell you why: because Python code indentation has no close token, code is indented like that: the first line (if, while, def, etc) is outdented from the rest of the clause, with all the other lines indented the same amount. The last line of the clause is indented along with everything else. The next line indented the same as the first line is the first line of the next clause, not the last line of this one. \nSo I like to indent data structures using a similar convention to that of code clauses, even though data structures have an explicit closing token, and so could have more flexibility.\n", "Indentation style 1, ending style 3 (after Lennart's answer):\n my_dictionary = {1: 'thing_one',\n 2: 'thing_two',\n ...\n n: 'thing_n'}\n\nThis might be the most seamless indentation style for bracketed entities in Python code, because it closely resembles Python's whitespace formatting. Dropping to C-style indentation in Python code always seemed a little awkward to me, and I suspect that it is mostly used because programmers accustomed to C-like languages (because of their ubiquity) have maybe not been exposed to different indentation styles.\nA drawback might be that insertion at the beginning or end is a little harder than in other styles. Considering a proper editor which supports writing Python code, it should not make that much of a difference.\nTry this indentation style in context, and compare it to C-style indentation side by side, then decide which one is looking more pythonic and coherent.\nMaybe this could be called lisp-style indentation, because it is the way in which lisp code has been indented since centuries decades, but it is, for example, also often used in Smalltalk code. 
One thing you will often read in discussions about the placement of parenthesis (in Lisp like languages) is: \"Why give them extra lines? Are they that important?\".\nAt the end of the day though, it is mostly a taste thing.\n" ]
[ 21, 15, 8, 4 ]
[ "I do this, if the dictionary is too large to fit on a single line:\nd = \\\n {\n 'a' : 'b',\n 'c' : 'd'\n }\n\n" ]
[ -3 ]
[ "coding_style", "conventions", "python" ]
stackoverflow_0001431862_coding_style_conventions_python.txt
Q: Flow control in threading.Thread I have run into a few examples of managing threads with the threading module (using Python 2.6). What I am trying to understand is how and where this example calls the "run" method. I do not see it called anywhere. The ThreadUrl class gets instantiated in the main() function as "t" and this is where I would normally expect the code to start the "run" method. Maybe this is not the preferred way of working with threads? Please enlighten me: #!/usr/bin/env python import Queue import time import urllib2 import threading import datetime hosts = ["http://example.com/", "http://www.google.com"] queue = Queue.Queue() class ThreadUrl(threading.Thread): """Threaded Url Grab""" def __init__(self, queue): threading.Thread.__init__(self) self.queue = queue def run(self): while True: #grabs host from queue host = self.queue.get() #grabs urls of hosts and prints first 1024 bytes of page url = urllib2.urlopen(host) print url.read(10) #signals to queue job is done self.queue.task_done() start = time.time() def main(): #spawn a pool of threads, and pass them queue instance for i in range(1): t = ThreadUrl(queue) t.setDaemon(True) t.start() for host in hosts: queue.put(host) queue.join() main() print "Elapsed time: %s" % (time.time() - start) A: Per the pydoc: Thread.start() Start the thread’s activity. It must be called at most once per thread object. It arranges for the object’s run() method to be invoked in a separate thread of control. This method will raise a RuntimeError if called more than once on the same thread object. The way to think of Python Thread objects is that they take some chunk of Python code that is written synchronously (either in the run method or via the target argument) and wrap it up in C code that knows how to make it run asynchronously. The beauty of this is that you get to treat start like an opaque method: you don't have any business overriding it unless you're rewriting the class in C, but you get to treat run very concretely. This can be useful if, for example, you want to test your thread's logic synchronously. All you need is to call t.run() and it will execute just as any other method would. A: The method run() is called behind the scenes by "threading.Thread" (Google the OOP concepts of inheritance and polymorphism). The invocation happens just after t.start() has been called. If you have access to threading.py (find it in the Python folder), you will see a class named Thread. In that class, there is a method called "start()". start() calls '_start_new_thread(self.__bootstrap, ())', a low-level thread start-up, which runs a wrapper method, '__bootstrap()', in a new thread. '__bootstrap()' then calls '__bootstrap_inner()', which does some more preparation before finally calling 'run()'. Read the source; you can learn a lot. :D A: t.start() creates a new thread in the OS, and when this thread begins it will call the thread's run() method (or a different function if you provide a target in the Thread constructor).
Flow control in threading.Thread
I have run into a few examples of managing threads with the threading module (using Python 2.6). What I am trying to understand is how and where this example calls the "run" method. I do not see it called anywhere. The ThreadUrl class gets instantiated in the main() function as "t" and this is where I would normally expect the code to start the "run" method. Maybe this is not the preferred way of working with threads? Please enlighten me: #!/usr/bin/env python import Queue import time import urllib2 import threading import datetime hosts = ["http://example.com/", "http://www.google.com"] queue = Queue.Queue() class ThreadUrl(threading.Thread): """Threaded Url Grab""" def __init__(self, queue): threading.Thread.__init__(self) self.queue = queue def run(self): while True: #grabs host from queue host = self.queue.get() #grabs urls of hosts and prints first 1024 bytes of page url = urllib2.urlopen(host) print url.read(10) #signals to queue job is done self.queue.task_done() start = time.time() def main(): #spawn a pool of threads, and pass them queue instance for i in range(1): t = ThreadUrl(queue) t.setDaemon(True) t.start() for host in hosts: queue.put(host) queue.join() main() print "Elapsed time: %s" % (time.time() - start)
[ "Per the pydoc:\n\nThread.start()\nStart the thread’s activity.\nIt must be called at most once per thread object. It arranges for the\n object’s run() method to be invoked in\n a separate thread of control.\nThis method will raise a RuntimeException if called more than\n once on the same thread object.\n\nThe way to think of python Thread objects is that they take some chunk of python code that is written synchronously (either in the run method or via the target argument) and wrap it up in C code that knows how to make it run asynchronously. The beauty of this is that you get to treat start like an opaque method: you don't have any business overriding it unless you're rewriting the class in C, but you get to treat run very concretely. This can be useful if, for example, you want to test your thread's logic synchronously. All you need is to call t.run() and it will execute just as any other method would.\n", "The method run() is called behind the scene by \"threading.Thread\" (Google inheritance and polymorphism concepts of OOP). The invocation will be done just after t.start() has called.\nIf you have an access to threading.py (find it in python folder). You will see a class name Thread. In that class, there is a method called \"start()\". start() called '_start_new_thread(self.__bootstrap, ())' a low-level thread start-up which will run a wrapper method called '__bootstrap()' by a new thread. '__bootstrap()', then, called '__bootstrap_inner()' which do some more preparation before, finally, call 'run()'.\nRead the source, you can learn a lot. :D\n", "t.start() creates a new thread in the OS and when this thread begins it will call the thread's run() method (or a different function if you provide a target in the Thread constructor)\n" ]
[ 7, 4, 0 ]
[]
[]
[ "control_flow", "multithreading", "python" ]
stackoverflow_0001454941_control_flow_multithreading_python.txt
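A stripped-down sketch of the start()/run() relationship the answers describe: start() returns immediately and arranges for run() to execute in a new OS thread, while calling run() directly just executes the same code synchronously. The class name is illustrative:

import threading

class Greeter(threading.Thread):
    def run(self):
        print "hello from", threading.current_thread().name

t = Greeter()
t.start()        # run() is invoked in a separate thread of control
t.join()

Greeter().run()  # same logic, executed synchronously in the caller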
Q: Allowing user to rollback from db audit trail with SQLAlchemy I'm starting to use SQLAlchemy for a new project where I was planning to implement an audit trail similar to the one proposed in these questions: Implementing Audit Trail for Objects in C#? Audit trails and implementing SOX/HIPAA/etc, best practices for sensitive data Ideas on database design for capturing audit trails What is the best implementation for DB Audit Trail? Is this the best approach to creating an audit trail? Effective strategy for leaving an audit trail/change history for DB applications? Data Auditing in NHibernate and SqlServer. As I will have the full history of the "interesting" objects, I was thinking of allowing users to roll back to a given version, giving them the possibility to have unlimited undo. Could this be done cleanly with SQLAlchemy? What would be the correct way to expose this feature in the internal API (business logic and ORM)? I was thinking of something along the lines of user.rollback(ver=42). A: Although I haven't used SQLAlchemy specifically, I can give you some general tips that can be easily implemented in any ORM: Separate out the versioned item into two tables, say Document and DocumentVersion. Document stores information that will never change between versions, and DocumentVersion stores information that does change. Give each DocumentVersion a "parent" reference. Make a foreign key to the same table, pointing to the previous version of the document. Roll back to previous versions by updating a reference from Document to the "current" version. Don't delete versions from the bottom of the chain. When they make newer versions after rolling back, it will create another branch of versions. Example, create A, B, C, rollback to B, create D, E: (A) | (B) | \ (C) (D) | (E)
Allowing user to rollback from db audit trail with SQLAlchemy
I'm starting to use SQLAlchemy for a new project where I was planning to implement an audit trail similar to the one proposed in these questions: Implementing Audit Trail for Objects in C#? Audit trails and implementing SOX/HIPAA/etc, best practices for sensitive data Ideas on database design for capturing audit trails What is the best implementation for DB Audit Trail? Is this the best approach to creating an audit trail? Effective strategy for leaving an audit trail/change history for DB applications? Data Auditing in NHibernate and SqlServer. As I will have the full history of the "interesting" objects, I was thinking of allowing users to roll back to a given version, giving them the possibility to have unlimited undo. Could this be done cleanly with SQLAlchemy? What would be the correct way to expose this feature in the internal API (business logic and ORM)? I was thinking of something along the lines of user.rollback(ver=42).
[ "Although I haven't used SQLAlchemy specifically, I can give you some general tips that can be easily implemented in any ORM:\n\nSeparate out the versioned item into two tables, say Document and DocumentVersion. Document stores information that will never change between versions, and DocumentVersion stores information that does change.\nGive each DocumentVersion a \"parent\" reference. Make a foreign key to the same table, pointing to the previous version of the document.\nRoll back to previous versions by updating a reference from Document to the \"current\" version. Don't delete versions from the bottom of the chain.\nWhen they make newer versions after rolling back, it will create another branch of versions.\n\nExample, create A, B, C, rollback to B, create D, E:\n(A)\n |\n(B)\n | \\\n(C) (D)\n |\n (E)\n\n" ]
[ 8 ]
[]
[]
[ "audit", "python", "rollback", "sqlalchemy" ]
stackoverflow_0001454874_audit_python_rollback_sqlalchemy.txt
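A hedged sketch of the two-table versioning scheme from the answer, written with SQLAlchemy's declarative layer (much newer than the 0.4 release the question mentions, so treat it as illustrative only); all class, table, and column names are assumptions:

from sqlalchemy import Column, ForeignKey, Integer, String, Text
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class Document(Base):
    __tablename__ = 'document'
    id = Column(Integer, primary_key=True)
    title = Column(String(200))  # never changes between versions
    current_version_id = Column(
        Integer,
        ForeignKey('document_version.id', use_alter=True,
                   name='fk_document_current_version'))
    current_version = relationship('DocumentVersion',
                                   foreign_keys=[current_version_id])

class DocumentVersion(Base):
    __tablename__ = 'document_version'
    id = Column(Integer, primary_key=True)
    document_id = Column(Integer, ForeignKey('document.id'))
    parent_id = Column(Integer, ForeignKey('document_version.id'))  # previous version
    body = Column(Text)  # the data that does change

def rollback(session, document, version):
    # Rolling back is just repointing the current-version reference; old
    # versions are never deleted, so a later edit starts a new branch.
    document.current_version = version
    session.commit()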
Q: How to define properties in __init__ I wish to define properties in a class from a member function. Below is some test code showing how I would like this to work. However, I don't get the expected behaviour. class Basket(object): def __init__(self): # add all the properties for p in self.PropNames(): setattr(self, p, property(lambda : p) ) def PropNames(self): # The names of all the properties return ['Apple', 'Pear'] # normal property Air = property(lambda s : "Air") if __name__ == "__main__": b = Basket() print b.Air # outputs: "Air" print b.Apple # outputs: <property object at 0x...> print b.Pear # outputs: <property object at 0x...> How could I get this to work? A: You need to set the properties on the class (ie: self.__class__), not on the object (ie: self). For example: class Basket(object): def __init__(self): # add all the properties setattr(self.__class__, 'Apple', property(lambda s : 'Apple') ) setattr(self.__class__, 'Pear', property(lambda s : 'Pear') ) # normal property Air = property(lambda s : "Air") if __name__ == "__main__": b = Basket() print b.Air # outputs: "Air" print b.Apple # outputs: "Apple" print b.Pear # outputs: "Pear" For what it's worth, your usage of p when creating lambdas in the loop doesn't give the behavior that you would expect. Since the value of p is changed while going through the loop, the two properties set in the loop both return the same value: the last value of p. A: This does what you wanted: class Basket(object): def __init__(self): # add all the properties def make_prop( name ): def getter( self ): return "I'm a " + name return property(getter) for p in self.PropNames(): setattr(Basket, p, make_prop(p) ) def PropNames(self): # The names of all the properties return ['Apple', 'Pear', 'Bread'] # normal property Air = property(lambda s : "I'm Air") if __name__ == "__main__": b = Basket() print b.Air print b.Apple print b.Pear Another way to do it would be a metaclass ... but they confuse a lot of people ^^. Because I'm bored: class WithProperties(type): """ Converts `__props__` names to actual properties """ def __new__(cls, name, bases, attrs): props = set( attrs.get('__props__', () ) ) for base in bases: props |= set( getattr( base, '__props__', () ) ) def make_prop( name ): def getter( self ): return "I'm a " + name return property( getter ) for prop in props: attrs[ prop ] = make_prop( prop ) return super(WithProperties, cls).__new__(cls, name, bases, attrs) class Basket(object): __metaclass__ = WithProperties __props__ = ['Apple', 'Pear'] Air = property(lambda s : "I'm Air") class OtherBasket(Basket): __props__ = ['Fish', 'Bread'] if __name__ == "__main__": b = Basket() print b.Air print b.Apple print b.Pear c = OtherBasket() print c.Air print c.Apple print c.Pear print c.Fish print c.Bread A: Why are you defining properties at __init__ time? It's confusing and clever, so you'd better have a really good reason. The loop problem that Stef pointed out is just one example of why this should be avoided. If you need to redefine which properties a subclass has, you can just do del self.<property name> in the subclass __init__ method, or define new properties in the subclass. Also, some style nitpicks: Indent to 4 spaces, not 2 Don't mix quote types unnecessarily Use underscores instead of camel case for method names. PropNames -> prop_names PropNames doesn't really need to be a method
How to define properties in __init__
I wish to define properties in a class from a member function. Below is some test code showing how I would like this to work. However, I don't get the expected behaviour. class Basket(object): def __init__(self): # add all the properties for p in self.PropNames(): setattr(self, p, property(lambda : p) ) def PropNames(self): # The names of all the properties return ['Apple', 'Pear'] # normal property Air = property(lambda s : "Air") if __name__ == "__main__": b = Basket() print b.Air # outputs: "Air" print b.Apple # outputs: <property object at 0x...> print b.Pear # outputs: <property object at 0x...> How could I get this to work?
[ "You need to set the properties on the class (ie: self.__class__), not on the object (ie: self). For example:\nclass Basket(object):\n\n def __init__(self):\n # add all the properties\n setattr(self.__class__, 'Apple', property(lambda s : 'Apple') )\n setattr(self.__class__, 'Pear', property(lambda s : 'Pear') )\n\n # normal property\n Air = property(lambda s : \"Air\")\n\nif __name__ == \"__main__\":\n b = Basket()\n print b.Air # outputs: \"Air\"\n print b.Apple # outputs: \"Apple\"\n print b.Pear # outputs: \"Pear\"\n\nFor what it's worth, your usage of p when creating lamdas in the loop, doesn't give the behavior that you would expect. Since the value of p is changed while going through the loop, the two properties set in the loop both return the same value: the last value of p.\n", "This does what you wanted:\nclass Basket(object):\n def __init__(self):\n # add all the properties\n\n def make_prop( name ):\n def getter( self ):\n return \"I'm a \" + name\n return property(getter)\n\n for p in self.PropNames():\n setattr(Basket, p, make_prop(p) )\n\n def PropNames(self):\n # The names of all the properties\n return ['Apple', 'Pear', 'Bread']\n\n # normal property\n Air = property(lambda s : \"I'm Air\")\n\nif __name__ == \"__main__\":\n b = Basket()\n print b.Air \n print b.Apple \n print b.Pear \n\nAnother way to do it would be a metaclass ... but they confuse a lot of people ^^.\nBecause I'm bored:\nclass WithProperties(type):\n \"\"\" Converts `__props__` names to actual properties \"\"\"\n def __new__(cls, name, bases, attrs):\n props = set( attrs.get('__props__', () ) )\n for base in bases:\n props |= set( getattr( base, '__props__', () ) )\n\n def make_prop( name ):\n def getter( self ):\n return \"I'm a \" + name\n return property( getter )\n\n for prop in props:\n attrs[ prop ] = make_prop( prop )\n\n return super(WithProperties, cls).__new__(cls, name, bases, attrs) \n\nclass Basket(object):\n __metaclass__ = WithProperties\n __props__ = ['Apple', 'Pear']\n\n Air = property(lambda s : \"I'm Air\")\n\nclass OtherBasket(Basket):\n __props__ = ['Fish', 'Bread']\n\nif __name__ == \"__main__\":\n b = Basket()\n print b.Air \n print b.Apple \n print b.Pear \n\n c = OtherBasket()\n print c.Air \n print c.Apple \n print c.Pear\n print c.Fish \n print c.Bread \n\n", "Why are you defining properties at __init__ time? It's confusing and clever, so you better have a really good reason. The loop problem that Stef pointed out is just one example of why this should be avoided.\nIf you need to redifine which properties a subclass has, you can just do del self.<property name> in the subclass __init__ method, or define new properties in the subclass.\nAlso, some style nitpicks:\n\nIndent to 4 spaces, not 2\nDon't mix quote types unnecessarily\nUse underscores instead of camel case for method names. PropNames -> prop_names\nPropNames doesn't really need to be a method\n\n" ]
[ 13, 3, 0 ]
[]
[]
[ "constructor", "properties", "python" ]
stackoverflow_0001454984_constructor_properties_python.txt
Q: Python-based Gallery web applications? I'm trying to cut my last dependencies on PHP and MySQL. The last stumbling block is an image gallery I set up for a client a while ago. The whole website is built around Django and Zine, except for the image gallery, which is based on plogger. I'd love to replace plogger with a Python solution. Requirements include: good admin interface with batch upload (my client thinks FTP is some kind of disease) uses a templating system (e.g. Jinja) WSGI interface supports PostgreSQL bonus points if it is a Django app I looked at django-photologue, which seems to be a good base for building a gallery app. But it isn't really a drop-in gallery app, which is what I'm looking for. A: There is django-photo-gallery, django photo album and another django-photo-gallery (I don't know if it's the same one.) Anything else, and you'll have to make your own.
Python-based Gallery web applications?
I'm trying to cut my last dependencies on PHP and MySQL. The last stumbling block is an image gallery I set up for a client a while ago. The whole website is built around Django and Zine, except for the image gallery, which is based on plogger. I'd love to replace plogger with a Python solution. Requirements include: good admin interface with batch upload (my client thinks FTP is some kind of disease) uses a templating system (e.g. Jinja) WSGI interface supports PostgreSQL bonus points if it is a Django app I looked at django-photologue, which seems to be a good base for building a gallery app. But it isn't really a drop-in gallery app, which is what I'm looking for.
[ "There is django-photo-gallery, django photo album and another django-photo-gallery (don't know if its the same one.)\nAnything else, and you'll have to make your own.\n" ]
[ 7 ]
[]
[]
[ "gallery", "python", "web_applications" ]
stackoverflow_0001455224_gallery_python_web_applications.txt
Q: What's a Django/Python solution for providing a one-time url for people to download files? I'm looking for a way to sell someone a card at an event that will have a unique code that they will be able to use later in order to download a file (mp3, pdf, etc.) only one time and mask the true file location so a savvy person downloading the file won't be able to download the file more than once. It would be nice to host the file on Amazon S3 to save on bandwidth where our server is co-located. My thought for the codes would be to pre-generate the unique codes that will get printed on the cards and store those in a database that could also have a field that stores the number of times the file was downloaded. This way we could set how many attempts we would allow the user for downloading the file. The part that I need direction on is how do I hide/mask the original file location so people can't steal that url and then download the file as many times as they want. I've done Google searches and I'm either not searching using the right keywords or there aren't very many libraries or snippets out there already for this type of thing. I'm guessing that I might be able to rig something up using django.views.static.serve that acts as a sort of proxy between the actual file and the user downloading the file. The only drawback to this method I would think is that I would need to use the actual web server and wouldn't be able to store the file on Amazon S3. Any suggestions or thoughts are greatly appreciated. A: Neat idea. However, I would warn against the single-download method, because there is no guarantee that their first download attempt will be successful. Perhaps use a time-expiration method instead? But it is certainly possible to do this with Django. Here is an outline of the basic approach: Set up a Django URL for serving these files Use a GET parameter which is a unique string to identify which file to get. Keep a database table which has a FileField for the file to download. This table maps the unique strings to the location of the file on the file system. To serve the file as a download, set the response headers in the view like this: (path is the location of the file to serve) with open(path, 'rb') as f: response = HttpResponse(f.read()) response['Content-Type'] = 'application/octet-stream' response['Content-Disposition'] = 'attachment; filename="%s"' % 'insert_filename_here' return response Since we are using this Django page to serve the file, the user cannot find out the original file location. A: You can just use something simple such as mod_xsendfile. This functionality is also available in other popular web servers such as lighttpd or nginx. It works like this: when enabled, your application (e.g. a trivial PHP script) can send a special response header, causing the webserver to serve a static file. If you want it to work with S3 you will need to handle each and every request this way, meaning the traffic will go through your site, from there to AWS, back to your site and back to the client. Does S3 support symbolic links / aliases? If so you might just redirect a valid user to one of the symbolic URLs and delete that symlink after a couple of hours.
What's a Django/Python solution for providing a one-time url for people to download files?
I'm looking for a way to sell someone a card at an event that will have a unique code that they will be able to use later in order to download a file (mp3, pdf, etc.) only one time and mask the true file location so a savvy person downloading the file won't be able to download the file more than once. It would be nice to host the file on Amazon S3 to save on bandwidth where our server is co-located. My thought for the codes would be to pre-generate the unique codes that will get printed on the cards and store those in a database that could also have a field that stores the number of times the file was downloaded. This way we could set how many attempts we would allow the user for downloading the file. The part that I need direction on is how do I hide/mask the original file location so people can't steal that url and then download the file as many times as they want. I've done Google searches and I'm either not searching using the right keywords or there aren't very many libraries or snippets out there already for this type of thing. I'm guessing that I might be able to rig something up using django.views.static.serve that acts as a sort of proxy between the actual file and the user downloading the file. The only drawback to this method I would think is that I would need to use the actual web server and wouldn't be able to store the file on Amazon S3. Any suggestions or thoughts are greatly appreciated.
[ "Neat idea. However, I would warn against the single-download method, because there is no guarantee that their first download attempt will be successful. Perhaps use a time-expiration method instead?\nBut it is certainly possible to do this with Django. Here is an outline of the basic approach:\n\nSet up a django url for serving these files\nUse a GET parameter which is a unique string to identify which file to get.\nKeep a database table which has a FileField for the file to download. This table maps the unique strings to the location of the file on the file system.\nTo serve the file as a download, set the response headers in the view like this:\n\n(path is the location of the file to serve)\nwith open(path, 'rb') as f:\n response = HttpResponse(f.read())\nresponse['Content-Type'] = 'application/octet-stream';\nresponse['Content-Disposition'] = 'attachment; filename=\"%s\"' % 'insert_filename_here'\nreturn response\n\nSince we are using this Django page to serve the file, the user cannot find out the original file location.\n", "You can just use something simple such as mod_xsendfile. This functionality is also available in other popular webservers such lighttpd or nginx.\nIt works like this: when enabled your application (e.g. a trivial PHP script) can send a special response header, causing the webserver to serve a static file.\nIf you want it to work with S3 you will need to handle each and every request this way, meaning the traffic will go through your site, from there to AWS, back to your site and back to the client. Does S3 support symbolic links / aliases? If so you might just redirect a valid user to one of the symbolic URLs and delete that symlink after a couple of hours.\n" ]
[ 3, 2 ]
[]
[]
[ "django", "download", "proxy", "python", "url" ]
stackoverflow_0001455109_django_download_proxy_python_url.txt
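A hedged sketch combining the outline above into a Django model and view; the names (DownloadTicket, max_downloads, the URL scheme) are illustrative, and the header-setting code mirrors the answer:

import os
from django.db import models
from django.http import Http404, HttpResponse

class DownloadTicket(models.Model):
    code = models.CharField(max_length=32, unique=True)  # printed on the card
    file = models.FileField(upload_to='protected/')
    downloads = models.IntegerField(default=0)
    max_downloads = models.IntegerField(default=1)

def download(request, code):
    try:
        ticket = DownloadTicket.objects.get(code=code)
    except DownloadTicket.DoesNotExist:
        raise Http404
    if ticket.downloads >= ticket.max_downloads:
        raise Http404  # or render a friendlier "code already used" page
    ticket.downloads += 1
    ticket.save()
    with open(ticket.file.path, 'rb') as f:
        response = HttpResponse(f.read())
    response['Content-Type'] = 'application/octet-stream'
    response['Content-Disposition'] = (
        'attachment; filename="%s"' % os.path.basename(ticket.file.name))
    return response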
Q: Django: use archive_index with date_field from a related model Imagine these two simple models: from django.contrib.contenttypes import generic from django.db import models class SomeModel(models.Model): content_type = models.ForeignKey(ContentType) object_id = models.PositiveIntegerField(_('object id')) content_object = generic.GenericForeignKey('content_type', 'object_id') published_at = models.DateTimeField('Publication date') class SomeOtherModel(models.Model): related = generic.GenericRelation(SomeModel) I would like to use the archive_index generic view with SomeOtherModel, but it doesn't work: from django.views.generic.date_based import archive_index archive_index(request, SomeOtherModel.objects.all(), 'related__published_at') The error comes from archive_index at line 28 (using Django 1.1): date_list = queryset.dates(date_field, 'year')[::-1] The raised exception is: SomeOtherModel has no field named 'related__published_at' Do you have any idea how to fix this? Thank you very much. A: From digging through the Django source code, the generic view archive_index does not appear to support related fields that are GenericRelations. This is because the queryset method dates does not support generic relations. Consider filing this as a bug / feature request on the Django bug tracker.
Django: use archive_index with date_field from a related model
Imagine these two simple models: from django.contrib.contenttypes import generic from django.db import models class SomeModel(models.Model): content_type = models.ForeignKey(ContentType) object_id = models.PositiveIntegerField(_('object id')) content_object = generic.GenericForeignKey('content_type', 'object_id') published_at = models.DateTimeField('Publication date') class SomeOtherModel(models.Model): related = generic.GenericRelation(SomeModel) I would like to use the archive_index generic view with SomeOtherModel, but it doesn't work: from django.views.generic.date_based import archive_index archive_index(request, SomeOtherModel.objects.all(), 'related__published_at') The error comes from archive_index at line 28 (using Django 1.1): date_list = queryset.dates(date_field, 'year')[::-1] The raised exception is: SomeOtherModel has no field named 'related__published_at' Do you have any idea how to fix this? Thank you very much.
[ "From digging through the Django source code, the generic view archive_index does not appear to support related fields that are GenericRelations.\nThis is because the queryset method dates does not support generic relations. Consider filing this as a bug / feature request on the Django bug tracker.\n" ]
[ 1 ]
[]
[]
[ "django", "django_generic_views", "django_models", "django_views", "python" ]
stackoverflow_0001453465_django_django_generic_views_django_models_django_views_python.txt
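Since queryset.dates() rejects the generic relation, one hedged workaround (illustrative, not from the answer) is to build the date list from SomeModel directly, which owns published_at, filtering by content type:

from django.contrib.contenttypes.models import ContentType

ct = ContentType.objects.get_for_model(SomeOtherModel)
date_list = list(SomeModel.objects
                 .filter(content_type=ct)
                 .dates('published_at', 'year'))[::-1]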
Q: Python code does not work as expected when I run script as a Windows Service Here is the code to get the Desktop path on Windows Vista: import pythoncom import win32com.client pythoncom.CoInitialize() shell = win32com.client.Dispatch("WScript.Shell") desktop_path = shell.SpecialFolders("Desktop") The code works fine when I try it in the Python interpreter, but it's not working when I execute the same code from a Python script that runs as a Windows service. The function returns the desktop path as an empty string. Any idea what is wrong here? Is there any alternative way to get the Desktop path when the Python script runs as a Windows service? A: Most likely, your service is running under an account which doesn't have a user desktop folder. Also note that by default, services have no access to the GUI - if your app has one, you have to mark your service as being allowed to interact with the desktop (user session, not folder).
Python code does not work as expected when I run script as a Windows Service
Here is the code to get the Desktop path on Windows Vista: import pythoncom import win32com.client pythoncom.CoInitialize() shell = win32com.client.Dispatch("WScript.Shell") desktop_path = shell.SpecialFolders("Desktop") The code works fine when I try it in the Python interpreter, but it's not working when I execute the same code from a Python script that runs as a Windows service. The function returns the desktop path as an empty string. Any idea what is wrong here? Is there any alternative way to get the Desktop path when the Python script runs as a Windows service?
[ "Most likely, your service is running under an account which doesn't have a user desktop folder. Also note that by default, services have no access to the GUI - if your app has one, you have to mark your service as being allowed to interact with the desktop (user session, not folder).\n" ]
[ 3 ]
[]
[]
[ "python", "service", "windows", "windows_vista" ]
stackoverflow_0001455592_python_service_windows_windows_vista.txt
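A hedged sketch of an alternative lookup through the Win32 shell API in pywin32; note that it still resolves the Desktop folder of whichever account the process runs under, so a service account without a user profile may get nothing useful (passing that user's access token instead of None would target a specific user):

from win32com.shell import shell, shellcon

desktop_path = shell.SHGetFolderPath(0, shellcon.CSIDL_DESKTOP, None, 0)
print desktop_path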
Q: Could not get out of python loop I want to get out of the loop when there is no data, but the loop seems to be stopping at recvfrom: image='' while 1: data,address=self.socket.recvfrom(512) if data is None:break image=image+data count=count+1 print str(count)+' packets received...' A: Try setting the socket to non-blocking mode. You would do this before the loop starts. You can also try a socket with a timeout. A: recvfrom may indeed stop (waiting for data) unless you've set your socket to non-blocking or timeout mode. Moreover, if the socket gets closed by your counterpart, the indication of "socket was closed, nothing more to receive" is not a value of None for data -- it's an empty string, ''. So you could change your test to if not data: break for more generality. A: What is the blocking mode of your socket? If you are in blocking mode (which I think is the default), your program would stop until data is available... You would then not get to the next line after the recv() until data arrives. If you switch to non-blocking mode, however (see socket.setblocking(flag)), I think that it will raise an exception you would have to catch rather than null-check. A: You might want to set socket.setdefaulttimeout(n) to get out of the loop if no data is returned after a specified time period.
Could not get out of python loop
I want to get out of the loop when there is no data, but the loop seems to be stopping at recvfrom: image='' while 1: data,address=self.socket.recvfrom(512) if data is None:break image=image+data count=count+1 print str(count)+' packets received...'
[ "Try setting to a non-blocking socket. You would do this before the loop starts. You can also try a socket with a timeout.\n", "recvfrom may indeed stop (waiting for data) unless you've set your socket to non-blocking or timeout mode. Moreover, if the socket gets closed by your counterpart, the indication of \"socket was closed, nothing more to receive\" is not a value of None for data -- it's an empty string, ''. So you could change your test to if not data: break for more generality.\n", "What is the blocking mode of your socket?\nIf you are in blocking mode (which I think is the default), your program would stop until data is available... You would then not get to the next line after the recv() until data is coming. \nIf you switch to non-blocking mode, however (see socket.setblocking(flag)), I think that it will raise an exception you would have to catch rather than null-check.\n", "You might want to set socket.setdefaulttimeout(n) to get out of the loop if no data is returned after specified time period.\n" ]
[ 4, 2, 0, 0 ]
[]
[]
[ "loops", "python", "sockets" ]
stackoverflow_0001455630_loops_python_sockets.txt
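A minimal sketch combining the timeout suggestions above: give recvfrom() a deadline and treat the timeout as "no more data". The port and timeout values are illustrative:

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(('', 9999))
sock.settimeout(2.0)  # recvfrom() now raises socket.timeout

image = ''
count = 0
while True:
    try:
        data, address = sock.recvfrom(512)
    except socket.timeout:
        break  # no data for 2 seconds: leave the loop
    if not data:  # an empty string, not None, signals nothing more to receive
        break
    image += data
    count += 1
    print str(count) + ' packets received...'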
Q: Can we run google app engine on ubuntu/windows and serve web application I see Google provides an SDK and utilities to develop and run a web application locally (on a developer PC) and to deploy it to Google App Engine (on Google's servers). Can we use Google App Engine to run the web application locally, without using Google's infrastructure? Basically, I want a decent job scheduler and persistent job queue for Python (I am not using Google's infrastructure). I see Google provides a task queue implementation along with the App Engine SDK. Can I use the Google App Engine SDK to develop my full-fledged Python application's task queue? A: You can run App Engine apps on top of appscale which in turn does run on Eucalyptus, Xen, and other clustering solutions you can deploy on Ubuntu (not sure about there being any Windows support) -- looks like it may require substantial system installation, configuration, and administration work to get started (sorry, no first-hand experience yet), but once you've done that investment it appears it may be smoother going forwards. (Automation of task queues is a relatively recent addition to appscale, but it's apparently working and can be patched in from a bazaar branch until it gets fully integrated into the trunk of the appscale project). Edit: since there seems to be some confusion about licensing of this code, I'll point out that the App Engine SDK, as per its site, is under Apache License 2.0, and appscale's under the New BSD License. Both are extremely permissive and liberal open-source licenses that basically allow you all sorts of reuses, remixes, mashups, redistributions, etc, etc. Edit: Nick also suggests mentioning TwistedAE, another effort to build an open source way (also Apache License 2.0) to deploy App Engine apps on your own infrastructure; I have no direct experience with it, and it IS still pre-alpha, but it does seem very promising and well worth keeping an eye on (tx Nick!).
Can we run google app engine on ubuntu/windows and serve web application
I see Google provides an SDK and utilities to develop and run the web application in development (on the developer's PC) and port it to Google App Engine live (on Google's servers). Can we use Google App Engine to run the local web application without using Google infrastructure? Basically I want a decent job scheduler and persistent job queue for Python (I am not using Google infrastructure). I see Google provides a task queue implementation along with their App Engine SDK. Can I use the Google App Engine SDK to develop the task queue for my full-fledged Python application?
[ "You can run App Engine apps on top of appscale which in turn does run on Eucalyptus, Xen, and other clustering solutions you can deploy on Ubuntu (not sure about there being any Windows support) -- looks like it may require substantial system installation, configuration, and administration work to get started (sorry, no first-hand experience yet), but once you've done that investment it appears it may be smoother going forwards. (Automation of task queues is a relatively recent addition to appscale, but it's apparently working and can be patched in from a bazaar branch until it gets fully integrated into the trunk of the appscale project).\nEdit: since there seems to be some confusion about licensing of this code, I'll point out that the App Engine SDK, as per its site, is under Apache License 2.0, and appscale's under the New BSD License. Both are extremely permissive and liberal open-source licenses that basically allow you all sorts of reuses, remixes, mashups, redistributions, etc, etc.\nEdit: Nick also suggests mentioning TwistedAE, another effort to build an open source way (also Apache License 2.0) to deploy App Engine apps on your own infrastructure; I have no direct experience with it, and it IS still pre-alpha, but it does seem very promising and well worth keeping an eye on (tx Nick!).\n" ]
[ 8 ]
[ "I don't believe so. According to the App Engine terms of service:\n\n7.1. Google gives you a personal, worldwide, royalty-free,\n non-assignable and non-exclusive\n license to use the software provided\n to you by Google as part of the\n Service as provided to you by Google\n (referred to as the \"Google App Engine\n Software\" below). This license is for\n the sole purpose of enabling you to\n use and enjoy the benefit of the\n Service as provided by Google, in the\n manner permitted by the Terms.\n\n(emphasis mine)\nYou'd want to check with a lawyer, but to me this sounds like the dev_appserver.py server is only to be used for development of applications which are then deployed to the GAE \"service\", not for running your own servers internally.\nI also suspect that running a production service off dev_appserver.py would be inadvisable for performance reasons. Without special effort, threaded Python web servers can generally only accomodate one request at a time, which limits your performance and scalability. This is due to an implementation detail of CPython, called the GIL. See http://docs.python.org/c-api/init.html#thread-state-and-the-global-interpreter-lock for a detailed explanation.\n" ]
[ -1 ]
[ "google_app_engine", "python", "task" ]
stackoverflow_0001455800_google_app_engine_python_task.txt
Q: Given a class type how do I create an instance in Python? Let's say I have this : class whatever(object): def __init__(self): pass and this function: def create_object(type_name): # create an object of type_name I'd like to be able to call the create_object like this: inst = create_object(whatever) and get back an instance of whatever. I think this should be doable without using eval, I'd like to know how to do this. Please notice that I'm NOT using a string as a parameter for create_object. A: The most obvious way: def create_object(type_name): return type_name() A: def create_object(typeobject): return typeobject() As you so explicitly say that the arg to create_object is NOT meant to be a string, I assume it's meant to be the type object itself, just like in the create_object(whatever) example you give, in which whatever is indeed the type itself. A: If I understand correctly, what you want is: def create_object(type_name, *args): # create an object of type_name return type_name(*args) inst = create_object(whatever) I don't really know why you want to do this, but would be interesting to hear from you what are your reasons to need such a construct. A: def create_object(type_name): return type_name() you can of course skip the function altogether and create the instance of whatever like this: inst = whatever()
Given a class type how do I create an instance in Python?
Let's say I have this : class whatever(object): def __init__(self): pass and this function: def create_object(type_name): # create an object of type_name I'd like to be able to call the create_object like this: inst = create_object(whatever) and get back an instance of whatever. I think this should be doable without using eval, I'd like to know how to do this. Please notice that I'm NOT using a string as a parameter for create_object.
[ "The most obvious way:\ndef create_object(type_name):\n return type_name()\n\n", "def create_object(typeobject):\n return typeobject()\n\nAs you so explicitly say that the arg to create_object is NOT meant to be a string, I assume it's meant to be the type object itself, just like in the create_object(whatever) example you give, in which whatever is indeed the type itself.\n", "If I understand correctly, what you want is:\ndef create_object(type_name, *args):\n # create an object of type_name\n return type_name(*args)\n\ninst = create_object(whatever)\n\nI don't really know why you want to do this, but would be interesting to hear from you what are your reasons to need such a construct.\n", "def create_object(type_name):\n return type_name()\n\nyou can of course skip the function altogether and create the instance of whatever like this:\ninst = whatever()\n\n" ]
[ 7, 5, 2, 2 ]
[]
[]
[ "oop", "python" ]
stackoverflow_0001455835_oop_python.txt
Q: Apache vs Twisted I know Twisted is a framework that allows you to do asynchronous, non-blocking I/O, but I still do not understand how that is different from what the Apache server does. If anyone could explain the need for Twisted, I would appreciate it. A: Twisted is a platform for developing internet applications, for handling the underlying communications and such. It doesn't "do" anything out of the box--you've got to program it. Apache is an internet application, of sorts. Upon install, you have a working web server which can serve up static and dynamic web pages. Beyond that, it can be extended to do more than that, if you wish. A: They are two different things, one is a pure WEB server and one is a WEB framework with a builtin event-driven server. Twisted is good for constructing high-end ad-hoc network services. A: FYI, FriendFeed/Facebook just open sourced their custom server and framework: Tornado. Matt Heitzenroder of Apparatus has run an initial comparison test and it looks like Tornado left Twisted in the dust. A: @alphazero You read that Twisted vs. Tornado benchmark wrong (or you didn't read it at all). Quote from the article: " Lower mean response time is better." Twisted is lower. People want their webservers to respond with lower (faster) times. Twisted leaves Tornado in the dust... or, in reality, they differ by a nearly trivial constant factor.
Apache vs Twisted
I know Twisted is a framework that allows you to do asynchronous, non-blocking I/O, but I still do not understand how that is different from what the Apache server does. If anyone could explain the need for Twisted, I would appreciate it.
[ "Twisted is a platform for developing internet applications, for handling the underlying communications and such. It doesn't \"do\" anything out of the box--you've got to program it.\nApache is an internet application, of sorts. Upon install, you have a working web server which can serve up static and dynamic web pages. Beyond that, it can be extended to do more than that, if you wish.\n", "They are two different things, one is a pure WEB server and one is a WEB framework with a builtin event driven servers.\nTwisted is good for constructing high-end ad-hoc network services.\n", "FYI, FriendFeed/Facebook just open sourced their custom server and framework: Tornado. Matt Heitzenroder of Apparatus has run an initial comparison test and looks like Tornado left twisted in the dust.\n", "@alphazero You read that Twisted vs. Tornado benchmark wrong (or you didn't read it at all). Quote from the article: \" Lower mean response time is better.\" Twisted is lower. People want their webservers to respond with lower (faster) times.\nTwisted leaves Tornado in the dust... or, in reality, they differ by a nearly trivial constant factor.\n" ]
[ 11, 2, 2, 2 ]
[]
[]
[ "apache", "python", "twisted" ]
stackoverflow_0001410967_apache_python_twisted.txt
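To make the "you've got to program it" point concrete, here is roughly the smallest Twisted web server (a sketch, not production code -- Apache gives you the equivalent with zero code):
from twisted.internet import reactor
from twisted.web import resource, server

class Hello(resource.Resource):
    isLeaf = True
    def render_GET(self, request):
        return 'Hello from Twisted'   # every byte served is code you wrote

reactor.listenTCP(8080, server.Site(Hello()))
reactor.run()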
Q: Python: Does a dict value pointer store its key? I'm wondering if there is a built-in way to do this... Take this simple code for example: D = {'one': objectA(), 'two': objectB(), 'three': objectC()} object_a = D['one'] I believe object_a is just pointing at the objectA() created on the first line, and knows nothing about the dictionary D, but my question is, does Python store the Key of the dictionary value? Is there a way to get the Key 'one' if all you have is the variable object_a (without looping over the dictionary, of course)? If not, I can store the value 'one' inside objectA(), but I'm just curious if Python already stores that info. A: I think no. Consider the case of adding a single object to a (large) number of different dictionaries. It would become quite expensive for Python to track that for you, it would cost a lot for a feature not used by most. A: The dict mapping is not trivially "reversible" as you describe. The key must be immutable. It must be immutable so that it can be hashed for lookup and not suffer spontaneous changes. The value does not have to be immutable, it is not hashed for quick lookup. You cannot simply go from value back to key without (1) creating an immutable value and (2) populating some other kind of mapping with the "reversed" value -> key mapping. A: Is there a way to get the Key 'one' if all you have is the variable object_a (without looping over the dictionary, of course)? No, Python imposes no such near-useless redundancy on you. If objA is a factory callable: d = {'zap': objA()} a = d['zap'] and b = objA() just as well as L = [objA()] c = L[0] all result in exactly the same kind of references in a, b and c, to exactly equivalent objects (if that's what objA gives you in the first place), without one bit wasted (neither in said objects nor in any redundant and totally hypothetical auxiliary structure) to record "this is/was a value in list L and/or dict d at these index/key" ((or indices/keys since of course there could be many)). A: Like others have said, there is no built-in way to do this, since it takes up memory and is not usually needed. If not, I can store the value 'one' inside objectA(), but I'm just curious if Python already stores that info. Just wanted to add that it should be pretty easy to add a more general solution which does this automatically. For example: def MakeDictReversible(dict): for k, v in dict.iteritems(): v.dict_key = k This function just embeds every object in the dictionary with a member "dict_key", which is the dictionary key used to store the object. Of course, this code can only work once (i.e., run this on two different dictionaries which share an object, and the object's "dict_key" member will be overwritten by the second dictionary).
Python: Does a dict value pointer store its key?
I'm wondering if there is a built-in way to do this... Take this simple code for example: D = {'one': objectA(), 'two': objectB(), 'three': objectC()} object_a = D['one'] I believe object_a is just pointing at the objectA() created on the first line, and knows nothing about the dictionary D, but my question is, does Python store the Key of the dictionary value? Is there a way to get the Key 'one' if all you have is the variable object_a (without looping over the dictionary, of course)? If not, I can store the value 'one' inside objectA(), but I'm just curious if Python already stores that info.
[ "I think no.\nConsider the case of adding a single object to a (large) number of different dictionaries. It would become quite expensive for Python to track that for you, it would cost a lot for a feature not used by most.\n", "The dict mapping is not trivially \"reversible\" as you describe.\n\nThe key must be immutable. It must be immutable so that it can be hashed for lookup and not suffer spontaneous changes.\nThe value does not have to be immutable, it is not hashed for quick lookup. \n\nYou cannot simply go from value back to key without (1) creating an immutable value and (2) populating some other kind of mapping with the \"reversed\" value -> key mapping.\n", "\nIs there a way to get the Key 'one' if\n all you have is the variable object_a\n (without looping over the dictionary,\n of course)?\n\nNo, Python imposes no such near-useless redundancy on you. If objA is a factory callable:\nd = {'zap': objA()}\na = d['zap']\n\nand\nb = objA()\n\njust as well as\nL = [objA()]\nc = L[0]\n\nall result in exactly the same kind of references in a, b and c, to exactly equivalent objects (if that's what objA gives you in the first place), without one bit wasted (neither in said objects nor in any redundant and totally hypothetical auxiliary structure) to record \"this is/was a value in list L and/or dict d at these index/key\" ((or indices/keys since of cource there could be many)).\n", "Like others have said, there is no built-in way to do this, since it takes up memory and is not usually needed.\n\nIf not, I can store the value 'one' inside objectA(), but I'm just curious if Python already stores that info.\n\nJust wanted to add that it should be pretty easy to add a more general solution which does this automatically. For example:\ndef MakeDictReversible(dict):\n for k, v in dict.iteritems():\n v.dict_key = k\n\nThis function just embeds every object in the dictionary with a member \"dict_key\", which is the dictionary key used to store the object.\nOf course, this code can only work once (i.e., run this on two different dictionaries which share an object, and the object's \"dict_key\" member will be overwritten by the second dictionary).\n" ]
[ 7, 3, 2, 0 ]
[]
[]
[ "dictionary", "python" ]
stackoverflow_0001454437_dictionary_python.txt
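A small sketch of the "populate some other kind of mapping" idea from the second answer; it assumes the values are hashable and unique, which may not hold for arbitrary objects:
D = {'one': 1.0, 'two': 2.0, 'three': 3.0}
reverse = dict((v, k) for k, v in D.iteritems())   # value -> key
print reverse[2.0]   # prints: two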
Q: Can I "embed" a Python back-end in an AIR application? I'm trying to find out if there is a way I could embed a Python back-end into an AIR application? I'm looking to employ an approach similar to the one outlined here to implement the business logic for my application, but additionally, I would like to provide the user with a single binary which they can load. I don't want the user to have to fire up a seperate server process to make this work. Is this possible in some way or am I out of luck? A: Probably. We are using a J2EE server side which uses SOAP webservices to talk to our AIR application on the frontend. You should be able to do the same because soap doesn't care which technology sits on either side of it. You can always have the application launch from a single binary which first fires up the server, then the client, if both are expected to sit on the users system. Also it gives you flexibility to have a more service oriented model later, if you want to. Without knowing what your app does, it is hard to know if that makes sense or not. For setting up the python side of SOAP webservices, here's a useful link to a diveintopython article. Then, if you have your server running with the wsdl, FlexBuilder can generate the AIR side of the webservices for you. A: You cannot embed your Python server in an AIR application. So basically you are out of luck. The simplest solution probably is to run a server on a central location that all your users can connect to from their AIR apps. That means that all/most of the data will be on your server, and not on the users computer, I don't know if that is a big issue but I guess it is. Also depending on your target systems you could create the program you want yourself without (fully) depending on AIR. You can generate executables for windows and osx from Flash CS3/4 or you can use a special (commercial) executable-maker that provides some more functionality. Wrapping this exe and your python program in a meta-executable that launches both should be possible with some work. Of course you won't have the benefits if the AIR installer etc in this case. A: OK, so since it didn't seem possible to go that way around, I came up with an alternative that seems to work for what I want. Instead of trying to embed Python inside AIR, I've gone the other way around: I'm building my Python code into a stand-alone executable using PyInstaller and bundling the AIR application as a resource. The Python code then starts up it's webserver and fires off the AIR app which can then connect to the (local) remote services as required.
Can I "embed" a Python back-end in an AIR application?
I'm trying to find out if there is a way I could embed a Python back-end into an AIR application? I'm looking to employ an approach similar to the one outlined here to implement the business logic for my application, but additionally, I would like to provide the user with a single binary which they can load. I don't want the user to have to fire up a separate server process to make this work. Is this possible in some way or am I out of luck?
[ "Probably. We are using a J2EE server side which uses SOAP webservices to talk to our AIR application on the frontend. You should be able to do the same because soap doesn't care which technology sits on either side of it.\nYou can always have the application launch from a single binary which first fires up the server, then the client, if both are expected to sit on the users system. Also it gives you flexibility to have a more service oriented model later, if you want to. Without knowing what your app does, it is hard to know if that makes sense or not.\nFor setting up the python side of SOAP webservices, here's a useful link to a diveintopython article. Then, if you have your server running with the wsdl, FlexBuilder can generate the AIR side of the webservices for you.\n", "You cannot embed your Python server in an AIR application. So basically you are out of luck.\nThe simplest solution probably is to run a server on a central location that all your users can connect to from their AIR apps. That means that all/most of the data will be on your server, and not on the users computer, I don't know if that is a big issue but I guess it is.\nAlso depending on your target systems you could create the program you want yourself without (fully) depending on AIR. You can generate executables for windows and osx from Flash CS3/4 or you can use a special (commercial) executable-maker that provides some more functionality. Wrapping this exe and your python program in a meta-executable that launches both should be possible with some work. Of course you won't have the benefits if the AIR installer etc in this case.\n", "OK, so since it didn't seem possible to go that way around, I came up with an alternative that seems to work for what I want.\nInstead of trying to embed Python inside AIR, I've gone the other way around: I'm building my Python code into a stand-alone executable using PyInstaller and bundling the AIR application as a resource. The Python code then starts up it's webserver and fires off the AIR app which can then connect to the (local) remote services as required.\n" ]
[ 1, 1, 1 ]
[]
[]
[ "air", "apache_flex", "python" ]
stackoverflow_0001455722_air_apache_flex_python.txt
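A rough sketch of the launcher idea in the last answer -- serve the back-end locally in a background thread, then run the bundled AIR application. The executable path is a placeholder, and the bare handler class serves nothing useful by itself:
import subprocess
import threading
from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler

server = HTTPServer(('127.0.0.1', 8000), BaseHTTPRequestHandler)
t = threading.Thread(target=server.serve_forever)
t.setDaemon(True)   # don't keep the process alive after the app exits
t.start()
subprocess.call(['/path/to/MyAirApp.exe'])   # returns when the user quits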
Q: Python Eval: What's wrong with this code? I'm trying to write a very simple Python utility for personal use that counts the number of lines in a text file for which a predicate specified at the command line is true. Here's the code: import sys pred = sys.argv[2] if sys.argv[1] == "stdin" : handle = sys.stdin else : handle = open(sys.argv[1]) result = 0 for line in handle : eval('result += 1 if ' + pred + ' else 0') print result When I run it using python count.py myFile.txt "int(line) == 0", I get the following error: File "c:/pycode/count.py", line 10, in <module> eval('toAdd = 1 if ' + pred + ' else 0') File "<string>", line 1 toAdd = 1 if int(line) == 0 else 0 This looks like perfectly valid Python code to me (though I've never used Python's eval before, so I don't know what its quirks, if any, are). Please tell me how I can fix this to make it work. A: Try using exec instead of eval. The difference between the 2 is explained here A: try: for line in handle: result += 1 if eval(pred) else 0 A: #!/usr/bin/env python import fileinput, sys pred = eval('lambda line: ' + sys.argv[1]) print sum(1 for line in fileinput.input(sys.argv[2:]) if pred(line)) Usage: pywc.py predicate [FILE]... Print number of lines that satisfy predicate for given FILE(s). With no FILE, or when FILE is -, read standard input. A: The python eval() function evaluates expressions, not statements. Try replacing the eval() line with: result += eval(pred + " else 0") A: Really, you are looking for the compile function: >> a = compile("toAdd = 1 if int('0') == 0 else 0", 'tmp2.py', 'exec') >>> eval(a) >>> toAdd 1 eval is intended only for expressions... compile while compile sequence of statements into a codeblock that can then be eval'ed.
Python Eval: What's wrong with this code?
I'm trying to write a very simple Python utility for personal use that counts the number of lines in a text file for which a predicate specified at the command line is true. Here's the code: import sys pred = sys.argv[2] if sys.argv[1] == "stdin" : handle = sys.stdin else : handle = open(sys.argv[1]) result = 0 for line in handle : eval('result += 1 if ' + pred + ' else 0') print result When I run it using python count.py myFile.txt "int(line) == 0", I get the following error: File "c:/pycode/count.py", line 10, in <module> eval('toAdd = 1 if ' + pred + ' else 0') File "<string>", line 1 toAdd = 1 if int(line) == 0 else 0 This looks like perfectly valid Python code to me (though I've never used Python's eval before, so I don't know what its quirks, if any, are). Please tell me how I can fix this to make it work.
[ "Try using exec instead of eval. The difference between the 2 is explained here\n", "try:\nfor line in handle:\n result += 1 if eval(pred) else 0\n\n", "#!/usr/bin/env python\nimport fileinput, sys\n\npred = eval('lambda line: ' + sys.argv[1])\nprint sum(1 for line in fileinput.input(sys.argv[2:]) if pred(line))\n\nUsage: pywc.py predicate [FILE]...\nPrint number of lines that satisfy predicate for given FILE(s).\nWith no FILE, or when FILE is -, read standard input.\n", "The python eval() function evaluates expressions, not statements. Try replacing the eval() line with:\nresult += eval(pred + \" else 0\")\n\n", "Really, you are looking for the compile function:\n>> a = compile(\"toAdd = 1 if int('0') == 0 else 0\", 'tmp2.py', 'exec')\n>>> eval(a)\n>>> toAdd\n1\n\neval is intended only for expressions... compile while compile sequence of statements into a codeblock that can then be eval'ed.\n" ]
[ 11, 5, 3, 2, 0 ]
[]
[]
[ "eval", "python", "syntax_error" ]
stackoverflow_0001456760_eval_python_syntax_error.txt
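The accepted answer gives no code, so here is one way its exec suggestion could look in Python 2 (a sketch -- executing statements built from command-line input is just as risky as the eval original):
import sys

pred = sys.argv[2]
handle = sys.stdin if sys.argv[1] == 'stdin' else open(sys.argv[1])

result = 0
for line in handle:
    # exec is a statement in Python 2 and, unlike eval, accepts assignments
    exec('result += 1 if ' + pred + ' else 0')
print result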
Q: Return a random word from a word list in python I would like to retrieve a random word from a file using python, but I do not believe my following method is best or efficient. Please assist. import fileinput import _random file = [line for line in fileinput.input("/etc/dictionaries-common/words")] rand = _random.Random() print file[int(rand.random() * len(file))], A: The random module defines choice(), which does what you want: import random words = [line.strip() for line in open('/etc/dictionaries-common/words')] print(random.choice(words)) Note also that this assumes that each word is by itself on a line in the file. If the file is very big, or if you perform this operation frequently, you may find that constantly rereading the file impacts your application's performance negatively. A: Another solution is to use getline import linecache import random line_number = random.randint(0, total_num_lines) linecache.getline('/etc/dictionaries-common/words', line_number) From the documentation: The linecache module allows one to get any line from any file, while attempting to optimize internally, using a cache, the common case where many lines are read from a single file EDIT: You can calculate the total number once and store it, since the dictionary file is unlikely to change. A: >>> import random >>> random.choice(list(open('/etc/dictionaries-common/words'))) 'jaundiced\n' It is efficient human-time-wise. btw, your implementation coincides with the one from stdlib's random.py: def choice(self, seq): """Choose a random element from a non-empty sequence.""" return seq[int(self.random() * len(seq))] Measure time performance I was wondering what is the relative performance of the presented solutions. linecache-based is the obvious favorite. How much slower is the random.choice's one-liner compared to honest algorithm implemented in select_random_line()? # nadia_known_num_lines 9.6e-06 seconds 1.00 # nadia 0.056 seconds 5843.51 # jfs 0.062 seconds 1.10 # dcrosta_no_strip 0.091 seconds 1.48 # dcrosta 0.13 seconds 1.41 # mark_ransom_no_strip 0.66 seconds 5.10 # mark_ransom_choose_from 0.67 seconds 1.02 # mark_ransom 0.69 seconds 1.04 (Each function is called 10 times (cached performance)). These result show that simple solution (dcrosta) is faster in this case than a more deliberate one (mark_ransom). Code that was used for comparison (as a gist): import linecache import random from timeit import default_timer WORDS_FILENAME = "/etc/dictionaries-common/words" def measure(func): measure.func_to_measure.append(func) return func measure.func_to_measure = [] @measure def dcrosta(): words = [line.strip() for line in open(WORDS_FILENAME)] return random.choice(words) @measure def dcrosta_no_strip(): words = [line for line in open(WORDS_FILENAME)] return random.choice(words) def select_random_line(filename): selection = None count = 0 for line in file(filename, "r"): if random.randint(0, count) == 0: selection = line.strip() count = count + 1 return selection @measure def mark_ransom(): return select_random_line(WORDS_FILENAME) def select_random_line_no_strip(filename): selection = None count = 0 for line in file(filename, "r"): if random.randint(0, count) == 0: selection = line count = count + 1 return selection @measure def mark_ransom_no_strip(): return select_random_line_no_strip(WORDS_FILENAME) def choose_from(iterable): """Choose a random element from a finite `iterable`. If `iterable` is a sequence then use `random.choice()` for efficiency. 
Return tuple (random element, total number of elements) """ selection, i = None, None for i, item in enumerate(iterable): if random.randint(0, i) == 0: selection = item return selection, (i+1 if i is not None else 0) @measure def mark_ransom_choose_from(): return choose_from(open(WORDS_FILENAME)) @measure def nadia(): global total_num_lines total_num_lines = sum(1 for _ in open(WORDS_FILENAME)) line_number = random.randint(0, total_num_lines) return linecache.getline(WORDS_FILENAME, line_number) @measure def nadia_known_num_lines(): line_number = random.randint(0, total_num_lines) return linecache.getline(WORDS_FILENAME, line_number) @measure def jfs(): return random.choice(list(open(WORDS_FILENAME))) def timef(func, number=1000, timer=default_timer): """Return number of seconds it takes to execute `func()`.""" start = timer() for _ in range(number): func() return (timer() - start) / number def main(): # measure time times = dict((f.__name__, timef(f, number=10)) for f in measure.func_to_measure) # print from fastest to slowest maxname_len = max(map(len, times)) last = None for name in sorted(times, key=times.__getitem__): print "%s %4.2g seconds %.2f" % (name.ljust(maxname_len), times[name], last and times[name] / last or 1) last = times[name] if __name__ == "__main__": main() A: Pythonizing my answer from What’s the best way to return a random line in a text file using C? : import random def select_random_line(filename): selection = None count = 0 for line in file(filename, "r"): if random.randint(0, count) == 0: selection = line.strip() count = count + 1 return selection print select_random_line("/etc/dictionaries-common/words") Edit: the original version of my answer used readlines, which didn't work as I thought and was totally unnecessary. This version will iterate through the file instead of reading it all into memory, and do it in a single pass, which should make it much more efficient than any answer I've seen thus far. Generalized version import random def choose_from(iterable): """Choose a random element from a finite `iterable`. If `iterable` is a sequence then use `random.choice()` for efficiency. Return tuple (random element, total number of elements) """ selection, i = None, None for i, item in enumerate(iterable): if random.randint(0, i) == 0: selection = item return selection, (i+1 if i is not None else 0) Examples print choose_from(open("/etc/dictionaries-common/words")) print choose_from(dict(a=1, b=2)) print choose_from(i for i in range(10) if i % 3 == 0) print choose_from(i for i in range(10) if i % 11 == 0 and i) # empty print choose_from([0]) # one element chunk, n = choose_from(urllib2.urlopen("http://google.com")) print (chunk[:20], n) Output ('yeps\n', 98569) ('a', 2) (6, 4) (None, 0) (0, 1) ('window._gjp && _gjp(', 10) A: You could do this without using fileinput: import random data = open("/etc/dictionaries-common/words").readlines() print random.choice(data) I have also used data instead of file because file is a predefined type in Python. A: I don't have code for you but as far as an algorithm goes: Find the file's size Do a random seek with the seek() function Find the next (or previous) whitespace character Return the word that starts after that whitespace character A: Efficiency and verbosity aren't the same thing in this case. It's tempting to go for the most beautiful, pythonic approach that does everything in one or two lines but for file I/O, stick with classic fopen-style, low-level interaction, even if it does take up a few more lines of code. 
I could copy and paste some code and claim it to be my own (others can if they want) but have a look at this: http://mail.python.org/pipermail/tutor/2007-July/055635.html A: There are a few different ways to optimize this problem. You can optimize for speed, or for space. If you want a quick but memory-hungry solution, read in the entire file using file.readlines() and then use random.choice() If you want a memory-efficient solution, first check the number of lines in the file by calling somefile.readline() repeatedly until it returns "", then generate a random number smaller then the number of lines (say, n), seek back to the beginning of the file, and finally call somefile.readline() n times. The next call to somefile.readline() will return the desired random line. This approach wastes no memory holding "unnecessary" lines. Of course, if you plan on getting lots of random lines from the file, this will be horribly inefficient, and it's better to just keep the entire file in memory, like in the first approach.
Return a random word from a word list in python
I would like to retrieve a random word from a file using python, but I do not believe my following method is best or efficient. Please assist. import fileinput import _random file = [line for line in fileinput.input("/etc/dictionaries-common/words")] rand = _random.Random() print file[int(rand.random() * len(file))],
[ "The random module defines choice(), which does what you want:\nimport random\n\nwords = [line.strip() for line in open('/etc/dictionaries-common/words')]\nprint(random.choice(words))\n\nNote also that this assumes that each word is by itself on a line in the file. If the file is very big, or if you perform this operation frequently, you may find that constantly rereading the file impacts your application's performance negatively.\n", "Another solution is to use getline\nimport linecache\nimport random\nline_number = random.randint(0, total_num_lines)\nlinecache.getline('/etc/dictionaries-common/words', line_number)\n\nFrom the documentation:\n\nThe linecache module allows one to get\nany line from any file, while\nattempting to optimize internally,\nusing a cache, the common case where\nmany lines are read from a single file\n\nEDIT:\nYou can calculate the total number once and store it, since the dictionary file is unlikely to change.\n", ">>> import random\n>>> random.choice(list(open('/etc/dictionaries-common/words')))\n'jaundiced\\n'\n\nIt is efficient human-time-wise. \nbtw, your implementation coincides with the one from stdlib's random.py:\n def choice(self, seq):\n \"\"\"Choose a random element from a non-empty sequence.\"\"\"\n return seq[int(self.random() * len(seq))] \n\nMeasure time performance\nI was wondering what is the relative performance of the presented solutions. linecache-based is the obvious favorite. How much slower is the random.choice's one-liner compared to honest algorithm implemented in select_random_line()?\n# nadia_known_num_lines 9.6e-06 seconds 1.00\n# nadia 0.056 seconds 5843.51\n# jfs 0.062 seconds 1.10\n# dcrosta_no_strip 0.091 seconds 1.48\n# dcrosta 0.13 seconds 1.41\n# mark_ransom_no_strip 0.66 seconds 5.10\n# mark_ransom_choose_from 0.67 seconds 1.02\n# mark_ransom 0.69 seconds 1.04\n\n(Each function is called 10 times (cached performance)).\nThese result show that simple solution (dcrosta) is faster in this case than a more deliberate one (mark_ransom).\nCode that was used for comparison (as a gist):\nimport linecache\nimport random\nfrom timeit import default_timer\n\n\nWORDS_FILENAME = \"/etc/dictionaries-common/words\"\n\n\ndef measure(func):\n measure.func_to_measure.append(func)\n return func\nmeasure.func_to_measure = []\n\n\n@measure\ndef dcrosta():\n words = [line.strip() for line in open(WORDS_FILENAME)]\n return random.choice(words)\n\n\n@measure\ndef dcrosta_no_strip():\n words = [line for line in open(WORDS_FILENAME)]\n return random.choice(words)\n\n\ndef select_random_line(filename):\n selection = None\n count = 0\n for line in file(filename, \"r\"):\n if random.randint(0, count) == 0:\n selection = line.strip()\n count = count + 1\n return selection\n\n\n@measure\ndef mark_ransom():\n return select_random_line(WORDS_FILENAME)\n\n\ndef select_random_line_no_strip(filename):\n selection = None\n count = 0\n for line in file(filename, \"r\"):\n if random.randint(0, count) == 0:\n selection = line\n count = count + 1\n return selection\n\n\n@measure\ndef mark_ransom_no_strip():\n return select_random_line_no_strip(WORDS_FILENAME)\n\n\ndef choose_from(iterable):\n \"\"\"Choose a random element from a finite `iterable`.\n\n If `iterable` is a sequence then use `random.choice()` for efficiency.\n\n Return tuple (random element, total number of elements)\n \"\"\"\n selection, i = None, None\n for i, item in enumerate(iterable):\n if random.randint(0, i) == 0:\n selection = item\n\n return selection, (i+1 if i is not None else 
0)\n\n\n@measure\ndef mark_ransom_choose_from():\n return choose_from(open(WORDS_FILENAME))\n\n\n@measure\ndef nadia():\n global total_num_lines\n total_num_lines = sum(1 for _ in open(WORDS_FILENAME))\n\n line_number = random.randint(0, total_num_lines)\n return linecache.getline(WORDS_FILENAME, line_number)\n\n\n@measure\ndef nadia_known_num_lines():\n line_number = random.randint(0, total_num_lines)\n return linecache.getline(WORDS_FILENAME, line_number)\n\n\n@measure\ndef jfs():\n return random.choice(list(open(WORDS_FILENAME)))\n\n\ndef timef(func, number=1000, timer=default_timer):\n \"\"\"Return number of seconds it takes to execute `func()`.\"\"\"\n start = timer()\n for _ in range(number):\n func()\n return (timer() - start) / number\n\n\ndef main():\n # measure time\n times = dict((f.__name__, timef(f, number=10))\n for f in measure.func_to_measure)\n\n # print from fastest to slowest\n maxname_len = max(map(len, times))\n last = None\n for name in sorted(times, key=times.__getitem__):\n print \"%s %4.2g seconds %.2f\" % (name.ljust(maxname_len), times[name],\n last and times[name] / last or 1)\n last = times[name]\n\n\nif __name__ == \"__main__\":\n main()\n\n", "Pythonizing my answer from What’s the best way to return a random line in a text file using C? :\nimport random\n\ndef select_random_line(filename):\n selection = None\n count = 0\n for line in file(filename, \"r\"):\n if random.randint(0, count) == 0:\n selection = line.strip()\n count = count + 1\n return selection\n\nprint select_random_line(\"/etc/dictionaries-common/words\")\n\nEdit: the original version of my answer used readlines, which didn't work as I thought and was totally unnecessary. This version will iterate through the file instead of reading it all into memory, and do it in a single pass, which should make it much more efficient than any answer I've seen thus far.\nGeneralized version\nimport random\n\ndef choose_from(iterable):\n \"\"\"Choose a random element from a finite `iterable`.\n\n If `iterable` is a sequence then use `random.choice()` for efficiency.\n\n Return tuple (random element, total number of elements)\n \"\"\"\n selection, i = None, None\n for i, item in enumerate(iterable):\n if random.randint(0, i) == 0:\n selection = item\n\n return selection, (i+1 if i is not None else 0)\n\nExamples\nprint choose_from(open(\"/etc/dictionaries-common/words\"))\nprint choose_from(dict(a=1, b=2))\nprint choose_from(i for i in range(10) if i % 3 == 0)\nprint choose_from(i for i in range(10) if i % 11 == 0 and i) # empty\nprint choose_from([0]) # one element\nchunk, n = choose_from(urllib2.urlopen(\"http://google.com\"))\nprint (chunk[:20], n)\n\nOutput\n\n('yeps\\n', 98569)\n('a', 2)\n(6, 4)\n(None, 0)\n(0, 1)\n('window._gjp && _gjp(', 10)\n\n", "You could do this without using fileinput:\nimport random\ndata = open(\"/etc/dictionaries-common/words\").readlines()\nprint random.choice(data)\n\nI have also used data instead of file because file is a predefined type in Python.\n", "I don't have code for you but as far as an algorithm goes:\n\nFind the file's size\nDo a random seek with the seek() function\nFind the next (or previous) whitespace character\nReturn the word that starts after that whitespace character\n\n", "Efficiency and verbosity aren't the same thing in this case. 
It's tempting to go for the most beautiful, pythonic approach that does everything in one or two lines but for file I/O, stick with classic fopen-style, low-level interaction, even if it does take up a few more lines of code.\nI could copy and paste some code and claim it to be my own (others can if they want) but have a look at this: http://mail.python.org/pipermail/tutor/2007-July/055635.html\n", "There are a few different ways to optimize this problem. You can optimize for speed, or for space.\nIf you want a quick but memory-hungry solution, read in the entire file using file.readlines() and then use random.choice()\nIf you want a memory-efficient solution, first check the number of lines in the file by calling somefile.readline() repeatedly until it returns \"\", then generate a random number smaller then the number of lines (say, n), seek back to the beginning of the file, and finally call somefile.readline() n times. The next call to somefile.readline() will return the desired random line. This approach wastes no memory holding \"unnecessary\" lines. Of course, if you plan on getting lots of random lines from the file, this will be horribly inefficient, and it's better to just keep the entire file in memory, like in the first approach.\n" ]
[ 17, 9, 9, 3, 1, 1, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001456617_python.txt
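One detail worth spelling out about the linecache answer: getline() numbers lines from 1 and returns '' for line 0, so the random line number should start at 1. A corrected sketch:
import linecache
import random

filename = '/etc/dictionaries-common/words'
total = sum(1 for _ in open(filename))   # count the lines once
print linecache.getline(filename, random.randint(1, total)).strip()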
Q: Renaming contents of text file using Regular Expressions I have a text file with several lines in the following format: gatename #outputs #inputs list_of_inputs_separated_by_spaces * gate_id example: nand 3 2 10 11 * G0 (The two inputs to the nand gate are 10 and 11) or 2 1 10 * G1 (The only input to the or gate is gate 10) What I need to do is rename the contents such that I eliminate the #outputs column so that the end result is: gatename #inputs list_of_inputs_separated_by_spaces * gate_id nand 2 10 11 * G0 or 1 10 * G1 I tried using the find and replace function of Eclipse (the find parameter was a regex statement that didn't work), but it ended up messing up the gatename. I am considering using a Python script and iterating over each line of the text file. What I need help with is determining what the appropriate regex statement is. A: This is basically what the cut utility is for: cut -d " " -f 1,3- (update: I forgot the -f option, sorry.) This takes a file, considers fields delimited by spaces, and outputs the first, third and following fields. (If you're on Windows, you should have these unix-style utilities anyway, they can be incredibly useful.) Using a regex, you could replace (\w+) \d+ (.*) with $1 $2. Something like: sed -r -e "s/([^ ]+) [0-9]+ (.*)/\1 \2/" file or perl -p -e "s/(\w+) \d+ (.*)/\1 $2/" file A: Something like...: for theline in fileinput.input(inplace=1): print re.sub(r'(\w+\s+)\d+\s+(.*)', r'\1\2', theline), ...should meet your needs. A: Personally, if it is this structured of a document, don't bother with a regex. Just loop through the file, do a split on the " " character, then simply omit the second entry. A: You can indeed use Eclipse's find and replace feature, using the following: Find: ^([a-z]+) \d Replace with: \1 This is essentially matching the gatename at the beginning of each line (^([a-z]+)) followed by the output (\d), and replacing it with just the matched gatename (\1). A: I don't know what platform you're using Eclipse on, but if it's linux or you have cygwin, cut is very fast! cut -d" " --complement -f2 $FILE This will use space as the delimiter, and select the complement of the second field. If you really want to use a regular expression, you can do something like this: sed -r 's/^ *([^ ]+) +[^ ]+ +(.+)/\1 \2/' $FILE You could easily use the same expression in python or perl, of course, but Mitchel's right - splitting is easy. (Unless the text is extremely long, and it'll waste time unnecessarily splitting other fields).
Renaming contents of text file using Regular Expressions
I have a text file with several lines in the following format: gatename #outputs #inputs list_of_inputs_separated_by_spaces * gate_id example: nand 3 2 10 11 * G0 (The two inputs to the nand gate are 10 and 11) or 2 1 10 * G1 (The only input to the or gate is gate 10) What I need to do is rename the contents such that I eliminate the #outputs column so that the end result is: gatename #inputs list_of_inputs_separated_by_spaces * gate_id nand 2 10 11 * G0 or 1 10 * G1 I tried using the find and replace function of Eclipse (the find parameter was a regex statement that didn't work), but it ended up messing up the gatename. I am considering using a Python script and iterating over each line of the text file. What I need help with is determining what the appropriate regex statement is.
[ "This is basically what the cut utility is for:\ncut -d \" \" -f 1,3-\n\n(update: I forgot the -f option, sorry.)\nThis takes a file, considers fields delimited by spaces, and outputs the first, third and following fields.\n(If you're on Windows, you should have these unix-style utilities anyway, they can be incredibly useful.)\nUsing a regex, you could replace (\\w+) \\d+ (.*) with $1 $2. Something like:\nsed -r -e \"s/([^ ]+) [0-9]+ (.*)/\\1 \\2/\" file\n\nor\nperl -p -e \"s/(\\w+) \\d+ (.*)/\\1 $2/\" file\n\n", "Something like...:\nfor theline in fileinput.input(inplace=1):\n print re.sub(r'(\\w+\\s*+)\\d+\\s+(.*)', r'\\1\\2', theline),\n\n...should meet your needs.\n", "Personally, if it is this structured of a document, don't bother with a regex.\nJust loop through the file, do a split on the \" \" character, then simply omit the second entry.\n", "You can indeed use Eclipse's find and replace feature, using the following:\nFind: ^([a-z]+) \\d\nReplace with: \\1\n\nThis is essentially matching the gatename at the beginning of each line (^([a-z]+)) followed by the output (\\d), and replacing it with just the matched gatename (\\1).\n", "I don't know what platform you're using Eclipse on, but if it's linux or you have cygwin, cut is very fast!\ncut -d\" \" --complement -f2 $FILE\n\nThis will use space as the delimiter, and select the complement of the second field.\nIf you really want to use a regular expression, you can do something like this:\nsed -r 's/^ *([^ ]+) +[^ ]+ +(.+)/\\1 \\2/' $FILE\n\nYou could easily use the same expression in python or perl, of course, but Mitchel's right - splitting is easy. (Unless the text is extremely long, and it'll waste time unnecessarily splitting other fields).\n" ]
[ 4, 2, 1, 1, 0 ]
[]
[]
[ "eclipse", "python", "regex" ]
stackoverflow_0001457100_eclipse_python_regex.txt
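The split-based answer gives no code; a minimal sketch (the input file name is a placeholder):
for line in open('gates.txt'):
    parts = line.split()
    print ' '.join(parts[:1] + parts[2:])   # drop the second column (#outputs)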
Q: Simple file transfer over wifi between computer and mobile phone using python I'd like to be able to transfer files between my mobile phone and computer. The phone is a smartphone that can run python 2.5.4 and the computer is running windows xp (with python 2.5.4 and 3.1.1). I'd like to have a simple python program on the phone that can send files to the computer and get files from the computer. The phone end should only run when invoked, the computer end can be a server, although preferably something that does not use a lot of resources. The phone end should be able to figure out what's in the relevant directory on the computer. At the moment I'm getting files from computer to phone by running windows web server on the computer (ugh) and a script with socket.set_ default _ access_point (so the program can pick my router's ssid or other transport) and urlretrieve (to get the files) on the phone. I'm sending files the other way by email using smtplib. Suggestions would be appreciated, whether a general idea, existing programs or anything in between. A: I would use paramiko. It's secure fast and really simple. How bout this? So we start by importing the module, and specifying the log file: import paramiko paramiko.util.log_to_file('/tmp/paramiko.log') We open an SSH transport: host = "example.com" port = 22 transport = paramiko.Transport((host, port)) Next we want to authenticate. We can do this with a password: password = "example101" username = "warrior" transport.connect(username = username, password = password) Another way is to use an SSH key: import os privatekeyfile = os.path.expanduser('~/.ssh/id_rsa') mykey = paramiko.RSAKey.from_private_key_file(privatekeyfile) username = 'warrior' transport.connect(username = username, pkey = mykey) Now we can start the SFTP client: sftp = paramiko.SFTPClient.from_transport(transport) Now lets pull a file across from the remote to the local system: filepath = '/home/zeth/lenna.jpg' localpath = '/home/zeth/lenna.jpg' sftp.get(filepath, localpath) Now lets go the other way: filepath = '/home/zeth/lenna.jpg' localpath = '/home/zeth/lenna.jpg' sftp.put(filepath, localpath) Lastly, we need to close the SFTP connection and the transport: sftp.close() transport.close() How's that?? I have to give credit to this for the example. A: I ended up using python's ftplib on the phone and FileZilla, an ftp sever, on the computer. Advantages are high degree of simplicity, although there may be security issues. In case anyone cares, here's the guts of the client side code to send and receive files. Actual implementation has a bit more infrastructure. from ftplib import FTP import os ftp = FTP() ftp.connect(server, port) ftp.login(user, pwd) files = ftp.nlst() # get a list of files on the server # decide which file we want fn = 'test.py' # filename on server and for local storage d = 'c:/temp/' # local directory to store file path = os.path.join(d,fn) r = ftp.retrbinary('RETR %s' % fn, open(path, 'wb').write) print(r) # should be: 226 Transfer OK f = open(path, 'rb') # send file at path r = ftp.storbinary('STOR %s' % fn, f) # call it fn on server print(r) # should be: 226 Transfer OK f.close() ftp.quit() A: There are a couple of examples out there, but you have to keep in mind that, IIRC, PyBluez will work only on Linux. I've previously done OBEX-related things, mostly fetching things from mobile phones, using the obexftp program 2 which is part of the OpenOBEX project 3. 
Naturally, you can call the obexftp program from Python and interpret the responses and exit codes using functions in the os, popen2 and subprocess modules. I believe that obexftp also supports "push" mode, but you could probably find something else related to OpenOBEX if it does not. Since Bluetooth communications are supported using sockets in GNU/ Linux distributions and in Python (provided that the Bluetooth support is detected and configured), you could communicate with phones using plain network programming, but this would probably require you to implement the OBEX protocols yourself - not a straightforward task for a number of reasons, including one I mention below. Thus, it's probably easier to go with obexftp at least initially. You also have lightblue, that is a cross-os bluetooth library. There is also a complete script, PUTools: Python Utility Tools for PyS60 Python (examples has Windows screenshots), that has a: Python interpreter that takes input and shows output on PC, connects over Bluetooth to phone, and executes on the phone. You also get simple shell functionality for the phone (cd, ls, rm, etc.). The tool also allows you to synchronize files both from PC to phone (very useful in application development) and from phone to PC (your images, logfiles from the program you are working on, etc.).
Simple file transfer over wifi between computer and mobile phone using python
I'd like to be able to transfer files between my mobile phone and computer. The phone is a smartphone that can run python 2.5.4 and the computer is running windows xp (with python 2.5.4 and 3.1.1). I'd like to have a simple python program on the phone that can send files to the computer and get files from the computer. The phone end should only run when invoked, the computer end can be a server, although preferably something that does not use a lot of resources. The phone end should be able to figure out what's in the relevant directory on the computer. At the moment I'm getting files from computer to phone by running windows web server on the computer (ugh) and a script with socket.set_ default _ access_point (so the program can pick my router's ssid or other transport) and urlretrieve (to get the files) on the phone. I'm sending files the other way by email using smtplib. Suggestions would be appreciated, whether a general idea, existing programs or anything in between.
[ "I would use paramiko. It's secure fast and really simple. How bout this?\nSo we start by importing the module, and specifying the log file:\nimport paramiko\nparamiko.util.log_to_file('/tmp/paramiko.log')\n\nWe open an SSH transport:\nhost = \"example.com\"\nport = 22\ntransport = paramiko.Transport((host, port))\n\nNext we want to authenticate. We can do this with a password:\npassword = \"example101\"\nusername = \"warrior\"\ntransport.connect(username = username, password = password)\n\nAnother way is to use an SSH key:\nimport os\nprivatekeyfile = os.path.expanduser('~/.ssh/id_rsa')\nmykey = paramiko.RSAKey.from_private_key_file(privatekeyfile)\nusername = 'warrior'\ntransport.connect(username = username, pkey = mykey)\n\nNow we can start the SFTP client:\nsftp = paramiko.SFTPClient.from_transport(transport)\n\nNow lets pull a file across from the remote to the local system:\nfilepath = '/home/zeth/lenna.jpg'\nlocalpath = '/home/zeth/lenna.jpg'\nsftp.get(filepath, localpath)\n\nNow lets go the other way:\nfilepath = '/home/zeth/lenna.jpg'\nlocalpath = '/home/zeth/lenna.jpg'\nsftp.put(filepath, localpath)\n\nLastly, we need to close the SFTP connection and the transport:\nsftp.close()\ntransport.close()\n\nHow's that?? I have to give credit to this for the example.\n", "I ended up using python's ftplib on the phone and FileZilla, an ftp sever, on the computer. Advantages are high degree of simplicity, although there may be security issues.\nIn case anyone cares, here's the guts of the client side code to send and receive files. Actual implementation has a bit more infrastructure.\nfrom ftplib import FTP\nimport os\n\nftp = FTP()\nftp.connect(server, port)\nftp.login(user, pwd)\n\nfiles = ftp.nlst() # get a list of files on the server\n# decide which file we want\n\nfn = 'test.py' # filename on server and for local storage\nd = 'c:/temp/' # local directory to store file\npath = os.path.join(d,fn)\nr = ftp.retrbinary('RETR %s' % fn, open(path, 'wb').write)\nprint(r) # should be: 226 Transfer OK\n\nf = open(path, 'rb') # send file at path\nr = ftp.storbinary('STOR %s' % fn, f) # call it fn on server\nprint(r) # should be: 226 Transfer OK\nf.close()\n\nftp.quit()\n\n", "There are a couple of examples out there, but you have to keep in mind that, IIRC, PyBluez will work only on Linux.\n\nI've previously done OBEX-related things, mostly fetching things from\n mobile phones, using the obexftp program 2 which is part of the\n OpenOBEX project 3. Naturally, you can call the obexftp program from\n Python and interpret the responses and exit codes using functions in\n the os, popen2 and subprocess modules. I believe that obexftp also\n supports \"push\" mode, but you could probably find something else\n related to OpenOBEX if it does not.\nSince Bluetooth communications are supported using sockets in GNU/\n Linux distributions and in Python (provided that the Bluetooth support\n is detected and configured), you could communicate with phones using\n plain network programming, but this would probably require you to\n implement the OBEX protocols yourself - not a straightforward task for\n a number of reasons, including one I mention below. 
Thus, it's\n probably easier to go with obexftp at least initially.\n\nYou also have lightblue, that is a cross-os bluetooth library.\nThere is also a complete script, PUTools: Python Utility Tools for PyS60 Python (examples has Windows screenshots), that has a:\n\nPython interpreter that takes input and shows output on PC, connects over Bluetooth to phone, and executes on the phone. You also get simple shell functionality for the phone (cd, ls, rm, etc.). The tool also allows you to synchronize files both from PC to phone (very useful in application development) and from phone to PC (your images, logfiles from the program you are working on, etc.).\n\n" ]
[ 3, 1, 0 ]
[]
[]
[ "file_transfer", "mobile_phones", "python" ]
stackoverflow_0001451849_file_transfer_mobile_phones_python.txt
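For the quoted obexftp suggestion, driving the external program from Python could look roughly like this; the device address, the -b/-p flags and the file name are all assumptions, so check obexftp --help on your system:
import subprocess

rc = subprocess.call(['obexftp', '-b', '00:11:22:33:44:55', '-p', 'photo.jpg'])
print 'obexftp exit code:', rc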
Q: Can Python encode a string to match ASP.NET membership provider's EncodePassword I'm working on a Python script to create hashed strings from an existing system similar to that of ASP.NET's MembershipProvider. Using Python, is there a way to take a hexadecimal string and convert it back to a binary and then do a base64 encoding, somehow treating the original string as Unicode? Let's try some code. I'm looking to re-encode a hashed password so that the hashes would be equal in Python and ASP.NET/C#: import base64 import sha import binascii def EncodePassword(password): # strings are currently stored as hex hex_hashed_password = sha.sha(password).hexdigest() # attempt to convert hex to base64 bin_hashed_password = binascii.unhexlify(hex_hashed_password) return base64.standard_b64encode(bin_hashed_password) print EncodePassword("password") # W6ph5Mm5Pz8GgiULbPgzG37mj9g= The ASP.NET MembershipProvider uses this method to encode: static string EncodePassword(string pass) { byte[] bytes = Encoding.Unicode.GetBytes(pass); //bytes = Encoding.ASCII.GetBytes(pass); byte[] inArray = null; HashAlgorithm algorithm = HashAlgorithm.Create("SHA1"); inArray = algorithm.ComputeHash(bytes); return Convert.ToBase64String(inArray); } string s = EncodePassword("password"); // 6Pl/upEE0epQR5SObftn+s2fW3M= That doesn't match. But, when I run it with the password encoded with ASCII encoding, it matches, so the Unicode part of the .NET method is what's the difference. W6ph5Mm5Pz8GgiULbPgzG37mj9g= Is there a way in the python script to get an output to match the default .NET version? A: This is the trick: Encoding.Unicode "Unicode" encoding is confusing Microsoft-speak for UTF-16LE (specifically, without any BOM). Encode the string to that before hashing and you get the right answer: >>> import hashlib >>> p= u'password' >>> hashlib.sha1(p.encode('utf-16le')).digest().encode('base64') '6Pl/upEE0epQR5SObftn+s2fW3M=\n'
Can Python encode a string to match ASP.NET membership provider's EncodePassword
I'm working on a Python script to create hashed strings from an existing system similar to that of ASP.NET's MembershipProvider. Using Python, is there a way to take a hexadecimal string and convert it back to a binary and then do a base64 encoding, somehow treating the original string as Unicode. Let's try some code. I'm looking to re-encode a hashed password so that the hashes would be equal in Python and ASP.NET/C#: import base64 import sha import binascii def EncodePassword(password): # strings are currently stored as hex hex_hashed_password = sha.sha(password).hexdigest() # attempt to convert hex to base64 bin_hashed_password = binascii.unhexlify(hex_hashed_password) return base64.standard_b64encode(bin_hashed_password) print EncodePassword("password") # W6ph5Mm5Pz8GgiULbPgzG37mj9g= The ASP.NET MembershipProvider uses this method to encode: static string EncodePassword(string pass) { byte[] bytes = Encoding.Unicode.GetBytes(pass); //bytes = Encoding.ASCII.GetBytes(pass); byte[] inArray = null; HashAlgorithm algorithm = HashAlgorithm.Create("SHA1"); inArray = algorithm.ComputeHash(bytes); return Convert.ToBase64String(inArray); } string s = EncodePassword("password"); // 6Pl/upEE0epQR5SObftn+s2fW3M= That doesn't match. But, when I run it with the password encoded with ASCII encoding, it matches, so the Unicode part of the .NET method is what's the difference. W6ph5Mm5Pz8GgiULbPgzG37mj9g= Is there a way in the python script to get an output to match the default .NET version?
[ "This is the trick:\n\nEncoding.Unicode\n\n“Unicode” encoding is confusing Microsoft-speak for UTF-16LE (specifically, without any BOM). Encode the string to that before hashing and you get the right answer:\n>>> import hashlib\n>>> p= u'password'\n>>> hashlib.sha1(p.encode('utf-16le')).digest().encode('base64')\n'6Pl/upEE0epQR5SObftn+s2fW3M=\\n'\n\n" ]
[ 5 ]
[]
[]
[ ".net", "asp.net", "c#", "python", "unicode" ]
stackoverflow_0001456770_.net_asp.net_c#_python_unicode.txt
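The same UTF-16LE trick in Python 3, where the sha module and str.encode('base64') are gone; a minimal sketch using only hashlib and base64 from the standard library:

import base64
import hashlib

# Encoding.Unicode in .NET is UTF-16LE without a BOM
digest = hashlib.sha1('password'.encode('utf-16-le')).digest()
print(base64.b64encode(digest).decode('ascii'))  # 6Pl/upEE0epQR5SObftn+s2fW3M=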
Q: python shell command - why won't it work? I wonder if anyone has any insights into this. I have a bash script that should put my ssh key onto a remote machine. Adopted from here, the script reads, #!/usr/bin/sh REMOTEHOST=user@remote KEY="$HOME/.ssh/id_rsa.pub" KEYCODE=`cat $KEY` ssh -q $REMOTEHOST "mkdir ~/.ssh 2>/dev/null; chmod 700 ~/.ssh; echo "$KEYCODE" >> ~/.ssh/authorized_keys; chmod 644 ~/.ssh/authorized_keys" This works. The equivalent python script should be #!/usr/bin/python import os os.system('ssh -q %(REMOTEHOST)s "mkdir ~/.ssh 2>/dev/null; chmod 700 ~/.ssh; echo "%(KEYCODE)s" >> ~/.ssh/authorized_keys; chmod 644 ~/.ssh/authorized_keys"' % {'REMOTEHOST':'user@remote', 'KEYCODE':open(os.path.join(os.environ['HOME'], '.ssh/id_rsa.pub'),'r').read()}) But in this case, I get that sh: line 1: >> ~/.ssh/authorized_keys; chmod 644 ~/.ssh/authorized_keys: No such file or directory What am I doing wrong? I tried escaping the inner-most quotes but same error message... Thank you in advance for your responses. A: You have a serious question -- in that os.system isn't behaving the way you expect it to -- but also, you should seriously rethink the approach as a whole. You're launching a Python interpreter -- but then, via os.system, telling that Python interpreter to launch a shell! os.system shouldn't be used at all in modern Python (subprocess is a complete replacement)... but using any Python call which starts a shell instance is exceptionally silly in this kind of use case. Now, in terms of the actual, immediate problem -- look at how your quotation marks are nesting. You'll see that the quote you're starting before mkdir is being closed in the echo, allowing your command to be split in a spot you don't intend. The following fixes this immediate issue, but is still awful and evil (starts a subshell unnecessarily, doesn't properly check output status, and should be converted to use subprocess.Popen()): os.system('''ssh -q %(REMOTEHOST)s "mkdir ~/.ssh 2>/dev/null; chmod 700 ~/.ssh; echo '%(KEYCODE)s' >> ~/.ssh/authorized_keys; chmod 644 ~/.ssh/authorized_keys"''' % { 'REMOTEHOST':'user@remote', 'KEYCODE':open(os.path.join(os.environ['HOME'], '.ssh/id_rsa.pub'),'r').read() })
python shell command - why won't it work?
I wonder if anyone has any insights into this. I have a bash script that should put my ssh key onto a remote machine. Adopted from here, the script reads, #!/usr/bin/sh REMOTEHOST=user@remote KEY="$HOME/.ssh/id_rsa.pub" KEYCODE=`cat $KEY` ssh -q $REMOTEHOST "mkdir ~/.ssh 2>/dev/null; chmod 700 ~/.ssh; echo "$KEYCODE" >> ~/.ssh/authorized_keys; chmod 644 ~/.ssh/authorized_keys" This works. The equivalent python script should be #!/usr/bin/python import os os.system('ssh -q %(REMOTEHOST)s "mkdir ~/.ssh 2>/dev/null; chmod 700 ~/.ssh; echo "%(KEYCODE)s" >> ~/.ssh/authorized_keys; chmod 644 ~/.ssh/authorized_keys"' % {'REMOTEHOST':'user@remote', 'KEYCODE':open(os.path.join(os.environ['HOME'], '.ssh/id_rsa.pub'),'r').read()}) But in this case, I get that sh: line 1: >> ~/.ssh/authorized_keys; chmod 644 ~/.ssh/authorized_keys: No such file or directory What am I doing wrong? I tried escaping the inner-most quotes but same error message... Thank you in advance for your responses.
[ "You have a serious question -- in that os.system isn't behaving the way you expect it to -- but also, you should seriously rethink the approach as a whole.\nYou're launching a Python interpreter -- but then, via os.system, telling that Python interpreter to launch a shell! os.system shouldn't be used at all in modern Python (subprocess is a complete replacement)... but using any Python call which starts a shell instance is exceptionally silly in this kind of use case.\nNow, in terms of the actual, immediate problem -- look at how your quotation marks are nesting. You'll see that the quote you're starting before mkdir is being closed in the echo, allowing your command to be split in a spot you don't intend.\nThe following fixes this immediate issue, but is still awful and evil (starts a subshell unnecessarily, doesn't properly check output status, and should be converted to use subprocess.Popen()):\nos.system('''ssh -q %(REMOTEHOST)s \"mkdir ~/.ssh 2>/dev/null; chmod 700 ~/.ssh; echo '%(KEYCODE)s' >> ~/.ssh/authorized_keys; chmod 644 ~/.ssh/authorized_keys\"''' % {\n 'REMOTEHOST':'user@remote',\n 'KEYCODE':open(os.path.join(os.environ['HOME'], '.ssh/id_rsa.pub'),'r').read()\n})\n\n" ]
[ 5 ]
[]
[]
[ "python", "shell" ]
stackoverflow_0001457757_python_shell.txt
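A sketch of the subprocess rewrite the answer recommends; the remote command travels as a single argv element, so the local shell never re-parses the nested quoting (host and key path are placeholders, and the key is assumed to contain no single quotes):

import os
import subprocess

key = open(os.path.expanduser('~/.ssh/id_rsa.pub')).read().strip()
remote = ("mkdir -p ~/.ssh; chmod 700 ~/.ssh; "
          "echo '%s' >> ~/.ssh/authorized_keys; "
          "chmod 644 ~/.ssh/authorized_keys" % key)
# 'ssh', '-q', the host and the command are separate arguments -- no local shell involved
subprocess.check_call(['ssh', '-q', 'user@remote', remote])  # raises CalledProcessError on a non-zero exit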
Q: How do I find the name of the file that is the importer, within the imported file? How do I find the name of the file that is the "importer", within the imported file? If a.py and b.py both import c.py, is there any way that c.py can know the name of the file importing it? A: Use sys.path[0] which returns the path of the script that launched the python interpreter. If you call this script directly, it will return the path of the script. If the script, however, was imported from another script, it will return the path of that script. See Python Path Issues A: In the top-level of c.py (i.e. outside of any function or class), you should be able to get the information you need by running import traceback and then examining the result of traceback.extract_stack(). At the time that top-level code is run, the importer of the module (and its importer, etc. recursively) are all on the callstack. A: That's why you have parameters. It's not the job of c.py to determine who imported it. It's the job of a.py or b.py to pass the variable __name__ to the functions or classes in c.py. A: It can be done by inspecting the stack: #inside c.py: import inspect FRAME_FILENAME = 1 print "Imported from: ", inspect.getouterframes(inspect.currentframe())[-1][FRAME_FILENAME] #or: print "Imported from: ", inspect.stack()[-1][FRAME_FILENAME] But inspecting the stack can be buggy. Why do you need to know where a file is being imported from? Why not have the file that does the importing (a.py and b.py) pass in a name into c.py? (assuming you have control of a.py and b.py)
How do I find the name of the file that is the importer, within the imported file?
How do I find the name of the file that is the "importer", within the imported file? If a.py and b.py both import c.py, is there any way that c.py can know the name of the file importing it?
[ "Use\nsys.path[0]\nreturns the path of the script that launched the python interpreter. If you can this script directly, it will return the path of the script. If the script however, was imported from another script, it will return the path of that script.\nSee Python Path Issues\n", "In the top-level of c.py (i.e. outside of any function or class), you should be able to get the information you need by running\nimport traceback\n\nand then examining the result of traceback.extract_stack(). At the time that top-level code is run, the importer of the module (and its importer, etc. recursively) are all on the callstack.\n", "That's why you have parameters.\nIt's not the job of c.py to determine who imported it.\nIt's the job of a.py or b.py to pass the variable __name__ to the functions or classes in c.py.\n", "It can be done by inspecting the stack:\n#inside c.py:\nimport inspect\nFRAME_FILENAME = 1\nprint \"Imported from: \", inspect.getouterframes(inspect.currentframe())[-1][FRAME_FILENAME]\n#or:\nprint \"Imported from: \", inspect.stack()[-1][FRAME_FILENAME]\n\nBut inspecting the stack can be buggy. Why do you need to know where a file is being imported from? Why not have the file that does the importing (a.py and b.py) pass in a name into c.py? (assuming you have control of a.py and b.py)\n" ]
[ 3, 2, 2, 1 ]
[]
[]
[ "import", "python" ]
stackoverflow_0001457308_import_python.txt
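A sketch of the traceback idea from the second answer, run at the top level of c.py; treating the first stack entry as the outermost importer is an assumption about frame ordering, so verify it on your interpreter (Python 3's importlib inserts extra bootstrap frames):

# inside c.py, at module top level
import traceback

stack = traceback.extract_stack()  # entries are (filename, lineno, name, line), oldest first
print "outermost importing file:", stack[0][0]
print "immediate caller:", stack[-2][0]  # stack[-1] is this extract_stack() line itself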
Q: Get every combination of strings I had a combinatorics assignment that involved getting every word with length less than or equal to 6 from a specific combination of strings. In this case, it was S = { 'a', 'ab', 'ba' }. The professor just started listing them off, but I thought it would be easier solved with a program. The only problem is that I can't get a good algorithm that would actually compute every possible option. If anyone could help, I'd appreciate it. I usually program in Python but really I just need help with the algorithm. A: Assuming you DO mean combinations (no repetitions, order does not matter): import itertools S = [ 'a', 'ab', 'ba' ] for i in range(len(S)+1): for c in itertools.combinations(S, i): cc = ''.join(c) if len(cc) <= 6: print c emits all the possibilities: () ('a',) ('ab',) ('ba',) ('a', 'ab') ('a', 'ba') ('ab', 'ba') ('a', 'ab', 'ba') If you mean something different than "combinations", it's just an issue of using the right iterator or generator in the for (e.g., itertools.permutations, or something else of your own devising). Edit: if for example you mean "repetitions and order ARE important", def reps(seq, n): return itertools.product(*[seq]*n) for i in range(7): for c in reps(S, i): cc = ''.join(c) if len(cc) <= 6: print c will give you the required 85 lines of output. Edit again: I had the wrong loop limit (and therefore wrong output length) -- tx to the commenter who pointed that out. Also, this approach can produce a string > 1 times, if the ''.join's of different tuples are considered equivalent; e.g., it produces ('a', 'ba') as distinct from ('ab', 'a') although their ''.join is the same (same "word" from different so-called "combinations", I guess -- terminology in use not being entirely clear). A: You can iteratively generate all the strings made from one part, two parts, three parts and so on, until all the strings generated in a step are longer than six characters. Further steps would only generate even longer strings, so all possible short strings have already been generated. If you collect these short strings in each step you end up with a set of all possible generated short strings. In Python: S = set(['a', 'ab', 'ba']) collect = set() step = set(['']) while step: step = set(a+b for a in step for b in S if len(a+b) <= 6) collect |= step print sorted(collect) A: def combos(S,n): if (n <= 0): return for s in S: if len(s) <= n: yield s for t in combos(S,n-len(s)): yield s+t for x in combos(["a","ab","ba"],6): print x Prints output: a aa aaa aaaa aaaaa aaaaaa aaaaab aaaaba aaaab aaaaba aaaba aaabaa aaab aaaba aaabaa aaabab aaabba aaba aabaa aabaaa aabaab aababa aab aaba aabaa aabaaa aabaab aababa aabab aababa aabba aabbaa aba abaa abaaa abaaaa abaaab abaaba abaab abaaba ababa ababaa ab aba abaa abaaa abaaaa abaaab abaaba abaab abaaba ababa ababaa abab ababa ababaa ababab ababba abba abbaa abbaaa abbaab abbaba ba baa baaa baaaa baaaaa baaaab baaaba baaab baaaba baaba baabaa baab baaba baabaa baabab baabba baba babaa babaaa babaab bababa A: Doing it recursively is one way: cache = {} def words_of_length(s, n=6): # memoise results if n in cache: return cache[n] # base cases if n < 0: return [] if n == 0: return [""] # inductive case res = set() for x in s: res |= set( (x+y for y in words_of_length(s, n-len(x))) ) # sort and memoise result cache[n] = sorted(res) # sort results return cache[n] def words_no_longer_than(s, n=6): return sum( [words_of_length(s, i) for i in range(n+1)], [] )
Get every combination of strings
I had a combinatorics assignment that involved getting every word with length less than or equal to 6 from a specific combination of strings. In this case, it was S = { 'a', 'ab', 'ba' }. The professor just started listing them off, but I thought it would be easier solved with a program. The only problem is that I can't get a good algorithm that would actually compute every possible option. If anyone could help, I'd appreciate it. I usually program in Python but really I just need help with the algorithm.
[ "Assuming you DO mean combinations (no repetitions, order does not matter):\nimport itertools\n\nS = [ 'a', 'ab', 'ba' ]\n\nfor i in range(len(S)+1):\n for c in itertools.combinations(S, i):\n cc = ''.join(c)\n if len(cc) <= 6:\n print c\n\nemits all the possibilities:\n()\n('a',)\n('ab',)\n('ba',)\n('a', 'ab')\n('a', 'ba')\n('ab', 'ba')\n('a', 'ab', 'ba')\n\nIf you mean something different than \"combinations\", it's just an issue of using the right iterator or generator in the for (e.g., itertools.permutations, or something else of your own devising).\nEdit: if for example you mean \"repetitions and order ARE important\",\ndef reps(seq, n):\n return itertools.product(*[seq]*n)\n\nfor i in range(7):\n for c in reps(S, i):\n cc = ''.join(c)\n if len(cc) <= 6:\n print c\n\nwill give you the required 85 lines of output.\nEdit again: I had the wrong loop limit (and therefore wrong output length) -- tx to the commenter who pointed that out. Also, this approach can produce a string > 1 times, if the ''.join's of different tuples are considered equivalent; e.g., it produces ('a', 'ba') as distinct from ('ab', 'a') although their ''.join is the same (same \"word\" from different so-called \"combinations\", I guess -- terminology in use not being entirely clear).\n", "You can iteratively generate all the strings made from one part, two parts, three parts and so on, until all the strings generated in a step are longer than six characters. Further steps would only generate even longer strings, so all possible short strings have already been generated. If you collect these short strings in each step you end up with a set of all possible generated short strings.\nIn Python:\nS = set(['a', 'ab', 'ba'])\n\ncollect = set()\nstep = set([''])\nwhile step:\n step = set(a+b for a in step for b in S if len(a+b) <= 6)\n collect |= step\n\nprint sorted(collect)\n\n", "def combos(S,n):\n if (n <= 0): return\n for s in S:\n if len(s) <= n: yield s\n for t in combos(S,n-len(s)): yield s+t\n\nfor x in combos([\"a\",\"ab\",\"ba\"],6): print x\n\nPrints output:\na\naa\naaa\naaaa\naaaaa\naaaaaa\naaaaab\naaaaba\naaaab\naaaaba\naaaba\naaabaa\naaab\naaaba\naaabaa\naaabab\naaabba\naaba\naabaa\naabaaa\naabaab\naababa\naab\naaba\naabaa\naabaaa\naabaab\naababa\naabab\naababa\naabba\naabbaa\naba\nabaa\nabaaa\nabaaaa\nabaaab\nabaaba\nabaab\nabaaba\nababa\nababaa\nab\naba\nabaa\nabaaa\nabaaaa\nabaaab\nabaaba\nabaab\nabaaba\nababa\nababaa\nabab\nababa\nababaa\nababab\nababba\nabba\nabbaa\nabbaaa\nabbaab\nabbaba\nba\nbaa\nbaaa\nbaaaa\nbaaaaa\nbaaaab\nbaaaba\nbaaab\nbaaaba\nbaaba\nbaabaa\nbaab\nbaaba\nbaabaa\nbaabab\nbaabba\nbaba\nbabaa\nbabaaa\nbabaab\nbababa\n\n", "Doing it recursively is one way:\ncache = {}\ndef words_of_length(s, n=6):\n # memoise results\n if n in cache: return cache[n]\n\n # base cases\n if n < 0: return []\n if n == 0: return [\"\"]\n\n # inductive case\n res = set()\n for x in s:\n res |= set( (x+y for y in words_of_length(s, n-len(x))) )\n\n # sort and memoise result\n cache[n] = sorted(res)\n\n # sort results\n return cache[n]\n\ndef words_no_longer_than(s, n=6):\n return sum( [words_of_length(s, i) for i in range(n+1)], [] )\n\n" ]
[ 10, 8, 4, 1 ]
[]
[]
[ "combinatorics", "python", "string" ]
stackoverflow_0001457814_combinatorics_python_string.txt
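The "order and repetition matter" reading can also be phrased directly with itertools.product; a short sketch that collects each distinct word exactly once (unlike the tuple-based loops above, which can emit the same join more than once):

import itertools

S = ['a', 'ab', 'ba']
words = set()
for n in range(1, 7):  # at most 6 parts, since the shortest part has length 1
    for parts in itertools.product(S, repeat=n):
        w = ''.join(parts)
        if len(w) <= 6:
            words.add(w)
print len(words), sorted(words)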
Q: TurtleGraphics Python: Bouncing turtle off the walls? So, I am trying to make a realistic bouncing function, where the turtle hits a wall and bounces off at the corresponding angle. My code looks like this: def bounce(num_steps, step_size, initial_heading): turtle.reset() top = turtle.window_height()/2 bottom = -top right = turtle.window_width()/2 left = -right turtle.left(initial_heading) for step in range(num_steps): turtle.forward(step_size) x, y = turtle.position() if left <= x <= right and bottom <= y <= top: pass else: turtle.left(180-2 * (turtle.heading())) So, this works for the side walls, but I don't get how to make it bounce correctly off the top/bottom. Any suggestions? A: Try something like this: if not (left <= x <= right): turtle.left(180 - 2 * turtle.heading()) elif not (bottom <= y <= top): turtle.left(-2 * turtle.heading()) else: pass My python syntax is a little rusty, sorry :P. But the math is a little different for a horizontal vs. a vertical flip. EDIT: I suspect that what is happening is your turtle is getting into a situation where it is pointing upwards and stuck above the top wall. That would lead it to just flip indefinitely. You could try adding the following conditions: if (x <= left and 90 <= turtle.heading() <= 270) or (right <= x and not 90 <= turtle.heading() <= 270): turtle.left(180 - 2 * turtle.heading()) elif (y <= bottom and turtle.heading() >= 180) or (top <= y and turtle.heading() <= 180): turtle.left(-2 * turtle.heading()) else: pass If that works, there is probably a bug elsewhere in your code. Edge handling is tricky to get right. I assume that turtle.heading() will always return something between 0 and 360 - if not then it will be even more tricky to get right. A: Gday, Your problem seems to be that you are using the same trigonometry to calculate the right and left walls, as you are the top and bottom. A piece of paper and a pencil should suffice to calculate the required deflections. def inbounds(limit, value): 'returns boolean answer to question "is turtle position within my axis limits"' return -limit < value * 2 < limit def bounce(num_steps, step_size, initial_heading): '''given the number of steps, the size of the steps and an initial heading in degrees, plot the resultant course on a turtle window, taking into account elastic collisions with window borders. ''' turtle.reset() height = turtle.window_height() width = turtle.window_width() turtle.left(initial_heading) for step in xrange(num_steps): turtle.forward(step_size) x, y = turtle.position() if not inbounds(height, y): turtle.setheading(-turtle.heading()) if not inbounds(width, x): turtle.setheading(180 - turtle.heading()) I've used the setheading function and a helper function (inbounds) to further declare the intent of the code here. Providing some kind of doc-string is also good practice in any code that you write (provided the message it states is accurate!!) Your mileage may vary on the use of xrange, Python 3.0+ renames it to simply range.
TurtleGraphics Python: Bouncing turtle off the walls?
So, I am trying to make a realistic bouncing function, where the turtle hits a wall and bounces off at the corresponding angle. My code looks like this: def bounce(num_steps, step_size, initial_heading): turtle.reset() top = turtle.window_height()/2 bottom = -top right = turtle.window_width()/2 left = -right turtle.left(initial_heading) for step in range(num_steps): turtle.forward(step_size) x, y = turtle.position() if left <= x <= right and bottom <= y <= top: pass else: turtle.left(180-2 * (turtle.heading())) So, this works for the side walls, but I don't get how to make it bounce correctly off the top/bottom. Any suggestions?
[ "Try something like this:\nif not (left <= x <= right):\n turtle.left(180 - 2 * turtle.heading())\nelif not (bottom <= y <= top):\n turtle.left(-2 * turtle.heading())\nelse:\n pass\n\nMy python syntax is a little rusty, sorry :P. But the math is a little different for a horizontal vs. a vertical flip.\nEDIT:\nI suspect that what is happening is your turtle is getting into a situation where it is pointing upwards and stuck above the top wall. That would lead it to just flip indefinitely. You could try adding the following conditions:\nif (x <= left and 90 <= turtle.heading() <= 270) or (right <= x and not 90 <= turtle.heading() <= 270):\n turtle.left(180 - 2 * turtle.heading())\nelif (y <= bottom and turtle.heading() >= 180) or (top <= y and turtle.heading <= 180):\n turtle.left(-2 * turtle.heading())\nelse:\n pass\n\nIf that works, there is probably a bug elsewhere in your code. Edge handling is tricky to get right. I assume that turtle.heading() will always return something between 0 and 360 - if not then it will be even more tricky to get right.\n", "Gday,\nYour problem seems to be that you are using the same trigonometry to calculate the right and left walls, as you are the top and bottom. A piece of paper and a pencil should suffice to calculate the required deflections. \ndef inbounds(limit, value):\n 'returns boolean answer to question \"is turtle position within my axis limits\"'\n return -limit < value * 2 < limit\n\ndef bounce(num_steps, step_size, initial_heading):\n '''given the number of steps, the size of the steps \n and an initial heading in degrees, plot the resultant course\n on a turtle window, taking into account elastic collisions \n with window borders.\n '''\n\n turtle.reset()\n height = turtle.window_height()\n width = turtle.window_width()\n turtle.left(initial_heading)\n\n for step in xrange(num_steps):\n turtle.forward(step_size)\n x, y = turtle.position()\n\n if not inbounds(height, y):\n turtle.setheading(-turtle.heading())\n\n if not inbounds(width, x):\n turtle.setheading(180 - turtle.heading())\n\nI've used the setheading function and a helper function (inbounds) to further declare the intent of the code here. Providing some kind of doc-string is also good practice in any code that you write (provided the message it states is accurate!!)\nYour mileage may vary on the use of xrange, Python 3.0+ renames it to simply range.\n" ]
[ 1, 0 ]
[]
[]
[ "python", "turtle_graphics" ]
stackoverflow_0001457332_python_turtle_graphics.txt
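Both answers reduce to the same two reflections; a sketch of that mapping on its own, with turtle-style headings in degrees (0 = east, counterclockwise):

def reflect(heading, wall):
    # vertical walls ('left'/'right') flip the horizontal component,
    # horizontal walls ('top'/'bottom') flip the vertical one
    if wall in ('left', 'right'):
        return (180 - heading) % 360
    return (-heading) % 360

print reflect(30, 'right')  # 150
print reflect(30, 'top')    # 330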
Q: Which process was responsible for an event signalled by inotify? I am using pyinotify to detect access, changes, etc. on files in a given directory. Is there an easier way to find out which process was responsible for that - without having to patch inotify? A: No, you can't, that information isn't in the struct inotify_event sent by the kernel. Actually there isn't any guarantee that the process responsible is still running when you get the event. A: Assuming you are on Linux (pyinotify would tend to indicate this) you could use SELinux (running in permissive mode of course) to wrap a process(es) and log all their file access/creation/deletion/etc.
Which process was responsible for an event signalled by inotify?
I am using pyinotify to detect access, changes, etc. on files in a given directory. Is there an easier way to find out which process was responsible for that - without having to patch inotify?
[ "No, you can't, that information isn't in the struct inotify_event sent by the kernel.\nActually there isn't any guarantee that the process responsible is still running when you get the event.\n", "Assuming you are on Linux (pyinotify would tend to indicate this) you could use SELinux (running in permissive mode of course) to wrap a process(es) and log all their file access/creation/deletion/etc.\n" ]
[ 1, 1 ]
[]
[]
[ "inotify", "pyinotify", "python" ]
stackoverflow_0000922200_inotify_pyinotify_python.txt
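A minimal pyinotify sketch showing what an event actually carries -- a path and a mask, but no pid -- which is why the first answer says the responsible process can't be recovered from inotify alone (the watched path is a placeholder):

import pyinotify

class Handler(pyinotify.ProcessEvent):
    def process_default(self, event):
        print event.pathname, event.maskname  # no process information here

wm = pyinotify.WatchManager()
wm.add_watch('/tmp/watched', pyinotify.IN_MODIFY | pyinotify.IN_CREATE | pyinotify.IN_DELETE)
pyinotify.Notifier(wm, Handler()).loop()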
Q: Need help installing MySQL for Python Trying to install MySQL for Python. Two problems: 1) Instructions over the net says installation is python setup.py For me, it results with can't open file 'setup.py': [Errno 2] No such file or directory 2) README.txt says: The Z MySQL database adapter uses the MySQLdb package. This must be installed before you can use the Z MySQL DA. You can find this at: http://sourceforge.net/projects/mysql-python Which simply leads to the package itself, not anything else. Thanks for your help. PS. I'm using a Mac + Leopard. A: You are confusing a Zope product (ZMySQLDA) with the python-mysqldb package. Try one of the download files, if it doesn't help, go for the source. Note that the source trunk is clearly divided into ZMySQLDA/ and MySQLdb/ . A: If you're using a Debian based distro you can : apt-get install python-mysqldb ( or aptitude if you prefer ).
Need help installing MySQL for Python
Trying to install MySQL for Python. Two problems: 1) Instructions over the net says installation is python setup.py For me, it results with can't open file 'setup.py': [Errno 2] No such file or directory 2) README.txt says: The Z MySQL database adapter uses the MySQLdb package. This must be installed before you can use the Z MySQL DA. You can find this at: http://sourceforge.net/projects/mysql-python Which simply leads to the package itself, not anything else. Thanks for your help. PS. I'm using a Mac + Leopard.
[ "You are confusing A Zope product (ZMySQLDA) with the python-mysqldb package.\nTry one of the download files, if it doesn't help, go for the source.\nNote that the source trunk is clearly divided into ZMySQLDA/ and MySQLdb/ .\n", "If you're using a Debian based distro you can :\napt-get install python-mysqldb ( or aptitude if you prefer ).\n" ]
[ 2, 1 ]
[]
[]
[ "adapter", "mysql", "python", "zope" ]
stackoverflow_0001458500_adapter_mysql_python_zope.txt
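Once MySQLdb is installed by whichever route, a quick smoke test looks like this (the connection parameters are placeholders):

import MySQLdb

conn = MySQLdb.connect(host='localhost', user='me', passwd='secret', db='test')
cur = conn.cursor()
cur.execute('SELECT VERSION()')
print cur.fetchone()  # e.g. ('5.1.37',)
conn.close()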
Q: Find the file with a given SVN URL within a SVN working copy Given a starting point in a Subversion working copy (e.g. current working directory), and a target SVN URL, I'd like to find the file in the working copy that has that SVN URL. For example, given this current directory: c:\Subversion\ProjectA\a\b\c\ which has this SVN URL: https://svnserver/svn/ProjectA/trunk/a/b/c/ I'd like to locate the file on the hard drive with this target SVN URL: https://svnserver/svn/ProjectA/trunk/a/x/y/test.txt which in this example would be: c:\Subversion\ProjectA\a\x\y\test.txt Firstly, does the SVN API provide a function to do this? Secondly, if not, what is a good reliable (cross-platform) method to implement it? Python is my target language, and I'm using pySvn although the native Python SVN bindings could be an alternative. A: Subversion doesn't use this backward mapping from url to working copy location itself. The most stable way to check for the url use would be to perform a recursive 'svn info' call over the working copy. This gives you the url for all files and directories and you can do the matching yourself. You could optimize this a bit by trying to map the urls on local paths and only look at sensible locations, but you would miss other locations created by 'svn switch'. I don't know how 'svn info' is mapped in the python bindings, but there is most likely some info2() function on a client class you can use.
Find the file with a given SVN URL within a SVN working copy
Given a starting point in a Subversion working copy (e.g. current working directory), and a target SVN URL, I'd like to find the file in the working copy that has that SVN URL. For example, given this current directory: c:\Subversion\ProjectA\a\b\c\ which has this SVN URL: https://svnserver/svn/ProjectA/trunk/a/b/c/ I'd like to locate the file on the hard drive with this target SVN URL: https://svnserver/svn/ProjectA/trunk/a/x/y/test.txt which in this example would be: c:\Subversion\ProjectA\a\x\y\test.txt Firstly, does the SVN API provide a function to do this? Secondly, if not, what is a good reliable (cross-platform) method to implement it? Python is my target language, and I'm using pySvn although the native Python SVN bindings could be an alternative.
[ "Subversion doesn't use this backward mapping from url to working copy location itself. The most stable way to check for the url use would be to perform a recursive 'svn info' call over the working copy.\nThis gives you the url for all files and directories and you can do the matching yourself.\nYou could optimize this a bit by trying to map the urls on local paths and only look at sensible locations, but you would miss other locations created by 'svn switch'.\nI don't know how 'svn info' is mapped in the python bindings, but there is most likely some info2() function on a client class you can use.\n" ]
[ 4 ]
[]
[]
[ "pysvn", "python", "svn" ]
stackoverflow_0001458457_pysvn_python_svn.txt
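A sketch of the recursive 'svn info' idea via pysvn's info2(); I'm assuming it yields (path, info) pairs whose info carries a 'URL' entry, so check this against your pysvn version:

import pysvn

def path_for_url(wc_root, target_url):
    client = pysvn.Client()
    for path, info in client.info2(wc_root, recurse=True):
        if info['URL'] == target_url:
            return path
    return None  # the URL is not checked out anywhere under wc_root

print path_for_url(r'c:\Subversion\ProjectA',
                   'https://svnserver/svn/ProjectA/trunk/a/x/y/test.txt')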
Q: Python socket programming and ISO-OSI model I am sending packets from one PC to another. I am using python socket socket.socket(socket.AF_INET, socket.SOCK_DGRAM ). Do we need to take care of the order in which packets are received? In the ISO-OSI model, layers below the transport layer handle all packet communication. Are all ISO-OSI layers present in the program? Or are some of them present in the operating system? On localhost I get all packets in order. Will it make any difference over the internet? A: SOCK_DGRAM means you want to send packets by UDP -- no order guarantee, no guarantee of reception, no guarantee of lack of repetition. SOCK_STREAM would imply TCP -- no packet boundary guarantee, but (unless the connection's dropped;-) guarantee of order, reception, and no duplication. TCP/IP, the networking model that won the heart and soul of every live practitioner and made the Internet happen, is not compliant to ISO/OSI -- a standard designed at the drafting table and never really winning in the real world. The Internet as she lives and breathes is TCP/IP all the way. Don't rely on tests done on a low-latency local network as in ANY way representative of what will happen out there in the real world. Welcome to the real world, BTW, and, good luck (you'll need some!-). A: To answer your immediate question, if you're using SOCK_STREAM, then you're actually using TCP, which is an implementation of the transport layer which does take care of packet ordering and integrity for you. So it sounds like that's what you want to use. SOCK_DGRAM is actually UDP, which doesn't take care of any integrity for you. Do we need to take care of the order in which packets are received? In the ISO-OSI model, layers below the transport layer handle all packet communication. Are all ISO-OSI layers present in the program? Just to clear this up, in the ISO-OSI model, all the layers below the transport layer handle sending of a single packet from one computer to the other, and don't "understand" the concept of packet ordering (it doesn't apply to them). In this model, there is another layer (the session layer, above the transport layer) which is responsible for defining the session behavior. It is this layer which decides whether to have things put in place to prevent reordering, to ensure integrity, and so on. In the modern world, the ISO-OSI model is more of an idealistic template, rather than an actual model. TCP/IP is the actual implementation which is used almost everywhere. In TCP/IP, the transport layer is the one that has the role of defining whether there is any session behavior or not.
Python socket programming and ISO-OSI model
I am sending packets from one PC to another. I am using python socket socket.socket(socket.AF_INET, socket.SOCK_DGRAM ). Do we need to take care of the order in which packets are received? In the ISO-OSI model, layers below the transport layer handle all packet communication. Are all ISO-OSI layers present in the program? Or are some of them present in the operating system? On localhost I get all packets in order. Will it make any difference over the internet?
[ "SOCK_DGRAM means you want to send packets by UDP -- no order guarantee, no guarantee of reception, no guarantee of lack of repetition. SOCK_STREAM would imply TCP -- no packet boundary guarantee, but (unless the connection's dropped;-) guarantee of order, reception, and no duplication. TCP/IP, the networking model that won the heart and soul of every live practitioned and made the Internet happen, is not compliant to ISO/OSI -- a standard designed at the drafting table and never really winning in the real world.\nThe Internet as she lives and breathes is TCP/IP all the way. Don't rely on tests done on a low-latency local network as in ANY way representative of what will happen out there in the real world. Welcome to the real world, BTW, and, good luck (you'll need some!-).\n", "To answer your immediate question, if you're using SOCK_STREAM, then you're actually using TCP, which is an implementation of the transport layer which does take care of packet ordering and integrity for you. So it sounds like that's what you want to use. SOCK_DGRAM is actually UDP, which doesn't take care of any integrity for you.\n\nDo we need to take care of order in which packets are received ? In ISO-OSI model layers below transport layer handle all packets communication. Do all ISO-OSI layers present in the program ?\n\nJust to clear this up, in the ISO-OSI model, all the layers below the transport layer handle sending of a single packet from one computer to the other, and don't \"understand\" the concept of packet ordering (it doesn't apply to them).\nIn this model, there is another layer (the session layer, above the transport layer) which is responsible for defining the session behavior. It is this layer which decides whether to have things put in place to prevent reordering, to ensure integrity, and so on.\nIn the modern world, the ISO-OSI model is more of an idealistic template, rather than an actual model. TCP/IP is the actual implementation which is used almost everywhere.\nIn TCP/IP, the transport layer is the one that has the role of defining whether there is any session behavior or not.\n" ]
[ 5, 4 ]
[]
[]
[ "python", "sockets" ]
stackoverflow_0001458087_python_sockets.txt
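The contrast from the answers in code form -- only the SOCK_STREAM variant rides on TCP and gets ordering, delivery and de-duplication (host and port are placeholders):

import socket

tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # TCP: ordered, reliable byte stream
tcp.connect(('example.com', 7777))
tcp.sendall('hello over tcp')
tcp.close()

udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # UDP: datagrams may be lost, duplicated or reordered
udp.sendto('hello over udp', ('example.com', 7777))
udp.close()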
Q: Python - Is there a way around 'os.listdir()' returning gibberish for bad folder name? I have a simple script written in Python: import os def Path(SourcePath): for Folder in os.listdir(SourcePath): print "TESTING: %s" % Folder Path("\\\\192.168.0.36\\PDFs") When I run this it recurses through a remote share on the LAN and just simply displays the names of the folders found. This share primarily contains folders. The problem is that if a folder name has a space at the end of its name, the above script lists jibberish. For example, if I have the following folders in the above share: "6008386 HH - Walkers Crisps" "6008157 CPP - Santas Chocolate " "6007458 SCA - Morrisons Bananas" Notice that "6008157 CPP - Santas Chocolate " has a space at the end. This is the listing from the above script: "TESTING: 6008386 HH - Walkers Crisps" "TESTING: 6EBA72~1" "TESTING: 6007458 SCA - Morrisons Bananas" How can I avoid this while recursing the remote dir? I could fix the folder name if only it was returned properly by 'os.listdir()'. Any ideas on how to tackle this? A: Windows uses generated 8.3 "placeholders" when a filename over CIFS contains characters which are illegal in a Windows filename. In this case, it's happening because your "Santas Chocolate " filename ends with a space. Windows filenames can't end with spaces, so it uses a placeholder to make the file accessible. I don't think you can use GetLongPathName for this--there's no long filename to map to, because that would, by definition, be an illegal filename. If you have filenames like this, I don't think there's any way to find out what it actually is on the server, and it would do you a limited amount of good, since you couldn't refer to it by that filename. A: That is not (g|j)ibberish, it's a short (8.3) filename. It's Windows-specific, but you might be able to use GetLongPathName() to map it back to a long name.
Python - Is there a way around 'os.listdir()' returning gibberish for bad folder name?
I have a simple script written in Python: import os def Path(SourcePath): for Folder in os.listdir(SourcePath): print "TESTING: %s" % Folder Path("\\\\192.168.0.36\\PDFs") When I run this it recurses through a remote share on the LAN and just simply displays the names of the folders found. This share primarily contains folders. The problem is that if a folder name has a space at the end of its name, the above script lists jibberish. For example, if I have the following folders in the above share: "6008386 HH - Walkers Crisps" "6008157 CPP - Santas Chocolate " "6007458 SCA - Morrisons Bananas" Notice that "6008157 CPP - Santas Chocolate " has a space at the end. This is the listing from the above script: "TESTING: 6008386 HH - Walkers Crisps" "TESTING: 6EBA72~1" "TESTING: 6007458 SCA - Morrisons Bananas" How can I avoid this while recursing the remote dir? I could fix the folder name if only it was returned properly by 'os.listdir()'. Any ideas on how to tackle this?
[ "Windows uses generated 8.3 \"placeholders\" when a filename over CIFS contains characters which are illegal in a Windows filename.\nIn this case, it's happening because your \"Santas Chocolate \" filename ends with a space. Windows filenames can't end with spaces, so it uses a placeholder to make the file accessible.\nI don't think you can use GetLongPathName for this--there's no long filename to map to, because that would, by definition, be an illegal filename. If you have filenames like this, I don't think there's any way to find out what it actually is on the server, and it would do you a limited amount of good, since you couldn't refer to it by that filename.\n", "That is not (g|j)ibberish, it's a short (8.3) filename. It's Windows-specific, but you might be able to use GetLongPathName() to map it back to a long name.\n" ]
[ 4, 3 ]
[]
[]
[ "python" ]
stackoverflow_0001458847_python.txt
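For ordinary 8.3 names (not the illegal trailing-space case, where the first answer notes there is nothing to map back to), GetLongPathName can be reached through ctypes on Windows; a hedged sketch:

import ctypes

def long_path_name(short_path):
    buf = ctypes.create_unicode_buffer(260)  # MAX_PATH
    n = ctypes.windll.kernel32.GetLongPathNameW(short_path, buf, 260)
    return buf.value if n else short_path  # a return of 0 means the lookup failed

print long_path_name(u'\\\\192.168.0.36\\PDFs\\6EBA72~1')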
Q: Is there a C# equivalent to Python's unhexlify? Possible Duplicate: How to convert hex to a byte array? I'm searching for a python compatible method in C# to convert hex to binary. I've reversed a hash in Python by doing this: import sha import base64 import binascii hexvalue = "5baa61e4c9b93f3f0682250b6cf8331b7ee68fd8" binaryval = binascii.unhexlify(hexvalue) print base64.standard_b64encode(binaryval) >> W6ph5Mm5Pz8GgiULbPgzG37mj9g= So far all of the various Hex2Binary C# methods I've found end up throwing OverflowExceptions: static string Hex2Binary(string hexvalue) { string binaryval = ""; long b = Convert.ToInt64(hexvalue, 16); binaryval = Convert.ToString(b); byte[] bytes = Encoding.UTF8.GetBytes(binaryval); return Convert.ToBase64String(bytes); } Anybody got a tip on how to produce a C# method to match the python output? A: This value is too big for a long (64bits), that's why you get an OverflowException. But it's very easy to convert hex to binary byte by byte (well, nibble by nibble actually) : static string Hex2Binary(string hexvalue) { StringBuilder binaryval = new StringBuilder(); for(int i=0; i < hexvalue.Length; i++) { string byteString = hexvalue.Substring(i, 1); byte b = Convert.ToByte(byteString, 16); binaryval.Append(Convert.ToString(b, 2).PadLeft(4, '0')); } return binaryval.ToString(); } EDIT: the method above converts to binary, not to base64. This one converts to base64 : static string Hex2Base64(string hexvalue) { if (hexvalue.Length % 2 != 0) hexvalue = "0" + hexvalue; int len = hexvalue.Length / 2; byte[] bytes = new byte[len]; for(int i = 0; i < len; i++) { string byteString = hexvalue.Substring(2 * i, 2); bytes[i] = Convert.ToByte(byteString, 16); } return Convert.ToBase64String(bytes); } A: You can divide your hexadecimal string into two-digit groups, and then use byte.parse to convert them to bytes. Use the NumberStyles.AllowHexSpecifier for that, for example: byte.Parse("3F", NumberStyles.AllowHexSpecifier); The following routine will convert a hex string to a byte array: private byte[] Hex2Binary(string hex) { var chars = hex.ToCharArray(); var bytes = new List<byte>(); for(int index = 0; index < chars.Length; index += 2) { var chunk = new string(chars, index, 2); bytes.Add(byte.Parse(chunk, NumberStyles.AllowHexSpecifier)); } return bytes.ToArray(); }
Is there a C# equivalent to Python's unhexlify?
Possible Duplicate: How to convert hex to a byte array? I'm searching for a python compatible method in C# to convert hex to binary. I've reversed a hash in Python by doing this: import sha import base64 import binascii hexvalue = "5baa61e4c9b93f3f0682250b6cf8331b7ee68fd8" binaryval = binascii.unhexlify(hexvalue) print base64.standard_b64encode(binaryval) >> W6ph5Mm5Pz8GgiULbPgzG37mj9g= So far all of the various Hex2Binary C# methods I've found end up throwing OverflowExceptions: static string Hex2Binary(string hexvalue) { string binaryval = ""; long b = Convert.ToInt64(hexvalue, 16); binaryval = Convert.ToString(b); byte[] bytes = Encoding.UTF8.GetBytes(binaryval); return Convert.ToBase64String(bytes); } Anybody got a tip on how to produce a C# method to match the python output?
[ "This value is too big for a long (64bits), that's why you get an OverflowException.\nBut it's very easy to convert hex to binary byte by byte (well, nibble by nibble actually) :\nstatic string Hex2Binary(string hexvalue)\n{\n StringBuilder binaryval = new StringBuilder();\n for(int i=0; i < hexvalue.Length; i++)\n {\n string byteString = hexvalue.Substring(i, 1);\n byte b = Convert.ToByte(byteString, 16);\n binaryval.Append(Convert.ToString(b, 2).PadLeft(4, '0'));\n }\n return binaryval.ToString();\n}\n\nEDIT: the method above converts to binary, not to base64. This one converts to base64 :\nstatic string Hex2Base64(string hexvalue)\n{\n if (hexvalue.Length % 2 != 0)\n hexvalue = \"0\" + hexvalue;\n int len = hexvalue.Length / 2;\n byte[] bytes = new byte[len];\n for(int i = 0; i < len; i++)\n {\n string byteString = hexvalue.Substring(2 * i, 2);\n bytes[i] = Convert.ToByte(byteString, 16);\n }\n return Convert.ToBase64String(bytes);\n}\n\n", "You can divide your hexadecimal string into two-digit groups, and then use byte.parse to convert them to bytes. Use the NumberStyles.AllowHexSpecifier for that, for example:\nbyte.Parse(\"3F\", NumberStyles.AllowHexSpecifier);\n\nThe following routine will convert a hex string to a byte array:\nprivate byte[] Hex2Binary(string hex)\n{\n var chars = hex.ToCharArray();\n var bytes = new List<byte>();\n for(int index = 0; index < chars.Length; index += 2) {\n var chunk = new string(chars, index, 2);\n bytes.Add(byte.Parse(chunk, NumberStyles.AllowHexSpecifier));\n }\n return bytes.ToArray();\n}\n\n" ]
[ 6, 1 ]
[]
[]
[ "binary", "c#", "hex", "python" ]
stackoverflow_0001459006_binary_c#_hex_python.txt
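For reference, the Python 3 spelling of the asker's round trip shrinks to two lines, since bytes.fromhex subsumes binascii.unhexlify:

import base64

hexvalue = '5baa61e4c9b93f3f0682250b6cf8331b7ee68fd8'
print(base64.b64encode(bytes.fromhex(hexvalue)).decode('ascii'))
# W6ph5Mm5Pz8GgiULbPgzG37mj9g=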
Q: Reading a float from string I have a simple string that I want to read into a float without losing any visible information as illustrated below: s = ' 1.0000\n' When I do f = float(s), I get f=1.0 How to trick this to get f=1.0000? Thank you A: Direct answer: You can't. Floats are imprecise, by design. While python's floats have more than enough precision to represent 1.0000, they will never represent a "1-point-zero-zero-zero-zero". Chances are, this is as good as you need. You can always use string formatting, if you need to display four decimal digits. print '%.4f' % float(1.0000) Indirect answer: Use the decimal module. from decimal import Decimal d = Decimal('1.0000') The decimal package is designed to handle all these issues with arbitrary precision. A decimal "1.0000" is exactly 1.0000, no more, no less. Note, however, that complications with rounding means you can't convert from a float directly to a Decimal; you have to pass a string (or an integer) to the constructor. A: >>> 1.0 == 1.00 == 1.000 True In other words, you're losing no info -- Python considers trailing zeros in floats' decimal parts to be irrelevant. If you need to keep track very specifically of "number of significant digits", that's also feasible, but you'll need to explain to us all exactly what you're trying to accomplish beyond Python's normal float (e.g., would decimal in the Python standard library have anything to do with it? etc, etc...). A: Indulge me while I reinvent the wheel a little bit! ;) Should you want the representation of the object (the way it is displayed in the shell) to have the form 1.0000 then you may need to change the __repr__ method of the float. If you would like the object to be printed to a file or screen you will need to change the __str__ method. Here is a class that will do both. class ExactFloat(float): def __repr__(self): return '%.4f' % self __str__ = __repr__ # usage of the shiny new class to follow.. >>> f = ExactFloat(' 1.0000\n') >>> f 1.0000 >>> print f 1.0000 Now there are also a myriad of other better ways to do this, but like I said, I love re-inventing wheels :) A: Here's an even shinier version of @Simon Edwards' ExactFloat, that counts the number of digits after the period and displays that number of digits when the number is converted to a string. import re class ExactFloat(float): def __init__(self, str_value): float.__init__(self, str_value) mo = re.match("^\d+\.(\d+)", str_value) self.digits_after_period = len(mo.group(1)) def __repr__(self): return '%.*f' % (self.digits_after_period, self) def __str__(self): return self.__repr__() print ExactFloat("1.000") print ExactFloat("1.0") print ExactFloat("23.234500") Example: $ python exactfloat.py 1.000 1.0 23.234500
Reading a float from string
I have a simple string that I want to read into a float without losing any visible information as illustrated below: s = ' 1.0000\n' When I do f = float(s), I get f=1.0 How to trick this to get f=1.0000? Thank you
[ "Direct answer: You can't. Floats are imprecise, by design. While python's floats have more than enough precision to represent 1.0000, they will never represent a \"1-point-zero-zero-zero-zero\". Chances are, this is as good as you need. You can always use string formatting, if you need to display four decimal digits.\nprint '%.3f' % float(1.0000)\n\nIndirect answer: Use the decimal module.\nfrom decimal import Decimal\nd = Decimal('1.0000')\n\nThe decimal package is designed to handle all these issues with arbitrary precision. A decimal \"1.0000\" is exactly 1.0000, no more, no less. Note, however, that complications with rounding means you can't convert from a float directly to a Decimal; you have to pass a string (or an integer) to the constructor. \n", ">>> 1.0 == 1.00 == 1.000\nTrue\n\nIn other words, you're losing no info -- Python considers trailing zeros in floats' decimal parts to be irrelevant. If you need to keep track very specifically of \"number of significant digits\", that's also feasible, but you'll need to explain to us all exactly what you're trying to accomplish beyond Python's normal float (e.g., would decimal in the Python standard library have anything to do with it? etc, etc...).\n", "Indulge me while I reinvent the wheel a little bit! ;)\nShould you want the reprsentation of the object (the way it is displayed in the shell) to have the form 1.0000 then you may need to change the __repr__ method of the float. If you would like the object to be printed to a file or screen you will need to change the __str__ method. Here is a class that will do both.\nclass ExactFloat(float):\n def __repr__(self):\n return '%.4f' % self\n __str__ = __repr__\n\n\n# usage of the shiny new class to follow..\n>>> f = ExactFloat(' 1.0000\\n')\n>>> f\n1.0000\n>>> print f\n1.0000\n\nNow there are also a myriad of other better ways to do this, but like I said, I love\nre-inventing wheels :)\n", "Here's an even shinier version of @Simon Edwards' ExactFloat, that counts the number of digits after the period and displays that number of digits when the number is converted to a string.\nimport re\n\nclass ExactFloat(float):\n def __init__(self, str_value):\n float.__init__(self, str_value)\n mo = re.match(\"^\\d+\\.(\\d+)\", str_value)\n self.digits_after_period = len(mo.group(1))\n\n def __repr__(self):\n return '%.*f' % (self.digits_after_period, self)\n\n def __str__(self):\n return self.__repr__()\n\n\nprint ExactFloat(\"1.000\")\nprint ExactFloat(\"1.0\")\nprint ExactFloat(\"23.234500\")\n\nExample:\n$ python exactfloat.py\n1.000\n1.0\n23.234500\n\n" ]
[ 9, 5, 1, 1 ]
[]
[]
[ "floating_point", "python", "string" ]
stackoverflow_0001458203_floating_point_python_string.txt
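Unlike the float subclasses above, Decimal keeps the trailing zeros through arithmetic as well; a short sketch:

from decimal import Decimal

s = ' 1.0000\n'
d = Decimal(s.strip())  # the constructor takes the string form directly
print d        # 1.0000
print d + 1    # 2.0000 -- the four decimal places survive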
Q: How can I protect myself from a zip bomb? I just read about zip bombs, i.e. zip files that contain very large amount of highly compressible data (00000000000000000...). When opened they fill the server's disk. How can I detect a zip file is a zip bomb before unzipping it? UPDATE Can you tell me how is this done in Python or Java? A: Try this in Python: import zipfile with zipfile.ZipFile('a_file.zip') as z: print(f'total files size={sum(e.file_size for e in z.infolist())}') A: Zip is, erm, an "interesting" format. A robust solution is to stream the data out, and stop when you have had enough. In Java, use ZipInputStream rather than ZipFile. The latter also requires you to store the data in a temporary file, which is also not the greatest of ideas. A: Reading over the description on Wikipedia - Deny any compressed files that contain compressed files.      Use ZipFile.entries() to retrieve a list of files, then ZipEntry.getName() to find the file extension. Deny any compressed files that contain files over a set size, or the size can not be determined at startup.      While iterating over the files use ZipEntry.getSize() to retrieve the file size. A: Don't allow the upload process to write enough data to fill up the disk, ie solve the problem, not just one possible cause of the problem. A: Check a zip header first :) A: If the ZIP decompressor you use can provide the data on original and compressed size you can use that data. Otherwise start unzipping and monitor the output size - if it grows too much cut it loose. A: Make sure you are not using your system drive for temp storage. I am not sure if a virusscanner will check it if it encounters it. Also you can look at the information inside the zip file and retrieve a list of the content. How to do this depends on the utility used to extract the file, so you need to provide more information here
How can I protect myself from a zip bomb?
I just read about zip bombs, i.e. zip files that contain very large amount of highly compressible data (00000000000000000...). When opened they fill the server's disk. How can I detect a zip file is a zip bomb before unzipping it? UPDATE Can you tell me how is this done in Python or Java?
[ "Try this in Python:\nimport zipfile\n\nwith zipfile.ZipFile('a_file.zip') as z\n print(f'total files size={sum(e.file_size for e in z.infolist())}')\n\n", "Zip is, erm, an \"interesting\" format. A robust solution is to stream the data out, and stop when you have had enough. In Java, use ZipInputStream rather than ZipFile. The latter also requires you to store the data in a temporary file, which is also not the greatest of ideas.\n", "Reading over the description on Wikipedia - \nDeny any compressed files that contain compressed files.\n     Use ZipFile.entries() to retrieve a list of files, then ZipEntry.getName() to find the file extension.\nDeny any compressed files that contain files over a set size, or the size can not be determined at startup.\n     While iterating over the files use ZipEntry.getSize() to retrieve the file size.\n", "Don't allow the upload process to write enough data to fill up the disk, ie solve the problem, not just one possible cause of the problem.\n", "Check a zip header first :)\n", "If the ZIP decompressor you use can provide the data on original and compressed size you can use that data. Otherwise start unzipping and monitor the output size - if it grows too much cut it loose.\n", "Make sure you are not using your system drive for temp storage. I am not sure if a virusscanner will check it if it encounters it. \nAlso you can look at the information inside the zip file and retrieve a list of the content. How to do this depends on the utility used to extract the file, so you need to provide more information here\n" ]
[ 25, 24, 13, 6, 5, 4, 1 ]
[]
[]
[ "compression", "java", "python", "security", "zip" ]
stackoverflow_0001459080_compression_java_python_security_zip.txt
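Combining the streaming and size-cap ideas in Python: read each member in chunks through ZipFile.open and stop once a budget is exceeded, so a lying size header cannot fill the disk (the 100 MB limit is an arbitrary placeholder):

import zipfile

LIMIT = 100 * 1024 * 1024  # total uncompressed budget in bytes

def checked_size(path, limit=LIMIT):
    total = 0
    with zipfile.ZipFile(path) as z:
        for info in z.infolist():
            member = z.open(info)
            while True:
                chunk = member.read(64 * 1024)  # count real bytes; don't trust info.file_size
                if not chunk:
                    break
                total += len(chunk)
                if total > limit:
                    raise ValueError('archive expands past %d bytes' % limit)
            member.close()
    return total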
Q: module reimported if imported from different path In a big application I am working, several people import same modules differently e.g. import x or from y import x the side effects of that is x is imported twice and may introduce very subtle bugs, if someone is relying on global attributes e.g. suppose I have a package mypackage with three files mymodule.py, main.py and init.py mymodule.py contents l = [] class A(object): pass main.py contents def add(x): from mypackage import mymodule mymodule.l.append(x) print "updated list",mymodule.l def get(): import mymodule return mymodule.l add(1) print "lets check",get() add(1) print "lets check again",get() it prints updated list [1] lets check [] updated list [1, 1] lets check again [] because now there are two lists in two different modules, similarly class A is different To me it looks serious enough because classes itself will be treated differently e.g. below code prints False def create(): from mypackage import mymodule return mymodule.A() def check(a): import mymodule return isinstance(a, mymodule.A) print check(create()) Question: Is there any way to avoid this? except enforcing that module should be imported one way only. Can't this be handled by python import mechanism, I have seen several bugs related to this in django code and elsewhere too. A: Each module namespace is imported only once. Issue is, you're importing them differently. On the first you're importing from the global package, and on the second you're doing a local, non-packaged import. Python sees modules as different. The first import is internally cached as mypackage.mymodule and the second one as mymodule only. A way to solve this is to always use absolute imports. That is, always give your module absolute import paths from the top-level package onwards: def add(x): from mypackage import mymodule mymodule.l.append(x) print "updated list",mymodule.l def get(): from mypackage import mymodule return mymodule.l Remember that your entry point (the file you run, main.py) also should be outside the package. When you want the entry point code to be inside the package, usually you run a small script instead. Example: runme.py, outside the package: from mypackage.main import main main() And in main.py you add: def main(): # your code I find this document by Jp Calderone to be a great tip on how to (not) structure your python project. Following it you won't have issues. Pay attention to the bin folder - it is outside the package. I'll reproduce the entire text here: Filesystem structure of a Python project Do: name the directory something related to your project. For example, if your project is named "Twisted", name the top-level directory for its source files Twisted. When you do releases, you should include a version number suffix: Twisted-2.5. create a directory Twisted/bin and put your executables there, if you have any. Don't give them a .py extension, even if they are Python source files. Don't put any code in them except an import of and call to a main function defined somewhere else in your projects. If your project is expressable as a single Python source file, then put it into the directory and name it something related to your project. For example, Twisted/twisted.py. If you need multiple source files, create a package instead (Twisted/twisted/, with an empty Twisted/twisted/__init__.py) and place your source files in it. For example, Twisted/twisted/internet.py. put your unit tests in a sub-package of your package (note - this means that the single Python source file option above was a trick - you always need at least one other file for your unit tests). For example, Twisted/twisted/test/. Of course, make it a package with Twisted/twisted/test/__init__.py. Place tests in files like Twisted/twisted/test/test_internet.py. add Twisted/README and Twisted/setup.py to explain and install your software, respectively, if you're feeling nice. Don't: put your source in a directory called src or lib. This makes it hard to run without installing. put your tests outside of your Python package. This makes it hard to run the tests against an installed version. create a package that only has a __init__.py and then put all your code into __init__.py. Just make a module instead of a package, it's simpler. try to come up with magical hacks to make Python able to import your module or package without having the user add the directory containing it to their import path (either via PYTHONPATH or some other mechanism). You will not correctly handle all cases and users will get angry at you when your software doesn't work in their environment. A: I can only replicate this if main.py is the file you are actually running. In that case you will get the current directory of main.py on the sys path. But you apparently also have a system path set so that mypackage can be imported. Python will in that situation not realize that mymodule and mypackage.mymodule is the same module, and you get this effect. This change illustrates this: def add(x): from mypackage import mymodule print "mypackage.mymodule path", mymodule mymodule.l.append(x) print "updated list",mymodule.l def get(): import mymodule print "mymodule path", mymodule return mymodule.l add(1) print "lets check",get() add(1) print "lets check again",get() $ export PYTHONPATH=. $ python mypackage/main.py mypackage.mymodule path <module 'mypackage.mymodule' from '/tmp/mypackage/mymodule.pyc'> mymodule path <module 'mymodule' from '/tmp/mypackage/mymodule.pyc'> But add another mainfile, in the current directory: realmain.py: from mypackage import main and the result is different: mypackage.mymodule path <module 'mypackage.mymodule' from '/tmp/mypackage/mymodule.pyc'> mymodule path <module 'mypackage.mymodule' from '/tmp/mypackage/mymodule.pyc'> So I suspect that you have your main python file within the package. And in that case the solution is to not do that. :-)
module reimported if imported from different path
In a big application I am working on, several people import the same modules differently e.g. import x or from y import x; the side effect of that is that x is imported twice, and it may introduce very subtle bugs if someone is relying on global attributes e.g. suppose I have a package mypackage with three files mymodule.py, main.py and __init__.py mymodule.py contents l = [] class A(object): pass main.py contents def add(x): from mypackage import mymodule mymodule.l.append(x) print "updated list",mymodule.l def get(): import mymodule return mymodule.l add(1) print "lets check",get() add(1) print "lets check again",get() it prints updated list [1] lets check [] updated list [1, 1] lets check again [] because now there are two lists in two different modules; similarly, class A is different To me it looks serious enough because classes themselves will be treated differently e.g. the code below prints False def create(): from mypackage import mymodule return mymodule.A() def check(a): import mymodule return isinstance(a, mymodule.A) print check(create()) Question: Is there any way to avoid this, other than enforcing that the module should be imported only one way? Can't this be handled by the Python import mechanism? I have seen several bugs related to this in Django code and elsewhere too.
[ "Each module namespace is imported only once. Issue is, you're importing them differently. On the first you're importing from the global package, and on the second you're doing a local, non-packaged import. Python sees modules as different. The first import is internally cached as mypackage.mymodule and the second one as mymodule only.\nA way to solve this is to always use absolute imports. That is, always give your module absolute import paths from the top-level package onwards:\ndef add(x):\n from mypackage import mymodule\n mymodule.l.append(x)\n print \"updated list\",mymodule.l\n\ndef get():\n from mypackage import mymodule\n return mymodule.l\n\nRemember that your entry point (the file you run, main.py) also should be outside the package. When you want the entry point code to be inside the package, usually you use a run a small script instead. Example:\nrunme.py, outside the package:\nfrom mypackage.main import main\nmain()\n\nAnd in main.py you add:\ndef main():\n # your code\n\nI find this document by Jp Calderone to be a great tip on how to (not) structure your python project. Following it you won't have issues. Pay attention to the bin folder - it is outside the package. I'll reproduce the entire text here:\n\nFilesystem structure of a Python project\nDo:\n\nname the directory something\n related to your project. For example,\n if your project is named \"Twisted\",\n name the top-level directory for its\n source files Twisted. When you do\n releases, you should include a version\n number suffix: Twisted-2.5. \ncreate a directory Twisted/bin and\n put your executables there, if you\n have any. Don't give them a .py\n extension, even if they are Python\n source files. Don't put any code in\n them except an import of and call to a\n main function defined somewhere else\n in your projects. \nIf your project\n is expressable as a single Python\n source file, then put it into the\n directory and name it something\n related to your project. For example,\n Twisted/twisted.py. If you need\n multiple source files, create a\n package instead (Twisted/twisted/,\n with an empty\n Twisted/twisted/__init__.py) and\n place your source files in it. For\n example,\n Twisted/twisted/internet.py. \nput\n your unit tests in a sub-package of\n your package (note - this means that\n the single Python source file option\n above was a trick - you always need at\n least one other file for your unit\n tests). For example,\n Twisted/twisted/test/. Of course,\n make it a package with\n Twisted/twisted/test/__init__.py.\n Place tests in files like\n Twisted/twisted/test/test_internet.py.\nadd Twisted/README and Twisted/setup.py to explain and\n install your software, respectively,\n if you're feeling nice.\n\nDon't:\n\nput your source in a directory\n called src or lib. This makes it\n hard to run without installing. \nput\n your tests outside of your Python\n package. This makes it hard to run the\n tests against an installed version. \ncreate a package that only has a\n __init__.py and then put all your\n code into __init__.py. Just make a\n module instead of a package, it's\n simpler. \ntry to come up with\n magical hacks to make Python able to\n import your module or package without\n having the user add the directory\n containing it to their import path\n (either via PYTHONPATH or some other\n mechanism). 
You will not correctly\n handle all cases and users will get\n angry at you when your software\n doesn't work in their environment.\n\n\n", "I can only replicate this if main.py is the file you are actually running. In that case you will get the current directory of main.py on the sys path. But you apparently also have a system path set so that mypackage can be imported.\nPython will in that situation not realize that mymodule and mypackage.mymodule is the same module, and you get this effect. This change illustrates this:\ndef add(x):\n from mypackage import mymodule\n print \"mypackage.mymodule path\", mymodule\n mymodule.l.append(x)\n print \"updated list\",mymodule.l\n\ndef get():\n import mymodule\n print \"mymodule path\", mymodule\n return mymodule.l\n\nadd(1)\nprint \"lets check\",get()\n\nadd(1)\nprint \"lets check again\",get()\n\n\n$ export PYTHONPATH=.\n$ python mypackage/main.py \n\nmypackage.mymodule path <module 'mypackage.mymodule' from '/tmp/mypackage/mymodule.pyc'>\nmymodule path <module 'mymodule' from '/tmp/mypackage/mymodule.pyc'>\n\nBut add another mainfile, in the currect directory:\nrealmain.py:\nfrom mypackage import main\n\nand the result is different:\nmypackage.mymodule path <module 'mypackage.mymodule' from '/tmp/mypackage/mymodule.pyc'>\nmymodule path <module 'mypackage.mymodule' from '/tmp/mypackage/mymodule.pyc'>\n\nSo I suspect that you have your main python file within the package. And in that case the solution is to not do that. :-)\n" ]
[ 5, 3 ]
[]
[]
[ "python", "python_import" ]
stackoverflow_0001459236_python_python_import.txt
Q: Python ctypes and not enough arguments (4 bytes missing) The function I'm trying to call is: void FormatError (HRESULT hrError,PCHAR pszText); from a custom DLL using windll. c_p = c_char_p() windll.thedll.FormatError(errcode, c_p) Results in: ValueError: Procedure probably called with not enough arguments (4 bytes missing) Using cdll instead increases the bytes-missing counter to 12. errcode above is the error code returned from another function in the same DLL. How do I get the call right? A: At the very least, you'll get more descriptive errors if you properly set up the argtypes and the restype. Try doing it this way: windll.thedll.FormatError.argtypes = [ctypes.HRESULT, ctypes.c_char_p] windll.thedll.FormatError.restype = None There's also a very good chance you are using the wrong calling convention -- check out the Calling Functions section and the Loading Libraries section for details on how to use a different calling convention. A: Have you tried to use the ctypes.HRESULT? A: Actually I think you want to use FormatError as provided by ctypes http://docs.python.org/library/ctypes.html#ctypes.FormatError ctypes.FormatError([code]) Windows only: Returns a textual description of the error code. If no error code is specified, the last error code is used by calling the Windows api function GetLastError.
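To make the first answer concrete, here is a minimal sketch. It assumes the DLL really is stdcall (hence windll) and that pszText is an output buffer the caller must allocate; the 256-byte size and the placeholder errcode value are guesses, since the question doesn't document them:

from ctypes import windll, HRESULT, c_char_p, create_string_buffer

fmt = windll.thedll.FormatError
fmt.argtypes = [HRESULT, c_char_p]  # matches void FormatError(HRESULT, PCHAR)
fmt.restype = None                  # the C prototype returns void

errcode = -2147467259               # placeholder (E_FAIL); use your real HRESULT
buf = create_string_buffer(256)     # assumed out-buffer; the size is a guess
fmt(errcode, buf)
print buf.value

With argtypes set, ctypes can complain about an argument-count or type mismatch up front; if the DLL turns out to be cdecl, load it through cdll instead of windll.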
Python ctypes and not enough arguments (4 bytes missing)
The function I'm trying to call is: void FormatError (HRESULT hrError,PCHAR pszText); from a custom DLL using windll. c_p = c_char_p() windll.thedll.FormatError(errcode, c_p) Results in: ValueError: Procedure probably called with not enough arguments (4 bytes missing) Using cdll instead increases the bytes-missing counter to 12. errcode above is the error code returned from another function in the same DLL. How do I get the call right?
[ "At the very least, you'll get more descriptive errors if you properly set up the argtypes and the restype.\nTry doing it this way:\nwindll.thedll.FormatError.argtypes = [ctypes.HRESULT, ctypes.c_char_p]\nwindll.thedll.FormatError.restype = None\n\nThere's also a very good chance you are using the wrong calling convention -- check out the Calling Functions section and the Loading Libraries section for details on how to use a different calling convention.\n", "Have you tried to use the ctypes.HRESULT? \n", "Actually I think you want to use FormatError as provided by ctypes\nhttp://docs.python.org/library/ctypes.html#ctypes.FormatError\n\nctypes.FormatError([code])\nWindows only: Returns a textual description of the error code. If\n no error code is specified, the last error code is used by calling\n the Windows api function GetLastError.\n\n" ]
[ 2, 0, 0 ]
[]
[]
[ "ctypes", "python", "windows" ]
stackoverflow_0001458813_ctypes_python_windows.txt
Q: pyparsing - load ABNF? Can pyparsing read ABNF from a file instead of having to define it in terms of Python objects? If not, is there something that can do something similar (load an ABNF file into a parser object)? A: See this example submitted by Seo Sanghyeon, which reads EBNF and parses it (using pyparsing) to create a pyparsing parser. A: There are lots of Python parsing packages: Python Parsing Tools. ANTLR in particular is very well-respected, and reads a grammar from a dedicated file.
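pyparsing has no built-in ABNF reader, but the linked example builds parsers from a grammar file at runtime. A hedged sketch of that route, assuming the ebnf.py module from pyparsing's examples directory is importable and still exposes the parse(grammar, table) helper that example uses; note it reads EBNF rather than ABNF proper, so an RFC-style ABNF file would need translating first:

import ebnf  # pyparsing's examples/ebnf.py, copied next to this script

grammar = open('mygrammar.ebnf').read()
table = {}                      # pre-seed hand-built terminals here if needed
parsers = ebnf.parse(grammar, table)
top = parsers['expression']     # hypothetical name of the grammar's start rule
print top.parseString('some input to match')

Each rule name in the grammar file becomes a key in parsers, so the start symbol is whatever rule you pick out of that table.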
pyparsing - load ABNF?
Can pyparsing read ABNF from a file instead of having to define it in terms of Python objects? If not, is there something that can do something similar (load an ABNF file into a parser object)?
[ "See this example submitted by Seo Sanghyeon, which reads EBNF and parses it (using pyparsing) to create a pyparsing parser.\n", "There are lots of Python parsing packages: Python Parsing Tools. ANTLR in particular is very well-respected, and reads a grammar from a dedicated file. \n" ]
[ 9, 2 ]
[]
[]
[ "parsing", "pyparsing", "python" ]
stackoverflow_0001459371_parsing_pyparsing_python.txt
Q: Trying to embed Python into TinyCC, says Python symbols are undefined I've literally spent the past half hour searching for the solution to this, and everything involves GCC. What I do here works absolutely fine with GCC; however, I'm using TinyCC, and this is where I'm getting confused. First the code: #include <Python.h> #include <stdio.h> int main(int argc, char *argv[]) { Py_Initialize(); PyRun_SimpleString("print(\"Hello World!\")"); Py_Finalize(); return 0; } I then call tcc like so: tcc -o tinypyembed.exe tiny.c -IC:\Python26\include -LC:\Python26\libs -lpython26 It then becomes a big fat jerk and spits out tcc: undefined symbol 'Py_Initialize' tcc: undefined symbol 'PyRun_SimpleStringFlags' tcc: undefined symbol 'Py_Finalize' I'm totally at my wits' end and would really appreciate it if anyone knows what's up. After asking a friend to try this out, I have discovered that it is in fact a Windows issue. May this stay here as a warning to anyone else who may try TinyCC with Python on Windows. A: Did you use tiny_impdef.exe to create a .def file for the Python DLL? A: Full solution for Windows: tiny_impdef as per bk1e's advice tiny_impdef.exe c:\WINDOWS\system32\python25.dll add python25.def (or python26.def) to the compilation list tcc tiny.c python25.def -IC:\Python25\include -LC:\Python25\libs -lpython25 (replace 25 with 26 for Python 2.6)
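Before fighting the linker, a quick Python-side sanity check confirms the symbols really are exported from the DLL, which narrows the failure to tcc's missing import library. A sketch, assuming python26.dll is findable on the PATH, as it normally is for an installed Python 2.6:

from ctypes import PyDLL

py = PyDLL('python26.dll')
for name in ('Py_Initialize', 'PyRun_SimpleStringFlags', 'Py_Finalize'):
    print name, hasattr(py, name)  # True for each symbol the DLL exports

All three print True, so the exports are fine; what tcc lacks is the .def import file, which is exactly what the tiny_impdef step in the answers generates.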
Trying to embed Python into TinyCC, says Python symbols are undefined
I've literally spent the past half hour searching for the solution to this, and everything involves GCC. What I do here works absolutely fine with GCC; however, I'm using TinyCC, and this is where I'm getting confused. First the code: #include <Python.h> #include <stdio.h> int main(int argc, char *argv[]) { Py_Initialize(); PyRun_SimpleString("print(\"Hello World!\")"); Py_Finalize(); return 0; } I then call tcc like so: tcc -o tinypyembed.exe tiny.c -IC:\Python26\include -LC:\Python26\libs -lpython26 It then becomes a big fat jerk and spits out tcc: undefined symbol 'Py_Initialize' tcc: undefined symbol 'PyRun_SimpleStringFlags' tcc: undefined symbol 'Py_Finalize' I'm totally at my wits' end and would really appreciate it if anyone knows what's up. After asking a friend to try this out, I have discovered that it is in fact a Windows issue. May this stay here as a warning to anyone else who may try TinyCC with Python on Windows.
[ "Did you use tiny_impdef.exe to create a .def file for the Python DLL?\n", "Full solution for Windows:\n\ntiny_impdef as per bk1e's advice\ntiny_impdef.exe c:\\WINDOWS\\system32\\python25.dll\nadd python25.def (or python26.def) to compilation list\ntcc tiny.c python25.def -IC:\\Python25\\include -LC:\\Python25\\libs -lpython25\n(replace 25 with 26 for Python2.6)\n\n" ]
[ 3, 2 ]
[]
[]
[ "c", "compilation", "python" ]
stackoverflow_0000743044_c_compilation_python.txt