Q:
Must a secure cryptographic signature reside outside of the file it refers to?
I'm programming a pet project in Python, and it involves users A & B interacting over a network, attempting to ensure that each has a local copy of the same file from user C.
The idea is that C gives each a file that has been digitally signed. A & B trade the digital signatures they have, and each checks the signature against their own copy. If the signature fails, then one of them has an incorrect/corrupt/modified version of the file.
The question is, therefore: can C distribute a single file that somehow includes its own signature? Or does C need to supply the file and signature separately?
A:
The digital signature from C alone should be enough for both A and B to confirm that their file is not corrupted, without ever communicating with each other. If A and B did not receive a signature from C, they could each create a cryptographic hash of the file and compare the hashes, but that does not require any digital signing on C's part.
If you want C to sign the file, either send the signature and the file separately, or wrap them both in some sort of container, such as a zip file or a home-grown solution (e.g., the first line in the file represents the signature, the rest is the payload).
To answer your question literally, the signature doesn't have to be outside the file per se, but the part that is being signed cannot include the signature itself.
A:
If you have control over the file format, yes. Include the signature in a header before the content proper, and make the signature cover only the content section of the file, not the entire file. Something like:
SIGNATURE=72ba51288199b829a4b9ca2ac911e60c
BEGIN_CONTENTS
... real file contents here ...
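As a rough illustration of checking such a layout, here is a minimal sketch of how A or B might verify a file in this format. It is an assumption-laden example: it uses an HMAC over a shared secret instead of a true public-key signature, and the SHARED_KEY name and header layout are illustrative, not part of any library:
import hmac
import hashlib

SHARED_KEY = 'not-a-real-key'  # assumption: a secret shared by A, B and C

def verify_signed_file(path, key=SHARED_KEY):
    # Split the file into the signature header and the content section;
    # only the content section is covered by the signature.
    data = open(path, 'rb').read()
    header, sep, contents = data.partition('BEGIN_CONTENTS\n')
    if not sep:
        return False  # malformed file: no content marker
    expected = header.strip().split('SIGNATURE=', 1)[1]
    actual = hmac.new(key, contents, hashlib.sha256).hexdigest()
    return actual == expected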
Q:
Interact with a Windows console application via Python
I am using python 2.5 on Windows. I wish to interact with a console process via Popen. I currently have this small snippet of code:
p = Popen( ["console_app.exe"], stdin=PIPE, stdout=PIPE )
# issue command 1...
p.stdin.write( 'command1\n' )
result1 = p.stdout.read() # <---- we never return here
# issue command 2...
p.stdin.write( 'command2\n' )
result2 = p.stdout.read()
I can write to stdin but cannot read from stdout. Have I missed a step? I don't want to use p.communicate( "command" )[0] as it terminates the process, and I need to interact with the process dynamically over time.
Thanks in advance.
A:
Your problem here is that you are trying to control an interactive application.
stdout.read() will continue reading until it has reached the end of the stream, file or pipe. Unfortunately, in the case of an interactive program, the pipe is only closed when the program exits; which is never, if the command you sent it was anything other than "quit".
You will have to revert to reading the output of the subprocess line-by-line using stdout.readline(), and you'd better have a way to tell when the program is ready to accept a command, and when the command you issued to the program is finished and you can supply a new one. In the case of a program like cmd.exe, even readline() won't suffice, as the line that indicates a new command can be sent is not terminated by a newline, so you will have to analyze the output byte-by-byte. Here's a sample script that runs cmd.exe, looks for the prompt, then issues a dir and then an exit:
from subprocess import *
import re

class InteractiveCommand:
    def __init__(self, process, prompt):
        self.process = process
        self.prompt = prompt
        self.output = ""
        self.wait_for_prompt()

    def wait_for_prompt(self):
        while not self.prompt.search(self.output):
            c = self.process.stdout.read(1)
            if c == "":
                break
            self.output += c

        # Now we're at a prompt; clear the output buffer and return its contents
        tmp = self.output
        self.output = ""
        return tmp

    def command(self, command):
        self.process.stdin.write(command + "\n")
        return self.wait_for_prompt()

p = Popen( ["cmd.exe"], stdin=PIPE, stdout=PIPE )
prompt = re.compile(r"^C:\\.*>", re.M)
cmd = InteractiveCommand(p, prompt)

listing = cmd.command("dir")
cmd.command("exit")

print listing
If the timing isn't important, and interactivity for a user isn't required, it can be a lot simpler just to batch up the calls:
from subprocess import *
p = Popen( ["cmd.exe"], stdin=PIPE, stdout=PIPE )
p.stdin.write("dir\n")
p.stdin.write("exit\n")
print p.stdout.read()
A:
Have you tried forcing Windows line endings?
i.e.
p.stdin.write( 'command1 \r\n' )
p.stdout.readline()
UPDATE:
I've just checked the solution on Windows cmd.exe and it works with readline(). But it has one problem: Popen's stdout.readline blocks. So if the app ever returns something without an end-of-line, your app will be stuck forever.
But there is a workaround for that; check out: http://code.activestate.com/recipes/440554/
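For reference, here is a minimal sketch of the thread-plus-queue idea behind that recipe (the names below are illustrative, not the recipe's own): a helper thread does the blocking readline() calls and feeds a queue, so the main thread can poll with a timeout instead of blocking forever.
from subprocess import Popen, PIPE
from threading import Thread
from Queue import Queue, Empty  # the module is named queue in Python 3

def enqueue_output(stream, queue):
    # Runs in the helper thread, where blocking reads are harmless.
    for line in iter(stream.readline, ''):
        queue.put(line)

p = Popen(['cmd.exe'], stdin=PIPE, stdout=PIPE)
q = Queue()
t = Thread(target=enqueue_output, args=(p.stdout, q))
t.daemon = True  # don't keep the interpreter alive just for this thread
t.start()

p.stdin.write('dir\r\n')
try:
    line = q.get(timeout=1.0)  # the main thread never blocks forever
except Empty:
    line = None  # no output yet; decide how to handle that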
A:
I think you might want to try to use readline() instead?
Edit: sorry, misunderstood.
Maybe this question can help you?
A:
Is it possible that the console app is buffering its output in some way so that it is only being sent to stdout when the pipe is closed? If you have access to the code for the console app, maybe sticking a flush after a batch of output data might help?
Alternatively, is it actually writing to stderr instead of stdout for some reason?
Just looked at your code again and thought of something else, I see you're sending in "command\n". Could the console app be simply waiting for a carriage return character instead of a new line? Maybe the console app is waiting for you to submit the command before it produces any output.
A:
Had the exact same problem here. I dug into the DrPython source code and stole the wx.Execute() solution, which is working fine, especially if your script is already using wx. I never found a correct solution on the Windows platform though...
Q:
Weird Problem with Classes and Optional Arguments
Okay so this was driving me nuts all day.
Why does this happen:
class Foo:
    def __init__(self, bla = {}):
        self.task_defs = bla
    def __str__(self):
        return ''.join(str(self.task_defs))

a = Foo()
b = Foo()
a.task_defs['BAR'] = 1
print 'B is ==> %s' % str(b)
print 'A is ==> %s' % str(a)
Gives me the output:
B is ==> {'BAR': 1}
A is ==> {'BAR': 1}
I know it has to do with python passing everything by reference.
But why does this happen? This was literally making me go insane all day, basically causing me to tear my stuff apart. Shouldn't python be smart enough to deal with something like this?
A:
Since you have bla initially set to a mutable type (in this case a dict) in the arguments, it gets shared: default argument values are evaluated only once, when the def statement is executed, so bla doesn't get reinitialized to a new dict instance for each instance created for Foo. Here, try this instead:
class Foo:
    def __init__(self, bla=None):
        if bla is None:
            bla = {}
        self.task_defs = bla
    def __str__(self):
        return ''.join(str(self.task_defs))

a = Foo()
b = Foo()
a.task_defs['BAR'] = 1
print 'B is ==> %s' % str(b)
print 'A is ==> %s' % str(a)
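A quick way to see the evaluated-once behaviour for yourself: the default value is created when the def statement runs and is stored on the function object, so mutating it changes what every later call sees. A minimal sketch (func_defaults is the Python 2 name; it is __defaults__ in Python 3):
class Foo:
    def __init__(self, bla={}):
        self.task_defs = bla

# The default dict is created once, at def time, and lives on the function:
print Foo.__init__.im_func.func_defaults   # ({},)

a = Foo()
a.task_defs['BAR'] = 1
# The very same dict object has now been mutated in place:
print Foo.__init__.im_func.func_defaults   # ({'BAR': 1},)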
Q:
Do I test a class that does nothing?
In my application, I have two classes: a logger that actually logs to the database and a dummy logger that does nothing (used when logging is disabled). Here is the entire DummyLog class:
class DummyLog(object):
    def insert_master_log(self, spec_name, file_name, data_source,
                          environment_name):
        pass
    def update_master_log(self, inserts, updates, failures, total):
        pass
On one hand, I should probably let this go and not test it since there's not really any code to test. But then, my "test-infected" instinct tells me that this is an excuse and that the simplicity of the class means I should be more willing to test it. I'm just having trouble thinking of what to test.
Any ideas? Or should I just let this go and not write any tests?
A:
If you don't test, how will you really know it does nothing?
:)
Sorry - couldn't resist. Seriously - I would test because some day it might do more?
A:
If it can't fail, there is nothing to test.
Test case results need to contain at least one successful state and at least one unsuccessful state. If every input into the test results in successful output, then there is no test you could create that would ever fail.
A:
Of course you can test a class that doesn't do anything. You test that it does, in fact, not do anything.
In practice, that means:
it exists and can be instantiated;
it doesn't throw an exception when used; call every method and simply assert that they succeed. This also doubles as checking that the class does, in fact, define every method that it's expected to.
Don't run this test on DummyLog; use it as a generic test and run it on all loggers. When you add another method to the real Log class a year from now and forget to add it to DummyLog, the test will notice. (As long as you do remember to add a general test for the method, of course, but hopefully adding the test should be habitual, even if you forget about related classes.)
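As a rough sketch of that generic-test idea, here is one way it could look with unittest; the DummyLog class and loader_logs module come from this question and the asker's own test below, while the mix-in structure is illustrative:
import unittest
from loader_logs import DummyLog

class LoggerContract(object):
    # Mix-in: each concrete TestCase provides make_logger(), and every
    # logger implementation gets run through the same contract tests.
    def make_logger(self):
        raise NotImplementedError

    def test_calls_do_not_raise(self):
        logger = self.make_logger()
        # Every logger is expected to define and accept these calls.
        logger.insert_master_log('spec', 'file', 'source', 'env')
        logger.update_master_log(0, 0, 0, 0)

class TestDummyLog(LoggerContract, unittest.TestCase):
    def make_logger(self):
        return DummyLog()

# A TestRealLog(LoggerContract, unittest.TestCase) subclass would reuse
# the same contract tests for the database-backed logger.

if __name__ == '__main__':
    unittest.main()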
A:
Be pragmatic, there is nothing to test here.
A:
Depends on your theory.
If you do a test driven type of person, then in order for that code to exist, you had to write the test for it.
If you are thinking of it as, I wrote it, how do I test it, then I think it warrants a test since you are relying on it to do nothing. You need the test to make sure someone doesn't come behind you and delete that code (it may even be you).
A:
If anyone's interested, here's a test I ended up writing:
def test_dummy_loader():
    from loader_logs import DummyLog
    from copy import copy
    dummy = DummyLog()
    initial = copy(dummy.__dict__)
    dummy.insert_master_log('', '', '', '')
    dummy.update_master_log(0, 0, 0, 0)
    post = copy(dummy.__dict__)
    assert initial == post
Essentially, it tests that there are no attributes getting set on the object when the two dummy methods get called. Granted, it still doesn't test that the methods truly do nothing, but at least it's something.
A:
It obviously does something or you wouldn't have written it.
I'm going to guess it's meant to mirror the interface of your real logging class. So test that it has the same interface and that it takes the same arguments. You're likely to change your logging while forgetting to update the dummy. If that seems redundant, IT IS, because they probably should just inherit from the same interface.
And then yes, you can test that it doesn't log anything. It might seem silly, but it's amazing what maintenance programmers will do.
A:
If it doesn't do anything, then there's nothing to test. If you really want, you could verify that it doesn't modify any state. I'm not familiar enough with Python to know if there's an easy way to verify that your methods don't call any other methods, but you could do that as well if you really want.
A:
I don't know Python, but I can think of one test: check that the class actually instantiates without error. This will at least regression-test the class and means it should work in all circumstances.
You never know someone might edit the class in the future and get it to throw an exception or something weird!
Personally though, unless you are aiming for an insanely high level of test coverage I wouldn't bother.
That said, would it be catastrophic if this class did throw an exception? I'm guessing that would be one of those bugs that, without a unit test, would only be caught in the field.
A:
Argument: You don't base your tests on the reading of the implementation but on the intended behaviour. You test that the darn thing doesn't crash when called.
This case is perhaps a bit obsessive, and frankly maybe I wouldn't bother. But it only takes a small increment over these null functions for there to be pitfalls worth testing.
A:
That DummyLogger is what is called, in design patterns, a "Null Object".
Subclass your real Logger from that, create some test for the real logger and then use the same test but with the DummyLogger.
class TestLogger(unittest.TestCase):
    def setUp(self):
        self.logger = RealLogger()
    def test_log_debug ..
    def test_log_error ..

class TestNullLogger(TestLogger):
    def setUp(self):
        self.logger = DummyLogger()
But as many suggested, you ain't gonna need it. When it breaks, fix it.
A:
You could test the arguments that are passed to it. If this is a dummy object that will get called with a particular set of arguments, then modifying those arguments will make it fail. A test like that will ensure that if it does get modified, at least no other code that depends on it breaks.
A:
According to the "You Ain't Gonna Need It" rule, you shouldn't write a test when there is nothing to test even if one day it might do something.
How to test that something has done nothing? That's a nice philosophical question :)
A:
I think the only useful benefit of a test for such a class is to hopefully catch if someone starts modifying it down the road. Otherwise I wouldn't bother.
A:
You at least want it to not break anything when used in place of the actual logger. So reuse the actual logger tests and factor out the assertions that test that it actually logs.
Q:
Creating a decorator in a class with access to the (current) class itself
Currently, I'm doing it in this fashion:
class Spam(object):
    decorated = None

    @classmethod
    def decorate(cls, funct):
        if cls.decorated is None:
            cls.decorated = []
        cls.decorated.append(funct)
        return funct

class Eggs(Spam):
    pass

@Eggs.decorate
def foo():
    print "spam and eggs"

print Eggs.decorated # [<function foo at 0x...>]
print Spam.decorated # None
I need to be able to do this in a subclass as shown. The problem is that I can't seem to figure out how to make the decorated field not shared between instances. Right now I have a hackish solution by initially setting it to None and then checking it when the function is decorated, but that only works one way. In other words, if I subclass Eggs and then decorate something with the Eggs.decorate function, it affects all subclasses.
I guess my question is: is it possible to have mutable class fields that don't get shared between base and sub classes?
A:
I figured it out through using metaclasses. Thanks for all who posted. Here is my solution if anybody comes across a similar problem:
class SpamMeta(type):

    def __new__(cls, name, bases, dct):
        SpamType = type.__new__(cls, name, bases, dct)
        SpamType.decorated = []
        return SpamType


class Spam(object):

    __metaclass__ = SpamMeta

    @classmethod
    def decorate(cls, funct):
        cls.decorated.append(funct)
        return funct


class Eggs(Spam):
    pass


@Eggs.decorate
def foo():
    print "spam and eggs"


print Eggs.decorated # [<function foo at 0x...>]
print Spam.decorated # []
A:
I'm fairly sure you can't. I thought about doing this with property(), but unfortunately the class of the class itself--where a property would need to go--is ClassType itself.
You can write your decorator like this, but it changes the interface a little:
class Spam(object):
    decorated = {}

    @classmethod
    def get_decorated_methods(cls):
        return cls.decorated.setdefault(cls, [])

    @classmethod
    def decorate(cls, funct):
        cls.get_decorated_methods().append(funct)
        return funct


class Eggs(Spam):
    pass


@Spam.decorate
def foo_and_spam():
    print "spam"

@Eggs.decorate
def foo_and_eggs():
    print "eggs"

print Eggs.get_decorated_methods() # [<function foo_and_eggs at 0x...>]
print Spam.get_decorated_methods() # [<function foo_and_spam at 0x...>]
A:
Not that I have anything against metaclasses, but you can also solve it without them:
from collections import defaultdict

class Spam(object):
    _decorated = defaultdict(list)

    @classmethod
    def decorate(cls, func):
        cls._decorated[cls].append(func)
        return func

    @classmethod
    def decorated(cls):
        return cls._decorated[cls]


class Eggs(Spam):
    pass

@Eggs.decorate
def foo():
    print "spam and eggs"

print Eggs.decorated() # [<function foo at 0x...>]
print Spam.decorated() # []
It is not possible to have properties on class objects (unless you resort to metaclasses again), therefore it is mandatory to get the list of decorated methods via a classmethod here as well. There is an extra layer of indirection involved compared to the metaclass solution.
Q:
How can I retrieve last x elements in Django
I am trying to retrieve the latest 5 posts (by post time)
In views.py, if I try blog_post_list = blogPosts.objects.all()[:5], it retrieves the first 5 elements of the blogPosts objects; how can I reverse this to retrieve the latest ones?
Cheers
A:
blog_post_list = blogPosts.objects.all().reverse()[:5]
# OR
blog_post_list = blogPosts.objects.all().order_by('-DEFAULT_ORDER_KEY')[:5]
I prefer the first.
A:
Based on Nick Presta's answer and your comment, try:
blog_post_list = blogPosts.objects.all().order_by('-pub_date')[:5]
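For context, both answers assume a model along these lines; the pub_date field name comes from the answer above and the rest is illustrative:
from django.db import models

class blogPosts(models.Model):
    title = models.CharField(max_length=200)
    body = models.TextField()
    pub_date = models.DateTimeField()  # the post time being sorted on

    class Meta:
        ordering = ['-pub_date']  # newest first by default

# With Meta.ordering set like this, blogPosts.objects.all()[:5] already
# yields the latest posts; without it, use .order_by('-pub_date')[:5].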
Q:
Speeding up GTK tree view
I'm writing an application for the Maemo platform using pygtk and the rendering speed of the tree view seems to be a problem. Since the application is a media controller I'm using transition animations in the UI. These animations slide the controls into view when moving around the UI. The issue with the tree control is that it is slow.
Just moving the widget around in the middle of the screen is not that slow but if the cells are being exposed the framerate really drops. What makes this more annoying is that if the only area that is being exposed is the title row with the row labels, the framerate remains under control.
Judging by this, I suspect the GTK tree view is drawing the full cells again each time a single row of pixels is exposed. Is there a way to somehow force GTK to draw the whole widget into some buffer even if parts of it are off screen, and then use the buffer to draw the widget when animating?
Also is there a difference between using Viewport and scrolling that up and using Layout panel and moving the widgets down? I'd have imagined Viewport is faster but I saw no real difference when I tried both versions.
I understand this isn't necessarily what GTK was created for. The other alternative I've tried is pygame, but I'd prefer some higher-level implementation that has widget-based event handling built in. Also, pygtk has the benefit of running on Windows and in a window, so development is easier.
A:
I never did this myself but you could try to implement the caching yourself. Instead of using the predefined cell renderers, implement your own cell renderer (possibly as a wrapper for the actual one), but cache the pixmaps.
In PyGTK, you can use gtk.GenericCellRenderer. In your decorator cell renderer, do the following when asked to render:
keep a cache of off-screen pixmaps (or better, just one large one) and a cache of sizes
if asked to predict the size or render, create a key from the relevant properties
if the key exists in the cache, use the cached pixmap, blit the cached pixmap on the drawable you are given
otherwise, first have the actual cell renderer do the work and then copy it
The last step also implies that caching does incur an overhead the first time the cell is rendered. This problem can be mitigated a bit by using a caching strategy. You might want to try out different things, based on the distribution of rendered values:
if all cells are unique, there is not much to do other than caching everything up to a certain limit, or using some MRU strategy
if you have some kind of Zipf distribution, i.e. some cells are very common while others are very rare, you should only cache the cells with high frequency and get rid of the caching overhead for rare cell values.
That being said, I can't say if it's going to make any difference. My experience from a somewhat similar problem is that anything involving text is usually slow enough that caching makes sense---sorry that I can't give simpler advice.
Before you try that, you could also simply write a decorating cell renderer which just counts how often your cells are actually rendered and gathers some timing information, so that you get an idea where the hot spots are and whether caching the values would make any sense at all.
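A minimal sketch of that counting idea, using a cell data func as the hook; gtk.TreeViewColumn calls the func every time a cell is about to be drawn, and all of the names below are illustrative:
import gtk

render_counts = {}

def counting_data_func(column, cell, model, treeiter, data=None):
    # Invoked by GTK each time this cell is configured for drawing.
    text = model.get_value(treeiter, 0)
    render_counts[text] = render_counts.get(text, 0) + 1
    cell.set_property('text', text)

store = gtk.ListStore(str)
for name in ['item-%d' % i for i in range(100)]:
    store.append([name])

view = gtk.TreeView(store)
cell = gtk.CellRendererText()
column = gtk.TreeViewColumn('Title', cell)
column.set_cell_data_func(cell, counting_data_func)
view.append_column(column)
# After driving the UI for a while, render_counts shows which rows get
# re-rendered most often, i.e. where caching would actually pay off.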
Q:
Simple tray icon application using pygtk
I'm writing a webmail checker in python and I want it to just sit on the tray icon and warn me when there is a new email. Could anyone point me in the right direction as far as the gtk code?
I already coded the bits necessary to check for new email but it's CLI right now.
A:
You'll want to use a gtk.StatusIcon to actually display the icon. Here are the docs. If you're just getting started with GUI programming you might want to work through a bit of the PyGTK tutorial.
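A minimal sketch of how the pieces could fit together; check_mail() stands in for the CLI checking code you already have, and the polling interval is arbitrary:
import gtk
import gobject

def check_mail():
    # Placeholder for your existing mail-checking logic.
    return 0  # number of new messages

icon = gtk.StatusIcon()
icon.set_from_stock(gtk.STOCK_DIALOG_INFO)
icon.set_tooltip('No new mail')

def poll():
    new = check_mail()
    if new:
        icon.set_tooltip('%d new message(s)' % new)
        icon.set_blinking(True)   # draw attention in the tray
    else:
        icon.set_blinking(False)
    return True  # keep the timeout running

def on_activate(status_icon):
    status_icon.set_blinking(False)  # user noticed; stop blinking

icon.connect('activate', on_activate)
gobject.timeout_add(60 * 1000, poll)  # check once a minute
gtk.main()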
A:
This http://www.pygtk.org/docs/pygtk/class-gtkstatusicon.html should get you going.
Q:
Adding tuples to produce a tuple with a subtotal per 'column'
What is the most pythonic way of adding the values of two or more tuples to produce a total for each 'column'?
Eg:
>>> a = (10, 20)
>>> b = (40, 50)
>>> c = (1, 3)
>>> ???
(51, 73)
I've so far considered the following:
def sumtuples(*tuples):
    return (sum(v1 for v1,_ in tuples), sum(v2 for _,v2 in tuples))
>>> print sumtuples(a, b, c)
(51, 73)
I'm sure this is far from ideal - how can it be improved?
A:
I guess you could use reduce, though it's debatable whether that's pythonic ..
In [13]: reduce(lambda s, t: (s[0]+t[0], s[1]+t[1]), [a, b, c], (0, 0))
Out[13]: (51, 73)
Here's another way using map and zip:
In [14]: map(sum, zip(a, b, c))
Out[14]: [51, 73]
or, if you're passing your collection of tuples in as a list:
In [15]: tups = [a, b, c]
In [15]: map(sum, zip(*tups))
Out[15]: [51, 73]
and, using a list comprehension instead of map:
In [16]: [sum(z) for z in zip(*tups)]
Out[16]: [51, 73]
A:
Since we're going crazy,
a = (10, 20)
b = (40, 50)
c = (1, 3)
def sumtuples(*tuples):
    return map(sum, zip(*tuples))
sumtuples(a,b,c)
[51, 73]
Truth is, almost every time I post one of these crazy solutions, the 'naive' method seems to work out faster and more readable...
A:
Not pure Python, but the preferred way if you have SciPy installed:
from scipy import array
a = array((10, 20))
b = array((40, 50))
c = array((1, 3))
print tuple(a+b+c)
A:
If your set of tuples is going to be relatively small, your solution is fine. However, if you're going to be working on very large data sets you should consider using reduce as it will only iterate over the list once compared to your original solution which iterates over the list of tuples twice.
>>> a = (10, 20)
>>> b = (40, 50)
>>> c = (1, 3)
>>> values=[a,b,c]
>>> reduce(lambda x,y: (x[0]+y[0],x[1]+y[1]), values,(0,0))
(51, 73)
A:
These solutions all suffer from one of two problems:
they only work on exactly two columns; ((1,2,3),(2,3,4),(3,4,5)) doesn't work; or
they don't work on an iterator, so generating a billion rows doesn't work (or wastes tons of memory).
Don't get caught up in the "pythonic" buzzword at the expense of not getting a correct answer.
def sum_columns(it):
    result = []
    for row in it:
        if len(result) <= len(row):
            extend_by = len(row) - len(result)
            result.extend([0] * extend_by)

        for idx, val in enumerate(row):
            result[idx] += val

    return result

a = (1, 20)
b = (4, 50)
c = (0, 30, 3)
print sum_columns([a,b,c])

def generate_rows():
    for i in range(1000):
        yield (i, 1, 2)

lst = generate_rows()
print sum_columns(lst)
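For completeness, the ragged-column case can also be handled with the standard library's izip_longest (Python 2.6+; zip_longest in Python 3). Note this still unpacks all rows as arguments to zip, so it does not address the billion-row concern above:
from itertools import izip_longest

def sum_columns(rows):
    # Missing cells in shorter rows are treated as 0.
    return [sum(col) for col in izip_longest(*rows, fillvalue=0)]

print sum_columns([(1, 20), (4, 50), (0, 30, 3)])  # [5, 100, 3]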
Q:
Python processes stops responding to SIGTERM / SIGINT after being restarted
I'm having a weird problem with some python processes running using a watchdog process.
The watchdog process is written in python and is the parent, and has a function called start_child(name) which uses subprocess.Popen to open the child process. The Popen object is recorded so that the watchdog can monitor the process using poll() and eventually end it with terminate() when needed.
If the child dies unexpectedly, the watchdog calls start_child(name) again and records the new Popen object.
There are 7 child processes, all of which are also python. If I run any of the children manually, I can send SIGTERM or SIGINT using kill and get the results I expect (the process ends).
However, when run from the watchdog process, the child will only end after the FIRST signal. When the watchdog restarts the child, the new child process no longer responds to SIGTERM or SIGINT. I have no idea what is causing this.
watchdog.py
class watchdog:
    # <snip> various init stuff

    def start(self):
        self.running = True
        kids = ['app1', 'app2', 'app3', 'app4', 'app5', 'app6', 'app7']
        self.processes = {}
        for kid in kids:
            self.start_child(kid)
        self.thread = threading.Thread(target=self._monitor)
        self.thread.start()
        while self.running:
            time.sleep(10)

    def start_child(self, name):
        try:
            proc = subprocess.Popen(name)
            self.processes[name] = proc
        except:
            print "oh no"
        else:
            print "started child ok"

    def _monitor(self):
        while self.running:
            time.sleep(1)
            if self.running:
                for kid, proc in self.processes.iteritems():
                    if proc.poll() is not None: # process ended
                        self.start_child(kid)
So what happens is watchdog.start() launches all 7 processes, and if I send any process SIGTERM, it ends, and the monitor thread starts it again. However, if I then send the new process SIGTERM, it ignores it.
I should be able to keep sending kill -15 to the restarted processes over and over again. Why do they ignore it after being restarted?
A:
As explained here: http://blogs.gentoo.org/agaffney/2005/03/18/python_sucks , when Python creates a new thread, it blocks all signals for that thread (and for any processes that thread spawns).
I fixed this using sigprocmask, called through ctypes. This may or may not be the "correct" way to do it, but it does work.
In the child process, during __init__:
libc = ctypes.cdll.LoadLibrary("libc.so")
mask = '\x00' * 17 # 16 byte empty mask + null terminator
libc.sigprocmask(3, mask, None) # '3' on FreeBSD is the value for SIG_SETMASK
A:
Wouldn't it be better to restore the default signal handlers within Python rather than via ctypes? In your child process, use the signal module:
import signal
for sig in range(1, signal.NSIG):
    try:
        signal.signal(sig, signal.SIG_DFL)
    except RuntimeError:
        pass

RuntimeError is raised when trying to set signals such as SIGKILL which can't be caught.
Q:
Encryption with Python
I'm making an encryption function in Python and I want to encrypt a random number using a public key.
I want to know: if I use the Crypto package (Crypto.PublicKey.pubkey), how can I use a method like...
def encrypt(self,plaintext,k)
Here k is itself a random number; does this mean it is the key? Can somebody help me with something related?
A:
Are you trying to encrypt a session/message key for symmetric encryption using the public key of the recipient? It might be more straightforward to use, say, SSH or TLS in those cases.
Back to your question:
Me Too Crypto (M2Crypto) is a nice wrapper around openssl.
First, you need to get the public key of the recipient:
recip = M2Crypto.RSA.load_pub_key(open('recipient_public_key.pem','rb').read())
Now you can encrypt your message:
plaintext = random_integer_you_want_to_encrypt
msg = recip.public_encrypt(plaintext,RSA.pkcs1_padding)
Now only someone with the private key of the recipient can decrypt it.
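And, sketching the other side: the recipient would decrypt with their private key. This assumes the key lives in recipient_private_key.pem; M2Crypto will prompt for a passphrase if the key is protected:
from M2Crypto import RSA

priv = RSA.load_key('recipient_private_key.pem')
plaintext = priv.private_decrypt(msg, RSA.pkcs1_padding)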
A:
you can try Pycrypto.
A:
The value k that you pass to encrypt is not part of the key. k is a random value that is used to randomize the encryption. It should be a different random number every time you encrypt a message.
Unfortunately, depending on what public-key algorithm you use, this k needs to satisfy more or less strict conditions. I.e., your encryption may be completely insecure if k does not contain enough entropy. This makes using pycrypto difficult, because you need to know more about the cryptosystem you use than the developer of the library does. In my opinion this is a serious flaw of pycrypto, and I'd recommend that you use a higher-level crypto library that doesn't require you to know any such details (i.e., something like M2Crypto).
Q:
When should I use varargs in designing a Python API?
Is there a good rule of thumb as to when you should prefer varargs function signatures in your API over passing an iterable to a function? ("varargs" being short for "variadic" or "variable-number-of-arguments"; i.e. *args)
For example, os.path.join has a vararg signature:
os.path.join(first_component, *rest) -> str
Whereas min allows either:
min(iterable[, key=func]) -> val
min(a, b, c, ...[, key=func]) -> val
Whereas any/all only permit an iterable:
any(iterable) -> bool
A:
Consider using varargs when you expect your users to specify the list of arguments as code at the callsite or having a single value is the common case. When you expect your users to get the arguments from somewhere else, don't use varargs. When in doubt, err on the side of not using varargs.
Using your examples, the most common usecase for os.path.join is to have a path prefix and append a filename/relative path onto it, so the call usually looks like os.path.join(prefix, some_file). On the other hand, any() is usually used to process a list of data, when you know all the elements you don't use any([a,b,c]), you use a or b or c.
A:
My rule of thumb is to use it when you might often switch between passing one and multiple parameters. Instead of having two functions (some GUI code for example):
def enable_tab(tab_name)
def enable_tabs(tabs_list)
or even worse, having just one function
def enable_tabs(tabs_list)
and using it as enable_tabs(['tab1']), I tend to use just: def enable_tabs(*tabs). Although seeing something like enable_tabs('tab1') looks kind of wrong (because of the plural), I prefer it over the alternatives.
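A minimal sketch of that pattern (the names are just illustrative):
def enable_tabs(*tabs):
    # enable_tabs('tab1') and enable_tabs('tab1', 'tab2') both work,
    # and an existing list can still be passed with enable_tabs(*tab_list)
    for name in tabs:
        print 'enabling', name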
A:
You should use it when your parameter list is variable.
Yeah, I know the answer is kinda daft, but it's true. Maybe your question was a bit diffuse. :-)
Default arguments, like in min() above, are more useful when you either want different behaviours or when you simply don't want to force the caller to send in all parameters.
The *arg is for when you have a variable list of arguments of the same type. Joining is a typical example. You can replace it with an argument that takes a list as well.
**kw is for when you have many arguments of different types, where each argument also is connected to a name. A typical example is when you want a generic function for handling form submission or similar.
A:
They are completely different interfaces.
In one case, you have one parameter, in the other you have many.
any(1, 2, 3)
TypeError: any() takes exactly one argument (3 given)
os.path.join("1", "2", "3")
'1\\2\\3'
It really depends on what you want to emphasize: any works over a list (well, sort of), while os.path.join works over a set of strings.
Therefore, in the first case you request a list; in the second, you request directly the strings.
In other terms, the expressiveness of the interface should be the main guideline for choosing the way parameters should be passed.
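To make the trade-off concrete, here is a sketch of a min-like function that accepts both calling styles (a single iterable or several positional arguments); the dispatch rule is an assumption modeled on the builtin min's documented behaviour:
def my_min(*args, **kwargs):
    key = kwargs.get('key', lambda v: v)
    # one positional argument means "an iterable of values",
    # several mean "these are the values themselves"
    items = iter(args[0]) if len(args) == 1 else iter(args)
    best = items.next()
    for item in items:
        if key(item) < key(best):
            best = item
    return best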
|
When should I use varargs in designing a Python API?
|
Is there a good rule of thumb as to when you should prefer varargs function signatures in your API over passing an iterable to a function? ("varargs" being short for "variadic" or "variable-number-of-arguments"; i.e. *args)
For example, os.path.join has a vararg signature:
os.path.join(first_component, *rest) -> str
Whereas min allows either:
min(iterable[, key=func]) -> val
min(a, b, c, ...[, key=func]) -> val
Whereas any/all only permit an iterable:
any(iterable) -> bool
|
[
"Consider using varargs when you expect your users to specify the list of arguments as code at the callsite or having a single value is the common case. When you expect your users to get the arguments from somewhere else, don't use varargs. When in doubt, err on the side of not using varargs.\nUsing your examples, the most common usecase for os.path.join is to have a path prefix and append a filename/relative path onto it, so the call usually looks like os.path.join(prefix, some_file). On the other hand, any() is usually used to process a list of data, when you know all the elements you don't use any([a,b,c]), you use a or b or c.\n",
"My rule of thumb is to use it when you might often switch between passing one and multiple parameters. Instead of having two functions (some GUI code for example):\ndef enable_tab(tab_name)\ndef enable_tabs(tabs_list)\n\nor even worse, having just one function\ndef enable_tabs(tabs_list)\n\nand using it as enable_tabls(['tab1']), I tend to use just: def enable_tabs(*tabs). Although, seeing something like enable_tabs('tab1') looks kind of wrong (because of the plural), I prefer it over the alternatives.\n",
"You should use it when your parameter list is variable.\nYeah, I know the answer is kinda daft, but it's true. Maybe your question was a bit diffuse. :-)\nDefault arguments, like min() above is more useful when you either want to different behaviours (like min() above) or when you simply don't want to force the caller to send in all parameters.\nThe *arg is for when you have a variable list of arguments of the same type. Joining is a typical example. You can replace it with an argument that takes a list as well.\n**kw is for when you have many arguments of different types, where each argument also is connected to a name. A typical example is when you want a generic function for handling form submission or similar.\n",
"They are completely different interfaces.\nIn one case, you have one parameter, in the other you have many.\nany(1, 2, 3)\nTypeError: any() takes exactly one argument (3 given)\n\nos.path.join(\"1\", \"2\", \"3\")\n'1\\\\2\\\\3'\n\nIt really depends on what you want to emphasize: any works over a list (well, sort of), while os.path.join works over a set of strings.\nTherefore, in the first case you request a list; in the second, you request directly the strings.\nIn other terms, the expressiveness of the interface should be the main guideline for choosing the way parameters should be passed.\n"
] |
[
8,
4,
0,
0
] |
[] |
[] |
[
"api",
"python",
"variadic_functions"
] |
stackoverflow_0001136673_api_python_variadic_functions.txt
|
Q:
Problem compiling MySQLdb for Python 2.6 on Win32
I'm using Django and Python 2.6, and I want to grow my application using a MySQL backend. Problem is that there isn't a win32 package for MySQLdb on Python 2.6.
Now I'm no hacker, but I thought I might compile it myself using MSVC++9 Express. But I quickly ran into a problem: the compiler can't find config_win.h, which I assume is a MySQL header file that lets the MySQLdb package know what calls it can make into MySQL.
Am I right? And if so, where do I get the header files for MySQL?
A:
Thanks all! I found that I hadn't installed the developer components in MySQL. Once that was done the problem was solved and I easily compiled the MySQLdb for Python 2.6.
I've made the package available at my site.
A:
I think that the header files are shipped with MySQL, just make sure you check the appropriate options when installing (I think that sources and headers are under "developer components" in the installation dialog).
A:
Have you considered using a pre-built stack with Python, MySQL, Apache, etc.?
For example: http://bitnami.org/stack/djangostack
A:
Also see this post on the mysql-python blog: MySQL-python-1.2.3 beta 2 released - dated March 2009. MySQLdb for Python 2.6 is still a work in progress...
|
Problem compiling MySQLdb for Python 2.6 on Win32
|
I'm using Django and Python 2.6, and I want to grow my application using a MySQL backend. Problem is that there isn't a win32 package for MySQLdb on Python 2.6.
Now I'm no hacker, but I thought I might compile it myself using MSVC++9 Express. But I quickly ran into a problem: the compiler can't find config_win.h, which I assume is a MySQL header file that lets the MySQLdb package know what calls it can make into MySQL.
Am I right? And if so, where do I get the header files for MySQL?
|
[
"Thanks all! I found that I hadn't installed the developer components in MySQL. Once that was done the problem was solved and I easily compiled the MySQLdb for Python 2.6.\nI've made the package available at my site.\n",
"I think that the header files are shipped with MySQL, just make sure you check the appropriate options when installing (I think that sources and headers are under \"developer components\" in the installation dialog).\n",
"Have you considered using a pre-built stack with Python, MySQL, Apache, etc.?\nFor example: http://bitnami.org/stack/djangostack\n",
"Also see this post on the mysql-python blog: MySQL-python-1.2.3 beta 2 released - dated March 2009. MySQLdb for Python 2.6 is still a work in progress...\n"
] |
[
9,
3,
1,
1
] |
[] |
[] |
[
"mysql",
"python",
"winapi"
] |
stackoverflow_0000316484_mysql_python_winapi.txt
|
Q:
Using python scripts in subversion hooks on windows
My main goal is to get this up and running.
My hook gets called when I do the commit with Tortoise SVN, but it always exits when I get to this line: Python "%~dp0trac-post-commit-hook.py" -p "%TRAC_ENV%" -r "%REV%" || EXIT 5
If I try and replace the call to the python script with any simple Python script it still doesn't work so I'm assuming it is a problem with the call to Python and not the script itself.
I have tried setting the PYTHON_PATH variable and also set %PATH% to include Python.
I have trac up and running so Python is working on the server itself.
Here is some background info:
Python is installed on Windows server and script is called from local machine so
IF NOT EXIST %TRAC_ENV% EXIT 3
and
SET PYTHON_PATH=X:\Python26
IF NOT EXIST %PYTHON_PATH% EXIT 4
fail unless I set them to the mapped network drives (that is, point them at the X and Y drives, not the C and E drives)
Python scripts can be called anywhere from the command line from the server regardless of the drive so the PATH variable should be set correctly
It appears to be an issue with calling Python scripts externally, but I'm not sure how to go about changing the permissions for this.
Thanks in advance.
A:
Take the following things into account:
Network drive mappings and subst mappings are user specific. Make sure the drives exist for the user account under which the svn server is running.
Subversion hook scripts are run without any environment variables being set, for security reasons, not even %PATH%. Call the python executable with an absolute path, e.g. c:\python25\python.exe.
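Applying the second point to the hook line from the question, the fix is to hard-code the interpreter path (C:\Python26 is an assumption; substitute wherever Python actually lives on the server):
C:\Python26\python.exe "%~dp0trac-post-commit-hook.py" -p "%TRAC_ENV%" -r "%REV%" || EXIT 5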
|
Using python scripts in subversion hooks on windows
|
My main goal is to get this up and running.
My hook gets called when I do the commit with Tortoise SVN, but it always exits when I get to this line: Python "%~dp0trac-post-commit-hook.py" -p "%TRAC_ENV%" -r "%REV%" || EXIT 5
If I try and replace the call to the python script with any simple Python script it still doesn't work so I'm assuming it is a problem with the call to Python and not the script itself.
I have tried setting the PYTHON_PATH variable and also set %PATH% to include Python.
I have trac up and running so Python is working on the server itself.
Here is some background info:
Python is installed on Windows server and script is called from local machine so
IF NOT EXIST %TRAC_ENV% EXIT 3
and
SET PYTHON_PATH=X:\Python26
IF NOT EXIST %PYTHON_PATH% EXIT 4
fail unless I set them to the mapped network drives (that is, point them at the X and Y drives, not the C and E drives)
Python scripts can be called anywhere from the command line from the server regardless of the drive so the PATH variable should be set correctly
It appears to be an issue with calling Python scripts externally, but I'm not sure how to go about changing the permissions for this.
Thanks in advance.
|
[
"Take the following things into account:\n\nnetwork drive mappings and subst\nmappings are user specific. Make sure\nthe drives exist for the user account\nunder which the svn server is\nrunning.\nsubversion hook scripts are run\nwithout any environment variables\nbeing set for security reasons, not even %path%. Call\nthe python executable with an\nabsolute path, e.g.\nc:\\python25\\python.exe.\n\n"
] |
[
3
] |
[] |
[] |
[
"hook",
"python",
"svn",
"svn_hooks",
"windows"
] |
stackoverflow_0001135499_hook_python_svn_svn_hooks_windows.txt
|
Q:
Execution of script using Popen fails
I need to execute a script in the background through a service.
The service kicks off the script using Popen.
p = Popen('/path/to/script/script.py', shell=True)
Why doesn't the following script work when I include the file writes in the for loop?
#!/usr/bin/python
import os
import time
def run():
fd = open('/home/dilleyjrr/testOutput.txt', 'w')
fd.write('Start:\n')
fd.flush()
for x in (1,2,3,4,5):
fd.write(x + '\n')
fd.flush()
time.sleep(1)
fd.write('Done!!!!\n')
fd.flush()
fd.close()
if __name__ == '__main__':
run()
A:
Here's your bug:
for x in (1,2,3,4,5):
fd.write(x + '\n')
You cannot concatenate an int with a string. Use instead (e.g.)
for x in (1,2,3,4,5):
fd.write('%s\n' % x)
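An explicit conversion works just as well:
for x in (1,2,3,4,5):
    fd.write(str(x) + '\n')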
A:
What error are you getting? It's hard to see the problem without the error. Is there anyway that the file is opened somewhere else?
|
Execution of script using Popen fails
|
I need to execute a script in the background through a service.
The service kicks off the script using Popen.
p = Popen('/path/to/script/script.py', shell=True)
Why doesn't the following script work when I include the file writes in the for loop?
#!/usr/bin/python
import os
import time
def run():
fd = open('/home/dilleyjrr/testOutput.txt', 'w')
fd.write('Start:\n')
fd.flush()
for x in (1,2,3,4,5):
fd.write(x + '\n')
fd.flush()
time.sleep(1)
fd.write('Done!!!!\n')
fd.flush()
fd.close()
if __name__ == '__main__':
run()
|
[
"Here's your bug:\nfor x in (1,2,3,4,5):\n fd.write(x + '\\n')\n\nYou cannot sum an int to a string. Use instead (e.g.)\nfor x in (1,2,3,4,5):\n fd.write('%s\\n' % x)\n\n",
"What error are you getting? It's hard to see the problem without the error. Is there anyway that the file is opened somewhere else?\n"
] |
[
1,
0
] |
[] |
[] |
[
"mod_python",
"popen",
"python"
] |
stackoverflow_0001138111_mod_python_popen_python.txt
|
Q:
Get first non-empty string from a list in python
In Python I have a list of strings, some of which may be the empty string. What's the best way to get the first non-empty string?
A:
next(s for s in list_of_strings if s)
Edit: py3k proof version as advised by Stephan202 in comments, thanks.
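If the list might contain no non-empty string at all, next accepts a default (Python 2.6+), which avoids the StopIteration:
next((s for s in list_of_strings if s), None)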
A:
To remove all empty strings,
[s for s in list_of_strings if s]
To get the first non-empty string, simply create this list and get the first element, or use the lazy method as suggested by wuub.
A:
def get_nonempty(list_of_strings):
for s in list_of_strings:
if s:
return s
A:
Here's a short way:
filter(None, list_of_strings)[0]
EDIT:
Here's a slightly longer way that is better:
from itertools import ifilter
ifilter(None, list_of_strings).next()
A:
To get the first non-empty string in a list, you just have to loop over it and check whether it's non-empty. That's all there is to it.
arr = ['','',2,"one"]
for i in arr:
if i:
print i
break
A:
Based on your question I'll have to assume a lot, but to "get" the first non-empty string:
(i for i, s in enumerate(x) if s).next()
which returns its index in the list. The 'x' binding points to your list of strings.
|
Get first non-empty string from a list in python
|
In Python I have a list of strings, some of which may be the empty string. What's the best way to get the first non-empty string?
|
[
"next(s for s in list_of_string if s)\n\nEdit: py3k proof version as advised by Stephan202 in comments, thanks. \n",
"To remove all empty strings,\n[s for s in list_of_strings if s]\nTo get the first non-empty string, simply create this list and get the first element, or use the lazy method as suggested by wuub.\n",
"def get_nonempty(list_of_strings):\n for s in list_of_strings:\n if s:\n return s\n\n",
"Here's a short way:\nfilter(None, list_of_strings)[0]\n\nEDIT:\nHere's a slightly longer way that is better:\nfrom itertools import ifilter\nifilter(None, list_of_strings).next()\n\n",
"to get the first non empty string in a list, you just have to loop over it and check if its not empty. that's all there is to it.\narr = ['','',2,\"one\"]\nfor i in arr:\n if i:\n print i\n break\n\n",
"Based on your question I'll have to assume a lot, but to \"get\" the first non-empty string:\n(i for i, s in enumerate(x) if s).next()\n\nwhich returns its index in the list. The 'x' binding points to your list of strings.\n"
] |
[
28,
6,
4,
3,
1,
0
] |
[] |
[] |
[
"list",
"python",
"string"
] |
stackoverflow_0001138024_list_python_string.txt
|
Q:
Tired of ASP.NET, which of the following should I learn and why?
Which of the following technologies is easy to learn and fun for developing a website? If you could only pick one, which would it be and why?
Clojure/Compojure+Ring/Moustache+Ring
Groovy/Grails
Python/Django
Ruby/Rails
Turbogear
Cappuccino or Sproutcore
Javascript/jQuery
A:
Have you considered turning off the computer and going outside instead?
Remember to wear pants!
A:
Have you tried ASP.NET MVC? It is actually very different to ASP.NET (vanilla), but retains your knowledge of the .NET framework. Most people wouldn't look back...
With the view based on your html (rather than whatever the controls decide to emit), it is also ideally placed to work alongside jQuery (it is even installed in the default project template) for all your dhtml/ajax needs.
Resources:
ASP.NET MVC in Action
jQuery in Action
A:
OK, first, apparently we all need a pants check. Done?
I'm of two minds:
if you are looking for a practical language / platform to pick up that you hope to use to help you in your day-to-day then I'd go with Python/Django. Python has developed into a really sweet and powerful language and Django is as nice a web development MVC as any other and pretty easy to pick up and get going with. You can run it locally, it's easy to deploy on Apache w/ mod_python. Did I mention that Python is a really nice language? Also good support in the tools world, google app engine etc....
if you are looking to expand your thinking/thought processes about the way you program and think about programming then I'm with Joel Spolsky - choose HAppS (Joel would go Haskell) or Clojure, which I've not used, but I've done a lot of Lisp and it makes you think differently, and the language constructs like the macro capability will change the way you think of solving problems
A:
I would probably learn Ruby on Rails. It has a lot of different methodologies compared to ASP.NET, and it might open your eyes to some different and very powerful approaches to web apps.
A:
Let's start by clarifying your question. Why are you "tired of ASP.NET?" Is it because of the tedious webforms model that tries so hard to protect you from the browser/server conversation that it ends up getting in the way?
Or is it because you have been trying to work with one of the tiresome 3rd party enhancement controls that build on the tedium of the webforms model?
Or are you simply tired of working with five different languages at once: ASP.NET, HTML, CSS, Javascript, and C#/VB?
If you answered yes to the first two of these questions here's some advice:
Get some rest.
Try ASP.NET MVC. It gets out of your way and lets you work with the browser and IIS
Realize that changing web development models will be difficult no matter which one you choose to move to. The path is smoother the fewer things you change (see number 2).
If you answered yes only to the 3rd question (five different languages) then all I can tell you is, welcome to web development. It will be this way for awhile.
A:
I recommend Clojure and Compojure because Clojure is awesome. Clojure is a new and modern LISP implemented on the JVM and can interact seamlessly with any Java library. It already has 3 IDE plugins in development, a book written about it, a very smart and open-minded person running the whole operation and a great newbie friendly community. The language is simple, easy to learn and yet really powerful. A good way to open your mind to new ideas without going as far as pure functional programming. Coding websites with Clojure is a breeze and really fun. It has a lot going for it and a lot of momentum. All the kool kids are doin' it so I recommend giving it a try!
A:
Javascript, because the skills you learn will complement your current Asp.net skills.
A:
If you main goal is to broaden yourself, I'd suggest looking at things like Seaside or HAppS.
A:
I would suggest jQuery or python, both are fun to work with and useful for either web work or just common tasks.
A:
You should wait until you get an answer from someone who's used more than one of those. That said (I've only used rails, python, and javascript), one way to frame it would be as a balance between sheer intellectual joy and practicality. My thoughts on Rails and Python from that perspective:
Rails is going to be different and interesting, and it was hip in 2005-2007. There may be something more hip now. (Hip counts when you want to get future colleagues excited about what you've done, when they haven't done it.) I'd venture that it's at least as eye-opening as something based on LISP or Smalltalk or Haskell, but probably more practical because you may actually end up using it at a job or for contract work. Clojure, Seaside, and HAppS sound really cool, but until one of them really catches on, you're unlikely to ever use any of that stuff again in your career unless you're a computer science PhD working with other PhD's. (Edit in response to comments: please don't read this as a disparagement of those frameworks. As Rayne and MarkusQ have noted, depending on your motivations, they may be just what you're looking for. I'm just trying to communicate one method for weighing the alternatives based on your goals.)
Python is a great language to know all around. I haven't used Django, but it has some industry traction (not as much as rails). Python as a language though will serve you well no matter what you do -- it's great for banging out utility scripts and rapidly prototyping ideas. There's a huge community and tons of libraries.
You can gauge a technology's potential usefulness for moneymaking by searching for it on craigslist, dice.com, monster.com, etc.
A:
Definitely clojure. It is the most different of all languages mentioned in the list, so it would be probably most fun to learn / use.
A:
Nobody seems to be voting for groovy. I'd go for that. I don't know anything about grails, but groovy the language is pretty cool. In the past nine months at my job I've been required to learn python and ruby. In the process I also took some time to understand groovy. groovy is the language that had me hooked before I finished reading the first chapter of Groovy in Action.
Ruby is the one I'm actively using now, and while I did nothing but python for six months that's my least favorite of the bunch. Python is not a bad language per se, I just didn't enjoy using it. I find ruby to be a very pleasant language and am glad I had the opportunity to learn it.
Fully learning javascript might be the more practical choice, but I'd still vote for Groovy. I'm anxious to find an opportunity to use it at work.
A:
Ruby on Rails, because that's what I use.
A:
I have worked with several technologies, but never touched ASP.NET. I've heard about it from other people who are under its influence.
I have started working with Ruby on Rails and it is fun. Since you want to learn and develop web sites, you should go for Ruby on Rails. There are a lot of things you can do with RoR on the web. I like the things you can do with RMagick (cropping images, thumbnails, slideshows etc).
Talk about multi-lingual sites... and there you have "gettext".
I vote for RoR.
A:
I'll add in my vote for Groovy, as well as another one for Ruby. Both Grails and Rails are excellent frameworks, although Rails will get you a job a lot sooner than Grails. Both are truly a pleasure to work with, and have actually made me enjoy coding again.
Groovy is nice because you can use any Java library. So, lightning-fast database access, XML parsing, PDF generation, and so on. In a nutshell, Groovy is Java, if Java had been written by a bunch of Ruby guys.
Grails is also great, although it's a lot buggier than Rails, and if you want to do anything complicated you're going to need to learn a bit about Spring, Hibernate, and Java. Grails does have better internationalization support and more deployment options, as well as a really good integrated scheduler (Quartz) for long-running and scheduled tasks.
Rails is Ruby all the way down, so you can very easily read the framework code and figure out how things worked -- I did this in order to figure out how to implement a graph (data structure), and was really pleased with how easy it was to figure out how to change things.
A:
Learn Ruby on Rails. It'll change the way you see web development. It did for me!
A valid alternative is Django and Python. I don't use it, but I consider it to be just as good as Rails.
A:
I've used Ruby on Rails but also have done quite a bit of Groovy and Grails work.
If you don't have any previous experience I would go with either of those.
They're both fun to learn, pretty easy, and are very powerful.
They're both backed up by frameworks:
Ruby has Rails/Merb
Groovy has Grails
They can both use jQuery.
I don't know much about Python/Django combination.
A:
I've started to learn Ruby on Rails along with MVC (since they're conceptually similar) and found it a great relief from the same routine with .Net.
|
Tired of ASP.NET, which of the following should I learn and why?
|
Which of the following technologies is easy to learn and fun for developing a website? If you could only pick one, which would it be and why?
Clojure/Compojure+Ring/Moustache+Ring
Groovy/Grails
Python/Django
Ruby/Rails
Turbogear
Cappuccino or Sproutcore
Javascript/jQuery
|
[
"Have you considered turning off the computer and going outside instead?\nRemember to wear pants!\n",
"Have you tried ASP.NET MVC? It is actually very different to ASP.NET (vanilla), but retains your knowledge of the .NET framework. Most people wouldn't look back...\nWith the view based on your html (rather than whatever the controls decide to emit), it is also ideally placed to work alongside jQuery (it is even installed in the default project template) for all your dhtml/ajax needs.\nResources:\n\nASP.NET MVC in Action\njQuery in Action\n\n",
"OK, first, apparently we all need a pants check. Done?\nI'm of two minds:\n\nif you are looking for a practical language / platform to pick up that you hope to use to help you in your day-to-day then I'd go with Python/Django. Python has developed into a really sweat and powerful language and Django is as nice a web development MVC as any other and pretty easy to pick up and get going with. You can run it locally, its easy to deploy on Apache w/ mod_python. Did I mention that Python is a really nice language? Also good support in the tools world, google app engine etc....\nif you are looking to expand your thinking/though processes about the way you program and think about programming then I'm with Joel Spolsky - choose HAppS (Joel would go Haslkell) or Clojure which I've not used but I've done a lot of lisp and it makes you think different and the language constructs like the macro capability will change the way you think of solving problems\n\n",
"I would probably learn Ruby on Rails. It has a lot of different methodologies compared to ASP.NET, and it might open your eyes to some different and very powerful approaches to web apps.\n",
"Let's start by clarifying your question. Why are you \"tired of ASP.NET?\" Is it because of the tedious webforms model that tries so hard to protect you from the browser/server conversation that it ends up getting in the way?\nOr is it because you have been trying to work with one of the tiresome 3rd party enhancement controls that build on the tedium of the webforms model?\nOr do are you simply tired of working with five different languages at once: ASP.NET, HTML, CSS, Javascript, and C#/VB?\nIf you answered yes to the first two of these questions here's some advice:\n\nGet some rest.\nTry ASP.NET MVC. It gets out of your way and lets you work with the browser and IIS\nRealize that changing web development models will be difficult no matter which one you choose to move to. The path is smoother the fewer things you change (see number 2).\n\nIf you answered yes only to the 3rd question (five different languages) then all I can tell you is, welcome to web development. It will be this way for awhile.\n",
"I recommend Clojure and Compojure because Clojure is awesome. Clojure is a new and modern LISP implemented on the JVM and can interact seamlessly with any Java library. It already has 3 IDE plugins in development, a book written about it, a very smart and open-minded person running the whole operation and a great newbie friendly community. The language is simple, easy to learn and yet really powerful. A good way to open your mind to new ideas without going as far as pure functional programming. Coding websites with Clojure is a breeze and really fun. It has a lot going for it and a lot of momentum. All the kool kids are doin' it so I recommend giving it a try!\n",
"Javascript, because the skills you learn will complement your current Asp.net skills.\n",
"If you main goal is to broaden yourself, I'd suggest looking at things like Seaside or HAppS.\n",
"I would suggest jQuery or python, both are fun to work with and useful for either web work or just common tasks.\n",
"You should wait until you get an answer from someone who's used more than one of those. That said (I've only used rails, python, and javascript), one way to frame it would be as a balance between sheer intellectual joy and practicality. My thoughts on Rails and Python from that perspective:\n\nRails is going to be different and interesting, and it was hip in 2005-2007. There may be something more hip now. (Hip counts when you want to get future colleagues excited about what you've done, when they haven't done it.) I'd venture that it's at least as eye-opening as something based on LISP or Smalltalk or Haskell, but probably more practical because you may actually end up using it at a job or for contract work. Clojure, Seaside, and HAppS sound really cool, but until one of them really catches on, you're unlikely to ever use any of that stuff again in your career unless you're a computer science PhD working with other PhD's. (Edit in response to comments: please don't read this as a disparagement of those frameworks. As Rayne and MarkusQ have noted, depending on your motivations, they may be just what you're looking for. I'm just trying to communicate one method for weighing the alternatives based on your goals.)\nPython is a great language to know all around. I haven't used Django, but it has some industry traction (not as much as rails). Python as a language though will serve you well no matter what you do -- it's great for banging out utility scripts and rapidly prototyping ideas. There's a huge community and tons of libraries.\n\nYou can gauge a technology's potential usefulness for moneymaking by searching for it on craigslist, dice.com, monster.com, etc.\n",
"Definitely clojure. It is the most different of all languages mentioned in the list, so it would be probably most fun to learn / use.\n",
"Nobody seems to be voting for groovy. I'd go for that. I don't know anything about grails, but groovy the language is pretty cool. In the past nine months at my job I've been required to learn python and ruby. In the process I also took some time to understand groovy. groovy is the language that had me hooked before I finished reading the first chapter of Groovy in Action. \nRuby is the one I'm actively using now, and while I did nothing but python for six months that's my least favorite of the bunch. Python is not a bad language per se, I just didn't enjoy using it. I find ruby to be a very pleasant language and am glad I had the opportunity to learn it.\nFully learning javascript might be the more practical choice, but I'd still vote for Groovy. I'm anxious to find an opportunity to use it at work. \n",
"Ruby on Rails, because that's what I use.\n",
"I have worked with several technologies... not touched ASP. NET. Heard about it from other people who are under its influence.\nI have started working with Ruby on Rails and it is fun. Since you want to learn and develop web sites, you should go for Ruby on Rails. There are lot of things you can do with RoR on web. I like things that you can do with RMagick. (cropping images, thumbnails,slideshow etc)\nTalk about multi-lingual sites... and there you have \"gettext\".\nI vote for RoR.\n",
"I'll add in my vote for Groovy, as well as another one for Ruby. Both Grails and Rails are excellent frameworks, although Rails will get you a job a lot sooner than Grails. Both are truly a pleasure to work with, and have actually made me enjoy coding again.\nGroovy is nice because you can use any Java library. So, lightning-fast database access, XML parsing, PDF generation, and so on. In a nutshell, Groovy is Java, if Java had been written by a bunch of Ruby guys.\nGrails is also great, although it's a lot buggier than Rails, and if you want to do anything complicated you're going to need to learn a bit about Spring, Hibernate, and Java. Grails does have better internationalization support and more deployment options, as well as a really good integrated scheduler (Quartz) for long-running and scheduled tasks.\nRails is Ruby all the way down, so you can very easily read the framework code and figure out how things worked -- I did this in order to figure out how to implement a graph (data structure), and was really pleased with how easy it was to figure out how to change things.\n",
"Learn Ruby on Rails. It'll change the way you see web development. It did for me!\nA valid alternative is Django and Python. I don't use it, but I consider it to be just as good as Rails.\n",
"I've used Ruby on Rails but also have done quite a bit of Groovy and Grails work.\nIf you don't have any previous experience I would go with either of those.\nThey're both fun to learn, pretty easy, and are very powerful.\nThey're both backed up by frameworks:\nRuby had Rails/Merb\nGroovy has Grails\nThey can both use jQuery.\nI don't know much about Python/Django combination.\n",
"I've started to learn Ruby on Rails along with MVC (since conceptually there similar) and found it a great relief from the same routine with .Net.\n"
] |
[
31,
30,
12,
10,
9,
8,
6,
5,
4,
4,
4,
3,
1,
1,
1,
1,
0,
0
] |
[] |
[] |
[
"asp.net_mvc",
"clojure",
"groovy",
"python",
"ruby"
] |
stackoverflow_0000656987_asp.net_mvc_clojure_groovy_python_ruby.txt
|
Q:
Regex in python
I'm trying to use regular expressions right now and I'm really confused. I want to do some validation with this regular expression:
^[A-Za-z0-9_.][A-Za-z0-9_ ]*
I want to make it so there is a limit of 32 characters, and I want to match the whole string.
ex:
string : ".hello hello"
-this should work
string : ".hello hello /.."
-This should be rejected because of the /..
Thanks!
A:
this?
^[A-Za-z0-9_.][A-Za-z0-9_ ]{0,31}$
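A quick sketch of using that pattern from Python, with the two examples from the question:
import re
pattern = re.compile(r'^[A-Za-z0-9_.][A-Za-z0-9_ ]{0,31}$')
print bool(pattern.match('.hello hello'))      # True
print bool(pattern.match('.hello hello /..'))  # False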
A:
From 0 to 32 chars:
^[\w\d_.]{0,32}$
|
Regex in python
|
I'm trying to use regular expressions right now and I'm really confused. I want to do some validation with this regular expression:
^[A-Za-z0-9_.][A-Za-z0-9_ ]*
I want to make it so there is a limit of 32 characters, and I want to match the whole string.
ex:
string : ".hello hello"
-this should work
string : ".hello hello /.."
-This should be rejected because of the /..
Thanks!
|
[
"this?\n^[A-Za-z0-9_.][A-Za-z0-9_ ]{0,31}$\n\n",
"From 0 to 32 chars: \n\n^[\\w\\d_.]{0,32}$ \n\n"
] |
[
2,
0
] |
[] |
[] |
[
"python",
"regex",
"validation"
] |
stackoverflow_0001138747_python_regex_validation.txt
|
Q:
psycopg2 on OSX: do I have to install PostgreSQL too?
I want to access a PostgreSQL database that's running on a remote machine, from Python on OS X. Do I have to install Postgres on the Mac as well, or will psycopg2 work on its own?
Any hints for a good installation guide for psycopg2 on OS X?
A:
macports tells me that the psycopg2 package has a dependency on the postgres client and libraries (but not the db server). If you successfully installed psycopg, then you should be good to go.
If you haven't installed yet, consider using macports or fink to deal with dependency resolution for you. In most cases, this will make things easier (occasionally build problems erupt).
A:
psycopg2 requires the PostgreSQL libpq libraries and the pg_config utility, which means you need a decent chunk of PostgreSQL to be installed. You could install Postgres and psycopg2 via MacPorts, but the version situation is somewhat messy--you may need to install a newer Python as well, particularly if you want to use a recent version of the PostgreSQL libraries. Depending on what you want to do, for example if you have some other Python you want to use, it may be easier to grab a more standard PostgreSQL install and just build psycopg2 yourself. That's pretty easy if you've already got gcc etc. installed, typically the only build issue is making sure it looks in the right place for the libpq include files. See Getting psycopg2 to work on Mac OS X Leopard and Installing psycopg2 on OS X for a few recipes covering the usual build issues you might run into.
A:
You can install from an OS X PostgreSQL package. Allow it to change your memory settings and reboot (it's reversible by removing '/etc/sysctl.conf') - the README file (which tells you to do this yourself) is out of date. Then use (or get, if you haven't already) EasyInstall.
Check where the PostgreSQL installer has put things - mine is here:
/Library/PostgreSQL/8.4/
Add this path to your .bash_login or .bash_profile file in your home directory (make one if you don't have it already):
export PATH="/Library/PostgreSQL/8.4/bin:$PATH"
Then (on an Intel iMac running OS 10.4.11 and Python 2.6) do:
sudo easy_install psycopg2
This found psycopg2 2.0.11 and (on my setup) gave the following readout:
warning: no files found matching '*.html' under directory 'doc'
warning: no files found matching 'MANIFEST'
zip_safe flag not set; analyzing archive contents...
Adding psycopg2 2.0.11 to easy-install.pth file
Installed /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/psycopg2-2.0.11-py2.6-macosx-10.3-i386.egg
Processing dependencies for psycopg2
Finished processing dependencies for psycopg2
So I guess I have no psycopg2 documentation... however, despite the warnings, I could then do:
python
>>>import psycopg2
>>>
Success? Perhaps. I haven't tried running anything yet, but getting a successful import was the first goal. BTW this was for Django.
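Once the import works, a minimal smoke test against the remote server might look like this (host, database name and credentials are placeholders for your own settings):
import psycopg2
conn = psycopg2.connect(host='db.example.com', dbname='mydb', user='me', password='secret')
cur = conn.cursor()
cur.execute('SELECT version()')
print cur.fetchone()[0]  # prints the server's version string
conn.close()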
|
psycopg2 on OSX: do I have to install PostgreSQL too?
|
I want to access a PostgreSQL database that's running on a remote machine, from Python on OS X. Do I have to install Postgres on the Mac as well, or will psycopg2 work on its own?
Any hints for a good installation guide for psycopg2 on OS X?
|
[
"macports tells me that the psycopg2 package has a dependency on the postgres client and libraries (but not the db server). If you successfully installed psycopg, then you should be good to go.\nIf you haven't installed yet, consider using macports or fink to deal with dependency resolution for you. In most cases, this will make things easier (occasionally build problems erupt).\n",
"psycopg2 requires the PostgreSQL libpq libraries and the pg_config utility, which means you need a decent chunk of PostgreSQL to be installed. You could install Postgres and psycopg2 via MacPorts, but the version situation is somewhat messy--you may need to install a newer Python as well, particularly if you want to use a recent version of the PostgreSQL libraries. Depending on what you want to do, for example if you have some other Python you want to use, it may be easier to grab a more standard PostgreSQL install and just build psycopg2 yourself. That's pretty easy if you've already got gcc etc. installed, typically the only build issue is making sure it looks in the right place for the libpq include files. See Getting psycopg2 to work on Mac OS X Leopard and Installing psycopg2 on OS X for a few recipes covering the usual build issues you might run into.\n",
"You can install from an OS X PostgreSQL package. Allow it to change your memory settings and reboot (it's reversible by removing '/etc/sysctl.conf') - the README file (which tells you to do this yourself) is out of date. Then use (or get, if you haven't already and) EasyInstall.\nCheck where the PostgreSQL installer has put things - mine is here:\n/Library/PostgreSQL/8.4/\n\nAdd this path to your .bash_login or .bash_profile file in your home directory (make one if you don't have it already):\nexport PATH=\"/Library/PostgreSQL/8.4/bin:$PATH\"\n\nThen (on an Intel iMac running OS 10.4.11 and Python 2.6) do:\nsudo easy_install psycopg2\n\nThis found psycopg2 2.0.11 and (on my setup) gave the following readout:\nwarning: no files found matching '*.html' under directory 'doc'\nwarning: no files found matching 'MANIFEST'\nzip_safe flag not set; analyzing archive contents...\nAdding psycopg2 2.0.11 to easy-install.pth file\n\nInstalled /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/psycopg2-2.0.11-py2.6-macosx-10.3-i386.egg\nProcessing dependencies for psycopg2\nFinished processing dependencies for psycopg2\n\nSo I guess I have no psycopg2 documentation... however, despite the warnings, I could then do:\npython\n>>>import psycopg2\n>>>\n\nSuccess? Perhaps. I haven't tried running anything yet, but getting a successful import was the first goal. BTW this was for Django.\n"
] |
[
3,
1,
1
] |
[] |
[] |
[
"macos",
"postgresql",
"python"
] |
stackoverflow_0001052957_macos_postgresql_python.txt
|
Q:
How do I receive SNMP traps on OS X?
I need to receive and parse some SNMP traps (messages) and I would appreciate any advice on getting the code I have working on my OS X machine. I have been given some Java code that runs on Windows with net-snmp. I'd like to either get the Java code running on my development machine or whip up some Python code to do the same.
I was able to get the Java code to compile on my OS X machine and it runs without any complaints, including none of the exceptions I would expect to be thrown if it was unable to bind to socket 8255. However, it never reports receiving any SNMP traps, which makes me wonder whether it's really able to read on the socket. Here's what I gather to be the code from the Java program that binds to the socket:
DatagramChannel dgChannel1=DatagramChannel.open();
Selector mux=Selector.open();
dgChannel1.socket().bind(new InetSocketAddress(8255));
dgChannel1.configureBlocking(false);
dgChannel1.register(mux,SelectionKey.OP_READ);
while(mux.select()>0) {
Iterator keyIt = mux.selectedKeys().iterator();
while (keyIt.hasNext()) {
SelectionKey key = (SelectionKey) keyIt.next();
if (key.isReadable()) {
/* processing */
}
}
}
Since I don't know Java and like to mess around with Python, I installed libsnmp via easy_install and tried to get that working. The sample programs traplistener.py and trapsender.py have no problem talking to each other but if I run traplistener.py waiting for my own SNMP signals I again fail to receive anything. I should note that I had to run the python programs via sudo in order to have permission to access the sockets. Running the java program via sudo had no effect.
All this makes me suspect that both programs are having problem with OS X and its sockets, perhaps their permissions. For instance, I had to change the permissions on the /dev/bpf devices for Wireshark to work. Another thought is that it has something to do with my machine having multiple network adapters enabled, including eth0 (ethernet, where I see the trap messages thanks to Wireshark) and eth1 (wifi). Could this be the problem?
As you can see, I know very little about sockets or SNMP, so any help is much appreciated!
Update: Using lsof (sudo lsof -i -n -P, to be exact) it appears that my problem is that the java program is only listening on IPv6 while the trap sender is using IPv4. I've tried disabling IPv6 (sudo ip6 -x) and telling java to use IPv4 (java -jar bridge.jar -Djava.net.preferIPv4Stack=true) but I keep finding my program using IPv6. Any thoughts?
java 16444 peter 34u IPv6 0x12f3ad98 0t0 UDP *:8255
Update 2: Ok, I guess I had the java parameter order wrong: java -Djava.net.preferIPv4Stack=true -jar bridge.jar puts the program on IPv4. However, my program still shows no signs of receiving the packets that I know are there.
A:
The standard port number for SNMP traps is 162.
Is there a reason you're specifying a different port number ? You can normally change the port number that traps are sent on/received on, but obviously both ends have to agree. So I'm wondering if this is your problem.
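If you want to rule the Java side out entirely, a bare-bones Python listener on the standard trap port is only a few lines (binding to port 162 needs sudo on OS X; this just proves the datagrams arrive, it does not parse them):
import socket
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(('', 162))
while True:
    data, addr = sock.recvfrom(65535)
    print 'got %d bytes of trap data from %s' % (len(data), addr[0])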
A:
Ok, the solution to get my code working was to run the program as java -Djava.net.preferIPv4Stack=true -jar bridge.jar and to power cycle the SNMP trap sender. Thanks for your help, Brian.
|
How do I receive SNMP traps on OS X?
|
I need to receive and parse some SNMP traps (messages) and I would appreciate any advice on getting the code I have working on my OS X machine. I have been given some Java code that runs on Windows with net-snmp. I'd like to either get the Java code running on my development machine or whip up some Python code to do the same.
I was able to get the Java code to compile on my OS X machine and it runs without any complaints, including none of the exceptions I would expect to be thrown if it was unable to bind to socket 8255. However, it never reports receiving any SNMP traps, which makes me wonder whether it's really able to read on the socket. Here's what I gather to be the code from the Java program that binds to the socket:
DatagramChannel dgChannel1=DatagramChannel.open();
Selector mux=Selector.open();
dgChannel1.socket().bind(new InetSocketAddress(8255));
dgChannel1.configureBlocking(false);
dgChannel1.register(mux,SelectionKey.OP_READ);
while(mux.select()>0) {
Iterator keyIt = mux.selectedKeys().iterator();
while (keyIt.hasNext()) {
SelectionKey key = (SelectionKey) keyIt.next();
if (key.isReadable()) {
/* processing */
}
}
}
Since I don't know Java and like to mess around with Python, I installed libsnmp via easy_install and tried to get that working. The sample programs traplistener.py and trapsender.py have no problem talking to each other but if I run traplistener.py waiting for my own SNMP signals I again fail to receive anything. I should note that I had to run the python programs via sudo in order to have permission to access the sockets. Running the java program via sudo had no effect.
All this makes me suspect that both programs are having problem with OS X and its sockets, perhaps their permissions. For instance, I had to change the permissions on the /dev/bpf devices for Wireshark to work. Another thought is that it has something to do with my machine having multiple network adapters enabled, including eth0 (ethernet, where I see the trap messages thanks to Wireshark) and eth1 (wifi). Could this be the problem?
As you can see, I know very little about sockets or SNMP, so any help is much appreciated!
Update: Using lsof (sudo lsof -i -n -P, to be exact) it appears that my problem is that the java program is only listening on IPv6 while the trap sender is using IPv4. I've tried disabling IPv6 (sudo ip6 -x) and telling java to use IPv4 (java -jar bridge.jar -Djava.net.preferIPv4Stack=true) but I keep finding my program using IPv6. Any thoughts?
java 16444 peter 34u IPv6 0x12f3ad98 0t0 UDP *:8255
Update 2: Ok, I guess I had the java parameter order wrong: java -Djava.net.preferIPv4Stack=true -jar bridge.jar puts the program on IPv4. However, my program still shows no signs of receiving the packets that I know are there.
|
[
"The standard port number for SNMP traps is 162. \nIs there a reason you're specifying a different port number ? You can normally change the port number that traps are sent on/received on, but obviously both ends have to agree. So I'm wondering if this is your problem.\n",
"Ok, the solution to get my code working was to run the program as java -Djava.net.preferIPv4Stack=true -jar bridge.jar and to power cycle the SNMP trap sender. Thanks for your help, Brian.\n"
] |
[
0,
0
] |
[] |
[] |
[
"java",
"macos",
"python",
"snmp",
"sockets"
] |
stackoverflow_0001135981_java_macos_python_snmp_sockets.txt
|
Q:
Detecting Retweets using computationally inexpensive Python hashing algorithms
In order to be able to detect RT of a particular tweet, I plan to store hashes of each formatted tweet in the database.
What hashing algorithm should I use? Cryptographic strength is of course not essential; I just need a minimal way of storing the data as something that can then be compared for sameness, in an efficient way.
My first attempt at this was by using md5 hashes. But I figured there can be hashing algorithms that are much more efficient, as security is not required.
A:
Do you really need to hash at all? Twitter messages are short enough (and disk space cheap enough) that it may be better to just store the whole message, rather than eating up clock cycles to hash it.
A:
I am not familiar with Python (sorry, Ruby guy typing here) however you could try a few things.
Assumptions:
You will likely be storing hundreds of thousands of Tweets over time, so comparing one hash against "every record" in the table will be inefficient. Also, RTs are not always carbon copies of the original tweet. After all, the original author's name is usually included and takes up some of the 140 character limit. So perhaps you could use a solution that matches more accurately than a "dumb" hash?
Tagging & Indexing
Tag and index the component parts of the message in a standard way. This could include treating hashed #...., at-marked @.... and URL strings as "tags". After removing noise words and punctuation, you could also treat the remaining words as tags too.
Fast Searching
Databases are terrible at finding multiple group membership very quickly (I'll assume you're using either MySQL or PostgreSQL, which are terrible at this). Instead try one of the free text engines like Sphinx Search. They are very, very fast at resolving multiple group membership (i.e. checking if keywords are present).
Using Sphinx or similar, we search on all of the "tags" we extracted. This will probably return a smallish result set of "potential original Tweets". Then compare them one by one using a similarity matching algorithm (here is one in Python: http://code.google.com/p/pylevenshtein/).
Now let me warmly welcome you to the world of text mining.
Good luck!
A:
I echo Chris' comment about not using a hash at all (your database engine can hopefully index 140-character fields efficiently).
If you did want to use a hash, MD5 would be my first choice as well (16 bytes), followed by SHA-1 (20 bytes).
Whatever you do, don't use sum-of-characters. I can't immediately come up with a function that would have more collisions (all anagrams hash the same), plus it's slower!
$ python -m timeit -s 'from hashlib import md5' 'd=md5("There once was a man named Michael Finnegan.").digest()'
100000 loops, best of 3: 2.47 usec per loop
$ python -m timeit 'd=sum(ord(c) for c in "There once was a man named Michael Finnegan.")'
100000 loops, best of 3: 13.9 usec per loop
A:
There are a few issues here. First, RT's are not always identical. Some people add a comment. Others change the URL for tracking. Others add in the person that they are RT'ing (which may or may not be the originator).
So if you are going to hash the tweet, you need to boil it down to the meat of the tweet, and only hash that. Good luck.
Above, someone mentioned that with 32 bits, you will start having collisions at about 65K tweets. Of course, you could have a collision on tweet #2, but 65K is roughly where the birthday paradox says collisions become likely for a 32-bit hash (sqrt(2^32) = 2^16, which is about 65K; 2^32 itself is about 4 billion). So a 32-bit hash has less headroom than the raw size suggests.
A better algorithm might be to try to derive the "unique" parts of the tweet, and fingerprint it. It's not a hash, it's a fingerprint of a few key words that define uniqueness.
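A rough sketch of such a fingerprint; the normalization rules here are illustrative assumptions, not a standard:
import hashlib, re
def tweet_fingerprint(tweet):
    core = re.sub(r'(?i)^(rt\s+@\w+:?\s*)+', '', tweet)  # drop leading RT @user markers
    core = re.sub(r'https?://\S+', '', core)             # drop URLs, since tracking links vary
    core = ' '.join(core.lower().split())                # normalize case and whitespace
    return hashlib.md5(core).hexdigest()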
A:
Well, tweets are only 140 characters long, so you could even store the entire tweet in the database...
but if you really want to "hash" them somehow, a simple way would be to just take the sum of the ASCII values of all the characters in the tweet:
sum(ord(c) for c in tweet)
Of course, whenever you have a match of hashes, you should check the tweets themselves for sameness, because the probability of finding two tweets that give the same "sum-hash" is probably non-negligible.
A:
Python's shelve module? http://docs.python.org/library/shelve.html
A:
You are trying to hash a string, right? Builtin types can be hashed right away; just do hash("some string") and you get some int. It's the same function Python uses for dictionaries, so it is probably the best choice.
|
Detecting Retweets using computationally inexpensive Python hashing algorithms
|
In order to be able to detect RT of a particular tweet, I plan to store hashes of each formatted tweet in the database.
What hashing algorithm should I use? Cryptographic strength is of course not essential; I just need a minimal way of storing the data as something that can then be compared for sameness, in an efficient way.
My first attempt at this was by using md5 hashes. But I figured there can be hashing algorithms that are much more efficient, as security is not required.
|
[
"Do you really need to hash at all? Twitter messages are short enough (and disk space cheap enough) that it may be better to just store the whole message, rather than eating up clock cycles to hash it.\n",
"I am not familiar with Python (sorry, Ruby guy typing here) however you could try a few things. \nAssumptions: \nYou will likely be storing hundreds of thousands of Tweets over time, so comparing one hash against \"every record\" in the table will be inefficient. Also, RTs are not always carbon copies of the original tweet. After all, the original author's name is usually included and takes up some of the 140 character limit. So perhaps you could use a solution that matches more accurately than a \"dumb\" hash?\n\nTagging & Indexing\nTag and index the component parts of\nthe message in a standard way. This\ncould include treating hashed #....,\nat-marked @.... and URL strings as\n\"tags\". After removing noise words\nand punctuation, you could also\ntreat the remaining words as tags\ntoo.\nFast Searching\nDatabases are terrible at finding\nmultiple group membership very\nquickly (I'll assume your using either\nMysql or Postgresql, which are\nterrible at this). Instead try one\nof the free text engines like\nSphinx Search. They are very\nvery fast at resolving multiple group membership (i.e.\nchecking if keywords are present).\nUsing Sphinx or similar, we search on\nall of the \"tags\" we extracted. This\nwill probably return a smallish\nresult set of \"potential original Tweets\". Then compare them one by one\nusing similarity matching algorithm\n(here is one in Python http://code.google.com/p/pylevenshtein/)\n\nNow let me warmly welcome you to the world of text mining. \nGood luck!\n",
"I echo Chris' comment about not using a hash at all (your database engine can hopefully index 140-character fields efficiently).\nIf you did want to use a hash, MD5 would be my first choice as well (16 bytes), followed by SHA-1 (20 bytes).\nWhatever you do, don't use sum-of-characters. I can't immediately come up with a function that would have more collisions (all anagrams hash the same), plus it's slower!\n$ python -m timeit -s 'from hashlib import md5' 'd=md5(\"There once was a man named Michael Finnegan.\").digest()'\n100000 loops, best of 3: 2.47 usec per loop\n$ python -m timeit 'd=sum(ord(c) for c in \"There once was a man named Michael Finnegan.\")'\n100000 loops, best of 3: 13.9 usec per loop\n\n",
"There are a few issues here. First, RT's are not always identical. Some people add a comment. Others change the URL for tracking. Others add in the person that they are RT'ing (which may or may not be the originator).\nSo if you are going to hash the tweet, you need to boil it down to the meat of the tweet, and only hash that. Good luck.\nAbove, someone mentioned that with 32-bits, you will start having collisions at about 65K tweets. Of course, you could have collisions on tweet #2. But I think the author of that comment was confused, since 2^16 = ~65K, but 2^32 = ~4 Trillion. So you have a little more room there.\nA better algorithm might be to try to derive the \"unique\" parts of the tweet, and fingerprint it. It's not a hash, it's a fingerprint of a few key words that define uniqueness.\n",
"Well, tweets are only 140 characters long, so you could even store the entire tweet in the database...\nbut if you really want to \"hash\" them somehow, a simple way would be to just take the sum of the ASCII values of all the characters in the tweet:\nsum(ord(c) for c in tweet)\n\nOf course, whenever you have a match of hashes, you should check the tweets themselves for sameness, because the probability of finding two tweets that give the same \"sum-hash\" is probably non-negligible.\n",
"Python's shelve module? http://docs.python.org/library/shelve.html\n",
"You are trying to hash a string right? Builtin types can be hashed right away, just do hash(\"some string\") and you get some int. Its the same function python uses for dictonarys, so it is probably the best choice.\n"
] |
[
6,
4,
2,
2,
1,
0,
0
] |
[] |
[] |
[
"hash",
"md5",
"python",
"twitter"
] |
stackoverflow_0000815313_hash_md5_python_twitter.txt
|
Q:
Python RSA Decryption Using OpenSSL Generated Keys
Does anyone know the simplest way to import an OpenSSL RSA private/public key (using a passphrase) with a Python library and use it to decrypt a message.
I've taken a look at ezPyCrypto, but can't seem to get it to recognise an OpenSSL RSA key, I've tried importing a key with importKey as follows:
key.importKey(myKey, passphrase='PASSPHRASE')
myKey in my case is an OpenSSL RSA public/private keypair represented as a string.
This balks with:
unbound method importKey() must be called with key instance as first
argument (got str instance instead)
The API doc says:
importKey(self, keystring, **kwds)
Can somebody suggest how I read a key in using ezPyCrypto? I've also tried:
key(key, passphrase='PASSPHRASE')
but this balks with:
ezPyCrypto.CryptoKeyError: Attempted
to import invalid key, or passphrase
is bad
Link to docs here:
http://www.freenet.org.nz/ezPyCrypto/detail/index.html
EDIT: Just an update on this. I successfully imported an RSA key, but had real problems decrypting because ezPyCrypto doesn't support the AES block cipher. Just so that people know. I successfully managed to do what I wanted using ncrypt (http://tachyon.in/ncrypt/). I had some compilation issues with M2Crypto because of SWIG and OpenSSL compilation problems, despite having versions installed that exceeded the minimum requirements. It would seem that the Python encryption/decryption frameworks are a bit of a minefield at the moment. Ho hum, thanks for your help.
A:
The first error is telling you that importKey needs to be called on an instance of key.
k = key()
k.importKey(myKey, passphrase='PASSPHRASE')
However, the documentation seems to suggest that this is a better way of doing what you want:
k = key(keyobj=myKey, passphrase='PASSPHRASE')
A:
It is not clear what you are trying to achieve, but you could give M2Crypto a try. From my point of view it is the best OpenSSL wrapper available for Python.
Here is a sample RSA encryption/decryption code:
import M2Crypto as m2c
import textwrap
key = m2c.RSA.load_key('key.pem', lambda prompt: 'mypassword')
# encrypt something:
data = 'testing 123'
encrypted = key.public_encrypt(data, m2c.RSA.pkcs1_padding)
print "Encrypted data:"
print "\n".join(textwrap.wrap(' '.join(['%02x' % ord(b) for b in encrypted ])))
# and now decrypt it:
decrypted = key.private_decrypt(encrypted, m2c.RSA.pkcs1_padding)
print "Decrypted data:"
print decrypted
print data == decrypted
|
Python RSA Decryption Using OpenSSL Generated Keys
|
Does anyone know the simplest way to import an OpenSSL RSA private/public key (using a passphrase) with a Python library and use it to decrypt a message.
I've taken a look at ezPyCrypto, but can't seem to get it to recognise an OpenSSL RSA key, I've tried importing a key with importKey as follows:
key.importKey(myKey, passphrase='PASSPHRASE')
myKey in my case is an OpenSSL RSA public/private keypair represented as a string.
This balks with:
unbound method importKey() must be called with key instance as first
argument (got str instance instead)
The API doc says:
importKey(self, keystring, **kwds)
Can somebody suggest how I read a key in using ezPyCrypto? I've also tried:
key(key, passphrase='PASSPHRASE')
but this balks with:
ezPyCrypto.CryptoKeyError: Attempted
to import invalid key, or passphrase
is bad
Link to docs here:
http://www.freenet.org.nz/ezPyCrypto/detail/index.html
EDIT: Just an update on this. I successfully imported an RSA key, but had real problems decrypting because ezPyCrypto doesn't support the AES block cipher. Just so that people know. I successfully managed to do what I wanted using ncrypt (http://tachyon.in/ncrypt/). I had some compilation issues with M2Crypto because of SWIG and OpenSSL compilation problems, despite having versions installed that exceeded the minimum requirements. It would seem that the Python encryption/decryption frameworks are a bit of a minefield at the moment. Ho hum, thanks for your help.
|
[
"The first error is telling you that importKey needs to be called on an instance of key.\nk = key()\nk.importKey(myKey, passphrase='PASSPHRASE')\n\nHowever, the documentation seems to suggest that this is a better way of doing what you want:\nk = key(keyobj=myKey, passphrase='PASSPHRASE')\n\n",
"It is not clear what are you trying to achieve, but you could give M2Crypto a try. From my point of view it is the best OpenSSL wrapper available for Python.\nHere is a sample RSA encryption/decription code:\nimport M2Crypto as m2c\nimport textwrap\nkey = m2c.RSA.load_key('key.pem', lambda prompt: 'mypassword')\n\n# encrypt something:\ndata = 'testing 123'\nencrypted = key.public_encrypt(data, m2c.RSA.pkcs1_padding)\nprint \"Encrypted data:\"\nprint \"\\n\".join(textwrap.wrap(' '.join(['%02x' % ord(b) for b in encrypted ])))\n\n# and now decrypt it:\ndecrypted = key.private_decrypt(encrypted, m2c.RSA.pkcs1_padding)\nprint \"Decrypted data:\"\nprint decrypted\nprint data == decrypted\n\n"
] |
[
6,
5
] |
[] |
[] |
[
"encryption",
"openssl",
"python",
"rsa"
] |
stackoverflow_0001139622_encryption_openssl_python_rsa.txt
|
Q:
Python inheritance and calling parent class constructor
This is what I'm trying to do in Python:
class BaseClass:
def __init__(self):
print 'The base class constructor ran!'
self.__test = 42
class ChildClass(BaseClass):
def __init__(self):
print 'The child class constructor ran!'
BaseClass.__init__(self)
def doSomething(self):
print 'Test is: ', self.__test
test = ChildClass()
test.doSomething()
Which results in:
AttributeError: ChildClass instance has no attribute '_ChildClass__test'
What gives? Why doesn't this work as I expect?
A:
From python documentation:
Private name mangling: When an identifier that textually occurs in a class definition begins with two or more underscore characters and does not end in two or more underscores, it is considered a private name of that class. Private names are transformed to a longer form before code is generated for them. The transformation inserts the class name in front of the name, with leading underscores removed, and a single underscore inserted in front of the class name. For example, the identifier __spam occurring in a class named Ham will be transformed to _Ham__spam. This transformation is independent of the syntactical context in which the identifier is used. If the transformed name is extremely long (longer than 255 characters), implementation defined truncation may happen. If the class name consists only of underscores, no transformation is done.
So your attribute is not named __test but _BaseClass__test.
However you should not depend on that, use self._test instead and most python developers will know that the attribute is an internal part of the class, not the public interface.
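A minimal sketch of that suggestion applied to the code from the question (single underscore, so no name mangling takes place):
class BaseClass:
    def __init__(self):
        self._test = 42          # single underscore: internal, but not mangled

class ChildClass(BaseClass):
    def __init__(self):
        BaseClass.__init__(self)
    def doSomething(self):
        print 'Test is: ', self._test

ChildClass().doSomething()       # prints: Test is:  42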
A:
You could use Python's introspection facilities to get you the information you are looking for.
A simple dir(test) will give you
['_BaseClass__test', '__doc__', '__init__', '__module__', 'doSomething']
Note the '_BaseClass__test'. That's what you're looking for.
Check this for more information.
A:
Double-underscored variables can be considered "private". They are mangled with the class name, amongst other things, to protect multiple base classes from overriding each other's members.
Use a single underscore if you want your attribute to be considered private by other developers.
|
Python inheritance and calling parent class constructor
|
This is what I'm trying to do in Python:
class BaseClass:
def __init__(self):
print 'The base class constructor ran!'
self.__test = 42
class ChildClass(BaseClass):
def __init__(self):
print 'The child class constructor ran!'
BaseClass.__init__(self)
def doSomething(self):
print 'Test is: ', self.__test
test = ChildClass()
test.doSomething()
Which results in:
AttributeError: ChildClass instance has no attribute '_ChildClass__test'
What gives? Why doesn't this work as I expect?
|
[
"From python documentation:\n\nPrivate name mangling: When an identifier that textually occurs in a class definition begins with two or more underscore characters and does not end in two or more underscores, it is considered a private name of that class. Private names are transformed to a longer form before code is generated for them. The transformation inserts the class name in front of the name, with leading underscores removed, and a single underscore inserted in front of the class name. For example, the identifier __spam occurring in a class named Ham will be transformed to _Ham__spam. This transformation is independent of the syntactical context in which the identifier is used. If the transformed name is extremely long (longer than 255 characters), implementation defined truncation may happen. If the class name consists only of underscores, no transformation is done.\n\nSo your attribute is not named __test but _BaseClass__test.\nHowever you should not depend on that, use self._test instead and most python developers will know that the attribute is an internal part of the class, not the public interface. \n",
"You could use Python's introspection facilities to get you the information you are looking for.\nA simple dir(test) will give you\n['_BaseClass__test', '__doc__', '__init__', '__module__', 'doSomething']\n\nNote the '_BaseClass__test'. That's what you're looking for.\nCheck this for more information.\n",
"Double underscored variables can be considered \"private\". They are mangled with the class name, amongst other to protect multiple baseclasses from overriding eachothers members.\nUse a single underscore if you want your attribute to be considered private by other developers.\n"
] |
[
20,
5,
3
] |
[] |
[] |
[
"oop",
"python"
] |
stackoverflow_0001139828_oop_python.txt
|
Q:
Python fails to execute firefox webbrowser from a root executed script with privileges drop
I can't run firefox from a sudoed python script that drops privileges to a normal user. If I write
$ sudo python
>>> import os
>>> import pwd, grp
>>> uid = pwd.getpwnam('norby')[2]
>>> gid = grp.getgrnam('norby')[2]
>>> os.setegid(gid)
>>> os.seteuid(uid)
>>> import webbrowser
>>> webbrowser.get('firefox').open('www.google.it')
True
>>> # It returns true but doesn't work
>>> from subprocess import Popen,PIPE
>>> p = Popen('firefox www.google.it', shell=True,stdout=PIPE,stderr=PIPE)
>>> # Doesn't execute the command
>>> You shouldn't really run Iceweasel through sudo WITHOUT the -H option.
Continuing as if you used the -H option.
No protocol specified
Error: cannot open display: :0
I think this is not a python problem, but a firefox/iceweasel/debian configuration problem. Maybe firefox reads only the UID and not the EUID, and doesn't execute the process because the UID equals 0. What do you think?
A:
This could be your environment. Changing the permissions will still leave environment variables like $HOME pointing at the root user's directory, which will be inaccessible. It may be worth trying altering these variables by changing os.environ before launching the browser. There may also be other variables worth checking.
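A rough sketch of that idea; the exact set of variables Firefox needs is an assumption on my part:
import os, pwd

pw = pwd.getpwnam('norby')
os.environ['HOME'] = pw.pw_dir
os.environ['USER'] = pw.pw_name
os.environ['LOGNAME'] = pw.pw_name
# X clients usually also need the user's X authority file
os.environ['XAUTHORITY'] = os.path.join(pw.pw_dir, '.Xauthority')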
|
Python fails to execute firefox webbrowser from a root executed script with privileges drop
|
I can't run firefox from a sudoed python script that drops privileges to a normal user. If I write
$ sudo python
>>> import os
>>> import pwd, grp
>>> uid = pwd.getpwnam('norby')[2]
>>> gid = grp.getgrnam('norby')[2]
>>> os.setegid(gid)
>>> os.seteuid(uid)
>>> import webbrowser
>>> webbrowser.get('firefox').open('www.google.it')
True
>>> # It returns true but doesn't work
>>> from subprocess import Popen,PIPE
>>> p = Popen('firefox www.google.it', shell=True,stdout=PIPE,stderr=PIPE)
>>> # Doesn't execute the command
>>> You shouldn't really run Iceweasel through sudo WITHOUT the -H option.
Continuing as if you used the -H option.
No protocol specified
Error: cannot open display: :0
I think this is not a python problem, but a firefox/iceweasel/debian configuration problem. Maybe firefox reads only the UID and not the EUID, and doesn't execute the process because the UID equals 0. What do you think?
|
[
"This could be your environment. Changing the permissions will still leave environment variables like $HOME pointing at the root user's directory, which will be inaccessible. It may be worth trying altering these variables by changing os.environ before launching the browser. There may also be other variables worth checking.\n"
] |
[
1
] |
[] |
[] |
[
"browser",
"debian",
"python",
"uid"
] |
stackoverflow_0001139835_browser_debian_python_uid.txt
|
Q:
Using Python to Automate Creation/Manipulation of Excel Spreadsheets
I have some data in CSV format that I want to pull into an Excel spreadsheet and then create some standard set of graphs for. Since the data is originally generated in a Python app, I was hoping to simply extend the app so that it could do all the post processing and I wouldn't have to do it by hand. Is there an easy interface with Python to work with and manipulate Excel spreadsheets? Any good samples of doing this? Is this Windows only (I'm primarily working on a Mac and have Excel, but could do this on Windows if necessary).
A:
xlutils (and the included packages xlrd and xlwt) should allow your Python program to handily do any creation, reading and manipulation of Excel files you might want!
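As a hedged sketch of that route (file names here are placeholders), pulling CSV rows into a worksheet with xlwt might look like:
import csv
import xlwt

wb = xlwt.Workbook()
ws = wb.add_sheet('data')
for r, row in enumerate(csv.reader(open('input.csv'))):
    for c, value in enumerate(row):
        ws.write(r, c, value)
wb.save('output.xls')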
A:
On Windows you could use the pywin32 package to create an Excel COM Object and then manipulate it from a script. You need to have Excel installed on that machine though. I haven't done this myself so I can't give you any details, but I've seen this working so can at least confirm that it's possible. No idea about OS X, unfortunately.
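An untested sketch of the COM route (Windows only; the path is a placeholder):
import win32com.client

xl = win32com.client.Dispatch('Excel.Application')
wb = xl.Workbooks.Add()
wb.Worksheets(1).Cells(1, 1).Value = 'hello'
wb.SaveAs(r'C:\output.xls')
xl.Quit()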
|
Using Python to Automate Creation/Manipulation of Excel Spreadsheets
|
I have some data in CSV format that I want to pull into an Excel spreadsheet and then create some standard set of graphs for. Since the data is originally generated in a Python app, I was hoping to simply extend the app so that it could do all the post processing and I wouldn't have to do it by hand. Is there an easy interface with Python to work with and manipulate Excel spreadsheets? Any good samples of doing this? Is this Windows only (I'm primarily working on a Mac and have Excel, but could do this on Windows if necessary).
|
[
"xlutils (and the included packages xlrd and xlwt) should allow your Python program to handily do any creation, reading and manipulation of Excel files you might want!\n",
"On Windows you could use the pywin32 package to create an Excel COM Object and then manipulate it from a script. You need to have an installed Excel on that machine though. I haven't done this myself so I can't give you and details but I've seen this working so can at least confirm that it's possible. No idea about OS X, unfortunately.\n"
] |
[
7,
1
] |
[] |
[] |
[
"excel",
"python"
] |
stackoverflow_0001140311_excel_python.txt
|
Q:
How to produce the i-th combination/permutation without iterating
Given any iterable, for example: "ABCDEF"
Treating it almost like a numeral system as such:
A
B
C
D
E
F
AA
AB
AC
AD
AE
AF
BA
BB
BC
....
FF
AAA
AAB
....
How would I go about finding the ith member in this list? Efficiently, not by counting up through all of them. I want to find the billionth (for example) member in this list. I'm trying to do this in python and I am using 2.4 (not by choice) which might be relevant because I do not have access to itertools.
Nice, but not required: Could the solution be generalized for pseudo-"mixed radix" system?
--- RESULTS ---
# ------ paul -----
def f0(x, alph='ABCDE'):
result = ''
ct = len(alph)
while x>=0:
result += alph[x%ct]
x = x/ct - 1
return result[::-1]
# ----- Glenn Maynard -----
import math
def idx_to_length_and_value(n, length):
chars = 1
while True:
cnt = pow(length, chars)
if cnt > n:
return chars, n
chars += 1
n -= cnt
def conv_base(chars, n, values):
ret = []
for i in range(0, chars):
c = values[n % len(values)]
ret.append(c)
n /= len(values)
return reversed(ret)
def f1(i, values = "ABCDEF"):
chars, n = idx_to_length_and_value(i, len(values))
return "".join(conv_base(chars, n, values))
# -------- Laurence Gonsalves ------
def f2(i, seq):
seq = tuple(seq)
n = len(seq)
max = n # number of perms with 'digits' digits
digits = 1
last_max = 0
while i >= max:
last_max = max
max = n * (max + 1)
digits += 1
result = ''
i -= last_max
while digits:
digits -= 1
result = seq[i % n] + result
i //= n
return result
# -------- yairchu -------
def f3(x, alphabet = 'ABCDEF'):
x += 1 # Make us skip "" as a valid word
group_size = 1
num_letters = 0
while 1: #for num_letters in itertools.count():
if x < group_size:
break
x -= group_size
group_size *= len(alphabet)
num_letters +=1
letters = []
for i in range(num_letters):
x, m = divmod(x, len(alphabet))
letters.append(alphabet[m])
return ''.join(reversed(letters))
# ----- testing ----
import time
import random
tries = [random.randint(1,1000000000000) for i in range(10000)]
numbs = 'ABCDEF'
time0 = time.time()
s0 = [f0(i, numbs) for i in tries]
print 's0 paul',time.time()-time0, 'sec'
time0 = time.time()
s1 = [f1(i, numbs) for i in tries]
print 's1 Glenn Maynard',time.time()-time0, 'sec'
time0 = time.time()
s2 = [f2(i, numbs) for i in tries]
print 's2 Laurence Gonsalves',time.time()-time0, 'sec'
time0 = time.time()
s3 = [f3(i,numbs) for i in tries]
print 's3 yairchu',time.time()-time0, 'sec'
times:
s0 paul 0.470999956131 sec
s1 Glenn Maynard 0.472999811172 sec
s2 Laurence Gonsalves 0.259000062943 sec
s3 yairchu 0.325000047684 sec
>>> s0==s1==s2==s3
True
A:
Third time's the charm:
def perm(i, seq):
seq = tuple(seq)
n = len(seq)
max = n # number of perms with 'digits' digits
digits = 1
last_max = 0
while i >= max:
last_max = max
max = n * (max + 1)
digits += 1
result = ''
i -= last_max
while digits:
digits -= 1
result = seq[i % n] + result
i //= n
return result
A:
Multi-radix solution at the bottom.
import math
def idx_to_length_and_value(n, length):
chars = 1
while True:
cnt = pow(length, chars)
if cnt > n:
return chars, n
chars += 1
n -= cnt
def conv_base(chars, n, values):
ret = []
for i in range(0, chars):
c = values[n % len(values)]
ret.append(c)
n /= len(values)
return reversed(ret)
values = "ABCDEF"
for i in range(0, 100):
chars, n = idx_to_length_and_value(i, len(values))
print "".join(conv_base(chars, n, values))
import math
def get_max_value_for_digits(digits_list):
max_vals = []
for val in digits_list:
val = len(val)
if max_vals:
val *= max_vals[-1]
max_vals.append(val)
return max_vals
def idx_to_length_and_value(n, digits_list):
chars = 1
max_vals = get_max_value_for_digits(digits_list)
while True:
if chars-1 >= len(max_vals):
raise OverflowError, "number not representable"
max_val = max_vals[chars-1]
if n < max_val:
return chars, n
chars += 1
n -= max_val
def conv_base(chars, n, digits_list):
ret = []
for i in range(chars-1, -1, -1):
digits = digits_list[i]
radix = len(digits)
c = digits[n % len(digits)]
ret.append(c)
n /= radix
return reversed(ret)
digits_list = ["ABCDEF", "ABC", "AB"]
for i in range(0, 120):
chars, n = idx_to_length_and_value(i, digits_list)
print "".join(conv_base(chars, n, digits_list))
A:
What you're doing is close to a conversion from base 10 (your number) to base 6, with ABCDEF being your digits. The only difference is that "A" and "AA" are distinct values here, which would be wrong if you considered "A" the zero digit.
If you add the next greater power of six to your number, and then do a base conversion to base 6 using these digits, and finally strip the first digit (which should be a "B", i.e. a "1"), you've got the result.
I just want to post an idea here, not an implementation, because the question smells a lot like homework to me (I do give the benefit of the doubt; it's just my feeling).
A:
First compute the length by summing up powers of six until you exceed your index (or better use the formula for the geometric series).
Subtract the sum of smaller powers from the index.
Compute the representation to base 6, fill leading zeros and map 0 -> A, ..., 5 -> F.
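A direct transcription of these steps into Python (my own sketch; the function name is just a label and I haven't benchmarked it against the versions above):
def f_starblue(i, alph='ABCDEF'):
    n = len(alph)
    length, count = 1, n
    while i >= count:            # sum powers of n until the index fits
        i -= count
        count *= n
        length += 1
    digits = []
    for _ in range(length):      # base-n digits, leading "zeros" included
        i, m = divmod(i, n)
        digits.append(alph[m])
    return ''.join(reversed(digits))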
A:
This works (and is what I finally settled on), and I thought it was worth posting because it is tidy. However, it is slower than most answers. Can I perform % and / in the same operation?
def f0(x, alph='ABCDE'):
result = ''
ct = len(alph)
while x>=0:
result += alph[x%ct]
x = x/ct - 1
return result[::-1]
A:
alphabet = 'ABCDEF'
def idx_to_excel_column_name(x):
x += 1 # Make us skip "" as a valid word
group_size = 1
for num_letters in itertools.count():
if x < group_size:
break
x -= group_size
group_size *= len(alphabet)
letters = []
for i in range(num_letters):
x, m = divmod(x, len(alphabet))
letters.append(alphabet[m])
return ''.join(reversed(letters))
def excel_column_name_to_idx(name):
q = len(alphabet)
x = 0
for letter in name:
x *= q
x += alphabet.index(letter)
return x+q**len(name)//(q-1)-1
A:
Since we are converting from a number in base 10 to a number in base 7, whilst avoiding all "0" in the output, we will have to adjust the original number, so we skip by one every time the result would contain a "0".
1 => A, or 1 in base [0ABCDEF]
7 => AA, or 8 in base [0ABCDEF]
13 => BA, or 15 in base [0ABCDEF]
42 => FF, or 48 in base [0ABCDEF]
43 =>AAA, or 50 in base [0ABCDEF]
Here's some Perl code that shows what I'm trying to explain
(sorry, didn't see this is a Python request)
use strict;
use warnings;
my @Symbols=qw/0 A B C D E F/;
my $BaseSize=@Symbols ;
for my $NR ( 1 .. 45) {
printf ("Convert %3i => %s\n",$NR ,convert($NR));
}
sub convert {
my ($nr,$res)=@_;
return $res unless $nr>0;
$res="" unless defined($res);
#Adjust to skip '0'
$nr=$nr + int(($nr-1)/($BaseSize-1));
return convert(int($nr/$BaseSize),$Symbols[($nr % ($BaseSize))] . $res);
}
A:
In perl you'd just convert your input i from base(10) to base(length of "ABCDEF"), then do a tr/012345/ABCDEF/ which is the same as y/0-5/A-F/. Surely Python has a similar feature set.
Oh, as pointed out by yairchu, the combinations are a tad different, because if A represented 0 then there would be no combinations with a leading A (though he said it a bit differently). It seems I thought the problem more trivial than it is. You cannot just transliterate numbers between different bases, because numbers containing the equivalent of 0 would be skipped in the sequence.
So what I suggested is actually only the last step of what starblue suggested, which is essentially what Laurence Gonsalves implemented ftw. Oh, and there is no transliteration (tr// or y//) operation in Python, what a shame.
|
How to produce the i-th combination/permutation without iterating
|
Given any iterable, for example: "ABCDEF"
Treating it almost like a numeral system as such:
A
B
C
D
E
F
AA
AB
AC
AD
AE
AF
BA
BB
BC
....
FF
AAA
AAB
....
How would I go about finding the ith member in this list? Efficiently, not by counting up through all of them. I want to find the billionth (for example) member in this list. I'm trying to do this in python and I am using 2.4 (not by choice) which might be relevant because I do not have access to itertools.
Nice, but not required: Could the solution be generalized for pseudo-"mixed radix" system?
--- RESULTS ---
# ------ paul -----
def f0(x, alph='ABCDE'):
result = ''
ct = len(alph)
while x>=0:
result += alph[x%ct]
x = x/ct - 1
return result[::-1]
# ----- Glenn Maynard -----
import math
def idx_to_length_and_value(n, length):
chars = 1
while True:
cnt = pow(length, chars)
if cnt > n:
return chars, n
chars += 1
n -= cnt
def conv_base(chars, n, values):
ret = []
for i in range(0, chars):
c = values[n % len(values)]
ret.append(c)
n /= len(values)
return reversed(ret)
def f1(i, values = "ABCDEF"):
chars, n = idx_to_length_and_value(i, len(values))
return "".join(conv_base(chars, n, values))
# -------- Laurence Gonsalves ------
def f2(i, seq):
seq = tuple(seq)
n = len(seq)
max = n # number of perms with 'digits' digits
digits = 1
last_max = 0
while i >= max:
last_max = max
max = n * (max + 1)
digits += 1
result = ''
i -= last_max
while digits:
digits -= 1
result = seq[i % n] + result
i //= n
return result
# -------- yairchu -------
def f3(x, alphabet = 'ABCDEF'):
x += 1 # Make us skip "" as a valid word
group_size = 1
num_letters = 0
while 1: #for num_letters in itertools.count():
if x < group_size:
break
x -= group_size
group_size *= len(alphabet)
num_letters +=1
letters = []
for i in range(num_letters):
x, m = divmod(x, len(alphabet))
letters.append(alphabet[m])
return ''.join(reversed(letters))
# ----- testing ----
import time
import random
tries = [random.randint(1,1000000000000) for i in range(10000)]
numbs = 'ABCDEF'
time0 = time.time()
s0 = [f0(i, numbs) for i in tries]
print 's0 paul',time.time()-time0, 'sec'
time0 = time.time()
s1 = [f1(i, numbs) for i in tries]
print 's1 Glenn Maynard',time.time()-time0, 'sec'
time0 = time.time()
s2 = [f2(i, numbs) for i in tries]
print 's2 Laurence Gonsalves',time.time()-time0, 'sec'
time0 = time.time()
s3 = [f3(i,numbs) for i in tries]
print 's3 yairchu',time.time()-time0, 'sec'
times:
s0 paul 0.470999956131 sec
s1 Glenn Maynard 0.472999811172 sec
s2 Laurence Gonsalves 0.259000062943 sec
s3 yairchu 0.325000047684 sec
>>> s0==s1==s2==s3
True
|
[
"Third time's the charm:\ndef perm(i, seq):\n seq = tuple(seq)\n n = len(seq)\n max = n # number of perms with 'digits' digits\n digits = 1\n last_max = 0\n while i >= max:\n last_max = max\n max = n * (max + 1)\n digits += 1\n result = ''\n i -= last_max\n while digits:\n digits -= 1\n result = seq[i % n] + result\n i //= n\n return result\n\n",
"Multi-radix solution at the bottom.\nimport math\ndef idx_to_length_and_value(n, length):\n chars = 1\n while True:\n cnt = pow(length, chars)\n if cnt > n:\n return chars, n\n\n chars += 1\n n -= cnt\n\ndef conv_base(chars, n, values):\n ret = []\n for i in range(0, chars):\n c = values[n % len(values)]\n ret.append(c)\n n /= len(values)\n\n return reversed(ret)\n\nvalues = \"ABCDEF\"\nfor i in range(0, 100):\n chars, n = idx_to_length_and_value(i, len(values))\n print \"\".join(conv_base(chars, n, values))\n\n\nimport math\ndef get_max_value_for_digits(digits_list):\n max_vals = []\n\n for val in digits_list:\n val = len(val)\n if max_vals:\n val *= max_vals[-1]\n max_vals.append(val)\n return max_vals\n\ndef idx_to_length_and_value(n, digits_list):\n chars = 1\n max_vals = get_max_value_for_digits(digits_list)\n\n while True:\n if chars-1 >= len(max_vals):\n raise OverflowError, \"number not representable\"\n max_val = max_vals[chars-1]\n if n < max_val:\n return chars, n\n\n chars += 1\n n -= max_val\n\ndef conv_base(chars, n, digits_list):\n ret = []\n for i in range(chars-1, -1, -1):\n digits = digits_list[i]\n radix = len(digits)\n\n c = digits[n % len(digits)]\n ret.append(c)\n n /= radix\n\n return reversed(ret)\n\ndigits_list = [\"ABCDEF\", \"ABC\", \"AB\"]\nfor i in range(0, 120):\n chars, n = idx_to_length_and_value(i, digits_list)\n print \"\".join(conv_base(chars, n, digits_list))\n\n",
"What you're doing is close to a conversion from base 10 (your number) to base 6, with ABCDEF being your digits. The only difference is \"AA\" and \"A\" are different, which is wrong if you consider \"A\" the zero-digit.\nIf you add the next greater power of six to your number, and then do a base conversion to base 6 using these digits, and finally strip the first digit (which should be a \"B\", i.e. a \"1\"), you've got the result.\nI just want to post an idea here, not an implementation, because the question smells a lot like homework to me (I do give the benefit of the doubt; it's just my feeling).\n",
"First compute the length by summing up powers of six until you exceed your index (or better use the formula for the geometric series).\nSubtract the sum of smaller powers from the index.\nCompute the representation to base 6, fill leading zeros and map 0 -> A, ..., 5 -> F.\n",
"This works (and is what i finally settled on), and thought it was worth posting because it is tidy. However it is slower than most answers. Can i perform % and / in the same operation?\ndef f0(x, alph='ABCDE'):\n result = ''\n ct = len(alph)\n while x>=0:\n result += alph[x%ct]\n x /= ct-1\n return result[::-1]\n\n",
"alphabet = 'ABCDEF'\n\ndef idx_to_excel_column_name(x):\n x += 1 # Make us skip \"\" as a valid word\n group_size = 1\n for num_letters in itertools.count():\n if x < group_size:\n break\n x -= group_size\n group_size *= len(alphabet)\n letters = []\n for i in range(num_letters):\n x, m = divmod(x, len(alphabet))\n letters.append(alphabet[m])\n return ''.join(reversed(letters))\n\ndef excel_column_name_to_idx(name):\n q = len(alphabet)\n x = 0\n for letter in name:\n x *= q\n x += alphabet.index(letter)\n return x+q**len(name)//(q-1)-1\n\n",
"Since we are converting from a number Base(10) to a number Base(7), whilst avoiding all \"0\" in the output, we will have to adjust the orginal number, so we do skip by one every time the result would contain a \"0\".\n 1 => A, or 1 in base [0ABCDEF]\n 7 => AA, or 8 in base [0ABCDEF]\n13 => BA, or 15 in base [0ABCDEF]\n42 => FF, or 48 in base [0ABCDEF]\n43 =>AAA, or 50 in base [0ABCDEF]\n\nHere's some Perl code that shows what I'm trying to explain\n(sorry, didn't see this is a Python request)\nuse strict;\nuse warnings;\nmy @Symbols=qw/0 A B C D E F/;\nmy $BaseSize=@Symbols ;\nfor my $NR ( 1 .. 45) {\n printf (\"Convert %3i => %s\\n\",$NR ,convert($NR));\n}\n\nsub convert {\n my ($nr,$res)=@_;\n return $res unless $nr>0;\n $res=\"\" unless defined($res);\n #Adjust to skip '0'\n $nr=$nr + int(($nr-1)/($BaseSize-1));\n return convert(int($nr/$BaseSize),$Symbols[($nr % ($BaseSize))] . $res);\n}\n\n",
"In perl you'd just convert your input i from base(10) to base(length of \"ABCDEF\"), then do a tr/012345/ABCDEF/ which is the same as y/0-5/A-F/. Surely Python has a similar feature set.\nOh, as pointed out by Yarichu the combinations are a tad different because if A represented 0, then there would be no combinations with leading A (though he said it a bit different). It seems I thought the problem to be more trivial than it is. You cannot just transliterate different base numbers, because numbers containing the equivalent of 0 would be \nskipped in the sequence.\nSo what I suggested is actually only the last step of what starblue suggested, which is essentially what Laurence Gonsalves implemented ftw. Oh, and there is no transliteration (tr// or y//) operation in Python, what a shame.\n"
] |
[
5,
5,
3,
2,
2,
1,
1,
0
] |
[] |
[] |
[
"combinatorics",
"python"
] |
stackoverflow_0001129704_combinatorics_python.txt
|
Q:
Jython or JRuby?
It's a high level conceptual question. I have two separate code bases that serve the same purpose, one built in Python and the other in Ruby. I need to develop something that will run on JVM. So I have two choices: convert the Python code to Jython or convert the Ruby to JRuby. Since I don't know any of them, I was wondering if anyone can give me some guidance. Like which one runs faster, or more importantly which one has tools available for easy code migration(.pyc to .jar files)?
A:
In both cases, most of the code should Just Work™. I don't know of a really compelling reason to choose Jython over JRuby or vice versa if you'll be learning either from scratch. Python places a heavy emphasis on readability and not using "magic", but Ruby tends to give you a little more rope to do fancy things, e.g., define your own DSL. The main difference is in the community, and largely revolves around the different focus mentioned above.
A:
If you are going to be investing time and effort into either platform you should check how active the development is on both platforms. Subscribe to the mailing lists and newsgroups to get an idea of the community, check the source control system for both projects and get a feeling for how active the development is.
I am more familiar with Python than Ruby. The Jython project, after a period of slow movement, has really picked up momentum: a Python 2.5 compatible version was released in June. This is a major step forward, as Python 2.5 introduces some very useful language enhancements: http://docs.python.org/whatsnew/2.5.html
A:
The compatibility in either case is at the source-code level; with necessary changes where the Python or Ruby code invokes packages that involve native code (especially, standard Python packages like ctypes are not present in Jython).
A:
Performance may be the deciding factor: in this benchmark (which, like all benchmarks, should be taken with a grain of salt), JRuby ran somewhat faster than native Ruby, while Jython was outperformed by CPython by a factor of 3.
A:
Anything you can do in one, you can do in the other.
Learn enough of both to realise which one appeals to your coding sensibilities. There is no right or wrong answer here.
|
Jython or JRuby?
|
It's a high level conceptual question. I have two separate code bases that serve the same purpose, one built in Python and the other in Ruby. I need to develop something that will run on JVM. So I have two choices: convert the Python code to Jython or convert the Ruby to JRuby. Since I don't know any of them, I was wondering if anyone can give me some guidance. Like which one runs faster, or more importantly which one has tools available for easy code migration(.pyc to .jar files)?
|
[
"In both cases, most of the code should Just Work™. I don't know of a really compelling reason to choose Jython over JRuby or vice versa if you'll be learning either from scratch. Python places a heavy emphasis on readability and not using \"magic\", but Ruby tends to give you a little more rope to do fancy things, e.g., define your own DSL. The main difference is in the community, and largely revolves around the different focus mentioned above.\n",
"If you are going to be investing time and effort into either platform you should check how active the development is on both platforms. Subscribe to the mailing lists and newsgroups to get an idea of the community, check the source control system for both projects and get a feeling for how active the development is.\nI am more familiar with Python than Ruby. The Jython project after a period slow movement has really picked up momentum, a Python 2.5 compatible version was released in June. This is a major step forward as Python 2.5 introduces some very useful language enhancements: http://docs.python.org/whatsnew/2.5.html\n",
"The compatibility in either case is at the source-code level; with necessary changes where the Python or Ruby code invokes packages that involve native code (especially, standard Python packages like ctypes are not present in Jython).\n",
"Performance may be the deciding factor: in this benchmark (which, like all benchmarks, should be taken with a grain of salt), JRuby ran somewaht faster than native Ruby, while Jython was outperformed by CPython by a factor of 3.\n",
"Anything you can do in one, you can do in the other. \nLearn enough of both to realise which one appeals to your coding sensibilities. There is no right or wrong answer here.\n"
] |
[
5,
2,
1,
1,
1
] |
[] |
[] |
[
"jruby",
"jython",
"python",
"ruby"
] |
stackoverflow_0001130697_jruby_jython_python_ruby.txt
|
Q:
Add local variable to running generator
Lately, I tried to set local variables from outside of a running generator. The generator code also should access these variables.
One problem was that, when accessing the variables, the interpreter seemed to assume they must be globals, since they were not set in the local scope. But I didn't want to change the global variables, and I also did not want to copy the whole global scope to make the variables artificially local.
Another problem was that the dictionaries for locals (and globals?) seemed to be read-only when accessed from outside.
Is there any legal (or at least partially legal) way to introduce locals into a running generator instance?
Edit for clarification:
I don't mean the "send" function. This is of course a neat function, but since I want to set multiple variables with differing names, it is not conveniant for my purposes.
A:
What you may be looking for is the send method, which allows a value to be sent into a generator. The reference provides an example:
>>> def echo(value=None):
... print "Execution starts when 'next()' is called for the first time."
... try:
... while True:
... try:
... value = (yield value)
... except Exception, e:
... value = e
... finally:
... print "Don't forget to clean up when 'close()' is called."
...
>>> generator = echo(1)
>>> print generator.next()
Execution starts when 'next()' is called for the first time.
1
>>> print generator.next()
None
>>> print generator.send(2)
2
>>> generator.throw(TypeError, "spam")
TypeError('spam',)
>>> generator.close()
Don't forget to clean up when 'close()' is called.
Let me give an example of my own. (Watch out! The code above is Python 2.6, but below I'll write Python 3; py3k ref):
>>> def amplify(iter, amp=1):
... for i in iter:
... reply = (yield i * amp)
... amp = reply if reply != None else amp
...
>>> it = amplify(range(10))
>>> next(it)
0
>>> next(it)
1
>>> it.send(3) # 2 * 3 = 6
6
>>> it.send(8) # 3 * 8 = 24
24
>>> next(it) # 4 * 8 = 32
32
Of course, if you really want to, you can also do this without send, e.g. by encapsulating the generator inside a class (but it's not nearly as elegant!):
>>> class MyIter:
... def __init__(self, iter, amp=1):
... self.iter = iter
... self.amp = amp
... def __iter__(self):
... for i in self.iter:
... yield i * self.amp
... def __call__(self):
... return iter(self)
...
>>> iterable = MyIter(range(10))
>>> iterator = iterable()
>>> next(iterator)
0
>>> next(iterator)
1
>>> iterable.amp = 3
>>> next(iterator)
6
>>> iterable.amp = 8
>>> next(iterator)
24
>>> next(iterator)
32
Update: Alright, now that you have updated your question, let me have another stab at the problem. Perhaps this is what you mean?
>>> def amplify(iter, loc={}):
... for i in iter:
... yield i * loc.get('amp', 1)
...
>>> it = amplify(range(10), locals())
>>> next(it)
0
>>> next(it)
1
>>> amp = 3
>>> next(it)
6
>>> amp = 8
>>> next(it)
24
>>> next(it)
32
Note that locals() should be treated as read-only and is scope dependent. As you can see, you'll need to explicitly pass locals() to the generator. I see no way around this...
A:
locals() always returns a read-only dict. You could create your own "locals" dictionary:
def gen_func():
lcls = {}
for i in range(5):
yield (i, lcls)
print lcls
for (val, lcls) in gen_func():
lcls[val] = val
Any other mutable structure will also work.
A:
If you want to have a coroutine or a generator that also acts as a sink, you should use the send method, as in Stephan202's answer. If you want to change the runtime behavior by setting various attributes on the generator, there's an old recipe by Raymond Hettinger:
def foo_iter(self):
self.v = "foo"
while True:
yield self.v
enableAttributes(foo_iter)
it = foo_iter()
print it.next()
it.v = "boo"
print it.next()
This will print:
foo
boo
It shouldn't be too difficult to convert the enableAttributes function into a proper decorator.
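A rough, untested sketch of such a decorator (the name and mechanics here are my assumptions, not Hettinger's actual recipe):
def enable_attributes(genfunc):
    class Wrapper(object):
        def __call__(self, *args, **kwargs):
            # the wrapper instance plays the role of 'self' in the generator
            self._gen = genfunc(self, *args, **kwargs)
            return self
        def __iter__(self):
            return self
        def next(self):          # Python 2 iterator protocol
            return self._gen.next()
    return Wrapper()

it = enable_attributes(foo_iter)()
print it.next()   # foo
it.v = "boo"
print it.next()   # boo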
|
Add local variable to running generator
|
Lately, I tried to set local variables from outside of a running generator. The generator code also should access these variables.
One problem was that, when accessing the variables, the interpreter seemed to assume they must be globals, since they were not set in the local scope. But I didn't want to change the global variables, and I also did not want to copy the whole global scope to make the variables artificially local.
Another problem was that the dictionaries for locals (and globals?) seemed to be read-only when accessed from outside.
Is there any legal (or at least partially legal) way to introduce locals into a running generator instance?
Edit for clarification:
I don't mean the "send" function. This is of course a neat function, but since I want to set multiple variables with differing names, it is not conveniant for my purposes.
|
[
"What you may be looking for, is the send method, which allows a value to be sent into a generator. The reference provides an example:\n>>> def echo(value=None):\n... print \"Execution starts when 'next()' is called for the first time.\"\n... try:\n... while True:\n... try:\n... value = (yield value)\n... except Exception, e:\n... value = e\n... finally:\n... print \"Don't forget to clean up when 'close()' is called.\"\n...\n>>> generator = echo(1)\n>>> print generator.next()\nExecution starts when 'next()' is called for the first time.\n1\n>>> print generator.next()\nNone\n>>> print generator.send(2)\n2\n>>> generator.throw(TypeError, \"spam\")\nTypeError('spam',)\n>>> generator.close()\nDon't forget to clean up when 'close()' is called.\n\n\nLet me give an example of my own. (Watch out! The code above is Python 2.6, but below I'll write Python 3; py3k ref):\n>>> def amplify(iter, amp=1):\n... for i in iter:\n... reply = (yield i * amp)\n... amp = reply if reply != None else amp \n... \n>>> it = amplify(range(10))\n>>> next(it)\n0\n>>> next(it)\n1\n>>> it.send(3) # 2 * 3 = 6\n6\n>>> it.send(8) # 3 * 8 = 24\n24\n>>> next(it) # 4 * 8 = 32\n32\n\nOf course, if your really want to, you can also do this without send. E.g. by encapsulating the generator inside a class (but it's not nearly as elegant!):\n>>> class MyIter:\n... def __init__(self, iter, amp=1):\n... self.iter = iter\n... self.amp = amp\n... def __iter__(self):\n... for i in self.iter:\n... yield i * self.amp\n... def __call__(self):\n... return iter(self)\n... \n>>> iterable = MyIter(range(10))\n>>> iterator = iterable()\n>>> next(iterator)\n0\n>>> next(iterator)\n1\n>>> iterable.amp = 3\n>>> next(iterator)\n6\n>>> iterable.amp = 8\n>>> next(iterator)\n24\n>>> next(iterator)\n32\n\n\nUpdate: Alright, now that you have updated your question, let me have another stab at the problem. Perhaps this is what you mean?\n>>> def amplify(iter, loc={}):\n... for i in iter:\n... yield i * loc.get('amp', 1)\n... \n>>> it = amplify(range(10), locals())\n>>> next(it)\n0\n>>> next(it)\n1\n>>> amp = 3\n>>> next(it)\n6\n>>> amp = 8\n>>> next(it)\n24\n>>> next(it)\n32\n\nNote that locals() should be treated as read-only and is scope dependent. As you can see, you'll need to explicitly pass locals() to the generator. I see no way around this...\n",
"locals() always returns a read-only dict. You could create your own \"locals\" dictionary:\ndef gen_func():\n lcls = {}\n for i in range(5):\n yield (i, lcls)\n print lcls\n\n\nfor (val, lcls) in gen_func():\n lcls[val] = val\n\nAny other mutable structure will also work.\n",
"If you want to have a coroutine or a generator that also acts as a sink, you should use the send method, as in Stephan202's answers. If you want to change the runtime behavior by settings various attributes in the generator, there's an old recipe by Raymond Hettinger:\ndef foo_iter(self):\n self.v = \"foo\"\n while True:\n yield self.v\n\nenableAttributes(foo_iter)\nit = foo_iter()\nprint it.next()\nit.v = \"boo\"\nprint it.next()\n\nThis will print:\nfoo\nboo\n\nIt shouldn't be too difficult to convert the enableAttributes function into a proper decorator. \n"
] |
[
5,
1,
1
] |
[] |
[] |
[
"generator",
"local",
"python",
"variables"
] |
stackoverflow_0001140665_generator_local_python_variables.txt
|
Q:
Why was the 'thread' module renamed to '_thread' in Python 3.x?
Python 3.x renamed the low-level module 'thread' to '_thread' -- I don't see why in the documentation. Does anyone know?
A:
It's been quite a long time since the low-level thread module was informally deprecated, with all users heartily encouraged to use the higher-level threading module instead; now with the ability to introduce backwards incompatibilities in Python 3, we've made that deprecation rather more than just "informal", that's all!-)
A:
It looks like the thread module became obsolete in 3.x in favor of the threading module. See PEP 3108.
A:
I think the old thread module is deprecated in favour of the higher level threading module.
|
Why was the 'thread' module renamed to '_thread' in Python 3.x?
|
Python 3.x renamed the low-level module 'thread' to '_thread' -- I don't see why in the documentation. Does anyone know?
|
[
"It's been quite a long time since the low-level thread module was informally deprecated, with all users heartily encouraged to use the higher-level threading module instead; now with the ability to introduce backwards incompatibilities in Python 3, we've made that deprecation rather more than just \"informal\", that's all!-)\n",
"It looks like the thread module became obsolete in 3.x in favor of the threading module. See PEP 3108.\n",
"I think the old thread module is deprecated in favour of the higher level threading module.\n"
] |
[
10,
9,
7
] |
[] |
[] |
[
"multithreading",
"python",
"python_3.x"
] |
stackoverflow_0001141047_multithreading_python_python_3.x.txt
|
Q:
Count lines of code in a Django Project
Is there an easy way to count the lines of code you have written for your django project?
Edit: The shell stuff is cool, but how about on Windows?
A:
Yep:
shell]$ find /my/source -name "*.py" -type f -exec cat {} + | wc -l
Job's a good 'un.
A:
You might want to look at CLOC -- it's not Django specific but it supports Python. It can show you lines counts for actual code, comments, blank lines, etc.
A:
Starting with Aiden's answer, and with a bit of help in a question of my own, I ended up with this god-awful mess:
# find the combined LOC of files
# usage: loc Documents/fourU py html
function loc {
#find $1 -name $2 -type f -exec cat {} + | wc -l
namelist=''
let i=2
while [ $i -le $# ]; do
namelist="$namelist -name \"*.$@[$i]\""
if [ $i != $# ]; then
namelist="$namelist -or "
fi
let i=i+1
done
#echo $namelist
#echo "find $1 $namelist" | sh
#echo "find $1 $namelist" | sh | xargs cat
echo "find $1 $namelist" | sh | xargs cat | wc -l
}
which allows you to specify any number of extensions you want to match. As far as I can tell, it outputs the right answer, but... I thought this would be a one-liner, else I wouldn't have started in bash, and it just kinda grew from there.
I'm sure that those more knowledgable than I can improve upon this, so I'm going to put it in community wiki.
A:
Check out the wc command on unix.
A:
Get wc command on Windows using GnuWin32 (http://gnuwin32.sourceforge.net/packages/coreutils.htm)
wc *.py
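A portable alternative (my own sketch, not from the answers above) that behaves the same on Windows and Unix:
import os

def count_loc(root, exts=('.py',)):
    total = 0
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(exts):
                total += sum(1 for line in open(os.path.join(dirpath, name)))
    return total

print count_loc('.')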
|
Count lines of code in a Django Project
|
Is there an easy way to count the lines of code you have written for your django project?
Edit: The shell stuff is cool, but how about on Windows?
|
[
"Yep:\nshell]$ find /my/source -name \"*.py\" -type f -exec cat {} + | wc -l\n\nJob's a good 'un.\n",
"You might want to look at CLOC -- it's not Django specific but it supports Python. It can show you lines counts for actual code, comments, blank lines, etc.\n",
"Starting with Aiden's answer, and with a bit of help in a question of my own, I ended up with this god-awful mess:\n# find the combined LOC of files\n# usage: loc Documents/fourU py html\nfunction loc {\n #find $1 -name $2 -type f -exec cat {} + | wc -l\n namelist=''\n let i=2\n while [ $i -le $# ]; do\n namelist=\"$namelist -name \\\"*.$@[$i]\\\"\"\n if [ $i != $# ]; then\n namelist=\"$namelist -or \"\n fi\n let i=i+1\n done\n #echo $namelist\n #echo \"find $1 $namelist\" | sh\n #echo \"find $1 $namelist\" | sh | xargs cat\n echo \"find $1 $namelist\" | sh | xargs cat | wc -l\n}\n\nwhich allows you to specify any number of extensions you want to match. As far as I can tell, it outputs the right answer, but... I thought this would be a one-liner, else I wouldn't have started in bash, and it just kinda grew from there.\nI'm sure that those more knowledgable than I can improve upon this, so I'm going to put it in community wiki.\n",
"Check out the wc command on unix.\n",
"Get wc command on Windows using GnuWin32 (http://gnuwin32.sourceforge.net/packages/coreutils.htm)\n\nwc *.py\n\n"
] |
[
19,
8,
4,
1,
0
] |
[] |
[] |
[
"django",
"python"
] |
stackoverflow_0001133391_django_python.txt
|
Q:
Traversing a Python object tree
I'm trying to implement dynamic reloading of objects in Python, so that code changes are reflected live.
Modules reloading is working, but I have to recreate every instance of the modules' classes for changes to become effective.
The problem is that the objects' data (the contents of their __dict__) is lost during the process.
So I tried another approach:
def refresh(obj, memo=None):
if memo is None:
memo = {}
d = id(obj)
if d in memo:
return
memo[d] = None
try:
obj.__class__ = getattr(sys.modules[obj.__class__.__module__],
obj.__class__.__name__)
except TypeError:
return
for item in obj.__dict__.itervalues():
if isinstance(item, dict):
for k, v in item.iteritems():
refresh(k, memo)
refresh(v, memo)
elif isinstance(item, (list, tuple)):
for v in item:
refresh(v, memo)
else:
refresh(item, memo)
And surprisingly, it works! After calling refresh() on my objects, the new code becomes effective, without any need to recreate them.
But I'm not sure whether this is the correct way to traverse an object. Is there a better way to traverse an object's components?
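For comparison, here is a sketch of a more generic traversal using gc.get_referents (an idea I have not verified against the class-reassignment trick above):
import gc

def walk(obj, memo=None):
    if memo is None:
        memo = set()
    if id(obj) in memo:
        return
    memo.add(id(obj))
    yield obj
    for child in gc.get_referents(obj):    # containers, __dict__, etc.
        for item in walk(child, memo):
            yield item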
A:
See this recipe in the Python Cookbook (or maybe even better its version in the "printed" one, which I believe you can actually read for free with google book search, or for sure on O'Reilly's "Safari" site using a free 1-week trial subscription -- I did a lot of editing on Hudson's original recipe to get the "printed book" version!).
|
Traversing a Python object tree
|
I'm trying to implement dynamic reloading of objects in Python, so that code changes are reflected live.
Modules reloading is working, but I have to recreate every instance of the modules' classes for changes to become effective.
The problem is that the objects' data (the contents of their __dict__) is lost during the process.
So I tried another approach:
def refresh(obj, memo=None):
if memo is None:
memo = {}
d = id(obj)
if d in memo:
return
memo[d] = None
try:
obj.__class__ = getattr(sys.modules[obj.__class__.__module__],
obj.__class__.__name__)
except TypeError:
return
for item in obj.__dict__.itervalues():
if isinstance(item, dict):
for k, v in item.iteritems():
refresh(k, memo)
refresh(v, memo)
elif isinstance(item, (list, tuple)):
for v in item:
refresh(v, memo)
else:
refresh(item, memo)
And surprisingly, it works! After calling refresh() on my objects, the new code becomes effective, without any need to recreate them.
But I'm not sure whether this is the correct way to traverse an object. Is there a better way to traverse an object's components?
|
[
"See this recipe in the Python Cookbook (or maybe even better its version in the \"printed\" one, which I believe you can actually read for free with google book search, or for sure on O'Reilly's \"Safari\" site using a free 1-week trial subscription -- I did a lot of editing on Hudson's original recipe to get the \"printed book\" version!).\n"
] |
[
1
] |
[] |
[] |
[
"python",
"reload",
"traversal"
] |
stackoverflow_0001141039_python_reload_traversal.txt
|
Q:
How can I get the string result of a python method from the XML-RPC client in Java
I wrote:
Object result = (Object)client.execute("method",params);
in java client.
Actually, the result should be printed in string format, but I can only output the address of the result object. How can I get its contents?
And I have tried String result = (String)client.execute("method",params);
It says java.lang.Object cannot be cast to java.lang.String.
As the server is written in Python, I was wondering how I can retrieve a String from the method.
A:
I'm hesitant to post this because it seems rather obvious - forgive me if you've tried this, but how about:
String result = (String)client.execute("method",params);
A:
so maybe the object returned is not a string... are you sure that you're returning a string in your python application? I seriously doubt it.
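For instance, a minimal sketch of a Python server method that definitely returns a string (names and port are placeholders):
from SimpleXMLRPCServer import SimpleXMLRPCServer

def method():
    return "hello"   # a Python str becomes an XML-RPC <string>

server = SimpleXMLRPCServer(("localhost", 8000))
server.register_function(method, "method")
server.serve_forever()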
|
How can I get the string result of a python method from the XML-RPC client in Java
|
I wrote:
Object result = (Object)client.execute("method",params);
in java client.
Actually, the result should be printed in string format, but I can only output the address of the result object. How can I get its contents?
And I have tried String result = (String)client.execute("method",params);
It says lang.until.Object can not cast to lang.util.String.
As the server is written in Python, I was wondering how I can retrieve a String from the method.
|
[
"I'm hesitant to post this because it seems rather obvious - forgive me if you've tried this, but how about:\nString result = (String)client.execute(\"method\",params);\n\n",
"so maybe the object returned is not a string... are you sure that you're returning a string in your python application? I seriously doubt it.\n"
] |
[
0,
0
] |
[] |
[] |
[
"java",
"python",
"xml_rpc"
] |
stackoverflow_0001140752_java_python_xml_rpc.txt
|
Q:
Google App engine template unicode decoding problem
When trying to render a Django template file in Google App Engine
from google.appengine.ext.webapp import template
templatepath = os.path.join(os.path.dirname(__file__), 'template.html')
self.response.out.write (template.render( templatepath , template_values))
I come across the following error:
<type
'exceptions.UnicodeDecodeError'>:
'ascii' codec can't decode byte 0xe2
in position 17692: ordinal not in
range(128)
args = ('ascii', '<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0
Str...07/a-beautiful-method-to-find-peace-of-mind/
--> ', 17692, 17693, 'ordinal not in range(128)')
encoding = 'ascii'
end = 17693
message = ''
object = '<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0
Str...07/a-beautiful-method-to-find-peace-of-mind/
-->
reason = 'ordinal not in range(128)'
start = 17692
It seems that the underlying django template engine has assumed the "ascii" encoding, which should have been "utf-8".
Anyone who knows what might have caused the trouble and how to solve it?
Thanks.
A:
Well, it turns out the rendered result returned by the template needs to be decoded first:
self.response.out.write (template.render( templatepath , template_values).decode('utf-8') )
A silly mistake, but thanks for everyone's answers anyway. :)
A:
Are you using Django 0.96 or Django 1.0? You can check by looking at your main.py and seeing if it contains the following:
from google.appengine.dist import use_library
use_library('django', '1.0')
If you're using Django 1.0, both FILE_CHARSET and DEFAULT_CHARSET should default to 'utf-8'. If your template is saved under a different encoding, just set FILE_CHARSET to whatever that is.
If you're using Django 0.96, you might want to try directly reading the template from the disk and then manually handling the encoding.
e.g., replace
template.render( templatepath , template_values)
with
Template(unicode(template_fh.read(), 'utf-8')).render(template_values)
A:
Did you check in your text editor that the template is encoded in utf-8?
|
Google App engine template unicode decoding problem
|
When trying to render a Django template file in Google App Engine
from google.appengine.ext.webapp import template
templatepath = os.path.join(os.path.dirname(__file__), 'template.html')
self.response.out.write (template.render( templatepath , template_values))
I come across the following error:
<type
'exceptions.UnicodeDecodeError'>:
'ascii' codec can't decode byte 0xe2
in position 17692: ordinal not in
range(128)
args = ('ascii', '<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0
Str...07/a-beautiful-method-to-find-peace-of-mind/
--> ', 17692, 17693, 'ordinal not in range(128)')
encoding = 'ascii'
end = 17693
message = ''
object = '<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0
Str...07/a-beautiful-method-to-find-peace-of-mind/
-->
reason = 'ordinal not in range(128)'
start = 17692
It seems that the underlying django template engine has assumed the "ascii" encoding, which should have been "utf-8".
Anyone who knows what might have caused the trouble and how to solve it?
Thanks.
|
[
"Well, turns out the rendered results returned by the template needs to be decoded first:\n\nself.response.out.write (template.render( templatepath , template_values).decode('utf-8') )\n\nA silly mistake, but thanks for everyone's answers anyway. :)\n",
"Are you using Django 0.96 or Django 1.0? You can check by looking at your main.py and seeing if it contains the following: \n\nfrom google.appengine.dist import use_library\nuse_library('django', '1.0')\nIf you're using Django 1.0, both FILE_CHARSET and DEFAULT_CHARSET should default to 'utf-8'. If your template is saved under a different encoding, just set FILE_CHARSET to whatever that is.\nIf you're using Django 0.96, you might want to try directly reading the template from the disk and then manually handling the encoding.\ne.g., replace \ntemplate.render( templatepath , template_values) \nwith \nTemplate(unicode(template_fh.read(), 'utf-8')).render(template_values)\n",
"Did you check in your text editor that the template is encoded in utf-8? \n"
] |
[
6,
2,
1
] |
[] |
[] |
[
"django",
"google_app_engine",
"python",
"unicode"
] |
stackoverflow_0001139151_django_google_app_engine_python_unicode.txt
|
Q:
Alternative XML parser for ElementTree to ease UTF-8 woes?
I am parsing some XML with the elementtree.parse() function. It works, except for some utf-8 characters(single byte character above 128). I see that the default parser is XMLTreeBuilder which is based on expat.
Is there an alternative parser that I can use that may be less strict and allow utf-8 characters?
This is the error I'm getting with the default parser:
ExpatError: not well-formed (invalid token): line 311, column 190
The character causing this is a single byte x92 (in hex). I'm not certain this is even a valid utf-8 character. But it would be nice to handle it because most text editors display this as: í
EDIT: The context of the character is: canít , where I assume it is supposed to be a fancy apostrophe, but in the hex editor, that same sequence is: 63 61 6E 92 74
A:
I'll start from the question: "Is there an alternative parser that I can use that may be less strict and allow utf-8 characters?"
All XML parsers will accept data encoded in UTF-8. In fact, UTF-8 is the default encoding.
An XML document may start with a declaration like this:
`<?xml version="1.0" encoding="UTF-8"?>`
or like this:
<?xml version="1.0"?>
or not have a declaration at all ... in each case the parser will decode the document using UTF-8.
However your data is NOT encoded in UTF-8 ... it's probably Windows-1252 aka cp1252.
If the encoding is not UTF-8, then either the creator should include a declaration (or the recipient can prepend one) or the recipient can transcode the data to UTF-8. The following showcases what works and what doesn't:
>>> import xml.etree.ElementTree as ET
>>> from StringIO import StringIO as sio
>>> raw_text = '<root>can\x92t</root>' # text encoded in cp1252, no XML declaration
>>> t = ET.parse(sio(raw_text))
[tracebacks omitted]
xml.parsers.expat.ExpatError: not well-formed (invalid token): line 1, column 9
# parser is expecting UTF-8
>>> t = ET.parse(sio('<?xml version="1.0" encoding="UTF-8"?>' + raw_text))
xml.parsers.expat.ExpatError: not well-formed (invalid token): line 1, column 47
# parser is expecting UTF-8 again
>>> t = ET.parse(sio('<?xml version="1.0" encoding="cp1252"?>' + raw_text))
>>> t.getroot().text
u'can\u2019t'
# parser was told to expect cp1252; it works
>>> import unicodedata
>>> unicodedata.name(u'\u2019')
'RIGHT SINGLE QUOTATION MARK'
# not quite an apostrophe, but better than an exception
>>> fixed_text = raw_text.decode('cp1252').encode('utf8')
# alternative: we transcode the data to UTF-8
>>> t = ET.parse(sio(fixed_text))
>>> t.getroot().text
u'can\u2019t'
# UTF-8 is the default; no declaration needed
A:
It looks like you have CP1252 text. If so, it should be specified at the top of the file, eg.:
<?xml version="1.0" encoding="CP1252" ?>
This does work with ElementTree.
If you're creating these files yourself, don't write them in this encoding. Save them as UTF-8 and do your part to help kill obsolete text encodings.
If you're receiving CP1252 data with no encoding specification, and you know for sure that it's always going to be CP1252, you can just convert it to UTF-8 before sending it to the parser:
s.decode("CP1252").encode("UTF-8")
A:
Byte 0x92 is never valid as the first byte of a UTF-8 character. It can be valid as a subsequent byte, however. See this UTF-8 guide for a table of valid byte sequences.
Could you give us an idea of what bytes are surrounding 0x92? Does the XML declaration include a character encoding?
A:
Ah. That is "can´t", obviously, and indeed, 0x92 is an apostrophe in many Windows code pages. Your editor assumes instead that it's a Mac file. ;)
If it's a one-off, fixing the file is the right thing to do. But almost always when you need to import other people's XML there are a lot of things that simply do not agree with the stated encoding. I've found that the best solution is to decode with a lenient error handler such as 'replace' (the 'xmlcharrefreplace' handler is only available when encoding back out), and in severe cases do your own custom character replacement that fixes the most common problems for that particular customer.
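As a minimal sketch of that lenient round-trip (the file name is hypothetical; 'replace' swallows undecodable bytes on the way in, 'xmlcharrefreplace' substitutes character references on the way out):
raw = open('input.xml', 'rb').read()
text = raw.decode('cp1252', 'replace')            # lenient decode
utf8 = text.encode('utf-8', 'xmlcharrefreplace')  # safe re-encode for the parser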
I'll also recommend lxml as XML library in Python, but that's not the problem here.
|
Alternative XML parser for ElementTree to ease UTF-8 woes?
|
I am parsing some XML with the elementtree.parse() function. It works, except for some utf-8 characters(single byte character above 128). I see that the default parser is XMLTreeBuilder which is based on expat.
Is there an alternative parser that I can use that may be less strict and allow utf-8 characters?
This is the error I'm getting with the default parser:
ExpatError: not well-formed (invalid token): line 311, column 190
The character causing this is a single byte x92 (in hex). I'm not certain this is even a valid utf-8 character. But it would be nice to handle it because most text editors display this as: í
EDIT: The context of the character is: canít , where I assume it is supposed to be a fancy apostrophe, but in the hex editor, that same sequence is: 63 61 6E 92 74
|
[
"I'll start from the question: \"Is there an alternative parser that I can use that may be less strict and allow utf-8 characters?\"\nAll XML parsers will accept data encoded in UTF-8. In fact, UTF-8 is the default encoding.\nAn XML document may start with a declaration like this:\n`<?xml version=\"1.0\" encoding=\"UTF-8\"?>`\n\nor like this:\n <?xml version=\"1.0\"?>\nor not have a declaration at all ... in each case the parser will decode the document using UTF-8.\nHowever your data is NOT encoded in UTF-8 ... it's probably Windows-1252 aka cp1252.\nIf the encoding is not UTF-8, then either the creator should include a declaration (or the recipient can prepend one) or the recipient can transcode the data to UTF-8. The following showcases what works and what doesn't:\n>>> import xml.etree.ElementTree as ET\n>>> from StringIO import StringIO as sio\n\n>>> raw_text = '<root>can\\x92t</root>' # text encoded in cp1252, no XML declaration\n\n>>> t = ET.parse(sio(raw_text))\n[tracebacks omitted]\nxml.parsers.expat.ExpatError: not well-formed (invalid token): line 1, column 9\n# parser is expecting UTF-8\n\n>>> t = ET.parse(sio('<?xml version=\"1.0\" encoding=\"UTF-8\"?>' + raw_text))\nxml.parsers.expat.ExpatError: not well-formed (invalid token): line 1, column 47\n# parser is expecting UTF-8 again\n\n>>> t = ET.parse(sio('<?xml version=\"1.0\" encoding=\"cp1252\"?>' + raw_text))\n>>> t.getroot().text\nu'can\\u2019t'\n# parser was told to expect cp1252; it works\n\n>>> import unicodedata\n>>> unicodedata.name(u'\\u2019')\n'RIGHT SINGLE QUOTATION MARK'\n# not quite an apostrophe, but better than an exception\n\n>>> fixed_text = raw_text.decode('cp1252').encode('utf8')\n# alternative: we transcode the data to UTF-8\n\n>>> t = ET.parse(sio(fixed_text))\n>>> t.getroot().text\nu'can\\u2019t'\n# UTF-8 is the default; no declaration needed\n\n",
"It looks like you have CP1252 text. If so, it should be specified at the top of the file, eg.:\n<?xml version=\"1.0\" encoding=\"CP1252\" ?>\n\nThis does work with ElementTree.\nIf you're creating these files yourself, don't write them in this encoding. Save them as UTF-8 and do your part to help kill obsolete text encodings.\nIf you're receiving CP1252 data with no encoding specification, and you know for sure that it's always going to be CP1252, you can just convert it to UTF-8 before sending it to the parser:\ns.decode(\"CP1252\").encode(\"UTF-8\")\n\n",
"Byte 0x92 is never valid as the first byte of a UTF-8 character. It can be valid as a subsequent byte, however. See this UTF-8 guide for a table of valid byte sequences.\nCould you give us an idea of what bytes are surrounding 0x92? Does the XML declaration include a character encoding?\n",
"Ah. That is \"can´t\", obviously, and indeed, 0x92 is an apostrophe in many Windows code pages. Your editor assumes instead that it's a Mac file. ;)\nIf it's a one-off, fixing the file is the right thing to do. But almost always when you need to import other peoples XML there is a lot of things that simply do not agree with the stated encoding. I've found that the best solution is to decode with error setting 'xmlcharrefreplace', and in severe cases do your own custom character replacement that fixes the most common problems for that particular customer.\nI'll also recommend lxml as XML library in Python, but that's not the problem here.\n"
] |
[
15,
4,
1,
1
] |
[] |
[] |
[
"elementtree",
"python",
"utf_8",
"xml"
] |
stackoverflow_0001139090_elementtree_python_utf_8_xml.txt
|
Q:
How to get repository for core-plot
I am not able to get the repository for core-plot. What I am doing is that I am typing this in the terminal:
hg clone https://core-plot.googlecode.com/hg/ core-plot
and this is what I get:
Traceback (most recent call last):
File "/usr/local/bin/hg", line 25, in
mercurial.util.set_binary(fp)
File "/Library/Python/2.5/site-packages/mercurial/demandimport.py", line 75, in __getattribute__
self._load()
File "/Library/Python/2.5/site-packages/mercurial/demandimport.py", line 47, in _load
mod = _origimport(head, globals, locals)
File "/Library/Python/2.5/site-packages/mercurial/util.py", line 93, in
_encoding = locale.getlocale()[1]
File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/locale.py", line 460, in getlocale
return _parse_localename(localename)
File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/locale.py", line 373, in _parse_localename
raise ValueError, 'unknown locale: %s' % localename
ValueError: unknown locale: UTF-8
I can't seem to get it to install. Please give me guidance on how to install the repository.
A:
Have you installed Mercurial on your computer? If not, you can download an installer here: http://mercurial.berkwood.com/
A:
It looks like you're having a problem with your locale. Are you using Leopard? If so, check your Terminal preferences. In the Terminal prefs, open up the Settings pane, and click the Advanced tab. The "Character Encoding" menu should be set to "Unicode (UTF-8)". Also make sure that "Set LANG variable on startup" is checked.
You can check your locale setting by opening up the Terminal and typing echo $LANG. Mine returns en_US.UTF-8 (US English, UTF-8). Not sure what your preferred language is, but it should be <langcode>.UTF-8 -- make sure it ends with UTF-8.
A:
The repository works perfectly fine:
↪ hg clone https://core-plot.googlecode.com/hg/ core-plot
requesting all changes
adding changesets
adding manifests
adding file changes
added 406 changesets with 3444 changes to 1861 files
updating working directory
1018 files updated, 0 files merged, 0 files removed, 0 files unresolved
So I suspect your problem is that hg isn't in your path, or you've not installed Mercurial. You should grab a copy of the installer, or install via your package management system (MacPorts, Apt, YUM etc.)
A:
It looks to me like you have a broken Python installation. However, since you're trying to get Mercurial working, please contact the Mercurial team through the correct channels: use the Mercurial mailing list or the Mercurial bug tracker.
Doing so means that many more people see your problem and hopefully there will be someone who is using a Mac who can help you (I'm using Debian and don't know what Apple has done to your Python installation...).
A:
LANG can be overridden by HGENCODING. If "echo $HGENCODING" produces "UTF-8" that's your culprit. Unset it or set it to en_US.UTF-8 (or whatever language you prefer, but it should end in .UTF-8). You could also try setting HGENCODING or LANG to "C" if you have no need for non-ascii characters, or just as a test.
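A minimal diagnostic sketch, run with the same Python that Mercurial uses, makes the culprit visible:
import os, locale
print os.environ.get('LANG'), os.environ.get('HGENCODING')
print locale.getlocale()   # raises ValueError on a bare 'UTF-8' locale, as in the traceback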
|
How to get repository for core-plot
|
I am not able to get the repository for core-plot. What I am doing is that I am typing this in the terminal:
hg clone https://core-plot.googlecode.com/hg/ core-plot
and this is what I get:
Traceback (most recent call last):
File "/usr/local/bin/hg", line 25, in
mercurial.util.set_binary(fp)
File "/Library/Python/2.5/site-packages/mercurial/demandimport.py", line 75, in __getattribute__
self._load()
File "/Library/Python/2.5/site-packages/mercurial/demandimport.py", line 47, in _load
mod = _origimport(head, globals, locals)
File "/Library/Python/2.5/site-packages/mercurial/util.py", line 93, in
_encoding = locale.getlocale()[1]
File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/locale.py", line 460, in getlocale
return _parse_localename(localename)
File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/locale.py", line 373, in _parse_localename
raise ValueError, 'unknown locale: %s' % localename
ValueError: unknown locale: UTF-8
I can't seem to get it to install. Please give me guidance on how to install the repository.
|
[
"Have you installed Mercurial on your computer? If not, you can download an installer here: http://mercurial.berkwood.com/\n",
"It looks like you're having a problem with your locale. Are you using Leopard? If so, check your Terminal preferences. In the Terminal prefs, open up the Settings pane, and click the Advanced tab. The \"Character Encoding\" menu should be set to \"Unicode (UTF-8)\". Also make sure that \"Set LANG variable on startup\" is checked.\nYou can check your locale setting by opening up the Terminal and typing echo $LANG. Mine returns en_US.UTF-8 (US English, UTF-8). Not sure what your preferred language is, but it should be <langcode>.UTF-8 -- make sure it ends with UTF-8.\n",
"The repository works perfectly fine:\n↪ hg clone https://core-plot.googlecode.com/hg/ core-plot\nrequesting all changes\nadding changesets\nadding manifests\nadding file changes\nadded 406 changesets with 3444 changes to 1861 files\nupdating working directory\n1018 files updated, 0 files merged, 0 files removed, 0 files unresolved\n\nSo I suspect your problem is that hg isn't in your path, or you've not installed Mercurial. You should grab a copy of the installer, or install via your package management system (MacPorts, Apt, YUM etc.)\n",
"It looks to me like you have a broken Python installation. However, since you're trying to get Mercurial working, please contact the Mercurial team through the correct channels. Use the\n\nMercurial mailinglist or the\nMercurial bug tracker.\n\nDoing so means that many more people see your problem and hopefully there will be someone who is using Mac who can help you (I'm using Debian and don't know what Apple has done to your Python installation...).\n",
"LANG can be overridden by HGENCODING. If \"echo $HGENCODING\" produces \"UTF-8\" that's your culprit. Unset it or set it to en_US.UTF-8 (or whatever language you prefer, but it should end in .UTF-8). You could also try setting HGENCODING or LANG to \"C\" if you have no need for non-ascii characters, or just as a test.\n"
] |
[
1,
1,
0,
0,
0
] |
[] |
[] |
[
"core_plot",
"macos",
"mercurial",
"python",
"terminal"
] |
stackoverflow_0001097711_core_plot_macos_mercurial_python_terminal.txt
|
Q:
encryption with python
If I want to use:
recip = M2Crypto.RSA.load_pub_key(open('recipient_public_key.pem','rb').read())
Then how will it retrieve the key? What will recip print?
I need to get the public key of the recipient from the server (an open key server), and for that I first need to store the key on the server.
A:
Check what recipient_public_key.pem contains and what load_pub_key returns; be clear about how you want to recognize your recipient.
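For reference, a minimal sketch of how load_pub_key is normally used; it takes a file path, and loading from an in-memory string goes through a BIO instead (the .pem file name is taken from the question):
from M2Crypto import RSA, BIO

recip = RSA.load_pub_key('recipient_public_key.pem')   # takes a path, not file contents

pem_data = open('recipient_public_key.pem', 'rb').read()
recip2 = RSA.load_pub_key_bio(BIO.MemoryBuffer(pem_data))

print recip   # prints an RSA key object, not the PEM text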
|
encryption with python
|
If I want to use:
recip = M2Crypto.RSA.load_pub_key(open('recipient_public_key.pem','rb').read())
Then how will it retrieve the key? What will recip print?
I need to get the public key of the recipient from the server (an open key server), and for that I first need to store the key on the server.
|
[
"check what public_key.pem returns .clear this how you want to recognize your recipient .\n"
] |
[
0
] |
[] |
[] |
[
"cryptography",
"python",
"rsa"
] |
stackoverflow_0001141542_cryptography_python_rsa.txt
|
Q:
How to hide a bulletpoint in blog
How do I hide bullet points? For example, like this website:
http://www.grainge.org/pages/various_rh_projects/alt_dropdowns/showhide_3/showhide3.htm
you can see the example
first Hotspot
second hotspot
If we click 'first' it appears, but otherwise it does not appear. How do I do that?
A:
This is done in JavaScript, not Python, I would wager.
Basic strategy:
Start by adding (in the HTML) class="hideme" to the div's or p's or li's you want to affect.
Then using something like the below hideClass(class) function (jQuery would be worth looking at too), select all parts of the page with class="hideme" and set their style to display: none to hide or display: block to show
function hideClass(name)
{
var matches = getElementsByClassName(name);
for (var i = 0; i < matches.length; i++)
{
var match = matches[i];
match.style.display = "none";
}
}
This calls getElementsByClassName.js available here:
http://code.google.com/p/getelementsbyclassname/
A function showClass(name) could be made similarly, with match.style.display = "block";
A:
This is certainly done with javascript.
Another possibility is to have empty elements
<div id="myelt"></div>
and to change the html content of this element
document.getElementById('myelt').innerHTML = "My text";
A:
In jQuery you could do it like this (v. quick example):
$(function(){
$('ul ul')
.hide() //Hide the sub-lists
.siblings('a').click(function(){
$(this).siblings('ul').toggle(); //show or hide the hidden ul
});
});
This should also allow for sub-lists with hidden children and hotspots.
|
How to hide a bulletpoint in blog
|
How do I hide bullet points? For example, like this website:
http://www.grainge.org/pages/various_rh_projects/alt_dropdowns/showhide_3/showhide3.htm
you can see the example
first Hotspot
second hotspot
If we click 'first' it appears, but otherwise it does not appear. How do I do that?
|
[
"This is done in JavaScript, not python, I would wager.\nBasic strategy:\n\nStart by adding (in the HTML) class=\"hideme\" to the div's or p's or li's you want to affect. \nThen using something like the below hideClass(class) function (jQuery would be worth looking at too), select all parts of the page with class=\"hideme\" and set their style to display: none to hide or display: block to show\n\n.\nfunction hideClass(name)\n{\n var matches = getElementsByClassName(name);\n for (var i = 0; i < matches.length; i++)\n {\n var match = matches[i];\n match.style.display = \"none\";\n }\n}\n\nThis calls getElementsByClassName.js available here:\nhttp://code.google.com/p/getelementsbyclassname/\nA function showClass(name) could be made similarly, with match.style.display = \"block\";\n",
"This is certainly done with javascript.\nAnother possibility is to have empty elements \n<div id=\"myelt\"></div>\nand to change the html content of this element \ndocument.getElementById('myelt').innerHTML = \"My text\";\n",
"In jQuery you could do it like this (v. quick example):\n$(function(){\n $('ul ul')\n .hide() //Hide the sub-lists\n .siblings('a').click(function(){\n $(this).siblings('ul').toggle(); //show or hide the hidden ul\n });\n});\n\nThis should also allow for sub-lists with hidden children and hotspots.\n"
] |
[
1,
1,
0
] |
[] |
[] |
[
"css",
"html",
"javascript",
"jquery",
"python"
] |
stackoverflow_0001141774_css_html_javascript_jquery_python.txt
|
Q:
MySQL db problem in Python
For me, MySQLdb has been successfully installed on my system. I verified through the following code that it is installed without any errors.
C:\Python26>python
Python 2.6.1 (r261:67517, Dec 4 2008, 16:51:00) [MSC v.1500 32 bit (Intel)] on
win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import MySQLdb
>>>
But when I import MySQLdb in my script, it gives 'No module named MySQLdb'.
Kindly let me know the problem and the solution.
I am using Python 2.6 and MySQL 4.0.3 on Windows XP.
Thanks in advance...
A:
1) Try using your package manager to download python-mysql which includes MySQLdb.
2) Ensure /usr/lib/python2.4/site-packages/ is in your PYTHONPATH, e.g.:
>>> import sys
>>> from pprint import pprint
>>> pprint(sys.path)
['',
'/usr/lib/python2.4',
'/usr/lib/python2.4/plat-linux2',
'/usr/lib/python2.4/lib-tk',
'/usr/lib/python2.4/site-packages']
3) You seem to be using the correct capitalization in your example, but it bears mentioning that the module name is case-sensitive, i.e. MySQLdb (correct) != mysqldb (incorrect).
Edit: Looks like nilamo has found the problem. As mentioned in a comment: you might be running your script with Python 2.6, but MySQLdb is installed in 2.4's site-packages directory.
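A minimal sketch for checking which interpreter actually runs the failing script: add it at the top of the script and compare the output with the interactive session.
import sys
print sys.version      # e.g. 2.4.x vs 2.6.x
print sys.executable   # path of the interpreter running this script
print sys.path         # where this interpreter looks for MySQLdb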
A:
Since you show you are running linux, but you mention that mysql is running on windows, I suspect that you don't have MySQL, or the MySQL libraries or Python bindings, installed on the linux machine.
|
MySQL db problem in Python
|
For me, MySQLdb has been successfully installed on my system. I verified through the following code that it is installed without any errors.
C:\Python26>python
Python 2.6.1 (r261:67517, Dec 4 2008, 16:51:00) [MSC v.1500 32 bit (Intel)] on
win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import MySQLdb
>>>
But when I import MySQLdb in my script, it gives 'No module named MySQLdb'.
Kindly let me know the problem and the solution.
I am using Python 2.6 and MySQL 4.0.3 on Windows XP.
Thanks in advance...
|
[
"1) Try using your package manager to download python-mysql which includes MySQLdb.\n2) Ensure /usr/lib/python2.4/site-packages/ is in your PYTHONPATH, e.g.:\n>>> import sys\n>>> from pprint import pprint\n>>> pprint(sys.path)\n['',\n '/usr/lib/python2.4',\n '/usr/lib/python2.4/plat-linux2',\n '/usr/lib/python2.4/lib-tk',\n '/usr/lib/python2.4/site-packages']\n\n3) You seem to be using the correct capitalization in your example, but it bears mentioning that the module name is case-sensitive, i.e. MySQLdb (correct) != mysqldb (incorrect).\nEdit: Looks like nilamo has found the problem. As mentioned in a comment: you might be running your script with Python 2.6, but MySQLdb is installed in 2.4's site-packages directory.\n",
"Since you show you are running linux, but you mention that mysql is running on windows, I suspect that you don't have MySQL, or the MySQL libraries or Python bindings, installed on the linux machine.\n"
] |
[
2,
0
] |
[] |
[] |
[
"mysql",
"python"
] |
stackoverflow_0001141790_mysql_python.txt
|
Q:
Problem executing with Python+MySQL
I cannot figure out why my Python script is not working, even though to my knowledge I have set everything up correctly. The test below worked fine, but when I import MySQLdb in my script it gives the error 'No module named MySQLdb'.
C:\Python26>python
Python 2.6.1 (r261:67517, Dec 4 2008, 16:51:00) [MSC v.1500 32 bit (Intel)] on
win32
Type "help", "copyright", "credits" or "license" for more information.
import MySQLdb
Kindly let me know the reason for this error.
All development is on Windows XP, Python 2.6, and MySQL 4.0.3.
I posted this question an hour ago, but there was a mistake in the question itself.
A:
Seems like the path is not set properly.
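To verify that, a minimal diagnostic sketch run from the failing script can be compared against the interactive session where the import works:
import sys
print sys.executable   # which Python is running the script
print sys.path         # does it include the site-packages directory with MySQLdb?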
|
Problem executing with Python+MySQL
|
I cannot figure out why my Python script is not working, even though to my knowledge I have set everything up correctly. The test below worked fine, but when I import MySQLdb in my script it gives the error 'No module named MySQLdb'.
C:\Python26>python
Python 2.6.1 (r261:67517, Dec 4 2008, 16:51:00) [MSC v.1500 32 bit (Intel)] on
win32
Type "help", "copyright", "credits" or "license" for more information.
import MySQLdb
Kindly let me know the reason for this error.
All development is on Windows XP, Python 2.6, and MySQL 4.0.3.
I posted this question an hour ago, but there was a mistake in the question itself.
|
[
"seems like the path is not set properly.\n"
] |
[
0
] |
[] |
[] |
[
"mysql",
"python"
] |
stackoverflow_0001142098_mysql_python.txt
|
Q:
Python networking library for a simple card game
I'm trying to implement a fairly simple card game in Python so that two players can play together over the Internet. I have no problem with doing the GUI, but I don't know the first thing about how to do the networking part. A couple libraries I've found so far:
PyRO: seems nice and seems to fit the problem nicely by having shared Card objects in various states.
Twisted with pyglet-twisted: this looks powerful but complicated; I've used Pyglet before though so maybe it wouldn't be too bad.
Can anyone recommend the most appropriate one for my game (not necessarily on this list, I've probably missed lots of good ones)?
A:
Both of those libraries are very good and would work perfectly for your card game.
Pyro might be easier to learn and use, but Twisted will scale better if you ever want to move into a very large number of players.
Twisted can be daunting at first but there are some books to help you get over the hump.
There are some other libraries to choose from, but the two you found are mature and used widely within the Python community, so you'll have a better chance of finding people to answer any questions.
My personal recommendation would be to use Pyro if you're just wanting to play around with networking but go with Twisted if you have grand plans for lots of players on the internet.
A:
If you decide you don't want to use a 3rd party library, I'd recommend the asynchat module in the standard library. It's perfect for sending/receiving through a simple protocol.
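As a minimal sketch of what an asynchat-based server for a card game could look like (the protocol and port are assumptions: one newline-terminated message per move, echoed back):
import asyncore, asynchat, socket

class CardChannel(asynchat.async_chat):
    def __init__(self, sock):
        asynchat.async_chat.__init__(self, sock)
        self.set_terminator('\n')        # one newline-terminated message per move
        self.buffer = []

    def collect_incoming_data(self, data):
        self.buffer.append(data)

    def found_terminator(self):
        move = ''.join(self.buffer)
        self.buffer = []
        self.push(move + '\n')           # echo the move back to the player

class CardServer(asyncore.dispatcher):
    def __init__(self, port):
        asyncore.dispatcher.__init__(self)
        self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
        self.set_reuse_addr()
        self.bind(('', port))
        self.listen(5)

    def handle_accept(self):
        sock, addr = self.accept()
        CardChannel(sock)

CardServer(7777)
asyncore.loop()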
A:
Twisted is the better of the two libraries, but learning to use it and learning raw networking will take you a similar amount of time (at least it did for me).
If I were you I'd rather learn networking; it will be much more useful to you in the future. The concepts are the same for most languages, so it's more portable as well. If you are going to take this approach, have a look at http://www.amk.ca/python/howto/sockets/; it will take you through everything.
|
Python networking library for a simple card game
|
I'm trying to implement a fairly simple card game in Python so that two players can play together over the Internet. I have no problem with doing the GUI, but I don't know the first thing about how to do the networking part. A couple libraries I've found so far:
PyRO: seems nice and seems to fit the problem nicely by having shared Card objects in various states.
Twisted with pyglet-twisted: this looks powerful but complicated; I've used Pyglet before though so maybe it wouldn't be too bad.
Can anyone recommend the most appropriate one for my game (not necessarily on this list, I've probably missed lots of good ones)?
|
[
"Both of those libraries are very good and would work perfectly for your card game.\nPyro might be easier to learn and use, but Twisted will scale better if you ever want to move into a very large number of players.\nTwisted can be daunting at first but there are some books to help you get over the hump.\nThe are some other libraries to choose from but the two you found are mature and used widely within the Python community so you'll have a better chance of finding people to answer any questions.\nMy personal recommendation would be to use Pyro if you're just wanting to play around with networking but go with Twisted if you have grand plans for lots of players on the internet.\n",
"If you decide you don't want to use a 3rd party library, I'd recommend the asynchat module in the standard library. It's perfect for sending/receiving through a simple protocol.\n",
"Twisted is the better of the two libraries but the time spent learning to use it but learning networking will take you similar amount of time (at least for me). \nIf I were you I'd rather learn networking it will be much more useful to you in the future. The concepts are the same for most languages so its more portable as well. If you are going to take this approach have a look at http://www.amk.ca/python/howto/sockets/ it will take you through everything.\n"
] |
[
8,
5,
3
] |
[] |
[] |
[
"networking",
"python"
] |
stackoverflow_0001141130_networking_python.txt
|
Q:
Convert HTML to Django Fixture (JSON)
We've got a couple of Django flatpages in our project that are based on actual HTML files. These files undergo some changes once in a while and hence have to be updated in the database. So I came up with the idea of simply copying the plain HTML text into a JSON fixture and doing a manage.py loaddata.
However, the problem is, that there are quite some characters inside the HTML that have to be escaped in order to pass as JSON. Is there some script, sed command or maybe even an official Django solution for that problem?
A:
You could write your own manage.py command to read in the HTML file and add it to the flatpages:
# Assuming variable html contains the new HTML file,
#+ and var id the ID of the flatpage.
from django.contrib.flatpages.models import FlatPage
fp = FlatPage.objects.get (id=id)
fp.content = html
fp.save()
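Fleshed out as an actual management command, a minimal sketch could look like this (the loadflatpage name and file path are hypothetical; both the management/ and commands/ directories need an __init__.py):
# myapp/management/commands/loadflatpage.py
from django.core.management.base import BaseCommand
from django.contrib.flatpages.models import FlatPage

class Command(BaseCommand):
    help = "Replace a flatpage's content with the contents of an HTML file."

    def handle(self, *args, **options):
        flatpage_id, path = args
        fp = FlatPage.objects.get(id=int(flatpage_id))
        fp.content = open(path).read()
        fp.save()

# usage: python manage.py loadflatpage 1 about.html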
|
Convert HTML to Django Fixture (JSON)
|
We've got a couple of Django flatpages in our project that are based on actual HTML files. These files undergo some changes once in a while and hence have to be updated in the database. So I came up with the idea of simply copying the plain HTML text into a JSON fixture and doing a manage.py loaddata.
However, the problem is, that there are quite some characters inside the HTML that have to be escaped in order to pass as JSON. Is there some script, sed command or maybe even an official Django solution for that problem?
|
[
"You could write your own manage.py command to read in the HTML file and adding them to the flatpages:\n# Assuming variable html contains the new HTML file,\n#+ and var id the ID of the flatpage.\nfrom django.contrib.flatpages.models import FlatPage\nfp = FlatPage.objects.get (id=id)\nfp.content = html\nfp.save()\n\n"
] |
[
1
] |
[] |
[] |
[
"django",
"json",
"python"
] |
stackoverflow_0001142702_django_json_python.txt
|
Q:
AttributeError: 'unicode' object has no attribute '_meta'
I am getting this error on "python manage.py migrate contacts".
The error info does not pinpoint the problem location.
Here is the error description:
http://dpaste.com/68162/
Here is a sample model definition:
http://dpaste.com/68173/
Can someone point me in the right direction?
I got this: http://blog.e-shell.org/66
but cannot figure out the problem.
A:
Figured out the problem. There was this line:
note = GenericRelation('Comment', object_id_field='object_pk')
in the Company and Person models. But the Comment class was undefined. I commented out the line in both places. It works now.
Thanks for your time.
|
AttributeError: 'unicode' object has no attribute '_meta'
|
I am getting this error on "python manage.py migrate contacts".
The error info does not pinpoint the problem location.
Here is the error description:
http://dpaste.com/68162/
Here is a sample model definition:
http://dpaste.com/68173/
Can someone point me in the right direction?
I got this: http://blog.e-shell.org/66
but cannot figure out the problem.
|
[
"Figured out the problem. There was this line:\nnote = GenericRelation('Comment', object_id_field='object_pk')\n\nin model Company and Person. But Comment class was undefined. I commented the line at both places. It works now.\nThanks for your time.\n"
] |
[
2
] |
[] |
[] |
[
"django",
"python"
] |
stackoverflow_0001142717_django_python.txt
|
Q:
Using and Installing Django Custom Field Models
I found a custom field model (JSONField) that I would like to integrate into my Django project.
Where do I actually put the JSONField.py file? -- Would it reside in my Django project or would I put it in something like: /django/db/models/fields/
Since I assume it can be done multiple ways, would it then impact how JSONField (or any custom field for that matter) would get imported into my models.py file as well?
A:
It's worth remembering that Django is just Python, and so the same rules apply to Django customisations as they would for any other random Python library you might download. To use a bit of code, it has to be in a module somewhere on your Pythonpath, and then you can just do from foo import x.
I sometimes have a lib directory within my Django project structure, and put into it all the various things I might need to import. In this case I might put the JSONField code into a module called fields, as I might have other customised fields.
Since I know my project is already on the Pythonpath, I can just do from lib.fields import JSONField, then I can just do myfield = JSONField(options) in the model definition.
A:
For the first question, I would rather not put it into the django directory, because in case of upgrades you may end up losing all of your changes. It is a general point: modifying an external piece of code will lead to increased maintenance costs.
Therefore, I would suggest putting it into some place accessible from your Pythonpath: it could be a module in your project, or directly inside the site-packages directory.
As for the second question, just "installing" it will not impact your existing models.
You have to use it explicitly, either by adding it to all of your models that need it, or by defining a base model that uses it, from which all of your models will inherit.
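In practice, using the field looks no different from any built-in field; a minimal sketch, assuming the module ended up at lib/fields.py on your Pythonpath as in the first answer, and with hypothetical model names:
from django.db import models
from lib.fields import JSONField

class Profile(models.Model):
    name = models.CharField(max_length=100)
    preferences = JSONField()   # the custom field, used like any other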
A:
The best thing would be to keep Django and customizations apart. You could place the file anywhere on your Pythonpath, really.
|
Using and Installing Django Custom Field Models
|
I found a custom field model (JSONField) that I would like to integrate into my Django project.
Where do I actually put the JSONField.py file? -- Would it reside in my Django project or would I put it in something like: /django/db/models/fields/
Since I assume it can be done multiple ways, would it then impact how JSONField (or any custom field for that matter) would get imported into my models.py file as well?
|
[
"It's worth remembering that Django is just Python, and so the same rules apply to Django customisations as they would for any other random Python library you might download. To use a bit of code, it has to be in a module somewhere on your Pythonpath, and then you can just to from foo import x. \nI sometimes have a lib directory within my Django project structure, and put into it all the various things I might need to import. In this case I might put the JSONField code into a module called fields, as I might have other customised fields. \nSince I know my project is already on the Pythonpath, I can just do from lib.fields import JSONField, then I can just do myfield = JSONField(options) in the model definition.\n",
"For the first question, I would rather not put it into django directory, because in case of upgrades you may end up loosing all of your changes. It is a general point: modifying an external piece of code will lead to increased maintenance costs.\nTherefore, I would suggest you putting it into some place accessible from your pythonpath - it could be a module in your project, or directly inside the site-packages directory.\nAs about the second question, just \"installing\" it will not impact your existing models.\nYou have to explicitly use it, by either by adding it to all of your models that need it, either by defining a model that uses it, and from whom all of your models will inherit.\n",
"The best thing would be to keep Django and customizations apart. You could place the file anywhere on your pythonpath really \n"
] |
[
2,
1,
0
] |
[] |
[] |
[
"django",
"django_models",
"python"
] |
stackoverflow_0001141524_django_django_models_python.txt
|
Q:
PyQt: Overriding QGraphicsView.drawItems
I need to customize the drawing process of a QGraphicsView, and so I override the drawItems method like this:
self.graphicsview.drawItems=self.drawer.drawItems
where self.graphicsview is a QGraphicsView, and self.drawer is a custom class with a method drawItems.
In this method I check a few flags to decide how to draw each item, and then call item.paint, like this:
def drawItems(self, painter, items, options):
for item in items:
print "Processing", item
# ... Do checking ...
item.paint(painter, options, self.target)
self.target is the QGraphicsView's QGraphicsScene.
However, once it reaches item.paint, it breaks out of the loop - without any errors. If I put conditionals around the painting, and for each possible type of QGraphicsItem paste the code that is supposed to be executed (by looking at the Qt git-sources), everything works.
Not a very nice solution though... And I don't understand how it could even break out of the loop?
A:
There is an exception that occurs when the items are painted, but it is not reported right away. On my system (PyQt 4.5.1, Python 2.6), no exception is reported when I monkey-patch the following method:
def drawItems(painter, items, options):
print len(items)
for idx, i in enumerate(items):
print idx, i
if idx > 5:
raise ValueError()
Output:
45
0 <PyQt4.QtGui.QGraphicsPathItem object at 0x3585270>
1 <PyQt4.QtGui.QGraphicsSimpleTextItem object at 0x356ca68>
2 <PyQt4.QtGui.QGraphicsSimpleTextItem object at 0x356ce20>
3 <PyQt4.QtGui.QGraphicsSimpleTextItem object at 0x356cc88>
4 <PyQt4.QtGui.QGraphicsSimpleTextItem object at 0x356cc00>
5 <PyQt4.QtGui.QGraphicsSimpleTextItem object at 0x356caf0>
6 <PyQt4.QtGui.QGraphicsSimpleTextItem object at 0x356cb78>
However, once I close the application, the following message is printed:
Exception ValueError: ValueError() in <module 'threading' from '/usr/lib/python2.6/threading.pyc'> ignored
I tried printing threading.currentThread(), but it returns the same thread whether it's called in- or outside the monkey-patched drawItems method.
In your code, this is likely due to the fact that you pass options (which is a list of style options objects) to the individual items rather than the respective option object. Using this code should give you the correct results:
def drawItems(self, painter, items, options):
for item, option in zip(items, options):
print "Processing", item
# ... Do checking ...
item.paint(painter, option, self.target)
Also, you say the self.target is the scene object. The documentation for paint() says:
This function, which is usually called by QGraphicsView, paints the contents of an item in local coordinates. ... The widget argument is optional. If provided, it points to the widget that is being painted on; otherwise, it is 0. For cached painting, widget is always 0.
and the type is QWidget*. QGraphicsScene inherits from QObject and is not a widget, so it is likely that this is wrong, too.
Still, the fact that the exception is not reported at all, or not right away suggests some foul play, you should contact the maintainer.
A:
The reason why the loop suddenly exits is that an exception is thrown. Python doesn't handle it (there is no try: block), so it's passed to the caller (Qt's C++ code), which has no idea about Python exceptions, so it's lost.
Add a try/except around the loop and you should see the reason why this happens.
Note: Since Python 2.4, you should not override methods this way anymore.
Instead, you must derive a new class from QGraphicsView and add your drawItems() method to this new class. This will replace the original method properly.
Don't forget to call super() in the __init__ method! Otherwise, your object won't work properly.
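A minimal sketch of that subclassing approach (PyQt4 assumed), which also passes each item its own option object, as the first answer recommends:
from PyQt4 import QtGui

class CustomView(QtGui.QGraphicsView):
    def __init__(self, scene, parent=None):
        super(CustomView, self).__init__(scene, parent)

    def drawItems(self, painter, items, options):
        for item, option in zip(items, options):
            # ... do checking here ...
            item.paint(painter, option, self.viewport())  # widget argument: the view's viewport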
|
PyQt: Overriding QGraphicsView.drawItems
|
I need to customize the drawing process of a QGraphicsView, and so I override the drawItems method like this:
self.graphicsview.drawItems=self.drawer.drawItems
where self.graphicsview is a QGraphicsView, and self.drawer is a custom class with a method drawItems.
In this method I check a few flags to decide how to draw each item, and then call item.paint, like this:
def drawItems(self, painter, items, options):
for item in items:
print "Processing", item
# ... Do checking ...
item.paint(painter, options, self.target)
self.target is the QGraphicsView's QGraphicsScene.
However, once it reaches item.paint, it breaks out of the loop - without any errors. If I put conditionals around the painting, and for each possible type of QGraphicsItem paste the code that is supposed to be executed (by looking at the Qt git-sources), everything works.
Not a very nice solution though... And I don't understand how it could even break out of the loop?
|
[
"There is an exception that occurs when the items are painted, but it is not reported right away. On my system (PyQt 4.5.1, Python 2.6), no exception is reported when I monkey-patch the following method:\ndef drawItems(painter, items, options):\n print len(items)\n for idx, i in enumerate(items):\n print idx, i\n if idx > 5:\n raise ValueError()\n\nOutput:\n45\n0 <PyQt4.QtGui.QGraphicsPathItem object at 0x3585270>\n1 <PyQt4.QtGui.QGraphicsSimpleTextItem object at 0x356ca68>\n2 <PyQt4.QtGui.QGraphicsSimpleTextItem object at 0x356ce20>\n3 <PyQt4.QtGui.QGraphicsSimpleTextItem object at 0x356cc88>\n4 <PyQt4.QtGui.QGraphicsSimpleTextItem object at 0x356cc00>\n5 <PyQt4.QtGui.QGraphicsSimpleTextItem object at 0x356caf0>\n6 <PyQt4.QtGui.QGraphicsSimpleTextItem object at 0x356cb78>\n\nHowever, once I close the application, the following method is printed:\nException ValueError: ValueError() in <module 'threading' from '/usr/lib/python2.6/threading.pyc'> ignored\n\nI tried printing threading.currentThread(), but it returns the same thread whether it's called in- or outside the monkey-patched drawItems method.\nIn your code, this is likely due to the fact that you pass options (which is a list of style options objects) to the individual items rather than the respective option object. Using this code should give you the correct results:\ndef drawItems(self, painter, items, options):\n for item, option in zip(items, options):\n print \"Processing\", item\n # ... Do checking ...\n item.paint(painter, option, self.target)\n\nAlso, you say the self.target is the scene object. The documentation for paint() says:\n\nThis function, which is usually called by QGraphicsView, paints the contents of an item in local coordinates. ... The widget argument is optional. If provided, it points to the widget that is being painted on; otherwise, it is 0. For cached painting, widget is always 0.\n\nand the type is QWidget*. QGraphicsScene inherits from QObject and is not a widget, so it is likely that this is wrong, too.\nStill, the fact that the exception is not reported at all, or not right away suggests some foul play, you should contact the maintainer.\n",
"The reason why the loop suddenly exits is that an Exception is thrown. Python doesn't handle it (there is no try: block), so it's passed to the called (Qt's C++ code) which has no idea about Python exceptions, so it's lost.\nAdd a try/except around the loop and you should see the reason why this happens.\nNote: Since Python 2.4, you should not override methods this way anymore.\nInstead, you must derive a new class from QGraphicsView and add your drawItems() method to this new class. This will replace the original method properly.\nDon't forget to call super() in the __init__ method! Otherwise, your object won't work properly.\n"
] |
[
3,
1
] |
[] |
[] |
[
"pyqt",
"python"
] |
stackoverflow_0001142970_pyqt_python.txt
|
Q:
Alternatives to using pack_into() when manipulating a list of bytes?
I'm reading in a binary file into a list and parsing the binary data. I'm using unpack() to extract certain parts of the data as primitive data types, and I want to edit that data and insert it back into the original list of bytes. Using pack_into() would make it easy, except that I'm using Python 2.4, and pack_into() wasn't introduced until 2.5
Does anyone know of a good way to go about serializing the data this way so that I can accomplish essentially the same functionality as pack_into()?
A:
Have you looked at the bitstring module? It's designed to make the construction, parsing and modification of binary data easier than using the struct and array modules directly.
It's especially made for working at the bit level, but will work with bytes just as well. It will also work with Python 2.4.
from bitstring import BitString
s = BitString(filename='somefile')
# replace byte range with new values
# The step of '8' signifies byte rather than bit indicies.
s[10:15:8] = '0x001122'
# Search and replace byte value with two bytes
s.replace('0xcc', '0xddee', bytealigned=True)
# Different interpretations of the data are available through properties
if s[5:7:8].int > 1000:
s[5:7:8] = 1000
# Use the bytes property to get back to a Python string
open('newfile', 'wb').write(s.bytes)
The underlying data stored in the BitString is just an array object, but with a comprehensive set of functions and special methods to make it simple to modify and interpret.
A:
Do you mean editing data in a buffer object? Documentation on manipulating those at all from Python directly is fairly scarce.
If you just want to edit bytes in a string, it's simple enough, though; struct.pack_into is new to 2.5, but struct.pack isn't:
import struct
s = open("file").read()
ofs = 1024
fmt = "Ih"
size = struct.calcsize(fmt)
before, data, after = s[0:ofs], s[ofs:ofs+size], s[ofs+size:]
values = list(struct.unpack(fmt, data))
values[0] += 5
values[1] /= 2
data = struct.pack(fmt, *values)
s = "".join([before, data, after])
|
Alternatives to using pack_into() when manipulating a list of bytes?
|
I'm reading in a binary file into a list and parsing the binary data. I'm using unpack() to extract certain parts of the data as primitive data types, and I want to edit that data and insert it back into the original list of bytes. Using pack_into() would make it easy, except that I'm using Python 2.4, and pack_into() wasn't introduced until 2.5
Does anyone know of a good way to go about serializing the data this way so that I can accomplish essentially the same functionality as pack_into()?
|
[
"Have you looked at the bitstring module? It's designed to make the construction, parsing and modification of binary data easier than using the struct and array modules directly.\nIt's especially made for working at the bit level, but will work with bytes just as well. It will also work with Python 2.4.\nfrom bitstring import BitString\ns = BitString(filename='somefile')\n\n# replace byte range with new values\n# The step of '8' signifies byte rather than bit indicies.\ns[10:15:8] = '0x001122'\n\n# Search and replace byte value with two bytes\ns.replace('0xcc', '0xddee', bytealigned=True)\n\n# Different interpretations of the data are available through properties\nif s[5:7:8].int > 1000:\n s[5:7:8] = 1000\n\n# Use the bytes property to get back to a Python string\nopen('newfile', 'wb').write(s.bytes)\n\nThe underlying data stored in the BitString is just an array object, but with a comprehensive set of functions and special methods to make it simple to modify and interpret.\n",
"Do you mean editing data in a buffer object? Documentation on manipulating those at all from Python directly is fairly scarce.\nIf you just want to edit bytes in a string, it's simple enough, though; struct.pack_into is new to 2.5, but struct.pack isn't:\nimport struct\ns = open(\"file\").read()\nofs = 1024\nfmt = \"Ih\"\nsize = struct.calcsize(fmt)\n\nbefore, data, after = s[0:ofs], s[ofs:ofs+size], s[ofs+size:]\nvalues = list(struct.unpack(fmt, data))\nvalues[0] += 5\nvalues[1] /= 2\ndata = struct.pack(fmt, *values)\ns = \"\".join([before, data, after])\n\n"
] |
[
4,
1
] |
[] |
[] |
[
"binary",
"python",
"struct"
] |
stackoverflow_0001133044_binary_python_struct.txt
|
Q:
defining functions in decorator
Why does this not work? How can I make it work? That is, how can I make gu accessible inside my decorated function?
def decorate(f):
def new_f():
def gu():
pass
f()
return new_f
@decorate
def fu():
gu()
fu()
Do I need to add gu to a dictionary of defined functions somehow? Or can I add gu to the local namespace of f before calling it?
A:
If you need to pass gu to fu you need to do this explicitly by parameters:
def decorate(f):
def new_f():
def gu():
pass
f(gu)
return new_f
@decorate
def fu(gu):
gu()
fu()
A:
gu is local to the new_f function, which is local to the decorate function.
A:
gu() is only defined within new_f(). Unless you return it or anchor it to new_f() or something else, it cannot be referenced from outside new_f()
I don't know what you're up to, but this scheme seems very complex. Maybe you can find a less complicated solution.
A:
In principle you can create a new function using the same code as the old one but substituting the global scope with an amended one:
import new
def with_bar(func):
def bar(x):
return x + 1
f_globals = func.func_globals.copy()
f_globals['bar'] = bar
return new.function(func.func_code, f_globals,
func.func_name, func.func_defaults, func.func_closure)
@with_bar
def foo(x):
return bar(x)
print foo(5) # prints 6
In practice you really should find a better way to do this. Passing in functions as parameters is one option. There might be other approaches too, but it's hard to tell what would fit without a high-level problem description.
A:
Why not make your decorator a class rather than a function? It's apparently possible, as I discovered when I looked through the help for the property builtin. (Previously, I had thought that you could merely apply decorators to classes, and not that the decorators themselves could be classes.)
(Of course, gu would have to be a method of the class or of an inner class.)
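A minimal sketch of that class-based variant; it still hands gu to the wrapped function explicitly, as in the parameter-passing approach, and the names are taken from the question:
class decorate(object):
    def __init__(self, f):
        self.f = f

    def gu(self):
        pass

    def __call__(self):
        return self.f(self.gu)   # hand the helper to the wrapped function

@decorate
def fu(gu):
    gu()

fu()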
|
defining functions in decorator
|
Why does this not work? How can I make it work? That is, how can I make gu accessible inside my decorated function?
def decorate(f):
def new_f():
def gu():
pass
f()
return new_f
@decorate
def fu():
gu()
fu()
Do I need to add gu to a dictionary of defined functions somehow? Or can I add gu to the local namespace of f before calling it?
|
[
"If you need to pass gu to fu you need to do this explicitly by parameters:\ndef decorate(f):\n def new_f():\n def gu():\n pass\n f(gu)\n return new_f\n\n@decorate\ndef fu(gu):\n gu()\n\nfu()\n\n",
"gu is local to the new_f function, which is local to the decorate function.\n",
"gu() is only defined within new_f(). Unless you return it or anchor it to new_f() or something else, it cannot be referenced from outside new_f()\nI don't know what you're up to, but this scheme seems very complex. Maybe you can find a less complicated solution.\n",
"In principle you can create a new function using the same code as the old one but substituting the global scope with an amended one:\nimport new\n\ndef with_bar(func):\n def bar(x):\n return x + 1\n f_globals = func.func_globals.copy()\n f_globals['bar'] = bar\n return new.function(func.func_code, f_globals,\n func.func_name, func.func_defaults, func.func_closure)\n\n@with_bar\ndef foo(x):\n return bar(x)\n\nprint foo(5) # prints 6\n\nIn practice you really should find a better way to do this. Passing in functions as parameters is one option. There might be other approaches too, but it's hard to tell what would fit without a high-level problem description.\n",
"Why not make your decorator a class rather than a function? It's apparently possible, as I discovered when I looked through the help for the property builtin. (Previously, I had thought that you could merely apply decorators to classes, and not that the decorators themselves could be classes.)\n(Of course, gu would have to be a method of the class or of an inner class.)\n"
] |
[
3,
1,
1,
0,
0
] |
[] |
[] |
[
"aop",
"argument_passing",
"decorator",
"python"
] |
stackoverflow_0001141902_aop_argument_passing_decorator_python.txt
|
Q:
Python @property versus method performance - which one to use?
I have written some code that uses attributes of an object:
class Foo:
def __init__(self):
self.bar = "baz"
myFoo = Foo()
print (myFoo.bar)
Now I want to do some fancy calculation to return bar. I could use @property to make methods act as the attribute bar, or I could refactor my code to use myFoo.bar().
Should I go back and add parens to all my bar accesses or use @property? Assume my code base is small now but due to entropy it will increase.
A:
If it's logically a property/attribute of the object, I'd say keep it as a property. If it's likely to become parametrised, by which I mean you may want to invoke myFoo.bar(someArgs) then bite the bullet now and make it a method.
Under most circumstances, performance is unlikely to be an issue.
A:
Wondering about performance is needless when it's so easy to measure it:
$ python -mtimeit -s'class X(object):
> @property
> def y(self): return 23
> x=X()' 'x.y'
1000000 loops, best of 3: 0.685 usec per loop
$ python -mtimeit -s'class X(object):
def y(self): return 23
x=X()' 'x.y()'
1000000 loops, best of 3: 0.447 usec per loop
$
(on my slow laptop -- if you wonder why the 2nd case doesn't have secondary shell prompts, it's because I built it from the first with an up-arrow in bash, and that repeats the linebreak structure but not the prompts!-).
So unless you're in a case where you know 200+ nanoseconds or so will matter (a tight inner loop you're trying to optimize to the utmost), you can afford to use the property approach; if you do some computations to get the value, the 200+ nanoseconds will of course become a smaller fraction of the total time taken.
I do agree with other answers that if the computations become too heavy, or you may ever want parameters, etc, a method is preferable -- similarly, I'll add, if you ever need to stash the callable somewhere but only call it later, and other fancy functional programming tricks; but I wanted to make the performance point quantitatively, since timeit makes such measurements so easy!-)
A:
In cases like these, I find it much better to choose the option that makes the most sense. You won't get any noticeable performance loss with small differences like these. It's much more important that your code is easy to use and maintain.
As for choosing between using a method and a @property, it's a matter of taste, but since properties disguise themselves as simple attributes, nothing elaborate should be going on. A method indicates that it might be an expensive operation, and developers using your code will consider caching the value rather than fetching it again and again.
So again, don't go on performance, always consider maintainability vs. performance. Computers get faster and faster as time goes by. The same does not stand for the readability of code.
In short, if you want to get a simple calculated value, @property is an excellent choice; if you want an elaborate value calculated, a method indicates that better.
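A minimal sketch of the caching idea mentioned above, so an elaborate value is only computed once (the names are hypothetical; note that @property requires a new-style class in Python 2):
class Foo(object):
    def __init__(self):
        self._bar = None

    @property
    def bar(self):
        if self._bar is None:
            self._bar = self._expensive_calculation()
        return self._bar

    def _expensive_calculation(self):
        return "baz"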
A:
I would go for the refactoring, but only for a matter of style - it seems clearer to me that "fancy calculations" might be ongoing with a method call, while I would expect a property to be almost a no-op, but this is a matter of taste.
Don't worry about the decorator's performance... if you think that it might be a problem, measure the performance in both cases and see how much it does add (my guess is that it will be totally negligible if compared to your fancy calculations).
A:
I agree with what most people here have said, I did much measurement when building hydrologic models in Python a couple years ago and found that the speed hit from using @property was completely overshadowed by calculation.
As an example, creating method-local variables (removing the "dot factor") in my calculations increased performance by almost an order of magnitude more than removing @property did (these results are averaged over a mid-scale application).
I'd look elsewhere for optimization, when it's necessary, and focus initially on getting good, maintainable code written. At this point, if @property is intuitive in your case, use it. If not, make a method.
A:
That's exactly what @property is meant for.
|
Python @property versus method performance - which one to use?
|
I have written some code that uses attributes of an object:
class Foo:
def __init__(self):
self.bar = "baz"
myFoo = Foo()
print (myFoo.bar)
Now I want to do some fancy calculation to return bar. I could use @property to make methods act as the attribute bar, or I could refactor my code to use myFoo.bar().
Should I go back and add parens to all my bar accesses or use @property? Assume my code base is small now but due to entropy it will increase.
|
[
"If it's logically a property/attribute of the object, I'd say keep it as a property. If it's likely to become parametrised, by which I mean you may want to invoke myFoo.bar(someArgs) then bite the bullet now and make it a method.\nUnder most circumstances, performance is unlikely to be an issue.\n",
"Wondering about performance is needless when it's so easy to measure it:\n$ python -mtimeit -s'class X(object):\n> @property\n> def y(self): return 23\n> x=X()' 'x.y'\n1000000 loops, best of 3: 0.685 usec per loop\n$ python -mtimeit -s'class X(object):\n\n def y(self): return 23\nx=X()' 'x.y()'\n1000000 loops, best of 3: 0.447 usec per loop\n$ \n\n(on my slow laptop -- if you wonder why the 2nd case doesn't have secondary shell prompts, it's because I built it from the first with an up-arrow in bash, and that repeats the linebreak structure but not the prompts!-).\nSo unless you're in a case where you know 200+ nanoseconds or so will matter (a tight inner loop you're trying to optimize to the utmost), you can afford to use the property approach; if you do some computations to get the value, the 200+ nanoseconds will of course become a smaller fraction of the total time taken.\nI do agree with other answers that if the computations become too heavy, or you may ever want parameters, etc, a method is preferable -- similarly, I'll add, if you ever need to stash the callable somewhere but only call it later, and other fancy functional programming tricks; but I wanted to make the performance point quantitatively, since timeit makes such measurements so easy!-)\n",
"In cases like these, I find it much better to choose the option that makes the most sense. You won't get any noticeable performance loss with small differences like these. It's much more important that your code is easy to use and maintain.\nAs for choosing between using a method and a @property, it's a matter of taste, but since properties disguise themselves as simple attributes, nothing elaborate should be going on. A method indicates that it might be an expensive operation, and developers using your code will consider caching the value rather than fetching it again and again.\nSo again, don't go on performance, always consider maintainability vs. performance. Computers get faster and faster as time goes by. The same does not stand for the readability of code.\nIn short, if you want to get a simple calculated value, @property is an excellent choice; if you want an elaborate value calculated, a method indicates that better.\n",
"I would go for the refactoring, but only for a matter of style - it seems clearer to me that \"fancy calculations\" might be ongoing with a method call, while I would expect a property to be almost a no-op, but this is a matter of taste.\nDon't worry about the decorator's performance... if you think that it might be a problem, measure the performance in both cases and see how much it does add (my guess is that it will be totally negligible if compared to your fancy calculations).\n",
"I agree with what most people here have said, I did much measurement when building hydrologic models in Python a couple years ago and found that the speed hit from using @property was completely overshadowed by calculation. \nAs an example, creating method local variables (removing the \"dot factor\" in my calculations increased performance by almost an order of magnitude more than removing @property (these results are averaged over a mid-scale application). \nI'd look elsewhere for optimization, when it's necessary, and focus initially on getting good, maintainable code written. At this point, if @property is intuitive in your case, use it. If not, make a method. \n",
"That's exactly what @property is meant for.\n"
] |
[
22,
21,
7,
3,
2,
1
] |
[] |
[] |
[
"performance",
"properties",
"python"
] |
stackoverflow_0001142133_performance_properties_python.txt
|
Q:
How do I convert (or scale) axis values and redefine the tick frequency in matplotlib?
I am displaying a jpg image (I rotate this by 90 degrees, if this is relevant) and of course
the axes display the pixel coordinates. I would like to convert the axis so that instead of displaying the pixel number, it will display my unit of choice - be it radians, degrees, or in my case an astronomical coordinate. I know the conversion from pixel to (eg) degree. Here is a snippet of what my code looks like currently:
import matplotlib.pyplot as plt
import Image
import matplotlib
thumb = Image.open(self.image)
thumb = thumb.rotate(90)
dpi = plt.rcParams['figure.dpi']
figsize = thumb.size[0]/dpi, thumb.size[1]/dpi
fig = plt.figure(figsize=figsize)
plt.imshow(thumb, origin='lower',aspect='equal')
plt.show()
...so following on from this, can I take each value that matplotlib would print on the axis, and change/replace it with a string to output instead? I would want to do this for a specific coordinate format - eg, rather than an angle of 10.44 (degrees), I would like it to read 10 26' 24'' (ie, degrees, arcmins, arcsecs)
Finally on this theme, I'd want control over the tick frequency, on the plot. Matplotlib might print the axis value every 50 pixels, but I'd really want it every (for example) degree.
It sounds like I would like to define some kind of array with the pixel values and their converted values (degrees, etc.) that I want displayed, having control over the sampling frequency across the xmin/xmax range.
Are there any matplotlib experts on Stack Overflow? If so, thanks very much in advance for your help! To make this a more learning experience, I'd really appreciate being prodded in the direction of tutorials etc on this kind of matplotlib problem. I've found myself getting very confused with axes, axis, figures, artists etc!
Cheers,
Dave
A:
It looks like you're dealing with the matplotlib.pyplot interface, which means that you'll be able to bypass most of the dealing with artists, axes, and the like. You can control the values and labels of the tick marks by using the matplotlib.pyplot.xticks command, as follows:
tick_locs = [list of locations where you want your tick marks placed]
tick_lbls = [list of corresponding labels for each of the tick marks]
plt.xticks(tick_locs, tick_lbls)
For your particular example, you'll have to compute what the tick marks are relative to the units (i.e. pixels) of your original plot (since you're using imshow) - you said you know how to do this, though.
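For instance, a minimal sketch, assuming a linear mapping of deg_per_px degrees per pixel from a reference coordinate deg0 (both hypothetical calibration values), reusing thumb and plt from the question's snippet:
import numpy as np

deg0, deg_per_px = 10.0, 0.01                  # hypothetical calibration
width = thumb.size[0]                          # image width in pixels

deg_ticks = np.arange(np.ceil(deg0), deg0 + width * deg_per_px)  # one tick per degree
tick_locs = (deg_ticks - deg0) / deg_per_px    # back to pixel coordinates

def dms(angle):
    # format a decimal angle as degrees, arcminutes, arcseconds
    d = int(angle)
    m = int((angle - d) * 60)
    s = (angle - d - m / 60.0) * 3600
    return "%d %d' %.0f''" % (d, m, s)

plt.xticks(tick_locs, [dms(a) for a in deg_ticks])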
I haven't dealt with images much, but you may be able to use a different plotting method (e.g. pcolor) that allows you to supply x and y information. That may give you a few more options for specifying the units of your image.
For tutorials, you would do well to look through the matplotlib gallery - find something you like, and read the code that produced it. One of the guys in our office recently bought a book on Python visualization - that may be worthwhile looking at.
The way that I generally think of all the various pieces is as follows:
A Figure is a container for all the Axes
An Axes is the space where what you draw (i.e. your plot) actually shows up
An Axis is the actual x and y axes
Artists? That's too deep in the interface for me: I've never had to worry about those yet, even though I rarely use the pyplot module in production plots.
|
How do I convert (or scale) axis values and redefine the tick frequency in matplotlib?
|
I am displaying a jpg image (I rotate this by 90 degrees, if this is relevant) and of course
the axes display the pixel coordinates. I would like to convert the axis so that instead of displaying the pixel number, it will display my unit of choice - be it radians, degrees, or in my case an astronomical coordinate. I know the conversion from pixel to (eg) degree. Here is a snippet of what my code looks like currently:
import matplotlib.pyplot as plt
import Image
import matplotlib
thumb = Image.open(self.image)
thumb = thumb.rotate(90)
dpi = plt.rcParams['figure.dpi']
figsize = thumb.size[0]/dpi, thumb.size[1]/dpi
fig = plt.figure(figsize=figsize)
plt.imshow(thumb, origin='lower',aspect='equal')
plt.show()
...so following on from this, can I take each value that matplotlib would print on the axis, and change/replace it with a string to output instead? I would want to do this for a specific coordinate format - eg, rather than an angle of 10.44 (degrees), I would like it to read 10 26' 24'' (ie, degrees, arcmins, arcsecs)
Finally on this theme, I'd want control over the tick frequency, on the plot. Matplotlib might print the axis value every 50 pixels, but I'd really want it every (for example) degree.
It sounds like I would like to define some kind of array with the pixel values and their converted values (degrees, etc.) that I want displayed, having control over the sampling frequency across the xmin/xmax range.
Are there any matplotlib experts on Stack Overflow? If so, thanks very much in advance for your help! To make this a more learning experience, I'd really appreciate being prodded in the direction of tutorials etc on this kind of matplotlib problem. I've found myself getting very confused with axes, axis, figures, artists etc!
Cheers,
Dave
|
[
"It looks like you're dealing with the matplotlib.pyplot interface, which means that you'll be able to bypass most of the dealing with artists, axes, and the like. You can control the values and labels of the tick marks by using the matplotlib.pyplot.xticks command, as follows:\ntick_locs = [list of locations where you want your tick marks placed]\ntick_lbls = [list of corresponding labels for each of the tick marks]\nplt.xticks(tick_locs, tick_lbls)\n\nFor your particular example, you'll have to compute what the tick marks are relative to the units (i.e. pixels) of your original plot (since you're using imshow) - you said you know how to do this, though.\nI haven't dealt with images much, but you may be able to use a different plotting method (e.g. pcolor) that allows you to supply x and y information. That may give you a few more options for specifying the units of your image.\nFor tutorials, you would do well to look through the matplotlib gallery - find something you like, and read the code that produced it. One of the guys in our office recently bought a book on Python visualization - that may be worthwhile looking at.\nThe way that I generally think of all the various pieces is as follows:\n\nA Figure is a container for all the Axes\nAn Axes is the space where what you draw (i.e. your plot) actually shows up\nAn Axis is the actual x and y axes\nArtists? That's too deep in the interface for me: I've never had to worry about those yet, even though I rarely use the pyplot module in production plots.\n\n"
] |
[
38
] |
[] |
[] |
[
"matplotlib",
"python"
] |
stackoverflow_0001143848_matplotlib_python.txt
|
Q:
How to add a button (Add-in) to Outlook using Python
I'm looking for a way to build an Add-in for Outlook with Python that adds a button to the toolbar with some behavior (the specifics don't matter).
I've searched around and haven't found anything. The only things I've found deal with the backend, not the GUI.
thanks!
A:
You could study the source for the SpamBayes outlook addin:
http://spambayes.svn.sourceforge.net/viewvc/spambayes/trunk/spambayes/Outlook2000/addin.py?revision=3243&view=markup
which used "Spam" and "Not Spam" buttons. (Search for _AddControl function.)
General info on the addin here:
http://spambayes.sourceforge.net/windows.html
|
How to add a button (Add-in) to Outlook using Python
|
I'm looking for a way to build an Add-in for Outlook with Python that adds a button to the toolbar with some behavior (the specifics don't matter).
I've searched around and haven't found anything. The only things I've found deal with the backend, not the GUI.
thanks!
|
[
"You could study the source for the SpamBayes outlook addin:\n\nhttp://spambayes.svn.sourceforge.net/viewvc/spambayes/trunk/spambayes/Outlook2000/addin.py?revision=3243&view=markup\n\nwhich used \"Spam\" and \"Not Spam\" buttons. (Search for _AddControl function.)\nGeneral info on the addin here:\n\nhttp://spambayes.sourceforge.net/windows.html\n\n"
] |
[
1
] |
[] |
[] |
[
"outlook",
"python",
"winapi"
] |
stackoverflow_0001143798_outlook_python_winapi.txt
|
Q:
Hacking JavaScript Array Into JSON With Python
I am fetching a .js file from a remote site that contains data I want to process as JSON using the simplejson library on my Google App Engine site. The .js file looks like this:
var txns = [
{ apples: '100', oranges: '20', type: 'SELL'},
{ apples: '200', oranges: '10', type: 'BUY'}]
I have no control over the format of this file. What I did at first just to hack through it was to chop the "var txns = " bit off of the string and then do a series of .replace(old, new, [count]) on the string until it looked like standard JSON:
cleanJSON = malformedJSON.replace("'", '"').replace('apples:', '"apples":').replace('oranges:', '"oranges":').replace('type:', '"type":').replace('{', '{"transaction":{').replace('}', '}}')
So that it now looks like:
[{ "transaction" : { "apples": "100", "oranges": "20", "type": "SELL"} },
{ "transaction" : { "apples": "200", "oranges": "10", "type": "BUY"} }]
How would you tackle this formatting issue? Is there a known way (library, script) to format a JavaScript array into JSON notation?
A:
It's not too difficult to write your own little parser for that using pyparsing.
import json
from pyparsing import *
data = """var txns = [
{ apples: '100', oranges: '20', type: 'SELL'},
{ apples: '200', oranges: '10', type: 'BUY'}]"""
def js_grammar():
key = Word(alphas).setResultsName("key")
value = QuotedString("'").setResultsName("value")
pair = Group(key + Literal(":").suppress() + value)
object_ = nestedExpr("{", "}", delimitedList(pair, ","))
array = nestedExpr("[", "]", delimitedList(object_, ","))
return array + StringEnd()
JS_GRAMMAR = js_grammar()
def parse(js):
return JS_GRAMMAR.parseString(js[len("var txns = "):])[0]
def to_dict(object_):
return dict((p.key, p.value) for p in object_)
result = [
{"transaction": to_dict(object_)}
for object_ in parse(data)]
print json.dumps(result)
This is going to print
[{"transaction": {"type": "SELL", "apples": "100", "oranges": "20"}},
{"transaction": {"type": "BUY", "apples": "200", "oranges": "10"}}]
You can also add the assignment to the grammar itself. Given that there are already off-the-shelf parsers for it, though, you'd be better off using those.
A:
I would use the YAML parser, as it's better at things like this. It comes with GAE as well, since it's used for the config files. JSON is a subset of YAML.
All you have to do is get rid of "var txns ="; then YAML should do the rest.
import yaml
string = """[{ apples: '100', oranges: '20', type: 'SELL'},
{ apples: '200', oranges: '10', type: 'BUY'}]"""
list = yaml.load(string)
print list
This gives you.
[{'type': 'SELL', 'apples': '100', 'oranges': '20'},
{'type': 'BUY', 'apples': '200', 'oranges': '10'}]
Once loaded you can always dump it back as a json.
A:
If you know that's what it's always going to look like, you could do a regex to find unquoted space-delimited text that ends with a colon and surround it with quotes.
I'm always worried about unexpected input with a regex like that, though. How do you know the remote source won't change what you get?
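Still, a minimal sketch of that regex approach (assuming the "var txns = " prefix has already been chopped off, as in the question; it will break if a value ever contains a colon or an apostrophe):
import re
import json

raw = """[{ apples: '100', oranges: '20', type: 'SELL'},
{ apples: '200', oranges: '10', type: 'BUY'}]"""

quoted = re.sub(r"(\w+)\s*:", r'"\1":', raw)   # quote the bare keys
cleaned = quoted.replace("'", '"')             # JSON wants double quotes
print json.loads(cleaned)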
A:
You could create an intermediate page containing a Javascript script that just loads the remote one and dumps it to JSON. Then Python can make requests to your intermediate page and get out nice JSON.
|
Hacking JavaScript Array Into JSON With Python
|
I am fetching a .js file from a remote site that contains data I want to process as JSON using the simplejson library on my Google App Engine site. The .js file looks like this:
var txns = [
{ apples: '100', oranges: '20', type: 'SELL'},
{ apples: '200', oranges: '10', type: 'BUY'}]
I have no control over the format of this file. What I did at first just to hack through it was to chop the "var txns = " bit off of the string and then do a series of .replace(old, new, [count]) on the string until it looked like standard JSON:
cleanJSON = malformedJSON.replace("'", '"').replace('apples:', '"apples":').replace('oranges:', '"oranges":').replace('type:', '"type":').replace('{', '{"transaction":{').replace('}', '}}')
So that it now looks like:
[{ "transaction" : { "apples": "100", "oranges": "20", "type": "SELL"} },
{ "transaction" : { "apples": "200", "oranges": "10", "type": "BUY"} }]
How would you tackle this formatting issue? Is there a known way (library, script) to format a JavaScript array into JSON notation?
|
[
"It's not too difficult to write your own little parsor for that using PyParsing.\nimport json\nfrom pyparsing import *\n\ndata = \"\"\"var txns = [\n { apples: '100', oranges: '20', type: 'SELL'}, \n { apples: '200', oranges: '10', type: 'BUY'}]\"\"\"\n\n\ndef js_grammar():\n key = Word(alphas).setResultsName(\"key\")\n value = QuotedString(\"'\").setResultsName(\"value\")\n pair = Group(key + Literal(\":\").suppress() + value)\n object_ = nestedExpr(\"{\", \"}\", delimitedList(pair, \",\"))\n array = nestedExpr(\"[\", \"]\", delimitedList(object_, \",\"))\n return array + StringEnd()\n\nJS_GRAMMAR = js_grammar()\n\ndef parse(js):\n return JS_GRAMMAR.parseString(js[len(\"var txns = \"):])[0]\n\ndef to_dict(object_):\n return dict((p.key, p.value) for p in object_)\n\nresult = [\n {\"transaction\": to_dict(object_)}\n for object_ in parse(data)]\nprint json.dumps(result)\n\nThis is going to print\n[{\"transaction\": {\"type\": \"SELL\", \"apples\": \"100\", \"oranges\": \"20\"}},\n {\"transaction\": {\"type\": \"BUY\", \"apples\": \"200\", \"oranges\": \"10\"}}]\n\nYou can also add the assignment to the grammar itself. Given there are already off-the-shelf parsers for it, you should better use those. \n",
"I would use the yaml parser as its better at most things like this. It comes with GAE as well as it is used for the config files. Json is a subset of yaml.\nAll you have to do is get rid of \"var txns =\" then yaml should do the rest. \nimport yaml\n\nstring = \"\"\"[{ apples: '100', oranges: '20', type: 'SELL'}, \n { apples: '200', oranges: '10', type: 'BUY'}]\"\"\"\n\nlist = yaml.load(string)\n\nprint list\n\nThis gives you.\n[{'type': 'SELL', 'apples': '100', 'oranges': '20'},\n {'type': 'BUY', 'apples': '200', 'oranges': '10'}]\n\nOnce loaded you can always dump it back as a json.\n",
"If you know that's what it's always going to look like, you could do a regex to find unquoted space-delimited text that ends with a colon and surround it with quotes.\nI'm always worried about unexpected input with a regex like that, though. How do you know the remote source won't change what you get?\n",
"You could create an intermediate page containing a Javascript script that just loads the remote one and dumps it to JSON. Then Python can make requests to your intermediate page and get out nice JSON.\n"
] |
[
5,
4,
0,
0
] |
[] |
[] |
[
"javascript",
"json",
"python"
] |
stackoverflow_0001144400_javascript_json_python.txt
|
Q:
How to ensure user submit only english text
I am building a project involving natural language processing. Since the NLP module currently only deals with English text, I have to make sure the user-submitted content (not long, only several words) is in English. Are there established ways to achieve this? A Python or JavaScript approach is preferred.
A:
If the content is long enough I would suggest some frequency analysis on the letters.
But for a few words I think your best bet is to compare them to an English dictionary and accept the input if half of them match.
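A minimal sketch of that dictionary check (english_words.txt is a hypothetical one-word-per-line list, e.g. downloaded from a wordlist site):
with open('english_words.txt') as f:
    english = set(line.strip().lower() for line in f)

def looks_english(text, threshold=0.5):
    words = [w.strip(".,!?;:'\"").lower() for w in text.split()]
    # accept if at least `threshold` of the words are known English words
    return bool(words) and sum(w in english for w in words) >= threshold * len(words)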
A:
Check the Language Recognition Chart
A:
I think the most effective way would be to ask the users to submit english text only :)
You can show a language selection drop-down over your text area with English/ Other as the options. When user selects "Other", disable the text area with a message that only English language is supported [at the moment].
A:
Google has a javascript API that has an implementation of language detection. I've only play tested with it, never used it in production.
http://code.google.com/apis/ajaxlanguage/documentation/#Detect
A:
Try n-gram based statistical language recognition. This is a link to a demo of an algorithm using this technique, there is also a link to a paper describing the algorithm there. Try the demo, it performs quite well even on very short texts (3-4 words).
A:
You are already doing NLP; if your module can't tell what language the text is in, then either the module doesn't work or the input was not in the correct language.
A:
Try:
http://wordlist.sourceforge.net/
For a list of English words.
You will need to be careful of names, e.g. "Canberra" or "Bill Clinton". These won't appear in the word list. I suggest just checking whether the first letter is capitalized as a first attempt.
A:
You could break the phrase up into words and check a dictionary (there are some that you can download, this may be of interest), but that would require that the dictionary you used was good enough.
It would also fall over for proper nouns (my name isn't in the dictionary for example).
A:
The Dictionary Switcher Firefox extensions has an option to detect the right dictionary as I type.
I guess it checks words against the installed dictionaries, and selects the one giving the less errors...
You can't expect all words of the text to be in the dictionary: abbreviations, proper nouns, typos... Besides, some words are common to several languages: a French rock group even made the titles of their discs have a (different) meaning in both French and English. So it is a statistical thing: if more than x% of the words are found in a good English dictionary, chances are the user is typing in this language (even if there are mistakes, like probably in this answer, since I am not a native English speaker).
A:
Maybe "Ensuring that the user submits only English text [PHP]" article will help you. The code is written in PHP, but is small enough to be easily rewritten.
|
How to ensure user submit only english text
|
I am building a project involving natural language processing. Since the NLP module currently only deals with English text, I have to make sure the user-submitted content (not long, only several words) is in English. Are there established ways to achieve this? A Python or JavaScript approach is preferred.
|
[
"If the content is long enough I would suggest some frequency analysis on the letters. \nBut for a few words I think your best bet is to compare them to an English dictionary and accept the input if half of them match.\n",
"Check the Language Recognition Chart \n",
"I think the most effective way would be to ask the users to submit english text only :)\nYou can show a language selection drop-down over your text area with English/ Other as the options. When user selects \"Other\", disable the text area with a message that only English language is supported [at the moment].\n",
"Google has a javascript API that has an implementation of language detection. I've only play tested with it, never used it in production.\nhttp://code.google.com/apis/ajaxlanguage/documentation/#Detect\n",
"Try n-gram based statistical language recognition. This is a link to a demo of an algorithm using this technique, there is also a link to a paper describing the algorithm there. Try the demo, it performs quite well even on very short texts (3-4 words).\n",
"You are already doing NLP, if your module doesn't understand what language the text was then either the module doesn't work or the input was not in the correct language.\n",
"Try:\nhttp://wordlist.sourceforge.net/\nFor a list of English words.\nYou will need to be careful of names, e.g. \"Canberra\" or \"Bill Clinton\". These won't appear in the word list. I suggest just checking whether the first letter is capitalized as a first attempt.\n",
"You could break the phrase up into words and check a dictionary (there are some that you can download, this may be of interest), but that would require that the dictionary you used was good enough.\nIt would also fall over for proper nouns (my name isn't in the dictionary for example).\n",
"The Dictionary Switcher Firefox extensions has an option to detect the right dictionary as I type.\nI guess it checks words against the installed dictionaries, and selects the one giving the less errors...\nYou can't expect all words of the text to be in the dictionary: abbreviations, proper nouns, typos... Beside, some words are common to several languages: a French rock group even made the titles of their disks to have a (different) meaning both in French and in English. So it is a statistical thing: if more than x% of words are found in a good English dictionary, chances are the user types in this language (even if there are mistakes, like probably in this answer, since I am not native English).\n",
"Maybe \"Ensuring that the user submits only English text [PHP]\" article will help you. The code is written in PHP, but is small enough to be easily rewritten.\n"
] |
[
7,
6,
5,
5,
3,
3,
1,
0,
0,
0
] |
[] |
[] |
[
"javascript",
"nlp",
"python"
] |
stackoverflow_0000196924_javascript_nlp_python.txt
|
Q:
Python value unpacking error
I'm building a per-user file browsing/uploading application using Django and when I run this function
def walkdeep(request, path):
path, dirs, files = walktoo('/home/damon/walktemp/%s' % path)
return render_to_response('walk.html', {
'path' : path[0],
'dirs' : path[1],
'files' : path[2],
}, context_instance=RequestContext(request))
def walktoo(dir):
for path, dirs, files in os.walk(dir):
yield path, dirs, files
print path, dirs, files
I get this error:
need more than 1 value to unpack
Also, i know this is a silly way to do this, any advice would be appreciated.
edit:
This was actually very silly on my part; I completely forgot about os.listdir(dir), which is a much more reasonable function for my purposes. If you use the selected answer, it clears up the issue above, but not with the results I wanted.
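For reference, a minimal sketch of that os.listdir approach (the base path is hypothetical):
import os

base = '/home/damon/walktemp/somedir'   # hypothetical
names = os.listdir(base)
dirs = [n for n in names if os.path.isdir(os.path.join(base, n))]
files = [n for n in names if not os.path.isdir(os.path.join(base, n))]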
A:
path, dirs, files = walktoo('/home/damon/walktemp/%s' % path)
In this line, you're expecting walktoo to return a tuple of three values, which are then unpacked into path, dirs, and files. However, walktoo is a generator function: calling walktoo() returns a single value, the generator. You have to call next() on the generator (or do so implicitly through some sort of iteration) to get what you actually want, namely the 3-tuple that it yields.
I'm not entirely clear what you want to do -- your walkdeep() function is written like it only wants to use the first value returned by walktoo(). Did you mean to do something like this?
for path, dirs, files in walktoo(...):
# do something
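Or, if you really only want the first (path, dirs, files) triple, a minimal sketch (using the next() builtin from Python 2.6; on older versions call .next() on the generator instead):
path, dirs, files = next(walktoo('/home/damon/walktemp/%s' % path))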
A:
Based on your comment to Adam Rosenfield, this is another approach to get one layer of os.walk(dir).
path, dirs, files = [_ for _ in os.walk('/home/damon/walktemp/%s' % path)][0]
This is as an alternative to your walktoo(dir) funciton.
Also, make sure your second parameter to render_to_response uses the variables you created:
{'path' : path,
'dirs' : dirs,
'files' : files,}
path is a string, so by saying path[0] ... path[1] ... path[2] you're actually saying to use the first, second, and third character of the string.
|
Python value unpacking error
|
I'm building a per-user file browsing/uploading application using Django and when I run this function
def walkdeep(request, path):
path, dirs, files = walktoo('/home/damon/walktemp/%s' % path)
return render_to_response('walk.html', {
'path' : path[0],
'dirs' : path[1],
'files' : path[2],
}, context_instance=RequestContext(request))
def walktoo(dir):
for path, dirs, files in os.walk(dir):
yield path, dirs, files
print path, dirs, files
I get this error:
need more than 1 value to unpack
Also, i know this is a silly way to do this, any advice would be appreciated.
edit:
This was actually very silly on my part; I completely forgot about os.listdir(dir), which is a much more reasonable function for my purposes. If you use the selected answer, it clears up the issue above, but not with the results I wanted.
|
[
"path, dirs, files = walktoo('/home/damon/walktemp/%s' % path)\n\nIn this line, you're expecting walktoo to return a tuple of three values, which are then to be unpacked into path, dirs, and files. However, your walktoo function is a generator object: calling walktoo() yields a single value, the generator. You have to call next() on the generator (or call it implicitly by doing some sort of iteration on it) to get what you actually want, namely the 3-tuple that it yields.\nI'm not entirely clear what you want to do -- your walkdeep() function is written like it only wants to use the first value returned by walktoo(). Did you mean to do something like this?\nfor path, dirs, files in walktoo(...):\n # do something\n\n",
"Based on your comment to Adam Rosenfield, this is another approach to get one layer of os.walk(dir).\npath, dirs, files = [_ for _ in os.walk('/home/damon/walktemp/%s' % path)][0]\n\nThis is as an alternative to your walktoo(dir) funciton.\nAlso, make sure your second parameter to render_to_response uses the variables you created:\n{'path' : path,\n 'dirs' : dirs,\n 'files' : files,}\n\npath is a string, so by saying path[0] ... path[1] ... path[2] you're actually saying to use the first, second, and third character of the string.\n"
] |
[
7,
1
] |
[] |
[] |
[
"django",
"python"
] |
stackoverflow_0001144953_django_python.txt
|
Q:
Looking for values in nested tuple
Say I have:
t = (
('dog', 'Dog'),
('cat', 'Cat'),
('fish', 'Fish'),
)
And I need to check if a value is in the first bit of the nested tuple (ie. the lowercase bits). How can I do this? The capitalised values do not matter really, I want to search for a string in only the lowercase values.
if 'fish' in t:
print "Fish in t."
Doesn't work.
Is there a good way of doing this without doing a for loop with if statements?
A:
The elements of a tuple can be extracted by specifying an index: ('a', 'b')[0] == 'a'. You can use a list comprehension to iterate over all elements of some iterable. A tuple is also iterable. Lastly, any() tells whether any element in a given iterable evaluates to True. Putting all this together:
>>> t = (
... ('dog', 'Dog'),
... ('cat', 'Cat'),
... ('fish', 'Fish'),
... )
>>> def contains(w, t):
... return any(w == e[0] for e in t)
...
>>> contains('fish', t)
True
>>> contains('dish', t)
False
A:
Try:
any('fish' == tup[0] for tup in t)
EDIT: Stephan is right; fixed 'fish' == tup[0]. Also see his more complete answer.
A:
When you have an iterable of key-value pairs such as:
t = (
('dog', 'Dog'),
('cat', 'Cat'),
('fish', 'Fish'),
)
You can "cast" it to a dictionary using the dict() constructor, then use the in keyword.
if 'fish' in dict(t):
print 'fish is in t'
This is very similar to the above answer.
A:
You could do something like this:
if 'fish' in (item[0] for item in t):
print "Fish in t."
or this:
if any(item[0] == 'fish' for item in t):
print "Fish in t."
If you don't care about the order but want to keep the association between 'dog' and 'Dog', you may want to use a dictionary instead:
t = {
'dog': 'Dog',
'cat': 'Cat',
'fish': 'Fish',
}
if 'fish' in t:
print "Fish in t."
|
Looking for values in nested tuple
|
Say I have:
t = (
('dog', 'Dog'),
('cat', 'Cat'),
('fish', 'Fish'),
)
And I need to check if a value is in the first bit of the nested tuple (ie. the lowercase bits). How can I do this? The capitalised values do not matter really, I want to search for a string in only the lowercase values.
if 'fish' in t:
print "Fish in t."
Doesn't work.
Is there a good way of doing this without doing a for loop with if statements?
|
[
"The elements of a tuple can be extracted by specifying an index: ('a', 'b')[0] == 'a'. You can use a list comprehension to iterate over all elements of some iterable. A tuple is also iterable. Lastly, any() tells whether any element in a given iterable evaluates to True. Putting all this together:\n>>> t = (\n... ('dog', 'Dog'),\n... ('cat', 'Cat'),\n... ('fish', 'Fish'),\n... )\n>>> def contains(w, t):\n... return any(w == e[0] for e in t)\n... \n>>> contains('fish', t)\nTrue\n>>> contains('dish', t)\nFalse\n\n",
"Try:\nany('fish' == tup[0] for tup in t)\n\nEDIT: Stephan is right; fixed 'fish' == tup[0]. Also see his more complete answer.\n",
"When you have an iterable of key-value pairs such as:\nt = (\n ('dog', 'Dog'),\n ('cat', 'Cat'),\n ('fish', 'Fish'),\n)\n\nYou can \"cast\" it to a dictionary using the dict() constructor, then use the in keyword.\nif 'fish' in dict(t):\n print 'fish is in t'\n\nThis is very similar to the above answer.\n",
"You could do something like this:\nif 'fish' in (item[0] for item in t):\n print \"Fish in t.\"\n\nor this:\nif any(item[0] == 'fish' for item in t):\n print \"Fish in t.\"\n\nIf you don't care about the order but want to keep the association between 'dog' and 'Dog', you may want to use a dictionary instead:\nt = {\n 'dog': 'Dog',\n 'cat': 'Cat',\n 'fish': 'Fish',\n}\n\nif 'fish' in t:\n print \"Fish in t.\"\n\n"
] |
[
10,
5,
3,
2
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001144178_python.txt
|
Q:
Using URLS that accept slashes as part of the parameter in Django
Is there a way in Django to accept 'n' parameters which are delimited by a '/' (forward slash)?
I was thinking this may work, but it does not. Django still recognizes forward slashes as delimiters.
(r'^(?P<path>[-\w]+/)$', 'some.view', {}),
A:
Add the right url to your urlpatterns:
# ...
("^foo/(.*)$", "foo"), # or whatever
# ...
And process it in your view, like AlbertoPL said:
fields = paramPassedInAccordingToThatUrl.split('/')
A:
Certainly, Django can accept any URL which can be described by a regular expression - including one which has a prefix followed by a '/' followed by a variable number of segments separated by '/'. The exact regular expression will depend on what you want to accept - but an example in Django is given by /admin URLs which parse the suffix of the URL in the view.
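For instance, a minimal sketch against the question's own example: the original pattern fails because the [-\w]+ character class doesn't match /, so the group has to allow slashes explicitly ('some.view' is the question's placeholder):
(r'^(?P<path>[-\w/]+)$', 'some.view', {}),
The view then receives e.g. path = 'a/b/c' and can split it on '/' itself, as the other answer shows.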
|
Using URLS that accept slashes as part of the parameter in Django
|
Is there a way in Django to accept 'n' parameters which are delimited by a '/' (forward slash)?
I was thinking this may work, but it does not. Django still recognizes forward slashes as delimiters.
(r'^(?P<path>[-\w]+/)$', 'some.view', {}),
|
[
"Add the right url to your urlpatterns:\n# ...\n(\"^foo/(.*)$\", \"foo\"), # or whatever\n# ...\n\nAnd process it in your view, like AlbertoPL said:\nfields = paramPassedInAccordingToThatUrl.split('/')\n\n",
"Certainly, Django can accept any URL which can be described by a regular expression - including one which has a prefix followed by a '/' followed by a variable number of segments separated by '/'. The exact regular expression will depend on what you want to accept - but an example in Django is given by /admin URLs which parse the suffix of the URL in the view.\n"
] |
[
3,
1
] |
[] |
[] |
[
"django",
"python"
] |
stackoverflow_0001145334_django_python.txt
|
Q:
How to make easy_install expand a package into directories rather than a single egg file?
How exactly do I configure my setup.py file so that when someone runs easy_install the package gets expanded into \site-packages\ as a directory, rather than remaining inside an egg.
The issue I'm encountering is that one of the django apps I've created won't auto-detect if it resides inside an egg.
EDIT: For example, if I type easy_install photologue it simply installs a \photologue\ directory into site-packages. This is the behaviour I'd like, but it seems that in order to make that happen, there needs to be at least one directory/module within the directory being packaged.
A:
You add zip_safe = False as an option to setup().
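A minimal sketch of such a setup.py (the project name is hypothetical):
from setuptools import setup, find_packages

setup(
    name='myapp',              # hypothetical
    version='0.1',
    packages=find_packages(),
    zip_safe=False,            # easy_install unpacks instead of leaving a zipped egg
)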
I don't think it has to do with directories. Setuptools will happily eggify packages with loads of directories in them.
Then of course it's another problem that this part of Django doesn't find the package even though it's zipped. It should.
|
How to make easy_install expand a package into directories rather than a single egg file?
|
How exactly do I configure my setup.py file so that when someone runs easy_install the package gets expanded into \site-packages\ as a directory, rather than remaining inside an egg.
The issue I'm encountering is that one of the django apps I've created won't auto-detect if it resides inside an egg.
EDIT: For example, if I type easy_install photologue it simply installs a \photologue\ directory into site-packages. This is the behaviour I'd like, but it seems that in order to make that happen, there needs to be at least one directory/module within the directory being packaged.
|
[
"You add zip_safe = False as an option to setup().\nI don't think it has to do with directories. Setuptools will happily eggify packages with loads of directories in it.\nThen of course it's another problem that this part of Django doesn't find the package even though it's zipped. It should.\n"
] |
[
5
] |
[] |
[] |
[
"django",
"easy_install",
"egg",
"python",
"setuptools"
] |
stackoverflow_0001145524_django_easy_install_egg_python_setuptools.txt
|
Q:
py2exe: Compiled Python Windows Application won't run because of DLL
I will confess I'm very new to Python and I don't really know what I'm doing yet. Recently I created a very small Windows application using Python 2.6.2 and wxPython 2.8. And it works great; I'm quite pleased with how well it works normally. By normally I mean when I invoke it directly through the Python interpreter, like this:
> python myapp.py
However, I wanted to go a step further and actually compile this into a standalone executable file. So I followed these instructions from the wxPython wiki which utilize py2exe. At first it gave me errors in the command line, saying MSVCR90.dll was missing. Then I copied MSVCR90.dll to my Python\DLLs folder. That looked at first like it fixed it, since it successfully did what it needed to do. It did finish with a quick warning that there were some DLL files the program depends on and I may or may not need to distribute them.
So I navigated into the dist folder that py2exe had created and tried running my executable. But trying to open it only popped up an error dialog that said:
This application failed to start because MSVCR90.dll was not found.
Re-installing the application may fix this problem.
So I went ahead and copied MSVCR90.dll again into this dist folder. But that didn't do the trick. Then I copied it into the WINDOWS\system32 directory. That didn't do it either. What do I need to do to get this thing to work?
A:
You can't just copy msvcr*.dll - they need to be set up using the rules for side-by-side assemblies. You can do this by installing the redistributable package as Sam points out, or you can put them alongside your executables as long as you obey the rules.
See the section "Deploying Visual C++ library DLLs as private assemblies" here: How to Deploy using XCopy for details, but basically your application looks like this:
c:\My App\MyApp.exe
c:\My App\Microsoft.VC90.CRT\Microsoft.VC90.CRT.manifest
c:\My App\Microsoft.VC90.CRT\msvcr90.dll
One benefit of this is that non-admin users can use your app (I believe you need to be an admin to install the runtime via the redistributable installer). And there's no need for any installer - you can just copy the files onto a PC and it all works.
A:
I believe installing Microsoft C++ Redistributable Package will install the DLL you need correctly.
|
py2exe: Compiled Python Windows Application won't run because of DLL
|
I will confess I'm very new to Python and I don't really know what I'm doing yet. Recently I created a very small Windows application using Python 2.6.2 and wxPython 2.8. And it works great; I'm quite pleased with how well it works normally. By normally I mean when I invoke it directly through the Python interpreter, like this:
> python myapp.py
However, I wanted to go a step further and actually compile this into a standalone executable file. So I followed these instructions from the wxPython wiki which utilize py2exe. At first it gave me errors in the command line, saying MSVCR90.dll was missing. Then I copied MSVCR90.dll to my Python\DLLs folder. That looked at first like it fixed it, since it successfully did what it needed to do. It did finish with a quick warning that there were some DLL files the program depends on and I may or may not need to distribute them.
So I navigated into the dist folder that py2exe had created and tried running my executable. But trying to open it only popped up an error dialog that said:
This application failed to start because MSVCR90.dll was not found.
Re-installing the application may fix this problem.
So I went ahead and copied MSVCR90.dll again into this dist folder. But that didn't do the trick. Then I copied it into the WINDOWS\system32 directory. That didn't do it either. What do I need to do to get this thing to work?
|
[
"You can't just copy msvcr*.dll - they need to be set up using the rules for side-by-side assemblies. You can do this by installing the redistributable package as Sam points out, or you can put them alongside your executables as long as you obey the rules.\nSee the section \"Deploying Visual C++ library DLLs as private assemblies\" here: How to Deploy using XCopy for details, but basically your application looks like this:\nc:\\My App\\MyApp.exe\nc:\\My App\\Microsoft.VC90.CRT\\Microsoft.VC90.CRT.manifest\nc:\\My App\\Microsoft.VC90.CRT\\msvcr90.dll\n\nOne benefit of this is that non-admin users can use your app (I believe you need to be an admin to install the runtime via the redistributable installer). And there's no need for any installer - you can just copy the files onto a PC and it all works.\n",
"I believe installing Microsoft C++ Redistributable Package will install the DLL you need correctly.\n"
] |
[
8,
2
] |
[] |
[] |
[
"py2exe",
"python",
"wxpython"
] |
stackoverflow_0001145662_py2exe_python_wxpython.txt
|
Q:
Difference between "inspect" and "interactive" command line flags in Python
What is the difference between "inspect" and "interactive" flags?
The sys.flags function prints both of them.
How can they both have "-i" flag according to the documentation of sys.flags?
How can I set them separately? If I use "python -i", both of them will be set
to 1.
Related:
tell whether python is in -i mode
A:
According to pythonrun.c, the corresponding Py_InspectFlag and Py_InteractiveFlag are used as follows:
int Py_InspectFlag; /* Needed to determine whether to exit at SystemError */
/* snip */
static void
handle_system_exit(void)
{
PyObject *exception, *value, *tb;
int exitcode = 0;
if (Py_InspectFlag)
/* Don't exit if -i flag was given. This flag is set to 0
* when entering interactive mode for inspecting. */
return;
/* snip */
}
Python doesn't exit on SystemExit if "inspect" flag is true.
int Py_InteractiveFlag; /* Needed by Py_FdIsInteractive() below */
/* snip */
/*
* The file descriptor fd is considered ``interactive'' if either
* a) isatty(fd) is TRUE, or
* b) the -i flag was given, and the filename associated with
* the descriptor is NULL or "<stdin>" or "???".
*/
int
Py_FdIsInteractive(FILE *fp, const char *filename)
{
if (isatty((int)fileno(fp)))
return 1;
if (!Py_InteractiveFlag)
return 0;
return (filename == NULL) ||
(strcmp(filename, "<stdin>") == 0) ||
(strcmp(filename, "???") == 0);
}
If "interactive" flag is false and current input is not associated with a terminal then python doesn't bother entering "interactive" mode (unbuffering stdout, printing version, showing prompt, etc).
-i option turns on both flags. "inspect" flag is also on if PYTHONINSPECT environment variable is not empty (see main.c).
Basically it means that if you set the PYTHONINSPECT variable and run your module, then Python doesn't exit on SystemExit (e.g., at the end of the script) and shows you an interactive prompt instead, allowing you to inspect your module's state (thus the name "inspect" for the flag).
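For example (a hypothetical shell session):
$ echo 'x = 42' > script.py
$ PYTHONINSPECT=1 python script.py
>>> x
42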
A:
man python says about the -i flag:
When a script is passed as first
argument or the -c option is used,
enter interactive mode after executing
the script or the command. It does
not read the $PYTHONSTARTUP file.
This can be useful to inspect global
variables or a stack trace when a
script raises an exception.
Hence -i allows inspection of a script in interactive mode. -i implies both of these things. You can be interactive without inspecting (namely by just calling python, without arguments), but not vice versa.
|
Difference between "inspect" and "interactive" command line flags in Python
|
What is the difference between "inspect" and "interactive" flags?
The sys.flags function prints both of them.
How can they both have "-i" flag according to the documentation of sys.flags?
How can I set them separately? If I use "python -i", both of them will be set
to 1.
Related:
tell whether python is in -i mode
|
[
"According to pythonrun.c corresponding Py_InspectFlag and Py_InteractiveFlag are used as follows:\nint Py_InspectFlag; /* Needed to determine whether to exit at SystemError */\n/* snip */\nstatic void\nhandle_system_exit(void)\n{\n PyObject *exception, *value, *tb;\n int exitcode = 0;\n\n if (Py_InspectFlag)\n /* Don't exit if -i flag was given. This flag is set to 0\n * when entering interactive mode for inspecting. */\n return;\n /* snip */\n}\n\nPython doesn't exit on SystemExit if \"inspect\" flag is true.\nint Py_InteractiveFlag; /* Needed by Py_FdIsInteractive() below */\n/* snip */\n/*\n * The file descriptor fd is considered ``interactive'' if either\n * a) isatty(fd) is TRUE, or\n * b) the -i flag was given, and the filename associated with\n * the descriptor is NULL or \"<stdin>\" or \"???\".\n */\nint\nPy_FdIsInteractive(FILE *fp, const char *filename)\n{\n if (isatty((int)fileno(fp)))\n return 1;\n if (!Py_InteractiveFlag)\n return 0;\n return (filename == NULL) ||\n (strcmp(filename, \"<stdin>\") == 0) ||\n (strcmp(filename, \"???\") == 0);\n}\n\nIf \"interactive\" flag is false and current input is not associated with a terminal then python doesn't bother entering \"interactive\" mode (unbuffering stdout, printing version, showing prompt, etc).\n-i option turns on both flags. \"inspect\" flag is also on if PYTHONINSPECT environment variable is not empty (see main.c).\nBasically it means if you set PYTHONINSPECT variable and run your module then python doesn't exit on SystemExit (e.g., at the end of the script) and shows you an interactive prompt instead of (allowing you to inspect your module state (thus \"inspect\" name for the flag)). \n",
"man python says about the -i flag:\n\nWhen a script is passed as first\n argument or the -c option is used,\n enter interactive mode after executing\n the script or the command. It does\n not read the $PYTHONSTARTUP file. \n This can be useful to inspect global\n variables or a stack trace when a\n script raises an exception.\n\nHence -i allows inspection of a script in interactive mode. -i implies both of these things. You can be interactive without inspecting (namely by just calling python, without arguments), but not vice versa.\n"
] |
[
9,
0
] |
[] |
[] |
[
"command_line",
"interpreter",
"python"
] |
stackoverflow_0001145428_command_line_interpreter_python.txt
|
Q:
Name this python/ruby language construct (using array values to satisfy function parameters)
What is this language construct called?
In Python I can say:
def a(b,c): return b+c
a(*[4,5])
and get 9. Likewise in Ruby:
def a(b,c) b+c end
a(*[4,5])
What is this called, when one passes a single array to a function which otherwise requires multiple arguments?
What is the name of the * operator?
What other languages support this cool feature?
A:
The Python docs call this Unpacking Argument Lists. It's a pretty handy feature. In Python, you can also use a double asterisk (**) to unpack a dictionary (hash) into keyword arguments. They also work in reverse. I can define a function like this:
def sum(*args):
result = 0
for a in args:
result += a
return result
sum(1,2)
sum(9,5,7,8)
sum(1.7,2.3,8.9,3.4)
To pack all arguments into an arbitrarily sized list.
A:
In ruby, it is often called "splat".
Also in ruby, you can use it to mean 'all of the other elements in the list'.
a, *rest = [1,2,3,4,5,6]
a # => 1
rest # => [2, 3, 4, 5, 6]
It can also appear on either side of the assignment operator:
a = d, *e
In this usage, it is a bit like scheme's cdr, although it needn't be all but the head of the list.
A:
The typical term for this is "applying a function to a list", or "apply" for short.
See http://en.wikipedia.org/wiki/Apply
It has been in LISP since pretty much its inception back in 1960 odd.
Glad python rediscovered it :-}
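In fact, Python 2 had a literal apply() builtin before the *args syntax superseded it (deprecated in 2.3 and removed in Python 3):
def a(b, c):
    return b + c

print apply(a, [4, 5])   # 9, same as a(*[4, 5])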
Apply is typically on a list or a representation of a list such
as an array. However, one can apply functions to arguments that
come from other places, such as structs. Our PARLANSE language
has fixed types (int, float, string, ...) and structures.
Oddly enough, a function argument list looks a lot like a structure
definintion, and in PARLANSE, it is a structure definition,
and you can "apply" a PARLANSE function to a compatible structure.
You can "make" structure instances, too, thus:
(define S
(structure [t integer]
[f float]
[b (array boolean 1 3)]
)structure
)define s
(= A (array boolean 1 3 ~f ~F ~f))
(= s (make S -3 19.2 (make (array boolean 1 3) ~f ~t ~f))
(define foo (function string S) ...)
(foo +17 3e-2 A) ; standard function call
(foo s) ; here's the "apply"
PARLANSE looks like lisp but isn't.
A:
Ruby calls it splat, though David Black has also come up with the neat unar{,ra}y operator (i.e. unary unarray operator)
A:
I've been calling it "list expansion", but I don't think that's standard terminology (I don't think there's any...). Lisp in all versions (Scheme included), and Haskell and other functional languages, can do it easily enough, but I don't think it's easy to do in "mainstream" languages (maybe you can pull it off as a "reflection" stunt in some!-).
A:
Haskell has it too (for pairs), with the uncurry function:
ghci> let f x y = 2*x + y
f :: (Num a) => a -> a -> a
ghci> f 1 2
4
ghci> f 10 3
23
ghci> uncurry f (1,2)
4
ghci> uncurry f (10,3)
23
You can also make it into an operator, so it's more splat-like:
ghci> f `uncurry` (1,2)
4
ghci> let (***) = uncurry
(***) :: (a -> b -> c) -> (a, b) -> c
ghci> f *** (10,3)
23
And though it'd be easy to define similar functions for the 3-tuple, 4-tuple, etc cases, there isn't any general function for n-tuples (like splat works in other languages) because of Haskell's strict typing.
A:
The majority of the questions have already been answered, but as to the question "What is the name of the * operator?": the technical term is "asterisk" (comes from the Latin word asteriscum, meaning "little star", which, in turn, comes from the Greek ἀστερίσκος). Often, though, it will be referred to as "star" or, as stated above, "splat".
|
Name this python/ruby language construct (using array values to satisfy function parameters)
|
What is this language construct called?
In Python I can say:
def a(b,c): return b+c
a(*[4,5])
and get 9. Likewise in Ruby:
def a(b,c) b+c end
a(*[4,5])
What is this called, when one passes a single array to a function which otherwise requires multiple arguments?
What is the name of the * operator?
What other languages support this cool feature?
|
[
"The Python docs call this Unpacking Argument Lists. It's a pretty handy feature. In Python, you can also use a double asterisk (**) to unpack a dictionary (hash) into keyword arguments. They also work in reverse. I can define a function like this:\ndef sum(*args):\n result = 0\n for a in args:\n result += a\n return result\n\nsum(1,2)\nsum(9,5,7,8)\nsum(1.7,2.3,8.9,3.4)\n\nTo pack all arguments into an arbitrarily sized list.\n",
"In ruby, it is often called \"splat\".\nAlso in ruby, you can use it to mean 'all of the other elements in the list'.\na, *rest = [1,2,3,4,5,6]\na # => 1\nrest # => [2, 3, 4, 5, 6]\n\nIt can also appear on either side of the assignment operator:\na = d, *e\n\nIn this usage, it is a bit like scheme's cdr, although it needn't be all but the head of the list.\n",
"The typical terminology for this is called \"applying a function to a list\",\nor \"apply\" for short. \nSee http://en.wikipedia.org/wiki/Apply\nIt has been in LISP since pretty much its inception back in 1960 odd.\nGlad python rediscovered it :-}\nApply is typically on a list or a representation of a list such\nas an array. However, one can apply functions to arguments that\ncome from other palces, such as structs. Our PARLANSE language\nhas fixed types (int, float, string, ...) and structures.\nOddly enough, a function argument list looks a lot like a structure\ndefinintion, and in PARLANSE, it is a structure definition,\nand you can \"apply\" a PARLANSE function to a compatible structure.\nYou can \"make\" structure instances, too, thus:\n\n\n (define S\n (structure [t integer]\n [f float]\n [b (array boolean 1 3)]\n )structure\n )define s\n\n (= A (array boolean 1 3 ~f ~F ~f))\n\n (= s (make S -3 19.2 (make (array boolean 1 3) ~f ~t ~f))\n\n\n (define foo (function string S) ...)\n\n (foo +17 3e-2 A) ; standard function call\n\n (foo s) ; here's the \"apply\"\n\n\nPARLANSE looks like lisp but isn't.\n",
"Ruby calls it splat, though David Black has also come up with the neat unar{,ra}y operator (i.e. unary unarray operator)\n",
"I've been calling it \"list expansion\", but I don't think that's standard terminology (I don't think there's any...). Lisp in all versions (Scheme included), and Haskell and other functional languages, can do it easily enough, but I don't think it's easy to do in \"mainstream\" languages (maybe you can pull it off as a \"reflection\" stunt in some!-).\n",
"Haskell has it too (for pairs), with the uncurry function:\nghci> let f x y = 2*x + y\nf :: (Num a) => a -> a -> a\nghci> f 1 2\n4\nghci> f 10 3\n23\nghci> uncurry f (1,2)\n4\nghci> uncurry f (10,3)\n23\n\nYou can also make it into an operator, so it's more splat-like:\nghci> f `uncurry` (1,2)\n4\nghci> let (***) = uncurry\n(***) :: (a -> b -> c) -> (a, b) -> c\nghci> f *** (10,3)\n23\n\nAnd though it'd be easy to define similar functions for the 3-tuple, 4-tuple, etc cases, there isn't any general function for n-tuples (like splat works in other languages) because of Haskell's strict typing.\n",
"The majority of the questions have already been answered, but as to the question \"What is the name of the * operator?\": the technical term is \"asterisk\" (comes from the Latin word asteriscum, meaning \"little star\", which, in turn, comes from the Greek ἀστερίσκος). Often, though, it will be referred to as \"star\" or, as stated above, \"splat\".\n"
] |
[
29,
10,
5,
3,
2,
2,
1
] |
[] |
[] |
[
"language_features",
"python",
"ruby",
"syntax"
] |
stackoverflow_0001141504_language_features_python_ruby_syntax.txt
|
Q:
XML parsing expat in python handling data
I am attempting to parse an XML file using python expat. I have the following line in my XML file:
<Action>&lt;fail/&gt;</Action>
expat identifies the start and end tags but converts the &lt; to the less-than character (and likewise &gt; to the greater-than character), and thus parses it like this:
outcome:
START 'Action'
DATA '<'
DATA 'fail/'
DATA '>'
END 'Action'
instead of the desired:
START 'Action'
DATA '&lt;fail/&gt;'
END 'Action'
I would like to have the desired outcome, how do I prevent expat from messing up?
A:
expat does not mess up; &lt; is simply the XML encoding for the character <. Quite to the contrary, if expat returned the literal &lt;, that would be a bug with respect to the XML spec. That being said, you can of course get the escaped version back by using xml.sax.saxutils.escape:
>>> from xml.sax.saxutils import escape
>>> escape("<fail/>")
'&lt;fail/&gt;'
The expat parser is also free to report all string data in whatever chunks it seems fit, so you have to concatenate them yourself.
A:
Both SAX and StAX parsers are free to break up the strings in whatever way is convenient for them (although StAX has a COALESCE mode for forcing it to assemble the pieces for you).
The reason is that it is often possible to write software in certain cases that streams and doesn't have to care about the overhead of reassembling the string fragments.
Usually I accumulate text in a variable, and use the contents when I see the next StartElement or EndElement event. At that point, I also reset the accumulated-text variable to empty.
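A minimal sketch of that accumulate-and-flush pattern with expat (Python 2, fed the question's input):
import xml.parsers.expat

buf = []

def flush():
    if buf:
        print 'DATA', repr(''.join(buf))
        del buf[:]

def start(name, attrs):
    flush()
    print 'START', repr(name)

def end(name):
    flush()
    print 'END', repr(name)

p = xml.parsers.expat.ParserCreate()
p.StartElementHandler = start
p.EndElementHandler = end
p.CharacterDataHandler = buf.append   # expat may deliver text in several chunks
p.Parse('<Action>&lt;fail/&gt;</Action>', True)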
|
XML parsing expat in python handling data
|
I am attempting to parse an XML file using python expat. I have the following line in my XML file:
<Action>&lt;fail/&gt;</Action>
expat identifies the start and end tags but converts the &lt; to the less-than character (and likewise &gt; to the greater-than character), and thus parses it like this:
outcome:
START 'Action'
DATA '<'
DATA 'fail/'
DATA '>'
END 'Action'
instead of the desired:
START 'Action'
DATA '&lt;fail/&gt;'
END 'Action'
I would like to have the desired outcome, how do I prevent expat from messing up?
|
[
"expat does not mess up, < is simply the XML encoding for the character <. Quite to the contrary, if expat would return the literal <, this would be a bug with respect to the XML spec. That being said, you can of course get the escaped version back by using xml.sax.saxutils.escape:\n>>> from xml.sax.saxutils import escape\n>>> escape(\"<fail/>\")\n'<fail/>'\n\nThe expat parser is also free to report all string data in whatever chunks it seems fit, so you have to concatenate them yourself.\n",
"Both SAX and StAX parsers are free to break up the strings in whatever way is convenient for them (although StAX has a COALESCE mode for forcing it to assemble the pieces for you).\nThe reason is that it is often possible to write software in certain cases that streams and doesn't have to care about the overhead of reassembling the string fragments.\nUsually I accumulate text in a variable, and use the contents when I see the next StartElement or EndElement event. At that point, I also reset the accumulated-text variable to empty.\n"
] |
[
2,
1
] |
[] |
[] |
[
"expat_parser",
"parsing",
"python",
"xml"
] |
stackoverflow_0001145015_expat_parser_parsing_python_xml.txt
|
Q:
Trying to import a module that imports another module, getting ImportError
In ajax.py, I have this import statement:
import components.db_init as db
In components/db_init.py, I have this import statement:
# import locals from ORM (Storm)
from storm.locals import *
And in components/storm/locals.py, it has this:
from storm.properties import Bool, Int, Float, RawStr, Chars, Unicode, Pickle
from storm.properties import List, Decimal, DateTime, Date, Time, Enum
from storm.properties import TimeDelta
from storm.references import Reference, ReferenceSet, Proxy
from storm.database import create_database
from storm.exceptions import StormError
from storm.store import Store, AutoReload
from storm.expr import Select, Insert, Update, Delete, Join, SQL
from storm.expr import Like, In, Asc, Desc, And, Or, Min, Max, Count, Not
from storm.info import ClassAlias
from storm.base import Storm
So, when I run that import statement in ajax.py, I get this error:
<type 'exceptions.ImportError'>: No module named storm.properties
I can run components/db_init.py fine without any exceptions... so I have no idea what's up.
Can someone shed some light on this problem?
A:
I would guess that storm.locals' idea of its package name is different from what you think it is (most likely it thinks it's in components.storm.locals). You can check this by printing __name__ at the top of storm.locals, I believe. If you use imports which aren't relative to the current package, the package names have to match.
Using a relative import would probably work here. Since locals and properties are in the same package, inside storm.locals you should be able to just do
from properties import Bool
and so on.
A:
You either need to:
add (...)/components/storm to PYTHONPATH,
use relative imports in components/storm/locals.py, or
import properties instead of storm.properties.
(A sketch of the first option follows below.)
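As an illustration of the first option, here is a minimal sketch that extends sys.path at runtime instead of setting PYTHONPATH. Note that for "import storm.properties" to resolve, the directory put on the path must be the one containing the storm package; the layout below is an assumption:
import os
import sys

# assumed layout: <project>/ajax.py sits next to components/storm/...
here = os.path.dirname(os.path.abspath(__file__))
sys.path.insert(0, os.path.join(here, 'components'))

import storm.locals  # storm.properties etc. now resolve as top-level imports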
|
Trying to import a module that imports another module, getting ImportError
|
In ajax.py, I have this import statement:
import components.db_init as db
In components/db_init.py, I have this import statement:
# import locals from ORM (Storm)
from storm.locals import *
And in components/storm/locals.py, it has this:
from storm.properties import Bool, Int, Float, RawStr, Chars, Unicode, Pickle
from storm.properties import List, Decimal, DateTime, Date, Time, Enum
from storm.properties import TimeDelta
from storm.references import Reference, ReferenceSet, Proxy
from storm.database import create_database
from storm.exceptions import StormError
from storm.store import Store, AutoReload
from storm.expr import Select, Insert, Update, Delete, Join, SQL
from storm.expr import Like, In, Asc, Desc, And, Or, Min, Max, Count, Not
from storm.info import ClassAlias
from storm.base import Storm
So, when I run that import statement in ajax.py, I get this error:
<type 'exceptions.ImportError'>: No module named storm.properties
I can run components/db_init.py fine without any exceptions... so I have no idea what's up.
Can someone shed some light on this problem?
|
[
"I would guess that storm.locals' idea of its package name is different from what you think it is (most likely it thinks it's in components.storm.locals). You can check this by printing __name__ at the top of storm.locals, I believe. If you use imports which aren't relative to the current package, the package names have to match.\nUsing a relative import would probably work here. Since locals and properties are in the same package, inside storm.locals you should be able to just do\nfrom properties import Bool\n\nand so on.\n",
"You either need to \n\nadd (...)/components/storm to\nPYTHONPATH,\nuse relative imports\nin components/storm/locals.py or \nimport properties instead of storm.properties\n\n"
] |
[
2,
1
] |
[] |
[] |
[
"import",
"package",
"python",
"relative_path"
] |
stackoverflow_0001145794_import_package_python_relative_path.txt
|
Q:
IE8 automation and https
I'm trying to use IE8 through COM to access a secured site (namely, SourceForge), in Python. Here is the script:
from win32com.client import gencache
from win32com.client import Dispatch
import pythoncom
gencache.EnsureModule('{EAB22AC0-30C1-11CF-A7EB-0000C05BAE0B}', 0, 1, 1)
class SourceForge(object):
def __init__(self, baseURL='https://sourceforget.net/', *args, **kwargs):
super(SourceForge, self).__init__(*args, **kwargs)
self.__browser = Dispatch('InternetExplorer.Application')
self.__browser.Visible = True
self.__browser.Navigate(baseURL)
def run(self):
while True:
pythoncom.PumpMessages()
def main():
sf = SourceForge()
sf.run()
if __name__ == '__main__':
main()
If I launch IE by hand, fine. If I launch the script, I get a generic error page "Internet Explorer cannot display this page". If I change baseURL to use http instead of https, the script works. I guess this is some security "feature". I tried adding the site to the list of trusted sites. I tried to enable IE scripting in the options for the Internet zone. Doesn't work. Google was no help.
So, does anybody know something about this? Is there a mysterious option to enable or am I doomed?
I'm on Windows XP SP3 BTW, Python 2.5 and pywin32 build 213.
A:
I can't open https://sourceforget.net/ -- not by hand, not by script.
Are you sure this link is right?
|
IE8 automation and https
|
I'm trying to use IE8 through COM to access a secured site (namely, SourceForge), in Python. Here is the script:
from win32com.client import gencache
from win32com.client import Dispatch
import pythoncom
gencache.EnsureModule('{EAB22AC0-30C1-11CF-A7EB-0000C05BAE0B}', 0, 1, 1)
class SourceForge(object):
def __init__(self, baseURL='https://sourceforget.net/', *args, **kwargs):
super(SourceForge, self).__init__(*args, **kwargs)
self.__browser = Dispatch('InternetExplorer.Application')
self.__browser.Visible = True
self.__browser.Navigate(baseURL)
def run(self):
while True:
pythoncom.PumpMessages()
def main():
sf = SourceForge()
sf.run()
if __name__ == '__main__':
main()
If I launch IE by hand, fine. If I launch the script, I get a generic error page "Internet Explorer cannot display this page". If I change baseURL to use http instead of https, the script works. I guess this is some security "feature". I tried adding the site to the list of trusted sites. I tried to enable IE scripting in the options for the Internet zone. Doesn't work. Google was no help.
So, does anybody know something about this? Is there a mysterious option to enable or am I doomed?
I'm on Windows XP SP3 BTW, Python 2.5 and pywin32 build 213.
|
[
"I can't open https://sourceforget.net/ -- not by hand, not by script.\nAre you sure this link is right?\n"
] |
[
2
] |
[] |
[] |
[
"python",
"winapi",
"windows"
] |
stackoverflow_0001147193_python_winapi_windows.txt
|
Q:
Does Jython have the GIL?
I was sure that it hasn't, but looking for a definite answer on the Interwebs left me in doubt. For example, I got a 2008 post which sort of looked like a joke at first glance but seemed to be serious at looking closer.
Edit:
... and turned out to be a joke after looking even closer. Sorry for the confusion. Actually the comments on that post answer my question, as Nikhil has pointed out correctly.
We realized that CPython is far ahead of us in this area, and that we are lacking in compatibility. After serious brainstorming (and a few glasses of wine), we decided that introducing a Global Interpreter Lock in Jython would solve the entire issue!
Now, what's the status here? The "differences" page on sourceforge doesn't mention the GIL at all. Is there any official source I have overlooked?
Note also that I'm aware of the ongoing discussion whether the GIL matters at all, but I don't care about that for the moment.
A:
The quote you found was indeed a joke, here is a demo of Jython's implementation of the GIL:
Jython 2.5.0 (trunk:6550M, Jul 20 2009, 08:40:15)
[Java HotSpot(TM) Client VM (Apple Inc.)] on java1.5.0_19
Type "help", "copyright", "credits" or "license" for more information.
>>> from __future__ import GIL
File "<stdin>", line 1
SyntaxError: Never going to happen!
>>>
A:
No, it does not. It's a part of the VM implementation, not the language.
See also:
from __future__ import braces
A:
Both Jython and IronPython "lack" the GIL, because it's an implementation detail of the underlying VM. There was a lot of information I found some time ago; now the only thing I could come up with is this.
Remember that the GIL is only a problem in multiprocessor environments, and that it's unlikely to go away from CPython in the foreseeable future.
|
Does Jython have the GIL?
|
I was sure that it hasn't, but looking for a definite answer on the Interwebs left me in doubt. For example, I got a 2008 post which sort of looked like a joke at first glance but seemed to be serious at looking closer.
Edit:
... and turned out to be a joke after looking even closer. Sorry for the confusion. Actually the comments on that post answer my question, as Nikhil has pointed out correctly.
We realized that CPython is far ahead of us in this area, and that we are lacking in compatibility. After serious brainstorming (and a few glasses of wine), we decided that introducing a Global Interpreter Lock in Jython would solve the entire issue!
Now, what's the status here? The "differences" page on sourceforge doesn't mention the GIL at all. Is there any official source I have overlooked?
Note also that I'm aware of the ongoing discussion whether the GIL matters at all, but I don't care about that for the moment.
|
[
"The quote you found was indeed a joke, here is a demo of Jython's implementation of the GIL:\nJython 2.5.0 (trunk:6550M, Jul 20 2009, 08:40:15) \n[Java HotSpot(TM) Client VM (Apple Inc.)] on java1.5.0_19\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> from __future__ import GIL\n File \"<stdin>\", line 1\nSyntaxError: Never going to happen!\n>>> \n\n",
"No, it does not. It's a part of the VM implementation, not the language.\nSee also:\nfrom __future__ import braces\n\n",
"Both Jython and IronPython \"lack\" the GIL, because it's an implementation detail of the underlying VM. There was a lot of information I've found sometime ago, now the only thing I could come up with is this.\nRemember that the GIL is only a problem on multiprocessor enviroment only, and that it's unlikely to go away in the foreseable future from CPython.\n"
] |
[
26,
23,
5
] |
[
"Google is making a Python implementation that is an modified cpython with performance improvements called unladen swallow. This will take care of removing the GIL.\nSee: Unladen Swallow\n"
] |
[
-1
] |
[
"jython",
"multithreading",
"python"
] |
stackoverflow_0001120354_jython_multithreading_python.txt
|
Q:
Connection refused when trying to open, write and close a socket a few times (Python)
I have a program that listens on a port waiting for a small amount of data to tell it what to do. I run 5 instances of that program, one on each port from 5000 to 5004 inclusively.
I have a second program written in Python that creates a socket "s", writes the data to port 5000, then closes. It then increments the port number and creates a new socket connected to 5001. It does this until it reaches 5004.
The first three socket connections work just fine. The programs on ports 5000 to 5002 receive the data just fine and off they go! However, the fourth socket connection results in a "Connection Refused" error message. Meanwhile, the fourth listening program still reports that it is waiting on port 5003 -- I know that it's listening properly because I am able to connect to it using Telnet.
I've tried changing the port numbers, and each time it's the fourth connection that fails.
Is there some limit in the system I should check on? I only have one socket open at any time on the sending side, and the fact that I can get 3 of the 5 through eliminates a lot of the basic troubleshooting steps I can think of.
Any thoughts? Thanks!
--- EDIT ---
I'm on CentOS Linux with Python version 2.6. Here's some example code:
try:
portno = 5000
for index, data in enumerate(data_list):
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(('myhostname', portno))
s.sendall(data)
s.close()
portno = portno + 1
except socket.error, (value,message):
print 'Error: Send could not be performed.\nPort: ' + str(portno) + '\nMessage: ' + message
sys.exit(2)
A:
This sounds a lot like the anti-portscan measure of your firewall kicking in.
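If that's the cause, pacing the connections should make the symptom disappear. A quick way to test, based on the loop from the question (the 0.5-second delay is an arbitrary guess; data_list and 'myhostname' are the OP's):
import socket
import time

portno = 5000
for data in data_list:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect(('myhostname', portno))
    s.sendall(data)
    s.close()
    portno = portno + 1
    time.sleep(0.5)  # pause between connections to dodge rate limiting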
A:
I don't know that much about sockets, so this may be really bad style... use at own risk. This code:
#!/usr/bin/python
import threading, time
from socket import *
portrange = range(10000,10005)
class Sock(threading.Thread):
def __init__(self, port):
self.port = port
threading.Thread.__init__ ( self )
def start(self):
self.s = socket(AF_INET, SOCK_STREAM)
self.s.bind(("localhost", self.port))
self.s.listen(1)
print "listening on port %i"%self.port
threading.Thread.start(self)
def run(self):
# wait for client to connect
connection, address = self.s.accept()
data = True
while data:
data = connection.recv(1024)
if data:
connection.send('echo %s'%(data))
connection.close()
socketHandles = [Sock(port) for port in portrange]
for sock in socketHandles:
sock.start()
# time.sleep(0.5)
for port in portrange:
print 'sending "ping" to port %i'%port
s = socket(AF_INET, SOCK_STREAM)
s.connect(("localhost", port))
s.send('ping')
data = s.recv(1024)
print 'reply was: %s'%data
s.close()
should output:
listening on port 10000
listening on port 10001
listening on port 10002
listening on port 10003
listening on port 10004
sending "ping" to port 10000
reply was: echo ping
sending "ping" to port 10001
reply was: echo ping
sending "ping" to port 10002
reply was: echo ping
sending "ping" to port 10003
reply was: echo ping
sending "ping" to port 10004
reply was: echo ping
perhaps this will help you see if it's your firewall causing troubles. I suppose you could try splitting this into client and server as well.
A:
Are you running the 2nd Python program from within Idle? If so - try it outside of Idle and see if the results are any different.
|
Connection refused when trying to open, write and close a socket a few times (Python)
|
I have a program that listens on a port waiting for a small amount of data to tell it what to do. I run 5 instances of that program, one on each port from 5000 to 5004 inclusively.
I have a second program written in Python that creates a socket "s", writes the data to port 5000, then closes. It then increments the port number and creates a new socket connected to 5001. It does this until it reaches 5004.
The first three socket connections work just fine. The programs on ports 5000 to 5002 receive the data just fine and off they go! However, the fourth socket connection results in a "Connection Refused" error message. Meanwhile, the fourth listening program still reports that it is waiting on port 5003 -- I know that it's listening properly because I am able to connect to it using Telnet.
I've tried changing the port numbers, and each time it's the fourth connection that fails.
Is there some limit in the system I should check on? I only have one socket open at any time on the sending side, and the fact that I can get 3 of the 5 through eliminates a lot of the basic troubleshooting steps I can think of.
Any thoughts? Thanks!
--- EDIT ---
I'm on CentOS Linux with Python version 2.6. Here's some example code:
try:
portno = 5000
for index, data in enumerate(data_list):
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(('myhostname', portno))
s.sendall(data)
s.close()
portno = portno + 1
except socket.error, (value,message):
print 'Error: Send could not be performed.\nPort: ' + str(portno) + '\nMessage: ' + message
sys.exit(2)
|
[
"This sounds a lot like the anti-portscan measure of your firewall kicking in.\n",
"I don't know that much about sockets, so this may be really bad style... use at own risk. This code:\n#!/usr/bin/python\nimport threading, time\nfrom socket import *\nportrange = range(10000,10005)\n\nclass Sock(threading.Thread):\n def __init__(self, port):\n self.port = port\n threading.Thread.__init__ ( self )\n\n def start(self):\n self.s = socket(AF_INET, SOCK_STREAM)\n self.s.bind((\"localhost\", self.port))\n self.s.listen(1)\n print \"listening on port %i\"%self.port\n threading.Thread.start(self)\n def run(self):\n # wait for client to connect\n connection, address = self.s.accept()\n data = True\n while data:\n data = connection.recv(1024)\n if data:\n connection.send('echo %s'%(data))\n connection.close()\n\nsocketHandles = [Sock(port) for port in portrange]\n\nfor sock in socketHandles:\n sock.start()\n\n# time.sleep(0.5)\n\nfor port in portrange:\n print 'sending \"ping\" to port %i'%port\n s = socket(AF_INET, SOCK_STREAM) \n s.connect((\"localhost\", port))\n s.send('ping')\n data = s.recv(1024)\n print 'reply was: %s'%data\n s.close()\n\nshould output:\nlistening on port 10000\nlistening on port 10001\nlistening on port 10002\nlistening on port 10003\nlistening on port 10004\nsending \"ping\" to port 10000\nreply was: echo ping\nsending \"ping\" to port 10001\nreply was: echo ping\nsending \"ping\" to port 10002\nreply was: echo ping\nsending \"ping\" to port 10003\nreply was: echo ping\nsending \"ping\" to port 10004\nreply was: echo ping\n\nperhaps this will help you see if it's your firewall causing troubles. I suppose you could try splitting this into client and server as well.\n",
"Are you running the 2nd Python program from within Idle? If so - try it outside of Idle and see if the results are any different.\n"
] |
[
1,
1,
0
] |
[] |
[] |
[
"limit",
"max",
"python",
"sockets"
] |
stackoverflow_0001145540_limit_max_python_sockets.txt
|
Q:
Create instance of a python class , declared in python, with C API
I want to create an instance of a Python class defined in the __main__ scope with the C API.
For example, the class is called MyClass and is defined as follows:
class MyClass:
def __init__(self):
pass
The class type lives under __main__ scope.
Within the C application, I want to create an instance of this class. This would have been simple with PyInstance_New, as it takes a class; however, that function is not available in Python 3.
Any help or suggestions for alternatives are appreciated.
Thanks, Paul
A:
I believe the simplest approach is:
/* get sys.modules dict */
PyObject* sys_mod_dict = PyImport_GetModuleDict();
/* get the __main__ module object */
PyObject* main_mod = PyMapping_GetItemString(sys_mod_dict, "__main__");
/* call the class inside the __main__ module */
PyObject* instance = PyObject_CallMethod(main_mod, "MyClass", "");
plus of course error checking. You need only DECREF instance when you're done with it, the other two are borrowed references.
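For reference, the Python-level equivalent of those three C calls is simply:
import sys

main_mod = sys.modules['__main__']  # PyImport_GetModuleDict plus the "__main__" lookup
instance = main_mod.MyClass()       # PyObject_CallMethod(main_mod, "MyClass", "")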
|
Create instance of a python class , declared in python, with C API
|
I want to create an instance of a Python class defined in the __main__ scope with the C API.
For example, the class is called MyClass and is defined as follows:
class MyClass:
def __init__(self):
pass
The class type lives under __main__ scope.
Within the C application, I want to create an instance of this class. This would have been simple with PyInstance_New, as it takes a class; however, that function is not available in Python 3.
Any help or suggestions for alternatives are appreciated.
Thanks, Paul
|
[
"I believe the simplest approach is:\n/* get sys.modules dict */\nPyObject* sys_mod_dict = PyImport_GetModuleDict();\n/* get the __main__ module object */\nPyObject* main_mod = PyMapping_GetItemString(sys_mod_dict, \"__main__\");\n/* call the class inside the __main__ module */\nPyObject* instance = PyObject_CallMethod(main_mod, \"MyClass\", \"\");\n\nplus of course error checking. You need only DECREF instance when you're done with it, the other two are borrowed references.\n"
] |
[
20
] |
[] |
[] |
[
"c",
"python",
"python_c_api"
] |
stackoverflow_0001147452_c_python_python_c_api.txt
|
Q:
Python Xlib catch/send mouseclick
At the moment I'm trying to use Python to detect when the left mouse button is being held and then start to rapidly send this event instead of only once. What I basically want to do is that when the left mouse button is held it clicks and clicks again until you let it go. But I'm a bit puzzled with the whole Xlib, I think it's very confusing actually. Any help on how to do this would be really awesome. That's what I've got so far:
#!/usr/bin/env python
import Xlib
import Xlib.display
def main():
display = Xlib.display.Display()
root = display.screen().root
while True:
event = root.display.next_event()
print event
if __name__ == "__main__":
main()
But there is unfortunately no output in the console. After a quick search on the internet I found the following:
root.change_attributes(event_mask=Xlib.X.KeyPressMask)
root.grab_key(keycode, Xlib.X.AnyModifier, 1, Xlib.X.GrabModeAsync,
Xlib.X.GrabModeAsync)
This is seemingly important to catch a special event with the given keycode. But firstly, what keycode does the left mouse click have, if any at all? And secondly, how can I detect when it is being held down and then start sending the mouse-click event rapidly? I would be really grateful for help. (Maybe a way to stop this script with a hotkey would be cool as well...)
A:
Actually you want Xlib.X.ButtonPressMask | Xlib.X.ButtonReleaseMask, to get events for button presses and releases (different from key presses and releases). The events are ButtonPress and ButtonRelease, and the detail instance variable gives you the button number. From when you get the press event, to when you get the release event, you know the button is being held down. Of course you can also receive key events and do something else (e.g. exit your script) when a certain key is pressed.
Edit: this version works fine for me, for example...:
import Xlib
import Xlib.display
def main():
display = Xlib.display.Display(':0')
root = display.screen().root
root.change_attributes(event_mask=
Xlib.X.ButtonPressMask | Xlib.X.ButtonReleaseMask)
while True:
event = root.display.next_event()
print event
if __name__ == "__main__":
main()
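For the sending half of the question (firing rapid clicks while the button is held), the XTest extension is the usual route. A sketch, not tested against your setup; in the real program you would loop for as long as your event handler reports the button as held:
import time
from Xlib import X, display
from Xlib.ext import xtest

d = display.Display()

def click(button=1):
    # synthesize one press/release pair
    xtest.fake_input(d, X.ButtonPress, button)
    d.sync()
    xtest.fake_input(d, X.ButtonRelease, button)
    d.sync()

for _ in range(10):  # demo: ten rapid clicks
    click()
    time.sleep(0.05)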
|
Python Xlib catch/send mouseclick
|
At the moment I'm trying to use Python to detect when the left mouse button is being held and then start to rapidly send this event instead of only once. What I basically want to do is that when the left mouse button is held it clicks and clicks again until you let it go. But I'm a bit puzzled with the whole Xlib, I think it's very confusing actually. Any help on how to do this would be really awesome. That's what I've got so far:
#!/usr/bin/env python
import Xlib
import Xlib.display
def main():
display = Xlib.display.Display()
root = display.screen().root
while True:
event = root.display.next_event()
print event
if __name__ == "__main__":
main()
But there is unfortunately no output in the console. After a quick search on the internet I found the following:
root.change_attributes(event_mask=Xlib.X.KeyPressMask)
root.grab_key(keycode, Xlib.X.AnyModifier, 1, Xlib.X.GrabModeAsync,
Xlib.X.GrabModeAsync)
This is seemingly important to catch a special event with the given keycode. But firstly, what keycode does the left mouse click have, if any at all? And secondly, how can I detect when it is being held down and then start sending the mouse-click event rapidly? I would be really grateful for help. (Maybe a way to stop this script with a hotkey would be cool as well...)
|
[
"Actually you want Xlib.X.ButtonPressMask | Xlib.X.ButtonReleaseMask, to get events for button presses and releases (different from key presses and releases). The events are ButtonPress and ButtonRelease, and the detail instance variable gives you the button number. From when you get the press event, to when you get the release event, you know the button is being held down. Of course you can also receive key events and do something else (e.g. exit your script) when a certain key is pressed.\nEdit: this version works fine for me, for example...:\nimport Xlib\nimport Xlib.display\n\ndef main():\n display = Xlib.display.Display(':0')\n root = display.screen().root\n root.change_attributes(event_mask=\n Xlib.X.ButtonPressMask | Xlib.X.ButtonReleaseMask)\n\n while True:\n event = root.display.next_event()\n print event\n\nif __name__ == \"__main__\":\n main()\n\n"
] |
[
5
] |
[] |
[] |
[
"click",
"events",
"mouse",
"python",
"xlib"
] |
stackoverflow_0001147653_click_events_mouse_python_xlib.txt
|
Q:
What is the DRY way to configure different log file locations for different settings?
I am using python's logging module in a django project. I am performing the basic logging configuration in my settings.py file. Something like this:
import logging
import logging.handlers
logger = logging.getLogger('project_logger')
logger.setLevel(logging.INFO)
LOG_FILENAME = '/path/to/log/file/in/development/environment'
handler = logging.handlers.TimedRotatingFileHandler(LOG_FILENAME, when = 'midnight')
formatter = logging.Formatter(LOG_MSG_FORMAT)
handler.setFormatter(formatter)
logger.addHandler(handler)
I have a separate settings file for production. This file (production.py) imports everything from settings and overrides some of the options (set DEBUG to False, for instance). I wish to use a different LOG_FILENAME for production. How should I go about it? I can repeat the entire configuration section in production.py but that creates problems if /path/to/log/file/in/development/environment is not present in the production machine. Besides it doesn't look too "DRY".
Can anyone suggest a better way to go about this?
A:
Why don't you put these statements at the end of settings.py and use the DEBUG flag as an indicator for development?
Something like this:
import logging
import logging.handlers
logger = logging.getLogger('project_logger')
logger.setLevel(logging.INFO)
[snip]
if DEBUG:
LOG_FILENAME = '/path/to/log/file/in/development/environment'
else:
LOG_FILENAME = '/path/to/log/file/in/production/environment'
handler = logging.handlers.TimedRotatingFileHandler(LOG_FILENAME, when = 'midnight')
formatter = logging.Formatter(LOG_MSG_FORMAT)
handler.setFormatter(formatter)
logger.addHandler(handler)
A:
Found a reasonably "DRY" solution that worked. Thanks to Python logging in Django
I now have a log.py which looks something like this:
import logging, logging.handlers
from django.conf import settings
LOGGING_INITIATED = False
LOGGER_NAME = 'project_logger'
def init_logging():
logger = logging.getLogger(LOGGER_NAME)
logger.setLevel(logging.INFO)
handler = logging.handlers.TimedRotatingFileHandler(settings.LOG_FILENAME, when = 'midnight')
formatter = logging.Formatter(LOG_MSG_FORMAT)
handler.setFormatter(formatter)
logger.addHandler(handler)
if not LOGGING_INITIATED:
LOGGING_INITIATED = True
init_logging()
My settings.py now contains
LOG_FILENAME = '/path/to/log/file/in/development/environment'
and production.py contains:
from settings import *
LOG_FILENAME = '/path/to/log/file/in/production/environment'
|
What is the DRY way to configure different log file locations for different settings?
|
I am using python's logging module in a django project. I am performing the basic logging configuration in my settings.py file. Something like this:
import logging
import logging.handlers
logger = logging.getLogger('project_logger')
logger.setLevel(logging.INFO)
LOG_FILENAME = '/path/to/log/file/in/development/environment'
handler = logging.handlers.TimedRotatingFileHandler(LOG_FILENAME, when = 'midnight')
formatter = logging.Formatter(LOG_MSG_FORMAT)
handler.setFormatter(formatter)
logger.addHandler(handler)
I have a separate settings file for production. This file (production.py) imports everything from settings and overrides some of the options (set DEBUG to False, for instance). I wish to use a different LOG_FILENAME for production. How should I go about it? I can repeat the entire configuration section in production.py but that creates problems if /path/to/log/file/in/development/environment is not present in the production machine. Besides it doesn't look too "DRY".
Can anyone suggest a better way to go about this?
|
[
"Why don't you put this statements at the end of settings.py and use the DEBUG flal es indicator for developement?\nSomething like this:\nimport logging \nimport logging.handlers\nlogger = logging.getLogger('project_logger')\nlogger.setLevel(logging.INFO)\n\n[snip]\nif DEBUG:\n LOG_FILENAME = '/path/to/log/file/in/development/environment'\nelse:\n LOG_FILENAME = '/path/to/log/file/in/production/environment'\n\nhandler = logging.handlers.TimedRotatingFileHandler(LOG_FILENAME, when = 'midnight')\nformatter = logging.Formatter(LOG_MSG_FORMAT)\nhandler.setFormatter(formatter)\nlogger.addHandler(handler)\n\n",
"Found a reasonably \"DRY\" solution that worked. Thanks to Python logging in Django\nI now have a log.py which looks something like this:\nimport logging, logging.handlers\nfrom django.conf import settings\n\nLOGGING_INITIATED = False\nLOGGER_NAME = 'project_logger'\n\ndef init_logging():\n logger = logging.getLogger(LOGGER_NAME)\n logger.setLevel(logging.INFO)\n handler = logging.handlers.TimedRotatingFileHandler(settings.LOG_FILENAME, when = 'midnight')\n formatter = logging.Formatter(LOG_MSG_FORMAT)\n handler.setFormatter(formatter)\n logger.addHandler(handler)\n\nif not LOGGING_INITIATED:\n LOGGING_INITIATED = True\n init_logging()\n\nMy settings.py now contains \nLOG_FILENAME = '/path/to/log/file/in/development/environment\n\nand production.py contains:\nfrom settings import *\nLOG_FILENAME = '/path/to/log/file/in/production/environment'\n\n"
] |
[
1,
1
] |
[] |
[] |
[
"django",
"logging",
"python"
] |
stackoverflow_0001147812_django_logging_python.txt
|
Q:
Can you suggest any extended examples on object-oriented software design?
I am looking for instructional materials on object-oriented software design that are framed as extended examples. In other words, over the course of several lessons or chapters, the author would develop a moderately large piece of software and explain the design approach step by step. Ideally, the material would address not only the design of the primary software being built but also offer useful advice on the rest of the development process -- testing, deployment, etc.
A:
Head First Object-Oriented Analysis and Design
A:
This is indispensable for understanding large-scale OO design. Even though it's implemented in C++, the concepts are completely general and can be used effectively on any platform:
Large Scale OO Design
Truly a classic!!
|
Can you suggest any extended examples on object-oriented software design?
|
I am looking for instructional materials on object-oriented software design that are framed as extended examples. In other words, over the course of several lessons or chapters, the author would develop a moderately large piece of software and explain the design approach step by step. Ideally, the material would address not only the design of the primary software being built but also offer useful advice on the rest of the development process -- testing, deployment, etc.
|
[
"Head First Object-Oriented Analysis and Design \n",
"This is indispensable for understanding large scale oo design. In though its implemented in c++ the concepts are completely general and can be used effectively on any platform:\nLarge Scale OO Design\nTruly a classic!!\n"
] |
[
3,
2
] |
[] |
[] |
[
"oop",
"perl",
"python",
"ruby"
] |
stackoverflow_0001148196_oop_perl_python_ruby.txt
|
Q:
Python socket accept blocks - prevents app from quitting
I've written a very simple python class which waits for connections on a socket. The intention is to stick this class into an existing app and asynchronously send data to connecting clients.
The problem is that when waiting on a socket.accept(), I cannot end my application by pressing ctrl-c. Neither can I detect when my class goes out of scope and notify it to end.
Ideally the application below should quit after the time.sleep(4) expires. As you can see below, I tried using select, but this also prevents the app from responding to ctrl-c. If I could detect that the variable 'a' has gone out of scope in the main method, I could set the quitting flag (and reduce the timeout on select to make it responsive).
Any ideas?
thanks
import sys
import socket
import threading
import time
import select
class Server( threading.Thread ):
def __init__( self, i_port ):
threading.Thread.__init__( self )
self.quitting = False
self.serversocket = socket.socket( socket.AF_INET, socket.SOCK_STREAM )
self.serversocket.bind( (socket.gethostname(), i_port ) )
self.serversocket.listen(5)
self.start()
def run( self ):
# Wait for connection
while not self.quitting:
rr,rw,err = select.select( [self.serversocket],[],[], 20 )
if rr:
(clientsocket, address) = self.serversocket.accept()
clientsocket.close()
def main():
a = Server( 6543 )
time.sleep(4)
if __name__=='__main__':
main()
A:
Add self.setDaemon(True) to the __init__ before self.start().
(In Python 2.6 and later, self.daemon = True is preferred).
The key idea is explained here:
The entire Python program exits when
no alive non-daemon threads are left.
So, you need to make "daemons" of those threads who should not keep the whole process alive just by being alive themselves. The main thread is always non-daemon, by the way.
A:
I don't recommend the setDaemon feature for normal shutdown. It's sloppy; instead of having a clean shutdown path for threads, it simply kills the thread with no chance for cleanup. It's good to set it, so your program doesn't get stuck if the main thread exits unexpectedly, but it's not a good normal shutdown path except for quick hacks.
import sys, os, socket, threading, time, select
class Server(threading.Thread):
def __init__(self, i_port):
threading.Thread.__init__(self)
self.setDaemon(True)
self.quitting = False
self.serversocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
self.serversocket.bind((socket.gethostname(), i_port))
self.serversocket.listen(5)
self.start()
def shutdown(self):
if self.quitting:
return
self.quitting = True
self.join()
def run(self):
# Wait for connection
while not self.quitting:
rr,rw,err = select.select([self.serversocket],[],[], 1)
print rr
if rr:
(clientsocket, address) = self.serversocket.accept()
clientsocket.close()
print "shutting down"
self.serversocket.close()
def main():
a = Server(6543)
try:
time.sleep(4)
finally:
a.shutdown()
if __name__=='__main__':
main()
Note that this will delay for up to a second after calling shutdown(), which is poor behavior. This is normally easy to fix: create a wakeup pipe() that you can write to, and include it in the select; but although this is very basic, I couldn't find any way to do this in Python. (os.pipe() returns file descriptors, not file objects that we can write to.) I haven't dug deeper, since it's tangential to the question.
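For what it's worth, select() does accept raw file descriptors, so the wakeup-pipe trick is possible with os.pipe() and os.write alone. A sketch (the names are mine, not from the code above):
import os
import select

wake_r, wake_w = os.pipe()  # raw fds are fine for select()

def serve(serversocket):
    while True:
        rr, _, _ = select.select([serversocket, wake_r], [], [])
        if wake_r in rr:
            os.read(wake_r, 1)  # drain the wakeup byte, then exit promptly
            break
        clientsocket, address = serversocket.accept()
        clientsocket.close()

def shutdown():
    os.write(wake_w, 'x')  # wakes the select() immediately, no polling delay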
|
Python socket accept blocks - prevents app from quitting
|
I've written a very simple python class which waits for connections on a socket. The intention is to stick this class into an existing app and asynchronously send data to connecting clients.
The problem is that when waiting on a socket.accept(), I cannot end my application by pressing ctrl-c. Neither can I detect when my class goes out of scope and notify it to end.
Ideally the application below should quit after the time.sleep(4) expires. As you can see below, I tried using select, but this also prevents the app from responding to ctrl-c. If I could detect that the variable 'a' has gone out of scope in the main method, I could set the quitting flag (and reduce the timeout on select to make it responsive).
Any ideas?
thanks
import sys
import socket
import threading
import time
import select
class Server( threading.Thread ):
def __init__( self, i_port ):
threading.Thread.__init__( self )
self.quitting = False
self.serversocket = socket.socket( socket.AF_INET, socket.SOCK_STREAM )
self.serversocket.bind( (socket.gethostname(), i_port ) )
self.serversocket.listen(5)
self.start()
def run( self ):
# Wait for connection
while not self.quitting:
rr,rw,err = select.select( [self.serversocket],[],[], 20 )
if rr:
(clientsocket, address) = self.serversocket.accept()
clientsocket.close()
def main():
a = Server( 6543 )
time.sleep(4)
if __name__=='__main__':
main()
|
[
"Add self.setDaemon(True) to the __init__ before self.start().\n(In Python 2.6 and later, self.daemon = True is preferred).\nThe key idea is explained here:\n\nThe entire Python program exits when\n no alive non-daemon threads are left.\n\nSo, you need to make \"daemons\" of those threads who should not keep the whole process alive just by being alive themselves. The main thread is always non-daemon, by the way.\n",
"I don't recommend the setDaemon feature for normal shutdown. It's sloppy; instead of having a clean shutdown path for threads, it simply kills the thread with no chance for cleanup. It's good to set it, so your program doesn't get stuck if the main thread exits unexpectedly, but it's not a good normal shutdown path except for quick hacks.\nimport sys, os, socket, threading, time, select\n\nclass Server(threading.Thread):\n def __init__(self, i_port):\n threading.Thread.__init__(self)\n self.setDaemon(True)\n self.quitting = False\n self.serversocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n self.serversocket.bind((socket.gethostname(), i_port))\n self.serversocket.listen(5)\n self.start()\n\n def shutdown(self):\n if self.quitting:\n return\n\n self.quitting = True\n self.join()\n\n def run(self):\n # Wait for connection\n while not self.quitting:\n rr,rw,err = select.select([self.serversocket],[],[], 1)\n print rr\n if rr:\n (clientsocket, address) = self.serversocket.accept()\n clientsocket.close()\n\n print \"shutting down\"\n self.serversocket.close()\n\ndef main():\n a = Server(6543)\n try:\n time.sleep(4)\n finally:\n a.shutdown()\n\nif __name__=='__main__':\n main()\n\nNote that this will delay for up to a second after calling shutdown(), which is poor behavior. This is normally easy to fix: create a wakeup pipe() that you can write to, and include it in the select; but although this is very basic, I couldn't find any way to do this in Python. (os.pipe() returns file descriptors, not file objects that we can write to.) I havn't dig deeper, since it's tangental to the question.\n"
] |
[
9,
4
] |
[] |
[] |
[
"python",
"sockets"
] |
stackoverflow_0001148062_python_sockets.txt
|
Q:
How to install Python 3rd party libgmail-0.1.11.tar.tar into Python in Windows XP home?
I do not know Python, I have installed it only and downloaded the libgmail package. So, please give me verbatim steps in installing the libgmail library. My python directory is c:\python26, so please do not skip any steps in the answer.
Thanks!
A:
The easiest way might be to install easy_install using the instructions at that page and then typing the following at the command line:
easy_install libgmail
If it can't be found, then you can point it directly to the file that you downloaded:
easy_install c:\biglongpath\libgmail.zip
A:
Extract the archive to a temporary directory, and type "python setup.py install".
A:
All you have to do is extract it, and put it somewhere (I prefer the Libs folder in your Python directory). Then read the readme. It explains that you need to do:
python setup.py install
in your command line. Then you're done.
A:
Let's say you downloaded and unzipped it to C:/libgmail-0.1.11. Open a command prompt and:
cd C:/libgmail-0.1.11
Then build a Windows installer:
python setup.py bdist --format=wininst
Then go to C:/libgmail-0.1.11/dist and you'll find an installer. Double click it, follow the "next" procedure and you're done.
What's nice about this method is that you can easily uninstall the library from Control Panel/Add or Remove Programs.
A:
Here's how I did it:
Make sure C:\Python26 and C:\Python26\scripts are both on your system path.
Install setuptools. You'll have to download the source distribution, and extract it. You will likely need something like 7zip for this. If you use 7zip note that you will need to extract it twice. Once to get a .tar file, and again to get a directory out of that tar file.
Open a command prompt and cd to the directory you created. Run the command python setup.py install.
Run the command easy_install mechanize.
Install libgmail just like you did setuptools.
This was a lot of work, but you now have the easy_install tool available to simplify installing these kinds of things in the future. If you're doing anything serious, you may also want to consider setting up a virtualenv and using pip instead of easy_install.
|
How to install Python 3rd party libgmail-0.1.11.tar.tar into Python in Windows XP home?
|
I do not know Python, I have installed it only and downloaded the libgmail package. So, please give me verbatim steps in installing the libgmail library. My python directory is c:\python26, so please do not skip any steps in the answer.
Thanks!
|
[
"The easiest way might be to install easy_install using the instructions at that page and then typing the following at the command line:\neasy_install libgmail\n\nIf it can't be found, then you can point it directly to the file that you downloaded:\neasy_install c:\\biglongpath\\libgmail.zip\n\n",
"Extract the archive to a temporary directory, and type \"python setup.py install\".\n",
"All you have to do is extract it, and put it somewhere (I prefer the Libs folder in your Python directory). Then read the readme. It explains that you need to do:\npython setup.py\n\nin your command line. Then you're done.\n",
"Let's say you downloaded and unzipped it to C:/libgmail-0.1.11. Open a command prompt and:\ncd C:/libgmail-0.1.11\n\nThen build an Windows installer:\npython setup.py bdist --format=wininst\n\nThen go to C:/libgmail-0.1.11/dist and you'll find an installer. Double click it, follow the \"next\" procedure and you're done.\nWhat's nice about this method is that you can easily uninstall the library from Control Panel/Add or Remove Programs.\n",
"Here's how I did it:\n\nMake sure C:\\Python26 and C:\\Python26\\scripts are both on your system path.\nInstall setuptools. You'll have to download the source distribution, and extract it. You will likely need something like 7zip for this. If you use 7zip note that you will need to extract it twice. Once to get a .tar file, and again to get a directory out of that tar file.\n\n\nOpen a command prompt and cd to the directory you created. Run the command python setup.py install.\n\nRun the command easy_install mechanize.\nInstall libgmail just like you did setuptools.\n\nThis was a lot of work, but you now have the easy_install tool available to simplify installing these kinds of things in the future. If you're doing anything serious, you may also want to consider setting up a virtualenv and using pip instead of easy_install.\n"
] |
[
3,
2,
1,
1,
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001147713_python.txt
|
Q:
Python threads - crashing when they access postgreSQL
here is a simple threading program which works fine:
import psycopg2
import threading
import time
class testit(threading.Thread):
def __init__(self, currency):
threading.Thread.__init__(self)
self.currency = currency
def run(self):
global SQLConnection
global cursor
SQLString = "Select dval from ddata where dname ='%s' and ddate = '2009-07-17'" \
%self.currency
z = time.time()
while (time.time() - z) < 2:
print SQLString
SQLConnection = psycopg2.connect(database = "db", user = "xxxx", password = "xxxx")
cursor = SQLConnection.cursor()
a = testit('EURCZK')
b = testit('EURPLN')
a.start()
b.start()
However as soon as I try to start accessing the postgresql database in the thread with the following code, I always get a stop-sign crash:
import psycopg2
import threading
import time
class testit(threading.Thread):
def __init__(self, currency):
threading.Thread.__init__(self)
self.currency = currency
def run(self):
global SQLConnection
global cursor
SQLString = "Select dval from ddata where dname ='%s'and ddate = '2009-07-17'" %self.currency
z = time.time()
while (time.time() - z) < 2:
cursor.execute(SQLString)
print cursor.fetchall()
SQLConnection = psycopg2.connect(database = "db", user = "xxxx", password = "xxxx")
cursor = SQLConnection.cursor()
a = testit('EURCZK')
b = testit('EURPLN')
a.start()
b.start()
The only difference between the two is in the while loop. I am fairly new to thread programming. Is the postgres library (psycopg2) not "thread safe"? All this is running on Windows XP. Anything I can do?
Thanks.
A:
global SQLConnection
global cursor
Seems you're accessing globals from multiple threads? You should never do that unless those globals are thread-safe, or you provide the proper locking yourself.
You now have 2 threads accessing the same connection and the same cursor. They'll step on each other's toes. The psycopg2 connection might be thread-safe but cursors are not.
Use one cursor (probably one connection as well) per thread.
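If opening one connection per thread feels heavy, psycopg2 also ships a thread-safe connection pool. A minimal sketch (credentials copied from the question):
from psycopg2 import pool

connpool = pool.ThreadedConnectionPool(2, 5, database="db", user="xxxx", password="xxxx")

conn = connpool.getconn()  # each thread checks a connection out...
try:
    cursor = conn.cursor()
    cursor.execute("select 1")
    print cursor.fetchall()
finally:
    connpool.putconn(conn)  # ...and returns it when done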
A:
Bingo, it's working. Someone left an answer suggesting that each thread get its own connection, but then seems to have removed it. And yep, that solves it. So this code works:
import psycopg2
import threading
import time
class testit(threading.Thread):
def __init__(self, currency):
threading.Thread.__init__(self)
self.currency = currency
self.SQLConnection = psycopg2.connect(database = "db", user = "xxxx", password = "xxxx")
self.cursor = self.SQLConnection.cursor()
def run(self):
SQLString = "Select dval from ddata where dname ='%s' and ddate = '2009-07-17'" \
%self.currency
z = time.time()
while (time.time() - z) < 2:
self.cursor.execute(SQLString)
print self.cursor.fetchall()
a = testit('EURCZK')
b = testit('EURPLN')
a.start()
b.start()
|
Python threads - crashing when they access postgreSQL
|
here is a simple threading program which works fine:
import psycopg2
import threading
import time
class testit(threading.Thread):
def __init__(self, currency):
threading.Thread.__init__(self)
self.currency = currency
def run(self):
global SQLConnection
global cursor
SQLString = "Select dval from ddata where dname ='%s' and ddate = '2009-07-17'" \
%self.currency
z = time.time()
while (time.time() - z) < 2:
print SQLString
SQLConnection = psycopg2.connect(database = "db", user = "xxxx", password = "xxxx")
cursor = SQLConnection.cursor()
a = testit('EURCZK')
b = testit('EURPLN')
a.start()
b.start()
However as soon as I try to start accessing the postgresql database in the thread with the following code, I always get a stop-sign crash:
import psycopg2
import threading
import time
class testit(threading.Thread):
def __init__(self, currency):
threading.Thread.__init__(self)
self.currency = currency
def run(self):
global SQLConnection
global cursor
SQLString = "Select dval from ddata where dname ='%s'and ddate = '2009-07-17'" %self.currency
z = time.time()
while (time.time() - z) < 2:
cursor.execute(SQLString)
print cursor.fetchall()
SQLConnection = psycopg2.connect(database = "db", user = "xxxx", password = "xxxx")
cursor = SQLConnection.cursor()
a = testit('EURCZK')
b = testit('EURPLN')
a.start()
b.start()
The only difference between the two is in the while loop. I am fairly new to thread programming. Is the postgres library (psycopg2) not "thread safe"? All this is running on Windows XP. Anything I can do?
Thanks.
|
[
"global SQLConnection\nglobal cursor\n\nSeems you're accessing globals from multiple threads ? You should never do that unless those globals are thread safe, or you provide the proper locking yourself.\nYou now have 2 threads accessing the same connection and the same cursor. They'll step on eachothers toes. psycopg2 connection might be thread safe but cursors are not.\nUse one cursor(probably one connection as well) per thread.\n",
"bingo it's working. Someone left an answer but then seems to have removed it, to give each thread its own connection. And yep that solves it. So this code works:\nimport psycopg2\nimport threading\nimport time\n\nclass testit(threading.Thread):\n def __init__(self, currency):\n threading.Thread.__init__(self)\n self.currency = currency \n self.SQLConnection = psycopg2.connect(database = \"db\", user = \"xxxx\", password = \"xxxx\")\n self.cursor = self.SQLConnection.cursor()\n\n def run(self):\n SQLString = \"Select dval from ddata where dname ='%s' and ddate = '2009-07-17'\" \\\n %self.currency\n z = time.time()\n while (time.time() - z) < 2:\n self.cursor.execute(SQLString)\n print self.cursor.fetchall()\n\na = testit('EURCZK')\nb = testit('EURPLN')\na.start()\nb.start()\n\n"
] |
[
2,
0
] |
[] |
[] |
[
"postgresql",
"python"
] |
stackoverflow_0001148671_postgresql_python.txt
|
Q:
MetaPython: Adding Methods to a Class
I would like to add some methods to a class definition at runtime. However, when running the following code, I get some surprising (to me) results.
test.py
class klass(object):
pass
for i in [1,2]:
def f(self):
print(i)
setattr(klass, 'f' + str(i), f)
I get the following when testing on the command line:
>>> import test
>>> k = test.klass()
>>> k.f1()
2
>>> k.f2()
2
Why does k.f1() print 2 instead of 1? It seems rather counterintuitive to me.
notes
This test was done using python3.0 on a kubuntu machine.
A:
It's the usual problem of binding -- you want early binding for the use of i inside the function and Python is doing late binding for it. You can force the earlier binding this way:
class klass(object):
pass
for i in [1,2]:
def f(self, i=i):
print(i)
setattr(klass, 'f' + str(i), f)
or by wrapping f into an outer function layer taking i as an argument:
class klass(object):
pass
def fmaker(i):
def f(self):
print(i)
return f
for i in [1,2]:
setattr(klass, 'f' + str(i), fmaker(i))
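With either version, each generated method now prints the value it was created with:
>>> k = klass()
>>> k.f1()
1
>>> k.f2()
2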
A:
My guess is that it's because print (i) prints i not by value, but by reference. Thus, when leaving the for loop, i has the value 2, which will be printed both times.
|
MetaPython: Adding Methods to a Class
|
I would like to add some methods to a class definition at runtime. However, when running the following code, I get some surprising (to me) results.
test.py
class klass(object):
pass
for i in [1,2]:
def f(self):
print(i)
setattr(klass, 'f' + str(i), f)
I get the following when testing on the command line:
>>> import test
>>> k = test.klass()
>>> k.f1()
2
>>> k.f2()
2
Why does k.f1() print 2 instead of 1? It seems rather counterintuitive to me.
notes
This test was done using python3.0 on a kubuntu machine.
|
[
"It's the usual problem of binding -- you want early binding for the use of i inside the function and Python is doing late binding for it. You can force the earlier binding this way:\nclass klass(object):\n pass\n\nfor i in [1,2]:\n def f(self, i=i):\n print(i)\n setattr(klass, 'f' + str(i), f)\n\nor by wrapping f into an outer function layer taking i as an argument:\nclass klass(object):\n pass\n\ndef fmaker(i):\n def f(self):\n print(i)\n return f\n\nfor i in [1,2]:\n setattr(klass, 'f' + str(i), fmaker(i))\n\n",
"My guess is that it's because print (i) prints i not by value, but by reference. Thus, when leaving the for loop, i has the value 2, which will be printed both times. \n"
] |
[
11,
0
] |
[] |
[] |
[
"binding",
"metaprogramming",
"python"
] |
stackoverflow_0001148827_binding_metaprogramming_python.txt
|
Q:
Using Task Queues to schedule the fetching/parsing of a number of feeds in App Engine (Python)
Say I had over 10,000 feeds that I wanted to periodically fetch/parse.
If the period were say 1h that would be 24x10000 = 240,000 fetches.
The current 10k limit of the labs Task Queue API would preclude one from
setting up one task per fetch. How then would one do this?
Update: RE: Fetching n urls per task - Given the 30-second timeout per request, at some point this would hit a ceiling. Is
there any way to parallelize it so each task initiates a bunch of async parallel fetches, each of which would take less than 30 sec to finish, but the lot together may take more than that?
A:
Here's the asynchronous urlfetch API:
http://code.google.com/appengine/docs/python/urlfetch/asynchronousrequests.html
Set off a bunch of requests with a reasonable deadline (give yourself some headroom under your timeout, so that if one request times out you still have time to process the others). Then wait on each one in turn and process them as they complete.
I haven't used this technique myself in GAE, so you're on your own finding any non-obvious gotchas. Sadly there doesn't seem to be a select() style call in the API to wait for the first of several requests to complete.
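A sketch of what that looks like (the deadline, error handling and URL list are illustrative choices, not requirements):
from google.appengine.api import urlfetch

urls = ['http://example.com/feed1', 'http://example.com/feed2']  # hypothetical batch

rpcs = []
for url in urls:
    rpc = urlfetch.create_rpc(deadline=5)  # headroom under the request timeout
    urlfetch.make_fetch_call(rpc, url)
    rpcs.append(rpc)

for rpc in rpcs:
    try:
        result = rpc.get_result()  # blocks until this fetch completes
        # parse result.content here
    except urlfetch.DownloadError:
        pass  # this fetch timed out or failed; skip it or retry later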
A:
2 fetches per task? 3?
A:
Group up the fetches, so instead of queuing 1 fetch you queue up, say, a work unit that does 10 fetches.
|
Using Task Queues to schedule the fetching/parsing of a number of feeds in App Engine (Python)
|
Say I had over 10,000 feeds that I wanted to periodically fetch/parse.
If the period were say 1h that would be 24x10000 = 240,000 fetches.
The current 10k limit of the labs Task Queue API would preclude one from
setting up one task per fetch. How then would one do this?
Update: RE: Fetching n urls per task - Given the 30-second timeout per request, at some point this would hit a ceiling. Is
there any way to parallelize it so each task initiates a bunch of async parallel fetches, each of which would take less than 30 sec to finish, but the lot together may take more than that?
|
[
"Here's the asynchronous urlfetch API:\nhttp://code.google.com/appengine/docs/python/urlfetch/asynchronousrequests.html\nSet of a bunch of requests with a reasonable deadline (give yourself some headroom under your timeout, so that if one request times out you still have time to process the others). Then wait on each one in turn and process as they complete.\nI haven't used this technique myself in GAE, so you're on your own finding any non-obvious gotchas. Sadly there doesn't seem to be a select() style call in the API to wait for the first of several requests to complete.\n",
"2 fetches per task? 3?\n",
"Group up the fetches, so instead of queuing 1 fetch you queue up, say, a work unit that does 10 fetches.\n"
] |
[
3,
2,
0
] |
[] |
[] |
[
"feed",
"google_app_engine",
"python"
] |
stackoverflow_0001148709_feed_google_app_engine_python.txt
|
Q:
Difference in SHA512 between python hashlib and sha512sum tool
I am getting different message digests from the linux 'sha512sum' tool and the python hashlib library.
Here is what I get on my Ubuntu 8.10:
$ echo test | sha512sum
0e3e75234abc68f4378a86b3f4b32a198ba301845b0cd6e50106e874345700cc6663a86c1ea125dc5e92be17c98f9a0f85ca9d5f595db2012f7cc3571945c123 -
$ python
Python 2.5.2 (r252:60911, Oct 5 2008, 19:24:49)
[GCC 4.3.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import hashlib
>>> hashlib.sha512("test").hexdigest()
'ee26b0dd4af7e749aa1a8ee3c10ae9923f618980772e473f8819a5d4940e0db27ac185f8a0e1d5f84f88bc887fd67b143732c304cc5fa9ad8e6f57f50028a8ff'
Both should calculate the message digest of the string "test", why do you think I am getting different results?
A:
I think the difference is that echo adds a newline character to its output.
Try echo -n test | sha512sum
A:
echo is adding a newline:
$ python -c 'import hashlib; print hashlib.sha512("test\n").hexdigest()'
0e3e75234abc68f4378a86b3f4b32a198ba301845b0cd6e50106e874345700cc6663a86c1ea125dc5e92be17c98f9a0f85ca9d5f595db2012f7cc3571945c123
To avoid that, use echo -n.
A:
Different input, different output. Try comparing like with like:
C:\junk>echo test| python -c "import sys, hashlib; x = sys.stdin.read(); print len(x), repr(x); print hashlib.sha512(x).hexdigest()"
5 'test\n'
0e3e75234abc68f4378a86b3f4b32a198ba301845b0cd6e50106e874345700cc6663a86c1ea125dc5e92be17c98f9a0f85ca9d5f595db2012f7cc3571945c123
|
Difference in SHA512 between python hashlib and sha512sum tool
|
I am getting different message digests from the linux 'sha512sum' tool and the python hashlib library.
Here is what I get on my Ubuntu 8.10:
$ echo test | sha512sum
0e3e75234abc68f4378a86b3f4b32a198ba301845b0cd6e50106e874345700cc6663a86c1ea125dc5e92be17c98f9a0f85ca9d5f595db2012f7cc3571945c123 -
$ python
Python 2.5.2 (r252:60911, Oct 5 2008, 19:24:49)
[GCC 4.3.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import hashlib
>>> hashlib.sha512("test").hexdigest()
'ee26b0dd4af7e749aa1a8ee3c10ae9923f618980772e473f8819a5d4940e0db27ac185f8a0e1d5f84f88bc887fd67b143732c304cc5fa9ad8e6f57f50028a8ff'
Both should calculate the message digest of the string "test", why do you think I am getting different results?
|
[
"I think the difference is that echo adds a newline character to its output.\nTry echo -n test | sha512sum\n",
"echo is adding a newline:\n$ python -c 'import hashlib; print hashlib.sha512(\"test\\n\").hexdigest()'\n0e3e75234abc68f4378a86b3f4b32a198ba301845b0cd6e50106e874345700cc6663a86c1ea125dc5e92be17c98f9a0f85ca9d5f595db2012f7cc3571945c123\n\nTo avoid that, use echo -n.\n",
"Different input, different output. Try comparing like with like:\nC:\\junk>echo test| python -c \"import sys, hashlib; x = sys.stdin.read(); print len(x), repr(x); print hashlib.sha512(x).hexdigest()\"\n5 'test\\n'\n0e3e75234abc68f4378a86b3f4b32a198ba301845b0cd6e50106e874345700cc6663a86c1ea125dc5e92be17c98f9a0f85ca9d5f595db2012f7cc3571945c123\n\n"
] |
[
20,
11,
2
] |
[] |
[] |
[
"digest",
"hashlib",
"python",
"sha512"
] |
stackoverflow_0001147875_digest_hashlib_python_sha512.txt
|
Q:
Python xml.dom and bad XML
I'm trying to extract some data from various HTML pages using a python program. Unfortunately, some of these pages contain user-entered data which occasionally has "slight" errors - namely tag mismatching.
Is there a good way to have python's xml.dom try to correct errors or something of the sort? Alternatively, is there a better way to extract data from HTML pages which may contain errors?
A:
You could use HTML Tidy to clean up, or Beautiful Soup to parse. Could be that you have to save the result to a temp file, but it should work.
Cheers,
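For what it's worth, a minimal Beautiful Soup sketch (BeautifulSoup 3-era API; the sample markup is made up) that copes with the mismatched tags described in the question:
from BeautifulSoup import BeautifulSoup

html = "<html><body><p>unclosed paragraph<p>another one</body></html>"
soup = BeautifulSoup(html)  # repairs the broken nesting while parsing
for p in soup.findAll('p'):
    print ''.join(p.findAll(text=True))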
A:
I used to use BeautifulSoup for such tasks but now I have shifted to HTML5lib (http://code.google.com/p/html5lib/), which works well in many cases where BeautifulSoup fails.
Another alternative is to use "Element Soup" (http://effbot.org/zone/element-soup.htm), which is a wrapper for Beautiful Soup using ElementTree.
A:
lxml does a decent job at parsing invalid HTML.
According to their documentation, Beautiful Soup and html5lib sometimes perform better depending on the input. With lxml you can choose which parser to use, and access them via a unified API.
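A short sketch of that approach (the sample markup is invented); lxml.html.fromstring tolerates the tag mismatching the question describes:
from lxml import html

broken = "<html><body><p>oops <b>no closing tags"
doc = html.fromstring(broken)
for el in doc.xpath('//p'):
    print el.text_content()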
A:
If jython is acceptable to you, tagsoup is very good at parsing junk; I found the jdom libraries far easier to use than other xml alternatives.
This is a snippet from a demo mockup to do with screen scraping from tfl's journey planner:
private Document getRoutePage(HashMap params) throws Exception {
String uri = "http://journeyplanner.tfl.gov.uk/bcl/XSLT_TRIP_REQUEST2";
HttpWrapper hw = new HttpWrapper();
String page = hw.urlEncPost(uri, params);
SAXBuilder builder = new SAXBuilder("org.ccil.cowan.tagsoup.Parser");
Reader pageReader = new StringReader(page);
return builder.build(pageReader);
}
|
Python xml.dom and bad XML
|
I'm trying to extract some data from various HTML pages using a python program. Unfortunately, some of these pages contain user-entered data which occasionally has "slight" errors - namely tag mismatching.
Is there a good way to have python's xml.dom try to correct errors or something of the sort? Alternatively, is there a better way to extract data from HTML pages which may contain errors?
|
[
"You could use HTML Tidy to clean up, or Beautiful Soup to parse. Could be that you have to save the result to a temp file, but it should work.\nCheers,\n",
"I used to use BeautifulSoup for such tasks but now I have shifted to HTML5lib (http://code.google.com/p/html5lib/) which works well in many cases where BeautifulSoup fails\nother alternative is to use \"Element Soup\" (http://effbot.org/zone/element-soup.htm) which is a wrapper for Beautiful Soup using ElementTree\n",
"lxml does a decent job at parsing invalid HTML.\nAccording to their documentation Beautiful Soup and html5lib sometimes perform better depending on the input. With lxml you can choose which parser to use, and access them via an unified API.\n",
"If jython is acceptable to you, tagsoup is very good at parsing junk - if it is, I found the jdom libraries far easier to use than other xml alternatives.\nThis is a snippet from a demo mockup to do with screen scraping from tfl's journey planner:\n\n private Document getRoutePage(HashMap params) throws Exception {\n String uri = \"http://journeyplanner.tfl.gov.uk/bcl/XSLT_TRIP_REQUEST2\";\n HttpWrapper hw = new HttpWrapper();\n String page = hw.urlEncPost(uri, params);\n SAXBuilder builder = new SAXBuilder(\"org.ccil.cowan.tagsoup.Parser\");\n Reader pageReader = new StringReader(page);\n return builder.build(pageReader);\n }\n\n"
] |
[
3,
0,
0,
0
] |
[] |
[] |
[
"dom",
"expat_parser",
"python",
"xml"
] |
stackoverflow_0001147090_dom_expat_parser_python_xml.txt
|
Q:
how would i design a db to contain a set of url regexes (python) that could be matched against an incoming url
Say I have the following set of urls in a db
url data
^(.*)google.com/search foobar
^(.*)google.com/alerts barfoo
^(.*)blah.com/foo/(.*) foofoo
... 100's more
Given any url in the wild, I would like to check to
see if that url belongs to an existing set of urls and get the
corresponding data field.
My questions are:
How would I design the db to do it
django does urlresolution by looping through each regex and checking for a match
given that there may be 1000's of urls is this the best way to approach this?
Are there any existing implementations I can look at?
A:
"2. django does urlresolution by looping through each regex and checking for a match given that there maybe 1000's of urls is this the best way to approach this?"
"3. Are there any existing implementations I can look at?"
If running a large number of regular expressions does turn out to be a problem, you should check out esmre, which is a Python extension module for speeding up large collections of regular expressions. It works by extracting the fixed strings of each regular expression and putting them in an Aho-Corasick-inspired pattern matcher to quickly eliminate almost all of the work.
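If memory serves, esmre's API at the time looked roughly like the sketch below (treat the exact signatures as an assumption and check its docs): you enter each regex together with an associated payload, then query with the input text to get back the candidate payloads:
import esmre

index = esmre.Index()
index.enter(r"^(.*)google\.com/search", "foobar")   # regex plus payload
index.enter(r"^(.*)google\.com/alerts", "barfoo")
print index.query("http://www.google.com/search?q=x")  # candidate payloads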
A:
Django has the advantage that its URLs are generally hierarchical. While the entire Django project may well have 100s or more URLs it's probably dealing only with a dozen or less patterns at a time. Do you have any structure in your URLs that you could exploit this way?
Other than that, you could try creating some kind of heuristics. E.g. finding the "fixed" parts of your patterns and then eliminating some of them and then (by a simple substring search) and only then switch to regex matching.
At the extreme end of the spectrum, you could create a product automaton. That would be super fast but the memory requirements would probably be impractical (and likely to remain so for the next few centuries).
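A rough sketch of the "fixed parts first" heuristic (patterns adapted from the question, with dots escaped): a cheap substring test prunes the candidates before any regex is run against the input:
import re

PATTERNS = [
    (re.compile(r"^(.*)google\.com/search"), "google.com/search", "foobar"),
    (re.compile(r"^(.*)google\.com/alerts"), "google.com/alerts", "barfoo"),
]

def lookup(url):
    for regex, fixed_part, data in PATTERNS:
        if fixed_part in url and regex.match(url):  # substring check first
            return data
    return None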
A:
Before determining that the django approach could not possibly work, try implementing it, and applying a typical workload. For a really thorough approach, you could actually time the cost of each regex, and that can guide you in improving the most costly and most frequently used regexes. In particular, you could arrange for the most frequently used, inexpensive regexes to be at the front of the list. This is probably a better choice than inventing a new technology to fix a problem you don't even know you have yet.
A:
You'll certainly need more care in your design of regular expressions. For example, the prefix ^(.*) will match any input - and while you may need the prefix to capture a group for various reasons, having it there will mean that you can't really eliminate any of the URLs in your database easily.
I sort of agree with TokenMacGuy's comment about the intractability of regexes, but the situation may not be completely hopeless depending on the true scale of your problem. For example, for an URL to match, then its first character should match; so for example you could pre-filter your URLs by saying which first character in the input will match that URL. So, you have a secondary table MatchingFirstCharacters which is a lookup between initial characters and URLs which match up to that initial character. (This will only work if you don't have lots of ambiguous prefixes, as I mentioned in the first paragraph of my answer.) Using this approach will mean you don't necessarily have to load all the regexes for full matching - just the ones where at least the first character matches. I suppose the idea could be generalised further, but that's an exercise for the reader ;-)
A:
The plan I'm leaning towards is one which picks off the domain name + tld from
a url, uses that as a key to find all the regexes, and then loops through
each regex in this subset to find a match.
I use two tables for this
class Urlregex(db.Model):
"""
the data field is structured as a newline separated record list
and each record is a space separated list of regex's and
dispatch key. Example of one such record
domain_tld: google.com
data:
^(.*)google.com/search(.*) google-search
"""
domain_tld = db.StringProperty()
data = db.TextProperty()
class Urldispatch(db.Model):
urlkey = db.StringProperty()
data = db.TextProperty()
So, for the cost of 2 db reads and looping through a domain specific url subset
any incoming url should be able to be matched against a large db of urls.
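A hypothetical dispatch helper for this two-table design (the naive urlparse-based domain extraction is an assumption and ignores subdomain handling):
import re
from urlparse import urlparse

def dispatch(url):
    domain_tld = urlparse(url).netloc  # naive: 'www.google.com' != 'google.com'
    entry = Urlregex.all().filter('domain_tld =', domain_tld).get()
    if entry is None:
        return None
    for record in entry.data.splitlines():
        pattern, key = record.split()          # 'regex dispatch-key' records
        if re.match(pattern, url):
            return Urldispatch.all().filter('urlkey =', key).get()
    return None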
|
how would i design a db to contain a set of url regexes (python) that could be matched against an incoming url
|
Say I have the following set of urls in a db
url data
^(.*)google.com/search foobar
^(.*)google.com/alerts barfoo
^(.*)blah.com/foo/(.*) foofoo
... 100's more
Given any url in the wild, I would like to check to
see if that url belongs to an existing set of urls and get the
corresponding data field.
My questions are:
How would I design the db to do it
django does urlresolution by looping through each regex and checking for a match
given that there may be 1000's of urls is this the best way to approach this?
Are there any existing implementations I can look at?
|
[
"\n\"2. django does urlresolution by looping through each regex and checking for a match given that there maybe 1000's of urls is this the best way to approach this?\"\n\"3. Are there any existing implementations I can look at?\"\n\nIf running a large number of regular expressions does turn out to be a problem, you should check out esmre, which is a Python extension module for speeding up large collections of regular expressions. It works by extracting the fixed strings of each regular expression and putting them in an Aho-Corasick-inspired pattern matcher to quickly eliminate almost all of the work.\n",
"Django has the advantage that its URLs are generally hierarchical. While the entire Django project may well have 100s or more URLs it's probably dealing only with a dozen or less patterns at a time. Do you have any structure in your URLs that you could exploit this way?\nOther than that, you could try creating some kind of heuristics. E.g. finding the \"fixed\" parts of your patterns and then eliminating some of them and then (by a simple substring search) and only then switch to regex matching.\nAt the extreme end of the spectrum, you could create a product automaton. That would be super fast but the memory requirements would probably be impractical (and likely to remain so for the next few centuries).\n",
"Before determining that the django approach could not possibly work, try implementing it, and applying a typical workload. For a really thourough approach, you could actually time the cost of each regex and that can guide you in improving the most costly and most frequently used regexes. In particular, you could arrange for the most frequently used, inexpensive regexes to the front of the list. This is probably a better choice than inventing a new technology to fix a problem you don't even know you have yet.\n",
"You'll certainly need more care in your design of regular expressions. For example, the prefix ^(.*) will match any input - and while you may need the prefix to capture a group for various reasons, having it there will mean that you can't really eliminate any of the URLs in your database easily. \nI sort of agree with TokenMacGuy's comment about the intractability of regexes, but the situation may not be completely hopeless depending on the true scale of your problem. For example, for an URL to match, then its first character should match; so for example you could pre-filter your URLs by saying which first character in the input will match that URL. So, you have a secondary table MatchingFirstCharacters which is a lookup between initial characters and URLs which match up to that initial character. (This will only work if you don't have lots of ambiguous prefixes, as I mentioned in the first paragraph of my answer.) Using this approach will mean you don't necessarily have to load all the regexes for full matching - just the ones where at least the first character matches. I suppose the idea could be generalised further, but that's an exercise for the reader ;-)\n",
"The plan I'm leaning towards is one which picks of the domain name + tld from\na url, uses that as a key to find out all the regexes and than loops through\neach of this regex subset to find a match.\nI use two tables for this\nclass Urlregex(db.Model):\n \"\"\"\n the data field is structured as a newline separated record list\n and each record is a space separated list of regex's and \n dispatch key. Example of one such record\n\n domain_tld: google.com\n data:\n ^(.*)google.com/search(.*) google-search\n\n \"\"\"\n domain_tld = db.StringProperty()\n data = db.TextProperty()\n\nclass Urldispatch(db.Model):\n urlkey = db.StringProperty()\n data = db.TextProperty()\n\nSo, for the cost of 2 db reads and looping through a domain specific url subset\nany incoming url should be able to be matched against a large db of urls.\n"
] |
[
1,
0,
0,
0,
0
] |
[] |
[] |
[
"python",
"regex",
"url_routing"
] |
stackoverflow_0001145955_python_regex_url_routing.txt
|
Q:
How to resume program (or exit) after opening webbrowser?
I'm making a small Python program, which calls the webbrowser module to open a URL. Opening the URL works wonderfully.
My problem is that once this line of code is reached, the program is unresponsive. How do I get the program to proceed past this line of code and continue to execute? Below is the problematic line, in context:
if viewinbrowser == "y":
print "I can definitely do that. Loading URL now!"
webbrowser.open_new(url)
print "Exiting..."
sys.exit()
The program does not get as far as executing the print "Exiting...", which I added because I noticed the program wasn't leaving the if statement for some reason.
I am running this program from the command line, in case that's important. Edit: I am running on Kubuntu 9.04 i386, using KDE 4.3 via backports. I use Firefox 3.5 as my default browser, declared in the System Settings for KDE, and it is called correctly by the program. (At least, a new tab opens up in Firefox with the desired URL—I believe that is the desired functionality.) /Edit
Also, I assume this problem would happen with pretty much any external call, but I'm very new to Python and don't know the terminology to search for on this site. (Searching for "python webbrowser" didn't yield anything helpful.) So, I apologize if it's already been discussed under a different heading!
Any suggestions?
A:
This looks like it depends on which platform you're running on.
MacOSX - returns True immediately and opens up browser window. Presumably your desired behavior.
Linux (no X) - Opens up the links textmode browser. Once this is closed, returns True.
Linux (with X) - Opens up Konquerer (in my case). Returns True immediately. Your desired behavior.
I'm guessing you're on Windows, which, as another commentor mentioned doesn't have fork. I'm also guessing that the webbrowser module uses fork internally, which is why it's not working for you on Windows. If so, then using the threading module to create a new thread that opens the webbrowser might be the easiest solution:
>>> import webbrowser
>>> import threading
>>> x=lambda: webbrowser.open_new('http://scompt.com')
>>> t=threading.Thread(target=x)
>>> t.start()
A:
The easiest thing to do here is probably to fork. I'm pretty sure this doesn't work in Windows unfortunately, since I think their process model might be different from Unix-like operating systems. The process will be similar, though.
import os
import sys
pid = os.fork()
if pid:
# we are the parent, continue on
print("This runs in a separate process from the else clause.")
else:
# child runs browser then quits.
webbrowser.open_new(url)
print("Exiting...")
sys.exit()
A:
The webbrowser module makes a system call to start a separate program (the web browser), then waits ( "blocks" ) for an exit code. This happens any time you start a program from another program. You have to either write your own function that does not block waiting for the webbrowser to exit (by using threads, fork(), or similar), or find out if the webbrowser module has a non-blocking call.
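One more non-blocking option, sketched here on the assumption that a browser binary such as firefox is on the PATH: launch it as a child process and simply don't wait for it:
import subprocess

subprocess.Popen(["firefox", "http://example.com"])  # returns immediately
print "Exiting..."  # reached right away; the browser keeps running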
|
How to resume program (or exit) after opening webbrowser?
|
I'm making a small Python program, which calls the webbrowser module to open a URL. Opening the URL works wonderfully.
My problem is that once this line of code is reached, the program is unresponsive. How do I get the program to proceed past this line of code and continue to execute? Below is the problematic line, in context:
if viewinbrowser == "y":
print "I can definitely do that. Loading URL now!"
webbrowser.open_new(url)
print "Exiting..."
sys.exit()
The program does not get as far as executing the print "Exiting...", which I added because I noticed the program wasn't leaving the if statement for some reason.
I am running this program from the command line, in case that's important. Edit: I am running on Kubuntu 9.04 i386, using KDE 4.3 via backports. I use Firefox 3.5 as my default browser, declared in the System Settings for KDE, and it is called correctly by the program. (At least, a new tab opens up in Firefox with the desired URL—I believe that is the desired functionality.) /Edit
Also, I assume this problem would happen with pretty much any external call, but I'm very new to Python and don't know the terminology to search for on this site. (Searching for "python webbrowser" didn't yield anything helpful.) So, I apologize if it's already been discussed under a different heading!
Any suggestions?
|
[
"This looks like it depends on which platform you're running on.\n\nMacOSX - returns True immediately and opens up browser window. Presumably your desired behavior.\nLinux (no X) - Open up links textmode browser. Once this is closed, returns True.\nLinux (with X) - Opens up Konquerer (in my case). Returns True immediately. Your desired behavior.\n\nI'm guessing you're on Windows, which, as another commentor mentioned doesn't have fork. I'm also guessing that the webbrowser module uses fork internally, which is why it's not working for you on Windows. If so, then using the threading module to create a new thread that opens the webbrowser might be the easiest solution:\n>>> import webbrowser\n>>> import threading\n>>> x=lambda: webbrowser.open_new('http://scompt.com')\n>>> t=threading.Thread(target=x)\n>>> t.start()\n\n",
"The easiest thing to do here is probably to fork. I'm pretty sure this doesn't work in Windows unfortunately, since I think their process model might be different from Unix-like operating systems. The process will be similar, though.\nimport os\nimport sys\n\npid = os.fork()\nif pid:\n # we are the parent, continue on\n print(\"This runs in a separate process from the else clause.\")\n\nelse:\n # child runs browser then quits.\n webbrowser.open_new(url)\n print(\"Exiting...\")\n sys.exit()\n\n",
"The webbrowser module makes a system call to start a separate program (the web browser), then waits ( \"blocks\" ) for an exit code. This happens any time you start a program from another program. You have to (A) write your own function that does not block waiting for the webbrowser to exit (by using threads, fork(), or similar), or find out if the webbrowser module has a non-blocking call.\n"
] |
[
6,
4,
0
] |
[] |
[] |
[
"browser",
"if_statement",
"python"
] |
stackoverflow_0001149233_browser_if_statement_python.txt
|
Q:
Signals registered more than once in django1.1 testserver
I've defined a signal handler function in my models.py file. At the bottom of that file, I use signals.post_save.connect(myhandler, sender=myclass) as recommended in the docs at http://docs.djangoproject.com/en/dev/topics/signals/.
However, when I run the test server, simple print-statement debugging shows that the models.py file gets imported twice and (as far as I can tell), this causes my signal handler to get registered twice. This means that every action is handled twice, which is obviously not the intended behaviour.
The first import seems to occur during the model checking phase, and the second happens right when the model itself is needed during the first request handled by the server.
Should I be registering my signals handlers elsewhere? Is this a bug in the 1.1 test server? Am I missing something else?
A:
The signature for the connect method is
def connect(self, receiver, sender=None, weak=True, dispatch_uid=None)
where the dispatch_uid parameter is an identifier used to uniquely identify a particular instance of a receiver. This will usually be a string, though it may be anything hashable. If receivers have a dispatch_uid attribute, the receiver will not be added if another receiver already exists with that dispatch_uid.
So, you could specify a dispatch_uid in your connect call to see if that eliminates the problem.
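Using the names from the question, that would look something like this (the dispatch_uid string itself is arbitrary; it just has to be unique):
from django.db.models import signals

signals.post_save.connect(myhandler, sender=myclass,
                          dispatch_uid="myapp.models.myhandler")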
|
Signals registered more than once in django1.1 testserver
|
I've defined a signal handler function in my models.py file. At the bottom of that file, I use signals.post_save.connect(myhandler, sender=myclass) as recommended in the docs at http://docs.djangoproject.com/en/dev/topics/signals/.
However, when I run the test server, simple print-statement debugging shows that the models.py file gets imported twice and (as far as I can tell), this causes my signal handler to get registered twice. This means that every action is handled twice, which is obviously not the intended behaviour.
The first import seems to occur during the model checking phase, and the second happens right when the model itself is needed during the first request handled by the server.
Should I be registering my signals handlers elsewhere? Is this a bug in the 1.1 test server? Am I missing something else?
|
[
"The signature for the connect method is \ndef connect(self, receiver, sender=None, weak=True, dispatch_uid=None)\n\nwhere the dispatch_uid parameter is an identifier used to uniquely identify a particular instance of a receiver. This will usually be a string, though it may be anything hashable. If receivers have a dispatch_uid attribute, the receiver will not be added if another receiver already exists with that dispatch_uid.\nSo, you could specify a dispatch_uid in your connect call to see if that eliminates the problem.\n"
] |
[
4
] |
[] |
[] |
[
"django",
"django_models",
"django_signals",
"python"
] |
stackoverflow_0001149317_django_django_models_django_signals_python.txt
|
Q:
How to create Classes in Python with highly constrained instances
In Python, there are examples of built-in classes with highly constrained instances. For example, "None" is the only instance of its class, and in the bool class there are only two objects, "True" and "False" (I hope I am more-or-less correct so far).
Another good example is integers: if a and b are instances of the int type then a == b implies that a is b.
Two questions:
How does one create a class with similarly constrained instances? For example we could ask for a class with exactly 5 instances. Or there could be infinitely many instances, like type int, but these are not arbitrary.
if integers form a class, why does int() give the 0 instance? compare this to a user-defined class Cl, where Cl() would give an instance of the class, not a specific unique instance, like 0. Shouldn't int() return an unspecified integer object, i.e. an integer without a specified value?
A:
You're talking about giving a class value semantics, which is typically done by creating class instances in the normal way, but remembering each one, and if a matching instance would be created, giving back the already created instance instead. In python, this can be achieved by overloading a class's __new__ method.
Brief example, say we wanted to use pairs of integers to represent coordinates, and have the proper value semantics.
class point(object):
memo = {}
def __new__(cls, x, y):
if (x, y) in cls.memo: # if it already exists,
return cls.memo[(x, y)] # return the existing instance
else: # otherwise,
newPoint = object.__new__(cls) # create it,
newPoint.x = x # initialize it, as you would in __init__
newPoint.y = y
cls.memo[(x, y)] = newPoint # memoize it,
return newPoint # and return it!
A:
Looks like #1 has been well answered already and I just want to explain a principle, related to #2, which appears to have been missed by all respondents: for most built-in types, calling the type without parameters (the "default constructor") returns an instance of that type which evaluates as false. That means an empty container for container types, a number which compares equal to zero for number types.
>>> import decimal
>>> decimal.Decimal()
Decimal("0")
>>> set()
set([])
>>> float()
0.0
>>> tuple()
()
>>> dict()
{}
>>> list()
[]
>>> str()
''
>>> bool()
False
See? Pretty regular indeed! Moreover, for mutable types, like most containers, calling the type always returns a new instance; for immutable types, like numbers and strings, it doesn't matter (it's a possible internal optimization to return new reference to an existing immutable instance, but the implementation is not required to perform such optimization, and if it does it can and often will perform them quite selectively) since it's never correct to compare immutable type instances with is or equivalently by id().
If you design a type of which some instances can evaluate as false (by having __len__ or __nonzero__ special methods in the class), it's advisable to follow the same precept (have __init__ [or __new__ for immutables], if called without arguments [[beyond self for __init__ and 'cls' for __new__ of course]], prepare a [[new, if mutable]] "empty" or "zero-like" instance of the class).
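A tiny sketch of that precept (names invented): the no-argument constructor yields an instance that evaluates as false, just like the built-ins above:
class Bag(object):
    def __init__(self, items=None):
        self.items = list(items) if items is not None else []
    def __len__(self):          # drives truth-testing in Python 2
        return len(self.items)

print bool(Bag())        # False, like list(), dict(), str()
print bool(Bag([1, 2]))  # True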
A:
For the first question you could implement the singleton class design pattern (http://en.wikipedia.org/wiki/Singleton_pattern), which is how you restrict the number of instances.
For the second question, I think this kind of explains your issue http://docs.python.org/library/stdtypes.html
Because integers are types, there are limitations to it.
Here is another resource...
http://docs.python.org/library/functions.html#built-in-funcs
A:
Create them in advance and return one of those from __new__ instead of creating a new object, or cache created instances (weakrefs are handy here) and return one of those instead of creating a new object.
Integers are special. Effectively this means you can never use identity to compare them as you would use identity to compare other objects. Since they are immutable and rarely used in any way other than a value context, this isn't much of a problem. This is done, as far as I can tell, for implementation reasons more than anything else. (And since there's no clear indication that it's an incorrect way, it's a good decision.)
Singletons such as None: Create the class with the name you want to give the variable, and then rebind the variable to the (only) instance, or delete the class afterwards. This is handy when you want to emulate an interface such as getattr, where a parameter is optional but using None is different from not providing a value.
class raise_error(object): pass
raise_error = raise_error()
def example(d, key, default=raise_error):
"""Return d[key] if key in d, else return default or
raise KeyError if default isn't supplied."""
try:
return d[key]
except KeyError:
if default is raise_error:
raise
return default
A:
To answer the more generic question of how to create constrained instances, it depends on the constraint. Both your examples above are a sort of "singleton", although the second example is a variation where you can have many instances of one class, but you will have only one per input value.
These can both be done by overriding the class' __new__ method, so that the class creates the instance if it hasn't been created already, returns it, and stores it as an attribute on the class (as has been suggested above). However, a slightly less hackish way is to use metaclasses. These are classes that change the behaviour of classes, and singletons are a great example of when to use metaclasses. And the great thing about this is that you can reuse metaclasses. By creating a Singleton metaclass, you can then use this metaclass for all the Singletons you have.
A nice Python example is on Wikipedia: http://en.wikipedia.org/wiki/Singleton_pattern#Python
Here is a variation that will create a different instance depending on parameters:
(It's not perfect. If you pass in a parameter which is a dict, it will fail, for example. But it's a start):
# Notice how you subclass not from object, but from type. You are in other words
# creating a new type of type.
class SingletonPerValue(type):
def __init__(cls, name, bases, dict):
super(SingletonPerValue, cls).__init__(name, bases, dict)
# Here we store the created instances later.
cls.instances = {}
def __call__(cls, *args, **kw):
# We make a tuple out of all parameters. This is so we can use it as a key
# This will fail if you send in unhasheable parameters.
params = args + tuple(kw.items())
# Check in cls.instances if this combination of parameter has been used:
if params not in cls.instances:
# No, this is a new combination of parameters. Create a new instance,
# and store it in the dictionary:
cls.instances[params] = super(SingletonPerValue, cls).__call__(*args, **kw)
return cls.instances[params]
class MyClass(object):
# Say that this class should use a specific metaclass:
__metaclass__ = SingletonPerValue
def __init__(self, value):
self.value = value
print 1, MyClass(1)
print 2, MyClass(2)
print 2, MyClass(2)
print 2, MyClass(2)
print 3, MyClass(3)
But there are other constraints in Python than instantiation. Many of them can be done with metaclasses. Others have shortcuts, here is a class that only allows you to set the attributes 'items' and 'fruit', for example.
class Constrained(object):
__slots__ = ['items', 'fruit']
con = Constrained()
con.items = 6
con.fruit = "Banana"
con.yummy = True
If you want restrictions on attributes, but not quite these strong, you can override __getattr__, __setattr__ and __delattr__ to make many fantastic and horrid things happen. :-) There are also packages out there that let you set constraints on attributes, etc.
A:
I think some names for the concept you're thinking about are interning and immutable objects.
As for an answer to your specific questions, I think for #1, you could look up your constrained instance in a class method and return it.
For question #2, I think it's just a matter of how you specify your class. A non-specific instance of the int class would be pretty useless, so just spec it so it's impossible to create.
A:
Note: this is not really an answer to your question, but more a comment I could not fit in the "comment" space.
Please note that a == b does NOT imply that a is b.
It is true only for the first handful of integers (the first hundred or so - I do not know exactly) and it is only an implementation detail of CPython, one that actually changed with the switch to Python 3.0.
For example:
>>> n1 = 4000
>>> n2 = 4000
>>> n1 == n2
True
>>> n1 is n2
False
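For the record, CPython 2.x caches the small integers in range(-5, 257) as an implementation detail, which is why tiny values do compare identical:
>>> a = 100
>>> b = 100
>>> a is b
True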
|
How to create Classes in Python with highly constrained instances
|
In Python, there are examples of built-in classes with highly constrained instances. For example, "None" is the only instance of its class, and in the bool class there are only two objects, "True" and "False" (I hope I am more-or-less correct so far).
Another good example is integers: if a and b are instances of the int type then a == b implies that a is b.
Two questions:
How does one create a class with similarly constrained instances? For example we could ask for a class with exactly 5 instances. Or there could be infinitely many instances, like type int, but these are not arbitrary.
if integers form a class, why does int() give the 0 instance? compare this to a user-defined class Cl, where Cl() would give an instance of the class, not a specific unique instance, like 0. Shouldn't int() return an unspecified integer object, i.e. an integer without a specified value?
|
[
"You're talking about giving a class value semantics, which is typically done by creating class instances in the normal way, but remembering each one, and if a matching instance would be created, give the already created instance instead. In python, this can be achieved by overloading a classes __new__ method. \nBrief example, say we wanted to use pairs of integers to represent coordinates, and have the proper value semantics. \nclass point(object):\n memo = {}\n def __new__(cls, x, y):\n if (x, y) in cls.memo: # if it already exists, \n return cls.memo[(x, y)] # return the existing instance\n else: # otherwise, \n newPoint = object.__new__(cls) # create it, \n newPoint.x = x # initialize it, as you would in __init__\n newPoint.y = y \n cls.memo[(x, y)] = newPoint # memoize it, \n return newPoint # and return it!\n\n\n",
"Looks like #1 has been well answered already and I just want to explain a principle, related to #2, which appears to have been missed by all respondents: for most built-in types, calling the type without parameters (the \"default constructor\") returns an instance of that type which evaluates as false. That means an empty container for container types, a number which compares equal to zero for number types.\n>>> import decimal\n>>> decimal.Decimal()\nDecimal(\"0\")\n>>> set()\nset([])\n>>> float()\n0.0\n>>> tuple()\n()\n>>> dict()\n{}\n>>> list()\n[]\n>>> str()\n''\n>>> bool()\nFalse\n\nSee? Pretty regular indeed! Moreover, for mutable types, like most containers, calling the type always returns a new instance; for immutable types, like numbers and strings, it doesn't matter (it's a possible internal optimization to return new reference to an existing immutable instance, but the implementation is not required to perform such optimization, and if it does it can and often will perform them quite selectively) since it's never correct to compare immutable type instances with is or equivalently by id().\nIf you design a type of which some instances can evaluate as false (by having __len__ or __nonzero__ special methods in the class), it's advisable to follow the same precept (have __init__ [or __new__ for immutables], if called without arguments [[beyond self for __init__ and 'cls' for __new__ of course]], prepare a [[new, if mutable]] \"empty\" or \"zero-like\" instance of the class).\n",
"For the first question you could implement a singleton class design pattern http://en.wikipedia.org/wiki/Singleton_pattern you should from that restrict the number of instances.\nFor the second question, I think this kind of explains your issue http://docs.python.org/library/stdtypes.html\nBecause integers are types, there are limitations to it.\nHere is another resource...\nhttp://docs.python.org/library/functions.html#built-in-funcs\n",
"\nCreate them in advance and return one of those from __new__ instead of creating a new object, or cache created instances (weakrefs are handy here) and return one of those instead of creating a new object.\nIntegers are special. Effectively this means you can never use identity to compare them as you would use identity to compare other objects. Since they are immutable and rarely used in any way other than a value context, this isn't much of a problem. This is done, as far as I can tell, for implementation reasons more than anything else. (And since there's no clear indication that it's an incorrect way, it's a good decision.)\n\nSingletons such as None: Create the class with the name you want to give the variable, and then rebind the variable to the (only) instance, or delete the class afterwards. This is handy when you want to emulate an interface such as getattr, where a parameter is optional but using None is different from not providing a value.\nclass raise_error(object): pass\nraise_error = raise_error()\n\ndef example(d, key, default=raise_error):\n \"\"\"Return d[key] if key in d, else return default or\n raise KeyError if default isn't supplied.\"\"\"\n try:\n return d[key]\n except KeyError:\n if default is raise_error:\n raise\n return default\n\n",
"To answer the more generic question of how to create constrained instances, it depends on the constraint. Both you examples above are a sort of \"singletons\", although the second example is a variation where you can have many instances of one class, but you will have only one per input value.\nThese can both done by overriding the class' __new__ method, so that the class creates the instances if it hasn't been created already, and both returns it, and stores it as an attribute on the class (as has been suggested above). However, a slightly less hackish way is to use metaclasses. These are classes that change the behaviour of classes, and singletons is a great example of when to use metaclasses. And the great thing about this is that you can reuse metaclasses. By creating a Singleton metaclass, you can then use this metaclass for all Singletons you have. \nA nice Python example in on Wikipedia: http://en.wikipedia.org/wiki/Singleton_pattern#Python\nHere is a variation that will create a different instance depending on parameters:\n(It's not perfect. If you pass in a parameter which is a dict, it will fail, for example. But it's a start):\n# Notice how you subclass not from object, but from type. You are in other words\n# creating a new type of type.\nclass SingletonPerValue(type):\n def __init__(cls, name, bases, dict):\n super(SingletonPerValue, cls).__init__(name, bases, dict)\n # Here we store the created instances later.\n cls.instances = {}\n\n def __call__(cls, *args, **kw):\n # We make a tuple out of all parameters. This is so we can use it as a key\n # This will fail if you send in unhasheable parameters.\n params = args + tuple(kw.items())\n # Check in cls.instances if this combination of parameter has been used:\n if params not in cls.instances:\n # No, this is a new combination of parameters. Create a new instance,\n # and store it in the dictionary:\n cls.instances[params] = super(SingletonPerValue, cls).__call__(*args, **kw)\n\n return cls.instances[params]\n\n\nclass MyClass(object):\n # Say that this class should use a specific metaclass:\n __metaclass__ = SingletonPerValue\n\n def __init__(self, value):\n self.value = value\n\nprint 1, MyClass(1)\nprint 2, MyClass(2)\nprint 2, MyClass(2)\nprint 2, MyClass(2)\nprint 3, MyClass(3)\n\nBut there are other constraints in Python than instantiation. Many of them can be done with metaclasses. Others have shortcuts, here is a class that only allows you to set the attributes 'items' and 'fruit', for example. \nclass Constrained(object):\n __slots__ = ['items', 'fruit']\n\ncon = Constrained()\ncon.items = 6\ncon.fruit = \"Banana\"\ncon.yummy = True\n\nIf you want restrictions on attributes, but not quite these strong, you can override __getattr__, __setattr__ and __delattr__ to make many fantastic and horrid things happen. :-) There are also packages out there that let you set constraints on attributes, etc.\n",
"I think some names for the concept you're thinking about are interning and immutable objects.\nAs for an answer to your specific questions, I think for #1, you could look up your constrained instance in a class method and return it. \nFor question #2, I think it's just a matter of how you specify your class. A non-specific instance of the int class would be pretty useless, so just spec it so it's impossible to create.\n",
"Note: this is not really an answer to your question, but more a comment I could not fit in the \"comment\" space.\nPlease note that a == b does NOT implies that a is b.\nIt is true only for first handfuls of integers (like first hundred or so - I do not know exactly) and it is only an implementation detail of CPython, that actually changed with the switch to Python 3.0.\nFor example:\n>>> n1 = 4000\n>>> n2 = 4000\n>>> n1 == n2\nTrue\n>>> n1 is n2\nFalse\n\n"
] |
[
5,
4,
3,
2,
2,
1,
1
] |
[] |
[] |
[
"class",
"math",
"python"
] |
stackoverflow_0001149253_class_math_python.txt
|
Q:
How to Redirect To Same Page on Failed Login
The Django framework easily handles redirecting when a user fails to log in properly. However, this redirection goes to a separate login page. I can set the template to be the same as the page I logged in on, but none of my other objects exist in the new page.
For example, I have a front page that shows a bunch of news articles. On the sidebar is a login form. When the user logs in, but fails to authenticate, I would like it to return to the front page and preserve all the news articles that show. As of current, none of the news articles show up.
How can I fix this problem? Any help is appreciated.
Edit: Remember that I have dynamic content that is being displayed, and I would like it to still display! Furthermore, the main page is not the only place a user can log in. The sidebar never changes, so the user can potentially log in from any page on the site, and all of the content on that page exactly as it was still needs to be displayed upon failure to log in.
A:
Do you want to redirect to the referring page on failed login?
... authentication code above
if user.is_authenticated():
#show success view
else:
    return HttpResponseRedirect(request.META.get('HTTP_REFERER', reverse('index')))
you might want to check that referring page url is set correctly, otherwise set it to default url (assuming that your default url is named "index").
A:
1. Use an <IFRAME> in the sidebar to call the login view -- all postbacks will happen within the iframe, so your page stays intact. If the visitor logs in successfully, you can use javascript to redirect the parent page to some other URL.
2. Use AJAX to post the login form -- achieves the same effect as (1), but it means your visitors will need to have javascript-enabled browsers.
I personally prefer to have the login on a separate page. If you're only worried about your visitors losing their current page (and not say, bound by a fussy client), you can have the login show up in a lightbox. I've used all three approaches in the past, and I'd be happy to post some code samples if you're interested.
A:
This is because redirecting to a view misses the original context you use to render the page in the first place.
You are just missing some simple logic here. You are trying to render the same template again, but with no news_article list.
I suppose (in the first place), you are rendering the template which shows you Articles as well as login form, by sending two things 1. Login Form, and 2. Articles List.
But secondly, when user fails to authenticate, you are not passing the same things again. Pass those variables again as context (you can also add error message if your form is not handling error messages).
if user.is_authenticated():
#show success view
else:
return render_to_response('same_template.html', {
'error_msg': 'Username or password you provided was incorrect',
'news_articles': NewsArticles.objects.all()[:3],
        'login_form': LoginForm(request.POST),
})
Edit: The reality is that, a context is used to render a template, and it's the complete responsibility of that template, what it wants to pass in further navigation. And as I see, if you are not passing something further, you are not getting it further.
If you want some automated context, develop your own context processor, something like the auth-context-processor, which automatically adds like 'user', always available to the template.
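A sketch of such a context processor, reusing the hypothetical NewsArticles model from above (the function then has to be listed in TEMPLATE_CONTEXT_PROCESSORS and the views must render with RequestContext for it to kick in):
def news_articles(request):
    return {'news_articles': NewsArticles.objects.all()[:3]}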
And by the way, you are going to miss that kind of context anyway, even if login is authenticated. So if that particular context is really important, either try sending the primary keys of articles along with the login form submit, or store that in global (ugliest thing ever) or just reconsider and separate the flow (good thing, I feel).
|
How to Redirect To Same Page on Failed Login
|
The Django framework easily handles redirecting when a user fails to log in properly. However, this redirection goes to a separate login page. I can set the template to be the same as the page I logged in on, but none of my other objects exist in the new page.
For example, I have a front page that shows a bunch of news articles. On the sidebar is a login form. When the user logs in, but fails to authenticate, I would like it to return to the front page and preserve all the news articles that show. As of current, none of the news articles show up.
How can I fix this problem? Any help is appreciated.
Edit: Remember that I have dynamic content that is being displayed, and I would like it to still display! Furthermore, the main page is not the only place a user can log in. The sidebar never changes, so the user can potentially log in from any page on the site, and all of the content on that page exactly as it was still needs to be displayed upon failure to log in.
|
[
"Do you want to redirect to the referring page on failed login?\n... authentication code above\n\nif user.is_authenticated():\n #show success view\nelse:\n return HttpResponseRedirect(request.META.get('HTTP_REFERER', reverse('index'))\n\nyou might want to check that referring page url is set correctly, otherwise set it to default url (assuming that your default url is named \"index\").\n",
"\nUse an <IFRAME> in the sidebar to\ncall the login view -- all postbacks\nwill happen within the iframe, so\nyour page stays intact. If the\nvisitor logs in successfully, you\ncan use javascript to redirect the\nparent page to some other URL\nUse AJAX to post the login form --\nacheives the same effect as (1), but\nit means your visitors will need to\nhave javascript-enabled browsers\n\nI personally prefer to have the login on a separate page. If you're only worried about your visitors losing their current page (and not say, bound by a fussy client), you can have the login show up in a lightbox. I've used all three approaches in the past, and I'd be happy to post some code samples if you're interested.\n",
"This is because redirecting to a view misses the original context you use to render the page in the first place.\nYou are missing just a simple logic here. You are trying to render the same template again, but with no news_article list. \nI suppose (in the first place), you are rendering the template which shows you Articles as well as login form, by sending two things 1. Login Form, and 2. Articles List. \nBut secondly, when user fails to authenticate, you are not passing the same things again. Pass those variables again as context (you can also add error message if your form is not handling error messages).\nif user.is_authenticated():\n #show success view\nelse:\n return render_to_response('same_template.html', {\n 'error_msg': 'Username or password you provided was incorrect',\n 'news_articles': NewsArticles.objects.all()[:3],\n 'login_form': LoginForm(request.POST);\n })\n\nEdit: The reality is that, a context is used to render a template, and it's the complete responsibility of that template, what it wants to pass in further navigation. And as I see, if you are not passing something further, you are not getting it further. \nIf you want some automated context, develop your own context processor, something like the auth-context-processor, which automatically adds like 'user', always available to the template.\nAnd by the way, you are going to miss that kind of context anyway, even if login is authenticated. So if that particular context is really important, either try sending the primary keys of articles along with the login form submit, or store that in global (ugliest thing ever) or just reconsider and separate the flow (good thing, I feel).\n"
] |
[
6,
1,
0
] |
[] |
[] |
[
"authentication",
"django",
"python",
"redirect"
] |
stackoverflow_0001129091_authentication_django_python_redirect.txt
|
Q:
Attribute Cache in Django - What's the point?
I was just looking over EveryBlock's source code and I noticed this code in the alerts/models.py code:
def _get_user(self):
if not hasattr(self, '_user_cache'):
from ebpub.accounts.models import User
try:
self._user_cache = User.objects.get(id=self.user_id)
except User.DoesNotExist:
self._user_cache = None
return self._user_cache
user = property(_get_user)
I've noticed this pattern around a bunch, but I don't quite understand the use. Is the whole idea to make sure that when accessing the FK on self (self = alert object) you only grab the user object once from the db? Why wouldn't you just rely upon the db caching and django's ForeignKey() field? I noticed that the model definition only holds the user id and not a foreign key field:
class EmailAlert(models.Model):
user_id = models.IntegerField()
...
Any insights would be appreciated.
A:
I don't know why this is an IntegerField; it looks like it definitely should be a ForeignKey(User) field--you lose things like select_related() here and other things because of that, too.
As to the caching, many databases don't cache results--they (or rather, the OS) will cache the data on disk needed to get the result, so looking it up a second time should be faster than the first, but it'll still take work.
It also still takes a database round-trip to look it up. In my experience, with Django, doing an item lookup can take around 0.5 to 1ms, for an SQL command to a local Postgresql server plus sometimes nontrivial overhead of QuerySet. 1ms is a lot if you don't need it--do that a few times and you can turn a 30ms request into a 35ms request.
If your SQL server isn't local and you actually have network round-trips to deal with, the numbers get bigger.
Finally, people generally expect accessing a property to be fast; when they're complex enough to cause SQL queries, caching the result is generally a good idea.
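For instance, if user_id were turned into ForeignKey(User), a sketch of the list-view case: one JOIN fetches the alerts together with their users (this assumes the field is renamed to user):
alerts = EmailAlert.objects.select_related('user')
for alert in alerts:
    print alert.user.id   # no extra query per alert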
A:
Although databases do cache things internally, there's still an overhead in going back to the db every time you want to check the value of a related field - setting up the query within Django, the network latency in connecting to the db and returning the data over the network, instantiating the object in Django, etc. If you know the data hasn't changed in the meantime - and within the context of a single web request you probably don't care if it has - it makes much more sense to get the data once and cache it, rather than querying it every single time.
One of the applications I work on has an extremely complex home page containing a huge amount of data. Previously it was carrying out over 400 db queries to render. I've refactored it now so it 'only' uses 80, using very similar techniques to the one you've posted, and you'd better believe that it gives a massive performance boost.
|
Attribute Cache in Django - What's the point?
|
I was just looking over EveryBlock's source code and I noticed this code in the alerts/models.py code:
def _get_user(self):
if not hasattr(self, '_user_cache'):
from ebpub.accounts.models import User
try:
self._user_cache = User.objects.get(id=self.user_id)
except User.DoesNotExist:
self._user_cache = None
return self._user_cache
user = property(_get_user)
I've noticed this pattern around a bunch, but I don't quite understand the use. Is the whole idea to make sure that when accessing the FK on self (self = alert object) you only grab the user object once from the db? Why wouldn't you just rely upon the db caching and django's ForeignKey() field? I noticed that the model definition only holds the user id and not a foreign key field:
class EmailAlert(models.Model):
user_id = models.IntegerField()
...
Any insights would be appreciated.
|
[
"I don't know why this is an IntegerField; it looks like it definitely should be a ForeignKey(User) field--you lose things like select_related() here and other things because of that, too.\nAs to the caching, many databases don't cache results--they (or rather, the OS) will cache the data on disk needed to get the result, so looking it up a second time should be faster than the first, but it'll still take work.\nIt also still takes a database round-trip to look it up. In my experience, with Django, doing an item lookup can take around 0.5 to 1ms, for an SQL command to a local Postgresql server plus sometimes nontrivial overhead of QuerySet. 1ms is a lot if you don't need it--do that a few times and you can turn a 30ms request into a 35ms request.\nIf your SQL server isn't local and you actually have network round-trips to deal with, the numbers get bigger.\nFinally, people generally expect accessing a property to be fast; when they're complex enough to cause SQL queries, caching the result is generally a good idea.\n",
"Although databases do cache things internally, there's still an overhead in going back to the db every time you want to check the value of a related field - setting up the query within Django, the network latency in connecting to the db and returning the data over the network, instantiating the object in Django, etc. If you know the data hasn't changed in the meantime - and within the context of a single web request you probably don't care if it has - it makes much more sense to get the data once and cache it, rather than querying it every single time.\nOne of the applications I work on has an extremely complex home page containing a huge amount of data. Previously it was carrying out over 400 db queries to render. I've refactored it now so it 'only' uses 80, using very similar techniques to the one you've posted, and you'd better believe that it gives a massive performance boost.\n"
] |
[
4,
3
] |
[] |
[] |
[
"django",
"django_models",
"python"
] |
stackoverflow_0001150368_django_django_models_python.txt
|
Q:
cleaning up when using exceptions and files in python
I've been learning python for a couple of days now and am struggling with its 'spirit'.
I'm coming from the C/C++/Java/Perl school and I understand that python is not C (at all); that's why I'm trying to understand the spirit to get the most out of it (and so far it's hard)...
My question is especially focused on exception handling and cleaning:
The code at the end of this post is meant to simulate a fairly common case of file opening/parsing where you need to close the file in case of an error...
Most samples I have seen use the 'else' clause of a try statement to close the file... which made sense to me until I realized that the error might be due to:
the opening itself (in which case there is no need to close the not-opened file)
the parsing (in which case the file needs to be closed)
The trap here is that if you use the 'else' clause of a try block then the file never gets closed if the error happens during parsing!
On the other hand, using the 'finally' clause results in an extra check, because the file_desc variable may not exist if the error happened during the open() (see comments in the code below)...
This extra check is inefficient and full of shit because any reasonable program may contain hundreds of symbols and parsing the results of dir() is a pain in the ass... Not to mention the lack of readability of such a statement...
Most other languages allow for variable definitions which could save the day here... but in python, everything seems to be implicit...
Normally, one would just declare a file_desc variable, then use many try/catch blocks for every task... one for opening, one for parsing and the last one for the closing()... no need to nest them... here I don't know a way to declare the variable... so I'm stuck right at the beginning of the problem!
So what is the spirit of Python here?
Split the opening/parsing into two different methods? How?
Use some kind of nested try/except clauses? How?
Maybe there is a way to declare the file_desc variable, so there would be no need for the extra checking... is that at all possible? Desirable?
What about the close() statement? What if it raises an error?
Thanks for your hints... here is the sample code:
class FormatError(Exception):
def __init__(self, message):
self.strerror = message
def __str__(self):
        return repr(self.strerror)
file_name = raw_input("Input a filename please: ")
try:
file_desc = open(file_name, 'r')
# read the file...
while True:
current_line = file_desc.readline()
if not current_line: break
print current_line.rstrip("\n")
# lets simulate some parsing error...
raise FormatError("oops... the file format is wrong...")
except FormatError as format_error:
print "The file {0} is invalid: {1}".format(file_name, format_error.strerror)
except IOError as io_error:
print "The file {0} could not be read: {1}".format(file_name, io_error.strerror)
else:
file_desc.close()
# finally:
# if 'file_desc' in dir() and not file_desc.closed:
# file_desc.close()
if 'file_desc' in dir():
print "The file exists and closed={0}".format(file_desc.closed)
else:
print "The file has never been defined..."
A:
The easiest way to deal with this is to use the fact that file objects in Python 2.5+ are context managers. You can use the with statement to enter a context; the context manager's __exit__ method is automatically called when exiting this with scope. The file object's context management automatically closes the file then.
try:
with file("hello.txt") as input_file:
for line in input_file:
if "hello" not in line:
raise ValueError("Every line must contain 'hello'!")
except IOError:
print "Damnit, couldn't open the file."
except:
raise
else:
print "Everything went fine!"
The open hello.txt handle will automatically be closed, and exceptions from within the with scope are propagated outside.
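For reference, here is roughly what the with statement is doing for you behind the scenes; this is a hand-written sketch of the equivalent try/finally, not the exact code Python generates:
input_file = open("hello.txt")
try:
    for line in input_file:
        if "hello" not in line:
            raise ValueError("Every line must contain 'hello'!")
finally:
    input_file.close()  # runs whether or not an exception was raised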
A:
Just a note: you can always declare a variable, and then it would become something like this:
file_desc = None
try:
file_desc = open(file_name, 'r')
except IOError, err:
pass
finally:
if file_desc:
        file_desc.close()
Of course, if you are using a newer version of Python, the construct using context manager is way better; however, I wanted to point out how you can generically deal with exceptions and variable scope in Python.
A:
As of Python 2.5, there's a with command that simplifies some of what you're fighting with. Read more about it here. Here's a transformed version of your code:
class FormatError(Exception):
def __init__(self, message):
self.strerror = message
def __str__(self):
        return repr(self.strerror)
file_name = raw_input("Input a filename please: ")
with open(file_name, 'r') as file_desc:
try:
# read the file...
while True:
current_line = file_desc.readline()
if not current_line: break
print current_line.rstrip("\n")
# lets simulate some parsing error...
raise FormatError("oops... the file format is wrong...")
except FormatError as format_error:
print "The file {0} is invalid: {1}".format(file_name, format_error.strerror)
except IOError as io_error:
print "The file {0} could not be read: {1}".format(file_name, io_error.strerror)
if 'file_desc' in dir():
print "The file exists and closed={0}".format(file_desc.closed)
else:
print "The file has never been defined..."
A:
OK, I'm an ass.
edit: and BTW, many thanks to those who already answered while I was posting this
The code below does the trick.
You must create a nested block with the 'with ... as' statement to make sure the file is cleaned up:
class FormatError(Exception):
def __init__(self, message):
self.strerror = message
def __str__(self):
        return repr(self.strerror)
file_name = raw_input("Input a filename please: ")
try:
#
# THIS IS PYTHON'S SPIRIT... no else/finally
#
with open(file_name, 'r') as file_desc:
# read the file...
while True:
current_line = file_desc.readline()
if not current_line: break
print current_line.rstrip("\n")
raise FormatError("oops... the file format is wrong...")
print "will never get here"
except FormatError as format_error:
print "The file {0} is invalid: {1}".format(file_name, format_error.strerror)
except IOError as io_error:
print "The file {0} could not be read: {1}".format(file_name, io_error.strerror)
if 'file_desc' in dir():
print "The file exists and closed={0}".format(file_desc.closed)
else:
print "The file has never been defined..."
A:
close() can, to my knowledge, hardly ever fail in practice, though strictly speaking it may raise an IOError if buffered data cannot be flushed to disk.
In fact, the file handle will be closed when garbage collected, so you don't have to do it explicitly in Python. Although it's still good programming practice to do so, obviously.
|
cleaning up when using exceptions and files in python
|
I've been learning Python for a couple of days now and am struggling with its 'spirit'.
I'm coming from the C/C++/Java/Perl school, and I understand that Python is not C (at all); that's why I'm trying to understand the spirit to get the most out of it (and so far it's hard)...
My question is especially focused on exception handling and cleaning:
The code at the end of this post is meant to simulate a fairly common case of file opening/parsing where you need to close the file in case of an error...
Most samples I have seen use the 'else' clause of a try statement to close the file... which made sense to me until I realized that the error might be due to
the opening itself (in which case
there is no need to close the not
opened file)
the parsing (in which
case the file needs to be closed)
The trap here is that if you use the 'else' clause of a try block, then the file never gets closed if the error happens during parsing!
On the other hand, using the 'finally' clause results in an extra check, because the file_desc variable may not exist if the error happened during the open (see comments in the code below)...
This extra check is inefficient and ugly, because any reasonable program may contain hundreds of symbols, and parsing the results of dir() is painful... not to mention the lack of readability of such a statement...
Most other languages allow for variable declarations, which could save the day here... but in Python, everything seems to be implicit...
Normally, one would just declare a file_desc variable, then use a try/catch block for each task... one for the opening, one for the parsing and the last one for the closing()... no need to nest them... here I don't know a way to declare the variable... so I'm stuck right at the beginning of the problem!
So what is the spirit of Python here?
Split the opening/parsing into two different methods? How?
Use some kind of nested try/except clauses? How?
Maybe there is a way to declare the file_desc variable, so there would be no need for the extra checking... is that at all possible? Desirable?
What about the close() statement? What if it raises an error?
Thanks for your hints... here is the sample code:
class FormatError(Exception):
def __init__(self, message):
self.strerror = message
def __str__(self):
        return repr(self.strerror)
file_name = raw_input("Input a filename please: ")
try:
file_desc = open(file_name, 'r')
# read the file...
while True:
current_line = file_desc.readline()
if not current_line: break
print current_line.rstrip("\n")
# lets simulate some parsing error...
raise FormatError("oops... the file format is wrong...")
except FormatError as format_error:
print "The file {0} is invalid: {1}".format(file_name, format_error.strerror)
except IOError as io_error:
print "The file {0} could not be read: {1}".format(file_name, io_error.strerror)
else:
file_desc.close()
# finally:
# if 'file_desc' in dir() and not file_desc.closed:
# file_desc.close()
if 'file_desc' in dir():
print "The file exists and closed={0}".format(file_desc.closed)
else:
print "The file has never been defined..."
|
[
"The easiest way to deal with this is to use the fact that file objects in Python 2.5+ are context managers. You can use the with statement to enter a context; the context manager's __exit__ method is automatically called when exiting this with scope. The file object's context management automatically closes the file then.\ntry:\n with file(\"hello.txt\") as input_file:\n for line in input_file:\n if \"hello\" not in line:\n raise ValueError(\"Every line must contain 'hello'!\")\nexcept IOError:\n print \"Damnit, couldn't open the file.\"\nexcept:\n raise\nelse:\n print \"Everything went fine!\"\n\nThe open hello.txt handle will automatically be closed, and exceptions from within the with scope are propagated outside.\n",
"Just a note: you can always declare a variable, and then it would become something like this:\nfile_desc = None\ntry:\n file_desc = open(file_name, 'r')\nexcept IOError, err:\n pass\nfinally:\n if file_desc:\n close(file_desc)\n\nOf course, if you are using a newer version of Python, the construct using context manager is way better; however, I wanted to point out how you can generically deal with exceptions and variable scope in Python.\n",
"As of Python 2.5, there's a with command that simplifies some of what you're fighting with. Read more about it here. Here's a transformed version of your code:\nclass FormatError(Exception):\n def __init__(self, message):\n self.strerror = message\n def __str__(self):\n return repr(message)\n\n\nfile_name = raw_input(\"Input a filename please: \")\nwith open(file_name, 'r') as file_desc:\n try:\n # read the file...\n while True:\n current_line = file_desc.readline()\n if not current_line: break\n print current_line.rstrip(\"\\n\")\n # lets simulate some parsing error...\n raise FormatError(\"oops... the file format is wrong...\")\n except FormatError as format_error:\n print \"The file {0} is invalid: {1}\".format(file_name, format_error.strerror)\n except IOError as io_error:\n print \"The file {0} could not be read: {1}\".format(file_name, io_error.strerror)\n\nif 'file_desc' in dir():\n print \"The file exists and closed={0}\".format(file_desc.closed)\nelse:\n print \"The file has never been defined...\"\n\n",
"OK, I'm an ass.\n edit:and BTW, many thanx for those who already answered while I was posting this\nThe code below does the trick.\nYou must create a nested block with the 'with as' statement to make sure the file is cleaned:\nclass FormatError(Exception):\n def __init__(self, message):\n self.strerror = message\n def __str__(self):\n return repr(message)\n\n\nfile_name = raw_input(\"Input a filename please: \")\ntry:\n #\n # THIS IS PYTHON'S SPIRIT... no else/finally\n #\n with open(file_name, 'r') as file_desc:\n # read the file...\n while True:\n current_line = file_desc.readline()\n if not current_line: break\n print current_line.rstrip(\"\\n\")\n raise FormatError(\"oops... the file format is wrong...\")\n print \"will never get here\"\nexcept FormatError as format_error:\n print \"The file {0} is invalid: {1}\".format(file_name, format_error.strerror)\nexcept IOError as io_error:\n print \"The file {0} could not be read: {1}\".format(file_name, io_error.strerror)\n\nif 'file_desc' in dir():\n print \"The file exists and closed={0}\".format(file_desc.closed)\nelse:\n print \"The file has never been defined...\"\n\n",
"Close can to my knowledge never return an error.\nIn fact, the file handle will be closed when garbage collected, so you don't have to do it explicitly in Python. Although it's still good programming to do so, obviously.\n"
] |
[
6,
3,
1,
0,
0
] |
[] |
[] |
[
"exception",
"file_io",
"python"
] |
stackoverflow_0001149983_exception_file_io_python.txt
|
Q:
html form submission in python and php is simple, can a novice do it in java?
I've made two versions of a script that submits a (https) web page form and collects the results. One version uses Snoopy.class in php, and the other uses urllib and urllib2 in python. Now I would like to make a java version.
Snoopy makes the php version exceedingly easy to write, and it runs fine on my own (OS X) machine. But it allocated too much memory, and was killed at the same point (during curl execution), when run on the pair.com web hosting service. Runs fine on dreamhost.com web hosting service.
So I decided to try a python version while I looked into what could cause the memory problem, and urllib and urllib2 made this very easy. The script runs fine. Gets about 70,000 database records, using several hundred form submissions, saving to a file of about 10MB, in about 7 minutes.
Looking into how to do this with java, I get the feeling it will not be the same walk-in-the-park as it was with php and python. Is form submission in java not for mere mortals?
I spent most of the day just trying to figure out how to set up Apache HttpClient. That is, before I gave up. If it takes me more than a few more days to sort that out, then it will be the subject of another question, I suppose.
HttpClient innovation.ch does not support https.
And WebClient looks like it will take me at least a few days to figure out.
So, php and python versions were a breeze. Can a java version be made in a few simple lines as well? If not, I'll leave it for a later day since I'm only a novice. If so, can some kind soul please point me toward the light?
Thanks.
For comparison, the essential lines of code from the two versions:
python version
import urllib
import urllib2
submitVars['firstName'] = "John"
submitVars['lastName'] = "Doe"
submitUrl = "https URL of form action goes here"
referer = "URL of referring web page goes here"
submitVarsUrlencoded = urllib.urlencode(submitVars)
req = urllib2.Request(submitUrl, submitVarsUrlencoded)
req.add_header('Referer', referer)
response = urllib2.urlopen(req)
thePage = response.read()
php version
require('Snoopy.class.php');
$snoopy = new Snoopy;
$submit_vars["first_name"] = "John";
$submit_vars["last_name"] = "Doe";
$submit_url = "https URL of form action goes here";
$snoopy->referer = "URL of referring web page goes here";
$snoopy->submit($submit_url,$submit_vars);
$the_page = $snoopy->results;
A:
Use HttpComponents http://hc.apache.org/. You need:
HttpComponents Core, direct download
HttpComponents Client, direct download
Commons Logging
Example code:
import org.apache.http.message.BasicNameValuePair;
import org.apache.http.NameValuePair;
import org.apache.http.HttpResponse;
import org.apache.http.HttpEntity;
import org.apache.http.impl.client.DefaultHttpClient;
import org.apache.http.client.entity.UrlEncodedFormEntity;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.client.HttpClient;
import java.util.ArrayList;
import java.util.List;
import java.io.OutputStream;
import java.io.ByteArrayOutputStream;
public class HttpClientTest {
public static void main(String[] args) throws Exception {
// request parameters
List<NameValuePair> formparams = new ArrayList<NameValuePair>();
formparams.add(new BasicNameValuePair("q", "quality"));
UrlEncodedFormEntity entity = new UrlEncodedFormEntity(formparams, "UTF-8");
HttpPost httppost = new HttpPost("http://stackoverflow.com/search");
httppost.setEntity(entity);
// execute the request
HttpClient httpclient = new DefaultHttpClient();
HttpResponse response = httpclient.execute(httppost);
// display the response status code
System.out.println(response.getStatusLine().getStatusCode());
// display the response body
HttpEntity responseEntity = response.getEntity();
OutputStream out = new ByteArrayOutputStream();
responseEntity.writeTo(out);
System.out.println(out);
}
}
Save it to HttpClientTest.java. Put this java file, httpcore-4.0.1.jar, httpclient-4.0-alpha4.jar and commons-logging-1.1.1.jar in the same directory. Assuming you have the Sun Java 1.6 JDK installed, compile it:
javac -cp httpcore-4.0.1.jar;httpclient-4.0-alpha4.jar;commons-logging-1.1.1.jar HttpClientTest.java
Run it (note that you pass the class name, not the .class file, and that the current directory must be on the classpath):
java -cp .;httpcore-4.0.1.jar;httpclient-4.0-alpha4.jar;commons-logging-1.1.1.jar HttpClientTest
I would argue that it is as simple in Java as it is in PHP or Python (your examples). In all cases you need:
the sdk configured
a library (with dependencies)
sample code
A:
What would be so wrong with Apache HttpClient?
Just make sure you add the dependencies also to classpath, that is HttpComponents.
PostMethod post = new PostMethod("https URL of form action goes here");
NameValuePair[] data = {
new NameValuePair("first_name", "joe"),
new NameValuePair("last_name", "Doe")
};
post.setRequestBody(data);
post.addRequestHeader("Referer", "URL of referring web page goes here");
// TODO: execute method and handle any error responses.
...
InputStream inPage = post.getResponseBodyAsStream();
// handle response.
A:
Using HttpClient is certainly the more robust solution, but this can be done without an external library dependency. See here for an example of how.
A:
MercerTraieste and Tarnschaf kindly offered partial solutions to the problem. It took me a few more days, and untold hours of brain-splitting nightmare, before I gave up trying to figure out how to add a referer to the http post, and sent a new question to stackoverflow.
Jon Skeet answered instantly that I only needed...
httppost.addHeader("Referer", referer);
...which makes me look pretty dumb. How did I overlook that one?
Here is the resulting code, based almost entirely on MercerTraieste's suggestion. In my case, I needed to download, and place in my classpath:
HttpComponents
httpclient-4.0-beta2.jar
httpcore-4.0.1.jar
Apache Commons
commons-logging-1.1.1.jar
import org.apache.http.Header;
import org.apache.http.HeaderElement;
import org.apache.http.HttpRequestInterceptor;
import org.apache.http.HttpRequest;
import org.apache.http.HttpException;
import org.apache.http.NameValuePair;
import org.apache.http.HttpResponse;
import org.apache.http.HttpEntity;
import org.apache.http.client.entity.UrlEncodedFormEntity;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.client.HttpClient;
import org.apache.http.protocol.HttpContext;
import org.apache.http.message.BasicNameValuePair;
import org.apache.http.impl.client.DefaultHttpClient;
import java.util.ArrayList;
import java.util.List;
import java.io.OutputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
public class HttpClientTest
{
public static void main(String[] args) throws Exception
{
// initialize some variables
String referer = "URL of referring web page goes here";
String submitUrl = "https URL of form action goes here";
List<NameValuePair> formparams = new ArrayList<NameValuePair>();
formparams.add(new BasicNameValuePair("firstName", "John"));
formparams.add(new BasicNameValuePair("lastName", "Doe"));
// set up httppost
UrlEncodedFormEntity entity = new UrlEncodedFormEntity(formparams, "UTF-8");
HttpPost httppost = new HttpPost(submitUrl);
httppost.setEntity(entity);
// add referer
httppost.addHeader("Referer", referer);
// create httpclient
DefaultHttpClient httpclient = new DefaultHttpClient();
// execute the request
HttpResponse response = httpclient.execute(httppost);
// display the response body
HttpEntity responseEntity = response.getEntity();
OutputStream out = new ByteArrayOutputStream();
responseEntity.writeTo(out);
System.out.println(out);
}
}
|
html form submission in python and php is simple, can a novice do it in java?
|
I've made two versions of a script that submits a (https) web page form and collects the results. One version uses Snoopy.class in php, and the other uses urllib and urllib2 in python. Now I would like to make a java version.
Snoopy makes the php version exceedingly easy to write, and it runs fine on my own (OS X) machine. But it allocated too much memory, and was killed at the same point (during curl execution), when run on the pair.com web hosting service. Runs fine on dreamhost.com web hosting service.
So I decided to try a python version while I looked into what could cause the memory problem, and urllib and urllib2 made this very easy. The script runs fine. Gets about 70,000 database records, using several hundred form submissions, saving to a file of about 10MB, in about 7 minutes.
Looking into how to do this with java, I get the feeling it will not be the same walk-in-the-park as it was with php and python. Is form submission in java not for mere mortals?
I spent most of the day just trying to figure out how to set up Apache HttpClient. That is, before I gave up. If it takes me more than a few more days to sort that out, then it will be the subject of another question, I suppose.
HttpClient innovation.ch does not support https.
And WebClient looks like it will take me at least a few days to figure out.
So, php and python versions were a breeze. Can a java version be made in a few simple lines as well? If not, I'll leave it for a later day since I'm only a novice. If so, can some kind soul please point me toward the light?
Thanks.
For comparison, the essential lines of code from the two versions:
python version
import urllib
import urllib2
submitVars['firstName'] = "John"
submitVars['lastName'] = "Doe"
submitUrl = "https URL of form action goes here"
referer = "URL of referring web page goes here"
submitVarsUrlencoded = urllib.urlencode(submitVars)
req = urllib2.Request(submitUrl, submitVarsUrlencoded)
req.add_header('Referer', referer)
response = urllib2.urlopen(req)
thePage = response.read()
php version
require('Snoopy.class.php');
$snoopy = new Snoopy;
$submit_vars["first_name"] = "John";
$submit_vars["last_name"] = "Doe";
$submit_url = "https URL of form action goes here";
$snoopy->referer = "URL of referring web page goes here";
$snoopy->submit($submit_url,$submit_vars);
$the_page = $snoopy->results;
|
[
"Use HttpComponents http://hc.apache.org/. You need:\n\nHttpComponents Core, direct download\nHttpComponents Client, direct download\nCommons Logging\n\nExample code:\nimport org.apache.http.message.BasicNameValuePair;\nimport org.apache.http.NameValuePair;\nimport org.apache.http.HttpResponse;\nimport org.apache.http.HttpEntity;\nimport org.apache.http.impl.client.DefaultHttpClient;\nimport org.apache.http.client.entity.UrlEncodedFormEntity;\nimport org.apache.http.client.methods.HttpPost;\nimport org.apache.http.client.HttpClient;\n\nimport java.util.ArrayList;\nimport java.util.List;\nimport java.io.OutputStream;\nimport java.io.ByteArrayOutputStream;\n\npublic class HttpClientTest {\n public static void main(String[] args) throws Exception {\n\n // request parameters\n List<NameValuePair> formparams = new ArrayList<NameValuePair>();\n formparams.add(new BasicNameValuePair(\"q\", \"quality\"));\n UrlEncodedFormEntity entity = new UrlEncodedFormEntity(formparams, \"UTF-8\");\n HttpPost httppost = new HttpPost(\"http://stackoverflow.com/search\");\n httppost.setEntity(entity);\n\n // execute the request\n HttpClient httpclient = new DefaultHttpClient();\n HttpResponse response = httpclient.execute(httppost);\n\n // display the response status code\n System.out.println(response.getStatusLine().getStatusCode());\n\n // display the response body\n HttpEntity responseEntity = response.getEntity();\n OutputStream out = new ByteArrayOutputStream();\n responseEntity.writeTo(out);\n System.out.println(out);\n }\n}\n\nSave it to HttpClientTest.java. Have this java file, httpcore-4.0.1.jar and httpclient-4.0-alpha4.jar in the same directory Supposing you have the sun java 1.6 jdk installed, compile it:\njavac HttpClientTest.java -cp httpcore-4.0.1.jar;httpclient-4.0-alpha4.jar;commons-logging-1.1.1.jar \n\nExecute it\njava HttpClientTest.class -cp httpcore-4.0.1.jar;httpclient-4.0-alpha4.jar;commons-logging-1.1.1.jar \n\nI would argue that is as simple in java as it is in php or python (your examples). In all cases you need:\n\nthe sdk configured\na library (with dependencies)\nsample code\n\n",
"What would be so wrong with Apache HttpClient?\nJust make sure you add the dependencies also to classpath, that is HttpComponents.\nPostMethod post = new PostMethod(\"https URL of form action goes here\");\nNameValuePair[] data = {\n new NameValuePair(\"first_name\", \"joe\"),\n new NameValuePair(\"last_name\", \"Doe\")\n};\npost.setRequestBody(data);\n\npost.addRequestHeader(\"Referer\", \"URL of referring web page goes here\");\n\n// TODO: execute method and handle any error responses.\n...\nInputStream inPage = post.getResponseBodyAsStream();\n// handle response.\n\n",
"Using HttpClient is certainly the more robust solution, but this can be done without an external library dependency. See here for an example of how.\n",
"MercerTraieste and Tarnschaf kindly offered partial solutions to the problem. It took me a few more days, and untold hours of brain-splitting nightmare, before I gave up trying to figure out how to add a referer to the http post, and sent a new question to stackoverflow.\nJon Skeet answered instantly that I only needed...\nhttppost.addHeader(\"Referer\", referer);\n\n...which makes me look pretty dumb. How did I overlook that one?\nHere is the resulting code, based almost entirely on MercerTraieste's suggestion. In my case, I needed to download, and place in my classpath:\nHttpComponents\n\nhttpclient-4.0-beta2.jar\nhttpcore-4.0.1.jar\n\nApache Commons\n\ncommons-logging-1.1.1.jar\n\n\nimport org.apache.http.Header;\nimport org.apache.http.HeaderElement;\nimport org.apache.http.HttpRequestInterceptor;\nimport org.apache.http.HttpRequest;\nimport org.apache.http.HttpException;\nimport org.apache.http.NameValuePair;\nimport org.apache.http.HttpResponse;\nimport org.apache.http.HttpEntity;\nimport org.apache.http.client.entity.UrlEncodedFormEntity;\nimport org.apache.http.client.methods.HttpPost;\nimport org.apache.http.client.HttpClient;\nimport org.apache.http.protocol.HttpContext;\nimport org.apache.http.message.BasicNameValuePair;\nimport org.apache.http.impl.client.DefaultHttpClient;\n\nimport java.util.ArrayList;\nimport java.util.List;\nimport java.io.OutputStream;\nimport java.io.ByteArrayOutputStream;\nimport java.io.IOException;\n\npublic class HttpClientTest\n{\n public static void main(String[] args) throws Exception\n {\n // initialize some variables\n String referer = \"URL of referring web page goes here\";\n String submitUrl = \"https URL of form action goes here\";\n List<NameValuePair> formparams = new ArrayList<NameValuePair>();\n formparams.add(new BasicNameValuePair(\"firstName\", \"John\"));\n formparams.add(new BasicNameValuePair(\"lastName\", \"Doe\"));\n\n // set up httppost\n UrlEncodedFormEntity entity = new UrlEncodedFormEntity(formparams, \"UTF-8\");\n HttpPost httppost = new HttpPost(submitUrl);\n httppost.setEntity(entity);\n\n // add referer\n httppost.addHeader(\"Referer\", referer);\n\n // create httpclient\n DefaultHttpClient httpclient = new DefaultHttpClient();\n\n // execute the request\n HttpResponse response = httpclient.execute(httppost);\n\n // display the response body\n HttpEntity responseEntity = response.getEntity();\n OutputStream out = new ByteArrayOutputStream();\n responseEntity.writeTo(out);\n System.out.println(out);\n }\n}\n\n"
] |
[
3,
2,
2,
2
] |
[] |
[] |
[
"http",
"java",
"php",
"python"
] |
stackoverflow_0001116921_http_java_php_python.txt
|
Q:
Python 2.5 socket._fileobject is what in Python 3.1?
I'm porting some code that runs on Python 2.5 to Python 3.1. A couple of classes subclass the socket._fileobject:
class X(socket._fileobject):
....
Is there an equivalent to socket._fileobject in Python 3.1? A quick scan of the source code doesn't turn up anything useful. Thanks!
A:
Python 3 uses SocketIO instead of _fileobject in the makefile() method, so that's probably the way to go.
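For illustration, here is a minimal sketch of subclassing socket.SocketIO in Python 3; the overridden read() and the host example.com are placeholders, not something from the original question:
import socket

class X(socket.SocketIO):  # rough 3.x analogue of 2.x's socket._fileobject
    def read(self, size=-1):
        # custom behaviour would go here
        return super().read(size)

sock = socket.create_connection(('example.com', 80))
raw = X(sock, 'rwb')  # SocketIO wraps the socket with a raw I/O interface
raw.write(b'GET / HTTP/1.0\r\nHost: example.com\r\n\r\n')
print(raw.read(64))  # first bytes of the response
raw.close()
sock.close()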
|
Python 2.5 socket._fileobject is what in Python 3.1?
|
I'm porting some code that runs on Python 2.5 to Python 3.1. A couple of classes subclass the socket._fileobject:
class X(socket._fileobject):
....
Is there an equivalent to socket._fileobject in Python 3.1? A quick scan of the source code doesn't turn up anything useful. Thanks!
|
[
"Python 3 uses SocketIO instead of _fileobject in the makefile() method, so that's probably the way to go. \n"
] |
[
1
] |
[] |
[] |
[
"python",
"python_3.x"
] |
stackoverflow_0001150653_python_python_3.x.txt
|
Q:
Best Technology for a medical 3D Planning Software
I am looking to build a new Interactive 3D planning software similar to this one http://www.materialise.com/materialise/view/en/131410-SimPlant.html
I was looking for some expert advice about the best technologies to use to build the different components of the software (i.e. UI, image processing, visualization, 3D, etc.)
The software needs to be able to process the images very quickly, and at the same time I need to be able to deliver the software to the market fast, so the technologies used should allow for both rapid application development and high performance. Any advice would be appreciated.
A:
The Python Imaging Library, PIL, is a good compromise between speed-to-market and good performance (and you can always use scipy and its core part, numpy, to enrich it for more advanced image-processing needs, if you pick Python as your pivot language!-). Similarly, visualization (including 3D) are excellently covered in third-party Python extensions -- check out EPD, the Enthought Python Distribution, for a good idea of what libraries might best help you in such tasks (you can always build your own versions if you don't want to partner with Enthought for commercial distribution... but it might be worth checking them out, as they have excellent commercial contacts as well as tech skills;-).
When and if you want to dip down into C++ for some specific component, Boost.Python, SIP, or Cython will make it child's play to integrate the component into your Python mainstream. For UI &c, PyQt is great...
In other words, while I'm obviously biased, in your shoes I'd unhesitatingly go for Python as the "core" and investigate the various options I've mentioned for visualization, UI, etc, etc. One caveat: for quick time-to-market, stick with Python 2.6: the newest 3.1, while great in many respects, is likely to still miss compatible versions of many third party extensions that will make your life way easier and sweeter with Python 2.6!
A:
Take a look at VTK (vtk.org) for a general-purpose visualization toolkit, and ITK (itk.org),
which is an image analysis toolkit built on top of VTK. Both are BSD licensed.
A:
ITK is not built on top of VTK, although they are related. One can easily process data with ITK and then switch to a VTK pipeline for the visualization and interaction functionality.
We have built fairly large and complex medical image processing and visualization applications (experimental, surgical planning) in Python using a combination of VTK, ITK and wxPython. The licensing of all these components is such that you can use them in commercial applications.
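As a taste of how lightweight this can be from Python, here is a minimal VTK viewer sketch; the directory name 'dicom' is a placeholder for a DICOM series, and this is only an illustration of the VTK pipeline style, not code from any of the products mentioned:
import vtk

reader = vtk.vtkDICOMImageReader()
reader.SetDirectoryName('dicom')  # placeholder path to a DICOM series

viewer = vtk.vtkImageViewer2()  # interactive 2D slice viewer
viewer.SetInputConnection(reader.GetOutputPort())

iren = vtk.vtkRenderWindowInteractor()
viewer.SetupInteractor(iren)
viewer.Render()
iren.Start()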
|
Best Technology for a medical 3D Planning Software
|
I am looking to build a new Interactive 3D planning software similar to this one http://www.materialise.com/materialise/view/en/131410-SimPlant.html
I was looking for some expert advice about the best technologies to use to build the different components of the software (i.e. UI, image processing, visualization, 3D, etc.)
The software needs to be able to process the images very quickly, and at the same time I need to be able to deliver the software to the market fast, so the technologies used should allow for both rapid application development and high performance. Any advice would be appreciated.
|
[
"The Python Imaging Library, PIL, is a good compromise between speed-to-market and good performance (and you can always use scipy and its core part, numpy, to enrich it for more advanced image-processing needs, if you pick Python as your pivot language!-). Similarly, visualization (including 3D) are excellently covered in third-party Python extensions -- check out EPD, the Enthought Python Distribution, for a good idea of what libraries might best help you in such tasks (you can always build your own versions if you don't want to partner with Enthought for commercial distribution... but it might be worth checking them out, as they have excellent commercial contacts as well as tech skills;-).\nWhen and if you want to dip down into C++ for some specific component, Boost.Python, SIP, or Cython will make it child's play to integrate the component into your Python mainstream. For UI &c, PyQt is great...\nIn other words, while I'm obviously biased, in your shoes I'd unhesitatingly go for Python as the \"core\" and investigate the various options I've mentioned for visualization, UI, etc, etc. One caveat: for quick time-to-market, stick with Python 2.6: the newest 3.1, while great in many respects, is likely to still miss compatible versions of many third party extensions that will make your life way easier and sweeter with Python 2.6!\n",
"Take a look at VTK (vtk.org) for an general purpose visualization toolkit and the ITK (itk.org) \nwhich is an image analysis toolkit built on top of vtk. Both are BSD licensed.\n",
"ITK is not built on top of VTK, although they are related. One can easily process data with ITK and then switch to a VTK pipeline for the visualization and interaction functionality.\nWe have built fairly large and complex medical image processing and visualization applications (experimental, surgical planning) in Python using a combination of VTK, ITK and wxPython. The licensing of all these components is such that you can use them in commercial applications.\n"
] |
[
2,
2,
0
] |
[] |
[] |
[
"3d",
"c++",
"image_manipulation",
"image_processing",
"python"
] |
stackoverflow_0001056600_3d_c++_image_manipulation_image_processing_python.txt
|
Q:
python saving unicode into file
I'm having some trouble figuring out how to save Unicode into a file in Python. I have the following code, and if I run it in a script test.py, it should create a new file called priceinfo.txt and write what's in price_info to the file. But I do not see the file; can anyone enlighten me on what could be the problem?
Thanks a lot!
price_info = u'it costs \u20ac 5'
f = codecs.open('priceinfo.txt','wb','utf-8')
f.write(price_info)
f.close()
A:
I can think of several reasons:
the file gets created, but in a different directory. Be certain what the working
directory of the script is.
you don't have permission to create the file, in the directory where you want to create it.
you have some error in your Python script, and it does not get executed at all.
To find out which one it is, run the script in a command window, and check for any error output that you get.
A:
Assuming no error messages from the program (which would be the result of forgetting to import the codecs module), are you sure you're looking in the right place? That code writes priceinfo.txt in the current working directory (IOW are you sure that you're looking inside the working directory?)
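Putting the two answers together, a corrected version of the script might look like this; note the explicit import codecs (missing from the code in the question) and the printout of the absolute path, which tells you exactly where to look for the file:
import codecs
import os

price_info = u'it costs \u20ac 5'
print os.path.abspath('priceinfo.txt')  # where the file will be written

f = codecs.open('priceinfo.txt', 'wb', 'utf-8')
f.write(price_info)
f.close()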
|
python saving unicode into file
|
I'm having some trouble figuring out how to save Unicode into a file in Python. I have the following code, and if I run it in a script test.py, it should create a new file called priceinfo.txt and write what's in price_info to the file. But I do not see the file; can anyone enlighten me on what could be the problem?
Thanks a lot!
price_info = u'it costs \u20ac 5'
f = codecs.open('priceinfo.txt','wb','utf-8')
f.write(price_info)
f.close()
|
[
"I can think of several reasons:\n\nthe file gets created, but in a different directory. Be certain what the working \ndirectory of the script is.\nyou don't have permission to create the file, in the directory where you want to create it.\nyou have some error in your Python script, and it does not get executed at all.\n\nTo find out which one it is, run the script in a command window, and check for any error output that you get.\n",
"Assuming no error messages from the program (which would be the result of forgetting to import the codecs module), are you sure you're looking in the right place? That code writes priceinfo.txt in the current working directory (IOW are you sure that you're looking inside the working directory?)\n"
] |
[
3,
1
] |
[] |
[] |
[
"file",
"python",
"unicode"
] |
stackoverflow_0001150994_file_python_unicode.txt
|
Q:
$_SERVER vs. WSGI environ parameter
I'm designing a site. It is in a very early stage, and I have to make a decision whether or not to use a SingleSignOn service provided by the server. (it's a campus site, and more and more sites are using SSO here, so generally it's a good idea).
The target platform is most probably going to be django via mod_wsgi. However, any documentation provided with this service features php code. This method heavily relies on using custom $_SERVER['HTTPsomething'] variables. Unfortunately, right now I don't have access to this environment.
(How) can I access these custom variables in Django? According to the WSGI documentation, the environ variable should contain as many variables as possible. Can I be sure that I can access them?
A:
In Django, the server environment variables are provided as dictionary members of the META attribute on the request object - so in your view, you can always access them via request.META['foo'] where foo is the name of the variable.
An easy way to see what is available is to create a view containing assert False to trigger an error. As long as you're running with DEBUG=True, you'll see a nice error page containing lots of information about the server status, including a full list of all the request attributes.
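For example, a throwaway view along these lines will show whether the SSO variable arrives; the key HTTP_X_SSO_USER is a made-up placeholder, so substitute whatever header your SSO documentation specifies (custom headers appear in META with an HTTP_ prefix, uppercased, with dashes turned into underscores):
from django.http import HttpResponse

def whoami(request):
    user = request.META.get('HTTP_X_SSO_USER', 'not set')  # hypothetical SSO header
    return HttpResponse('SSO user: %s' % user)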
A:
To determine the set of variables passed in the raw WSGI environment, before Django does anything to them, put the following code in the WSGI script file in place of your Django stuff.
import StringIO
def application(environ, start_response):
headers = []
headers.append(('Content-type', 'text/plain'))
start_response('200 OK', headers)
input = environ['wsgi.input']
output = StringIO.StringIO()
keys = environ.keys()
keys.sort()
for key in keys:
print >> output, '%s: %s' % (key, repr(environ[key]))
print >> output
length = int(environ.get('CONTENT_LENGTH', '0'))
output.write(input.read(length))
return [output.getvalue()]
It will display back to the browser the set of key/value pairs.
Finding out how the SSO mechanism works is important. If it does the sensible thing, you will possibly find that it sets REMOTE_USER and possibly AUTH_TYPE variables. If REMOTE_USER is set, it is an indicator that the user named in the variable has been authenticated by some higher-level authentication mechanism in Apache. These variables would normally be set for HTTP Basic and Digest authentication, but to work with as many systems as possible, an SSO mechanism should also use them.
If they are set, then there is a Django feature, described at:
http://docs.djangoproject.com/en/dev/howto/auth-remote-user/
which can then be used to have Django accept authentication done at a higher level.
Even if the SSO mechanism doesn't use REMOTE_USER, but instead uses custom headers, you can use a custom WSGI wrapper around the whole Django application to translate any custom headers to a suitable REMOTE_USER value which Django can then make use of.
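A sketch of such a wrapper, to be adapted in the WSGI script file; again, HTTP_X_SSO_USER is a hypothetical name standing in for whatever header the SSO actually sets:
import django.core.handlers.wsgi

class SSORemoteUserMiddleware(object):
    def __init__(self, application):
        self.application = application

    def __call__(self, environ, start_response):
        user = environ.get('HTTP_X_SSO_USER')  # hypothetical SSO header
        if user:
            environ['REMOTE_USER'] = user  # what Django's RemoteUserBackend expects
        return self.application(environ, start_response)

application = SSORemoteUserMiddleware(django.core.handlers.wsgi.WSGIHandler())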
A:
Well, $_SERVER is PHP. You are likely to be able to access the same variables via WSGI as well, but to be sure you need to figure out exactly how the SSO works, so you know what creates these variables (probably Apache) and that you can access them.
Or, you can get yourself access and try it out. :)
|
$_SERVER vs. WSGI environ parameter
|
I'm designing a site. It is in a very early stage, and I have to make a decision whether or not to use a SingleSignOn service provided by the server. (it's a campus site, and more and more sites are using SSO here, so generally it's a good idea).
The target platform is most probably going to be django via mod_wsgi. However, any documentation provided with this service features php code. This method heavily relies on using custom $_SERVER['HTTPsomething'] variables. Unfortunately, right now I don't have access to this environment.
(How) can I access these custom variables in Django? According to the WSGI documentation, the environ variable should contain as many variables as possible. Can I be sure that I can access them?
|
[
"In Django, the server environment variables are provided as dictionary members of the META attribute on the request object - so in your view, you can always access them via request.META['foo'] where foo is the name of the variable.\nAn easy way to see what is available is to create a view containing assert False to trigger an error. As long as you're running with DEBUG=True, you'll see a nice error page containing lots of information about the server status, including a full list of all the request attributes. \n",
"To determine the set of variables passed in the raw WSGI environment, before Django does anything to them, put the following code in the WSGI script file in place of your Django stuff.\nimport StringIO\n\ndef application(environ, start_response):\n headers = []\n headers.append(('Content-type', 'text/plain'))\n\n start_response('200 OK', headers)\n\n input = environ['wsgi.input']\n output = StringIO.StringIO()\n\n keys = environ.keys()\n keys.sort()\n for key in keys:\n print >> output, '%s: %s' % (key, repr(environ[key]))\n print >> output\n\n length = int(environ.get('CONTENT_LENGTH', '0'))\n output.write(input.read(length))\n\n return [output.getvalue()]\n\nIt will display back to the browser the set of key/value pairs.\nFinding out how the SSO mechanism works is important. If it does the sensible thing, you will possibly find that it sets REMOTE_USER and possibly AUTH_TYPE variables. If REMOTE_USER is set it is an indicator that the user named in the variable has been authenticated by some higher level authentication mechanism in Apache. These variables would normally be set for HTTP Basic and Digest authentication, but to work with as many systems as possible, a SSO mechanism, should also use them.\nIf they are set, then there is a Django feature, described at:\nhttp://docs.djangoproject.com/en/dev/howto/auth-remote-user/\nwhich can then be used to have Django accept authentication done at a higher level.\nEven if the SSO mechanism doesn't use REMOTE_USER, but instead uses custom headers, you can use a custom WSGI wrapper around the whole Django application to translate any custom headers to a suitable REMOTE_USER value which Django can then make use of.\n",
"Well, $_SERVER is PHP. You are likely to be able to access the same variables via WSGI as well, but to be sure you need to figure out exactly how the SSO works, so you know what creates these variables (probably Apache) and that you can access them.\nOr, you can get yourself access and try it out. :)\n"
] |
[
6,
3,
0
] |
[] |
[] |
[
"django",
"environment_variables",
"php",
"python"
] |
stackoverflow_0001149881_django_environment_variables_php_python.txt
|
Q:
A question regarding string instance uniqueness in python
I was trying to figure out which integers python only instantiates once (-6 to 256 it seems), and in the process stumbled on some string behaviour I can't see the pattern in. Sometimes, equal strings created in different ways share the same id, sometimes not. This code:
A = "10000"
B = "10000"
C = "100" + "00"
D = "%i"%10000
E = str(10000)
F = str(10000)
G = str(100) + "00"
H = "0".join(("10","00"))
for obj in (A,B,C,D,E,F,G,H):
print obj, id(obj), obj is A
prints:
10000 4959776 True
10000 4959776 True
10000 4959776 True
10000 4959776 True
10000 4959456 False
10000 4959488 False
10000 4959520 False
10000 4959680 False
I don't even see the pattern - save for the fact that the first four don't have an explicit function call - but surely that can't be it, since the "+" in C for example implies a function call to add. I especially don't understand why C and G are different, seeing as that implies that the ids of the components of the addition are more important than the outcome.
So, what is the special treatment that A-D undergo, making them come out as the same instance?
A:
In terms of language specification, any compliant Python compiler and runtime is fully allowed, for any instance of an immutable type, to make a new instance OR find an existing instance of the same type that's equal to the required value and use a new reference to that same instance. This means it's always incorrect to use is or by-id comparison among immutables, and any minor release may tweak or change strategy in this matter to enhance optimization.
In terms of implementations, the tradeoff are pretty clear: trying to reuse an existing instance may mean time spent (perhaps wasted) trying to find such an instance, but if the attempt succeeds then some memory is saved (as well as the time to allocate and later free the memory bits needed to hold a new instance).
How to solve those implementation tradeoffs is not entirely obvious -- if you can identify heuristics that indicate that finding a suitable existing instance is likely and the search (even if it fails) will be fast, then you may want to attempt the search-and-reuse when the heuristics suggest it, but skip it otherwise.
In your observations you seem to have found a particular dot-release implementation that performs a modicum of peephole optimization when that's entirely safe, fast, and simple, so the assignments A to D all boil down to exactly the same as A (but E to F don't, as they involve named functions or methods that the optimizer's authors may reasonably have considered not 100% safe to assume semantics for -- and low-ROI if that was done -- so they're not peephole-optimized).
Thus, A to D reusing the same instance boils down to A and B doing so (as C and D get peephole-optimized to exactly the same construct).
That reuse, in turn, clearly suggests compiler tactics/optimizer heuristics whereby identical literal constants of an immutable type in the same function's local namespace are collapsed to references to just one instance in the function's .func_code.co_consts (to use current CPython's terminology for attributes of functions and code objects) -- reasonable tactics and heuristics, as reuse of the same immutable constant literal within one function are somewhat frequent, AND the price is only paid once (at compile time) while the advantage is accrued many times (every time the function runs, maybe within loops etc etc).
(It so happens that these specific tactics and heuristics, given their clearly-positive tradeoffs, have been pervasive in all recent versions of CPython, and, I believe, IronPython, Jython, and PyPy as well;-).
This is a somewhat worthy and interesting area of study if you're planning to write compilers, runtime environments, peephole optimizers, etc etc, for Python itself or similar languages. I guess that deep study of the internals (ideally of many different correct implementations, of course, so as not to fixate on the quirks of a specific one -- good thing Python currently enjoys at least 4 separate production-worthy implementations, not to mention several versions of each!) can also help, indirectly, make one a better Python programmer -- but it's particularly important to focus on what's guaranteed by the language itself, which is somewhat less than what you'll find in common among separate implementations, because the parts that "just happen" to be in common right now (without being required to be so by the language specs) may perfectly well change under you at the next point release of one or another implementation and, if your production code was mistakenly relying on such details, that might cause nasty surprises;-). Plus -- it's hardly ever necessary, or even particularly helpful, to rely on such variable implementation details rather than on language-mandated behavior (unless you're coding something like an optimizer, debugger, profiler, or the like, of course;-).
A:
Python is allowed to inline string constants; A,B,C,D are actually the same literals (if Python sees a constant expression, it treats it as a constant).
str is actually a class, so str(whatever) is calling this class' constructor, which should yield a fresh object. This explains E,F,G (note that each of these has separate identity).
As for H, I am not sure, but I'd go for the explanation that this expression is too complicated for Python to figure out it's actually a constant, so it computes a new string.
A:
I believe short strings that can be evaluated at compile time, will be interned automatically. In the last examples, the result can't be evaluated at compile time because str or join might be redefined.
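You can see interning at work directly in CPython 2.x with the built-in intern(); this is just an illustration of the mechanism, not something the compiler requires you to call:
a = intern("10000")
b = intern(str(10000))  # built at runtime, then interned
print a is b  # True: both names now refer to the single interned instance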
A:
In answer to S.Lott's suggestion of examining the byte code:
import dis
def moo():
A = "10000"
B = "10000"
C = "100" + "00"
D = "%i"%10000
E = str(10000)
F = str(10000)
G = "1000"+str(0)
H = "0".join(("10","00"))
I = str("10000")
for obj in (A,B,C,D,E,F,G,H, I):
print obj, id(obj), obj is A
moo()
print dis.dis(moo)
yields:
10000 4968128 True
10000 4968128 True
10000 4968128 True
10000 4968128 True
10000 2840928 False
10000 2840896 False
10000 2840864 False
10000 2840832 False
10000 4968128 True
4 0 LOAD_CONST 1 ('10000')
3 STORE_FAST 0 (A)
5 6 LOAD_CONST 1 ('10000')
9 STORE_FAST 1 (B)
6 12 LOAD_CONST 10 ('10000')
15 STORE_FAST 2 (C)
7 18 LOAD_CONST 11 ('10000')
21 STORE_FAST 3 (D)
8 24 LOAD_GLOBAL 0 (str)
27 LOAD_CONST 5 (10000)
30 CALL_FUNCTION 1
33 STORE_FAST 4 (E)
9 36 LOAD_GLOBAL 0 (str)
39 LOAD_CONST 5 (10000)
42 CALL_FUNCTION 1
45 STORE_FAST 5 (F)
10 48 LOAD_CONST 6 ('1000')
51 LOAD_GLOBAL 0 (str)
54 LOAD_CONST 7 (0)
57 CALL_FUNCTION 1
60 BINARY_ADD
61 STORE_FAST 6 (G)
11 64 LOAD_CONST 8 ('0')
67 LOAD_ATTR 1 (join)
70 LOAD_CONST 12 (('10', '00'))
73 CALL_FUNCTION 1
76 STORE_FAST 7 (H)
12 79 LOAD_GLOBAL 0 (str)
82 LOAD_CONST 1 ('10000')
85 CALL_FUNCTION 1
88 STORE_FAST 8 (I)
14 91 SETUP_LOOP 66 (to 160)
94 LOAD_FAST 0 (A)
97 LOAD_FAST 1 (B)
100 LOAD_FAST 2 (C)
103 LOAD_FAST 3 (D)
106 LOAD_FAST 4 (E)
109 LOAD_FAST 5 (F)
112 LOAD_FAST 6 (G)
115 LOAD_FAST 7 (H)
118 LOAD_FAST 8 (I)
121 BUILD_TUPLE 9
124 GET_ITER
>> 125 FOR_ITER 31 (to 159)
128 STORE_FAST 9 (obj)
15 131 LOAD_FAST 9 (obj)
134 PRINT_ITEM
135 LOAD_GLOBAL 2 (id)
138 LOAD_FAST 9 (obj)
141 CALL_FUNCTION 1
144 PRINT_ITEM
145 LOAD_FAST 9 (obj)
148 LOAD_FAST 0 (A)
151 COMPARE_OP 8 (is)
154 PRINT_ITEM
155 PRINT_NEWLINE
156 JUMP_ABSOLUTE 125
>> 159 POP_BLOCK
>> 160 LOAD_CONST 0 (None)
163 RETURN_VALUE
so it would seem that indeed the compiler understands A-D to mean the same thing, and so it saves memory by only generating the string once (as suggested by Alex, Maciej and Greg). (The added case I seems to just be str() realising it's trying to make a string from a string, and passing it through.)
Thanks everyone, that's a lot clearer now.
|
A question regarding string instance uniqueness in python
|
I was trying to figure out which integers python only instantiates once (-6 to 256 it seems), and in the process stumbled on some string behaviour I can't see the pattern in. Sometimes, equal strings created in different ways share the same id, sometimes not. This code:
A = "10000"
B = "10000"
C = "100" + "00"
D = "%i"%10000
E = str(10000)
F = str(10000)
G = str(100) + "00"
H = "0".join(("10","00"))
for obj in (A,B,C,D,E,F,G,H):
print obj, id(obj), obj is A
prints:
10000 4959776 True
10000 4959776 True
10000 4959776 True
10000 4959776 True
10000 4959456 False
10000 4959488 False
10000 4959520 False
10000 4959680 False
I don't even see the pattern - save for the fact that the first four don't have an explicit function call - but surely that can't be it, since the "+" in C for example implies a function call to add. I especially don't understand why C and G are different, seeing as that implies that the ids of the components of the addition are more important than the outcome.
So, what is the special treatment that A-D undergo, making them come out as the same instance?
|
[
"In terms of language specification, any compliant Python compiler and runtime is fully allowed, for any instance of an immutable type, to make a new instance OR find an existing instance of the same type that's equal to the required value and use a new reference to that same instance. This means it's always incorrect to use is or by-id comparison among immutables, and any minor release may tweak or change strategy in this matter to enhance optimization.\nIn terms of implementations, the tradeoff are pretty clear: trying to reuse an existing instance may mean time spent (perhaps wasted) trying to find such an instance, but if the attempt succeeds then some memory is saved (as well as the time to allocate and later free the memory bits needed to hold a new instance).\nHow to solve those implementation tradeoffs is not entirely obvious -- if you can identify heuristics that indicate that finding a suitable existing instance is likely and the search (even if it fails) will be fast, then you may want to attempt the search-and-reuse when the heuristics suggest it, but skip it otherwise.\nIn your observations you seem to have found a particular dot-release implementation that performs a modicum of peephole optimization when that's entirely safe, fast, and simple, so the assignments A to D all boil down to exactly the same as A (but E to F don't, as they involve named functions or methods that the optimizer's authors may reasonably have considered not 100% safe to assume semantics for -- and low-ROI if that was done -- so they're not peephole-optimized).\nThus, A to D reusing the same instance boils down to A and B doing so (as C and D get peephole-optimized to exactly the same construct).\nThat reuse, in turn, clearly suggests compiler tactics/optimizer heuristics whereby identical literal constants of an immutable type in the same function's local namespace are collapsed to references to just one instance in the function's .func_code.co_consts (to use current CPython's terminology for attributes of functions and code objects) -- reasonable tactics and heuristics, as reuse of the same immutable constant literal within one function are somewhat frequent, AND the price is only paid once (at compile time) while the advantage is accrued many times (every time the function runs, maybe within loops etc etc).\n(It so happens that these specific tactics and heuristics, given their clearly-positive tradeoffs, have been pervasive in all recent versions of CPython, and, I believe, IronPython, Jython, and PyPy as well;-).\nThis is a somewhat worthy and interesting are of study if you're planning to write compilers, runtime environments, peephole optimizers, etc etc, for Python itself or similar languages. I guess that deep study of the internals (ideally of many different correct implementations, of course, so as not to fixate on the quirks of a specific one -- good thing Python currently enjoys at least 4 separate production-worthy implementations, not to mention several versions of each!) 
can also help, indirectly, make one a better Python programmer -- but it's particularly important to focus on what's guaranteed by the language itself, which is somewhat less than what you'll find in common among separate implementations, because the parts that \"just happen\" to be in common right now (without being required to be so by the language specs) may perfectly well change under you at the next point release of one or another implementation and, if your production code was mistakenly relying on such details, that might cause nasty surprises;-). Plus -- it's hardly ever necessary, or even particularly helpful, to rely on such variable implementation details rather than on language-mandated behavior (unless you're coding something like an optimizer, debugger, profiler, or the like, of course;-).\n",
"Python is allowed to inline string constants; A,B,C,D are actually the same literals (if Python sees a constant expression, it treats it as a constant).\nstr is actually a class, so str(whatever) is calling this class' constructor, which should yield a fresh object. This explains E,F,G (note that each of these has separate identity).\nAs for H, I am not sure, but I'd go for explanation that this expression is too complicated for Python to figure out it's actually a constant, so it computes a new string.\n",
"I believe short strings that can be evaluated at compile time, will be interned automatically. In the last examples, the result can't be evaluated at compile time because str or join might be redefined.\n",
"in answer to S.Lott's suggestion of examining the byte code:\nimport dis\ndef moo():\n A = \"10000\"\n B = \"10000\"\n C = \"100\" + \"00\"\n D = \"%i\"%10000\n E = str(10000)\n F = str(10000)\n G = \"1000\"+str(0)\n H = \"0\".join((\"10\",\"00\"))\n I = str(\"10000\")\n\n for obj in (A,B,C,D,E,F,G,H, I):\n print obj, id(obj), obj is A\nmoo()\nprint dis.dis(moo)\n\nyields:\n10000 4968128 True\n10000 4968128 True\n10000 4968128 True\n10000 4968128 True\n10000 2840928 False\n10000 2840896 False\n10000 2840864 False\n10000 2840832 False\n10000 4968128 True\n 4 0 LOAD_CONST 1 ('10000')\n 3 STORE_FAST 0 (A)\n\n 5 6 LOAD_CONST 1 ('10000')\n 9 STORE_FAST 1 (B)\n\n 6 12 LOAD_CONST 10 ('10000')\n 15 STORE_FAST 2 (C)\n\n 7 18 LOAD_CONST 11 ('10000')\n 21 STORE_FAST 3 (D)\n\n 8 24 LOAD_GLOBAL 0 (str)\n 27 LOAD_CONST 5 (10000)\n 30 CALL_FUNCTION 1\n 33 STORE_FAST 4 (E)\n\n 9 36 LOAD_GLOBAL 0 (str)\n 39 LOAD_CONST 5 (10000)\n 42 CALL_FUNCTION 1\n 45 STORE_FAST 5 (F)\n\n 10 48 LOAD_CONST 6 ('1000')\n 51 LOAD_GLOBAL 0 (str)\n 54 LOAD_CONST 7 (0)\n 57 CALL_FUNCTION 1\n 60 BINARY_ADD \n 61 STORE_FAST 6 (G)\n\n 11 64 LOAD_CONST 8 ('0')\n 67 LOAD_ATTR 1 (join)\n 70 LOAD_CONST 12 (('10', '00'))\n 73 CALL_FUNCTION 1\n 76 STORE_FAST 7 (H)\n\n 12 79 LOAD_GLOBAL 0 (str)\n 82 LOAD_CONST 1 ('10000')\n 85 CALL_FUNCTION 1\n 88 STORE_FAST 8 (I)\n\n 14 91 SETUP_LOOP 66 (to 160)\n 94 LOAD_FAST 0 (A)\n 97 LOAD_FAST 1 (B)\n 100 LOAD_FAST 2 (C)\n 103 LOAD_FAST 3 (D)\n 106 LOAD_FAST 4 (E)\n 109 LOAD_FAST 5 (F)\n 112 LOAD_FAST 6 (G)\n 115 LOAD_FAST 7 (H)\n 118 LOAD_FAST 8 (I)\n 121 BUILD_TUPLE 9\n 124 GET_ITER \n >> 125 FOR_ITER 31 (to 159)\n 128 STORE_FAST 9 (obj)\n\n 15 131 LOAD_FAST 9 (obj)\n 134 PRINT_ITEM \n 135 LOAD_GLOBAL 2 (id)\n 138 LOAD_FAST 9 (obj)\n 141 CALL_FUNCTION 1\n 144 PRINT_ITEM \n 145 LOAD_FAST 9 (obj)\n 148 LOAD_FAST 0 (A)\n 151 COMPARE_OP 8 (is)\n 154 PRINT_ITEM \n 155 PRINT_NEWLINE \n 156 JUMP_ABSOLUTE 125\n >> 159 POP_BLOCK \n >> 160 LOAD_CONST 0 (None)\n 163 RETURN_VALUE \n\nso it would seem that indeed the compiler understands A-D to mean the same thing, and so it saves memory by only generating it once (as suggested by Alex,Maciej and Greg). (added case I seems to just be str() realising it's trying to make a string from a string, and just passing it through.)\nThanks everyone, that's a lot clearer now. \n"
] |
[
10,
4,
1,
1
] |
[] |
[] |
[
"instance",
"python",
"string",
"uniqueidentifier"
] |
stackoverflow_0001150765_instance_python_string_uniqueidentifier.txt
|
Q:
How to escape a hash (#) char in python?
I'm using pyodbc to query an AS400 (unfortunately), and some column names have hashes in them! Here is a small example:
self.cursor.execute('select LPPLNM, LPPDR# from BSYDTAD.LADWJLFU')
for row in self.cursor:
p = Patient()
p.last = row.LPPLNM
p.pcp = row.LPPDR#
I get errors like this obviously:
AttributeError: 'pyodbc.Row' object has no attribute 'LPPDR'
Is there some way to escape this? Seems doubtful that a hash is even allowed in a var name. I just picked up python today, so forgive me if the answer is common knowledge.
Thanks, Pete
A:
Use the getattr function
p.pcp = getattr(row, "LPPDR#")
This is, in general, the way that you deal with attributes which aren't legal Python identifiers. For example, you can say
setattr(p, "&)(@#$@!!~%&", "Hello World!")
print getattr(p, "&)(@#$@!!~%&") # prints "Hello World!"
Also, as JG suggests, you can give your columns an alias, such as by saying
SELECT LPPDR# AS LPPDR ...
A:
You can try to give the column an alias, i.e.:
self.cursor.execute('select LPPLNM, LPPDR# as LPPDR from BSYDTAD.LADWJLFU')
A:
Each row yielded by the cursor also behaves like a tuple, so this would work as well:
for row in self.cursor:
p = Patient()
p.last = row[0]
p.pcp = row[1]
But I prefer the other answers :-)
A:
The question has been answered, but this is just another alternative (based on Adam Bernier's answer + tuple unpacking) which I think is the cleanest:
for row in self.cursor:
p = Patient()
p.last, p.pcp = row
|
How to escape a hash (#) char in python?
|
I'm using pyodbc to query an AS400 (unfortunately), and some column names have hashes in them! Here is a small example:
self.cursor.execute('select LPPLNM, LPPDR# from BSYDTAD.LADWJLFU')
for row in self.cursor:
p = Patient()
p.last = row.LPPLNM
p.pcp = row.LPPDR#
I get errors like this obviously:
AttributeError: 'pyodbc.Row' object has no attribute 'LPPDR'
Is there some way to escape this? Seems doubtful that a hash is even allowed in a var name. I just picked up python today, so forgive me if the answer is common knowledge.
Thanks, Pete
|
[
"Use the getattr function\np.pcp = getattr(row, \"LPPDR#\")\n\nThis is, in general, the way that you deal with attributes which aren't legal Python identifiers. For example, you can say\nsetattr(p, \"&)(@#$@!!~%&\", \"Hello World!\")\nprint getattr(p, \"&)(@#$@!!~%&\") # prints \"Hello World!\"\n\nAlso, as JG suggests, you can give your columns an alias, such as by saying\nSELECT LPPDR# AS LPPDR ...\n\n",
"You can try to give the column an alias, i.e.:\n self.cursor.execute('select LPPLNM, LPPDR# as LPPDR from BSYDTAD.LADWJLFU')\n\n",
"self.cursor.execute returns a tuple, so this would also work:\nfor row in self.cursor:\n p = Patient()\n p.last = row[0]\n p.pcp = row[1]\n\nBut I prefer the other answers :-)\n",
"The question has been answered, but this is just another alternative (based on Adam Bernier's answer + tuple unpacking) which I think is the cleanest:\nfor row in self.cursor:\n p = Patient()\n p.last, p.pcp = row\n\n"
] |
[
7,
5,
2,
1
] |
[] |
[] |
[
"escaping",
"odbc",
"pyodbc",
"python"
] |
stackoverflow_0001150581_escaping_odbc_pyodbc_python.txt
|
Q:
How to get Python syntax highlighting for Visual Studio?
Visual Studio 2008 is great as a text editor, but it lacks Python syntax highlighting. Can I get this as an add-on? Where can I find it?
A:
Have you considered installing IronPython and using that to edit your work?
http://www.codeplex.com/IronPythonStudio
|
How to get Python syntax highlighting for Visual Studio?
|
Visual Studio 2008 is great as a text editor, but it lacks Python syntax highlighting. Can I get this as an add-on? Where can I find it?
|
[
"Have you considered installing IronPython and using that to edit your work? \n\nhttp://www.codeplex.com/IronPythonStudio\n\n"
] |
[
2
] |
[] |
[] |
[
"python",
"syntax_highlighting",
"visual_studio",
"visual_studio_2008"
] |
stackoverflow_0001151207_python_syntax_highlighting_visual_studio_visual_studio_2008.txt
|
Q:
map raw sql to django orm
Is there a way to simplify this working code?
For a given object, this code gets all the different vote types (there are about 20 possible) and counts each type.
I prefer not to write raw SQL but to use the ORM. It is a little more tricky because I use a generic foreign key in the model.
def get_object_votes(self, obj):
"""
Get a dictionary mapping vote to votecount
"""
ctype = ContentType.objects.get_for_model(obj)
cursor = connection.cursor()
cursor.execute("""
SELECT v.vote , COUNT(*)
FROM votes v
WHERE %d = v.object_id AND %d = v.content_type_id
GROUP BY 1
ORDER BY 1 """ % ( obj.id, ctype.id )
)
votes = {}
for row in cursor.fetchall():
votes[row[0]] = row[1]
return votes
The models I'm using:
class Vote(models.Model):
user = models.ForeignKey(User)
content_type = models.ForeignKey(ContentType)
object_id = models.PositiveIntegerField()
payload = generic.GenericForeignKey('content_type', 'object_id')
vote = models.IntegerField(choices = possible_votes.items() )
class Issue(models.Model):
title = models.CharField( blank=True, max_length=200)
A:
The code below did the trick for me!
def get_object_votes(self, obj, all=False):
"""
Get a dictionary mapping vote to votecount
"""
object_id = obj._get_pk_val()
ctype = ContentType.objects.get_for_model(obj)
queryset = self.filter(content_type=ctype, object_id=object_id)
if not all:
queryset = queryset.filter(is_archived=False) # only pick active votes
queryset = queryset.values('vote')
queryset = queryset.annotate(vcount=Count("vote")).order_by()
votes = {}
for count in queryset:
votes[count['vote']] = count['vcount']
return votes
A:
Yes, definitely use the ORM. What you should really be doing is this in your model:
class Obj(models.Model):
#whatever the object has
class Vote(models.Model):
obj = models.ForeignKey(Obj) #this ties a vote to its object
Then to get all of the votes from an object, have these Django calls be in one of your view functions:
obj = Obj.objects.get(id=the_id)  # the_id is the object's primary key
votes = obj.vote_set.all()
From there it's fairly easy to see how to count them (get the length of the list called votes).
I recommend reading about many-to-one relationships from the documentation, it's quite handy.
http://www.djangoproject.com/documentation/models/many_to_one/
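For illustration, here is a minimal sketch (my addition, not part of either answer) of counting votes per type with Django's aggregation API, assuming Vote keeps the integer vote field from the original model in addition to the obj foreign key:
from django.db.models import Count

counts = obj.vote_set.values('vote').annotate(vcount=Count('vote'))
votes = dict((row['vote'], row['vcount']) for row in counts)
# votes now maps each vote value to its count, e.g. {1: 12, -1: 3}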
|
map raw sql to django orm
|
Is there a way to simplify this working code?
For a given object, this code gets all the different vote types (there are about 20 possible) and counts each type.
I prefer not to write raw SQL but to use the ORM. It is a little more tricky because I use a generic foreign key in the model.
def get_object_votes(self, obj):
"""
Get a dictionary mapping vote to votecount
"""
ctype = ContentType.objects.get_for_model(obj)
cursor = connection.cursor()
cursor.execute("""
SELECT v.vote , COUNT(*)
FROM votes v
WHERE %d = v.object_id AND %d = v.content_type_id
GROUP BY 1
ORDER BY 1 """ % ( obj.id, ctype.id )
)
votes = {}
for row in cursor.fetchall():
votes[row[0]] = row[1]
return votes
The models I'm using:
class Vote(models.Model):
user = models.ForeignKey(User)
content_type = models.ForeignKey(ContentType)
object_id = models.PositiveIntegerField()
payload = generic.GenericForeignKey('content_type', 'object_id')
vote = models.IntegerField(choices = possible_votes.items() )
class Issue(models.Model):
title = models.CharField( blank=True, max_length=200)
|
[
"The code Below did the trick for me!\ndef get_object_votes(self, obj, all=False):\n \"\"\"\n Get a dictionary mapping vote to votecount\n \"\"\"\n object_id = obj._get_pk_val()\n ctype = ContentType.objects.get_for_model(obj)\n queryset = self.filter(content_type=ctype, object_id=object_id)\n\n if not all:\n queryset = queryset.filter(is_archived=False) # only pick active votes\n\n queryset = queryset.values('vote')\n queryset = queryset.annotate(vcount=Count(\"vote\")).order_by()\n\n votes = {}\n\n for count in queryset:\n votes[count['vote']] = count['vcount']\n\n return votes\n\n",
"Yes, definitely use the ORM. What you should really be doing is this in your model:\nclass Obj(models.Model):\n #whatever the object has\n\nclass Vote(models.Model):\n obj = models.ForeignKey(Obj) #this ties a vote to its object\n\nThen to get all of the votes from an object, have these Django calls be in one of your view functions:\nobj = Obj.objects.get(id=#the id)\nvotes = obj.vote_set.all()\n\nFrom there it's fairly easy to see how to count them (get the length of the list called votes).\nI recommend reading about many-to-one relationships from the documentation, it's quite handy.\nhttp://www.djangoproject.com/documentation/models/many_to_one/\n"
] |
[
1,
0
] |
[] |
[] |
[
"django",
"django_models",
"orm",
"python",
"sql"
] |
stackoverflow_0001150898_django_django_models_orm_python_sql.txt
|
Q:
How do you make the Python MySQLdb module use ? instead of %s for query parameters?
MySqlDb is a fantastic Python module -- but one part is incredibly annoying.
Query parameters look like this
cursor.execute("select * from Books where isbn=%s", (isbn,))
whereas everywhere else in the known universe (oracle, sqlserver, access, sybase...)
they look like this
cursor.execute("select * from Books where isbn=?", (isbn,))
This means that if you want to be portable you have to somehow switch
between the two notations ? and %s,
which is really annoying. (Please don't tell me to use an
ORM layer -- I will strangle you).
Supposedly you can convince mysqldb to use the standard syntax, but I haven't yet
made it work. Any suggestions?
A:
I found a lot of information out there about paramstyle that seemed to imply it might be what you wanted, but according to this wiki you have to use the paramstyle your library uses, and most of them do not allow you to change it:
paramstyle is specific to the library you use, and informational - you have to use the one it uses. This is probably the most annoying part of this standard. (a few allow you to set different paramstyles, but this isn't standard behavior)
I found some posts that talked about MySQLdb allowing this, but apparently it doesn't, as someone indicated it didn't work for them.
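As a quick sanity check (a small sketch of my own; the module-level paramstyle attribute is required of every driver by the DB-API 2.0 spec):
import MySQLdb

print MySQLdb.paramstyle   # prints 'format', i.e. %s-style markers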
A:
From what I can see, you cannot use '?' as a parameter marker with
MySQLdb out of the box.
you can however use named parameters
i.e.
cursor.execute("%(param1)s = %(param1)s", {'param1':1})
would effectively execute 1 = 1 in MySQL,
but, sort of like Eli answered (though not hackish),
you could instead do:
MyNewCursorModule.py
from MySQLdb.cursors import Cursor
class MyNewCursor(Cursor):
def execute(self, query, args=None):
"""This cursor is able to use '?' as a parameter marker"""
return Cursor.execute(self, query.replace('?', '%s'), args)
def executemany(self, query, args):
...implement...
in this case you would have a custom cursor which would do what you want it to do
and it's not a hack. It's just a subclass ;)
and use it with:
from MyNewCursorModule import MyNewCursor
conn = MySQLdb.connect(...connection information...
cursorclass=MyNewCursor)
(you can also give the class to the connection.cursor function to create it there if you want to use the normal execute most of the time (a temp override))
...you can also change the simple replacement to something a little more correct
(assuming there is a way to escape the question mark), but that is something I'll
leave up to you :)
A:
I don't recommend doing this, but the simplest solution is to monkeypatch the Cursor class:
from MySQLdb.cursors import Cursor
old_execute = Cursor.execute
def new_execute(self, query, args=None):
return old_execute(self, query.replace("?", "%s"), args)
Cursor.execute = new_execute
|
How do you make the Python MySQLdb module use ? instead of %s for query parameters?
|
MySqlDb is a fantastic Python module -- but one part is incredibly annoying.
Query parameters look like this
cursor.execute("select * from Books where isbn=%s", (isbn,))
whereas everywhere else in the known universe (oracle, sqlserver, access, sybase...)
they look like this
cursor.execute("select * from Books where isbn=?", (isbn,))
This means that if you want to be portable you have to somehow switch
between the two notations ? and %s,
which is really annoying. (Please don't tell me to use an
ORM layer -- I will strangle you).
Supposedly you can convince mysqldb to use the standard syntax, but I haven't yet
made it work. Any suggestions?
|
[
"I found a lot of information out there about paramstyle that seemed to imply it might be what you wanted, but according to this wiki you have to use the paramstyle your library uses, and most of them do not allow you to change it:\n\nparamstyle is specific to the library you use, and informational - you have to use the one it uses. This is probably the most annoying part of this standard. (a few allow you to set different paramstyles, but this isn't standard behavior) \n\nI found some posts that talked about MySQLdb allowing this, but apparently it doesn't as someone indicated it didn't work for them.\n",
"From what I can see you cannot use '?' for a parameter marker with\nMySQLdb (out of box)\nyou can however use named parameters\ni.e. \ncursor.execute(\"%(param1)s = %(param1)s\", {'param1':1})\n\nwould effectively execute\n1=1\nin mysql\nbut sort of like Eli answered (but not hackish)\nyou could instead do:\nMyNewCursorModule.py\nimport MySQLdb.cursors import Cursor\n\nclass MyNewCursor(Cursor):\n def execute(self, query, args=None):\n \"\"\"This cursor is able to use '?' as a parameter marker\"\"\"\n return Cursor.execute(self, query.replace('?', '%s'), args)\n\n def executemany(self, query, args):\n ...implement...\n\nin this case you would have a custom cursor which would do what you want it to do\nand it's not a hack. It's just a subclass ;)\nand use it with:\nfrom MyNewCursorModule import MyNewCursor\n\nconn = MySQLdb.connect(...connection information...\n cursorclass=MyNewCursor)\n\n(you can also give the class to the connection.cursor function to create it there if you want to use the normal execute most of the time (a temp override))\n...you can also change the simple replacement to something a little more correct\n(assuming there is a way to escape the question mark), but that is something I'll\nleave up to you :)\n",
"I don't recommend doing this, but the simplest solution is to monkeypatch the Cursor class:\nfrom MySQLdb.cursors import Cursor\nold_execute = Cursor.execute\ndef new_execute(self, query, args):\n return old_execute(self, query.replace(\"?\", \"%s\"), args) \nCursor.execute = new_execute\n\n"
] |
[
2,
2,
1
] |
[] |
[] |
[
"database",
"mysql",
"python",
"sql"
] |
stackoverflow_0000825042_database_mysql_python_sql.txt
|
Q:
Error on connecting to Oracle from py2exe'd program: Unable to acquire Oracle environment handle
My python program (Python 2.6) works fine when I run it using the Python interpreter, it connects to the Oracle database (10g XE) without error. However, when I compile it using py2exe, the executable version fails with "Unable to acquire Oracle environment handle" at the call to cx_Oracle.connect().
I've tried the following with no joy:
Oracle instant client 10g and 11g
Oracle XE Client
reinstall cx_Oracle-5.0.2-10g.win32-py2.6.msi
setting ORACLE_HOME as well as PATH
another computer with just an Oracle client and the exe
various options for building the exe (no compression and/or using zip file)
My testcase:
testora.py:
import cx_Oracle
import decimal # needed for py2exe to compile this correctly
def testora():
"""testora
>>> testora.testora()
<cx_Oracle.Connection to scott@localhost:1521/orcl>
X
"""
orcl = cx_Oracle.connect('scott/tiger@localhost:1521/orcl')
print orcl
curs = orcl.cursor()
result = curs.execute('SELECT * FROM DUAL')
for (dummy,) in result:
print dummy
if __name__ == '__main__':
testora()
build_testora.py:
from distutils.core import setup
import py2exe, sys
sys.argv.append('py2exe')
setup(
options = {'py2exe': {
'bundle_files': 2,
'compressed': True
}},
console = [{'script': "testora.py"}],
zipfile = None
)
Results:
C:\Python26\working>python testora.py
<cx_Oracle.Connection to scott@localhost:1521/orcl>
X
C:\Python26\working>python build_testora.py py2exe
C:\Python26\lib\site-packages\py2exe\build_exe.py:16: DeprecationWarning: the se
ts module is deprecated
import sets
running py2exe
creating C:\Python26\working\build
creating C:\Python26\working\build\bdist.win32
creating C:\Python26\working\build\bdist.win32\winexe
creating C:\Python26\working\build\bdist.win32\winexe\collect-2.6
creating C:\Python26\working\build\bdist.win32\winexe\bundle-2.6
creating C:\Python26\working\build\bdist.win32\winexe\temp
*** searching for required modules ***
*** parsing results ***
*** finding dlls needed ***
*** create binaries ***
*** byte compile python files ***
byte-compiling C:\Python26\lib\StringIO.py to StringIO.pyc
byte-compiling C:\Python26\lib\UserDict.py to UserDict.pyc
byte-compiling C:\Python26\lib\__future__.py to __future__.pyc
byte-compiling C:\Python26\lib\_abcoll.py to _abcoll.pyc
byte-compiling C:\Python26\lib\_strptime.py to _strptime.pyc
byte-compiling C:\Python26\lib\_threading_local.py to _threading_local.pyc
byte-compiling C:\Python26\lib\abc.py to abc.pyc
byte-compiling C:\Python26\lib\atexit.py to atexit.pyc
byte-compiling C:\Python26\lib\base64.py to base64.pyc
byte-compiling C:\Python26\lib\bdb.py to bdb.pyc
byte-compiling C:\Python26\lib\bisect.py to bisect.pyc
byte-compiling C:\Python26\lib\calendar.py to calendar.pyc
byte-compiling C:\Python26\lib\cmd.py to cmd.pyc
byte-compiling C:\Python26\lib\codecs.py to codecs.pyc
byte-compiling C:\Python26\lib\collections.py to collections.pyc
byte-compiling C:\Python26\lib\copy.py to copy.pyc
byte-compiling C:\Python26\lib\copy_reg.py to copy_reg.pyc
byte-compiling C:\Python26\lib\decimal.py to decimal.pyc
byte-compiling C:\Python26\lib\difflib.py to difflib.pyc
byte-compiling C:\Python26\lib\dis.py to dis.pyc
byte-compiling C:\Python26\lib\doctest.py to doctest.pyc
byte-compiling C:\Python26\lib\dummy_thread.py to dummy_thread.pyc
byte-compiling C:\Python26\lib\encodings\__init__.py to encodings\__init__.pyc
creating C:\Python26\working\build\bdist.win32\winexe\collect-2.6\encodings
byte-compiling C:\Python26\lib\encodings\aliases.py to encodings\aliases.pyc
byte-compiling C:\Python26\lib\encodings\ascii.py to encodings\ascii.pyc
byte-compiling C:\Python26\lib\encodings\base64_codec.py to encodings\base64_cod
ec.pyc
byte-compiling C:\Python26\lib\encodings\big5.py to encodings\big5.pyc
byte-compiling C:\Python26\lib\encodings\big5hkscs.py to encodings\big5hkscs.pyc
byte-compiling C:\Python26\lib\encodings\bz2_codec.py to encodings\bz2_codec.pyc
byte-compiling C:\Python26\lib\encodings\charmap.py to encodings\charmap.pyc
byte-compiling C:\Python26\lib\encodings\cp037.py to encodings\cp037.pyc
byte-compiling C:\Python26\lib\encodings\cp1006.py to encodings\cp1006.pyc
byte-compiling C:\Python26\lib\encodings\cp1026.py to encodings\cp1026.pyc
byte-compiling C:\Python26\lib\encodings\cp1140.py to encodings\cp1140.pyc
byte-compiling C:\Python26\lib\encodings\cp1250.py to encodings\cp1250.pyc
byte-compiling C:\Python26\lib\encodings\cp1251.py to encodings\cp1251.pyc
byte-compiling C:\Python26\lib\encodings\cp1252.py to encodings\cp1252.pyc
byte-compiling C:\Python26\lib\encodings\cp1253.py to encodings\cp1253.pyc
byte-compiling C:\Python26\lib\encodings\cp1254.py to encodings\cp1254.pyc
byte-compiling C:\Python26\lib\encodings\cp1255.py to encodings\cp1255.pyc
byte-compiling C:\Python26\lib\encodings\cp1256.py to encodings\cp1256.pyc
byte-compiling C:\Python26\lib\encodings\cp1257.py to encodings\cp1257.pyc
byte-compiling C:\Python26\lib\encodings\cp1258.py to encodings\cp1258.pyc
byte-compiling C:\Python26\lib\encodings\cp424.py to encodings\cp424.pyc
byte-compiling C:\Python26\lib\encodings\cp437.py to encodings\cp437.pyc
byte-compiling C:\Python26\lib\encodings\cp500.py to encodings\cp500.pyc
byte-compiling C:\Python26\lib\encodings\cp737.py to encodings\cp737.pyc
byte-compiling C:\Python26\lib\encodings\cp775.py to encodings\cp775.pyc
byte-compiling C:\Python26\lib\encodings\cp850.py to encodings\cp850.pyc
byte-compiling C:\Python26\lib\encodings\cp852.py to encodings\cp852.pyc
byte-compiling C:\Python26\lib\encodings\cp855.py to encodings\cp855.pyc
byte-compiling C:\Python26\lib\encodings\cp856.py to encodings\cp856.pyc
byte-compiling C:\Python26\lib\encodings\cp857.py to encodings\cp857.pyc
byte-compiling C:\Python26\lib\encodings\cp860.py to encodings\cp860.pyc
byte-compiling C:\Python26\lib\encodings\cp861.py to encodings\cp861.pyc
byte-compiling C:\Python26\lib\encodings\cp862.py to encodings\cp862.pyc
byte-compiling C:\Python26\lib\encodings\cp863.py to encodings\cp863.pyc
byte-compiling C:\Python26\lib\encodings\cp864.py to encodings\cp864.pyc
byte-compiling C:\Python26\lib\encodings\cp865.py to encodings\cp865.pyc
byte-compiling C:\Python26\lib\encodings\cp866.py to encodings\cp866.pyc
byte-compiling C:\Python26\lib\encodings\cp869.py to encodings\cp869.pyc
byte-compiling C:\Python26\lib\encodings\cp874.py to encodings\cp874.pyc
byte-compiling C:\Python26\lib\encodings\cp875.py to encodings\cp875.pyc
byte-compiling C:\Python26\lib\encodings\cp932.py to encodings\cp932.pyc
byte-compiling C:\Python26\lib\encodings\cp949.py to encodings\cp949.pyc
byte-compiling C:\Python26\lib\encodings\cp950.py to encodings\cp950.pyc
byte-compiling C:\Python26\lib\encodings\euc_jis_2004.py to encodings\euc_jis_20
04.pyc
byte-compiling C:\Python26\lib\encodings\euc_jisx0213.py to encodings\euc_jisx02
13.pyc
byte-compiling C:\Python26\lib\encodings\euc_jp.py to encodings\euc_jp.pyc
byte-compiling C:\Python26\lib\encodings\euc_kr.py to encodings\euc_kr.pyc
byte-compiling C:\Python26\lib\encodings\gb18030.py to encodings\gb18030.pyc
byte-compiling C:\Python26\lib\encodings\gb2312.py to encodings\gb2312.pyc
byte-compiling C:\Python26\lib\encodings\gbk.py to encodings\gbk.pyc
byte-compiling C:\Python26\lib\encodings\hex_codec.py to encodings\hex_codec.pyc
byte-compiling C:\Python26\lib\encodings\hp_roman8.py to encodings\hp_roman8.pyc
byte-compiling C:\Python26\lib\encodings\hz.py to encodings\hz.pyc
byte-compiling C:\Python26\lib\encodings\idna.py to encodings\idna.pyc
byte-compiling C:\Python26\lib\encodings\iso2022_jp.py to encodings\iso2022_jp.p
yc
byte-compiling C:\Python26\lib\encodings\iso2022_jp_1.py to encodings\iso2022_jp
_1.pyc
byte-compiling C:\Python26\lib\encodings\iso2022_jp_2.py to encodings\iso2022_jp
_2.pyc
byte-compiling C:\Python26\lib\encodings\iso2022_jp_2004.py to encodings\iso2022
_jp_2004.pyc
byte-compiling C:\Python26\lib\encodings\iso2022_jp_3.py to encodings\iso2022_jp
_3.pyc
byte-compiling C:\Python26\lib\encodings\iso2022_jp_ext.py to encodings\iso2022_
jp_ext.pyc
byte-compiling C:\Python26\lib\encodings\iso2022_kr.py to encodings\iso2022_kr.p
yc
byte-compiling C:\Python26\lib\encodings\iso8859_1.py to encodings\iso8859_1.pyc
byte-compiling C:\Python26\lib\encodings\iso8859_10.py to encodings\iso8859_10.p
yc
byte-compiling C:\Python26\lib\encodings\iso8859_11.py to encodings\iso8859_11.p
yc
byte-compiling C:\Python26\lib\encodings\iso8859_13.py to encodings\iso8859_13.p
yc
byte-compiling C:\Python26\lib\encodings\iso8859_14.py to encodings\iso8859_14.p
yc
byte-compiling C:\Python26\lib\encodings\iso8859_15.py to encodings\iso8859_15.p
yc
byte-compiling C:\Python26\lib\encodings\iso8859_16.py to encodings\iso8859_16.p
yc
byte-compiling C:\Python26\lib\encodings\iso8859_2.py to encodings\iso8859_2.pyc
byte-compiling C:\Python26\lib\encodings\iso8859_3.py to encodings\iso8859_3.pyc
byte-compiling C:\Python26\lib\encodings\iso8859_4.py to encodings\iso8859_4.pyc
byte-compiling C:\Python26\lib\encodings\iso8859_5.py to encodings\iso8859_5.pyc
byte-compiling C:\Python26\lib\encodings\iso8859_6.py to encodings\iso8859_6.pyc
byte-compiling C:\Python26\lib\encodings\iso8859_7.py to encodings\iso8859_7.pyc
byte-compiling C:\Python26\lib\encodings\iso8859_8.py to encodings\iso8859_8.pyc
byte-compiling C:\Python26\lib\encodings\iso8859_9.py to encodings\iso8859_9.pyc
byte-compiling C:\Python26\lib\encodings\johab.py to encodings\johab.pyc
byte-compiling C:\Python26\lib\encodings\koi8_r.py to encodings\koi8_r.pyc
byte-compiling C:\Python26\lib\encodings\koi8_u.py to encodings\koi8_u.pyc
byte-compiling C:\Python26\lib\encodings\latin_1.py to encodings\latin_1.pyc
byte-compiling C:\Python26\lib\encodings\mac_arabic.py to encodings\mac_arabic.p
yc
byte-compiling C:\Python26\lib\encodings\mac_centeuro.py to encodings\mac_centeu
ro.pyc
byte-compiling C:\Python26\lib\encodings\mac_croatian.py to encodings\mac_croati
an.pyc
byte-compiling C:\Python26\lib\encodings\mac_cyrillic.py to encodings\mac_cyrill
ic.pyc
byte-compiling C:\Python26\lib\encodings\mac_farsi.py to encodings\mac_farsi.pyc
byte-compiling C:\Python26\lib\encodings\mac_greek.py to encodings\mac_greek.pyc
byte-compiling C:\Python26\lib\encodings\mac_iceland.py to encodings\mac_iceland
.pyc
byte-compiling C:\Python26\lib\encodings\mac_latin2.py to encodings\mac_latin2.p
yc
byte-compiling C:\Python26\lib\encodings\mac_roman.py to encodings\mac_roman.pyc
byte-compiling C:\Python26\lib\encodings\mac_romanian.py to encodings\mac_romani
an.pyc
byte-compiling C:\Python26\lib\encodings\mac_turkish.py to encodings\mac_turkish
.pyc
byte-compiling C:\Python26\lib\encodings\mbcs.py to encodings\mbcs.pyc
byte-compiling C:\Python26\lib\encodings\palmos.py to encodings\palmos.pyc
byte-compiling C:\Python26\lib\encodings\ptcp154.py to encodings\ptcp154.pyc
byte-compiling C:\Python26\lib\encodings\punycode.py to encodings\punycode.pyc
byte-compiling C:\Python26\lib\encodings\quopri_codec.py to encodings\quopri_cod
ec.pyc
byte-compiling C:\Python26\lib\encodings\raw_unicode_escape.py to encodings\raw_
unicode_escape.pyc
byte-compiling C:\Python26\lib\encodings\rot_13.py to encodings\rot_13.pyc
byte-compiling C:\Python26\lib\encodings\shift_jis.py to encodings\shift_jis.pyc
byte-compiling C:\Python26\lib\encodings\shift_jis_2004.py to encodings\shift_ji
s_2004.pyc
byte-compiling C:\Python26\lib\encodings\shift_jisx0213.py to encodings\shift_ji
sx0213.pyc
byte-compiling C:\Python26\lib\encodings\string_escape.py to encodings\string_es
cape.pyc
byte-compiling C:\Python26\lib\encodings\tis_620.py to encodings\tis_620.pyc
byte-compiling C:\Python26\lib\encodings\undefined.py to encodings\undefined.pyc
byte-compiling C:\Python26\lib\encodings\unicode_escape.py to encodings\unicode_
escape.pyc
byte-compiling C:\Python26\lib\encodings\unicode_internal.py to encodings\unicod
e_internal.pyc
byte-compiling C:\Python26\lib\encodings\utf_16.py to encodings\utf_16.pyc
byte-compiling C:\Python26\lib\encodings\utf_16_be.py to encodings\utf_16_be.pyc
byte-compiling C:\Python26\lib\encodings\utf_16_le.py to encodings\utf_16_le.pyc
byte-compiling C:\Python26\lib\encodings\utf_32.py to encodings\utf_32.pyc
byte-compiling C:\Python26\lib\encodings\utf_32_be.py to encodings\utf_32_be.pyc
byte-compiling C:\Python26\lib\encodings\utf_32_le.py to encodings\utf_32_le.pyc
byte-compiling C:\Python26\lib\encodings\utf_7.py to encodings\utf_7.pyc
byte-compiling C:\Python26\lib\encodings\utf_8.py to encodings\utf_8.pyc
byte-compiling C:\Python26\lib\encodings\utf_8_sig.py to encodings\utf_8_sig.pyc
byte-compiling C:\Python26\lib\encodings\uu_codec.py to encodings\uu_codec.pyc
byte-compiling C:\Python26\lib\encodings\zlib_codec.py to encodings\zlib_codec.p
yc
byte-compiling C:\Python26\lib\functools.py to functools.pyc
byte-compiling C:\Python26\lib\genericpath.py to genericpath.pyc
byte-compiling C:\Python26\lib\getopt.py to getopt.pyc
byte-compiling C:\Python26\lib\gettext.py to gettext.pyc
byte-compiling C:\Python26\lib\heapq.py to heapq.pyc
byte-compiling C:\Python26\lib\inspect.py to inspect.pyc
byte-compiling C:\Python26\lib\keyword.py to keyword.pyc
byte-compiling C:\Python26\lib\linecache.py to linecache.pyc
byte-compiling C:\Python26\lib\locale.py to locale.pyc
byte-compiling C:\Python26\lib\ntpath.py to ntpath.pyc
byte-compiling C:\Python26\lib\numbers.py to numbers.pyc
byte-compiling C:\Python26\lib\opcode.py to opcode.pyc
byte-compiling C:\Python26\lib\optparse.py to optparse.pyc
byte-compiling C:\Python26\lib\os.py to os.pyc
byte-compiling C:\Python26\lib\os2emxpath.py to os2emxpath.pyc
byte-compiling C:\Python26\lib\pdb.py to pdb.pyc
byte-compiling C:\Python26\lib\pickle.py to pickle.pyc
byte-compiling C:\Python26\lib\posixpath.py to posixpath.pyc
byte-compiling C:\Python26\lib\pprint.py to pprint.pyc
byte-compiling C:\Python26\lib\quopri.py to quopri.pyc
byte-compiling C:\Python26\lib\random.py to random.pyc
byte-compiling C:\Python26\lib\re.py to re.pyc
byte-compiling C:\Python26\lib\repr.py to repr.pyc
byte-compiling C:\Python26\lib\shlex.py to shlex.pyc
byte-compiling C:\Python26\lib\site-packages\zipextimporter.py to zipextimporter
.pyc
byte-compiling C:\Python26\lib\sre.py to sre.pyc
byte-compiling C:\Python26\lib\sre_compile.py to sre_compile.pyc
byte-compiling C:\Python26\lib\sre_constants.py to sre_constants.pyc
byte-compiling C:\Python26\lib\sre_parse.py to sre_parse.pyc
byte-compiling C:\Python26\lib\stat.py to stat.pyc
byte-compiling C:\Python26\lib\string.py to string.pyc
byte-compiling C:\Python26\lib\stringprep.py to stringprep.pyc
byte-compiling C:\Python26\lib\struct.py to struct.pyc
byte-compiling C:\Python26\lib\subprocess.py to subprocess.pyc
byte-compiling C:\Python26\lib\tempfile.py to tempfile.pyc
byte-compiling C:\Python26\lib\textwrap.py to textwrap.pyc
byte-compiling C:\Python26\lib\threading.py to threading.pyc
byte-compiling C:\Python26\lib\token.py to token.pyc
byte-compiling C:\Python26\lib\tokenize.py to tokenize.pyc
byte-compiling C:\Python26\lib\traceback.py to traceback.pyc
byte-compiling C:\Python26\lib\types.py to types.pyc
byte-compiling C:\Python26\lib\unittest.py to unittest.pyc
byte-compiling C:\Python26\lib\warnings.py to warnings.pyc
*** copy extensions ***
copying C:\Python26\DLLs\bz2.pyd -> C:\Python26\working\build\bdist.win32\winexe
\collect-2.6
copying C:\Python26\DLLs\select.pyd -> C:\Python26\working\build\bdist.win32\win
exe\collect-2.6
copying C:\Python26\DLLs\unicodedata.pyd -> C:\Python26\working\build\bdist.win3
2\winexe\collect-2.6
copying C:\Python26\lib\site-packages\cx_Oracle.pyd -> C:\Python26\working\build
\bdist.win32\winexe\collect-2.6
*** copy dlls ***
copying C:\Oracle\XEClient\bin\OCI.dll -> C:\Python26\working\build\bdist.win32\
winexe\collect-2.6
copying C:\Python26\lib\site-packages\py2exe\run.exe -> C:\Python26\working\dist
\testora.exe
*** binary dependencies ***
Your executable(s) also depend on these dlls which are not included,
you may or may not need to distribute them.
Make sure you have the license if you distribute any of them, and
make sure you don't distribute files belonging to the operating system.
USER32.dll - C:\WINDOWS\system32\USER32.dll
SHELL32.dll - C:\WINDOWS\system32\SHELL32.dll
WSOCK32.dll - C:\WINDOWS\system32\WSOCK32.dll
ADVAPI32.dll - C:\WINDOWS\system32\ADVAPI32.dll
msvcrt.dll - C:\WINDOWS\system32\msvcrt.dll
KERNEL32.dll - C:\WINDOWS\system32\KERNEL32.dll
C:\Python26\working\dist>testora
Traceback (most recent call last):
File "testora.py", line 19, in <module>
File "testora.py", line 11, in testora
cx_Oracle.InterfaceError: Unable to acquire Oracle environment handle
A:
Did you make sure to exclude OCI.dll when you built with py2exe? If the version of the DLL on your machine is incompatible with the client version on the machine you test on (I noticed you tried an 11g client, but have 10g on your machine), then this configuration will not work (I forget the actual error message, though).
A:
Revised build_testora.py, for future reference:
from distutils.core import setup
import py2exe, sys
sys.argv.append('py2exe')
setup(
options = {'py2exe': {
'bundle_files': 2,
'compressed': True,
'dll_excludes': ["oci.dll"]
}},
console = [{'script': "testora.py"}],
zipfile = None
)
|
Error on connecting to Oracle from py2exe'd program: Unable to acquire Oracle environment handle
|
My python program (Python 2.6) works fine when I run it using the Python interpreter, it connects to the Oracle database (10g XE) without error. However, when I compile it using py2exe, the executable version fails with "Unable to acquire Oracle environment handle" at the call to cx_Oracle.connect().
I've tried the following with no joy:
Oracle instant client 10g and 11g
Oracle XE Client
reinstall cx_Oracle-5.0.2-10g.win32-py2.6.msi
setting ORACLE_HOME as well as PATH
another computer with just an Oracle client and the exe
various options for building the exe (no compression and/or using zip file)
My testcase:
testora.py:
import cx_Oracle
import decimal # needed for py2exe to compile this correctly
def testora():
"""testora
>>> testora.testora()
<cx_Oracle.Connection to scott@localhost:1521/orcl>
X
"""
orcl = cx_Oracle.connect('scott/tiger@localhost:1521/orcl')
print orcl
curs = orcl.cursor()
result = curs.execute('SELECT * FROM DUAL')
for (dummy,) in result:
print dummy
if __name__ == '__main__':
testora()
build_testora.py:
from distutils.core import setup
import py2exe, sys
sys.argv.append('py2exe')
setup(
options = {'py2exe': {
'bundle_files': 2,
'compressed': True
}},
console = [{'script': "testora.py"}],
zipfile = None
)
Results:
C:\Python26\working>python testora.py
<cx_Oracle.Connection to scott@localhost:1521/orcl>
X
C:\Python26\working>python build_testora.py py2exe
C:\Python26\lib\site-packages\py2exe\build_exe.py:16: DeprecationWarning: the se
ts module is deprecated
import sets
running py2exe
creating C:\Python26\working\build
creating C:\Python26\working\build\bdist.win32
creating C:\Python26\working\build\bdist.win32\winexe
creating C:\Python26\working\build\bdist.win32\winexe\collect-2.6
creating C:\Python26\working\build\bdist.win32\winexe\bundle-2.6
creating C:\Python26\working\build\bdist.win32\winexe\temp
*** searching for required modules ***
*** parsing results ***
*** finding dlls needed ***
*** create binaries ***
*** byte compile python files ***
byte-compiling C:\Python26\lib\StringIO.py to StringIO.pyc
byte-compiling C:\Python26\lib\UserDict.py to UserDict.pyc
byte-compiling C:\Python26\lib\__future__.py to __future__.pyc
byte-compiling C:\Python26\lib\_abcoll.py to _abcoll.pyc
byte-compiling C:\Python26\lib\_strptime.py to _strptime.pyc
byte-compiling C:\Python26\lib\_threading_local.py to _threading_local.pyc
byte-compiling C:\Python26\lib\abc.py to abc.pyc
byte-compiling C:\Python26\lib\atexit.py to atexit.pyc
byte-compiling C:\Python26\lib\base64.py to base64.pyc
byte-compiling C:\Python26\lib\bdb.py to bdb.pyc
byte-compiling C:\Python26\lib\bisect.py to bisect.pyc
byte-compiling C:\Python26\lib\calendar.py to calendar.pyc
byte-compiling C:\Python26\lib\cmd.py to cmd.pyc
byte-compiling C:\Python26\lib\codecs.py to codecs.pyc
byte-compiling C:\Python26\lib\collections.py to collections.pyc
byte-compiling C:\Python26\lib\copy.py to copy.pyc
byte-compiling C:\Python26\lib\copy_reg.py to copy_reg.pyc
byte-compiling C:\Python26\lib\decimal.py to decimal.pyc
byte-compiling C:\Python26\lib\difflib.py to difflib.pyc
byte-compiling C:\Python26\lib\dis.py to dis.pyc
byte-compiling C:\Python26\lib\doctest.py to doctest.pyc
byte-compiling C:\Python26\lib\dummy_thread.py to dummy_thread.pyc
byte-compiling C:\Python26\lib\encodings\__init__.py to encodings\__init__.pyc
creating C:\Python26\working\build\bdist.win32\winexe\collect-2.6\encodings
byte-compiling C:\Python26\lib\encodings\aliases.py to encodings\aliases.pyc
byte-compiling C:\Python26\lib\encodings\ascii.py to encodings\ascii.pyc
byte-compiling C:\Python26\lib\encodings\base64_codec.py to encodings\base64_cod
ec.pyc
byte-compiling C:\Python26\lib\encodings\big5.py to encodings\big5.pyc
byte-compiling C:\Python26\lib\encodings\big5hkscs.py to encodings\big5hkscs.pyc
byte-compiling C:\Python26\lib\encodings\bz2_codec.py to encodings\bz2_codec.pyc
byte-compiling C:\Python26\lib\encodings\charmap.py to encodings\charmap.pyc
byte-compiling C:\Python26\lib\encodings\cp037.py to encodings\cp037.pyc
byte-compiling C:\Python26\lib\encodings\cp1006.py to encodings\cp1006.pyc
byte-compiling C:\Python26\lib\encodings\cp1026.py to encodings\cp1026.pyc
byte-compiling C:\Python26\lib\encodings\cp1140.py to encodings\cp1140.pyc
byte-compiling C:\Python26\lib\encodings\cp1250.py to encodings\cp1250.pyc
byte-compiling C:\Python26\lib\encodings\cp1251.py to encodings\cp1251.pyc
byte-compiling C:\Python26\lib\encodings\cp1252.py to encodings\cp1252.pyc
byte-compiling C:\Python26\lib\encodings\cp1253.py to encodings\cp1253.pyc
byte-compiling C:\Python26\lib\encodings\cp1254.py to encodings\cp1254.pyc
byte-compiling C:\Python26\lib\encodings\cp1255.py to encodings\cp1255.pyc
byte-compiling C:\Python26\lib\encodings\cp1256.py to encodings\cp1256.pyc
byte-compiling C:\Python26\lib\encodings\cp1257.py to encodings\cp1257.pyc
byte-compiling C:\Python26\lib\encodings\cp1258.py to encodings\cp1258.pyc
byte-compiling C:\Python26\lib\encodings\cp424.py to encodings\cp424.pyc
byte-compiling C:\Python26\lib\encodings\cp437.py to encodings\cp437.pyc
byte-compiling C:\Python26\lib\encodings\cp500.py to encodings\cp500.pyc
byte-compiling C:\Python26\lib\encodings\cp737.py to encodings\cp737.pyc
byte-compiling C:\Python26\lib\encodings\cp775.py to encodings\cp775.pyc
byte-compiling C:\Python26\lib\encodings\cp850.py to encodings\cp850.pyc
byte-compiling C:\Python26\lib\encodings\cp852.py to encodings\cp852.pyc
byte-compiling C:\Python26\lib\encodings\cp855.py to encodings\cp855.pyc
byte-compiling C:\Python26\lib\encodings\cp856.py to encodings\cp856.pyc
byte-compiling C:\Python26\lib\encodings\cp857.py to encodings\cp857.pyc
byte-compiling C:\Python26\lib\encodings\cp860.py to encodings\cp860.pyc
byte-compiling C:\Python26\lib\encodings\cp861.py to encodings\cp861.pyc
byte-compiling C:\Python26\lib\encodings\cp862.py to encodings\cp862.pyc
byte-compiling C:\Python26\lib\encodings\cp863.py to encodings\cp863.pyc
byte-compiling C:\Python26\lib\encodings\cp864.py to encodings\cp864.pyc
byte-compiling C:\Python26\lib\encodings\cp865.py to encodings\cp865.pyc
byte-compiling C:\Python26\lib\encodings\cp866.py to encodings\cp866.pyc
byte-compiling C:\Python26\lib\encodings\cp869.py to encodings\cp869.pyc
byte-compiling C:\Python26\lib\encodings\cp874.py to encodings\cp874.pyc
byte-compiling C:\Python26\lib\encodings\cp875.py to encodings\cp875.pyc
byte-compiling C:\Python26\lib\encodings\cp932.py to encodings\cp932.pyc
byte-compiling C:\Python26\lib\encodings\cp949.py to encodings\cp949.pyc
byte-compiling C:\Python26\lib\encodings\cp950.py to encodings\cp950.pyc
byte-compiling C:\Python26\lib\encodings\euc_jis_2004.py to encodings\euc_jis_20
04.pyc
byte-compiling C:\Python26\lib\encodings\euc_jisx0213.py to encodings\euc_jisx02
13.pyc
byte-compiling C:\Python26\lib\encodings\euc_jp.py to encodings\euc_jp.pyc
byte-compiling C:\Python26\lib\encodings\euc_kr.py to encodings\euc_kr.pyc
byte-compiling C:\Python26\lib\encodings\gb18030.py to encodings\gb18030.pyc
byte-compiling C:\Python26\lib\encodings\gb2312.py to encodings\gb2312.pyc
byte-compiling C:\Python26\lib\encodings\gbk.py to encodings\gbk.pyc
byte-compiling C:\Python26\lib\encodings\hex_codec.py to encodings\hex_codec.pyc
byte-compiling C:\Python26\lib\encodings\hp_roman8.py to encodings\hp_roman8.pyc
byte-compiling C:\Python26\lib\encodings\hz.py to encodings\hz.pyc
byte-compiling C:\Python26\lib\encodings\idna.py to encodings\idna.pyc
byte-compiling C:\Python26\lib\encodings\iso2022_jp.py to encodings\iso2022_jp.p
yc
byte-compiling C:\Python26\lib\encodings\iso2022_jp_1.py to encodings\iso2022_jp
_1.pyc
byte-compiling C:\Python26\lib\encodings\iso2022_jp_2.py to encodings\iso2022_jp
_2.pyc
byte-compiling C:\Python26\lib\encodings\iso2022_jp_2004.py to encodings\iso2022
_jp_2004.pyc
byte-compiling C:\Python26\lib\encodings\iso2022_jp_3.py to encodings\iso2022_jp
_3.pyc
byte-compiling C:\Python26\lib\encodings\iso2022_jp_ext.py to encodings\iso2022_
jp_ext.pyc
byte-compiling C:\Python26\lib\encodings\iso2022_kr.py to encodings\iso2022_kr.p
yc
byte-compiling C:\Python26\lib\encodings\iso8859_1.py to encodings\iso8859_1.pyc
byte-compiling C:\Python26\lib\encodings\iso8859_10.py to encodings\iso8859_10.p
yc
byte-compiling C:\Python26\lib\encodings\iso8859_11.py to encodings\iso8859_11.p
yc
byte-compiling C:\Python26\lib\encodings\iso8859_13.py to encodings\iso8859_13.p
yc
byte-compiling C:\Python26\lib\encodings\iso8859_14.py to encodings\iso8859_14.p
yc
byte-compiling C:\Python26\lib\encodings\iso8859_15.py to encodings\iso8859_15.p
yc
byte-compiling C:\Python26\lib\encodings\iso8859_16.py to encodings\iso8859_16.p
yc
byte-compiling C:\Python26\lib\encodings\iso8859_2.py to encodings\iso8859_2.pyc
byte-compiling C:\Python26\lib\encodings\iso8859_3.py to encodings\iso8859_3.pyc
byte-compiling C:\Python26\lib\encodings\iso8859_4.py to encodings\iso8859_4.pyc
byte-compiling C:\Python26\lib\encodings\iso8859_5.py to encodings\iso8859_5.pyc
byte-compiling C:\Python26\lib\encodings\iso8859_6.py to encodings\iso8859_6.pyc
byte-compiling C:\Python26\lib\encodings\iso8859_7.py to encodings\iso8859_7.pyc
byte-compiling C:\Python26\lib\encodings\iso8859_8.py to encodings\iso8859_8.pyc
byte-compiling C:\Python26\lib\encodings\iso8859_9.py to encodings\iso8859_9.pyc
byte-compiling C:\Python26\lib\encodings\johab.py to encodings\johab.pyc
byte-compiling C:\Python26\lib\encodings\koi8_r.py to encodings\koi8_r.pyc
byte-compiling C:\Python26\lib\encodings\koi8_u.py to encodings\koi8_u.pyc
byte-compiling C:\Python26\lib\encodings\latin_1.py to encodings\latin_1.pyc
byte-compiling C:\Python26\lib\encodings\mac_arabic.py to encodings\mac_arabic.p
yc
byte-compiling C:\Python26\lib\encodings\mac_centeuro.py to encodings\mac_centeu
ro.pyc
byte-compiling C:\Python26\lib\encodings\mac_croatian.py to encodings\mac_croati
an.pyc
byte-compiling C:\Python26\lib\encodings\mac_cyrillic.py to encodings\mac_cyrill
ic.pyc
byte-compiling C:\Python26\lib\encodings\mac_farsi.py to encodings\mac_farsi.pyc
byte-compiling C:\Python26\lib\encodings\mac_greek.py to encodings\mac_greek.pyc
byte-compiling C:\Python26\lib\encodings\mac_iceland.py to encodings\mac_iceland
.pyc
byte-compiling C:\Python26\lib\encodings\mac_latin2.py to encodings\mac_latin2.p
yc
byte-compiling C:\Python26\lib\encodings\mac_roman.py to encodings\mac_roman.pyc
byte-compiling C:\Python26\lib\encodings\mac_romanian.py to encodings\mac_romani
an.pyc
byte-compiling C:\Python26\lib\encodings\mac_turkish.py to encodings\mac_turkish
.pyc
byte-compiling C:\Python26\lib\encodings\mbcs.py to encodings\mbcs.pyc
byte-compiling C:\Python26\lib\encodings\palmos.py to encodings\palmos.pyc
byte-compiling C:\Python26\lib\encodings\ptcp154.py to encodings\ptcp154.pyc
byte-compiling C:\Python26\lib\encodings\punycode.py to encodings\punycode.pyc
byte-compiling C:\Python26\lib\encodings\quopri_codec.py to encodings\quopri_cod
ec.pyc
byte-compiling C:\Python26\lib\encodings\raw_unicode_escape.py to encodings\raw_
unicode_escape.pyc
byte-compiling C:\Python26\lib\encodings\rot_13.py to encodings\rot_13.pyc
byte-compiling C:\Python26\lib\encodings\shift_jis.py to encodings\shift_jis.pyc
byte-compiling C:\Python26\lib\encodings\shift_jis_2004.py to encodings\shift_ji
s_2004.pyc
byte-compiling C:\Python26\lib\encodings\shift_jisx0213.py to encodings\shift_ji
sx0213.pyc
byte-compiling C:\Python26\lib\encodings\string_escape.py to encodings\string_es
cape.pyc
byte-compiling C:\Python26\lib\encodings\tis_620.py to encodings\tis_620.pyc
byte-compiling C:\Python26\lib\encodings\undefined.py to encodings\undefined.pyc
byte-compiling C:\Python26\lib\encodings\unicode_escape.py to encodings\unicode_
escape.pyc
byte-compiling C:\Python26\lib\encodings\unicode_internal.py to encodings\unicod
e_internal.pyc
byte-compiling C:\Python26\lib\encodings\utf_16.py to encodings\utf_16.pyc
byte-compiling C:\Python26\lib\encodings\utf_16_be.py to encodings\utf_16_be.pyc
byte-compiling C:\Python26\lib\encodings\utf_16_le.py to encodings\utf_16_le.pyc
byte-compiling C:\Python26\lib\encodings\utf_32.py to encodings\utf_32.pyc
byte-compiling C:\Python26\lib\encodings\utf_32_be.py to encodings\utf_32_be.pyc
byte-compiling C:\Python26\lib\encodings\utf_32_le.py to encodings\utf_32_le.pyc
byte-compiling C:\Python26\lib\encodings\utf_7.py to encodings\utf_7.pyc
byte-compiling C:\Python26\lib\encodings\utf_8.py to encodings\utf_8.pyc
byte-compiling C:\Python26\lib\encodings\utf_8_sig.py to encodings\utf_8_sig.pyc
byte-compiling C:\Python26\lib\encodings\uu_codec.py to encodings\uu_codec.pyc
byte-compiling C:\Python26\lib\encodings\zlib_codec.py to encodings\zlib_codec.p
yc
byte-compiling C:\Python26\lib\functools.py to functools.pyc
byte-compiling C:\Python26\lib\genericpath.py to genericpath.pyc
byte-compiling C:\Python26\lib\getopt.py to getopt.pyc
byte-compiling C:\Python26\lib\gettext.py to gettext.pyc
byte-compiling C:\Python26\lib\heapq.py to heapq.pyc
byte-compiling C:\Python26\lib\inspect.py to inspect.pyc
byte-compiling C:\Python26\lib\keyword.py to keyword.pyc
byte-compiling C:\Python26\lib\linecache.py to linecache.pyc
byte-compiling C:\Python26\lib\locale.py to locale.pyc
byte-compiling C:\Python26\lib\ntpath.py to ntpath.pyc
byte-compiling C:\Python26\lib\numbers.py to numbers.pyc
byte-compiling C:\Python26\lib\opcode.py to opcode.pyc
byte-compiling C:\Python26\lib\optparse.py to optparse.pyc
byte-compiling C:\Python26\lib\os.py to os.pyc
byte-compiling C:\Python26\lib\os2emxpath.py to os2emxpath.pyc
byte-compiling C:\Python26\lib\pdb.py to pdb.pyc
byte-compiling C:\Python26\lib\pickle.py to pickle.pyc
byte-compiling C:\Python26\lib\posixpath.py to posixpath.pyc
byte-compiling C:\Python26\lib\pprint.py to pprint.pyc
byte-compiling C:\Python26\lib\quopri.py to quopri.pyc
byte-compiling C:\Python26\lib\random.py to random.pyc
byte-compiling C:\Python26\lib\re.py to re.pyc
byte-compiling C:\Python26\lib\repr.py to repr.pyc
byte-compiling C:\Python26\lib\shlex.py to shlex.pyc
byte-compiling C:\Python26\lib\site-packages\zipextimporter.py to zipextimporter
.pyc
byte-compiling C:\Python26\lib\sre.py to sre.pyc
byte-compiling C:\Python26\lib\sre_compile.py to sre_compile.pyc
byte-compiling C:\Python26\lib\sre_constants.py to sre_constants.pyc
byte-compiling C:\Python26\lib\sre_parse.py to sre_parse.pyc
byte-compiling C:\Python26\lib\stat.py to stat.pyc
byte-compiling C:\Python26\lib\string.py to string.pyc
byte-compiling C:\Python26\lib\stringprep.py to stringprep.pyc
byte-compiling C:\Python26\lib\struct.py to struct.pyc
byte-compiling C:\Python26\lib\subprocess.py to subprocess.pyc
byte-compiling C:\Python26\lib\tempfile.py to tempfile.pyc
byte-compiling C:\Python26\lib\textwrap.py to textwrap.pyc
byte-compiling C:\Python26\lib\threading.py to threading.pyc
byte-compiling C:\Python26\lib\token.py to token.pyc
byte-compiling C:\Python26\lib\tokenize.py to tokenize.pyc
byte-compiling C:\Python26\lib\traceback.py to traceback.pyc
byte-compiling C:\Python26\lib\types.py to types.pyc
byte-compiling C:\Python26\lib\unittest.py to unittest.pyc
byte-compiling C:\Python26\lib\warnings.py to warnings.pyc
*** copy extensions ***
copying C:\Python26\DLLs\bz2.pyd -> C:\Python26\working\build\bdist.win32\winexe
\collect-2.6
copying C:\Python26\DLLs\select.pyd -> C:\Python26\working\build\bdist.win32\win
exe\collect-2.6
copying C:\Python26\DLLs\unicodedata.pyd -> C:\Python26\working\build\bdist.win3
2\winexe\collect-2.6
copying C:\Python26\lib\site-packages\cx_Oracle.pyd -> C:\Python26\working\build
\bdist.win32\winexe\collect-2.6
*** copy dlls ***
copying C:\Oracle\XEClient\bin\OCI.dll -> C:\Python26\working\build\bdist.win32\
winexe\collect-2.6
copying C:\Python26\lib\site-packages\py2exe\run.exe -> C:\Python26\working\dist
\testora.exe
*** binary dependencies ***
Your executable(s) also depend on these dlls which are not included,
you may or may not need to distribute them.
Make sure you have the license if you distribute any of them, and
make sure you don't distribute files belonging to the operating system.
USER32.dll - C:\WINDOWS\system32\USER32.dll
SHELL32.dll - C:\WINDOWS\system32\SHELL32.dll
WSOCK32.dll - C:\WINDOWS\system32\WSOCK32.dll
ADVAPI32.dll - C:\WINDOWS\system32\ADVAPI32.dll
msvcrt.dll - C:\WINDOWS\system32\msvcrt.dll
KERNEL32.dll - C:\WINDOWS\system32\KERNEL32.dll
C:\Python26\working\dist>testora
Traceback (most recent call last):
File "testora.py", line 19, in <module>
File "testora.py", line 11, in testora
cx_Oracle.InterfaceError: Unable to acquire Oracle environment handle
|
[
"Did you make sure to exclude the OCI.dll when you built with py2exe? If the version of the DLL on your machine is incompatible with the client version on another machine you test it on (I noticed you tried a 11g client but 10g on your machine), then this configuration will not work (I forget the actual error message though).\n",
"Revised build_testora.py, for future reference:\nfrom distutils.core import setup\nimport py2exe, sys\n\nsys.argv.append('py2exe')\n\nsetup(\n options = {'py2exe': {\n 'bundle_files': 2,\n 'compressed': True,\n 'dll_excludes': [\"oci.dll\"]\n }},\n console = [{'script': \"testora.py\"}],\n zipfile = None\n )\n\n"
] |
[
8,
2
] |
[] |
[] |
[
"cx_oracle",
"oracle",
"py2exe",
"python"
] |
stackoverflow_0001151557_cx_oracle_oracle_py2exe_python.txt
|
Q:
What robot (web) libraries are available for python?
Specifically, are there any libraries that do not use sockets?
I will be running this code in Google App Engine, which does not allow the use of sockets.
Google app engine does allow the use of urllib2 to make web requests.
I've been trying to get mechanize to work, since that's what I've used before, but if there's something easier, I'd rather do that.
thanks,
Mark
A:
urlfetch seems to do what you are looking for.
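For illustration, a minimal sketch of using it (the fetch call, status_code and content attributes are from the documented urlfetch API; the URL is just a placeholder):
from google.appengine.api import urlfetch

result = urlfetch.fetch('http://example.com/')
if result.status_code == 200:
    print result.content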
A:
To answer your question, twill and webunit are some other Python programmatic web browsing libraries. However, I'd be surprised if any of them worked off the bat with Google App Engine given the restricted stdlib.
|
What robot (web) libraries are available for python?
|
Specifically, are there any libraries that do not use sockets?
I will be running this code in Google App Engine, which does not allow the use of sockets.
Google app engine does allow the use of urllib2 to make web requests.
I've been trying to get mechanize to work, since that's what I've used before, but if there's something easier, I'd rather do that.
thanks,
Mark
|
[
"urlfetch seems to do the same thing that you are looking for.\n",
"To answer your question, twill and webunit are some other Python programmatic web browsing libraries. However, I'd be surprised if any of them worked off the bat with Google App Engine given the restricted stdlib.\n"
] |
[
1,
0
] |
[] |
[] |
[
"google_app_engine",
"python",
"robot"
] |
stackoverflow_0000957661_google_app_engine_python_robot.txt
|
Q:
tkMessageBox
Can anybody help me out with how to activate the 'close' button of askquestion() in tkMessageBox?
A:
By 'activate', do you mean make it so the user can close the message box by clicking the close ('X') button?
I do not think it is possible using tkMessageBox. I guess your best bet is to implement a dialog box with this functionality yourself.
BTW: What should askquestion() return when the user closes the dialog box?
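For illustration, a minimal sketch of such a hand-rolled dialog (my own, assuming a Tk root window already exists); the WM_DELETE_WINDOW protocol handler is what wires up the close ('X') button, and None is used here to signal that the user closed the box:
import Tkinter

def ask_question(title, message):
    result = []
    win = Tkinter.Toplevel()
    win.title(title)
    Tkinter.Label(win, text=message).pack(padx=20, pady=10)
    def answer(value):
        result.append(value)
        win.destroy()
    Tkinter.Button(win, text='Yes', command=lambda: answer('yes')).pack(side='left', padx=10, pady=10)
    Tkinter.Button(win, text='No', command=lambda: answer('no')).pack(side='right', padx=10, pady=10)
    win.protocol('WM_DELETE_WINDOW', lambda: answer(None))  # the 'X' button
    win.wait_window()
    return result[0]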
|
tkMessageBox
|
Can anybody help me out with how to activate the 'close' button of askquestion() in tkMessageBox?
|
[
"By 'activate', do you mean make it so the user can close the message box by clicking the close ('X') button?\nI do not think it is possible using tkMessageBox. I guess your best bet is to implement a dialog box with this functionality yourself.\nBTW: What should askquestion() return when the user closes the dialog box?\n"
] |
[
0
] |
[] |
[] |
[
"python",
"tkmessagebox"
] |
stackoverflow_0001151770_python_tkmessagebox.txt
|
Q:
Pythonic Swap of 2 list elements
I found that I had to perform a swap in Python, and I wrote something like this:
arr[first], arr[second] = arr[second], arr[first]
I suppose this is not so Pythonic. Does somebody know how to do a swap in Python more elegantly?
EDIT:
I think another example will show my doubts:
self.memberlist[someindexA], self.memberlist[someindexB] = self.memberlist[someindexB], self.memberlist[someindexA]
Is this the only available solution for a swap in Python?
A:
a, b = b, a
Is a perfectly Pythonic idiom. It is short and readable, as long as your variable names are short enough.
A:
The one thing I might change in your example code: if you're going to use some long name such as self.memberlist over and over again, it's often more readable to alias ("assign") it to a shorter name first. So for example, instead of the long, hard-to-read:
self.memberlist[someindexA], self.memberlist[someindexB] = self.memberlist[someindexB], self.memberlist[someindexA]
you could code:
L = self.memberlist
L[someindexA], L[someindexB] = L[someindexB], L[someindexA]
Remember that Python works by-reference so L refers to exactly the same object as self.memberlist, NOT a copy (by the same token, the assignment is extremely fast no matter how long the list may be, because it's not copied anyway -- it's just one more reference).
I don't think any further complication is warranted, though of course some fancy ones might easily be conceived, such as (for a, b "normal" indices >=0):
def slicer(a, b):
return slice(a, b+cmp(b,a), b-a), slice(b, a+cmp(a,b), a-b)
back, forth = slicer(someindexA, someindexB)
self.memberlist[back] = self.memberlist[forth]
I think figuring out these kinds of "advanced" uses is a nice conceit, useful mental exercise, and good fun -- I recommend that interested readers, once the general idea is clear, focus on the role of those +cmp and how they make things work for the three possibilities (a>b, a<b, a==b) [[not for negative indices, though -- why not, and how would slicer need to change to fix this?]]. But using such a fancy approach in production code would generally be overkill and quite unwarranted, making things murkier and harder to maintain than the simple and straightforward approach.
Remember, simple is better than complex!
A:
It's difficult to imagine how it could be made more elegant: using a hypothetical built-in function ... swap_sequence_elements(arr, first, second) elegant? maybe, but this is in YAGGI territory -- you aren't going to get it ;-) -- and the function call overhead would/should put you off implementing it yourself.
What you have is much more elegant than the alternative in-line way:
temp = arr[first]
arr[first] = arr[second]
arr[second] = temp
and (bonus!) is faster too (on the not unreasonable assumption that a bytecode ROT_TWO is faster than a LOAD_FAST plus a STORE_FAST).
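If you want to verify that on your interpreter, here is a small sketch of my own (the exact bytecode is a CPython implementation detail, not something the language guarantees):
import dis

def swap(a, b):
    a, b = b, a

dis.dis(swap)   # on CPython 2.x this shows ROT_TWO rather than a temp variable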
A:
a, b = b, a is about as short as you'll get; it's only three characters (aside from the variable names). It's about as Python'y as you'll get.
One alternative is the usual use-a-temp-variable:
self.memberlist[someindexA], self.memberlist[someindexB] = self.memberlist[someindexB], self.memberlist[someindexA]
..becomes..
temp = self.memberlist[someindexB]
self.memberlist[someindexB] = self.memberlist[someindexA]
self.memberlist[someindexA] = temp
..which I think is messier and less "obvious"
Another way, which is maybe a bit more readable with long variable names:
a, b = self.memberlist[someindexA], self.memberlist[someindexB]
self.memberlist[someindexA], self.memberlist[someindexB] = b, a
|
Pythonic Swap of 2 list elements
|
I found that I had to perform a swap in Python, and I wrote something like this:
arr[first], arr[second] = arr[second], arr[first]
I suppose this is not so Pythonic. Does somebody know how to do a swap in Python more elegantly?
EDIT:
I think another example will show my doubts:
self.memberlist[someindexA], self.memberlist[someindexB] = self.memberlist[someindexB], self.memberlist[someindexA]
Is this the only available solution for a swap in Python?
|
[
"a, b = b, a\n\nIs a perfectly Pythonic idiom. It is short and readable, as long as your variable names are short enough.\n",
"The one thing I might change in your example code: if you're going to use some long name such as self.memberlist over an over again, it's often more readable to alias (\"assign\") it to a shorter name first. So for example instead of the long, hard-to-read:\nself.memberlist[someindexA], self.memberlist[someindexB] = self.memberlist[someindexB], self.memberlist[someindexA]\n\nyou could code:\nL = self.memberlist\nL[someindexA], L[someindexB] = L[someindexB], L[someindexA]\n\nRemember that Python works by-reference so L refers to exactly the same object as self.memberlist, NOT a copy (by the same token, the assignment is extremely fast no matter how long the list may be, because it's not copied anyway -- it's just one more reference).\nI don't think any further complication is warranted, though of course some fancy ones might easily be conceived, such as (for a, b \"normal\" indices >=0):\ndef slicer(a, b):\n return slice(a, b+cmp(b,a), b-a), slice(b, a+cmp(a,b), a-b)\n\nback, forth = slicer(someindexA, someindexB)\nself.memberlist[back] = self.memberlist[forth]\n\nI think figuring out these kinds of \"advanced\" uses is a nice conceit, useful mental exercise, and good fun -- I recommend that interested readers, once the general idea is clear, focus on the role of those +cmp and how they make things work for the three possibilities (a>b, a<b, a==b) [[not for negative indices, though -- why not, and how would slicer need to change to fix this?]]. But using such a fancy approach in production code would generally be overkill and quite unwarranted, making things murkier and harder to maintain than the simple and straightforward approach.\nRemember, simple is better than complex!\n",
"It's difficult to imagine how it could be made more elegant: using a hypothetical built-in function ... swap_sequence_elements(arr, first, second) elegant? maybe, but this is in YAGGI territory -- you aren't going to get it ;-) -- and the function call overhead would/should put you off implementing it yourself.\nWhat you have is much more elegant than the alternative in-line way:\ntemp = arr[first]\narr[first] = arr[second]\narr[second] = temp\n\nand (bonus!) is faster too (on the not unreasonable assumption that a bytecode ROT_TWO is faster than a LOAD_FAST plus a STORE_FAST).\n",
"a, b = b, a is about as short as you'll get, it's only three characters (aside from the variable names).. It's about as Python'y as you'll get\nOne alternative is the usual use-a-temp-variable:\nself.memberlist[someindexA], self.memberlist[someindexB] = self.memberlist[someindexB], self.memberlist[someindexA]\n\n..becomes..\ntemp = self.memberlist[someindexB]\nself.memberlist[someindexB] = self.memberlist[someindexA]\nself.memberlist[someindexA] = temp\n\n..which I think is messier and less \"obvious\"\nAnother way, which is maybe a bit more readable with long variable names:\na, b = self.memberlist[someindexA], self.memberlist[someindexB]\nself.memberlist[someindexA], self.memberlist[someindexB] = b, a\n\n"
] |
[
16,
14,
1,
1
] |
[
"I suppose you could take advantage of the step argument of slice notation to do something like this:\nmyarr[:2] = myarr[:2][::-1]\nI'm not sure this is clearer or more pythonic though...\n"
] |
[
-1
] |
[
"python",
"swap"
] |
stackoverflow_0001149802_python_swap.txt
|
Q:
Where and how is django Model objects attribute defined?
I'm trying to get my head around the Django ORM. I've been reading the django.db.models.base.py source code but still couldn't understand how the Model.objects attribute on our Model class gets defined. Does anybody know how Django adds that objects attribute to our Model class?
Thanks in advance
A:
The Django ORM makes heavy use of Python metaclasses. From Wikipedia:
In object-oriented programming, a metaclass is a class whose instances are classes. Just as an ordinary class defines the behavior of certain objects, a metaclass defines the behavior of certain classes and their instances.
Here's a blog post that describes how metaclasses are used in the Django ORM: How the Heck do Django Models Work
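To make that concrete, here is a heavily stripped-down sketch of the mechanism (not Django's actual code -- the real ModelBase in django.db.models.base does far more): a metaclass that plants a manager-like objects attribute on every class it creates.
class Manager(object):
    def all(self):
        return "imagine a QuerySet here"

class ModelBase(type):
    def __new__(cls, name, bases, attrs):
        new_class = super(ModelBase, cls).__new__(cls, name, bases, attrs)
        # this is the step that attaches the attribute to the class itself
        new_class.objects = Manager()
        return new_class

class Model(object):
    __metaclass__ = ModelBase  # Python 2 syntax, as Django used at the time

class Book(Model):
    pass

print Book.objects.all()  # 'objects' exists although Book never defined it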
|
Where and how is django Model objects attribute defined?
|
I'm trying to get my head around the Django ORM. I've been reading the django.db.models.base.py source code but still couldn't understand how the Model.objects attribute on our Model class gets defined. Does anybody know how Django adds that objects attribute to our Model class?
Thanks in advance
|
[
"The Django ORM makes heavy use of Python metaclasses. From Wikipedia:\n\nIn object-oriented programming, a metaclass is a class whose instances are classes. Just as an ordinary class defines the behavior of certain objects, a metaclass defines the behavior of certain classes and their instances.\n\nHere's a blog post that describes how metaclasses are used in the Django ORM: How the Heck do Django Models Work\n"
] |
[
5
] |
[] |
[] |
[
"django",
"python"
] |
stackoverflow_0001151879_django_python.txt
|
Q:
Allowing user to configure cron
I have this bash script on the server that runs every hour, via cron. I was perfectly happy, but now the user wants to be able to configure the frequency through the web interface.
I don't feel comfortable manipulating the cron configuration programmatically, but I'm not sure if the other options are any better.
The way I see it, I can either:
Schedule a script to run once a minute and check if it should really run "now"
Forgo cron altogether and use a daemon that is its own scheduler. This probably means rewriting the script in Python
...or suck it up and manipulate the cron configuration from the web interface (written in Python, BTW)
What should I do?
EDIT: to clarify, the main reason I'm apprehensive about manipulating cron is because it's basically text manipulation with no validation, and if I mess it up, none of my other cron jobs will run.
Here's what I ended up doing:
Taking stefanw's advice, I added the following line at the top of my bash script:
if [ "$(cat /home/username/settings/run.freq)" != "$1" ]; then
    exit 0
fi
I set up the following cron jobs:
0 */2 * * * /home/username/scripts/publish.sh 2_hours
@hourly /home/username/scripts/publish.sh 1_hour
*/30 * * * * /home/username/scripts/publish.sh 30_minutes
*/10 * * * * /home/username/scripts/publish.sh 10_minutes
From the web interface, I let the user choose between those four options, and based on what the user chose, I write the string 2_hours/1_hour/30_minutes/10_minutes into the file at /home/username/settings/run.freq.
I don't love it, but it seems like the best alternative.
A:
Give your users some reasonable choices like every minute, every 5 minutes, every half an hour, ... and translate these values to a cron job string. This is user friendly and prevents users from tampering directly with the cron job string.
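In Python, that translation can be a plain whitelist lookup (a sketch; the keys and the script path come from the question above, not from any library):
FREQUENCIES = {
    '10_minutes': '*/10 * * * *',
    '30_minutes': '*/30 * * * *',
    '1_hour': '@hourly',
    '2_hours': '0 */2 * * *',
}

def cron_line(choice, command='/home/username/scripts/publish.sh'):
    # raises KeyError for anything outside the whitelist, so a user
    # can never inject an arbitrary schedule string
    return '%s %s %s' % (FREQUENCIES[choice], command, choice)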
A:
You could use a python scheduler library that does most of the work already:
pycron
scheduler-py
A:
What about Webmin? I have never used it myself but it seems you could configure the cron module and give permissions to the user who wants to configure the job.
A:
Well, something I use is a main script started every minute by cron. This script checks for touched flag files. If the files are there, the main cron script starts a function/subscript. You just have to touch a defined file and "rm -f" it when done. It has the side benefit of being more concurrency-proof if you want other ways to start jobs. Then you can use your favourite web programming language to handle your users' scheduling ...
The main script looks like:
[...]
if [ -e "${tag_archive_log_files}" ]; then
archive_log_files ${params}
    rm -f ${tag_archive_log_files}
fi
if [ -e "${tag_purge_log_files}" ]; then
purge_log_files ${params}
rm -f ${tag_purge_log_files}
fi
[...]
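On the web side, requesting one of these actions then amounts to creating the flag file (a sketch; the directory path is made up):
import os

FLAG_DIR = '/var/run/myapp'

def request(action):
    # cron's main script will notice the file within a minute and run the job
    open(os.path.join(FLAG_DIR, 'tag_%s' % action), 'a').close()  # like "touch"

request('archive_log_files')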
A:
I found a module that can manipulate the cron info for me. It's called python-crontab, and it's available with easy_install. From the source:
Example Use:
from crontab import CronTab
tab = CronTab()
cron = tab.new(command='/usr/bin/echo')
cron.minute().during(5,50).every(5)
cron.hour().every(4)
cron2 = tab.new(command='/foo/bar',comment='SomeID')
cron2.every_reboot()
list = tab.find('bar')
cron3 = list[0]
cron3.clear()
cron3.minute().every(1)
print unicode(tab.render())
for cron4 in tab.find('echo'):
print cron4
for cron5 in tab:
print cron5
tab.remove_all('echo')
tab.write()
(I kept googling for "cron" and couldn't find anything. The keyword I was missing was "crontab").
I'm now deciding if I'll use this or just replace cron entirely with a Python-based scheduler.
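For example, reusing only the calls shown above, switching the existing publish job to a new frequency could look like this (a sketch; I haven't verified it against the current python-crontab release):
from crontab import CronTab

tab = CronTab()
for job in tab.find('publish.sh'):  # locate the existing entry
    job.clear()                     # drop its old schedule
    job.minute().every(30)          # run every 30 minutes instead
tab.write()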
|
Allowing user to configure cron
|
I have this bash script on the server that runs every hour, via cron. I was perfectly happy, but now the user wants to be able to configure the frequency through the web interface.
I don't feel comfortable manipulating the cron configuration programmatically, but I'm not sure if the other options are any better.
The way I see it, I can either:
Schedule a script to run once a minute and check if it should really run "now"
Forgo cron altogether and use a daemon that is its own scheduler. This probably means rewriting the script in Python
...or suck it up and manipulate the cron configuration from the web interface (written in Python, BTW)
What should I do?
EDIT: to clarify, the main reason I'm apprehensive about manipulating cron is because it's basically text manipulation with no validation, and if I mess it up, none of my other cron jobs will run.
Here's what I ended up doing:
Taking stefanw's advice, I added the following line at the top of my bash script:
if [ "$(cat /home/username/settings/run.freq)" != "$1" ]; then
    exit 0
fi
I set up the following cron jobs:
0 */2 * * * /home/username/scripts/publish.sh 2_hours
@hourly /home/username/scripts/publish.sh 1_hour
*/30 * * * * /home/username/scripts/publish.sh 30_minutes
*/10 * * * * /home/username/scripts/publish.sh 10_minutes
From the web interface, I let the user choose between those four options, and based on what the user chose, I write the string 2_hours/1_hour/30_minutes/10_minutes into the file at /home/username/settings/run.freq.
I don't love it, but it seems like the best alternative.
|
[
"Give your users some reasonable choices like every minute, every 5 minutes, every half an hour, ... and translate these values to a cron job string. This is user friendly and forbids users to tamper directly with the cron job string.\n",
"You could use a python scheduler library that does most of the work already:\n\npycron\nscheduler-py\n\n",
"What about Webmin? I have never used it myself but it seems you could configure the cron module and give permissions to the user who wants to configure the job.\n",
"Well something I use is a main script started every minute by cron. this script check touched files. If the files are there the main cron script start a function/subscript. You just have to touch a defined file and \"rm -f\"ed it when done. It has the side benefits to be more concurrent proof if you want other way to start jobs. Then you can use your favourite web programming language to handle your users scheduling ...\nThe main script looks like :\n[...]\n\nif [ -e \"${tag_archive_log_files}\" ]; then\n archive_log_files ${params}\n rm -f ${tag_archive_files}\nfi\n\nif [ -e \"${tag_purge_log_files}\" ]; then\n purge_log_files ${params}\n rm -f ${tag_purge_log_files}\nfi\n\n[...]\n\n",
"I found a module that can manipulate the cron info for me. It's called python-crontab, and it's available with easy_install. From the source:\nExample Use:\n\nfrom crontab import CronTab\n\ntab = CronTab()\ncron = tab.new(command='/usr/bin/echo')\n\ncron.minute().during(5,50).every(5)\ncron.hour().every(4)\n\ncron2 = tab.new(command='/foo/bar',comment='SomeID')\ncron2.every_reboot()\n\nlist = tab.find('bar')\ncron3 = list[0]\ncron3.clear()\ncron3.minute().every(1)\n\nprint unicode(tab.render())\n\nfor cron4 in tab.find('echo'):\n print cron4\n\nfor cron5 in tab:\n print cron5\n\ntab.remove_all('echo')\n\nt.write()\n\n(I kept googling for \"cron\" and couldn't find anything. The keyword I was missing was \"crontab\").\nI'm now deciding if I'll use this or just replace cron entirely with a python based scheduler.\n"
] |
[
10,
3,
0,
0,
0
] |
[] |
[] |
[
"bash",
"cron",
"python"
] |
stackoverflow_0001136168_bash_cron_python.txt
|
Q:
using registered com object dll from .NET
I implemented a Python COM server and generated an executable and DLL using the py2exe tool.
Then I used regsvr32.exe to register the DLL. I got a message that the registration was successful. Then I tried to add a reference to that DLL in .NET. I browsed to the DLL location and selected it, but I got an error message box that says: "A reference to the dll could not be added, please make sure that the file is accessible and that it is a valid assembly or COM component." The code of the server and setup script is added below.
I want to mention that I can run the server as a Python script and consume it from .NET using late binding.
Is there something I'm missing or doing wrong? I would appreciate any help.
thanks,
Sarah
hello.py
import pythoncom
import sys
class HelloWorld:
#pythoncom.frozen = 1
if hasattr(sys, 'importers'):
_reg_class_spec_ = "__main__.HelloWorld"
_reg_clsctx_ = pythoncom.CLSCTX_LOCAL_SERVER
_reg_clsid_ = pythoncom.CreateGuid()
_reg_desc_ = "Python Test COM Server"
_reg_progid_ = "Python.TestServer"
_public_methods_ = ['Hello']
_public_attrs_ = ['softspace', 'noCalls']
_readonly_attrs_ = ['noCalls']
def __init__(self):
self.softspace = 1
self.noCalls = 0
def Hello(self, who):
self.noCalls = self.noCalls + 1
# insert "softspace" number of spaces
print "Hello" + " " * self.softspace + str(who)
return "Hello" + " " * self.softspace + str(who)
if __name__=='__main__':
import sys
if hasattr(sys, 'importers'):
# running as packed executable.
if '--register' in sys.argv[1:] or '--unregister' in sys.argv[1:]:
# --register and --unregister work as usual
import win32com.server.register
win32com.server.register.UseCommandLine(HelloWorld)
else:
# start the server.
from win32com.server import localserver
localserver.main()
else:
import win32com.server.register
win32com.server.register.UseCommandLine(HelloWorld)
setup.py
from distutils.core import setup
import py2exe
setup(com_server = ["hello"])
A:
The line:
_reg_clsid_ = pythoncom.CreateGuid()
creates a new GUID every time this file is called. You can create a GUID on the command line:
C:\>python -c "import pythoncom; print pythoncom.CreateGuid()"
{C86B66C2-408E-46EA-845E-71626F94D965}
and then change your line:
_reg_clsid_ = "{C86B66C2-408E-46EA-845E-71626F94D965}"
After making this change, I was able to run your code and test it with the following VBScript:
Set obj = CreateObject("Python.TestServer")
MsgBox obj.Hello("foo")
I don't have MSVC handy to see if this fixes the "add reference" problem.
A:
I will answer my own question to help anyone who may have similar questions. I hope this helps.
I cannot find my server on the COM tab because .NET (and Visual Studio) needs COM servers to expose a type library (TLB), and Python COM servers have no TLB.
So the server has to be used from .NET via late binding. The following C# code shows how:
// the C# code
using System;

namespace ConsoleApplication2
{
    class Program
    {
        static void Main(string[] args)
        {
            // look up the COM class by its registered ProgID, then
            // create an instance late-bound -- no type library needed
            Type pythonServer = Type.GetTypeFromProgID("PythonDemos.Utilities");
            object pythonObject = Activator.CreateInstance(pythonServer);
        }
    }
}
A:
If you want to use a registered COM object, you need to find it on the COM tab of the Add Reference dialog box. You do not navigate to the DLL.
|
using registered com object dll from .NET
|
I implemented a Python COM server and generated an executable and DLL using the py2exe tool.
Then I used regsvr32.exe to register the DLL. I got a message that the registration was successful. Then I tried to add a reference to that DLL in .NET. I browsed to the DLL location and selected it, but I got an error message box that says: "A reference to the dll could not be added, please make sure that the file is accessible and that it is a valid assembly or COM component." The code of the server and setup script is added below.
I want to mention that I can run the server as a Python script and consume it from .NET using late binding.
Is there something I'm missing or doing wrong? I would appreciate any help.
thanks,
Sarah
hello.py
import pythoncom
import sys
class HelloWorld:
#pythoncom.frozen = 1
if hasattr(sys, 'importers'):
_reg_class_spec_ = "__main__.HelloWorld"
_reg_clsctx_ = pythoncom.CLSCTX_LOCAL_SERVER
_reg_clsid_ = pythoncom.CreateGuid()
_reg_desc_ = "Python Test COM Server"
_reg_progid_ = "Python.TestServer"
_public_methods_ = ['Hello']
_public_attrs_ = ['softspace', 'noCalls']
_readonly_attrs_ = ['noCalls']
def __init__(self):
self.softspace = 1
self.noCalls = 0
def Hello(self, who):
self.noCalls = self.noCalls + 1
# insert "softspace" number of spaces
print "Hello" + " " * self.softspace + str(who)
return "Hello" + " " * self.softspace + str(who)
if __name__=='__main__':
import sys
if hasattr(sys, 'importers'):
# running as packed executable.
if '--register' in sys.argv[1:] or '--unregister' in sys.argv[1:]:
# --register and --unregister work as usual
import win32com.server.register
win32com.server.register.UseCommandLine(HelloWorld)
else:
# start the server.
from win32com.server import localserver
localserver.main()
else:
import win32com.server.register
win32com.server.register.UseCommandLine(HelloWorld)
setup.py
from distutils.core import setup
import py2exe
setup(com_server = ["hello"])
|
[
"The line:\n_reg_clsid_ = pythoncom.CreateGuid()\n\ncreates a new GUID everytime this file is called. You can create a GUID on the command line:\nC:\\>python -c \"import pythoncom; print pythoncom.CreateGuid()\"\n{C86B66C2-408E-46EA-845E-71626F94D965}\n\nand then change your line:\n_reg_clsid_ = \"{C86B66C2-408E-46EA-845E-71626F94D965}\"\n\nAfter making this change, I was able to run your code and test it with the following VBScript:\nSet obj = CreateObject(\"Python.TestServer\") \nMsgBox obj.Hello(\"foo\")\n\nI don't have MSVC handy to see if this fixes the \"add reference\" problem.\n",
"I will answer my question to help any one may have similar questions. I hope that would help.\nI can not find my server on the COM tab because, .NET (& Visual-Studio) need COM servers with TLB. But Python's COM servers have no TLB.\nSo to use the server from .NET by (C# and Late binding). The following code shows how to make this:\n// the C# code\nusing System;\n\nusing System.Collections.Generic;\n\nusing System.Linq;\n\nusing System.Text;\n\nusing System.Reflection;\n\nnamespace ConsoleApplication2\n\n{\n\n class Program\n\n {\n static void Main(string[] args)\n\n {\n\n Type pythonServer;\n object pythonObject;\n pythonServer = Type.GetTypeFromProgID(\"PythonDemos.Utilities\");\n pythonObject = Activator.CreateInstance(pythonServer);\n\n }\n }\n} `\n\n",
"If you want to use a registered Com object, you need to find it on the Com tab in the Add Reference dialog box. You do not navigate to the dll.\n"
] |
[
2,
2,
0
] |
[] |
[] |
[
".net",
"com",
"py2exe",
"python"
] |
stackoverflow_0001083913_.net_com_py2exe_python.txt
|
Q:
Multiple regression in Python
I am currently using scipy's linregress function for single regression. I am unable to find whether the same library, or another, can do multiple regression, that is, one dependent variable and more than one independent variable. I'd like to avoid R if possible. If you're wondering, I am doing FX market analysis with the goal of replicating one currency pair with multiple other currency pairs. Can anyone help? Thanks,
Thomas
A:
Use the OLS class [http://www.scipy.org/Cookbook/OLS] from the SciPy cookbook.
A:
I'm not sure if this is what you need, but the Modular toolkit for Data Processing (MDP) library recently implemented multivariate linear regression. It is under the LGPL license.
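If pulling in a cookbook recipe feels heavy, plain NumPy can already fit an ordinary-least-squares model with several regressors via numpy.linalg.lstsq (a sketch with made-up data):
import numpy as np

# rows are observations, columns are the independent variables
# (e.g. the other currency pairs)
X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [4.0, 3.0]])
y = np.array([3.1, 2.9, 7.2, 6.8])  # the pair being replicated

A = np.column_stack([X, np.ones(len(X))])  # extra column for the intercept
coeffs, residuals, rank, sv = np.linalg.lstsq(A, y)
print coeffs  # one weight per regressor, then the intercept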
|
Multiple regression in Python
|
I am currently using scipy's linregress function for single regression. I am unable to find whether the same library, or another, can do multiple regression, that is, one dependent variable and more than one independent variable. I'd like to avoid R if possible. If you're wondering, I am doing FX market analysis with the goal of replicating one currency pair with multiple other currency pairs. Can anyone help? Thanks,
Thomas
|
[
"Use the OLS class [http://www.scipy.org/Cookbook/OLS] from the SciPy cookbook.\n",
"I'm not sure if this is what you need, but the Modular toolkit for Data Processing (MDP) libray recently implemented multivariate linear regression. It is under LGPL license.\n"
] |
[
9,
2
] |
[] |
[] |
[
"math",
"python",
"regression",
"scipy"
] |
stackoverflow_0001151088_math_python_regression_scipy.txt
|
Q:
django documentation locally setting up
I was trying to set up Django. I have Django-1.1-alpha-1. I was trying to build the documentation, which is located at Django-1.1-alpha-1/docs, using the make utility.
But I am getting an error saying:
> C:\django\Django-1.1-alpha-1\docs>C:\cygwin\bin\make.exe html
mkdir -p _build/html _build/doctrees
sphinx-build -b html -d _build/doctrees . _build/html
make: sphinx-build: Command not found
make: *** [html] Error 127
Does anybody know how to solve this issue and build the HTML documentation?
Thanks
J
A:
Install sphinx.
$ easy_install -U Sphinx
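After that, re-running the build from the docs directory should work, assuming easy_install put sphinx-build on your PATH:
cd Django-1.1-alpha-1/docs
make html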
|
django documentation locally setting up
|
I was trying to set up Django. I have Django-1.1-alpha-1. I was trying to build the documentation, which is located at Django-1.1-alpha-1/docs, using the make utility.
But I am getting an error saying:
> C:\django\Django-1.1-alpha-1\docs>C:\cygwin\bin\make.exe html
mkdir -p _build/html _build/doctrees
sphinx-build -b html -d _build/doctrees . _build/html
make: sphinx-build: Command not found
make: *** [html] Error 127
Does anybody know how to solve this issue and build the HTML documentation?
Thanks
J
|
[
"Install sphinx.\n$ easy_install -U Sphinx\n\n"
] |
[
8
] |
[] |
[] |
[
"django",
"python",
"python_sphinx"
] |
stackoverflow_0001152479_django_python_python_sphinx.txt
|