Getting a 404 on /wd/hub/session when I try to connect to selenium grid remotely via Python
Question: I can see two remotes under the console but when I try to connect remotely and
execute something it fails with a 404.
from selenium import webdriver
browser = webdriver.Remote(
command_executor='http://ec2-184-72-129-183.compute-1.amazonaws.com:4444/wd/hub',
desired_capabilities={'browserName': 'firefox'})
browser.get('http://www.google.com')
browser.quit()
Throws this exception
Traceback (most recent call last):
File "browser-shot.py", line 16, in <module>
desired_capabilities={'browserName': 'firefox'})
File "/usr/local/lib/python2.6/dist-packages/selenium/webdriver/remote/webdriver.py", line 62, in __init__
self.start_session(desired_capabilities, browser_profile)
File "/usr/local/lib/python2.6/dist-packages/selenium/webdriver/remote/webdriver.py", line 104, in start_session
'desiredCapabilities': desired_capabilities,
File "/usr/local/lib/python2.6/dist-packages/selenium/webdriver/remote/webdriver.py", line 155, in execute
self.error_handler.check_response(response)
File "/usr/local/lib/python2.6/dist-packages/selenium/webdriver/remote/errorhandler.py", line 125, in check_response
raise exception_class(value)
selenium.common.exceptions.WebDriverException: Message: '<html>\n<head>\n<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"/>\n<title>Error 404 </title>\n</head>\n<body><h2>HTTP ERROR: 404</h2><pre>NOT_FOUND</pre>\n<p>RequestURI=/wd/hub/session</p><p><i><small><a href="http://jetty.mortbay.org/">Powered by Jetty://</a></small></i></p><br/> \n<br/> \n<br/> \n<br/> \n<br/> \n<br/> \n<br/> \n<br/> \n<br/> \n<br/> \n<br/> \n<br/> \n<br/> \n<br/> \n<br/> \n<br/> \n<br/> \n<br/> \n<br/> \n<br/> \n\n</body>\n</html>\n'
Answer: This doesn't appear to be a Python error. It seems like the machine your
Selenium server is on is refusing the request. The error message contains
`<a href="http://jetty.mortbay.org/">Powered by Jetty://</a>`, which suggests
the 404 is coming from a Jetty server.
I don't know much about Jetty, but you might want to look there for your
answers. One possible problem is that Jetty is also running on port 4444, but
I can't say for sure, because I don't know how Jetty works.
EDIT
I should also add that I tried your setup from my local machine against one of my
servers (which doesn't have Jetty on it) and it worked perfectly fine.
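One quick way to narrow this down (a sketch; it assumes the hub speaks the
JSON wire protocol, which exposes a `/status` command) is to request the hub's
status endpoint directly and see whether the grid or Jetty answers:
    import urllib2
    # a working hub should return JSON here, not a Jetty 404 page
    print urllib2.urlopen('http://ec2-184-72-129-183.compute-1.amazonaws.com:4444/wd/hub/status').read()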
|
Importing from custom package fails in Python
Question: So I have a `main.py` file inside the `/home/richard/projects/hello-python`
directory:
import sys
sys.path.append('/home/richard/projects/hello-python')
from Encode import Ffmpeg
x = Ffmpeg()
x.encode()
I have then created a package in the `/home/richard/projects/hello-python/Encode` directory:
__init__.py
Ffmpeg.py
The `__init__.py` file is empty. `Ffmpeg.py` contains:
class Ffmpeg(object):
i = 150
def __init__(self):
print "i am constructor"
def encode(self):
print "hello world"
Now I run the `main.py` script like this:
python main.py
I get this output:
richard@richard-desktop:~/projects/hello-python$ python main.py
Traceback (most recent call last):
File "main.py", line 5, in <module>
x = Ffmpeg()
TypeError: 'module' object is not callable
richard@richard-desktop:~/projects/hello-python$
I think there is some problem with my `sys.path` so my module cannot be
imported correctly but I am not sure how to fix it.
Answer: `from Encode import Ffmpeg` binds the name `Ffmpeg` to the *module*
`Encode/Ffmpeg.py`, not to the class defined inside it, so calling it raises
`TypeError: 'module' object is not callable`. Import the class from the module
instead:
    from Encode.Ffmpeg import Ffmpeg
|
Cannot seem to use import time and import datetime in same script in Python
Question: I'm using Python 2.7 on Windows and I am writing a script that uses both the time
and datetime modules. I've done this before, but Python seems to be touchy
about having both modules loaded, and the methods I've used before don't seem
to be working. Here are the different syntaxes I've used and the errors I am
currently getting.
First I tried:
from datetime import *
from time import *
...
checktime = datetime.today() - timedelta(days=int(2))
checktime = checktime.timetuple()
...
filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn)
file = webgatelogdir + '/' + fn
filetime = localtime(filetimesecs)
...
else: time.sleep(60)
ERROR:
`else: time.sleep(60) AttributeError: 'builtin_function_or_method' object has
no attribute 'sleep'`
Then I tried:
from datetime import *
from time import *
...
checktime = datetime.today() - timedelta(days=int(2))
checktime = checktime.timetuple()
...
filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn)
file = webgatelogdir + '/' + fn
filetime = localtime(filetimesecs)
...
#else: time.sleep(60) # comment out time.sleep statement
and I got no errors, but no sleep delay either.
Next I tried:
from datetime import *
import time
...
checktime = datetime.today() - timedelta(days=int(2))
checktime = checktime.timetuple()
...
filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn)
file = webgatelogdir + '/' + fn
filetime = localtime(filetimesecs)
...
#else: time.sleep(60) # comment out time.sleep statement
ERROR:
`filetime = localtime(filetimesecs) NameError: name 'localtime' is not
defined`
Another modification and I tried this:
import time
import datetime
...
checktime = datetime.today() - timedelta(days=int(2))
checktime = checktime.timetuple()
...
filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn)
file = webgatelogdir + '/' + fn
filetime = localtime(filetimesecs)
...
#else: time.sleep(60) # comment out time.sleep statement
ERROR
`checktime = datetime.today() - timedelta(days=int(2)) AttributeError:
'module' object has no attribute 'today'`
Finally, I tried this:
import time
from datetime import *
...
checktime = datetime.today() - timedelta(days=int(2))
checktime = checktime.timetuple()
...
filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn)
file = webgatelogdir + '/' + fn
filetime = localtime(filetimesecs)
...
#else: time.sleep(60) # comment out time.sleep statement
ERROR:
`checktime = datetime.today() - timedelta(days=int(2)) AttributeError:
'module' object has no attribute 'today'`
So I'm not sure how to get the two modules to play nicely. Or I need another
method to put a delay in the script.
Suggestions? Or pointers to mistakes that I made?
Thanks.
Answer: Don't use `from ... import *` – this is a convenience syntax for interactive
use, and leads to confusion in scripts.
Here's a version that should work:
import time
import datetime
...
checktime = datetime.datetime.today() - datetime.timedelta(days=int(2))
checktime = checktime.timetuple()
...
filetimesecs = os.path.getmtime(webgatelogdir + '/' + fn)
file = webgatelogdir + '/' + fn
filetime = time.localtime(filetimesecs)
...
#else: time.sleep(60) # comment out time.sleep statement
When importing the modules using `import <modulename>`, you of course need to
use fully qualified names for all names from these modules.
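For the record, the `AttributeError` in the first attempt is plain name
shadowing: the `time` module exports a *function* that is also called `time`,
and each star import rebinds the name:
    >>> from datetime import *   # binds `time` to the datetime.time class
    >>> from time import *       # rebinds `time` to the time.time() function
    >>> time
    <built-in function time>
    >>> time.sleep(60)           # a function has no .sleep attribute
    AttributeError: 'builtin_function_or_method' object has no attribute 'sleep'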
|
How to add Search_fields in Django
Question: I tried to add search fields in Django. The following is the code
that I have used.
# admin.py file
from django.db import models
from blog.models import Blog
from django.contrib import admin
admin.site.register(Blog)
class Blog(models.Model):
title = models.CharField(max_length=60)
body = models.TextField()
created = models.DateTimeField("Date Created")
updated = models.DateTimeField("Date Updated")
def __unicode__(self):
return self.title
class Comment(models.Model):
body = models.TextField()
author = models.CharField(max_length=60)
created = models.DateTimeField("Date Created")
updated = models.DateTimeField("Date Updated")
post = models.ForeignKey(Blog)
def __unicode__(self):
return self.body
class CommentInline(admin.TabularInline):
model = Comment
class BlogAdmin(admin.ModelAdmin):
list_display = ('title','created', 'updated')
search_fields = ['title','body']
list_filter = ('Date Created','Date Updated')
inlines = [CommentInline]
class CommentAdmin(admin.ModelAdmin):
list_display = ('post','author','body_first_60','created','updated')
list_filter = ('Date Created','Date Updated')
I tried to add `search_fields` for title and body using the following code.
class BlogAdmin(admin.ModelAdmin):
. . .
search_fields = ('title','body')
. . .
When I run this I can't see any search box. Why is that? I want your help.
I'm just a beginner. Thanks!
Answer: The tuple is not the problem; `search_fields` accepts either a list or a
tuple. The search box is missing because the admin class was never attached to
the model. Define it:
class BlogAdmin(admin.ModelAdmin):
. . .
search_fields = ['title','body']
. . .
Then make sure that you associate this admin object with the model.
admin.site.register(Blog, BlogAdmin)
**EDIT:**
It's hard to tell from the snippet, but you should import the models from
models.py instead of redefining them in your admin.py file; it looks like the
two files may have been pasted together above.
admin.py:
from django.db import models
from blog.models import Blog, Comment
from django.contrib import admin
class CommentInline(admin.TabularInline):
model = Comment
class BlogAdmin(admin.ModelAdmin):
    list_display = ('title','created','updated',)
    search_fields = ['title','body',]
    list_filter = ('created','updated',)  # list_filter takes field names, not verbose names
    inlines = [CommentInline,]
class CommentAdmin(admin.ModelAdmin):
    list_display = ('post','author','body_first_60','created','updated',)
    list_filter = ('created','updated',)
admin.site.register(Blog, BlogAdmin)
admin.site.register(Comment, CommentAdmin)  # otherwise CommentAdmin is never used
models.py
from django.db import models
class Blog(models.Model):
title = models.CharField(max_length=60)
body = models.TextField()
created = models.DateTimeField("Date Created")
updated = models.DateTimeField("Date Updated")
def __unicode__(self):
return self.title
class Comment(models.Model):
body = models.TextField()
author = models.CharField(max_length=60)
created = models.DateTimeField("Date Created")
updated = models.DateTimeField("Date Updated")
post = models.ForeignKey(Blog)
def __unicode__(self):
return self.body
|
python/excel cell -> png
Question: Folks, There is an excel document that needs weekly updating... Just a few
cells that need to be updated, which is totally doable via:
<http://www.python-excel.org/>
After these cells are updated, a graph is generated inside excel. Is it
possible to export this graph into a .png via python (ie, copy cells A3-B7 and
export into an image)?
Thoughts, ideas?
Thanks!
Answer: You could export an existing chart to PNG using [COM
extensions](http://sourceforge.net/projects/pywin32/). You will have to make
sure the chart has already been updated with the new data before exporting.
I found this discussion of the `Charts` object helpful:
<http://msdn.microsoft.com/en-us/library/aa213725(v=office.11).aspx>
You would end up with something like this (untested code):
from win32com.client import Dispatch
xlsApp = Dispatch("Excel.Application")
xlsWB = xlsApp.Workbooks.Open(r'C:\TEST\WorkbookWithAChart.xlsx')
xlsSheet = xlsWB.Sheets("Sheet 1")
# an embedded chart lives in the sheet's ChartObjects collection; '1' is its index
mychart = xlsSheet.ChartObjects(1).Chart
mychart.Export(Filename=r'C:\TEST\MyExportedChart.png')
Helpful references:
* [Quick Start to Client side COM](http://docs.activestate.com/activepython/2.4/pywin32/html/com/win32com/HTML/QuickStartClientCom.html)
* [Win32COM Documentation](http://docs.activestate.com/activepython/2.4/pywin32/html/com/win32com/HTML/docindex.html)
|
Populate wx.StaticText controls with dictionary key:value pairs
Question: I have a wxPython GUI application that contains 13 pairs of StaticText
controls that I would like to be able to set labels for programmatically.
an independent variable and its coefficient. These key:value pairs are
currently stored in a python dictionary, allowing me to use dictionary
comprehension for much of my work.
Right now, I am struggling to display the contents of my python dictionary
inside of my GUI. Any thoughts?
I am happy concatenating the key:value pair inside 1 StaticText control label,
as I think it would be less messy.
Answer: I'm sure there are lots of different ways to do this. I would probably use a
ListCtrl or better yet, ObjectListView. But I went ahead and created an
example using StaticText controls:
import wx
########################################################################
class MyPanel(wx.Panel):
""""""
#----------------------------------------------------------------------
def __init__(self, parent):
"""Constructor"""
wx.Panel.__init__(self, parent)
self.mainSizer = wx.BoxSizer(wx.VERTICAL)
self.createControls()
self.SetSizer(self.mainSizer)
#----------------------------------------------------------------------
def createControls(self):
""""""
myDict = {"var1":"co-eff1", "var2":"co-eff2",
"var3":"co-eff3", "var4":"co-eff4",
"var5":"co-eff5", "var6":"co-eff6",
"var7":"co-eff7", "var8":"co-eff8",
"var9":"co-eff9", "var10":"co-eff10",
"var11":"co-eff11", "var12":"co-eff12",
"var13":"co-eff13"}
for key in myDict:
lblOne = wx.StaticText(self, label=key)
lblTwo = wx.StaticText(self, label=myDict[key])
sizer = wx.BoxSizer(wx.HORIZONTAL)
sizer.Add(lblOne, 0, wx.ALL, 5)
sizer.Add(lblTwo, 0, wx.ALL, 5)
self.mainSizer.Add(sizer)
########################################################################
class MyFrame(wx.Frame):
""""""
#----------------------------------------------------------------------
def __init__(self):
"""Constructor"""
wx.Frame.__init__(self, None, title="Frame Example",
size=(400,400))
panel = MyPanel(self)
self.Show()
if __name__ == "__main__":
app = wx.App(False)
frame = MyFrame()
app.MainLoop()
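If you'd rather concatenate each key:value pair into a single StaticText, as
mentioned in the question, the loop body reduces to one label per key (a
sketch):
    for key in myDict:
        lbl = wx.StaticText(self, label="%s: %s" % (key, myDict[key]))
        self.mainSizer.Add(lbl, 0, wx.ALL, 5)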
If you want to see what a ListCtrl looks like, go and download the wxPython
demo package and look up the ListCtrl demo. For ObjectListView, you can read
my [tutorial](http://www.blog.pythonlibrary.org/2009/12/23/wxpython-using-objectlistview-instead-of-a-listctrl/).
|
How to pass data by 'POST' method from Javascript to Python
Question: I have this part of a script from my GAE application, which uses webapp2 and
accepts data from a form using POST:
class RenderMarksheet(webapp2.RequestHandler):
def post(self):
regno = self.request.get('content') # Here's where I extract the data from the form
...
...
...
self.response.out.write(template.render('templates/render.html', template_values))
And the web form which posts to this script,
<form action="/sign" method="post" name="inputform" onsubmit="return validate()">
Register No : <input type="number" name="regno" placeholder="Your Register No."/>
<input type="submit" value="Get-My-GPA!" >
</form>
Now, I want to manually pass a specific piece of data (a register no.) to the
Python script (or the url, perhaps) without using the submit button from the
form, say with a button that triggers a Javascript method.
I have to POST the data using Javascript (to implement AJAX). In Python I would
do this to post the data to a url:
import http.client, urllib.parse
params = urllib.parse.urlencode({'regno':10109104021})
headers = {"Content-type": "application/x-www-form-urlencoded",
"Accept": "text/plain"}
conn = http.client.HTTPConnection("mydomain:8888")
conn.request("POST", "/sign", params, headers)
response = conn.getresponse()
print(response.status, response.reason)
data = response.read()
How can I post the data to the url, via Jquery or Javascript?
Answer: The fastest way is to use jQuery's `$.post()`, e.g.
`$.post('/sign', {regno: '10109104021'}, function(data) { /* handle the response */ });`
|
Curve Control With PyQt
Question: Is there any curve control in PyQt? I have attached an image which is based on
Maya's gradientControl. I am looking for something similar in PyQt where I can
edit the curve and have each edit trigger a signal. Right now I can use sip to
wrap Maya's gradientControl into my PyQt window, but it's really not working as
expected. Here is the code I am trying. It's just a QWidget, so it's very hard
to find out what happens when I add a point on the curve.
import os
import maya.cmds as cmds
import maya.mel as mel
import maya.OpenMayaUI as mui
import sys
import sip
from PyQt4 import QtGui, QtCore, uic
baseUI = os.path.join(os.path.dirname(__file__), "range_ctrl.ui")
baseUIClass, baseUIWidget = uic.loadUiType(baseUI)
def getMayaWindow():
windowPointer = mui.MQtUtil.mainWindow()
return sip.wrapinstance(long(windowPointer), QtCore.QObject)
def convertToQT(controlName):
controlPoniter = mui.MQtUtil.findControl(controlName)
if controlPoniter is not None:
return sip.wrapinstance(long(controlPoniter), QtCore.QObject)
class MayaRangeCtrl(baseUIWidget, baseUIClass):
def __init__(self, parent=getMayaWindow()):
super(baseUIWidget, self).__init__(parent)
self.setupUi(self)
self.setObjectName("mayaRangeCtrl")
self.setWindowTitle("Range Control")
self.p1_vbox = QtGui.QVBoxLayout(self.frame)
self.range_ctr = cmds.gradientControlNoAttr( 'mayaaaa', h=90)
mayaQTObj = convertToQT(self.range_ctr)
self.p1_vbox.addWidget(mayaQTObj)
self.setCentralWidget(self.frame)
self.show()
def main():
myWindow = MayaRangeCtrl()
def run():
main()
And here is the screen capture (image omitted).
The UI contains a main window and a QFrame. Here is the Maya
[documentation](http://download.autodesk.com/global/docs/maya2013/en_us/CommandsPython/gradientControlNoAttr.html).
But I am looking for a pure Qt widget, or some idea of how to implement this. I
tried QPolygon but have no idea how to manipulate control points at run time.
Any ideas?
Thanks in advance.
Answer: Because the gradient control is written on the C++ side of the Maya code,
there is no public interface to it as a PyQt4 widget, as you might have already
discovered (and as far as I know).
What sip will give you is a QWidget reference that lets you reparent and place
it within your app as you desire. But as for working with it from there on,
your best bet is to just connect up to the [python commands callbacks for the
gradient
control](http://download.autodesk.com/us/maya/2010help/CommandsPython/gradientControlNoAttr.html)
cmds.gradientControlNoAttr(self.range_ctr, e=True, changeCommand=self.myCallback)
If the available callbacks for the `gradientControlNoAttr` are not enough for
you, then I am afraid you will have to roll your own custom widget using your
own paint events (or using the QGraphics classes).
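For the pure-Qt route, here is a minimal sketch of such a widget (all names,
e.g. `CurveWidget` and `curveChanged`, are made up for illustration): it draws
a polyline through normalized control points, lets you drag existing points or
click to add new ones, and emits a signal on every edit.
    from PyQt4 import QtGui, QtCore
    class CurveWidget(QtGui.QWidget):
        curveChanged = QtCore.pyqtSignal(list)  # emits [(x, y), ...] in the 0..1 range
        def __init__(self, parent=None):
            super(CurveWidget, self).__init__(parent)
            self._points = [QtCore.QPointF(0.0, 0.5), QtCore.QPointF(1.0, 0.5)]
            self._drag = None
            self.setMinimumSize(200, 100)
        def _toWidget(self, p):
            # normalized (0..1, y up) -> widget pixels (y down)
            return QtCore.QPointF(p.x() * self.width(), (1.0 - p.y()) * self.height())
        def _toNorm(self, pos):
            clamp = lambda v: min(max(v, 0.0), 1.0)
            return QtCore.QPointF(clamp(pos.x() / float(self.width())),
                                  clamp(1.0 - pos.y() / float(self.height())))
        def mousePressEvent(self, event):
            pos = QtCore.QPointF(event.pos())
            for i, p in enumerate(self._points):
                if (self._toWidget(p) - pos).manhattanLength() < 8:
                    self._drag = i  # grab an existing point
                    return
            self._points.append(self._toNorm(pos))  # otherwise add a new one
            self._emit()
            self.update()
        def mouseMoveEvent(self, event):
            if self._drag is not None:
                self._points[self._drag] = self._toNorm(QtCore.QPointF(event.pos()))
                self._emit()
                self.update()
        def mouseReleaseEvent(self, event):
            self._drag = None
        def _emit(self):
            self.curveChanged.emit([(p.x(), p.y()) for p in self._points])
        def paintEvent(self, event):
            painter = QtGui.QPainter(self)
            painter.setRenderHint(QtGui.QPainter.Antialiasing)
            pts = [self._toWidget(p) for p in sorted(self._points, key=lambda p: p.x())]
            painter.drawPolyline(QtGui.QPolygonF(pts))
            for p in pts:
                painter.drawEllipse(p, 4, 4)
Connect `curveChanged` to whatever should react to edits; interpolation (linear
here) and point deletion are left as an exercise.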
|
How to extract value in a xml using lxml in Python
Question:
<Report>
<Preflight errors="0" criticalfailures="0" noncriticalfailures="0" signoffs="0" fixes="0" warnings="10">
<PreflightResult type="Check" level="warning">
<PreflightResultEntry xml:lang="en-US">
<Message>PDF/X-1a:20000 : PDF/X-1a:20000 output intent is missing </Message>
<StringContext>
<BaseString>PDF/X-1a:20000 : %PDFXVersion% output intent is missing</BaseString>
</StringContext>
</PreflightResultEntry>
</PreflightResult>
</Preflight></Report>
I want to get all the text in the `<Message>` elements using lxml in Python.
Thanks
Answer: Easy with the [lxml tutorial](http://lxml.de/tutorial.html):
>>> from lxml import etree
>>> s = """<Report>
<Preflight errors="0" criticalfailures="0" noncriticalfailures="0" signoffs="0" fixes="0" warnings="10">
<PreflightResult type="Check" level="warning">
<PreflightResultEntry xml:lang="en-US">
<Message>PDF/X-1a:20000 : PDF/X-1a:20000 output intent is missing </Message>
<StringContext>
<BaseString>PDF/X-1a:20000 : %PDFXVersion% output intent is missing</BaseString>
</StringContext>
</PreflightResultEntry>
</PreflightResult>
</Preflight></Report>
"""
>>> root = etree.XML(s)
>>> for message in root.findall('Preflight/PreflightResult/PreflightResultEntry/Message'):
print message.text
PDF/X-1a:20000 : PDF/X-1a:20000 output intent is missing
>>>
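If the element may sit at varying depths, `findall` also accepts a
descendant-or-self pattern, so the explicit path isn't required:
    >>> for message in root.findall('.//Message'):
    ...     print message.text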
|
how to deploy python webservice on apache
Question: I'm a green hand at Python. I have got a simple webservice in Python, as
follows:
import soaplib
from soaplib.core.service import rpc, DefinitionBase
from soaplib.core.model.primitive import String, Integer
from soaplib.core.server import wsgi
from soaplib.core.model.clazz import Array
from soaplib.core.service import soap
class HelloWorldService(DefinitionBase):
@soap(String,Integer,_returns=Array(String))
def say_hello(self,name,times):
results = []
for i in range(0,times):
results.append('Hello, %s'%name)
return results
if __name__=='__main__':
try:
from wsgiref.simple_server import make_server
soap_application = soaplib.core.Application([HelloWorldService], 'tns')
wsgi_application = wsgi.Application(soap_application)
server = make_server('10.44.138.231', 9999, wsgi_application)
server.serve_forever()
except ImportError:
print "Error: example server code requires Python >= 2.5"
It's very fast when I access the service on localhost, but it becomes very
slow from another host in the local area network.
So I want to deploy this program in Apache, but it seems hard; I've searched
Google for a long time and it has made me very tired. Who can give me some
help? Thank you.
Answer: I would recommend using [`mod_wsgi`](http://code.google.com/p/modwsgi/)
(rather than `mod_python`), as WSGI is the standard way to host Python web
applications.
You need to have a function called `application` in the global scope, in your
case:
# ....
return results
soap_application = soaplib.core.Application([HelloWorldService], 'tns')
application = wsgi.Application(soap_application)
if __name__ == "__main__":
# ....
You then enable mod_wsgi in Apache and add the relevant directives to your
configuration (`WSGIScriptAlias` is the main one). The [help
pages](http://code.google.com/p/modwsgi/wiki/WhereToGetHelp) are reasonably
accessible if you've configured Apache before.
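For reference, a minimal fragment might look like this (the paths are
placeholders, not from the question):
    WSGIScriptAlias / /var/www/helloservice/app.wsgi
    <Directory /var/www/helloservice>
        Order allow,deny
        Allow from all
    </Directory>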
|
Python debug print the command
Question: Folks, I am not very up on Python but have inherited a load of Python scripts,
one of which is giving me an issue in that I am not 100% sure what one line is
running.
What I need to do is print out the command line and its variables.
The line in question is
ldapModify(userdn, mods, uri=uri)
What I am hoping to see is something like
/usr/bin/ldapmodify xxxx cn=......
Can any kind soul help?
Answer: The Python ldap lib doesn't call the ldap command-line client; it binds
directly to the underlying system ldap lib.
If what you want is to know the values of the args passed to ldapModify, it's
quite simple: print them to sys.stderr:
import sys
try:
ldapModify(userdn,mods,uri=uri)
except Exception, e:
print >> sys.stderr, "oops, ldapModify failed with '%s'" % e
print >> sys.stderr, "userdns : '%s' - uri : '%s' - mods : '%s'" % (userdns, uri, mods)
# and reraise the error so you get the whole traceback
raise
|
drop trailing zeros from decimal
Question: I have a long list of Decimals that I have to adjust by factors of 10,
100, 1000, ..., 1000000 depending on certain conditions. When I multiply them
there is sometimes a useless trailing zero (though not always) that I want to
get rid of. For example...
from decimal import Decimal
# outputs 25.0, PROBLEM! I would like it to output 25
print Decimal('2.5') * 10
# outputs 2567.8000, PROBLEM! I would like it to output 2567.8
print Decimal('2.5678') * 1000
Is there a function that tells the decimal object to drop these insignificant
zeros? The only way I can think of doing this is to convert to a string and
replace them using regular expressions.
I should probably mention that I am using Python 2.6.5.
EDIT: senderle's fine answer made me realize that I occasionally get a number
like 250.0 which, when normalized, produces 2.5E+2. I guess in these cases I
could try to sort them out and convert to an int.
Answer: You can use the
[`normalize`](http://docs.python.org/2/library/decimal.html#decimal.Decimal.normalize)
method to remove extra precision.
>>> print decimal.Decimal('5.500')
5.500
>>> print decimal.Decimal('5.500').normalize()
5.5
To avoid stripping zeros to the left of the decimal point, you could do this:
def normalize_fraction(d):
normalized = d.normalize()
sign, digits, exponent = normalized.as_tuple()
if exponent > 0:
return decimal.Decimal((sign, digits + (0,) * exponent, 0))
else:
return normalized
Or more compactly, using `quantize` as suggested by
[user7116](http://stackoverflow.com/questions/11227620/drop-trailing-zeros-
from-decimal/11227743#comment14749895_11227743):
def normalize_fraction(d):
normalized = d.normalize()
sign, digit, exponent = normalized.as_tuple()
return normalized if exponent <= 0 else normalized.quantize(1)
You could also use `to_integral()` as shown
[here](http://stackoverflow.com/a/18769210/577088) but I think using
`as_tuple` this way is more self-documenting.
I tested these both against a few cases; please leave a comment if you find
something that doesn't work.
>>> normalize_fraction(decimal.Decimal('55.5'))
Decimal('55.5')
>>> normalize_fraction(decimal.Decimal('55.500'))
Decimal('55.5')
>>> normalize_fraction(decimal.Decimal('55500'))
Decimal('55500')
>>> normalize_fraction(decimal.Decimal('555E2'))
Decimal('55500')
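This also covers the `250.0` case from the question's edit, without falling
back to `int`:
    >>> normalize_fraction(decimal.Decimal('250.0'))
    Decimal('250')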
|
plone change in code not visible in development site
Question: I am very new to Plone. I have a project folder in Eclipse, imported from the
CVS project. I have Zope as the server and I start it with
`./bin/instance restart`. When I make changes in my folder, I cannot see the
changes on the development website. I can't seem to find out what is happening.
I even restarted Zope after making changes to the Python code. Can anyone help
me with this?
Answer: Make sure you start your Zope server with `bin/instance fg`; that is most
likely the name of the script if you used the Plone universal installer buildout.
To see changes in python code you'll either need to restart the server (CTRL-C
then start again) or use something like
[`plone.reload`](https://pypi.python.org/pypi/plone.reload) to request a
reload of changed code.
When starting your server with the `fg` command, it is automatically running
in debug mode and any templates, resources and skin items are reloaded
automatically. Start the server with `console` or `start` and it'll run in
production mode and templates and such are loaded from disk only once.
See the [Plone.org documentation on
buildout](http://plone.org/documentation/manual/developer-manual/managing-projects-with-buildout/creating-a-buildout-for-your-project)
for more information.
The `bin/instance` command has a built-in help command, try:
bin/instance help
for a list of supported commands or run:
bin/instance help console
to get help on a specific command; the above example will print the help on
the `console` command.
|
Logging to two files with different settings
Question: I am already using a basic logging config where all messages across all
modules are stored in a single file. However, I need a more complex solution
now:
* Two files: the first remains the same.
* The second file should have some custom format.
I have been reading the docs for the module, but they are very complex for me
at the moment. Loggers, handlers...
So, in short:
How to log to two files in Python 3, ie:
import logging
# ...
logging.file1.info('Write this to file 1')
logging.file2.info('Write this to file 2')
Answer: You can do something like this:
import logging
formatter = logging.Formatter('%(asctime)s %(levelname)s %(message)s')
# first file logger
logger_1 = logging.getLogger('simple_logger')
logger_1.setLevel(logging.INFO)  # without this, INFO messages are dropped (default level is WARNING)
hdlr_1 = logging.FileHandler('simplefile_1.log')
hdlr_1.setFormatter(formatter)
logger_1.addHandler(hdlr_1)
# second file logger
logger_2 = logging.getLogger('simple_logger_2')
logger_2.setLevel(logging.INFO)
hdlr_2 = logging.FileHandler('simplefile_2.log')
hdlr_2.setFormatter(formatter)  # give this handler a different Formatter for your custom format
logger_2.addHandler(hdlr_2)
logger_1.info('message 1')
logger_2.error('error foo')
|
Facebook publish HTTP Error 400 : bad request
Question: Hey, I am trying to publish a score to Facebook through Python's urllib2
library.
import urllib2,urllib
url = "https://graph.facebook.com/USER_ID/scores"
data = {}
data['score']=SCORE
data['access_token']='APP_ACCESS_TOKEN'
data_encode = urllib.urlencode(data)
request = urllib2.Request(url, data_encode)
response = urllib2.urlopen(request)
responseAsString = response.read()
I am getting this error:
response = urllib2.urlopen(request)
File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.py", line 124, in urlopen
return _opener.open(url, data, timeout)
File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.py", line 389, in open
response = meth(req, response)
File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.py", line 502, in http_response
'http', request, response, code, msg, hdrs)
File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.py", line 427, in error
return self._call_chain(*args)
File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.py", line 361, in _call_chain
result = func(*args)
File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.py", line 510, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 400: Bad Request
Not sure if this is related to Facebook's Open Graph or improper urllib2 API
use.
Answer: You may have to provide a User-Agent header that identifies as some browser. I
remember getting a similar error while running a crawler against a website; it
rejected requests when it detected that no browser was making them.
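A minimal sketch of doing that with the request from the question (the
User-Agent string is just an example):
    request = urllib2.Request(url, data_encode,
                              headers={'User-Agent': 'Mozilla/5.0'})
    response = urllib2.urlopen(request)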
|
os.rename a file to current date in python?
Question: I'm trying to create a zipped archive containing files. This will be done
daily, so the name of the archive must include the date it was created. I'm
unable to rename the archive once it is created due to an incorrect syntax.
Below is the script I am using:
import zipfile
print('creating archive')
zf = zipfile.ZipFile('archive.zip', mode='w')
try:
print('adding udp files')
zf.write('test.udp')
finally:
print ('closing')
zf.close()
print('renaming archive...')
import datetime
dt = str(datetime.datetime.now())
import os
newname = 'file_'+dt+'.zip'
os.rename('archive.zip', newname)
print('renaming complete...')
Below is the error message I am receiving:
Traceback (most recent call last): File ".\archive.py", line 17, in
os.rename('archive.zip',newname) WindowsError: [Error 123] The filename,
directory name, or volume label syntax is incorrect
I'm using python 3.2. Please let me know if anything else is required.
Thanks, Paul
Answer: You are trying to rename your archive to something like
file_2012-06-28 16:01:52.615855.zip
On Windows, `:` is not a valid character in a filename, so you need to choose
a different format, e.g. you could include the date only:
>>> datetime.date.today().isoformat()
'2012-06-28'
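If you do want the time in the name as well, `strftime` lets you pick
Windows-safe separators:
    >>> datetime.datetime.now().strftime('%Y-%m-%d_%H-%M-%S')
    '2012-06-28_16-01-52'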
|
Where is the syntax error with the **finally:** clause?
Question: I'm trying to run Selenium tests for a Django app on a production server.
I am getting a syntax error on the **finally:** clause.
I don't see where the error is, and all the tests ran fine in development.
Here is the code:
def activate_revision(self, user, revision):
self.title = revision.title
self.tagnames = revision.tagnames
self.body = self.rendered(revision.body)
self.active_revision = revision
# Try getting the previous revision
try:
prev_revision = NodeRevision.objects.get(node=self, revision=revision.revision-1)
update_activity = True
# Do not update the activity if only the tags are changed
if prev_revision.title == revision.title and prev_revision.body == revision.body \
and prev_revision.tagnames != revision.tagnames and not settings.UPDATE_LATEST_ACTIVITY_ON_TAG_EDIT:
update_activity = False
except NodeRevision.DoesNotExist:
update_activity = True
finally:
if update_activity:
self.update_last_activity(user)
self.save()
Here is the traceback:
$ python manage.py test forum
Traceback (most recent call last):
File "/usr/lib/python2.4/logging/__init__.py", line 731, in emit
msg = self.format(record)
File "/usr/lib/python2.4/logging/__init__.py", line 617, in format
return fmt.format(record)
File "/usr/lib/python2.4/logging/__init__.py", line 408, in format
s = self._fmt % record.__dict__
KeyError: 'funcName'
/home/spirituality/lib/python2.7/Django-1.3.1-py2.7.egg/django/db/models/fields/subclassing.py:80: DeprecationWarning: A Field class whose get_db_prep_lookup method hasn't been updated to take `connection` and `prepared` arguments.
new_class = super(SubfieldBase, cls).__new__(cls, name, bases, attrs)
/home/spirituality/lib/python2.7/Django-1.3.1-py2.7.egg/django/db/models/fields/subclassing.py:80: DeprecationWarning: A Field class whose get_db_prep_value method hasn't been updated to take `connection` and `prepared` arguments.
new_class = super(SubfieldBase, cls).__new__(cls, name, bases, attrs)
Traceback (most recent call last):
File "manage.py", line 13, in ?
execute_manager(settings)
File "/home/spirituality/lib/python2.7/Django-1.3.1-py2.7.egg/django/core/management/__init__.py", line 438, in execute_manager
utility.execute()
File "/home/spirituality/lib/python2.7/Django-1.3.1-py2.7.egg/django/core/management/__init__.py", line 379, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/home/spirituality/lib/python2.7/Django-1.3.1-py2.7.egg/django/core/management/base.py", line 191, in run_from_argv
self.execute(*args, **options.__dict__)
File "/home/spirituality/lib/python2.7/Django-1.3.1-py2.7.egg/django/core/management/base.py", line 220, in execute
output = self.handle(*args, **options)
File "/home/spirituality/lib/python2.7/South-0.7.3-py2.7.egg/south/management/commands/test.py", line 8, in handle
super(Command, self).handle(*args, **kwargs)
File "/home/spirituality/lib/python2.7/Django-1.3.1-py2.7.egg/django/core/management/commands/test.py", line 37, in handle
failures = test_runner.run_tests(test_labels)
File "/home/spirituality/lib/python2.7/Django-1.3.1-py2.7.egg/django/test/simple.py", line 358, in run_tests
suite = self.build_suite(test_labels, extra_tests)
File "/home/spirituality/lib/python2.7/Django-1.3.1-py2.7.egg/django/test/simple.py", line 247, in build_suite
app = get_app(label)
File "/home/spirituality/lib/python2.7/Django-1.3.1-py2.7.egg/django/db/models/loading.py", line 129, in get_app
self._populate()
File "/home/spirituality/lib/python2.7/Django-1.3.1-py2.7.egg/django/db/models/loading.py", line 61, in _populate
self.load_app(app_name, True)
File "/home/spirituality/lib/python2.7/Django-1.3.1-py2.7.egg/django/db/models/loading.py", line 78, in load_app
models = import_module('.models', app_name)
File "/home/spirituality/lib/python2.7/Django-1.3.1-py2.7.egg/django/utils/importlib.py", line 35, in import_module
__import__(name)
File "/home/spirituality/webapps/spirituality/spirit_app/forum/models/__init__.py", line 2, in ?
from question import Question ,QuestionRevision, QuestionSubscription
File "/home/spirituality/webapps/spirituality/spirit_app/forum/models/question.py", line 1, in ?
from base import *
File "/home/spirituality/webapps/spirituality/spirit_app/forum/models/base.py", line 349, in ?
from node import Node, NodeRevision, NodeManager
File "/home/spirituality/webapps/spirituality/spirit_app/forum/models/node.py", line 383
finally:
^
SyntaxError: invalid syntax
Answer: The first part of the traceback suggests that it's Python 2.4 on production.
As per my comment above, the problem is that `try..except..finally` is only
available in Python 2.5 and newer. Upgrade production or rewrite the code to
nest `try..except` inside an outer `try..finally`.
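In case upgrading isn't an option, the tail of the method rewritten for 2.4
would look roughly like this (same logic, only restructured):
    # Python 2.4-compatible: nest try/except inside try/finally
    try:
        try:
            prev_revision = NodeRevision.objects.get(node=self, revision=revision.revision-1)
            update_activity = True
            # Do not update the activity if only the tags are changed
            if prev_revision.title == revision.title and prev_revision.body == revision.body \
               and prev_revision.tagnames != revision.tagnames and not settings.UPDATE_LATEST_ACTIVITY_ON_TAG_EDIT:
                update_activity = False
        except NodeRevision.DoesNotExist:
            update_activity = True
    finally:
        if update_activity:
            self.update_last_activity(user)
        self.save()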
|
How do I combine a timezone aware date and time in Python?
Question: I have a date and a time that I'm attempting to combine in Python. The time is
timezone aware.
However, when I try and combine them, I get the wrong time.
import pytz
from datetime import datetime, time, date
NYC_TIME = pytz.timezone('America/New_York')
start_date = date(2012, 7, 7)
start_time = time(hour = 0, tzinfo = NYC_TIME)
combined = datetime.combine(start_date, start_time)
print combined
print NYC_TIME.normalize(combined)
This prints `2012-07-07 00:00:00-05:00`, which normalizes to `2012-07-07
01:00:00-04:00`. Why is this happening? How can I avoid it?
Answer: A time without a date attached must assume it's not in the Daylight Saving
period. Once you attach a date to it, that assumption can be corrected. The
zone offset changes, and the time changes as well to keep it at the same UTC
equivalent.
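The usual pytz fix is to combine the naive objects first and then let the zone
resolve the offset via `localize()` (a sketch reusing the names from the
question):
    naive = datetime.combine(start_date, time(hour=0))  # no tzinfo attached yet
    combined = NYC_TIME.localize(naive)                 # pytz now picks the DST-correct offset
    print combined                                      # 2012-07-07 00:00:00-04:00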
|
python-daemon blocks ioctl call to ctypes linked C userlib
Question: I have a Python application in the bottle web-server that accesses a C shared-
object library via the ctypes Python module on a Linux platform. The C so-lib
opens a device node (`/dev/myhwdev`) and asserts an IOCTL function against the
device's file descriptor. Although this is a complicated stack, it works great
until I wrap the bottle app in Python's python-daemon context, like so:
# -*- coding: utf-8 -*-
import daemon
import bottle
from bottle import run, route, request
from userlib_via_ctypes_module import *
userlib_grab_device_file_descriptor()
@route('/regread')
def show_regread():
address = request.query.address or request.forms.address
length = request.query.length or request.forms.length
return {'results':assert_ioctl_via_userlib(address, length)}
daemonContext = daemon.DaemonContext(
detach_process = False
)
with daemonContext:
try:
run(host = '0.0.0.0', port = '80', debug = True)
except:
print "(E) Bottle web-service was stopped.\n";
Simply commenting out the `with daemonContext` line (and correcting
indentation) allows this code to work correctly (i.e., serves the correct JSON
result). However, within the daemonContext, print statements in my userlib
show that the file-descriptor for my device node is opened correctly, but the
ioctl function silently fails with an error code of -1.
Closing the device's file-descriptor and reopening it (in either the userlib
code or the above route handler) allows the command to work correctly - once.
But then the daemon and bottle server lock up and ignore all further web requests.
Suggestions? Currently, I am ready to give up on the daemon module, since
everything works fine without it.
Thanks!
Answer: In preparing this question, the answer became obvious to me.
The `userlib_grab_device_file_descriptor()` function called a C-level, SO-lib
function that opened the file-descriptor for the hardware device node, which
was passed to the userlib ioctl function.
The python-daemon module closes ALL file handles upon entry of the context -
including the inherited file descriptor for the hardware device. The userlib
still thought the file descriptor was valid; at least, it would print the FD
in my debug messages as an integer > 2. However, unbeknown to the userlib, the
file handle had indeed been closed, so the IOCTL would just fail silently. I
wish there had been a better error message supplied by uclib or the kernel.
:(
Anyway, the answer was to move the file-handle opening to the inside of the
daemon context, like so:
...
with daemonContext:
try:
userlib_grab_device_file_descriptor() # open fd here
run(host = '0.0.0.0', port = '80', debug = True)
except:
print "(E) Bottle web-service was stopped.\n";
I tried using python-daemon's `files_preserve` attribute, but it works on
file-descriptor numbers, not filenames. Therefore, after opening the fd, my
userlib would have to pass the fd number up to the daemon so it could exclude
that fd before entering the context. I found it easier to open the file
descriptor _inside_ the daemon. :)
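For completeness, the `files_preserve` route would look roughly like this
(`userlib_get_fd()` is hypothetical: some way for the userlib to report its
raw fd number):
    device_fd = userlib_get_fd()  # hypothetical accessor for the open fd
    daemonContext = daemon.DaemonContext(
        detach_process = False,
        files_preserve = [device_fd],  # python-daemon leaves these fds open
    )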
Hope this helps someone else. :)
|
Python: Urllib2 and OpenCV
Question: I have a program that saves an image in a local directory and then reads the
image from that directory.
But I don't want to save the image; I want to read it directly from the URL.
Here's my code:
import cv2.cv as cv
import urllib2
url = "http://cache2.allpostersimages.com/p/LRG/18/1847/M5G8D00Z/posters/curious-cat.jpg"
filename = "my_test_image" + url[-4:]
print filename
opener = urllib2.build_opener()
page = opener.open(url)
img= page.read()
abc = open(filename, "wb")
abc.write(img)
abc.close()
img = cv.LoadImage(filename)
cv.ShowImage("Optical Flow", img)
cv.WaitKey(30)
If I change it to:
img = cv.LoadImage(img)
This will give me this error:
> argument 1 must be string without null bytes, not str
What can I do?
Answer: If you want, you can use PIL:
import cv2.cv as cv
import urllib2
from cStringIO import StringIO
import PIL.Image as pil
url="some_url"
img_file = urllib2.urlopen(url)
im = StringIO(img_file.read())
source = pil.open(im).convert("RGB")
bitmap = cv.CreateImageHeader(source.size, cv.IPL_DEPTH_8U, 3)
cv.SetData(bitmap, source.tostring())
cv.CvtColor(bitmap, bitmap, cv.CV_RGB2BGR)
I guess with this method you don't need to save the image file.
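To display the result, the calls from the question work unchanged on the
converted `bitmap`:
    cv.ShowImage("Optical Flow", bitmap)
    cv.WaitKey(30)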
|
Python Simple SSL Socket Server
Question: Just trying to set up a simple SSL server. I have never had anything SSL work
for me in the past, and I have only a loose understanding of how SSL
certificates and signing work.
The code is simple:
import socket, ssl
context = ssl.SSLContext(ssl.PROTOCOL_TLSv1)
context.load_cert_chain(certfile="mycertfile") ###############
bindsocket = socket.socket()
bindsocket.bind(('', 2099))
bindsocket.listen(5)
while True:
newsocket, fromaddr = bindsocket.accept()
sslsoc = context.wrap_socket(newsocket, server_side=True)
request = sslsoc.read()
print(request)
The line with the ###s after it is the one that isn't working. I don't
know what I have to do with openssl to generate a PEM file that will work
here.
Can anyone enlighten me as to how to make this simple socket work.
By the way, this is NOT used for HTTP.
Answer: you can use this command to generate a self-signed certificate
openssl req -new -x509 -days 365 -nodes -out cert.pem -keyout cert.pem
the openssl framework will ask you to enter some information, such as your
country, city, etc. Just follow the instructions and you will get a `cert.pem`
file. The output file will have both your RSA private key, with which you can
generate your public key, and the certificate. It looks like
this:
-----BEGIN RSA PRIVATE KEY-----
# your private key
-----END RSA PRIVATE KEY-----
-----BEGIN CERTIFICATE-----
# your certificate
-----END CERTIFICATE-----
just load it, and the ssl module will handle the rest for you:
context.load_cert_chain(certfile="cert.pem", keyfile="cert.pem")
btw, there is **no** "SSLContext" in python2. for guys who are using python2,
just assign the pem file when wrapping socket:
newsocket, fromaddr = bindsocket.accept()
connstream = ssl.wrap_socket(newsocket,
server_side=True,
certfile="cert.pem",
keyfile="cert.pem",
ssl_version=YOUR CHOICE)
available ssl version: `ssl.PROTOCOL_TLSv1`, `ssl.PROTOCOL_SSLv2`,
`ssl.PROTOCOL_SSLv3`, `ssl.PROTOCOL_SSLv23`. if you have no idea,
`ssl.PROTOCOL_SSLv23` may be your choice as it provides the most compatibility
with other versions.
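To smoke-test the server, a minimal client might look like this (verification
is disabled because the certificate is self-signed):
    import socket, ssl
    sock = ssl.wrap_socket(socket.socket(), cert_reqs=ssl.CERT_NONE)
    sock.connect(('localhost', 2099))
    sock.write(b'hello over TLS')
    sock.close()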
|
how can i use a json output in python
Question: I am trying to figure out how to get `json` output in `python`. Here is the
url:
<http://maps.googleapis.com/maps/api/distancematrix/json?origins=Vancouver+BC|Seattle&destinations=San+Francisco|Victoria+BC&mode=bicycling&language=fr-FR&sensor=false>
whose output would be like this
{
"status": "OK",
"origin_addresses": [ "Vancouver, BC, Canada", "Seattle, État de Washington, États-Unis" ],
"destination_addresses": [ "San Francisco, Californie, États-Unis", "Victoria, BC, Canada" ],
"rows": [ {
"elements": [ {
"status": "OK",
"duration": {
"value": 340110,
"text": "3 jours 22 heures"
},
"distance": {
"value": 1734542,
"text": "1 735 km"
}
}, {
"status": "OK",
"duration": {
"value": 24487,
"text": "6 heures 48 minutes"
},
"distance": {
"value": 129324,
"text": "129 km"
}
} ]
}, {
"elements": [ {
"status": "OK",
"duration": {
"value": 288834,
"text": "3 jours 8 heures"
},
"distance": {
"value": 1489604,
"text": "1 490 km"
}
}, {
"status": "OK",
"duration": {
"value": 14388,
"text": "4 heures 0 minutes"
},
"distance": {
"value": 135822,
"text": "136 km"
}
} ]
} ]
}
How can I print this output in Python?
Can anyone help me out?
Thanks
Answer: Depending on your version of Python, importing JSON might not be enough.
If you are running a version of python less than 2.6, you need to install
simplejson from your commandline.
> pip install simplejson
After that, just import normally.
import simplejson as json
The following should work in Python 2.x. There are a few differences in 3.x,
I'll leave that as an exercise for your imagination.
import urllib2
try:
    import json
except ImportError:
    import simplejson as json
url = "http://maps.googleapis.com/maps/api/distancematrix/json?origins=Vancouver+BC|Seattle&destinations=San+Francisco|Victoria+BC&mode=bicycling&language=fr-FR&sensor=false"
contents = urllib2.urlopen(url).read()
json_array = json.loads(contents)
print repr(json_array)
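Once parsed, the result is just nested dicts and lists; for example, pulling
the first distance out of the response shown above:
    print json_array['status']                                      # u'OK'
    print json_array['rows'][0]['elements'][0]['distance']['text']  # u'1 735 km'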
|
Where can I find a good explanation of how Google App Engine python27 threads work and what the limitations are?
Question: Where is there good information about **threads implementation in python27 on
Google App Engine**, especially but not only:
1. What are the threading limitations (how many threads can be spawned)?
2. How are handlers spawned with threading enabled (what does handler initialization look like)?
3. How to be thread-safe on Google App Engine (what additional assumptions does GAE make)?
4. How to simulate threads in the SDK?
5. All other important things to know that are missed in these points.
Please help with some information or links. I know threading but don't
understand the implementation in Google App Engine; I want to learn it and
share it with you.
Answer: Check out 'Getting the Most Out of Python 2.7 on App Engine' at #io12
#cloudplatform:
<https://developers.google.com/events/io/sessions/gooio2012/300/>
|
package that works inside and outside django. Is this a good design?
Question: I'm relatively new to django and in general to the python world. But I have
experience with ruby (been working with rails for 2 years), so many concepts of
python/django are not that new to me.
Anyway, I am writing a small package in python that will have a database
connection, and I want to use this package inside django but also, in the
future, outside django. So I decided to take advantage of **django.db** and not
worry about writing any database connection and management stuff.
So I started writing my first models and wanted to make a first test outside
of a django environment and I'm finding myself with some difficulties.
I thought about having the same configuration mechanism for my package as for
any django application (I mean the settings.py file). I wrote a file (called
nodjango_settings.py as a template) that only contains the DATABASES
dictionary and added two custom variables to it:
MYAPP_DB_ID = "myappdb"
MYAPP_DB_PREFIX = "myapp_"
DATABASES = {
MYAPP_DB_ID: {
'ENGINE': 'django.db.backends.postgresql_psycopg2',
'NAME': 'myappdb',
}
}
The directory structure of my package is:
.
|-- README.txt
|-- doc
| `-- db.txt
|-- setup.py
|-- src
| |-- __init__.py
| `-- myapp
| |-- __init__.py
| |-- exceptions.py
| |-- models.py
| |-- nodjango_settings.py
| `-- nodjango_settings.pyc
`-- test
`-- test.py
I was reading a little bit of django's own code to see how to handle the
DJANGO_SETTINGS_MODULE environment variable and read the configuration file,
and added the following code to `myapp/__init__.py`:
import os
# try to look after the DJANGO_SETTINGS_MODULE environment variable
# if not present raise an import error
# code abstract from python2.6/site-packages/django/conf/__init__.py
try:
settings_module = os.environ["DJANGO_SETTINGS_MODULE"]
if not settings_module: # If it's set but is an empty string.
raise KeyError
except KeyError:
raise ImportError("The DJANGO_SETTINGS_MODULE environment variable is not present.")
# TODO: look at python2.6/site-packages/django/conf/__init__.py +93
# if you print sys.path then the project directory gets added
# in django/core/management/__init__.py with sys.path.append
from django.utils import importlib
p = importlib.import_module(settings_module)
print p.MYAPP_DB_ID
I wanted to test that `__init__.py` works as intended, so from the root
directory of my package I ran:
$ DJANGO_SETTINGS_MODULE="src.myapp.nodjango_settings" python src/myapp/__init__.py
Traceback (most recent call last):
File "src/myapp/__init__.py", line 22, in <module>
p = importlib.import_module(settings_module)
File "/home/yanez/devpython/lib/python2.6/site-packages/django/utils/importlib.py", line 35, in import_module
__import__(name)
ImportError: No module named src.myapp.nodjango_settings
I didn't expect that. When you create a new django application, a `manage.py`
file is created, and this file sets the `DJANGO_SETTINGS_MODULE` variable to
the settings.py file relative to the root directory of the
application.
I read more of django's code and realized that in
`django/core/management/__init__.py` the root directory of the application is
added to `sys.path`. This could also be the solution to my problem, but I'm not
quite sure whether it's a good idea to mess with `sys.path` or not.
I'm not quite sure whether my idea is a good one or a bad one. I'd like to
know what you think about it and where/how I can improve it. Besides, if I
stick with my idea, how can I read my custom variables in `settings.py` without
having to reimport the settings module over and over?
Thanks
Answer: When deploying a django project with mod_wsgi, one has to write a wsgi
(python) script that does the "sys.path" dance, sets the
DJANGO_SETTINGS_MODULE environment variable, then creates the wsgi application
object, etc.
Why do I mention this? Because, IMHO, you should not try to handle this part
of the problem within "myapp", but from a distinct python script that would be
the application's entry point when using your app outside a django project
context, keeping "myapp" a pure library package. This launcher script would
then take care of setting up the correct environment (sys.path, settings,
whatever), as sketched below. For the record, the settings module is just a
Python module, which at runtime is an ordinary python object (an instance of
class "module"), and there are quite a few ways (other than the default import
mechanism) to create a module instance and add it to sys.modules (which is the
important point here).
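A minimal launcher along those lines (the paths and the final `pass` body are
this example's assumptions, not prescriptions):
    #!/usr/bin/env python
    import os
    import sys
    # make the package importable without installing it
    sys.path.insert(0, os.path.join(os.path.dirname(os.path.abspath(__file__)), 'src'))
    # point django.conf at the standalone settings before anything touches django.db
    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myapp.nodjango_settings')
    from myapp import models  # safe now: settings are configured
    if __name__ == '__main__':
        pass  # whatever the standalone application should do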
As a side note, having the settings inside the package doesn't make sense
IMHO; it's a configuration file.
Edit: well, I knew there was something about using part of django standalone,
and here it is:
<https://docs.djangoproject.com/en/dev/topics/settings/#using-settings-without-setting-django-settings-module>
|
Python tuple operations and count
Question: I have the following list of tuples. I want to build a string which outputs as
stated below: I want to count all the elements corresponding to 'a', i.e., how
many times k1 occurred w.r.t. 'a', and so on. What is the easiest way to do
this?
a=[('a','k1'),('b','k2'),('a','k2'),('a','k1'),('b','k2'),('a','k1'),('b','k2'),('c','k3'),('c','k4')]
The output should be collected into a string `output`:
a k1 3
a k2 1
b k1 1
b k2 3
c k3 1
c k4 1
Answer: Use the
[`Counter`](http://docs.python.org/library/collections.html#collections.Counter)
class from `collections`:
>>> a = [('a', 'k1'), ('b', 'k2'), ('a', 'k2'), ('a', 'k1'), ('b', 'k2'), ('a', 'k1'), ('b', 'k2'), ('c', 'k3'), ('c', 'k4')]
>>> from collections import Counter
>>> c = Counter(a)
Counter({('b', 'k2'): 3, ('a', 'k1'): 3, ('a', 'k2'): 1, ('c', 'k3'): 1, ('c', 'k4'): 1})
You can use `c.items()` to iterate over the counts:
>>> for item in c.items():
... print(item)
...
(('a', 'k2'), 1)
(('c', 'k3'), 1)
(('b', 'k2'), 3)
(('a', 'k1'), 3)
(('c', 'k4'), 1)
The above code is Python 3; the `Counter` class is new in Python 2.7. You can
now rearrange the items in the desired order and convert them to a string if
needed.
|
Google App Engine + PyCrypto = /dev/urandom not accessible
Question: I am using Google App Engine and PyCrypto to do some encryption. The error I
am getting, which is below, occurs _only on my local development server,_
which is running Linux Mint Maya (13). I deployed the same code to the GAE
cloud, and it runs without error.
ERROR 2012-06-29 16:04:20,717 webapp2.py:1553] [Errno 13] file not accessible: '/dev/urandom'
Traceback (most recent call last):
File "/home/eric/google_appengine/lib/webapp2/webapp2.py", line 1536, in __call__
rv = self.handle_exception(request, response, e)
File "/home/eric/google_appengine/lib/webapp2/webapp2.py", line 1530, in __call__
rv = self.router.dispatch(request, response)
File "/home/eric/google_appengine/lib/webapp2/webapp2.py", line 1278, in default_dispatcher
return route.handler_adapter(request, response)
File "/home/eric/google_appengine/lib/webapp2/webapp2.py", line 1102, in __call__
return handler.dispatch()
File "/home/eric/google_appengine/lib/webapp2/webapp2.py", line 572, in dispatch
return self.handle_exception(e, self.app.debug)
File "/home/eric/google_appengine/lib/webapp2/webapp2.py", line 570, in dispatch
return method(*args, **kwargs)
File "/home/eric/workspace/commentbox/src/controller/api.py", line 55, in get
self.response.out.write(encrypt(json.dumps(to_json)))
File "/home/eric/workspace/commentbox/src/controller/api.py", line 27, in encrypt
iv = Random.new().read(AES.block_size)
File "/usr/lib/python2.7/dist-packages/Crypto/Random/__init__.py", line 33, in new
return _UserFriendlyRNG.new(*args, **kwargs)
File "/usr/lib/python2.7/dist-packages/Crypto/Random/_UserFriendlyRNG.py", line 206, in new
return RNGFile(_get_singleton())
File "/usr/lib/python2.7/dist-packages/Crypto/Random/_UserFriendlyRNG.py", line 200, in _get_singleton
_singleton = _LockingUserFriendlyRNG()
File "/usr/lib/python2.7/dist-packages/Crypto/Random/_UserFriendlyRNG.py", line 144, in __init__
_UserFriendlyRNG.__init__(self)
File "/usr/lib/python2.7/dist-packages/Crypto/Random/_UserFriendlyRNG.py", line 86, in __init__
self._ec = _EntropyCollector(self._fa)
File "/usr/lib/python2.7/dist-packages/Crypto/Random/_UserFriendlyRNG.py", line 53, in __init__
self._osrng = OSRNG.new()
File "/usr/lib/python2.7/dist-packages/Crypto/Random/OSRNG/posix.py", line 60, in new
return DevURandomRNG(*args, **kwargs)
File "/usr/lib/python2.7/dist-packages/Crypto/Random/OSRNG/posix.py", line 42, in __init__
f = open(self.name, "rb", 0)
File "/home/eric/google_appengine/google/appengine/tools/dev_appserver_import_hook.py", line 592, in __init__
raise IOError(errno.EACCES, 'file not accessible', filename)
IOError: [Errno 13] file not accessible: '/dev/urandom'
ERROR 2012-06-29 16:04:20,721 webapp2.py:1549] Exception
ERROR 2012-06-29 16:04:20,721 webapp2.py:1549] AttributeError
ERROR 2012-06-29 16:04:20,721 webapp2.py:1549] :
ERROR 2012-06-29 16:04:20,721 webapp2.py:1549] "'DevURandomRNG' object has no attribute 'closed'"
ERROR 2012-06-29 16:04:20,721 webapp2.py:1549] in
ERROR 2012-06-29 16:04:20,721 webapp2.py:1549] <bound method DevURandomRNG.__del__ of <Crypto.Random.OSRNG.posix.DevURandomRNG object at 0x52707d0>>
ERROR 2012-06-29 16:04:20,721 webapp2.py:1549] ignored
The python code that is throwing the error is the second line in this block:
from Crypto.Cipher import AES
from Crypto import Random
key = b'Sixteen byte key'
iv = Random.new().read(AES.block_size)
cipher = AES.new(key, AES.MODE_CBC, iv)
return iv + cipher.encrypt(plaintext)
After seeing this error, [I realized it might be a permissions
error](http://serverfault.com/questions/391386/why-is-dev-urandom-only-readable-by-root-since-ubuntu-12-04-and-how-can-i-fix).
So then I did a quick check of the permissions on /dev/urandom:
eric@eric-Latitude-E5400 ~ $ dpkg -L udev | xargs grep urandom
/lib/udev/rules.d/50-udev-default.rules:KERNEL=="null|zero|full|random|urandom", MODE="0666"
eric@eric-Latitude-E5400 ~ $ ls -lart /dev/*random
crw-rw-rw- 1 root root 1, 9 Jun 29 10:53 /dev/urandom
crw-rw-rw- 1 root root 1, 8 Jun 29 10:53 /dev/random
So it looks like my permissions are fine. I have also tried running the
development server as root, but I get the same error. For some reason this
only happens with the development server, and not when deployed to google's
cloud. Any ideas on what to try next?
Thanks!
Answer: The error you are getting is because GAE restricts file access, and
[/dev/urandom](http://en.wikipedia.org/wiki//dev/random#Linux) is blocked.
Note that the error is not when you import PyCrypto, it's when you do
`AES.new(key, AES.MODE_CBC, iv)`
You can fix it, either by editing `Crypto/Random/OSRNG/__init__.py` and moving
the lines
if hasattr(os, 'urandom'):
from Crypto.Random.OSRNG.fallback import new
to the top, or by modifying os.name to something other than posix or nt at
the beginning of your script. I suggest the first option.
ps: I assume you are using python 2.5 and pycrypto 2.2, because of your
Traceback. Next time please include these details.
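If you go with the second option, you would guard it so it only applies on the
dev server (a sketch; checking `SERVER_SOFTWARE` is the usual way to detect
dev_appserver, and the replacement value just has to be neither 'posix' nor
'nt'):
    import os
    # must run before anything imports Crypto, since OSRNG picks its
    # implementation at import time based on os.name
    if os.environ.get('SERVER_SOFTWARE', '').startswith('Development'):
        os.name = 'gae-dev'  # forces PyCrypto's fallback RNG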
|
get max duplicate item indexes in a list using python
Question: As someone here pointed out, to get the largest duplicated item in a list
this can be used:
>>> from collections import Counter
>>> mylist = [20, 20, 25, 25, 30, 30]
>>> max(k for k,v in Counter(mylist).items() if v>1)
30
but what if I want to get the indexes instead of the values, here being `[4,
5]`?
Any help?
Regards...
Answer:
>>> from collections import defaultdict
>>> mylist = [20, 20, 25, 25, 30, 30]
>>> D = defaultdict(list)
>>> for i,x in enumerate(mylist):
        D[x].append(i)
>>> D[max(k for k,v in D.items() if len(v)>1)]
[4, 5]
|
Encrypt using Python and decrypt in jQuery/Javascript?
Question: I have some JSON data that I need to encrypt before sending it to the client
side. I can encrypt the data using pycrypto like this:
from Crypto.Cipher import AES
key = '0123456789abcdef'
mode = AES.MODE_CBC
encryptor = AES.new(key, mode)
text = jsonData
ciphertext = encryptor.encrypt(text)
And then I can send it to the client side. Now I need to use jQuery/Javascript
to convert the ciphertext to plain text. [jsaes](http://point-at-
infinity.org/jsaes/) is an implementation of AES in Javascript. Can it be used
to decrypt data back to plain text? Or is there any other library I can use to
complete this task?
Answer: Your straightforward answer is what Wes said.
However, there is the problem of transmitting the key securely. (One could use a
private/public key mechanism, but then there's no point encrypting the data
yourself because you already have a P/P mechanism.)
SSL/HTTPS was created for secure "transactions" between client and server; I
would advise you to use that.
|
Integrate protocol buffers into WAF
Question: I managed to compile my `.proto` files like this:
    def build(bld):
        bld(rule='protoc --cpp_out=. -I.. ${SRC}', source='a.proto b.proto', name='genproto')
Seems to work nicely: when I make changes to the source files, they are
recompiled and so on. But the result is files called `build/a.pb.cc` and
`build/b.pb.cc` which I need to include in my main program's source list. Of
course I know how to construct them manually from my protocol buffer file
names, but I don't think this is the way to go. Can anyone provide me a hint?
Best regards, Philipp
**UPDATE**
With patient help from the IRC people I managed to build a tool, as suggested
below.
    #!/usr/bin/env python
    # encoding: utf-8
    # Philipp Bender, 2012

    from waflib.Task import Task
    from waflib.TaskGen import extension

    """
    A simple tool to integrate protocol buffers into your build system.

        def configure(conf):
            conf.load('compiler_cxx cxx protoc_cxx')

        def build(bld):
            bld.program(source = "main.cpp file1.proto proto/file2.proto",
                        target = "executable")
    """

    class protoc(Task):
        run_str = '${PROTOC} ${SRC} --cpp_out=. -I..'
        color = 'BLUE'
        ext_out = ['.h', 'pb.cc']

    @extension('.proto')
    def process_protoc(self, node):
        cpp_node = node.change_ext('.pb.cc')
        hpp_node = node.change_ext('.pb.h')
        self.create_task('protoc', node, [cpp_node, hpp_node])
        self.source.append(cpp_node)
        self.env.append_value('INCLUDES', ['.'] )
        self.use = self.to_list(getattr(self, 'use', '')) + ['PROTOBUF']

    def configure(conf):
        conf.check_cfg(package="protobuf", uselib_store="PROTOBUF",
                       args=['--cflags', '--libs'])
        conf.find_program('protoc', var='PROTOC')
You can also find it in the bugtracker:
<https://code.google.com/p/waf/issues/detail?id=1184>
Answer: This kind of processing is documented in the Waf book (look for "idl").
However, I'm pretty sure a protobuf tool would be welcomed by the community, so
I suggest you attempt to create one and submit it for review on the bug
tracker or on IRC. This way, you'll have a lower maintenance burden and a
shorter wscript.
I would expect to use the tool like this:
    bld(
        name="protobufs",
        features="protoc cxx",
        source=["protobuf/a.proto", "protobuf/b.proto"],
        includes=["protobuf", "..."],
    )

    bld(
        target="test",
        features="cxx cxxprogram",
        source="test.cpp",
        use="protobufs", # uses the generated C++ code, links to -lprotobuf
    )
Or something like that.
|
How to find full module path of a class to import in other file
Question: I have method that returns module path of given class name
    def findModulePath(path, className):
        attributes = []
        for root, dirs, files in os.walk(path):
            for source in (s for s in files if s.endswith(".py")):
                name = os.path.splitext(os.path.basename(source))[0]
                full_name = os.path.splitext(source)[0].replace(os.path.sep, '.')
                m = imp.load_module(full_name, *imp.find_module(name, [root]))
                try:
                    attr = getattr(m, className)
                    attributes.append(attr)
                    # if "." in attr.__module__:
                    #     return
                except:
                    pass
        if len(attributes) <= 0:
            raise Exception, "Class %s not found" % className
        for element in attributes:
            print "%s.%s" % (element.__module__, className)
but it does not return the full path of the module. For example, I have a
Python file named "objectmodel" in the objects package, and it contains a Model
class. So I call findModulePath(MyProjectPath, "Model"); it prints
objectmodel.Model, but I need objects.objectmodel.Model.
Answer: The attribute you're looking for is `__file__`. Note you may have to do some
massaging of this value after you get it - it could be a `.py`, `.pyc`,
`.pyd`, `.so`, `.dll`, etc.
Of course it's also going to be a full path, but you have your root which you
can subtract to get the actual hierarchy that you care about.
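For example, a sketch of that massaging, assuming the `path` you walk in
`findModulePath` is the project root (the helper name is made up):

    import os

    def dotted_module_name(module, root):
        # turn ".../MyProject/objects/objectmodel.py" into "objects.objectmodel"
        path = module.__file__
        if path.endswith(('.pyc', '.pyo')):
            path = path[:-1]               # normalize compiled files back to .py
        rel = os.path.relpath(path, root)  # subtract the root you walked
        return os.path.splitext(rel)[0].replace(os.path.sep, '.')

You could then print `"%s.%s" % (dotted_module_name(m, path), className)`
instead of relying on `element.__module__`.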
|
memory error in python
Question:
Traceback (most recent call last):
File "/run-1341144766-1067082874/solution.py", line 27, in
main()
File "/run-1341144766-1067082874/solution.py", line 11, in main
if len(s[i:j+1]) > 0:
MemoryError
Error in sys.excepthook:
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook
from apport.fileutils import likely_packaged, get_recent_crashes
File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in
from apport.report import Report
MemoryError
Original exception was:
Traceback (most recent call last):
File "/run-1341144766-1067082874/solution.py", line 27, in
main()
File "/run-1341144766-1067082874/solution.py", line 11, in main
if len(s[i:j+1]) > 0:
MemoryError
The above errors appeared when I tried to run the following program. Can
someone explain what a memory error is, and how to overcome this problem?
**The program takes strings as input, finds all possible substrings, creates a
set out of them (in lexicographical order), and it should print the value at
the index asked by the user; otherwise it should print 'INVALID'.**
    def main():
        no_str = int(raw_input())
        sub_strings = []
        for k in xrange(0,no_str):
            s = raw_input()
            a = len(s)
            for i in xrange(0, a):
                for j in xrange(0, a):
                    if j >= i:
                        if len(s[i:j+1]) > 0:
                            sub_strings.append(s[i:j+1])
        sub_strings = list(set(sub_strings))
        sub_strings.sort()
        queries = int(raw_input())
        resul = []
        for i in xrange(0,queries):
            resul.append(int(raw_input()))
        for p in resul:
            try:
                print sub_strings[p-1]
            except IndexError:
                print 'INVALID'

    if __name__ == "__main__":
        main()
Answer: This one here:
    s = raw_input()
    a = len(s)
    for i in xrange(0, a):
        for j in xrange(0, a):
            if j >= i:
                if len(s[i:j+1]) > 0:
                    sub_strings.append(s[i:j+1])
seems to be very inefficient and expensive for large strings.
Better do
    for i in xrange(0, a):
        for j in xrange(i, a): # ensures that j >= i, no test required
            part = buffer(s, i, j+1-i) # don't duplicate data
            if len(part) > 0:
                sub_strings.append(part)
A buffer object keeps a reference to the original string and start and length
attributes. This way, no unnecessary duplication of data occurs.
A string of length `l` has `l*l/2` sub strings of average length `l/2`, so the
memory consumption would roughly be `l*l*l/4`. With a buffer, it is much
smaller.
Note that `buffer()` only exists in 2.x. 3.x has `memoryview()`, which is
utilized slightly different.
Even better would be to compute the indexes and cut out the substring on
demand.
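A rough sketch of that index-only idea (Python 2, to match the question; each
substring is represented as a (start, length) pair, and `buffer()` keeps the
comparisons copy-free):

    def unique_sorted_spans(s):
        # every substring of s, as a (start, length) pair into s
        spans = [(i, n) for i in xrange(len(s)) for n in xrange(1, len(s) - i + 1)]
        spans.sort(key=lambda span: buffer(s, span[0], span[1]))
        unique = spans[:1]
        for span in spans[1:]:
            if buffer(s, span[0], span[1]) != buffer(s, unique[-1][0], unique[-1][1]):
                unique.append(span)
        return unique

A query for rank `p` then becomes `i, n = unique_sorted_spans(s)[p-1]` followed
by `print s[i:i+n]`, so only the requested substrings are ever materialized.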
|
Does my test automation strategy sound ludicrous?
Question: I am developing an automation testing framework for testing a web service. The
web service is SOAP and implemented in Java (via Apache Axis2), however, our
tests are implemented in Python and uses the suds library to issue requests to
the server. The tests are high level tests that tests scenarios such as
backing up a user's data.
Now this web service is complicated in that certain methods require passing
lots of different types of objects and often require calling other methods to
acquire certain objects. For example, we have a call for backing up a user
whose pre-reqs looks like this :
1. call method getUser() to retrieve object User
2. call method getDataset() to retrieve object Dataset
3. call method getXService() to retrieve object XService
4. call method doBackup(User, Dataset, XService) to begin the backup
This is just a simple example of one of our calls that has many other
prerequisite calls before the primary call can be made. Since these scenarios
need to be executed often in the tests, I want to abstract the set of calls so
that to perform the above backup, I just need to call one method that makes
the other calls for me.
My question is, should I do this in an object-oriented fashion and pretty much
create a 1-to-1 mapping of Python classes that map to the Java version of the
objects? So my testing framework would just include classes so I could do:
    # User, XService, and Dataset are classes that correspond with
    # types implemented in the SOAP web service
    from lib import XService, Dataset

    class User():

        def __init__(self, **kwargs):
            self.id = kwargs.get('id', None)

        def create(self):
            soap_client.call('createClient', self.id)

        def backup(self):
            dataset = Dataset.get(1234)
            service = XService.getInstance()
            soap_client.call('doBackup', self, dataset, service)
So all I have to do is call backup() rather than issue three different calls
every time.
The downside of this is that I have to create an object for every Type on the
server. Moreover, the objects on the testing side may get stale since they
don't sync with data on the server.
My other idea was to skip the object-oriented route and just create a module
with functions like backupUser() or restoreBackupToUser(), and just feed them
actual data objects returned from the server. This approach would solve the
stale-data issue but would also create massive modules that would be hard to
maintain.
Can anyone who has encountered this problem give some tips on, or critique of,
my approaches? Perhaps I'm overthinking it and there is a better way to go
about testing the web service methods.
Answer: I had to create an automated test suite for testing a web service a while
back. I wrote the whole thing object-oriented, as there were a lot of methods
that each test would use in common. This also saved a lot of time when it came
to negative testing etc. If you have the time, writing your tests this way will
be easier to maintain, and once you have written all the initial methods to be
called, you'll save a lot of time, as your tests will just be several function
calls with very little to no logic. Here is an example of something we did:
    import suds, unittest, pexpect, re, os, time, sys, random
    from random import randrange
    from setauth import Authentication

    ####################################################################
    #
    #   Add/Get Profile Function Tests
    #
    ####################################################################

    class TestAddGetProfile(unittest.TestCase):

        def setUp(self):
            project = testvars[0]
            media_address = testvars[5]
            mgmt_address = testvars[4]
            self.profile = testvars[7]

            # connects to WSDL file and stores location in variable 'client'
            self.client = testvars[15]
            self.client.options.location = media_address

            self.mgmtclient = testvars[14]
            self.mgmtclient.options.location = mgmt_address

            self.getProfileToken = self.client.factory.create('ns4:ReferenceToken')
            self.getProfileToken.value = self.profile

            self.createProfileToken = self.client.factory.create('ns4:ReferenceToken')
            self.createProfileToken.value = self.profile

            self.createProfileName = self.client.factory.create('ns4:Name')
            self.createProfileName.value = self.profile

            self.deleteProfileToken = self.client.factory.create('ns4:ReferenceToken')
            self.deleteProfileToken.value = self.profile

            self.auth = Authentication()
            self.auth.setAuth('Administrator', self.client, self.mgmtclient)
            self.auth.makeUsers(self.mgmtclient)

        def sudsCreateProfile(self):
            create_result = self.client.service.CreateProfile(self.createProfileName, self.createProfileToken)

        def tearDown(self):
            self.auth.deleteAllUsers(self.mgmtclient)
            self.client = None

    class AdminCreateProfile(TestAddGetProfile):

        def runTest(self):
            self.auth.setAuth('Administrator', self.client, self.mgmtclient)
            try:
                self.sudsGetProfile()
                self.sudsDeleteProfile()
            except:
                pass
            try:
                self.sudsCreateProfile()
                self.assertTrue(True)
            except:
                self.assertFalse(True)
We have several other scripts here for authentication and a test wrapper to
run all the tests, but you should get the general idea.
Hope this helps your decision.
|
New URL on django admin independent of the apps
Question: I am using django 1.4 and Python 2.7.
I just have a simple requirement where I have to add a new URL to the Django
admin app. I know how to add URLs for custom apps, but I am unable to figure
out how to add URLs that belong to the admin app. Please guide me through
this.
Basically the full URL should be something like `admin/my_url`.
**UPDATE**
I also want to be able to reverse-map the URL through the admin namespace.
Answer: +1 for Jingo's answer to your original question. With your clarifying comment
to the answer in mind: Such a URL is _not_ "independent of the apps", it is a
URL for the app "admin".
Adding a URL to the admin site is similar to ModelAdmin, by overriding
get_urls(): <https://docs.djangoproject.com/en/dev/ref/contrib/admin/#adding-
views-to-admin-sites>
EDIT:
<https://docs.djangoproject.com/en/dev/ref/contrib/admin/#django.contrib.admin.AdminSite>
is an admin site, by default "the" admin site is instantiated as
"django.contrib.admin.site" (and then e.g. your ModelAdmin's are registered
against that). So you can subclass AdminSite for your own MyAdminSite and re-
define get_urls() there:
    from django.contrib.admin import AdminSite

    class MyAdminSite(AdminSite):
        def get_urls(self):
            ...
        ...

    my_admin_site = MyAdminSite()
    ...
    my_admin_site.register(MyModel, MyModelAdmin)
Make sure you use my_admin_site in urls.py instead now:
<https://docs.djangoproject.com/en/dev/ref/contrib/admin/#hooking-adminsite-
instances-into-your-urlconf>
Regarding the actual contents of get_urls(),see
<https://docs.djangoproject.com/en/dev/ref/contrib/admin/#django.contrib.admin.ModelAdmin.get_urls>
(of course calling super() of MyAdminSite). Also note the convenient
"admin_view" wrapper mentioned there.
P.S.: In theory, you could also just define get_urls() and then monkeypatch
the default admin site so that it uses your get_urls() but I don't know if
that would actually work - you'd probably have to monkeypatch right after its
"first" import...
|
How to use boost::python::iterator with return_internal_reference?
Question: I have a class `Type` which cannot be copied, nor does it contain a default
constructor. I have a second class `A` that acts as a set of the above classes.
This second class gives access via iterators, and my iterator has a dereference
operator:
    class A {
        class iterator {
            [...]
        public:
            Type & operator*()
            {
                return instance;
            }
        private:
            Type instance;
        }
        [...]
    };
Now to expose that I wrote a `boost::python` code that looks like that:
    class_<A>("A", [...])
        .def("__iter__", iterator<A, return_internal_reference<> >())
        .def("__len__", container_length_no_diff<A, A::iterator>)
        ;
After adding print messages to all iterator operations (construction,
assignment, dereferences, destruction) for code Python like this:
for o in AInstance:
print o.key
I get output (trimmed to important part):
construct 0xffffffff7fffd3e8
dereference: 0xffffffff7fffd3e8
destroy 0xffffffff7fffd3e8
get key 0xffffffff7fffd3e8
In the above code those addresses are just addresses of the `instance` member
(or `this` in a method call). The first three lines are produced by the
`iterator`; the fourth line is printed by a getter method in `Type`. So somehow
`boost::python` wraps everything in such a manner that it:
1. creates iterator
2. dereferences iterator and stores reference
3. **destroys iterator (and object it contains)**
4. uses reference obtained in step two
So clearly `return_internal_reference` does not behave as stated (note that it
actually is just a typedef over `with_custodian_and_ward_postcall<>`), where it
should keep the object alive as long as the result of the method call is referenced.
So my question is how do I expose such an iterator to Python with
`boost::python`?
**edit:**
As was pointed out, it might not be clear: the original container does not
contain objects of type `Type`. It contains some `BaseType` objects from which
I am able to construct/modify a `Type` object. So the `iterator` in the above
example acts like a `transform_iterator`.
Answer: I think the whole problem was that I did not fully understand what semantics
the `iterator` class should provide. It seems that the value returned by the
iterator has to be valid as long as the container exists, not the iterator.
This means that `boost::python` behaves correctly and there are two solutions
to that:
* use `boost::shared_ptr`
* return by value
These approaches are a bit less efficient than what I tried to do, but it looks
like there is no other way.
**edit:** I have worked out a solution (not only possible, but it seems to be
working nicely): [Boost python container, iterator and item
lifetimes](http://stackoverflow.com/questions/13585575/boost-python-container-
iterator-and-item-lifetimes)
|
Can import objc module in python 2.6 but NOT in python 2.7
Question: My system: Mac OS X 10.6.8, gcc 4.2, python 2.7, xcode 3.2.3
I use Python 2.7 and I get an error when I try to do `import objc`; it returns
`ImportError: No module named objc`.
It looks like the objc module is not there. But actually I have the objc
module installed already. Snow Leopard has pyobjc preinstalled, and I have
also checked this using python2.6 (I have Python 2.7 and 2.6 on my Mac). So if
I invoke `import objc` using python2.6, I get no error, which means `objc`
exists and I can use that module without problems ... but if I import it using
Python 2.7, I get the `ImportError: No module named objc` error.
Does anyone have a solution? FYI, python2.6 comes preinstalled with
OS X while 2.7 was manually installed. I've been using 2.7 for a couple of
months without problems.
Answer: Python C extension modules like `objc` cannot be re-used between python
versions. You'll have to install the `objc` module for 2.7 separately.
Generally, different python installations (such as 2.6 or 2.7, or 3.2) use
separate module import locations, and you normally install extensions _per_
python setup.
|
python pexpect sendcontrol key characters
Question: I am working with Python's pexpect module to automate tasks. I need help in
figuring out the key characters to use with sendcontrol. How could one send the
ENTER control key? And for future reference, how can we find the key
characters?
Here is the code I am working on.
    #!/usr/bin/env python
    import pexpect

    id = pexpect.spawn('ftp 192.168.3.140')
    id.expect_exact('Name')
    id.sendline('anonymous')
    id.expect_exact('Password')

    # Not sure how to send the enter control key
    id.sendcontrol('???')

    id.expect_exact('ftp')
    id.sendline('dir')
    id.expect_exact('ftp')
    lines = id.before.split('\n')
    for line in lines:
        print line
Answer: **pexpect** has no `sendcontrol()` method. In your example you appear to be
trying to send an empty line. To do that, use:
id.sendline('')
If you need to send real control characters then you can `send()` a string
that contains the appropriate character value. For instance, to send a
control-C you would:
id.send('\003')
or:
id.send(chr(3))
_Responses to comment #2:_
Sorry, I typo'ed the module name -- now fixed. More importantly, I was looking
at old documentation on noah.org instead of the [latest documentation at
SourceForge](http://pexpect.sourceforge.net/pexpect.html). The newer
documentation does show a `sendcontrol()` method. It takes an argument that is
either a letter (for instance, `sendcontrol('c')` sends a control-C) or one of
a variety of punctuation characters representing the control characters that
don't correspond to letters. But really `sendcontrol()` is just a convenient
wrapper around the `send()` method, which is what `sendcontrol()` calls after
after it has calculated the actual value that you want to send. You can read
the source for yourself at [line 973 of this
file](http://pexpect.svn.sourceforge.net/viewvc/pexpect/trunk/pexpect/pexpect.py?revision=521&view=markup).
I don't understand why `id.sendline('')` does not work, especially given that
it apparently works for sending the user name to the spawned **ftp** program.
If you want to try using `sendcontrol()` instead then that would be either:
id.sendcontrol('j')
to send a Linefeed character (which is control-j, or decimal 10) or:
id.sendcontrol('m')
to send a Carriage Return (which is control-m, or decimal 13).
If those don't work then please explain exactly what does happen, and how that
differs from what you wanted or expected to happen.
|
Python Redhat version issue
Question: > **Possible Duplicate:**
> [Upgrade python without breaking
> yum](http://stackoverflow.com/questions/10624511/upgrade-python-without-
> breaking-yum)
I'm running a Red Hat VM (2.6.18-274.el5, 64-bit). I installed nodejs on the VM
in order to use BrowserStack. To get nodejs running I had to upgrade Python to
2.6 or above. I installed 2.7 from source using make altinstall. Then I
created a hard link to point from 2.4 to 2.7. Checking python -V now shows
2.7 as the default. That all worked out fine and node is now up and
running.
There was a problem importing one of the Python modules
required to run yum. The error leading to this problem was:
No module named yum
Please install a package which provides this module, or
verify that the module is installed correctly.
It's possible that the above module doesn't match the
current version of Python, which is:
2.7.2 (default, Jul 2 2012, 23:35:52)
[GCC 4.1.2 20080704 (Red Hat 4.1.2-51)]
If you cannot solve this problem yourself, please go to
the yum faq at:
http://wiki.linux.duke.edu/YumFaq
My problem is yum. It's no longer working, as it's looking for the older version
of Python (2.4). Is there any way I can have both of them working, with yum and
node each using a different version?
Answer: There are two issues here. One is that you broke your system python. Yum is
installed in the system python's site-packages; if you damage that python, yum
will break. Breaking the system python installation is a stupid idea. Maybe you
made backups, or have another sister machine from which you can transplant the
original 2.4 python.
The other issue is that there is no problem with having several pythons on the
same machine. To install Node.js you must simply tell it which python to use.
Set `PYTHON=/opt/local/my/path/to/python2.7/bin/python`, possibly `PYTHONHOME`
as well, before running `./configure` .
|
Python tkinter : loop in Label
Question: Hello, I just wanted the Label to change/refresh during the loop, but it
doesn't work.
This is my code:
    fen1 = Tk()
    v = StringVar()
    Label(fen1,textvariable=v).pack()
    i=0
    while(1):
        i=i+1
        v.set(i)
    fen1.mainloop()
Thanks
Answer: here, try this:
    from Tkinter import *
    import time

    root=Tk()
    variable=StringVar()

    def update_label():
        i=0
        while 1:
            i=i+1
            variable.set(str(i))
            root.update()

    your_label=Label(root,textvariable=variable)
    your_label.pack()

    start_button=Button(root,text="start",command=update_label)
    start_button.pack()

    root.mainloop()
That should give you a good example. However, it is important to note that
during the while loop, you MUST call `root.update()`; otherwise your GUI will
freeze until the loop completes (in this case it _never_ does) and never show
your numbers.
Also note that you _can_ call `update_label()` from anywhere in your program.
I just added it to the start button for example purposes.
What was wrong with your code was that you had the while loop free-floating
and, most importantly, _before_ your GUI's mainloop. When you do this,
since this loop is infinite, it never allows `Tkinter` to start its
`mainloop()`. However, if you were to put the while loop _after_ the mainloop,
then it would never be executed until _after_ you exit the GUI; this is
because the mainloop runs until it is stopped (by closing the GUI).
So to fix this you simply put it in a function and call it later on during
`Tkinter`'s mainloop. You can do this in various ways as well; for example, you
can use `.after()` to perform a specific task after a certain amount of time,
or make it the command of a button to be run when pressed, etc.
However, the _proper_ code you should use is the following, as you do not
really want infinite loops in your code (other than your mainloop):
    class App(object):
        def __init__(self):
            self.root=Tk()
            self.variable=StringVar()
            self.i=0
            self.your_label=Label(self.root,textvariable=self.variable)

        def grid(self):
            self.your_label.pack()

        def update_label(self):
            self.i=self.i+1
            self.variable.set(str(self.i))
            self.root.after(20,self.update_label)

        def run(self):
            self.grid()
            self.root.after(20,self.update_label)
            self.root.mainloop()

    if __name__=='__main__':
        App().run()
|
What is the cross-platform method of enumerating serial ports in Python (including virtual ports)?
Question: **Note:** I'm using Python 2.7, and pySerial for serial communications.
I found this article which lists two ways:
<http://www.zaber.com/wiki/Software/Python#Displaying_a_list_of_available_serial_ports>
This method works on Windows and Linux, but sometimes misses virtual ports on
Linux:
    import serial

    def scan():
        # scan for available ports. return a list of tuples (num, name)
        available = []
        for i in range(256):
            try:
                s = serial.Serial(i)
                available.append( (i, s.portstr))
                s.close()
            except serial.SerialException:
                pass
        return available

    print "Found ports:"
    for n,s in scan(): print "(%d) %s" % (n,s)
And this one that only works on Linux, but includes virtual ports:
    import serial, glob

    def scan():
        # scan for available ports. return a list of device names.
        return glob.glob('/dev/ttyS*') + glob.glob('/dev/ttyUSB*')

    print "Found ports:"
    for name in scan(): print name
I suppose I could do platform detection to use the second method (the one that
includes virtual ports) when running on Linux, and the first method when
running Windows, but what about Mac?
How should I enumerate serial ports (virtual too) regardless of platform?
**Edit**
I found a few pertinent questions:
* [MacPython: programmatically finding all serial ports](http://stackoverflow.com/questions/1659283/macpython-programmatically-finding-all-serial-ports)
* [MacOS: what's the difference between /dev/tty.* and /dev/cu.*?](http://stackoverflow.com/questions/8632586/macos-whats-the-difference-between-dev-tty-and-dev-cu)
* [How to find all serial devices (ttyS, ttyUSB, ..) on Linux without opening them?](http://stackoverflow.com/questions/2530096/how-to-find-all-serial-devices-ttys-ttyusb-on-linux-without-opening-them)
Answer: This is what I've been using. It's a mashup of the methods I posted above. I'd
still like to see better solutions, though.
    import glob
    import platform
    import serial

    # A function that tries to list serial ports on most common platforms
    def list_serial_ports():
        system_name = platform.system()
        if system_name == "Windows":
            # Scan for available ports.
            available = []
            for i in range(256):
                try:
                    s = serial.Serial(i)
                    available.append(i)
                    s.close()
                except serial.SerialException:
                    pass
            return available
        elif system_name == "Darwin":
            # Mac
            return glob.glob('/dev/tty*') + glob.glob('/dev/cu*')
        else:
            # Assume Linux or something else
            return glob.glob('/dev/ttyS*') + glob.glob('/dev/ttyUSB*')
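If your pySerial is new enough (2.6+), it also ships a cross-platform
enumerator that may cover all three branches for you - hedged, because how
well it sees virtual ports varies by version and platform:

    from serial.tools import list_ports  # assumes pySerial >= 2.6

    for port, description, hwid in list_ports.comports():
        print port, '-', description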
|
Inheritance in web.py?
Question: I am currently developing a web.py application. This is my web application,
which is bound to web.py and WSGI.
root/main.py
    import web
    import sys
    import imp
    import os

    sys.path.append(os.path.dirname(__file__))
    #from module import module
    from exam import exam

    urls = (
        '/exam', 'exam'
    )

    application = web.application(urls, globals(), autoreload = True).wsgifunc()
My application has an abstract class called module in module.py in the root
directory, and its purpose is to be inherited by modules.
root/module.py
    class module:
        def fetchURL(self, url):
            # ...
            return content
The lower-level module called "exam" inherits module.
root/exam/__init__.py
    from module import module

    class exam(module):
        def getResults(self):
            # error occurs here
            self.fetchURL('math.json')
When I call the parent method, web.py raises an exception
> WalkerError: ('unexpected node type', 339)
Environment: Python 2.5
How can I resolve the problem? Thanks
// EDIT 03 July 10:22 GMT+0
The stack trace is as follows
mod_wsgi (pid=1028): Exception occurred processing WSGI script 'D:/py/labs_library/index.py'.
Traceback (most recent call last):
File "D:\csvn\Python25\lib\site-packages\web\application.py", line 277, in wsgi
result = self.handle_with_processors()
File "D:\csvn\Python25\lib\site-packages\web\application.py", line 247, in handle_with_processors
return process(self.processors)
File "D:\csvn\Python25\lib\site-packages\web\application.py", line 244, in process
raise self.internalerror()
File "D:\csvn\Python25\lib\site-packages\web\application.py", line 467, in internalerror
return debugerror.debugerror()
File "D:\csvn\Python25\lib\site-packages\web\debugerror.py", line 305, in debugerror
return web._InternalError(djangoerror())
File "D:\csvn\Python25\lib\site-packages\web\debugerror.py", line 290, in djangoerror
djangoerror_r = Template(djangoerror_t, filename=__file__, filter=websafe)
File "D:\csvn\Python25\lib\site-packages\web\template.py", line 845, in __init__
code = self.compile_template(text, filename)
File "D:\csvn\Python25\lib\site-packages\web\template.py", line 924, in compile_template
ast = compiler.parse(code)
File "D:\csvn\Python25\lib\compiler\transformer.py", line 51, in parse
return Transformer().parsesuite(buf)
File "D:\csvn\Python25\lib\compiler\transformer.py", line 128, in parsesuite
return self.transform(parser.suite(text))
File "D:\csvn\Python25\lib\compiler\transformer.py", line 124, in transform
return self.compile_node(tree)
File "D:\csvn\Python25\lib\compiler\transformer.py", line 167, in compile_node
raise WalkerError, ('unexpected node type', n)
WalkerError: ('unexpected node type', 339)
If possible, I would like to turn off the template functionality, as I use
Python only for JSON output for a mobile app.
Answer: If you create a Python package you should add `__init__.py` at the top of your
hierarchy:

    dvedit/
        __init__.py
        clipview.py
        filters/
            __init__.py

This means that every directory which will be imported via `from ... import
...` should have an `__init__.py` file.
Further info is available at: <http://wiki.cython.org/PackageHierarchy>
|
Pickle incompatability of numpy arrays between Python 2 and 3
Question: I am trying to load the MNIST dataset linked
[here](http://deeplearning.net/tutorial/gettingstarted.html) in Python 3.2
using this program:
import pickle
import gzip
import numpy
    with gzip.open('mnist.pkl.gz', 'rb') as f:
        l = list(pickle.load(f))
    print(l)
Unfortunately, it gives me the error:
Traceback (most recent call last):
File "mnist.py", line 7, in <module>
train_set, valid_set, test_set = pickle.load(f)
UnicodeDecodeError: 'ascii' codec can't decode byte 0x90 in position 614: ordinal not in range(128)
I then tried to decode the pickled file in Python 2.7, and re-encode it. So, I
ran this program in Python 2.7:
import pickle
import gzip
import numpy
    with gzip.open('mnist.pkl.gz', 'rb') as f:
        train_set, valid_set, test_set = pickle.load(f)

    # Printing out the three objects reveals that they are
    # all pairs containing numpy arrays.

    with gzip.open('mnistx.pkl.gz', 'wb') as g:
        pickle.dump(
            (train_set, valid_set, test_set),
            g,
            protocol=2)  # I also tried protocol 0.
It ran without error, so I reran this program in Python 3.2:
import pickle
import gzip
import numpy
# note the filename change
    # note the filename change
    with gzip.open('mnistx.pkl.gz', 'rb') as f:
        l = list(pickle.load(f))
    print(l)
However, it gave me the same error as before. How do I get this to work?
* * *
[This is a better approach for loading the MNIST dataset.](http://scikit-
learn.org/stable/modules/generated/sklearn.datasets.fetch_mldata.html)
Answer: This seems like some sort of incompatibility. It's trying to load a
"binstring" object, which is assumed to be ASCII, while in this case it is
binary data. Whether this is a bug in the Python 3 unpickler or a "misuse" of
the pickler by numpy, I don't know.
Here is something of a workaround, but I don't know how meaningful the data is
at this point:
import pickle
import gzip
import numpy
    with open('mnist.pkl', 'rb') as f:
        u = pickle._Unpickler(f)
        u.encoding = 'latin1'
        p = u.load()
    print(p)
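The public API exposes the same knob: Python 3's `pickle.load` accepts an
`encoding` keyword, so this should be equivalent while using the fast C
unpickler:

    import pickle

    with open('mnist.pkl', 'rb') as f:
        p = pickle.load(f, encoding='latin1')  # same latin1 workaround, public API
    print(p)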
Unpickling it in Python 2 and then repickling it is only going to create the
same problem again, so you need to save it in another format.
|
How to debug/log wsgi python app?
Question: I tried this:
#!/usr/bin/python
from wsgiref.simple_server import make_server
from cgi import parse_qs, escape
import logging
import os
import sys
html = """
<html>
<body>
<form method="post" action="parsing_post.wsgi">
<p>
Age: <input type="text" name="age">
</p>
<p>
Hobbies:
<input name="hobbies" type="checkbox" value="software"> Software
<input name="hobbies" type="checkbox" value="tunning"> Auto Tunning
</p>
<p>
<input type="submit" value="Submit">
</p>
</form>
<p>
Age: %s<br>
Hobbies: %s
</p>
</body>
</html>
"""
    def application(environ, start_response):
        # the environment variable CONTENT_LENGTH may be empty or missing
        try:
            request_body_size = int(environ.get('CONTENT_LENGTH', 0))
        except (ValueError):
            request_body_size = 0

        # When the method is POST the query string will be sent
        # in the HTTP request body which is passed by the WSGI server
        # in the file like wsgi.input environment variable.
        logger = logging.getLogger(__name__)
        request_body = environ['wsgi.input'].read(request_body_size)
        d = parse_qs(request_body)

        age = d.get('age', [''])[0] # Returns the first age value.
        hobbies = d.get('hobbies', []) # Returns a list of hobbies.

        # Always escape user input to avoid script injection
        age = escape(age)
        hobbies = [escape(hobby) for hobby in hobbies]

        response_body = html % (age or 'Empty',
                                ', '.join(hobbies or ['No Hobbies']))

        status = '200 OK'
        response_headers = [('Content-Type', 'text/html'),
                            ('Content-Length', str(len(response_body)))]
        start_response(status, response_headers)

        return [response_body]
But I don't know where it logs. I'm trying to display/log the value on the web
page or in a file like /var/log/apache2/myapp.log.
What's the best way to do this?
Any answer will be highly appreciated. Thanks in advance.
Answer: Note that the above code won't actually produce any log whatsoever, since
you're not calling any of the `logger.log()` variants - but I guess that's not
the point.
If you're running your code with apache/mod_wsgi, the simplest solution is to
configure your logger(s) to log to sys.stderr using a StreamHandler (cf.
<http://docs.python.org/howto/logging.html#configuring-logging>), and define
the error log path, name and level in your apache conf (beware, the default
apache behaviour is to only log "error level" messages).
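A minimal sketch of that setup, dropped into your WSGI module (the format
string and the debug call are just illustrations - the point is the
StreamHandler on sys.stderr, which mod_wsgi routes into Apache's error log):

    import logging
    import sys

    handler = logging.StreamHandler(sys.stderr)
    handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(name)s: %(message)s'))

    logger = logging.getLogger(__name__)
    logger.addHandler(handler)
    logger.setLevel(logging.DEBUG)

    def application(environ, start_response):
        logger.debug('request body size: %s', environ.get('CONTENT_LENGTH'))
        response_body = 'logged'
        start_response('200 OK', [('Content-Type', 'text/plain'),
                                  ('Content-Length', str(len(response_body)))])
        return [response_body]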
|
how to design a page in django cms
Question: I want to design a page in Django that has a search bar in which the user
enters a keyword to search for in the 10 XML documents. I have written the
Python code for searching the words using XML parsing. I have also developed
an app called "**search**" in Django, but that app has to contain this page
that I want to design. Right now, my app takes input from the terminal, but I
want the word to be entered through this web page that I will design. How can
I do that?
This is my code that will sit at the back end:
    #!/usr/bin/python
    import sys
    sys.path.insert(0,'/home/pooja/Desktop/mysite')

    #to tell django which settings module to use
    import os
    os.environ['DJANGO_SETTINGS_MODULE'] = 'mysite.settings'

    from search.models import Keywords
    from skey import find_root_tags, count, sorting_list

    str1 = raw_input("enter the word to be searched\n") # taking input from user on terminal but I want it to be taken through that page
    list = []
    fo = open("xml.txt","r")
    for i in range(count.__len__()):
        file = fo.readline()
        file = file.rstrip('\n')
        find_root_tags(file,str1,i)
        list.append((file,count[i]))
    sorting_list(list)
    for name, count in list:
        s = Keywords(file_name=name,frequency_count=count)
        s.save()
    fo.close()
Here django_project = mysite #my project's name and app = search #my app's
name
Please help.
Answer: Create a view in your search app with a form, receive the search keyword
there, and pass it on to your search code. See these pages:
* Django | Working with forms | Django documentation -> <https://docs.djangoproject.com/en/dev/topics/forms/>
* Django | Writing views | Django documentation ->
<https://docs.djangoproject.com/en/dev/topics/http/views/>
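A minimal sketch of such a view (names are illustrative, and
`run_keyword_search` is a hypothetical wrapper you would write around the
existing skey-based code):

    # search/views.py -- a sketch only
    from django import forms
    from django.shortcuts import render_to_response
    from django.template import RequestContext

    from search.backend import run_keyword_search  # hypothetical module wrapping skey

    class SearchForm(forms.Form):
        keyword = forms.CharField(max_length=100)

    def search(request):
        results = []
        form = SearchForm(request.POST or None)
        if request.method == 'POST' and form.is_valid():
            # replaces raw_input(): the word now arrives from the web page
            results = run_keyword_search(form.cleaned_data['keyword'])
        return render_to_response('search/results.html',
                                  {'form': form, 'results': results},
                                  context_instance=RequestContext(request))

Hook the view up in your app's urls.py and have results.html render the form
and the (file, count) pairs.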
|
Weekly Cron Job on a Specific Day
Question: I created the following cron job yesterday to run every Tuesday at 12:01 AM (I
think), but it did not run last night. Running the command as I have it in the
cron file works. What did I get wrong here?
1 0 * * 2 python2.7 /path/to/django/manage.py my_command > /path/to/file.log
EDIT: I wasn't including the log redirect before, and I'm guessing that might be
important. Should it be `>>`?
Answer: Possibly `python2.7` is not in the default `PATH` that `cron` provides to the
jobs it spawns, as @Zhenya points out. Usually `cron` logs somewhere, but it
seems to vary by distribution - on my Debian box, `syslog` redirects these
entries to `/var/log/syslog`, but it may be a different log file on your box.
Check to see what errors `cron` may be complaining about...
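One way to rule that out is to be explicit in the crontab itself; `>>` appends
to the log instead of truncating it, and `2>&1` captures errors too (the
interpreter path below is an assumption - check yours with `which python2.7`):

    PATH=/usr/local/bin:/usr/bin:/bin
    1 0 * * 2 /usr/bin/python2.7 /path/to/django/manage.py my_command >> /path/to/file.log 2>&1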
|
Python - Multiprocess, member functions of classes
Question: I can't figure out if this is because of me, or the multiprocessing module
that Python2.7 has. Can anyone figure out why this is not working?
    from multiprocessing import pool as mp

    class encapsulation:
        def __init__(self):
            self.member_dict = {}
        def update_dict(self,index,value):
            self.member_dict[index] = value

    encaps = encapsulation()

    def method(argument):
        encaps.update_dict(argument,argument)
        print encaps.member_dict

    p = mp() #sets up multiprocess pool of processors
    p.map(method,sys.argv[1:]) #method is the function, sys.argv is the list of arguments to multiprocess

    print encaps.member_dict
>>>{argument:argument}
>>>{}
So my question is just about member variables. It is my understanding that the
class encapsulation should hold this dictionary inside and outside of the
function. Why does it reset and give me an empty dictionary even though I have
only initialized it once? Please help
Answer: Even though you are encapsulating the object, the multiprocessing module will
end up using a local copy of the object in each process and never actually
propagate your changes back to you. In this case, you are not using
Pool.map properly, as it expects each method call to return a result, which is
then sent back up as your return value. If what you want is to affect the
shared object, then you need a manager, which will coordinate the shared
memory:
## Encapsulating a shared object
    from multiprocessing import Pool
    from multiprocessing import Manager
    import sys

    class encapsulation:
        def __init__(self):
            self.member_dict = {}
        def update_dict(self,index,value):
            self.member_dict[index] = value

    encaps = encapsulation()

    def method(argument):
        encaps.update_dict(argument,argument)
        # print encaps.member_dict

    manager = Manager()
    encaps.member_dict = manager.dict()

    p = Pool()
    p.map(method,sys.argv[1:])
    print encaps.member_dict
**output**
$ python mp.py a b c
{'a': 'a', 'c': 'c', 'b': 'b'}
I would suggest not setting the shared object as the member attribute, but
rather passing it in as an arg, or encapsulating the shared object itself and
then passing its values into your dict. The shared object cannot be kept
persistently; it needs to be emptied and discarded:
    # copy the values to a reg dict
    encaps.member_dict = encaps.member_dict.copy()
But this might even be better:
    class encapsulation:
        def __init__(self):
            self.member_dict = {}
        # normal dict update
        def update_dict(self,d):
            self.member_dict.update(d)

    encaps = encapsulation()
    manager = Manager()
    results_dict = manager.dict()

    # pass in the shared object only
    def method(argument):
        results_dict[argument] = argument

    p = Pool()
    p.map(method,sys.argv[1:])
    encaps.update_dict(results_dict)
## Using the pool.map as intended
If you were using the map to return values, it might look like this:
    def method(argument):
        encaps.update_dict(argument,argument)
        return encaps.member_dict

    p = Pool()
    results = p.map(method,sys.argv[1:])
    print results
    # [{'a': 'a'}, {'b': 'b'}, {'c': 'c'}]
You would need to combine the results into your dict again:
    for result in results:
        encaps.member_dict.update(result)
    print encaps.member_dict
    # {'a': 'a', 'c': 'c', 'b': 'b'}
|
Parsing a pwdump file python
Question: I'm trying to parse a pwdump file in python. The content of a pwdump file
looks like this:
...[snip]
Domain\TESTIN$::aad3b435b51404eeaad3b435b51404ee:31d6cfe0d16ae931b73c59d7e0c089c0:::
Guest(current):501:aad3b435b51404eeaad3b435b51404ee:31d6cfe0d16ae931b73c59d7e0c089c0:::
Guest(hist_01):501:aad3b435b51404eeaad3b435b51404ee:31d6cfe0d16ae931b73c59d7e0c089c0:::
Guest(hist_02):501:aad3b435b51404eeaad3b435b51404ee:31d6cfe0d16ae931b73c59d7e0c089c0:::
...[snip]
I would like to get 2 things out of this:
Domain\USER (So first string before the ":")
And the actual hash :
"aad3b435b51404eeaad3b435b51404ee:31d6cfe0d16ae931b73c59d7e0c089c0"
I was thinking about something like this :
    import sys

    infile, outfile = sys.argv[1], sys.argv[2]

    with open(infile) as inf, open(outfile,"w") as outf:
        line_words = (line.split('::') for line in inf)
        outf.writelines(words[1].strip() + '\n' for words in line_words if len(words)>1)
But somehow only the first hash gets parsed.
Any help would be greatly appreciated!
Thanks
Answer: The problem is that you're splitting on `"::"`. Try splitting on `":"`
instead. Only the first line conforms to this format. Lines 2 and up have a
number between two `:`s, which mucks with your algorithm.
Try this instead (more verbose for readability):
    with open(infile) as inf, open(outfile,"w") as outf:
        for line in inf:
            splits = line.split(":")
            user, hash = splits[0], ":".join(splits[2:4])
            outf.write("%s %s\n" % (user, hash))  # write both the user and the hash
Hope that helps
|
Biopython class instance - output from Entrez.read: I don't know how to manipulate the output
Question: I am trying to download some XML from PubMed - no problems there, Biopython is
great. The problem is that I do not really know how to manipulate the output.
I want to put most of the parsed XML into a SQL database, but I'm not familiar
with the output. For some things I can treat the parsed XML like a dictionary,
but for others it doesn't seem that straightforward.
from Bio import Entrez
Entrez.email="[email protected]"
import sqlite3 as lite
handle=Entrez.efetch(db='pubmed',id='22737229', retmode='xml')
record = Entrez.read(handle)
If I want to find the title I can do this:
title=record[0]['MedlineCitation']['Article']['ArticleTitle']
But the type of the parsed object is a class:
>>> type(record)
<class 'Bio.Entrez.Parser.ListElement'>
>>>r=record[0]
>>>type(r)
<class 'Bio.Entrez.Parser.DictionaryElement'>
>>> r.keys()
[u'MedlineCitation', u'PubmedData']
Which makes me think there must be a much easier way of doing this than using
it as a dictionary. But when I try:
>>> r.MedlineCitation
Traceback (most recent call last):
File "<pyshell#67>", line 1, in <module>
r.MedlineCitation
AttributeError: 'DictionaryElement' object has no attribute 'MedlineCitation'
It doesn't work. I can obviously use it as a dictionary, but then I run into
problems later.
The real problem is trying to get certain information from the record when
using it like a dictionary:
>>> record[0]['MedlineCitation']['PMID']
StringElement('22737229', attributes={u'Version': u'1'})
Which means that I can't just plop (that's a technical term ;) it into my sql
database but need to convert it:
>>> t=record[0]['MedlineCitation']['PMID']
>>> t
StringElement('22737229', attributes={u'Version': u'1'})
>>> int(t)
22737229
>>> str(t)
'22737229'
All in all, I am glad for the depth of information that Entrez.read() provides,
but I am not sure how to easily use the information in the resulting class
instance. Usually you can just do things like
record.MedlineCitation
but it doesn't work.
Cheers
Wheaton
Answer: The `Entrez.read()` method is going to return you a nested data structure,
composed of `ListElement`s and `DictionaryElement`s. For more information,
check out the documentation of the [`read` method in the biopython
source](https://github.com/biopython/biopython/blob/master/Bio/Entrez/Parser.py#L11)
which I'll excerpt and paraphrase below:
def read(handle, validate=True):
This function parses an XML file created by NCBI's Entrez Utilities,
returning a multilevel data structure of Python lists and dictionaries.
...
the[se] data structure[s] seem to consist of generic Python lists,
dictionaries, strings, and so on, [but] each of these is actually a class
derived from the base type. This allows us to store the attributes
(if any) of each element in a dictionary my_element.attributes, and
the tag name in my_element.tag.
The author of the package, [Michiel de Hoon](https://github.com/mdehoon), also
spends some time at the very top of the `Parser.py` source file discussing his
[motivations for representing the XML documents using the custom
`ListElement`s and
`DictionaryElement`s](https://github.com/biopython/biopython/blob/master/Bio/Entrez/Parser.py#L11)
in `Entrez`.
If you're super curious you can also read the fascinating declarations of the
[`ListElement`](https://github.com/biopython/biopython/blob/master/Bio/Entrez/Parser.py#L74),
[`DictionaryElement`](https://github.com/biopython/biopython/blob/master/Bio/Entrez/Parser.py#L83),
and
[`StructureElement`](https://github.com/biopython/biopython/blob/master/Bio/Entrez/Parser.py#L95)
classes in the source. I'll spoil the surprise and just let you know that they
are very light wrappers around their basic Python datatypes, and behave almost
exactly the same as their underlying basic datatypes, except they have a new
property, `attributes`, which captures the XML attributes (keys and values)
for each XML node in the document that `read` is parsing.
So the basic answer to your question is that there is no "easy" way of using
dot-operator syntax to address the keys of a `DictionaryElement`. If you have
a dictionary element d, such that:
>>> d
DictElement({'first_name': 'Russell', 'last_name': 'Jones'}, attributes={'occupation': 'entertainer'})
The only built-in way you can read the `first_name` is by using the normal
python dictionary API, for instance:
>>> d['first_name']
'Russell'
>>> d.get('first_name')
'Russell'
>>> d.get('middle_name', 'No Middle Name')
'No Middle Name'
Don't lose heart, this really isn't so bad. If you want to take certain nodes
and insert them into rows of a sqlite database, you can just write small
methods that take a DictElement as input, and return something sqlite can
accept as output. If you're having trouble with this, feel free to post
another question specifically about that.
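For instance, a sketch of such an adapter (the table and columns are made up;
`plain()` leans on the fact, noted above, that the Entrez elements derive from
the built-in types):

    import sqlite3 as lite

    def plain(value):
        # collapse Entrez Dictionary/List elements into built-in types;
        # everything else (e.g. StringElement) is flattened to text
        if isinstance(value, dict):
            return dict((k, plain(v)) for k, v in value.items())
        if isinstance(value, list):
            return [plain(v) for v in value]
        return unicode(value)

    def insert_citation(conn, record):
        citation = record[0]['MedlineCitation']
        conn.execute('INSERT INTO articles (pmid, title) VALUES (?, ?)',
                     (plain(citation['PMID']),
                      plain(citation['Article']['ArticleTitle'])))
        conn.commit()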
|
Python: How to send POST request?
Question: I found this script online:
    >>> import httplib, urllib
    >>> params = urllib.urlencode({'number': 12524, 'type': 'issue', 'action': 'show'})
    >>> headers = {"Content-type": "application/x-www-form-urlencoded",
    ...            "Accept": "text/plain"}
    >>> conn = httplib.HTTPConnection("bugs.python.org")
    >>> conn.request("POST", "", params, headers)
    >>> response = conn.getresponse()
    >>> print response.status, response.reason
    302 Found
    >>> data = response.read()
    >>> data
    'Redirecting to <a href="http://bugs.python.org/issue12524">http://bugs.python.org/issue12524</a>'
    >>> conn.close()
But I don't understand how to use it with PHP, or what everything inside the
params variable is, or how to use it. Can I please have a little help with
trying to get this to work? Also, I am using Python 3.2.
Answer: If you really want to handle with HTTP using Python, I highly recommend
[Requests: HTTP for Humans](http://docs.python-
requests.org/en/latest/index.html). The [POST quickstart](http://docs.python-
requests.org/en/latest/user/quickstart/#more-complicated-post-requests)
adapted to your question is:
>>> import requests
>>> r = requests.post("http://bugs.python.org", data={'number': 12524, 'type': 'issue', 'action': 'show'})
>>> print(r.status_code, r.reason)
200 OK
>>> print(r.text[:300] + '...')
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
<title>
Issue 12524: change httplib docs POST example - Python tracker
</title>
<link rel="shortcut i...
>>>
|
Python XML parsing comparison of files
Question: I have to compare two XML files using Python. Each has a list of items, and I
have to output which items do not appear in both. Each item has various
properties which need to agree to decide whether it's the same item.
Which parser would be the most suitable? It has to be one already included in
Python 2.7. I looked at etree, but does it enable me to do what I want easily?
Or is there something else that would be more suitable? Thanks!
Answer: It depends. If your XML elements have sub-elements, and those sub-elements
also need to be compared, use DOM. If the elements to be compared only have
attributes, then using SAX is the best way. I'll post some SAX code here you
can refer to:
    import xml.sax
    from xml.sax.handler import ContentHandler

    class TableHandler(ContentHandler):
        def __init__(self):
            self.columns = {}

        def startElement(self, name, attrs):
            if name == 'R':
                for k, v in attrs.items():
                    if not self.columns.has_key(k):
                        self.columns[k] = []
                    self.columns[k].append(v)

    def xml_to_table(xml_str):
        handler = TableHandler()
        xml.sax.parseString(xml_str, handler)
        return handler.columns

    if __name__ == '__main__':
        txt = """<xml>
        <R CatalogId="8"/><R CatalogId="8"/><R CatalogId="7"/>
        </xml>
        """
        columns = xml_to_table(txt)
        print columns
|
Unindent does not match any outer indentation level?
Question: > **Possible Duplicate:**
> [IndentationError: unindent does not match any outer indentation
> level](http://stackoverflow.com/questions/492387/indentationerror-unindent-
> does-not-match-any-outer-indentation-level)
I have the following python code.
    import sys

    ins = open( sys.argv[1], "r" )
    array = []
    for line in ins:
        s = line.split()
        array.append( s[0] ) # <-- Error here
    print array
    ins.close()
The python interpreter complains
File "sort.py", line 7
array.append( s[0] )
^
IndentationError: unindent does not match any outer indentation level
Why so? And how to correct this error?
Answer: You are mixing tabs and spaces (happens sometimes :). Use one or the other.
I looked at your source:
s = line.split() # there's a tab at the start of the line
array.append( s[0] ) # spaces at the start of the line
* * *
Aside: As just a friendly suggestion, consider using `with` to open your file.
The advantage is that the file will be automatically closed for you (no
`close()` needed) when you are done or an exception is encountered.
    array = []
    with open( sys.argv[1], "r" ) as ins: # "r" really not needed, it's the default.
        for line in ins:
            s = line.split()
            # etc...
|
Store information into .exe file, exported from python
Question: I have to generate an executable (.exe) file from my Python program. I would
like to store information in a persistent way within this .exe file itself.
Normally I would pickle it into an external file; however, for me it is
important that the information is stored in the .exe file itself and not
externally.
Thanks in advance!
Answer: **If you want read-write data:**
Don't do this. An executable changing itself isn't guaranteed to work. Some
executables write data at the end of the file (in theory) but you don't know:
* whether antivirus software will pick this behaviour up as part of behavioural analysis
* whether the executable is actually writable from the executable process
* whether data you write might become executable in theory and result in a security exploit
* whether you'll want to update a new release to the code next week, which will replace the executable file and lose the data
[Nearly] all software is able to get by with 'normal' file storage (i.e. in a
user / application data directory).
**If you just want read-only data:**
Fine, no problem. Write a Python file with the data in it, as a variable in a
module. You can write a python file as part of your build process.
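A sketch of that build step (file and variable names are illustrative):

    # build_data.py -- run at build time, before freezing the .exe
    settings = {'registered_to': 'example user', 'build': 42}

    with open('embedded_data.py', 'w') as f:
        f.write('# generated file -- do not edit\n')
        f.write('DATA = %r\n' % (settings,))

The frozen program then reads it back with a plain `from embedded_data import
DATA`; the data travels inside the .exe because the module is bundled like any
other.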
|
FANN Error 11: Unable to allocate memory
Question: In the Python implementation of FANN, I got this error from
from pyfann import libfann
ann = libfann.neural_net()
ann.create_standard(4, 2, 8, 9, 1)
#FANN Error 11: Unable to allocate memory.
Any suggestion?
Answer: There is a bug in create_standard and the other simple creates. The workaround
is
ann.create_standard_array([2,8,9,1])
same for `create_sparse` and `create_shortcut`.
|
How can you compute percentiles and ranks with a generator on a single pass?
Question: Building off an earlier question: [Computing stats on generators in single
pass. Python](http://stackoverflow.com/questions/11308146/computing-stats-on-
generators-in-single-pass-python)
As I mentioned before, computing statistics from a generator in a single pass
is extremely fast and memory efficient. Complex statistics and rank attributes
like the 90th percentile and the nth smallest often need more complex work
than standard deviation and averages (solved in the above). These approaches
become very important when working with map/reduce jobs and large datasets
where putting the data into a list or computing multiple passes becomes very
slow.
The following is an O(n) quicksort style algorithm for looking up data based
on rank order. Useful for finding medians, percentiles, quartiles, and
deciles. Equivalent to data[n] when the data is already sorted. But needs all
the data in a list that can be split/pivoted.
**How can you compute medians, percentiles, quartiles, and deciles with a
generator on a single pass?**
The Quicksort style algorithm that needs a complete list
    import random

    def select(data, n):
        "Find the nth rank ordered element (the least value has rank 0)."
        data = list(data)
        if not 0 <= n < len(data):
            raise ValueError('not enough elements for the given rank')
        while True:
            pivot = random.choice(data)
            pcount = 0
            under, over = [], []
            uappend, oappend = under.append, over.append
            for elem in data:
                if elem < pivot:
                    uappend(elem)
                elif elem > pivot:
                    oappend(elem)
                else:
                    pcount += 1
            if n < len(under):
                data = under
            elif n < len(under) + pcount:
                return pivot
            else:
                data = over
                n -= len(under) + pcount
Answer: You will need to store large parts of the data. Up to the point where it may
just pay off to store it completely. Unless you are willing to accept an
approximate algorithm (which may be very reasonable when you know your data is
independent).
Suppose you need to find the median of the following data set:
0 1 2 3 4 5 6 7 8 9 -1 -2 -3 -4 -5 -6 -7 -8 -9
The median is obviously `0`. However, if you have seen only the first 10
elements, it is your worst guess at that time! So in order to find the median
of an n element stream, you need to keep at least `n/2` candidate elements in
memory. And if you do not know the total size `n`, you need to keep all!
Here are the medians for every odd-sized situation:
0 _ 1 _ 2 _ 3 _ 4 _ 4 _ 3 _ 2 _ 1 _ 0
While they were never candidates, you also need to remember the elements 5 - 9:
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18
yields the median `9`. For every element in a series of size n I can find a
continued series of size O(2*n) that has this element as median. But
obviously, these series are not random / independent.
See ["On-line" (iterator) algorithms for estimating statistical median, mode,
skewness, kurtosis?](http://stackoverflow.com/questions/1058813/on-line-
iterator-algorithms-for-estimating-statistical-median-mode-skewnes) for an
overview of related methods.
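If the approximate route is acceptable (again assuming independent data), one
classic single-pass sketch is to keep a fixed-size uniform sample via reservoir
sampling and read rank statistics off the sample:

    import random

    def reservoir_sample(iterable, k):
        # uniform random sample of size k from a stream, in one pass
        sample = []
        for n, item in enumerate(iterable):
            if n < k:
                sample.append(item)
            else:
                r = random.randint(0, n)
                if r < k:
                    sample[r] = item
        return sample

    def approx_percentile(iterable, fraction, k=1000):
        "Estimate e.g. the 90th percentile with fraction=0.9; exact when n <= k."
        sample = sorted(reservoir_sample(iterable, k))
        return sample[int(fraction * (len(sample) - 1))]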
|
Return reoccuring regex matches with python
Question: I have a string:
    SomeTextSomeTextASomeThingBSomeTextSomeTextASomeThingElseBSomeText

I want the strings SomeThing and SomeThingElse returned, because they are
bracketed with A and B, assuming SomeText does not contain any A ... B
occurrences.
Any hint would be highly appreciated.
Here's what I tried, but it doesn't work:
import re
string = 'SomeTextSomeTextASomeThingBSomeTextSomeTextASomeThingElseBSomeText'
regex='(A.*B)'
I guess the regex isn't correct, and I don't know how to access the matches
either. Is it `match` or `finditer`, or…?
Answer: Try using `re.findall`:
>>> print re.findall('A(.*?)B', s)
['SomeThing', 'SomeThingElse']
See it working online: [ideone](http://ideone.com/20QrS)
Note the question mark. Without it the matching is done greedily - it will
consume as many characters as possible.
|
Importing another module from another subdirectory of the current directory's parent directory (python)
Question: I'm attempting to write a game. I therefore have lots of different types of
code and want to arrange them in a useful hierarchy.
I've looked at solutions that involve placing `__init__.py` in each folder but
I'm still somewhat confused, though not as much as the python interpreter.
Now suppose `resource1.py` wants to import a function from `physics1.py`, or
indeed any other `.py` file in the `Game` directory, how would I go about
doing so?
I've tried `from bin.physics.physics1 import function` but obviously that
doesn't work.
Thanks for your help.
/Game
launcher.py
/bin
game.py
__init__.py
/physics
__init__.py
physics1.py
physics2.py
/resources
__init__.py
resource1.py
Answer: It is not possible with the normal import mechanism unless you make `Game` a
package (i.e., by putting an `__init__.py` inside the `Game` directory). The
python relative import system only works _within packages_. It is not a
general system for referring to arbitrary modules by their location in the
directory structure. If you make Game a package, then from inside `resources` you could do `from ..physics.physics1 import function` (one level up to `bin`, then down into `physics`).
Edit: Note also that relative imports don't work from a script executed as the
main program. If you try to run `resource.py` directly and it uses relative
imports, you'll get a "relative import attempted in non-package" error. It
will work if you import resource from another module. This is because the
relative import system is based on the "name" of the executing module, and
when you run a script directly its name is `__main__` instead of whatever it
would usually be named. It's possible to get around this using [the
`__package__` keyword](http://www.python.org/dev/peps/pep-0366/) if you really
need to, but it can be a bit tricky.
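To make the caveat concrete, here is a minimal sketch; it assumes every directory shown, including `Game`, now contains an `__init__.py`:

# Game/bin/resources/resource1.py
from ..physics.physics1 import function  # up one level to bin, then into physics

if __name__ == '__main__':
    function()

Run it with the package name intact, e.g. `python -m bin.resources.resource1` from inside the `Game` directory; invoking `python resource1.py` directly raises the "relative import attempted in non-package" error described above.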
|
appcfg.py is not running with cmd prompt (Windows 7)
Question: I am having a strange problem. I used to run appcfg.py to update my app to appengine, but now it's not working anymore. When I run this command
C:\Program Files (x86)\Google\google_appengine>appcfg.py update E:\path\myApp\
It's not giving me anything, no error, no feedback. It's just back to this line:
C:\Program Files (x86)\Google\google_appengine>
Any idea about this issue?
P.S. I'm using Python 2.7. My code is updating through google app launcher but
I need to run it from cmd prompt as I will be downloading/uploading appengine
stuff which launcher doesn't allow me to do!
Thanks
Answer: I still had a little trouble with the instructions. Here's the steps I used to
create a batch file to use the Google App Engine download_app with Windows 7.
In my example,
* I'll use drive, **C:**
* Default python path will be, **C:\Python27\python.exe**
* GAE path (include "), **"C:\Program Files (x86)\Google\google_appengine\appcfg.py"**
* App ID **{your_app_id}** will be just your app-id name
* **{your_app_version}** is the "Version:" number in your GAE app.yaml file
* I'll create a folder on my Windows 7 desktop and call it **GAE_App**
* The desktop folder path will be **C:\Users\{username}\Desktop\GAE_App**
* **{your email}** will be the Gmail account connected to your GAE
Right-click on the desktop and select New->Text Document. Open the newly
created text document and add the following line of code modified for your
setup to the blank text document,
C:\Python27\python.exe "C:\Program Files (x86)\Google\google_appengine\appcfg.py" download_app -A {your_app_id} -V {your_app_version} --email={your email} C:\Users\{username}\Desktop\GAE_App
Save the text document and change the file name to **GAE_Download.bat**
It's important that the extension now says ".bat" instead of ".txt"
Once finished, click on the batch to execute it. You'll be asked for a
password and then the files should download into the **GAE_App** folder on
your desktop. Now, whenever you need to download your files, just click on the
batch file.
Not sure why Google didn't simply include this feature with their GAE for
Windows software.
|
How to get integer values from a string in Python?
Question: Suppose I had a string
string1 = "498results should get"
Now I need to get only the integer values from the string, like `498`. Here I don't want to use `list slicing` because the integer part may grow longer, as in these examples:
string2 = "49867results should get"
string3 = "497543results should get"
So I want to get only integer values out from the string exactly in the same
order. I mean like `498,49867,497543` from `string1,string2,string3`
respectively.
Can anyone let me know how to do this in one or two lines?
Answer:
>>> import re
>>> string1 = "498results should get"
>>> int(re.search(r'\d+', string1).group())
498
If there are multiple integers in the string:
>>> map(int, re.findall(r'\d+', string1))
[498]
|
What are advantages and disadvantages of abstracting our database behind stored procedures?
Question: At work we currently have a PostgreSQL database and we access it via some Perl
bindings to access the database and marshal responses to Perl types. This
works OK, but for various reasons we are becoming unhappy with Perl. One
option we've been considering is to move the majority of the work in this API
to the database itself as `plpgsql` stored procedures.
# Brief Example
For example, we might have the following in the database:
-- This matches our 'Entity::Artist' object
CREATE TYPE loaded_artist (
artist_id uuid,
revision_id integer,
artist_tree_id integer,
name text,
sort_name text,
artist_type_id integer,
-- etc
);
-- This gets the latest 'master' version of an artist and joins in basic data
-- from the artist tree
CREATE FUNCTION get_latest_artist_by_mbid(in_mbid UUID)
RETURNS SETOF loaded_artist AS $$
BEGIN
RETURN QUERY
SELECT
artist_id, revision_id, artist_tree_id, name.name,
sort_name.name AS sort_name, artist_type_id
FROM artist
JOIN artist_revision USING (artist_id)
JOIN artist_tree USING (artist_tree_id)
JOIN artist_data USING (artist_data_id)
WHERE artist.master_revision_id = revision_id
AND artist_id = in_mbid;
END;
$$ LANGUAGE 'plpgsql';
Now our current Perl API can be simplified to effectively the following:

# And in Perl
package Data::Artist;
sub get_latest_by_mbid {
my ($self, $mbid) = @_;
return $self->new_from_row(
$self->sql->select_single_row_hash(
'SELECT * FROM get_latest_artist_by_mbid(?)',
$mbid));
}
# Is this sensible?
On face value, I like this. We:
* Move away from Perl, but don't commit to another language. This means we can move our actual application to Python/whatever in the future and the majority of our API is already done.
* Get extra type safety from PostgreSQL due to specifying things like `RETURNS SETOF loaded_artist`
* Still have unit tests and stuff via PGTAP.
There are a few disadvantages:
* Potentially slower development cycle, as we now have to replace functions in the database. Not the end of the world, but this effectively introduces a 'compile' step into our workflow that was not previously there.
* Potentially more difficult version control, but there are certainly ways of doing it
Has anyone done work like this? Would you encourage it, or was it fraught with
peril?
* * *
# Footnote: A little more about our case
This is for an open source website. We distribute dumps of our database for
people to import into PostgreSQL databases. We have no plans to move away from
PG any time soon, so database agnostic decisions don't really apply to us. We
are a very small team (2 paid developers, more open source contributors) and
this lets us be quite flexible in terms of deployment strategies.
Answer: Advantages:
* database schema / layout / storage changes are completely hidden from the application;
* you have a unified API to work with the database;
* you can have extensive logging on all actions done in the database, including all `SELECT` queries.
Disadvantages:
* an increased need for good DBAs;
* an increased need for database developers with a good understanding of how the database works with data and how DB-side procedures work;
* more coordination will be required between DB-side and Application-side teams;
* difficulties with ORM integration;
* using stored procedures limits the optimization possibilities of the database, and some queries (especially reports) will give performance problems; it is better to use views instead, as the optimizer can push predicates down into the view and utilize indexes properly.
The best combination is when you implement a good deal of your business logic on the database side, and not only wrapper functions.
Schema version control is possible. It is trickier to version the data in the configuration tables. In one of the projects I'm involved in, this is done via an external tool (Perl-based) that handles this part for us:
* data is loaded / extracted into intermediate tables first;
* then it is analyzed for RI constraints and all possible violations;
* data manipulations are possible before loading it into live tables;
* business objects that span multiple tables can be defined and extracted in one go;
* several ways of treating matching entities exist, like: overwrite, merge, duplicate.
We're versioning the extract files instead (which are plain SQL) and have a special step in the installation script to load the new configuration.
|
accessing files in a folder using python
Question: I have a python script that runs a program, which generates a few .exe files and puts them in a folder. I want to access these exe files to do further testing. The problem is that the folder name is not static: it depends on the OS, compiler, and binary type (64/32-bit), so I can't hard-code the directory.

So what is the best way to access folders like this? I tried looking this up but I haven't got a clue as to how to approach this.
Answer: Just pass the folder name as a parameter to your python script:
python myscript.py FolderName
In `myscript.py`:
import sys
print sys.argv[1]
`sys.argv` gives you all the parameters.
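If passing the folder name in isn't an option, you could instead search for the executables. A sketch, assuming the .exe files sit one directory level below a known root (`output_root` here is a hypothetical placeholder):

import glob
import os

# Match every .exe one level under the root, whatever the generated folder is named.
for exe_path in glob.glob(os.path.join('output_root', '*', '*.exe')):
    print exe_path  # run your further testing on each executable here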
|
Biopython -- reading a fixed number of seq_records at a time
Question: I built some code that retrieves PHRED scores from a fastq file, puts them all
into a single list, and then passes the list on to another function. It looks
like so:
def PHRED_get():
temp_scores = []
all_scores = []
fastq_location
print("Building PHRED score bins...")
for seq_record in SeqIO.parse(fastq_location, "fastq"):
temp_scores = seq_record.letter_annotations
temp_scores = temp_scores['phred_quality']
all_scores.append(temp_scores)
all_scores = list(itertools.chain(*all_scores))
score_bin_maker(all_scores)
The problem is that this loop will continue until all seq_records have been
searched and corresponding PHRED scores retrieved. In order to be more RAM
conservative, I'd like to have some code that reads a smaller number of
seq_records at a time (say, 100), and then pops their respective quality
scores onto my ongoing uberlist. It would then go grab info from the next 100
seq_records and do the loop again. I'm having trouble understanding how to get
this done. Any ideas?
Answer: **Simple**: Keep a counter and when it reaches 100, break from the loop. Or
some other early halt condition like `if len(temp_scores) > 1000: break` would
work too.
**Elegant**: Use itertools.islice to take just the first 100 records from
the iterator,
import itertools
def PHRED_get():
temp_scores = []
all_scores = []
fastq_location
print("Building PHRED score bins...")
for seq_record in itertools.islice(SeqIO.parse(fastq_location, "fastq"), 100):
temp_scores = seq_record.letter_annotations
temp_scores = temp_scores['phred_quality']
all_scores.append(temp_scores)
all_scores = list(itertools.chain(*all_scores))
score_bin_maker(all_scores)
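The `islice` call above takes only the first 100 records. To walk the whole file 100 records at a time, which is closer to what the question asks, `islice` can be applied repeatedly to the same iterator. A sketch (the chunked generator is an illustration, not part of the original answer):

import itertools
from Bio import SeqIO

def phred_chunks(fastq_location, chunk_size=100):
    records = SeqIO.parse(fastq_location, "fastq")
    while True:
        chunk = list(itertools.islice(records, chunk_size))
        if not chunk:
            break  # the iterator is exhausted
        # Only chunk_size records' worth of scores are in memory per pass.
        yield [s for rec in chunk
                 for s in rec.letter_annotations['phred_quality']]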
|
Python/SciPy version of Excel FInv function
Question: Hopefully an easy one. Can anyone point me to the SciPy function that will
calculate a right-tailed F Probability Distribution?
Like Excel's `=FINV(0.2, 1, 2)` that results in `3.555555556`. Thanks, Scott
Answer:
import scipy.stats
print scipy.stats.f.isf(0.2, 1, 2) # => 3.5555555555555576
|
Debugging Python ctypes segmentation fault
Question: I am trying to port some Python ctypes code from a Windows-specific program to
link with a Linux port of my library. The shortest Python code sample that
describes my problem is shown below. When I try to execute it, I receive a
segmentation fault in examine_arguments() in Python. I placed a printf
statement in my library at the crashing function call, but it is never
executed, which leads me to think the problem is in the ctypes code.
import ctypes
avidll = ctypes.CDLL("libavxsynth.so")
class AVS_Value(ctypes.Structure, object):
def __init__(self, val=None):
self.type=ctypes.c_short(105) # 'i'
self.array_size = 5
self.d.i = 99
class U(ctypes.Union):
_fields_ = [("c", ctypes.c_void_p),
("b", ctypes.c_long),
("i", ctypes.c_int),
("f", ctypes.c_float),
("s", ctypes.c_char_p),
("a", ctypes.POINTER(AVS_Value))]
AVS_Value._fields_ = [("type", ctypes.c_short),
("array_size", ctypes.c_short),
("d", U)]
avs_create_script_environment = avidll.avs_create_script_environment
avs_create_script_environment.restype = ctypes.c_void_p
avs_create_script_environment.argtypes = [ctypes.c_int]
avs_set_var = avidll.avs_set_var
avs_set_var.restype = ctypes.c_int
avs_set_var.argtypes = [ctypes.c_void_p, ctypes.c_char_p, AVS_Value]
env = avs_create_script_environment(2)
val = AVS_Value()
res = avs_set_var(env, b'test', val)
My library has the following in its headers, and a plain-C program doing what
I describe above (calling create_script_environment followed by set_var) runs
fine. Looking at logging information my library is putting onto the console,
the crash happens when I try to enter avs_set_var.
typedef struct AVS_ScriptEnvironment AVS_ScriptEnvironment;
typedef struct AVS_Value AVS_Value;
struct AVS_Value {
short type; // 'a'rray, 'c'lip, 'b'ool, 'i'nt, 'f'loat, 's'tring, 'v'oid, or 'l'ong
// for some function e'rror
short array_size;
union {
void * clip; // do not use directly, use avs_take_clip
char boolean;
int integer;
float floating_pt;
const char * string;
const AVS_Value * array;
} d;
};
AVS_ScriptEnvironment * avs_create_script_environment(int version);
int avs_set_var(AVS_ScriptEnvironment *, const char* name, AVS_Value val);
I tried backtracing the call from GDB, but I don't understand how to interpret
the results nor really much about using GDB.
#0 0x00007ffff61d6490 in examine_argument () from /usr/lib/python2.7/lib-dynload/_ctypes.so
#1 0x00007ffff61d65ba in ffi_prep_cif_machdep () from /usr/lib/python2.7/lib-dynload/_ctypes.so
#2 0x00007ffff61d3447 in ffi_prep_cif () from /usr/lib/python2.7/lib-dynload/_ctypes.so
#3 0x00007ffff61c7275 in _ctypes_callproc () from /usr/lib/python2.7/lib-dynload/_ctypes.so
#4 0x00007ffff61c7aa2 in PyCFuncPtr_call.2798 () from /usr/lib/python2.7/lib-dynload/_ctypes.so
#5 0x00000000004c7c76 in PyObject_Call ()
#6 0x000000000042aa4a in PyEval_EvalFrameEx ()
#7 0x00000000004317f2 in PyEval_EvalCodeEx ()
#8 0x000000000054b171 in PyRun_FileExFlags ()
#9 0x000000000054b7d8 in PyRun_SimpleFileExFlags ()
#10 0x000000000054c5d6 in Py_Main ()
#11 0x00007ffff68e576d in __libc_start_main () from /lib/x86_64-linux-gnu/libc.so.6
#12 0x000000000041b931 in _start ()
I'm at a loss as to how to approach this problem. I've looked at the details
of the calling types, but I don't see anything obviously incorrect there. Am I
falling into any platform-specific usages of types?
**Edit** It seems there's a problem with 32-bit vs 64-bit architectures in the
ctypes module. When I tested this again with a 32-bit build of my library and
32-bit Python, it ran successfully. On 64-bit, it segfaults at the same place.
Answer: Try using `c_void_p` for the opaque `AVS_ScriptEnvironment*`:
avs_create_script_environment.restype = ctypes.c_void_p
and:
avs_set_var.argtypes = [ctypes.c_void_p, ctypes.c_char_p, AVS_Value]
|
Changing a file line - Python
Question: I've a file entitled **'users.txt'** with the following structure;
`username:info_about_the_user.`
Something like this:
**users.txt:**
> mark:stuffabouthim
> anthony:stuffabouthim
> peter:stuffabouthim
> peterpeter:stuffabouthim
> peterpeterpeter:stuffabouthim
> peterpeterpeterpeter:stuffabouthim
>
The following part of the script needs to change a line (change the info about a user), but I'm having problems when one username is a prefix of another (peter - peterpeter) and I don't know how to fix it.
def test():
fn = 'users.txt'
f = open(fn)
output = []
changeuser = 'peterpeter'
userinfo = 'HeIsTall'
for line in f:
if not changeuser+":" in line:
output.append(line)
f.close()
f = open(fn, 'w')
f.writelines(output)
f.close()
f = open("users.txt", "a")
f.write(changeuser + ":" + userinfo+"\n")
f = open("users.txt", "a")
test()
This is the input I have:
**Input: users.txt:**
> mark:stuffabouthim
> anthony:stuffabouthim
> peter:stuffabouthim
> **peterpeter:HesAwesome**
> peterpeterpeter:stuffabouthim
> peterpeterpeterpeter:stuffabouthim
>
I want to change info about peterpeter and have the following output:
**Output I want to have: users.txt:**
> mark:stuffabouthim
> anthony:stuff about him
> peter:stuffabouthim
> **peterpeter:HeIsTall**
> peterpeterpeter:stuffabouthim
> peterpeterpeterpeter:stuffabouthim
>
But this is the output I'm actually getting. All the lines after peterpeter are being deleted, among other things.
> mark:stuffabouthim
> anthony:stuffabouthim
> peter:stuffabouthim
> peterpeter:HeIsTall
>
Can anyone help me fix the code above so I get the desired output? Thanks.
Answer: You can have it the easy way with the
[`fileinput`](http://docs.python.org/library/fileinput.html) module:
import fileinput
def test():
fn = 'users.txt'
changeuser = 'peterpeter'
newinfo = 'HeIsTall'
for line in fileinput.input(fn, inplace=1):
user, oldinfo = line.split(':')
print '%s:%s' % (user, newinfo if user == changeuser else oldinfo.replace('\n', ''))
if __name__ == "__main__":
test()
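For reference: `inplace=1` redirects standard output into the file being processed, so each `print` statement writes the line (rewritten or not) back in place.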
|
Convert byte string to string in python
Question: I'm using PyCrypto, and python 2.7.3. I'm attempting to prepend a regular
string to the hash to create a chained hash, but to keep formats consistent, I
need the string s in the 'printed' form instead of the binary form. Is there
any way to convert the binary string into a manipulable "normal" string?
from Crypto.Hash import SHA256
h = SHA256.new()
s = h.digest() #return binary "non-printable" digest
s
>>>"\xe3\xb0\xc4B\x98\xfc\x1c\x14\x9a\xfb\xf4\xc8\x99o\xb9$'\xaeA\xe4d\x9b\x93L\xa4\x95\x99\x1bxR\xb8U"
print(s)
>>> ã°ÄB˜üšûôÈ™o¹$'®Aäd›“L¤•™xR¸U
Thanks for any help
Answer: What you see when entering `s` in the interactive interpreter is the
_representation_ of the string. You shouldn't be concerned about what this
looks like; the actual string content is what gets printed when you use print. There is no way to "convert" the string to what is printed when using `print`, since the string already _has_ that content.
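If the goal is a printable representation of the digest (for example, to prepend strings and re-hash), the usual approach is a hex encoding. A sketch using calls that exist in PyCrypto and the standard library:

from Crypto.Hash import SHA256
import binascii

h = SHA256.new()
hex_digest = h.hexdigest()                 # printable hex string of the digest
same_thing = binascii.hexlify(h.digest())  # equivalent, via the stdlib

# Chain using the printable form:
chained = SHA256.new('some prefix' + hex_digest)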
|
How to dispatch requests for one URL to multiple threads?
Question: This is a test application:
#!/usr/bin/env python
from flask import Flask
from time import sleep
application = Flask(__name__)
application.debug = True
@application.route('/a')
@application.route('/b')
@application.route('/c')
def a():
sleep(10)
return 'Hello world.'
if __name__ == '__main__':
application.run()
This application is deployed on Apache:
WSGIDaemonProcess Test processes=2 threads=15 display-name=%{GROUP}
WSGIProcessGroup Test
if you
* request /a at 00:00
* request /b at 00:01
* request /c at 00:02
, you will
* receive response from /a at 00:10
* receive response from /b at 00:11
* receive response from /c at 00:12
But if you
* request /a at 00:00
* request /a at 00:01
* request /a at 00:02
, you will
* receive response from /a at 00:10
* receive response from /a at 00:20
* receive response from /a at 00:30
So I guess that every request for one URL is processed in one thread. Now I intend to develop a long-polling server, and I think I need to dispatch every request to an independent thread to avoid blocking subsequent requests. What should I do?
Answer: This problem has been solved. My guess was incorrect; the truth is:

In the Chrome browser, if two requests share the same protocol, hostname, port, and path, the later request is not sent until the earlier one completes. The blocking happened in the browser, not on the server.
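As a side note for anyone who really does need concurrency rather than a browser-side fix: the Apache/mod_wsgi configuration above already provides threads=15, but Flask's built-in development server handles one request at a time by default. A sketch using Werkzeug's documented `threaded` flag:

if __name__ == '__main__':
    application.run(threaded=True)  # each request gets its own thread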
|
Python/POpen/gpg: Supply passphrase and encryption text both through stdin or file descriptor
Question: I'm trying to remote control gpg through a python program via `POpen`.
I have a file that contains encrypted data which I want to decrypt, modify and
write back to disk re-encrypted.
Currently I am storing the decrypted information in a temporary file (which I
`shred` when the program ends). Then I perform my modifications to that file
and then re-encrypt it using a function, which pipes the passphrase through
`stdin`.
The code for this is as follows:
def encrypt(source, dest, passphrase, cipher=None):
"""Encrypts the source file.
@param source Source file, that should be encrypted.
@param dest Destination file.
@param passphrase Passphrase to be used.
@param cipher Cipher to use. If None or empty string gpg's default cipher is
used.
"""
phraseecho = Popen(("echo", passphrase), stdout=subprocess.PIPE)
gpgargs = [
"gpg",
"-c",
"--passphrase-fd", "0", # read passphrase from stdin
"--output", dest,
"--batch",
"--force-mdc"]
if not cipher is None and len(cipher) > 0:
gpgargs.extend(("--cipher-algo", cipher))
gpgargs.append(source)
encrypter = Popen(
gpgargs,
stdin=phraseecho.stdout,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
stdout, stderr = encrypter.communicate()
rc = encrypter.returncode
if not rc == 0:
raise RuntimeError(
"Calling gpg failed with return code %d: %s" % (rc, stderr))
This works perfectly well, but I'm fairly sure that storing potentially sensitive, decrypted data in a temporary file is a rather big security flaw.
So I want to rewrite my encryption/decryption functions in a way that enables them to work completely in memory, without storing sensitive data on disk.

Decryption is straightforward: pipe the passphrase via `stdin` and capture `stdout` for the decrypted data.
Encryption, on the other hand, drives me mad, since I can't just pipe the passphrase AND the message to `stdin`... at least
encrypter.stdin.write("%s\n%s" % (passphrase, message))
didn't work.
My next best guess is to supply the file descriptor of some kind of in-memory file/pipe/socket or whatever as the `--passphrase-fd` argument. The thing is: I don't know if there even is such a thing as in-memory files, or whether sockets would apply, since I have never used them.
Can anybody help out or point me to a better solution for my problem?
The solution does not have to be portable - I'm totally fine with Linux-only approaches.
Thanks in advance...
Edit:
Thanks a lot to both of you, Lars and ryran. Both solutions work perfectly!
Unfortunately I can only accept one
Answer: Chris: Since you have a simple-ish example of using os.pipe thanks to Lars,
I'll offer what [Pyrite](https://github.com/ryran/pyrite/) (my GTK frontend
for gpg) does as well in the hope that more code examples are better. My use
case is a little more complicated than yours due to the gui aspect -- I
actually use a dictionary for input and output, and I have code to launch gpg
with stdin as input and code that launches it with files as input, among other
complications.
That warning said, I start with the gpg commandline in a list just like you
do; however, instead of using `--passphrase-fd 0`, I create a custom file
descriptor via `os.pipe()` to send the passphrase before loading the `Popen()`
instance, which has `stdin=subprocess.PIPE` for the input data. Following are
some relevant (modified) excerpts from pyrite's
[crypt_interface](https://github.com/ryran/pyrite/blob/master/modules/crypt_interface.py)
module.
#!/usr/bin/env python
# Adapted excerpts from Pyrite <http://github.com/ryran/pyrite>
from subprocess import Popen, PIPE, check_output
...
# I/O dictionary obj
self.io = dict(
stdin='', # Stores input text for subprocess
stdout='', # Stores stdout stream from subprocess
stderr=0, # Stores tuple of r/w file descriptors for stderr stream
gstatus=0, # Stores tuple of r/w file descriptors for gpg-status stream
infile=0, # Input filename for subprocess
outfile=0) # Output filename for subprocess
...
cmd = ['gpg']
fd_pwd_R, fd_pwd_W = os.pipe()
os.write(fd_pwd_W, passwd)
os.close(fd_pwd_W)
cmd.append('--passphrase-fd')
cmd.append(str(fd_pwd_R))
...
# If working direct with files, setup our Popen instance with no stdin
if self.io['infile']:
    self.childprocess = Popen(cmd, stdout=PIPE, stderr=self.io['stderr'][1])
# Otherwise, only difference for Popen is we need the stdin pipe
else:
    self.childprocess = Popen(cmd, stdin=PIPE, stdout=PIPE, stderr=self.io['stderr'][1])
# Time to communicate! Save output for later
self.io['stdout'] = self.childprocess.communicate(input=self.io['stdin'])[0]
# Clear stdin from our dictionary asap, in case it's huge
self.io['stdin'] = ''
# Close os file descriptors
if fd_pwd_R:
os.close(fd_pwd_R)
time.sleep(0.1) # Sleep a bit to ensure everything gets read
os.close(self.io['stderr'][1])
if self.io['gstatus']:
    os.close(self.io['gstatus'][1])
...
The function that calls all that waits until the `self.childprocess` object
has a `returncode` attribute and assuming the returncode was `0` and that the
input was text (and not a file), it then reads gpg's stdout from that
dictionary and prints it to the screen.
Happy to answer questions or try to help from my limited experience. Can find
my contact info by following links.
Edit: You might also find
[a4crypt](https://github.com/ryran/b19scripts/blob/master/a4crypt.py)
instructive as it is a much simpler frontend for gpg -- it was the project I
started in order to learn python, and later mothballed after I "completed" (if
there is such a thing) pyrite.
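Pulling the pieces together for the original `encrypt` use case, here is a minimal in-memory sketch. It assumes gpg reads the plaintext from stdin when no input file is given, and that the inherited pipe descriptor stays open in the child (true for Python 2.7's default `close_fds=False`):

import os
from subprocess import Popen, PIPE

def encrypt_in_memory(message, passphrase, cipher=None):
    """Encrypt message without touching disk: passphrase via os.pipe, data via stdin."""
    fd_r, fd_w = os.pipe()
    os.write(fd_w, passphrase)
    os.close(fd_w)  # close the write end so gpg sees EOF on the passphrase fd
    gpgargs = ["gpg", "-c", "--batch", "--force-mdc",
               "--passphrase-fd", str(fd_r)]
    if cipher:
        gpgargs.extend(("--cipher-algo", cipher))
    encrypter = Popen(gpgargs, stdin=PIPE, stdout=PIPE, stderr=PIPE)
    ciphertext, stderr = encrypter.communicate(message)
    os.close(fd_r)  # the parent's copy of the read end
    if encrypter.returncode != 0:
        raise RuntimeError("Calling gpg failed: %s" % stderr)
    return ciphertext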
|
CX_Freeze import error on Windows and ZMQ
Question: I have a python program that uses ZMQ. I want to freeze it so everyone can use it as an executable. This is my setup.py:
import sys
from cx_Freeze import setup, Executable
includes = ["sip", "re", "zmq", "PyQt4.QtCore", "atexit", "zmq.utils.strtypes", "zmq.utils.jsonapi", "encodings.hex_codec"]
base = None
if sys.platform == "win32":
base = "Win32GUI"
setup (
name = "prueba",
version = "0.1",
description = "Esto es una prueba",
options = {"build_exe" : {"includes" : includes }},
executables = [Executable("Cliente.py", base = base)])
When I run this on Linux it works perfectly and my program runs OK, but when I do so on Windows I get the following error when I execute the .exe file:
from zmq.core import (constants, error, message, context,
File "ExtensionLoader_zmq_core_error.py", line 12, in <module>
ImportError: DLL load failed: The specified module cannot be found
Also, while cx_Freeze is working I can see the following lines:

Missing modules:
? zmq.core.Context imported from zmq.devices.basedevice
? zmq.core.FORWARDER imported from zmq.devices.monitoredqueuedevice
? zmq.core.QUEUE imported from zmq.devices.monitoredqueuedevice
? zmq.core.ZMQError imported from zmq.devices.monitoredqueuedevice
I've been trying to figure out this problem for an hour or two; it seems it may be related to a DLL that should be imported but isn't, some DLL that ZMQ needs to work, but I cannot find which one it is.
Answer: Fixed by adding:
`['zmq','zmq.utils.garbage','zmq.backend.cython']`
To the packages, then renaming `zmq.libzmq.pyd` to `libzmq.pyd`
|
Signing data in Android and Verifying it in python
Question: I have written the following code to sign data in Android:
import java.security.KeyFactory;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PrivateKey;
import java.security.PublicKey;
import java.security.Signature;
import java.security.spec.RSAPublicKeySpec;
import android.app.Activity;
import android.os.Bundle;
public class TestActivity extends Activity {
/** Called when the activity is first created. */
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.main);
try{
String m ="This is my message";
System.out.println(m);
KeyPairGenerator keyPairGen = KeyPairGenerator.getInstance("RSA");
keyPairGen.initialize(1024);
KeyPair kp = keyPairGen.generateKeyPair();
PrivateKey priKey = kp.getPrivate();
PublicKey pubKey = kp.getPublic();
KeyFactory keyFactory = KeyFactory.getInstance("RSA");
RSAPublicKeySpec publicKeySpec = keyFactory.getKeySpec(pubKey, RSAPublicKeySpec.class);
System.out.println("WITH toString: ");
System.out.println("Mod :" + publicKeySpec.getModulus().toString());
System.out.println("Exp :" + publicKeySpec.getPublicExponent().toString());
System.out.println("PublicKey:" + pubKey.toString());
System.out.println("PublicKey:" + pubKey);
System.out.println("PublicKey Base64:" +MyBase64.encode(pubKey.getEncoded()));
Signature instance = Signature.getInstance("SHA1withRSA");
instance.initSign(priKey);
instance.update(m.getBytes());
byte[] signature = instance.sign();
System.out.println("Signature: " + MyBase64.encode(signature));
}catch(Exception e){
e.printStackTrace();
}
}
}
I am copy-pasting values from adb logcat into Python, and I am verifying them in Python using:
import base64
from Crypto.PublicKey import RSA
from Crypto.Hash import SHA
from Crypto.Signature import PKCS1_v1_5

mod=#I paste mod here
exp=#I paste exp here
signature=#I paste signature here
message="This is my message"
publicKey = RSA.construct((mod,exp))
print 'PublicKey Base64: ' + publicKey.exportKey()
print str(publicKey)
test = SHA.new(message)
verifier = PKCS1_v1_5.new(publicKey)
signature_base = base64.b64decode(signature)
print "Verification: " + str(verifier.verify(test, signature_base))
I find that MyBase64.encode(pubKey.getEncoded()) (in Java) is the same as publicKey.exportKey() (in Python).

However, the verification always returns false.

The Java code seems to work fine if I run it using javac.

Any idea what may be going wrong?
Answer: Often an error like this has to do with extra whitespace sneaking into the
data string. Try stripping it before computing the hashes.
|
Node.js Saving a GET request's HTML response
Question: I'm apparently a little newer to Javascript than I'd care to admit. I'm trying
to pull a webpage using Node.js and save the contents as a variable, so I can
parse it however I feel like.
In Python, I would do this:
from bs4 import BeautifulSoup # for parsing
import urllib
text = urllib.urlopen("http://www.myawesomepage.com/").read()
parse_my_awesome_html(text)
How would I do this in Node? I've gotten as far as:
var request = require("request");
request("http://www.myawesomepage.com/", function (error, response, body) {
/*
Something here that lets me access the text
outside of the closure
This doesn't work:
this.text = body;
*/
})
Answer:
var request = require("request");
var parseMyAwesomeHtml = function(html) {
//Have at it
};
request("http://www.myawesomepage.com/", function (error, response, body) {
if (!error) {
parseMyAwesomeHtml(body);
} else {
console.log(error);
}
});
Edit: As Kishore noted, there are nice options for parsing available. Also see
cheerio if you have python/gyp issues with jsdom on windows. [Cheerio on
github](https://github.com/MatthewMueller/cheerio)
|
Python 2 and Python 3 dual development
Question: I'm just starting a new Python project, and ideally I'd like to offer Python 2
and 3 support from the start, with minimal developmental overhead. My question
is, what is the best way of doing this for brand new projects?
I have come across projects that run 2to3, or even 3to2, as part of their
installation script. This seems to be a very common way. However, there seems
to be several different ways of doing this. I also came across
[Distribute](http://packages.python.org/distribute/python3.html).
There is also the option of trying to write polyglot Python 2/Python 3 code.
Even though this seems like a horrible idea, I have noticed that I tend to
write code lately that is more idiomatic as Python 3 code, even though I still run it as Python 2. I have a feeling this only helps my own transition when the day finally arrives, and doesn't do much toward actually offering dual support, though.
Most of the projects offering dual support that I have seen added Python 3
support late, so I'm especially curious if there is a better way that is more
suited for new projects, where you have the benefit of a clean slate.
Thanks!
* * *
**Update:** Thanks everyone, here's a summary of the suggestions:
Polyglot (same source code files run on Python 2 and 3)
* Use [six](http://packages.python.org/six/)
* Especially viable if you don't require support for low versions of 2.*
* No one suggested this, but use `from __future__ import ...` to give you Python 3 behavior with usually a modest Python 2.* requirement (for instance, Python 3-style division has been available since Python 2.2). This is especially applicable for brand new project, since it helps if you make this decision early on.
* If your Python 3-specific code is very rare, you can check for `sys.version_info >= (3,)` and basically do what six does, but in an ad-hoc fashion.
Automatic conversion
* Run [2to3](http://docs.python.org/library/2to3.html) or 3to2 automatically in `setup.py`
* Use [Distribute](http://packages.python.org/distribute/python3.html) to do this for you (Distribute is a one-file project that can easily be included in your project so as to avoid another requirement, as mentioned [here](http://packages.python.org/distribute/using.html))
* Rely on unit tests to make sure the conversion is sound
For videos about how to deal with Unicode and Time/date, check out [Paulo
Scardine's answer](http://stackoverflow.com/a/11372711/682907).
Answer: You should check out [six](http://packages.python.org/six/), a library that
provides a unified interface to various things that differ between Python 2
and 3.
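For illustration, a minimal sketch of the kind of shim six provides (`six.text_type` and `six.moves` are part of six's documented API; the `to_text` helper is just an example, not part of six):

import six
from six.moves import range  # xrange on Python 2, range on Python 3

def to_text(value, encoding='utf-8'):
    # Normalize the bytes/text split that differs between Python 2 and 3.
    if isinstance(value, bytes):
        return value.decode(encoding)
    return six.text_type(value)  # unicode on Python 2, str on Python 3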
|
How to display leading zeros after splitting numbers in Python
Question: Is there any way I can display leading zeros after splitting numbers into parts, like the first 3 digits and last 2 digits, in Python 3.2? My script returns the numbers with no leading zeros...
I have a csv file that looks like this:
Name,Code
blackberry,20001
wineberry,02002
rasberry,30000
blueberry,03010
My desired output:
Name,Code,Code1,Code2
blackberry,20001,200,01
wineberry,02002,020,02
rasberry,30000,300,00
blueberry,03010,030,10
My script:
import csv
all = []
with open('aaa.csv','r') as csvinput:
with open('bbb.csv', 'w') as csvoutput:
reader = csv.reader(csvinput,delimiter=',')
writer = csv.writer(csvoutput,delimiter=",", lineterminator='\n')
row = next(reader)
row.append('Code1')
row.append('Code2')
all.append(row)
for row in reader:
row.append(row[1][0:2])
row.append(row[1][-2:])
all.append(row)
writer.writerows(all)
print(all)
The above script returns:
Name,Code,Code1,Code2
blackberry,20001,200,1
wineberry,02002,20,2
rasberry,30000,300,0
blueberry,03010,30,10
Answer: The `csv.writer` does not convert strings to numerical types, so it is not
causing the error you are seeing.
I expect that you are viewing your csv file in Excel, which converts strings
to numbers wherever it can. Look at your output in a text editor and you
should see that the leading zeros are really there.
Also, as others have pointed out, your first slice should be `row[1][0:3]`.
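For reference, the corrected loop from the question; only the first slice changes:

for row in reader:
    row.append(row[1][0:3])  # first three characters, e.g. '020'
    row.append(row[1][-2:])  # last two characters, e.g. '02'
    all.append(row)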
|
Plot numpy datetime64 with matplotlib
Question: I have two 1D numpy arrays; one is the time of measurement in datetime64 format,
for example:
array([2011-11-15 01:08:11, 2011-11-16 02:08:04, ..., 2012-07-07 11:08:00], dtype=datetime64[us])
and the other array, of the same length and dimension, holds integer data.
I'd like to make a plot in matplotlib of time vs. data. If I plot the data directly, this is what I get:
plot(timeSeries, data)

Is there a way to get time in more natural units? For example in this case
months/year would be fine.
EDIT:
I have tried Gustav Larsson's suggestion, however I get an error:
Out[128]:
[<matplotlib.lines.Line2D at 0x419aad0>]
---------------------------------------------------------------------------
OverflowError Traceback (most recent call last)
/usr/lib/python2.7/dist-packages/IPython/zmq/pylab/backend_inline.pyc in show(close)
100 try:
101 for figure_manager in Gcf.get_all_fig_managers():
--> 102 send_figure(figure_manager.canvas.figure)
103 finally:
104 show._to_draw = []
/usr/lib/python2.7/dist-packages/IPython/zmq/pylab/backend_inline.pyc in send_figure(fig)
209 """
210 fmt = InlineBackend.instance().figure_format
--> 211 data = print_figure(fig, fmt)
212 # print_figure will return None if there's nothing to draw:
213 if data is None:
/usr/lib/python2.7/dist-packages/IPython/core/pylabtools.pyc in print_figure(fig, fmt)
102 try:
103 bytes_io = BytesIO()
--> 104 fig.canvas.print_figure(bytes_io, format=fmt, bbox_inches='tight')
105 data = bytes_io.getvalue()
106 finally:
/usr/lib/pymodules/python2.7/matplotlib/backend_bases.pyc in print_figure(self, filename, dpi, facecolor, edgecolor, orientation, format, **kwargs)
1981 orientation=orientation,
1982 dryrun=True,
-> 1983 **kwargs)
1984 renderer = self.figure._cachedRenderer
1985 bbox_inches = self.figure.get_tightbbox(renderer)
/usr/lib/pymodules/python2.7/matplotlib/backends/backend_agg.pyc in print_png(self, filename_or_obj, *args, **kwargs)
467
468 def print_png(self, filename_or_obj, *args, **kwargs):
--> 469 FigureCanvasAgg.draw(self)
470 renderer = self.get_renderer()
471 original_dpi = renderer.dpi
/usr/lib/pymodules/python2.7/matplotlib/backends/backend_agg.pyc in draw(self)
419
420 try:
--> 421 self.figure.draw(self.renderer)
422 finally:
423 RendererAgg.lock.release()
/usr/lib/pymodules/python2.7/matplotlib/artist.pyc in draw_wrapper(artist, renderer, *args, **kwargs)
53 def draw_wrapper(artist, renderer, *args, **kwargs):
54 before(artist, renderer)
---> 55 draw(artist, renderer, *args, **kwargs)
56 after(artist, renderer)
57
/usr/lib/pymodules/python2.7/matplotlib/figure.pyc in draw(self, renderer)
896 dsu.sort(key=itemgetter(0))
897 for zorder, a, func, args in dsu:
--> 898 func(*args)
899
900 renderer.close_group('figure')
/usr/lib/pymodules/python2.7/matplotlib/artist.pyc in draw_wrapper(artist, renderer, *args, **kwargs)
53 def draw_wrapper(artist, renderer, *args, **kwargs):
54 before(artist, renderer)
---> 55 draw(artist, renderer, *args, **kwargs)
56 after(artist, renderer)
57
/usr/lib/pymodules/python2.7/matplotlib/axes.pyc in draw(self, renderer, inframe)
1995
1996 for zorder, a in dsu:
-> 1997 a.draw(renderer)
1998
1999 renderer.close_group('axes')
/usr/lib/pymodules/python2.7/matplotlib/artist.pyc in draw_wrapper(artist, renderer, *args, **kwargs)
53 def draw_wrapper(artist, renderer, *args, **kwargs):
54 before(artist, renderer)
---> 55 draw(artist, renderer, *args, **kwargs)
56 after(artist, renderer)
57
/usr/lib/pymodules/python2.7/matplotlib/axis.pyc in draw(self, renderer, *args, **kwargs)
1039 renderer.open_group(__name__)
1040
-> 1041 ticks_to_draw = self._update_ticks(renderer)
1042 ticklabelBoxes, ticklabelBoxes2 = self._get_tick_bboxes(ticks_to_draw, renderer)
1043
/usr/lib/pymodules/python2.7/matplotlib/axis.pyc in _update_ticks(self, renderer)
929
930 interval = self.get_view_interval()
--> 931 tick_tups = [ t for t in self.iter_ticks()]
932 if self._smart_bounds:
933 # handle inverted limits
/usr/lib/pymodules/python2.7/matplotlib/axis.pyc in iter_ticks(self)
876 Iterate through all of the major and minor ticks.
877 """
--> 878 majorLocs = self.major.locator()
879 majorTicks = self.get_major_ticks(len(majorLocs))
880 self.major.formatter.set_locs(majorLocs)
/usr/lib/pymodules/python2.7/matplotlib/dates.pyc in __call__(self)
747 def __call__(self):
748 'Return the locations of the ticks'
--> 749 self.refresh()
750 return self._locator()
751
/usr/lib/pymodules/python2.7/matplotlib/dates.pyc in refresh(self)
756 def refresh(self):
757 'Refresh internal information based on current limits.'
--> 758 dmin, dmax = self.viewlim_to_dt()
759 self._locator = self.get_locator(dmin, dmax)
760
/usr/lib/pymodules/python2.7/matplotlib/dates.pyc in viewlim_to_dt(self)
528 def viewlim_to_dt(self):
529 vmin, vmax = self.axis.get_view_interval()
--> 530 return num2date(vmin, self.tz), num2date(vmax, self.tz)
531
532 def _get_unit(self):
/usr/lib/pymodules/python2.7/matplotlib/dates.pyc in num2date(x, tz)
287 """
288 if tz is None: tz = _get_rc_timezone()
--> 289 if not cbook.iterable(x): return _from_ordinalf(x, tz)
290 else: return [_from_ordinalf(val, tz) for val in x]
291
/usr/lib/pymodules/python2.7/matplotlib/dates.pyc in _from_ordinalf(x, tz)
201 if tz is None: tz = _get_rc_timezone()
202 ix = int(x)
--> 203 dt = datetime.datetime.fromordinal(ix)
204 remainder = float(x) - ix
205 hour, remainder = divmod(24*remainder, 1)
OverflowError: signed integer is greater than maximum
Could this be a bug? Or am I missing something? I also tried something simple:
import matplotlib.pyplot as plt
import numpy as np
dates=np.array(["2011-11-13", "2011-11-14", "2011-11-15", "2011-11-16", "2011-11-19"], dtype='datetime64[us]')
data=np.array([1, 2, 3, 4, 5])
plt.plot_date(dates, data)
plt.show()
I still get this error:
OverflowError: signed integer is greater than maximum
I don't understand what am I doing wrong. ipython 0.13, matplotlib 1.1, Ubuntu
12.04 x64.
FINAL EDIT:
It seems that matplotlib doesn't support `dtype=datetime64`, so I needed to convert the `timeSeries` to ordinary `datetime.datetime` objects from the `datetime` module.
Answer:
import numpy as np
from datetime import datetime
from matplotlib.pyplot import plot_date, show

a = np.datetime64('2002-06-28').astype(datetime)
plot_date(a, 2)
show()
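Applied to an array like the one in the question, the same conversion works elementwise. A sketch; it assumes microsecond-unit datetime64 values convert to `datetime.datetime`, which holds for the `[us]` unit:

import numpy as np
import matplotlib.pyplot as plt
from datetime import datetime

dates = np.array(["2011-11-13", "2011-11-14", "2011-11-15", "2011-11-16"],
                 dtype='datetime64[us]')
data = [1, 2, 3, 4]

plt.plot_date(dates.astype(datetime), data)  # datetime64 -> datetime.datetime
plt.show()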
|
Cannot access member variable using abc module and properties in python
Question: I wrote some code that exercises the `abc` module and `properties`. However, it seems that I am not able to access the `width` and `height` variables. The code is as follows:
from abc import ABCMeta, abstractmethod
class Polygon:
__metaclass__ = ABCMeta
@abstractmethod
def compute_area(self): pass
def __init__(self):
self.width = None
self.height = None
@property
def width_prop(self):
return self.width
@property
def height_prop(self):
return self.height
@width_setter.setter
def width_setter(self, width):
self.width = width
@height_setter.setter
def height_setter(self, height):
self.height = height
class Triangle(Polygon):
def compute_area(self):
return 0.5 * width * height
if __name__ == "__main__":
tri = Triangle()
tri.height_setter(20)
tri.width_setter(30)
print "Area of the triangle = ", tri.compute_area()
The error message that I obtained is `NameError: name 'width_setter' is not
defined`. What could be wrong in my implementation?
EDIT:
from abc import ABCMeta, abstractmethod
class Polygon:
__metaclass__ = ABCMeta
@abstractmethod
def compute_area(self): pass
def __init__(self):
self.width = None
self.height = None
@property
def width_prop(self):
return self.width
@width_prop.setter
def width_setter(self, width):
self.width = width
@property
def height_prop(self):
return self.height
@height_prop.setter
def height_setter(self, height):
self.height = height
class Triangle(Polygon):
def compute_area(self):
return 0.5 * self.width * self.height
if __name__ == "__main__":
tri = Triangle()
tri.height_prop = 20
tri.width_prop = 30
print "Area of the triangle = ", tri.compute_area()
Answer: Write Python as Python, not as C++ or Java:
class Polygon:
def compute_area(self): # if you need this at all...
raise NotImplementedError() # what does it do for you?
def __init__(self):
self.width = None
self.height = None
class Triangle(Polygon):
def compute_area(self):
return 0.5 * self.width * self.height
if __name__ == "__main__":
tri = Triangle()
tri.height = 20
tri.width = 30
print "Area of the triangle = ", tri.compute_area()
|
Python (w/ pyglet) memory leak
Question: In a very large project I'm searching for a memory leak. Here is my progress so far:
Using a class counter,
import gc
from collections import Counter
def count():
return Counter(type(o).__name__ for o in gc.get_objects())
I see that for each render pass of the program I gain dicts and
instancemethods:
Counter({'instancemethod': 9714, 'dict': 7274, ...
Counter({'instancemethod': 9716, 'dict': 7275, ...
Counter({'instancemethod': 9718, 'dict': 7276, ...
Counter({'instancemethod': 9720, 'dict': 7277, ...
I then tried to identify the additional dict that isn't getting garbage
collected, with this:
def get_latest():
for e in gc.get_objects():
if type(e).__name__ == "dict":
latest = e
return latest
Unfortunately, that mostly returns the identical object each time (dict1 is dict2), so it's not the last one in the list.
Any pointers on how to find the leak would be appreciated. Using python 2.7
and bleeding edge pyglet.
Also, this only affects the game's client, not the server. So it may be a problem within pyglet; even so, I would like to find it.
EDIT: This question is answered by myself, my problem was using pyglet's
push_handlers method each frame as opposed to once.
Answer: My problem was using pyglet's push_handlers method each frame as opposed to
once. Removing that, the memory leak is gone.
|
RESTFUL POST with Python request to Glassfish Server
Question: I'm having difficulty trying to make a Python REST POST to a webservice running on Glassfish. I have verified that the POST works OK using curl, but I'm having no luck with Python.
**Here is the CURL request that works ok.**
curl -X POST -H "Content-Type: application/json" -d '{"id":1,"lastname":"smith"}'
http://192.168.0.20:8080/field1/resources/com.field1entity.field1
**Here is the Python code to make the POST request**
import urllib
import httplib2
def call():
http = httplib2.Http()
url = 'http://192.168.0.20:8080/field1/resources/com.field1entity.field1'
params = urllib.urlencode({"id":11111,"lastname":"oojamalip"})
response, content = http.request(url, 'POST', params, headers={'Content-type':'application/json'})
print "lets stop here to have a looksy at the variables"
print content
if __name__ == '__main__':
namesPage = call()
print namesPage
**Output from console,**
> Unexpected character ('l' (code 108)): expected a valid value (number,
> String, array, object, 'true', 'false' or 'null') at [Source:
> org.apache.catalina.connector.CoyoteInputStream@18f494d; line: 1, column: 2]
Hope someone can shed some light on the problem.
thanks Nick
Answer: You are URL-encoding the params and then telling the server the body is JSON-encoded. Encode it as JSON instead:
import json
params = json.dumps({"id":11111,"lastname":"oojamalip"})
# then
response, content = http.request(url, 'POST', body=params, headers={'Content-type':'application/json'})
|
csv reader behavior with None and empty string
Question: I'd like to distinguish `None` from empty strings when going back and forth between Python data structures and a csv representation using Python's `csv` module.
My issue is that when I run:
import csv, cStringIO
data = [['NULL/None value',None],
['empty string','']]
f = cStringIO.StringIO()
csv.writer(f).writerows(data)
f = cStringIO.StringIO(f.getvalue())
data2 = [e for e in csv.reader(f)]
print "input : ", data
print "output: ", data2
I get the following output :
input : [['NULL/None value', None], ['empty string', '']]
output: [['NULL/None value', ''], ['empty string', '']]
Of course, I could play with `data` and `data2` to distinguish `None` and
empty strings with things like:
data = [d if d!=None else 'None' for d in data]
data2 = [d if d!='None' else None for d in data2]
But that would partly defeat my interest of the `csv` module (quick
deserialization/serialization implemented in C, specially when you are dealing
with large lists).
Is there a `csv.Dialect` or parameters to `csv.writer` and `csv.reader` that
would enable them to distinguish between `''` and `None` in this use-case?
If not, would there be an interest in implementing a patch to `csv.writer` to
enable this kind of back and forth? (Possibly a `Dialect.None_translate_to`
parameter defaulting to `''` to ensure backward compatibility)
Answer: You could at least partially side-step what the `csv` module does by creating
your own version of a singleton `None`-like class/value:
class NONE(object):
def __repr__(self): # method csv.writer class uses to write values
return 'NONE' # unique string value to represent None
def __len__(self): # method called to determine length and truthiness
return 0 # (optional)
NONE = NONE() # singleton instance of the class
import csv
import cStringIO
data = [['None value', None], ['NONE value', NONE], ['empty string', '']]
f = cStringIO.StringIO()
csv.writer(f).writerows(data)
f = cStringIO.StringIO(f.getvalue())
print " input:", data
print "output:", [e for e in csv.reader(f)]
Results:
input: [['None value', None], ['NONE value', NONE], ['empty string', '']]
output: [['None value', ''], ['NONE value', 'NONE'], ['empty string', '']]
Using `NONE` instead of `None` would preserve enough information for you to be
able to differentiate between it and any actual empty-string data values.
**Even better alternative...**
You could use the same approach to implement a pair of relatively lightweight
`csv.reader` and `csv.writer` “proxy” classes -- necessary since you can't
actually subclass the built-in `csv` classes which are written in C -- without
introducing a lot of overhead (since the majority of the processing would
still be performed by the underlying built-ins). This would make what goes on
completely transparent since it's all encapsulated within the proxies.
import csv
class csvProxyBase(object): _NONE = '<None>' # unique value representing None
class csvWriter(csvProxyBase):
    def __init__(self, csvfile, *args, **kwargs):
        self.writer = csv.writer(csvfile, *args, **kwargs)
def writerow(self, row):
self.writer.writerow([self._NONE if val is None else val for val in row])
def writerows(self, rows):
map(self.writerow, rows)
class csvReader(csvProxyBase):
    def __init__(self, csvfile, *args, **kwargs):
        self.reader = csv.reader(csvfile, *args, **kwargs)
def __iter__(self):
return self
def next(self):
return [None if val == self._NONE else val for val in self.reader.next()]
if __name__ == '__main__':
import cStringIO as StringIO
data = [['None value', None], ['empty string', '']]
f = StringIO.StringIO()
csvWriter(f).writerows(data)
f = StringIO.StringIO(f.getvalue())
print " input:", data
print "output:", [e for e in csvReader(f)]
Results:
input: [['None value', None], ['empty string', '']]
output: [['None value', None], ['empty string', '']]
|
Publishing on Facebook fan page with Python
Question: I tried a couple of code samples for publishing on a Facebook wall, but I would like to do something a bit different: publish on my Facebook fan page. The following code only publishes on my personal profile. Can anyone give me a clue how to publish on a fan page?
#!/usr/bin/python
import facebook
import urllib
import urlparse
FACEBOOK_APP_ID = 'X'
FACEBOOK_APP_SECRET = 'Y'
FACEBOOK_PROFILE_ID = 'MyProfileId (**not page id, right?**)'
oauth_args = dict(client_id = FACEBOOK_APP_ID,
client_secret = FACEBOOK_APP_SECRET,
grant_type = 'client_credentials')
oauth_response = urllib.urlopen('https://graph.facebook.com/oauth/access_token?' + urllib.urlencode(oauth_args)).read()
page_token = 'PAGE TOKEN GOT IN https://graph.facebook.com/SITE?fields=access_token'
attach = {
"name": 'Hello world',
"link": 'http://www.example.com',
"caption": 'test post',
"description": 'some test',
"picture" : 'http://www.example.com/picture.jpg',
"page_token" : page_token
}
try:
oauth_access_token = urlparse.parse_qs(str(oauth_response))['access_token'][0]
except KeyError:
raise
print oauth_access_token
facebook_graph = facebook.GraphAPI(oauth_access_token)
try:
response = facebook_graph.put_wall_post('', attachment=attach,profile_id = FACEBOOK_PROFILE_ID)
except facebook.GraphAPIError as e:
print e
Answer: I corrected the code, here is the solution:
#!/usr/bin/python
# coding: utf-8
import facebook
import urllib
import urlparse
access_token_page='X'
FACEBOOK_APP_ID = 'Y'
FACEBOOK_APP_SECRET = 'Z'
FACEBOOK_PROFILE_ID = 'W'
oauth_args = dict(client_id = FACEBOOK_APP_ID,
client_secret = FACEBOOK_APP_SECRET,
grant_type = 'client_credentials')
oauth_response = urllib.urlopen('https://graph.facebook.com/oauth/access_token?' + urllib.urlencode(oauth_args)).read()
attach = {
"name": 'Hello world',
"link": 'http://www.example.com',
"caption": 'test post',
"description": 'some test',
"picture" : 'http://www.example.com/picture.jpg',
}
facebook_graph = facebook.GraphAPI(access_token_page)
try:
response = facebook_graph.put_wall_post('', attachment=attach)
except facebook.GraphAPIError as e:
print e
Information about authentication can be found at:
<https://developers.facebook.com/docs/authentication/pages/>
|
urlopen always retrieves the same webpage
Question: I am trying to parse webpages using urllib2, BeautifulSoup and Python 2.7.
The problem lies upstream: each time I try to retrieve a new webpage, I get
the one I already retrieved. However, pages are different in my webbrowser:
see [page 1](http://www.senscritique.com/clement/collection/#page=1) and [page
2](http://www.senscritique.com/clement/collection/#page=2). Is there something
wrong with the loop over page numbers?
Here is a code sample:
def main(page_number_max):
import urllib2 as ul
from BeautifulSoup import BeautifulSoup as bs
base_url = 'http://www.senscritique.com/clement/collection/#page='
for page_number in range(1, 1+page_number_max):
url = base_url + str(page_number) + '/'
html = ul.urlopen(url)
bt = bs(html)
for item in bt.findAll('div', 'c_listing-products-content xl'):
item_name = item.findAll('h2', 'c_heading c_heading-5 c_bold')
print str(item_name[0].contents[1]).split('\t')[11]
print('End of page ' + str(page_number) + '\n')
if __name__ == '__main__':
page_number_max = 2
main(page_number_max)
Answer: When you send an HTTP request to the server, everything after the "#" character is ignored. The part after "#" is only available to the browser.

If you open the developer tools in the Chrome browser (or Firebug in Firefox) you will see that every time you change pages on senscritique.com, a request is sent to the server. That's where the data you are looking for comes from.

I'm not going into details about what exactly to send in order to retrieve data from this page, because I think it's not consistent with their TOS.
|
Passing arguments to tp_new and tp_init from subtypes in Python C API
Question: I originally asked this question on the Python capi-sig list: [How to pass
arguments to tp_new and tp_init from
subtypes?](http://mail.python.org/pipermail/capi-sig/2012-July/000500.html)
I'm reading the Python [PEP-253](http://www.python.org/dev/peps/pep-0253/) on
subtyping and there are plenty of good recommendations on how to structure the
types, call `tp_new` and `tp_init` slots, etc.
But, it lacks an important note on passing arguments from sub to super type.
It seems the [PEP-253](http://www.python.org/dev/peps/pep-0253/) is unfinished
as per the note:
> (XXX There should be a paragraph or two about argument passing here.)
So, I'm trying to extrapolate some strategies [well known from the Python
classes subtyping](http://rhettinger.wordpress.com/2011/05/26/super-
considered-super/), especially techniques that each level strips-off
arguments, etc.
I'm looking for techniques to achieve similar effect to this, but using plain
**Python C API** (3.x):
class Shape:
def __init__(self, shapename, **kwds):
self.shapename = shapename
super().__init__(**kwds)
class ColoredShape(Shape):
def __init__(self, color, **kwds):
self.color = color
super().__init__(**kwds)
What would be the equivalent in Python C API?
How should one deal with a similar situation, but with arguments specific to the derived class expected in a different order? That is, arguments given at the end of the args tuple (or the `kwds` dict; I assume the principle would be the same).
Here is some (pseudo-)code that illustrates the situation:
class Base:
def __init__(self, x, y, z):
self.x = x
self.y = y
self.z = z
class Derived(Base):
def __init__(self, x, y, a):
self.a = a
        super().__init__(x, y, None)
Note, if the `a` was expected first:
Derived.__init__(self, a, x, y)
it would be a similar situation to the `Shape` and `ColoredShape` above. It would also be easier to deal with, I assume.
Could anyone help figure out the missing _XXX_ comment mentioned above and the correct technique for passing arguments from a subtype up to its super type(s) on construction?
UPDATE 2012-07-17:
Inspired by [ecatmur's answer](http://stackoverflow.com/a/11452964/151641)
below I looked through the Python 3 sources and found the `defdict_init` constructor
of
[collections.defaultdict](http://docs.python.org/py3k/library/collections.html#collections.defaultdict)
type object interesting. The type is derived from `PyDictObject` and its
constructor takes an additional argument, `default_factory`.
class collections.defaultdict([default_factory[, ...]])
Now, here is how the `default_factory` is stripped from the original `args` tuple, so that the rest of the arguments is forwarded to the `tp_init` of the base type, i.e. `PyDictObject`:
int result;
PyObject *newargs;
Py_ssize_t n = PyTuple_GET_SIZE(args);
...
newargs = PySequence_GetSlice(args, 1, n);
...
result = PyDict_Type.tp_init(self, newargs, kwds);
Note, this snippet shows only the relevant part of the `defdict_init`
function.
Answer: The problem is that `PyArgs_ParseTupleAndKeywords` doesn't provide a way to
extract extra `*args` and `**kwargs` from the input args and keywords; indeed,
any extra arguments result in a `TypeError`; "Function takes %s %d positional
arguments (%d given)", or "'%U' is an invalid keyword argument for this
function".
This means that you're going to have to parse args and keywords yourself;
you're guaranteed that args is a tuple and keywords is a dict, so you can use
the standard methods (`PyTuple_GET_ITEM` and `PyDict_GetItemString`) to
extract the arguments you're interested in, and identify and construct a tuple
and dict to pass on from the remainder. You obviously can't modify args,
because tuples are immutable; and while popping items from keywords should be
OK it does seem a little risky ([example
crash](http://bugs.python.org/issue2016)).
A more ambitious but definitely feasible route would be to copy
`vgetargskeywords` from `getargs.c`
(<http://hg.python.org/cpython/file/tip/Python/getargs.c>) and extend it to
take optional out-parameters for remainder `*args` and `**kwargs`. This should
be fairly straightforward as you just need to modify the parts where it
detects and throws `TypeError` on extra arguments ([extra
args](http://hg.python.org/cpython/file/4752fafb579d/Python/getargs.c#l1455);
[extra
keywords](http://hg.python.org/cpython/file/4752fafb579d/Python/getargs.c#l1592)).
Good luck if you choose this route.
|
Python osascript returning 0 it seems
Question: I'm trying to work with applescript for retrieving BPM values of songs.
Eventually I'd like to implement it with a game. Here's my code:
import os
import time
import sys
def getBPM():
iTunesInstruct = """'
tell application "iTunes"
set k to get bpm of current track
end tell
return k
'"""
bpm = os.system('arch -i386 osascript -e ' + iTunesInstruct )
#bpm =90
bpm = int(bpm)
bpm = round(bpm)
if bpm > 250:
bpm = 200
return bpm
def getBeatSecond(bpm):
bps = float(bpm) / 60
#raw_input(bps)
return float(bps)
i = 0
beatMatch = True
while True:
beat = 1 / getBeatSecond(getBPM()) # BPS Beat divided by a second.
if beatMatch:
time.sleep(beat)
print beat
else:
raw_input('Go??')
i += 1
if i > 50:
break
But this seems to only work once... it got the BPM of the song I was listening
to, saw it was 94, and then it seems on the second iteration it thought it was
0, and then it divided by 0 and died. What's going on?
Answer: **os.system** doesn't return the command's output; it returns the exit status.
The **90** you saw was osascript printing its result to stdout; the **0** that
ended up in `bpm` is the exit status from **os.system** (meaning no error).
Use `subprocess.Popen`
from subprocess import Popen, PIPE
def getBPM():
cmd = "arch -i386 osascript -e " + """'tell application "iTunes" to return bpm of current track'"""
        # communicate() waits for the script to finish and returns (stdout, stderr)
        bpm, tError = Popen(cmd, shell=True, stdin=PIPE, stdout=PIPE, stderr=PIPE).communicate()
        bpm = int(bpm)  # convert before comparing; int() tolerates the trailing newline
        if bpm > 250:
            return 200
        return bpm
|
Incorrect datetime value: ''2012-07-14 23:00:00''
Question: I'm having some trouble with the datetime format with Python/MySQL.
I calculate the datetime using the following script (fed by a Python
dictionary):
    tempDate = (str(eachday.get("date").get("year")).zfill(4) + "-" +
                str(eachday.get("date").get("month")).zfill(2) + "-" +
                str(eachday.get("date").get("day")).zfill(2) + " " +
                str(eachday.get("date").get("hour")).zfill(2) + ":" +
                str(eachday.get("date").get("min")).zfill(2) + ":" +
                str(eachday.get("date").get("sec")).zfill(2))
Which yields a value that looks like `2012-04-02 04:04:23`.
I can insert into MySQL without any problems.
sql.execute("""INSERT INTO `db`.`table`(`id`, `fk_id`, `time`, `field1`, `field2`) VALUES (NULL, %s, %s, %s, %s);""", (fk_value, tempDate, value1, value2))
DB_CONN.commit()
But when I try to delete anything with that datetime,
sql.execute("""DELETE FROM `db`.`table` WHERE `time` = "%s";""", (tempDate))
DB_CONN.commit()
it returns a warning about an incorrect datetime value:
Warning: Incorrect datetime value: ''2012-07-17 23:00:00'' for column 'time' at row 1
How can I delete by datetime in Python? This is especially confusing since
inserting the same variable (nothing changed) works perfectly.
Answer: You could easily make use of the `datetime` module, which is a lot more
robust for building date strings:
from datetime import datetime
eachday = {
'date': {
'day': 2,
'hour': 4,
'min': 4,
'month': 4,
'sec': 23,
'year': 2012
}
}
keys = ('year','month','day','hour','min','sec')
dt = datetime(*(eachday['date'][k] for k in keys))
print dt.strftime('%Y-%m-%d %H:%M:%S')
# '2012-04-02 04:04:23'
The benefit of sticking with a datetime is that it's the native format you
will get back from MySQLdb anyway, and you can pass it as a value with no
conversions. So really, formatting the date string isn't even needed.
    sql.execute("DELETE FROM `db`.`table` WHERE `time`=%s", (dt,))
Note the placeholder is a bare `%s` with no quotes around it: the driver quotes
values itself, and the doubled quotes in your warning (`''2012-07-17 23:00:00''`)
come from wrapping `%s` in quote marks yourself. It is best to keep the datetime
value as a datetime object as long as possible instead of carrying around int
components.
|
double to PyFloat conversion is incorrect
Question: I'm learning SWIG, for using C in Python. I've written this function, but I
can't understand, why the wrapped `myfunc` returns wrong float/double values:
mfuncs.c
#include <stdlib.h>
float myfunc(int n) {
float result;
result = 100 / n;
return result;
}
mfuncs.i
%module mfuncs
%typemap(out) double, float "$result = PyFloat_FromDouble($1);"
extern float myfunc(int n);
Finally I get 1107558400.0 instead of 33.33333.
>>> import mfuncs
>>> mfuncs.myfunc(3)
1107558400.0
>>>
Where is the mistake?
Answer: The SWIG typemap is unneeded - it's provided by default, you only need to
write typemaps for "esoteric" types and the default `double`/`float` ones
provided are fine here.
The real problem here is that you're not compiling with warnings enabled or
ignoring them! **It's really worth getting in the habit of compiling with
"-Wall -Wextra" or whatever your compiler requires to enable maximum warnings
and heeding them.**
Your SWIG interface only tells _SWIG_ about the function `myfunc` but there's
nothing in that interface to make the declaration available to the compiler
you use to compile the generated myfuncs_wrap.c. This means that when you come
to compile the shared library you're relying on an implicit declaration of
`myfunc`. GCC on my machine with -Wall reports this:
> test_wrap.c:3139:3: warning: implicit declaration of function 'myfunc'
The implicit declaration assumes it returns `int`. That's just the rule in C
if there is no declaration, it's as though you wrote:
    #include <stdio.h>
int myfunc(int n);
int main() {
printf("%d\n", myfunc(3));
return 0;
}
which is clearly wrong (undefined behaviour to be exact) given the definition
of `myfunc` returns a `float`. Your implementation (legally) chooses to do the
simplest thing for this undefined behaviour, which is roughly a bit-wise cast
from `int` to `float`. (It could equally well do _anything_ , even something
different on every run - that's the beauty of undefined behaviour).
You can fix your SWIG interface by changing it to:
%module mfuncs
%{
extern float myfunc(int n);
%}
extern float myfunc(int n);
this works because the code between `%{` and `%}` is directly passed to the
generated wrapper, which makes the compiler aware of the real declaration of
`myfunc` when building the wrapper.
There's a nicer solution in my view though: provide the declaration only once,
in a header file and then your interface file becomes:
%module mfuncs
%{
#include "myfunc.h"
%}
%include "myfunc.h"
(and obviously `#include "myfunc.h"` in myfunc.c). In that way you only write
the declaration once and the compiler will warn/error if there's anything
that's not quite expected rather than just take a (usually wrong) best guess.
|
In Python, how can I turn this format into a unix timestamp?
Question:
Mon Jul 09 09:20:28 +0000 2012
If I have a format like that as a STRING, how can I turn it into a unix
timestamp?
Note: I'm getting this format from Twitter's API:
[https://api.twitter.com/1/statuses/user_timeline.json?include_entities=true&include_rts=true&screen_name=twitter](https://api.twitter.com/1/statuses/user_timeline.json?include_entities=true&include_rts=true&screen_name=twitter)
Answer: The best option is using `dateutil.parser.parse()` which gives you a
`datetime` object with proper timezone information:
>>> import dateutil.parser
>>> dt = dateutil.parser.parse('Mon Jul 09 09:20:28 +0200 2012')
>>> dt
datetime.datetime(2012, 7, 9, 9, 20, 28, tzinfo=tzoffset(None, 7200))
Now you just need to convert it to a UNIX timestamp:
>>> import time
>>> int(time.mktime(dt.timetuple()))
1341822028
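Note that `time.mktime` interprets the time tuple as _local_ time, so the result
above is only correct when your local UTC offset happens to match the one in the
string. A timezone-safe variant using only the standard library would be:
    import calendar
    timestamp = calendar.timegm(dt.utctimetuple())  # utctimetuple() normalizes to UTC first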
* * *
The format you have can also be easily parsed using
[`email.utils.parsedate_tz`](http://docs.python.org/dev/library/email.util.html#email.utils.parsedate_tz):
>>> import datetime
>>> import email.utils
>>> parts = email.utils.parsedate_tz('Mon Jul 09 09:20:28 +0200 2012')
>>> dt = datetime.datetime(*parts[:6]) - datetime.timedelta(seconds=parts[-1])
>>> str(dt)
'2012-07-09 07:20:28'
This is actually how `email.utils.parsedate_to_datetime` in Python 3.3 is
implemented (if you want to copy&paste this into your project, replace
`__parsedate_tz` with `parsedate_tz` from `email.utils`):
def parsedate_to_datetime(data):
if not data:
return None
*dtuple, tz = __parsedate_tz(data)
if tz is None:
return datetime.datetime(*dtuple[:6])
return datetime.datetime(*dtuple[:6],
tzinfo=datetime.timezone(datetime.timedelta(seconds=tz)))
|
Cloudera CDH3 installation failure, how to get around this?
Question: I am attempting to install CHD3 onto a 3 node cluster. I launch the
installations via the Cloudera Manager. All three installations fail.
I see this error after the Cloudera installation fails in /var/log/cloudera-
scm-agent/cloudera-scm-agent.out:
File "/usr/lib64/cmf/agent/src/cmf/agent.py", line 19, in <module>
import psutil
File "/usr/lib64/cmf/agent/build/env/lib/python2.6/site-packages/psutil-0.3.0-py2.6-linux-x86_64.egg/psutil/__init__.py", line 84, in <module>
TOTAL_PHYMEM = _psplatform.phymem_usage()[0]
File "/usr/lib64/cmf/agent/build/env/lib/python2.6/site-packages/psutil-0.3.0-py2.6-linux-x86_64.egg/psutil/_pslinux.py", line 122, in phymem_usage
percent = usage_percent(total - (free + buffers + cached), total,
TypeError: unsupported operand type(s) for +: 'int' and 'NoneType'
Apparently the Python interpreter sees "free", "buffers", or "cached" as having
NoneType at startup, and this error causes the entire installation to roll
back.
Can anyone advise as to why this occurs and/or a way around the problem?
Thanks in advance.
Answer: The problem is here, in phymem_usage() in _pslinux.py:
def phymem_usage():
# total, used and free values are matched against free cmdline utility
# the percentage matches top/htop and gnome-system-monitor
f = open('/proc/meminfo', 'r')
try:
total = free = buffers = cached = None
for line in f:
if line.startswith('MemTotal:'):
total = int(line.split()[1]) * 1024
elif line.startswith('MemFree:'):
free = int(line.split()[1]) * 1024
elif line.startswith('Buffers:'):
buffers = int(line.split()[1]) * 1024
elif line.startswith('Cached:'):
cached = int(line.split()[1]) * 1024
break
used = total - free
percent = usage_percent(total - (free + buffers + cached), total,
_round=1)
return ntuple_sysmeminfo(total, used, free, percent)
finally:
f.close()
Note that it is examining /proc/meminfo and is converting fields to integers
without checking if those fields exist. On some systems, including some
virtualization technologies, Buffers or Cached may be missing. (The LSB spec
states that most of these fields are optional.)
A quick fix would be to change the path in _pslinux.py from /proc/meminfo to
/tmp/meminfo, copy the real file there with `cat /proc/meminfo > /tmp/meminfo`,
and append any missing fields, e.g.:
    Buffers: 0 kB
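A more defensive sketch (assuming you patch `phymem_usage` in _pslinux.py
directly; `usage_percent` and `ntuple_sysmeminfo` are the module's own helpers)
would default every field to 0, so a missing line can never leave a None behind:
    def phymem_usage():
        # default all fields to 0 instead of None
        fields = {'MemTotal:': 0, 'MemFree:': 0, 'Buffers:': 0, 'Cached:': 0}
        f = open('/proc/meminfo', 'r')
        try:
            for line in f:
                for key in fields:
                    if line.startswith(key):
                        fields[key] = int(line.split()[1]) * 1024
        finally:
            f.close()
        total = fields['MemTotal:']
        used = total - fields['MemFree:']
        percent = usage_percent(total - (fields['MemFree:'] + fields['Buffers:'] +
                                         fields['Cached:']), total, _round=1)
        return ntuple_sysmeminfo(total, used, fields['MemFree:'], percent)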
|
Starting and stopping processes in a cluster
Question: I'm writing software that runs a bunch of different programs (via
[twisted's](http://twistedmatrix.com) `twistd`); that is N daemons of various
kinds must be started across multiple machines. If I did this manually, I
would be running commands like `twistd foo_worker`, `twistd bar_worker` and so
on on the machines involved.
**Basically there will be a list of machines, and the daemon(s) I need them to
run. Additionally, I need to shut them all down when the need arises.**
If I were to program this from scratch, I would write a "spawner" daemon that
would run permanently on each machine in the cluster with the following
features accessible through the network for an authenticated administrator
client:
* Start a process with a given command line. Return a handle to manage it.
* Kill a process given a handle.
* Optionally, query stuff like cpu time given a handle.
It would be fairly trivial to program the above, but I cannot imagine this is
a new problem. Surely there are existing solutions to doing exactly this? I do
however lack experience with server administration, and don't even know what
the related terms are.
**What existing ways are there to do this on a linux cluster, and what are
some of the important terms involved?** Python specific solutions are welcome,
but not necessary.
Another way to put it: Given a bunch of machines in a lan, how do I
programmatically work with them as a cluster?
Answer: The usual tool is a batch queue system, such as SLURM, SGE, Torque/Moab, LSF,
and so on.
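With SLURM, for instance, the start/kill-by-handle workflow from the question
maps directly onto job submission and cancellation. A rough sketch (assuming
SLURM's `sbatch` and `scancel` are installed on the cluster):
    import subprocess

    def start_daemon(batch_script):
        # sbatch prints "Submitted batch job <id>"; the job id is your handle
        out = subprocess.check_output(['sbatch', batch_script])
        return out.strip().split()[-1]

    def stop_daemon(job_id):
        subprocess.call(['scancel', job_id])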
|
symbol picked up from wrong dylib on linking
Question: I am linking a binary which imports a symbol defined in two dependent dylibs,
and I can't make `ld` pick the correct one.
The symbol is `_init_process` and it's defined both in libSystem.dylib (added
by ld implicitly) and libida.dylib (our library). I want `ld` to pick libida
but I can't make it work.
Here's the final linker command line:
/usr/libexec/gcc/i686-apple-darwin10/4.2.1/ld -dynamic -dylib
-dylib_compatibility_version 1.0 -dylib_current_version 1.0 -arch
i386 -macosx_version_min 10.5 -macosx_version_min 10.5
-single_module -weak_reference_mismatches non-weak -w -o
../../bin/x86_mac_gcc/plugins/python.pmc -ldylib1.10.5.o
-L../../bin/x86_mac_gcc/
-L/Developer/SDKs/MacOSX10.5.sdk/usr/lib/i686-apple-darwin10/4.2.1
-L/Developer/SDKs/MacOSX10.5.sdk/usr/lib
-L/usr/lib/gcc/i686-apple-darwin10/4.2.1
-L/usr/lib/gcc/i686-apple-darwin10/4.2.1
-L/Developer/SDKs/MacOSX10.5.sdk/usr/lib/gcc/i686-apple-darwin10/4.2.1/../../../i686-apple-darwin10/4.2.1
-L/Developer/SDKs/MacOSX10.5.sdk/usr/lib/gcc/i686-apple-darwin10/4.2.1/../../..
-v -lpthread ../../lib/x86_mac_gcc_32/libiconv.2.2.0.dylib
obj/x86_mac_gcc_32/python.o32 obj/x86_mac_gcc_32/idaapi.o32 -lida
-install_name python.pmc -lpython2.6 -ldl -why_load
-search_paths_first -t -lstdc++ -lgcc_s.10.5 -lgcc -lSystem
As you can see, -lida comes before -lSystem, so I would expect `ld` to pick
libida.dylib first, but it's not happening:
dlopen(/home/test/build/bin/x86_mac_gcc/plugins/python.pmc): dlopen(/home/test/build/bin/x86_mac_gcc/plugins/python.pmc, 2): Symbol not found: _init_process
Referenced from: /home/test/build/bin/x86_mac_gcc/plugins/python.pmc
Expected in: /usr/lib/libSystem.B.dylib
in /home/test/build/bin/x86_mac_gcc/plugins/python.pmc
/home/test/build/bin/x86_mac_gcc/plugins/python.pmc: can't load file
Debug output from the linker:
Library search paths:
../../bin/x86_mac_gcc/
/Developer/SDKs/MacOSX10.5.sdk/usr/lib/i686-apple-darwin10/4.2.1
/Developer/SDKs/MacOSX10.5.sdk/usr/lib
/usr/lib/gcc/i686-apple-darwin10/4.2.1
/usr/lib/gcc/i686-apple-darwin10/4.2.1
/Developer/SDKs/MacOSX10.5.sdk/usr/lib/i686-apple-darwin10/4.2.1
/Developer/SDKs/MacOSX10.5.sdk/usr/lib
/usr/lib
/usr/local/lib
Framework search paths:
/Library/Frameworks/
/System/Library/Frameworks/
/Developer/SDKs/MacOSX10.5.sdk/usr/lib/dylib1.10.5.o
/Developer/SDKs/MacOSX10.5.sdk/usr/lib/libpthread.dylib
../../lib/x86_mac_gcc_32/libiconv.2.2.0.dylib
obj/x86_mac_gcc_32/python.o32
obj/x86_mac_gcc_32/idaapi.o32
../../bin/x86_mac_gcc//libida.dylib
/usr/lib/libpython2.6.dylib
/Developer/SDKs/MacOSX10.5.sdk/usr/lib/libdl.dylib
/Developer/SDKs/MacOSX10.5.sdk/usr/lib/i686-apple-darwin10/4.2.1/libstdc++.dylib
/Developer/SDKs/MacOSX10.5.sdk/usr/lib/libgcc_s.10.5.dylib
/usr/lib/gcc/i686-apple-darwin10/4.2.1/libgcc.a
/Developer/SDKs/MacOSX10.5.sdk/usr/lib/libSystem.dylib
/usr/lib/system/libmathCommon.A.dylib
Answer: Solved it. The culprit was `-lpthread`: libpthread is a symlink to
libSystem:
$ ls -la /Developer/SDKs/MacOSX10.5.sdk/usr/lib/libpthread.dylib
lrwxr-xr-x 1 root wheel 15 Nov 9 2011 /Developer/SDKs/MacOSX10.5.sdk/usr/lib/libpthread.dylib -> libSystem.dylib
After moving it after -lida, everything works as expected.
|
How to Write python code in a wordpress blog?
Question: I want to write some Python code in a WordPress blog, but whitespace is not
preserved. Can someone please tell me how to write my Python code in the blog
with the indentation and styling preserved, as indentation is very important
for Python code?
Answer: Use the HTML **`<code>`** tag, wrapped in a **`<pre>`** block so that whitespace is preserved.
You can also try this very good plugin for code highlighting in WP.
[WP-SynHighlight](http://wordpress.org/extend/plugins/wp-synhighlight/)
And for a blog hosted on wordpress.com, see the answer on
[this](http://stackoverflow.com/questions/1273647/how-do-i-add-syntax-highlighting-to-a-wordpress-blog-hosted-on-wordpress-com)
Stack Overflow post.
|
How to update imshow in matplotlib without overwriting new color bar or subplot title?
Question: **Background:** I am working on a data processing application and am trying to
visualize 2D arrays with matplotlib embedded into a tkinter gui. I am trying
to update the matplotlib figure by collecting user input (i.e. what frame they
want displayed, various other options) so I do not want to generate the
animation in advance.
**System:** Windows xp, Python 2.7, matplotlib 1.1.1rc
**Question:** How do I update only the image produced by imshow and not the
colorbar?
**GUI**  _Notice how color
bars write over themselves._
**Code:** _note: the code below is untested, however it is the basic idea of
how my current code is setup_
import matplotlib
import matplotlib.pyplot as plt
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg, NavigationToolbar2TkAgg
import Tkinter as tk
import ttk  # needed for the ttk.Button below
import numpy as np
class MainWindow:
def __init__(self,application):
self.mainframe=tk.Frame(application)
#update button
ttk.Button(application,text='Update',command=self.update).pack()
#matplotlib setup
self.ren2DFig=plt.figure(figsize=(4,4),dpi=100)
self.renCanvas=FigureCanvasTkAgg(self.ren2DFig,master=self.renWin2D)
self.renCanvas.show()
self.renCanvas.get_tk_widget().pack()
self.subPlot=self.ren2DFig.add_subplot(111)
self.subPlot.get_yaxis().set_visible(False)
self.subPlot.get_xaxis().set_visible(False)
self.subPlot.get_axes().set_frame_on(False)
frame=np.zeros((20,40),)
frame[9:11 ,9:11]=1
frame[5 ,5]=.5
self.im=self.subPlot.imshow(self.rotate(frame), origin = 'lower')
self.cbar=self.ren2DFig.colorbar(self.im)
self.cbar.set_label('Solid Fraction')
self.renCanvas.draw()
self.ren2DFig.canvas
def update(self):
self.im.set_array(np.zeros((20,40),))
self.renCanvas.draw() # I think this is the problem?
application=tk.Tk()
application.focus_force()
window=MainWindow(application)
application.protocol("WM_DELETE_WINDOW",window.close)
application.mainloop()
Any help would be much appreciated! Thanks!
**Update**[July 13, 2012]:
If I try clearing the subplot, then adding the image and color bar back,
finally re-drawing the canvas I get this:

What the heck am I doing wrong?????? This is really starting to irritate me.
Answer: I figure it out. The problem is with
self.subPlot.get_axes().set_frame_on(False)
I guess it is a bug or something but if I change it to True, i.e.
self.subPlot.get_axes().set_frame_on(True)
It works just fine, everything updates like normal.
This little thing stumped me for two days!!! Go figure. lol.
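For reference, here is a minimal standalone sketch of the working update pattern
(plain matplotlib, no Tk embedding; the random data is just an illustration):
    import numpy as np
    import matplotlib.pyplot as plt

    fig, ax = plt.subplots()
    ax.set_frame_on(True)                 # keeping the frame on avoids the redraw artifacts
    im = ax.imshow(np.zeros((20, 40)), origin='lower')
    cbar = fig.colorbar(im)               # create the colorbar once, never again

    im.set_array(np.random.rand(20, 40))  # update only the image data
    im.set_clim(0, 1)                     # keep the colorbar scale in sync
    fig.canvas.draw()
    plt.show()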
|
Django: Query returns different results in management command
Question: I have a django application with a model named `TestCase`. There are 9
instances of the model currently stored in the DB, which I can see by running
`TestCase.objects.all()` in the shell, and they're also being displayed
correctly in my views.
However, in a management command I'm running, the same query
(`TestCase.objects.all()`) consistently returns an empty list instead. I have
imported the model correctly, and the management command is even able to add
entries to the database without any problem, so reading back from the database
shouldn't be a problem.
Any ideas on what could be causing this?
Some context: The django app is a frontend to display and manage testcases.
The management command reads in the results from the test into the DB. I need
to access the DB in the management command to incorporate test runs into the
app - if a testcase provides a test run (an integer) it is used, but if it
does not, then the command sets the test run to one plus the max test run
present in the app already - this is where I need to access the DB (using
something like `TestCase.objects.all().aggregate(Max('test_run'))`).
I'm using Django 1.4.
This is the management command:
from django.core.management.base import NoArgsCommand
from django.core.management.base import AppCommand, CommandError
from mainapp.models import TestCase
from django.utils import timezone
from django.db.utils import IntegrityError
from django.conf import settings
from django.core import management
from django.db.models import Max
import cPickle
import errno
class Command(NoArgsCommand):
def handle_noargs(self, **options):
management.call_command('reset', 'mainapp', interactive=False)
print "ALL: %s" % TestCase.objects.all()
self.traverse()
def traverse(self):
...
the output is `ALL: []`. I've omitted the source of the `traverse()` method,
but the problem is visible before that, so it shouldn't impact anything.
Here's the output from the shell showing the instances in the DB:
[as@as-mac ui]$ pm shell
Python 2.6.1 (r261:67515, Jun 24 2010, 21:47:49)
[GCC 4.2.1 (Apple Inc. build 5646)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)
>>> from mainapp.models import TestCase
>>> TestCase.objects.all()
[<TestCase: internet explorer 8 on WIN7 at https://www.google.com/ >, <TestCase: internet explorer 8 on WIN7 at https://www.google.com/ >, <TestCase: internet explorer 8 on WIN7 at https://www.google.com/search?q=mooo >, <TestCase: internet explorer 8 on WIN7 at https://www.google.com/search?q=mooo >, ...]
>>> TestCase.objects.count()
454
Please feel free to ask for more details!
Answer: The following line of your Command is resetting the database and clearing
data:
management.call_command('reset', 'mainapp', interactive=False)
If you remove this line, the record count in the shell and command will be
equivalent.
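With the reset gone, the test-run logic described in the question works as
expected. A sketch (`next_test_run` is a hypothetical helper name):
    from django.db.models import Max

    def next_test_run():
        # one plus the current maximum, or 1 when the table is empty
        current_max = TestCase.objects.aggregate(Max('test_run'))['test_run__max']
        return (current_max or 0) + 1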
|
Remove certain return characters from tab-separated values file
Question: I've got a problem at work that requires me to insheet some MASSIVE tab-
separated values files (think 8-15 GB .txt files) into my PostgreSQL DB, but
I've run into a problem with the way the data was formatted in the first
place. Basically, the way we are given the data (and unfortunately we cannot
get the data in a better format), there are some backslashes that appear and
cause a return/new line.
So, there are lines (rows of data, tab-delim) that get chopped up into
multiple lines, where the last character of line n is a \ , and the first
character of line n+1 is a tab. Usually line n will be broken up into 1-3
additional lines (e.g. line n ends in a "\", lines n+1 and n+2 start with a
tab and end with a "\", and line n+3 starts with a tab).
I need to write a script that can work with these huge files (this will run on
a linux server with 192 GB of RAM) to look for the lines that begin with a
tab, and then remove the return (and "\" wherever it exists) and save the text
file.
To recap, the customer's logging program splits the original line N into lines
n, n+1, and sometimes n+2 and n+3 (depending on how many \ characters appear
in line N), and I need to write a python script to recreate the original line
N.
Answer:
    #!/usr/bin/python
    import re, sys

    lastLine = None
    incomplete = re.compile(r"\\+$")  # one or more trailing backslashes
    indented = re.compile(r"^\t")     # continuation lines start with a tab

    for line in open(sys.argv[1]):
        line = line.rstrip('\r\n')    # keep trailing tabs (empty fields), drop the newline
        line = incomplete.sub("", line)
        if indented.match(line):
            lastLine += indented.sub("", line)
        else:
            if lastLine is not None:
                print lastLine
            lastLine = line
    if lastLine is not None:
        print lastLine
Basically, I'm ignoring the \ at the end, since the tab on the next line tells
you it's a continuation anyway.
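Assuming the script is saved as, say, fixlines.py (a hypothetical name), you
would run it as `python fixlines.py input.txt > fixed.txt` and then load the
cleaned file into PostgreSQL.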
|
directory path types with argparse
Question: My python script needs to read files from a directory passed on the command
line. I have defined a readable_dir type as below to be used with argparse for
validating that the directory passed on the command line is existent and
readable. Additionally, a default value (/tmp/non_existent_dir in the example
below) has also been specified for the directory argument. The problem here is
that argparse invokes readable_dir() on the default value even in a situation
where a directory argument is explicitly passed in on the command line. This
causes the script to crap out as the default path /tmp/non_existent_dir does
not exist in a context where a directory is explicitly passed in on the
command line. I could get around this by not specifying a default value and
making this argument mandatory, or by deferring the validation until later in
the script, but is there a more elegant solution that anyone is aware of?
#!/usr/bin/python
import argparse
import os
def readable_dir(prospective_dir):
if not os.path.isdir(prospective_dir):
raise Exception("readable_dir:{0} is not a valid path".format(prospective_dir))
if os.access(prospective_dir, os.R_OK):
return prospective_dir
else:
raise Exception("readable_dir:{0} is not a readable dir".format(prospective_dir))
parser = argparse.ArgumentParser(description='test', fromfile_prefix_chars="@")
parser.add_argument('-l', '--launch_directory', type=readable_dir, default='/tmp/non_existent_dir')
args = parser.parse_args()
Answer: You can create a custom action instead of a type:
import argparse
import os
import tempfile
import shutil
import atexit
class readable_dir(argparse.Action):
def __call__(self,parser, namespace, values, option_string=None):
prospective_dir=values
if not os.path.isdir(prospective_dir):
raise argparse.ArgumentTypeError("readable_dir:{0} is not a valid path".format(prospective_dir))
if os.access(prospective_dir, os.R_OK):
setattr(namespace,self.dest,prospective_dir)
else:
raise argparse.ArgumentTypeError("readable_dir:{0} is not a readable dir".format(prospective_dir))
ldir = tempfile.mkdtemp()
atexit.register(lambda dir=ldir: shutil.rmtree(ldir))
parser = argparse.ArgumentParser(description='test', fromfile_prefix_chars="@")
parser.add_argument('-l', '--launch_directory', action=readable_dir, default=ldir)
args = parser.parse_args()
print (args)
But the original default seemed a little fishy to me -- if no directory is given, it
passes a non-existent directory, which defeats the purpose of checking whether
the directory is accessible in the first place.
**EDIT**
As far as I'm aware, there is no way to validate the default argument. I
suppose the argparse developers just assumed that if you're providing a
default, then it should be valid. The quickest and easiest thing to do here is
to simply validate the arguments immediately after you parse them. It looks
like you're just trying to get a temporary directory to do some work. If
that's the case, you can use the `tempfile` module to get a new directory to
work in. I updated my answer above to reflect this. I create a temporary
directory, use that as the default argument (`tempfile` already guarantees the
directory it creates will be writeable) and then I register it to be deleted
when your program exits.
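For completeness, the deferred-validation route mentioned above is only a
couple of lines. A sketch reusing the `readable_dir` function from the question:
    parser = argparse.ArgumentParser(description='test', fromfile_prefix_chars="@")
    parser.add_argument('-l', '--launch_directory', default='/tmp/non_existent_dir')
    args = parser.parse_args()
    args.launch_directory = readable_dir(args.launch_directory)  # validate once, after parsing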
|
Web servers vs application servers, Open source database Security vs Enterprise Database security
Question: I am working on creating a spec for a startup to create a financial broker
check website. It involves storing information about financial advisers and
payment details of the users (so obviously needs a lot of security). What kind
of databases are best suited for the application. Is MySQL or its open source
variations enough or is it better to go with Oracle Enterprise etc. Also any
info about the usefulness of application servers over traditional web servers
(cloud based or normal) in this scenario and the preferred scripting language
(PHP, Ruby, Python) for secure web applications.
Answer: Your choice of language, database, etc. has a relatively small impact on the
security of your application. The developer's understanding of how to write
secure code and the developer's understanding of the features provided by
their tools is far more important. It is entirely possible to write a secure
application on an open source LAMP stack. It is entirely possible to write a
secure application on a completely closed source stack. It is also very easy
to write insecure applications on any stack.
An enterprise database like Oracle will (depending on the edition, the options
that are licensed, and the add-ons that are purchased) provide a host of
security functions that may be useful. You can transparently encrypt the data
at rest, you can encrypt the data when it flows over the network to the app
server, you can prevent the DBA from viewing sensitive data, you can audit the
actions of the DBA and other users, etc. But these sorts of things really only
come into play when you've written a reasonably secure application to begin
with. It does you little good to encrypt all the data if your application is
vulnerable to SQL injection attacks and can be easily hacked to present all
the decrypted data to the attacker, for example.
|
Print two presorted lists into one output file in python
Question: I have two lists that are already sorted how they need to be, and i need them
put into one file, like this example:
list1 = [a, b, c, d, e]
list2 = [1, 2, 3, 4, 5]
output file should look like:
a1
b2
c3
d4
e5
I'm fairly new to Python, so I'm not really sure how to do file
writing. I read that using `with open(file, 'w') as f:` is a better/easier way to
start the writing block, but I am unsure how to merge the lists and print them
as such. I could probably merge them into a third list and print that one to
the file using `print>>f, item`, but I wanted to see if there was a simpler
way.
Thank you!
Late edit: looking at my lists, they won't always be the same length, but all
the data needs to be printed regardless. So if list2 went to 7, then the output
would need to be:
a1
b2
c3
d4
e5
6
7
or vice versa, where list1 may be longer then list2.
Answer: Use the [zip()](http://docs.python.org/library/functions.html#zip) function to
combine (i.e. zip) your two lists. E.g.,
list1 = ['a', 'b', 'c', 'd', 'e']
list2 = [1, 2, 3, 4, 5]
zip(list1, list2)
gives:
[('a', 1), ('b', 2), ('c', 3), ('d', 4), ('e', 5)]
you can then format the output to suit your needs.
for i,j in zip(list1, list2):
print '%s%d' %(i,j)
yielding:
a1
b2
c3
d4
e5
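To write the pairs to a file rather than to stdout, as asked, the same loop fits
inside a `with` block (a sketch; 'output.txt' is an assumed filename):
    with open('output.txt', 'w') as f:
        for i, j in zip(list1, list2):
            f.write('%s%d\n' % (i, j))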
**Update** :
If your lists are unequal length, this approach using
[itertools.izip_longest()](http://docs.python.org/library/itertools.html#itertools.izip_longest)
might work for you:
    import itertools
    import sys

    list1 = ['a', 'b', 'c', 'd', 'e']
    list2 = [1, 2, 3]

    for i, j in itertools.izip_longest(list1, list2):
        if i is not None: sys.stdout.write('%s' % i)
        if j is not None: sys.stdout.write('%d' % j)
        sys.stdout.write('\n')
gives:
a1
b2
c3
d
e
Note, if you were using Python 3, there is a nice way to use the `print()`
function. I am using `write()` here to avoid extra blank spaces between items.
|
Importing Maya module into Nuke (Python)
Question: I can import the maya module with ease through the Python 2.7 IDE, but when
working with Nuke's script editor, I cannot import maya and get a "No module
named maya" error.
Any help?
Answer: If you want to import Maya modules, you can add the path "E:\Program
Files\Autodesk\Maya2013\Python\Lib\site-packages" to your sys.path in Nuke.
Here is how to do it.
First, check which paths are already there by running the code below:
    import sys
    [each for each in sys.path]
Now you can either insert the path of your Maya modules at the beginning of that
list or append it at the end with this line of code (**the path below is from my
machine and will be different on yours**):
    sys.path.append("E:/Program Files/Autodesk/Maya2013/Python/Lib/site-packages")
When you add the path, use forward slashes (**/**) or escaped backslashes (\\\\),
not single backslashes.
Then test again that the path was added, using the code I gave you at the
beginning...
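Putting it together (a sketch; the Maya path is from my machine and is an
assumption for yours):
    import sys

    maya_site = "E:/Program Files/Autodesk/Maya2013/Python/Lib/site-packages"
    if maya_site not in sys.path:
        sys.path.append(maya_site)

    import maya  # should now resolve instead of "No module named maya"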
|
How to Read a Simple Json Result (from Google calculator) in Python?
Question: I'm trying to parse a json result from the next google Query:
[http://www.google.com/ig/calculator?hl=en&q=1USD=?MXN](http://www.google.com/ig/calculator?hl=en&q=1USD=?MXN)
The result is this:
{lhs: "1 U.S. dollar",rhs: "13.3317335 Mexican pesos",error: "",icc: true}
So i'm just trying to decode with this:
import json, urllib2
j=urllib2.urlopen("http://www.google.com/ig/calculator?hl=en&q=1USD=?MXN")
print json.load(j)
But i'm having this error (Traceback):
Traceback (most recent call last):
File "/home/rafael/gitSources/PythonConcept/Monpy/negApi.py", line 4, in <module>
print json.load(j)
File "/usr/lib/python2.7/json/__init__.py", line 278, in load
**kw)
File "/usr/lib/python2.7/json/__init__.py", line 326, in loads
return _default_decoder.decode(s)
File "/usr/lib/python2.7/json/decoder.py", line 366, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib/python2.7/json/decoder.py", line 382, in raw_decode
obj, end = self.scan_once(s, idx)
ValueError: Expecting property name: line 1 column 1 (char 1)
[Finished in 0.6s with exit code 1]
I don't know what I'm doing wrong; I've read many tutorials :/
Thanks :)!
Answer: This is not JSON but you can still parse it yourself - like this:
import ast
resp = '{lhs: "1 U.S. dollar",rhs: "13.3317335 Mexican pesos",error: "",icc: true}'
d = {}
for pair in resp[1:-1].split(','):
(k,v) = pair.split(':')
v = v.strip()
if v == "true":
v = "True"
try:
v = ast.literal_eval(v)
except:
print "Couldn't eval " + v
d[k] = v
print d
You could also fix up the formatting to be json and then use the json parser
like this:
import json
resp = '{lhs: "1 U.S. dollar",rhs: "13.3317335 Mexican pesos",error: "",icc: true}'
s = "{"
for pair in resp[1:-1].split(','):
(k,v) = pair.split(':')
s += '"%s" : %s,' % (k,v)
s = s[:-1] + "}"
print json.loads(s)
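A third option is to quote the bare keys with a regex and hand the result to the
real JSON parser (a sketch; it assumes keys are plain identifiers and values
contain no embedded `, key:` sequences):
    import json, re

    resp = '{lhs: "1 U.S. dollar",rhs: "13.3317335 Mexican pesos",error: "",icc: true}'
    fixed = re.sub(r'([{,]\s*)(\w+):', r'\1"\2":', resp)  # lhs: -> "lhs":
    print json.loads(fixed)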
|
Compare values of two arrays in python
Question: How can I check whether each item in `b` is in `a`, such that a matched item
in `a` is not reused in the next match?
Currently this code will match both 2s in `b`.
a = [3,2,5,4]
b = [2,4,2]
for i in b:
if i in a:
print "%d is in a" % i
This is the required output:
2 => 2 is in a
4 => 4 is in a
2 =>
**EDIT: Example 2:**
a = [3,2,2,4]
b = [2,4,2]
output should be
2 => 2 is in a
4 => 4 is in a
2 => 2 is in a
Answer: (long post but read it entirely, solution is at the end).
Remove the found value or register it in another dict.
Better, though, is to count the number of occurrences inside each array and test
how many are common.
For the first example, you'd have:
  * for `a`: 3 appears 1 time, 2 appears 1 time, 5 appears 1 time, 4 appears 1 time
  * for `b`: 2 appears 2 times, 4 appears 1 time
Keep these values in dictionaries:
    a_app = {3:1, 2:1, 5:1, 4:1}
    b_app = {2:2, 4:1}
And now, it is simple:
for i in b:
if a_app.has_key(i) and a_app[i] > 0:
a_app[i] -= 1
The `b_app` dictionary would be used in the other case.
Here is a test script I wrote (testing all testcases issued here):
def f(a, b):
a_app = {}
for i in a:
if not a_app.has_key(i):
a_app[i] = 0
a_app[i] += 1
print a_app
for i in b:
print i, '=>',
if a_app.has_key(i) and a_app[i] > 0:
a_app[i] -= 1
print i, ' is in a',
print '.'
f([1,1,2],[1,1])
f([3,2,5,4],[2,4,2])
f([3,2,2,4],[2,4,2])
f([3,2,5,4],[2,3,2])
And here is the output:
$ python 1.py
{1: 2, 2: 1}
1 => 1 is in a .
1 => 1 is in a .
{2: 1, 3: 1, 4: 1, 5: 1}
2 => 2 is in a .
4 => 4 is in a .
2 => .
{2: 2, 3: 1, 4: 1}
2 => 2 is in a .
4 => 4 is in a .
2 => 2 is in a .
{2: 1, 3: 1, 4: 1, 5: 1}
2 => 2 is in a .
3 => 3 is in a .
2 => .
Everything is perfect and no order is lost :)
Edit: Updated with @Avaris's suggestions, this script looks like:
import collections
def f(a, b):
a_app = collections.Counter(a)
for i in b:
print i, '=>',
if i in a_app and a_app[i] > 0:
a_app[i] -= 1
print i, ' is in a',
print '.'
print ''
f([1,1,2],[1,1])
f([3,2,5,4],[2,4,2])
f([3,2,2,4],[2,4,2])
f([3,2,5,4],[2,3,2])
|
python stirling's approximation program
Question: I'm trying to write a simple program that prints the first [Stirling's
approximation](http://en.wikipedia.org/wiki/Stirling%27s_approximation) for
the integers 1:10 alongside the actual value of 1:10 factorial. This is my
code:
import math
nf =1
def stirling(n):
return math.sqrt(2*math.pi*n)*(n/math.e)**n
print "n","\t", "Stirling","\t\tFactorial"
for x in range (1,11):
for y in range(1,x):
nf *=y
print x,"\t", stirling(x), "\t\t", nf
I'm getting the wrong output for the factorial, where did I mess up the code?
Answer: (1) You need to reset `nf=1` each time you compute the factorial (or,
alternatively, only multiply by one new number each time, which would be more
efficient);
(2) `range(1,x)` doesn't include x, so your factorials won't include the right
upper bound. The following should work:
nf = 1
for x in range (1,11):
nf *= x
print x,"\t", stirling(x), "\t\t", nf
which produces
n Stirling Factorial
1 0.922137008896 1
2 1.91900435149 2
3 5.83620959135 6
4 23.5061751329 24
5 118.019167958 120
6 710.078184642 720
7 4980.39583161 5040
8 39902.3954527 40320
9 359536.872842 362880
10 3598695.61874 3628800
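As a sanity check, the standard library already provides exact factorials via
`math.factorial`, so you can compare against it without keeping a running
product at all:
    import math

    for x in range(1, 11):
        print x, "\t", stirling(x), "\t\t", math.factorial(x)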
|
How can I save a LibSVM python object instance?
Question: I want to use this classifier on another computer without having to train it
again. I have saved classifiers from scikit with cPickle before, but doing the
same with LIBSVM gives me a "ValueError: ctypes objects containing pointers
cannot be pickled".
I'm using LibSVM 3.1 and Python 2.7.3.
Thanks
from libsvm.svm import *
from libsvm.svmutil import *
import cPickle
x = [[1, 0, 1], [-1, 0, -1]]
y = [1, -1]
prob = svm_problem(y, x)
param = svm_parameter()
param.kernel_type = LINEAR
param.C = 10
m = svm_train(prob, param)
labels_pred, acc, probs = svm_predict([-1, 1], [[1, 1, 1], [0, 0, 1]], m)
print labels_pred, acc, probs
import ipdb; ipdb.set_trace()
filename='libsvm-classif.pkl'
fid = open(filename, 'wb')
cPickle.dump(m, fid)
fid.close()
fid = open(filename, 'rb')
m = cPickle.load(fid)
labels_pred, acc, probs = svm_predict([-1, 1], [[1, 1, 1], [0, 0, 1]], m)
print labels_pred, acc, probs
Answer: Just use libsvm's load and save functions
svm_save_model('libsvm.model', m)
m = svm_load_model('libsvm.model')
This is from the README file included in the python directory of the libsvm
package. It seems to have a much better description of features than the
website.
|
POST Message for uploading large file To Google Drive without Google Driver UI
Question: My understanding is that to upload a large file to Google Drive from my own
app using version 2 of the API, I should be sending a message like below.
Unfortunately, I do not know how to achieve this format for the multipart
message using Python. Does anyone have example Python code that could get me
going in the right direction?
Thanks, Chris
* * *
POST /upload/drive/v2/files?uploadType=multipart
Authorization: Bearer <Access token>
Content-Length: <length>
Content-Type: multipart/related; boundary="<a base64 encoded guid>"
--<a base64 encoded guid>
Content-Type: application/json
{"title": "test.jpg", "mimeType":"image/jpeg", "parents":[]}
--<a base64 encoded guid>
Content-Type: image/jpeg
Content-Transfer-Encoding: base64
<base64 encoded binary data>
--<a base64 encoded guid>--
Answer: The Google Drive API's reference guide contains code snippets in many languages,
including Python, for all of the API endpoints.
For your use-case, the
[drive.files.insert](https://developers.google.com/drive/v2/reference/files/insert)
endpoint has the answer:
from apiclient import errors
from apiclient.http import MediaFileUpload
# ...
def insert_file(service, title, description, parent_id, mime_type, filename):
"""Insert new file.
Args:
service: Drive API service instance.
title: Title of the file to insert, including the extension.
description: Description of the file to insert.
parent_id: Parent folder's ID.
mime_type: MIME type of the file to insert.
filename: Filename of the file to insert.
Returns:
Inserted file metadata if successful, None otherwise.
"""
media_body = MediaFileUpload(filename, mimetype=mime_type, resumable=True)
body = {
'title': title,
'description': description,
'mimeType': mime_type
}
# Set the parent folder.
if parent_id:
body['parents'] = [{'id': parent_id}]
try:
file = service.files().insert(
body=body,
media_body=media_body).execute()
return file
except errors.HttpError, error:
            print 'An error occurred: %s' % error
return None
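Calling it would look something like this (a sketch; `service` is the authorized
Drive API client from your OAuth flow, and the filename is an assumption):
    uploaded = insert_file(service, 'test.jpg', 'uploaded from Python',
                           None, 'image/jpeg', 'test.jpg')
    if uploaded:
        print 'Uploaded file with id: %s' % uploaded['id']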
|
Regex passes in Rubular but not in Python
Question:
    import re
    import urllib.request

    f = urllib.request.urlopen("ftp://ftp.sec.gov/edgar/data/1408597/0000930413-12-003922.txt")
    pattern_item4 = re.compile("(Item\\n*\s*4.*)Item\\n*\s*5")
    print(re.search(pattern_item4, f.read().decode()))
    # Returns None
This regex returns what I want in Rubular, but obviously it doesn't do what is
expected in Python. Would anyone help me a bit with this? The intention of the
regex is basically to extract the stuff between Item 4 and Item 5.
Thank you

Answer: Try using raw strings
re.compile (r"(Item\\n*\s*4.*)Item\\n*\s*5")
I would guess it has to do with your escaping of `\n`. But it's impossible to
tell without knowing exactly what it is you're expecting that to match.
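One more likely culprit: in Python, `.` does not match newlines by default, and
the text between Item 4 and Item 5 in an SEC filing almost certainly spans
multiple lines. Passing `re.DOTALL` would fix that (an additional guess on top
of the raw-string fix):
    pattern_item4 = re.compile(r"(Item\n*\s*4.*)Item\n*\s*5", re.DOTALL)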
|