Can I add arguments to python code when I submit spark job?
Question: I'm trying to use `spark-submit` to execute my python code in spark cluster.
Generally we run `spark-submit` with Python code like below.
# Run a Python application on a cluster
./bin/spark-submit \
--master spark://207.184.161.138:7077 \
my_python_code.py \
1000
But I want to run `my_python_code.py` by passing several arguments. Is there a
smart way to pass arguments?
Answer: **Yes** : Put this in a file called args.py
import sys
print sys.argv
If you run
spark-submit args.py a b c d e
You will see:
['/spark/args.py', 'a', 'b', 'c', 'd', 'e']
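Since everything after the script path lands in `sys.argv`, the standard library's `argparse` works under `spark-submit` exactly as it does in plain Python. A minimal sketch (the `input_path` and `--iterations` names are made-up examples, not anything from the question):

```python
import argparse

# Arguments placed after the script path on the spark-submit command
# line arrive in sys.argv, so argparse can consume them as usual.
parser = argparse.ArgumentParser(description="Example Spark job arguments")
parser.add_argument("input_path", help="where to read data from")
parser.add_argument("--iterations", type=int, default=10,
                    help="hypothetical tuning knob for the job")

# Simulate `spark-submit my_job.py /data/in.txt --iterations 20`;
# in the real script you would call parser.parse_args() with no argument.
args = parser.parse_args(["/data/in.txt", "--iterations", "20"])
print(args.input_path, args.iterations)
```

In the actual job you would run `spark-submit my_python_code.py /data/in.txt --iterations 20` and call `parser.parse_args()` without arguments.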
* * *
Python regular expression to change date formatting
Question: I have an array of strings representing dates like '2015-6-03' and I want to
convert these to the format '2015-06-03'.
Instead of doing the replacement with an ugly loop, I'd like to use a regular
expression. Something along the lines of:
str.replace('(-){1}(\d){1}(-){1}', '-0{my digit here}-')
Is something like this possible?
Answer: You don't have to retrieve the digit from the match. You can replace the
hyphen before a single-digit month with `-0`.
Like this:
re.sub(r'-(?=\d-)', '-0', text)
Note that `(?=\d-)` is a zero-width lookahead assertion: it checks that a digit
followed by a hyphen comes next, but consumes nothing. That's why only the
hyphen itself gets replaced.
Test:
import re
text = '2015-09-03 2015-6-03 2015-1-03 2015-10-03'
re.sub(r'-(?=\d-)', '-0', text)
Result:
'2015-09-03 2015-06-03 2015-01-03 2015-10-03'
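If single-digit days can appear too (e.g. `'2015-6-3'`), one possible generalization pads any lone digit that follows a hyphen, whether or not another hyphen comes after it. This is a sketch that assumes the text contains only date tokens like the question's; a stray token such as `a-5` would also get padded:

```python
import re

def pad_single_digits(text):
    # Zero-pad any lone digit after a hyphen: the lookahead (?!\d)
    # ensures we skip digits that are already part of a two-digit number.
    return re.sub(r'-(\d)(?!\d)', r'-0\1', text)

print(pad_single_digits('2015-6-03 2015-10-3'))  # pads month and day
```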
* * *
Attribute error python tkinter
Question: I am trying to make a calculator using a class (I'm quite new to this),
and this code keeps telling me `AttributeError: 'Calculator' object has no
attribute 'clear'`; when I run my code, nothing inside the class works. What
can I do to fix my code?
class Calculator(Frame):
    def __init__(self):
        Frame.__init__(self)
        display = Frame(calculator, bd=0, width=1000, height=1000, relief=SUNKEN)
        buttons = Frame(calculator, bd=0, width=7, height=1, relief=GROOVE)
        display.grid(column=0, row=0, padx=0, pady=0)
        buttons.grid(column=0, row=1, padx=1)
        numbers = StringVar()
        results = Entry(display, textvariable=numbers, width=31, fg="DarkOrchid4", bg="lavender blush", font="Verdana")
        results.pack()
        results.grid(column=0, row=0)
        def showup(x):
            return lambda: results.insert(END, x)
        def equals(self):
            try:
                result = eval(results.get())
            except:
                result = "Invalid input"
            self.all_clear
            results.insert(0, result)
        def zero(self):
            results.insert(END, "0")
        def bracket_one(self):
            results.insert(END, "(")
        def bracket_two(self):
            results.insert(END, ")")
        def all_clear(self):
            results.delete(0, END)
        def clear(self):
            results.delete(-1)
        def multiplication(self):
            results.insert(END, "x")
        def division(self):
            results.insert(END, "/")
        def addition(self):
            results.insert(END, "+")
        def subtraction(self):
            results.insert(END, "-")
        def decimal_point(self):
            results.insert(END, ".")
Answer: Your indentation in the `class Calculator(Frame):` is wrong: you have
indented all the methods inside `__init__()` instead of at class level.
Decrease the indentation of methods like `equals(self)`, `zero(self)`, etc.,
and move them outside `__init__()`.
Also, you should make `results` an instance variable, and access it as
`self.results`.
Example -
from tkinter import *
from tkinter import messagebox
calculator = Tk()
calculator.title("Calculator")
calculator.geometry("317x145")
menubar = Menu(calculator)
class Calculator(Frame):
    def __init__(self):
        Frame.__init__(self)
        display = Frame(calculator, bd=0, width=1000, height=1000, relief=SUNKEN)
        buttons = Frame(calculator, bd=0, width=7, height=1, relief=GROOVE)
        display.grid(column=0, row=0, padx=0, pady=0)
        buttons.grid(column=0, row=1, padx=1)
        numbers = StringVar()
        self.results = Entry(display, textvariable=numbers, width=31, fg="DarkOrchid4", bg="lavender blush", font="Verdana")
        self.results.pack()
        self.results.grid(column=0, row=0)
        def showup(x):
            return lambda: self.results.insert(END, x)
        numbers = ["7", "4", "1", "8", "5", "2", "9", "6", "3"]
        for i in range(9):
            n = numbers[i]
            Button(buttons, bg="snow", text=n, width=7, height=1, command=showup(n), relief=RAISED).grid(row=i%3, column=i//3)
        Clear = Button(buttons, bg="snow", text="C", width=7, height=1, command=self.clear, relief=RAISED)
        Clear.grid(padx=2, pady=2, column=3, row=0)
        All_clear = Button(buttons, bg="snow", text="AC", width=7, height=1, command=self.all_clear, relief=RAISED)
        All_clear.grid(padx=2, pady=2, column=4, row=0)
        Bracket_one = Button(buttons, bg="snow", text="(", width=7, height=1, command=self.bracket_one, relief=RAISED)
        Bracket_one.grid(padx=2, pady=2, column=2, row=3)
        Bracket_two = Button(buttons, bg="snow", text=")", width=7, height=1, command=self.bracket_two, relief=RAISED)
        Bracket_two.grid(padx=2, pady=2, column=3, row=3)
        Zero = Button(buttons, bg="snow", text="0", width=7, height=1, command=self.zero, relief=RAISED)
        Zero.grid(padx=2, pady=2, column=0, row=3)
        Decimal_point = Button(buttons, bg="snow", text=".", width=7, height=1, command=self.decimal_point, relief=RAISED)
        Decimal_point.grid(padx=2, pady=2, column=1, row=3)
        Multiplication = Button(buttons, bg="red", text="x", width=7, height=1, command=self.multiplication, relief=RAISED)
        Multiplication.grid(padx=2, pady=2, column=3, row=1)
        Division = Button(buttons, bg="powder blue", text="/", width=7, height=1, command=self.division, relief=RAISED)
        Division.grid(padx=2, pady=2, column=4, row=1)
        Addition = Button(buttons, bg="yellow", text="+", width=7, height=1, command=self.addition, relief=RAISED)
        Addition.grid(padx=2, pady=2, column=3, row=2)
        Subtraction = Button(buttons, bg="green", text="-", width=7, height=1, command=self.subtraction, relief=RAISED)
        Subtraction.grid(padx=2, pady=2, column=4, row=2)
        Equals = Button(buttons, bg="orange", text="=", width=7, height=1, command=self.equals, relief=RAISED)
        Equals.grid(padx=2, pady=2, column=4, row=3)
    def equals(self):
        try:
            result = eval(self.results.get())
        except:
            result = "Invalid input"
        self.all_clear()
        self.results.insert(0, result)
    def zero(self):
        self.results.insert(END, "0")
    def bracket_one(self):
        self.results.insert(END, "(")
    def bracket_two(self):
        self.results.insert(END, ")")
    def all_clear(self):
        self.results.delete(0, END)
    def clear(self):
        self.results.delete(-1)
    def multiplication(self):
        self.results.insert(END, "x")
    def division(self):
        self.results.insert(END, "/")
    def addition(self):
        self.results.insert(END, "+")
    def subtraction(self):
        self.results.insert(END, "-")
    def decimal_point(self):
        self.results.insert(END, ".")
if __name__ == '__main__':
    Calculator().mainloop()
    calculator.config(menu=menubar)
    calculator.mainloop()
* * *
Methods of creating a structured array
Question: I have the following information and I can produce a numpy array of the
desired structure. Note that the values x and y have to be determined
separately since their ranges may differ so I cannot use:
xy = np.random.random_integers(0,10,size=(N,2))
The extra `list(...)` conversion is necessary for this to work in Python 3.4;
it is not needed in Python 2.7, but it is harmless there.
The following works:
>>> # attempts to formulate [id,(x,y)] with specified dtype
>>> N = 10
>>> x = np.random.random_integers(0,10,size=N)
>>> y = np.random.random_integers(0,10,size=N)
>>> id = np.arange(N)
>>> dt = np.dtype([('ID','<i4'),('Shape',('<f8',(2,)))])
>>> arr = np.array(list(zip(id,np.hstack((x,y)))),dt)
>>> arr
array([(0, [7.0, 7.0]), (1, [7.0, 7.0]), (2, [5.0, 5.0]), (3, [0.0, 0.0]),
(4, [6.0, 6.0]), (5, [6.0, 6.0]), (6, [7.0, 7.0]),
(7, [10.0, 10.0]), (8, [3.0, 3.0]), (9, [7.0, 7.0])],
dtype=[('ID', '<i4'), ('Shape', '<f8', (2,))])
I cleverly thought I could circumvent the above nasty bits by simply creating
the array in the desired vertical structure and applying my dtype to it,
hoping that it would work. The stacked array is correct in the vertical form
>>> a = np.vstack((id,x,y)).T
>>> a
array([[ 0, 7, 6],
[ 1, 7, 7],
[ 2, 5, 9],
[ 3, 0, 1],
[ 4, 6, 1],
[ 5, 6, 6],
[ 6, 7, 6],
[ 7, 10, 9],
[ 8, 3, 2],
[ 9, 7, 8]])
I tried several ways of trying to reformulate the above array so that my dtype
would work and I just can't figure it out (this included vstacking a vstack
etc). So my question is...how can I use the vstack version and get it into a
format that meets my dtype requirements without having to go through the
procedure that I did. I am hoping it is obvious, but I am sliced, stacked and
ellipsed myself into an endless loop.
**SUMMARY**
Many thanks to hpaulj. I have included two incarnations based upon his
suggestions for others to consider. The pure numpy solution is substantially
faster and a lot cleaner.
"""
Script: pnts_StackExch
Author: [email protected]
Modified: 2015-08-24
Purpose:
To provide some timing options on point creation in preparation for
point-to-point distance calculations using einsum.
Reference:
http://stackoverflow.com/questions/32224220/
methods-of-creating-a-structured-array
Functions:
decorators: profile_func, timing, arg_deco
main: make_pnts, einsum_0
"""
import numpy as np
import random
import time
from functools import wraps
np.set_printoptions(edgeitems=5,linewidth=75,precision=2,suppress=True,threshold=5)
# .... wrapper funcs .............
def delta_time(func):
    """timing decorator function"""
    import time
    @wraps(func)
    def wrapper(*args, **kwargs):
        print("\nTiming function for... {}".format(func.__name__))
        t0 = time.time()                # start time
        result = func(*args, **kwargs)  # ... run the function ...
        t1 = time.time()                # end time
        print("Results for... {}".format(func.__name__))
        print("  time taken ...{:12.9f} sec.".format(t1-t0))
        #print("\n  print results inside wrapper or use <return> ... ")
        return result                   # return the result of the function
    return wrapper
def arg_deco(func):
    """This wrapper just prints some basic function information."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        print("Function... {}".format(func.__name__))
        #print("File....... {}".format(func.__code__.co_filename))
        print("  args.... {}\n  kwargs. {}".format(args, kwargs))
        #print("  docs.... {}\n".format(func.__doc__))
        return func(*args, **kwargs)
    return wrapper
# .... main funcs ................
@delta_time
@arg_deco
def pnts_IdShape(N=1000000, x_min=0, x_max=10, y_min=0, y_max=10):
    """Make N points based upon a random normal distribution,
    with optional min/max values for Xs and Ys
    """
    dt = np.dtype([('ID','<i4'),('Shape',('<f8',(2,)))])
    IDs = np.arange(0,N)
    Xs = np.random.random_integers(x_min,x_max,size=N)  # note below
    Ys = np.random.random_integers(y_min,y_max,size=N)
    a = np.array([(i,j) for i,j in zip(IDs,np.column_stack((Xs,Ys)))],dt)
    return IDs,Xs,Ys,a
@delta_time
@arg_deco
def alternate(N=1000000, x_min=0, x_max=10, y_min=0, y_max=10):
    """after hpaulj and his mods to the above and this. See docs
    """
    dt = np.dtype([('ID','<i4'),('Shape',('<f8',(2,)))])
    IDs = np.arange(0,N)
    Xs = np.random.random_integers(0,10,size=N)
    Ys = np.random.random_integers(0,10,size=N)
    c_stack = np.column_stack((IDs,Xs,Ys))
    a = np.ones(N, dtype=dt)
    a['ID'] = c_stack[:,0]
    a['Shape'] = c_stack[:,1:]
    return IDs,Xs,Ys,a
if __name__ == "__main__":
    """time testing for various methods"""
    id_1,xs_1,ys_1,a_1 = pnts_IdShape(N=1000000, x_min=0, x_max=10, y_min=0, y_max=10)
    id_2,xs_2,ys_2,a_2 = alternate(N=1000000, x_min=0, x_max=10, y_min=0, y_max=10)
Timing results for 1,000,000 points are as follows
Timing function for... pnts_IdShape
Function... **pnts_IdShape**
args.... ()
kwargs. {'N': 1000000, 'y_max': 10, 'x_min': 0, 'x_max': 10, 'y_min': 0}
Results for... pnts_IdShape
time taken ... **0.680652857 sec**.
Timing function for... **alternate**
Function... alternate
args.... ()
kwargs. {'N': 1000000, 'y_max': 10, 'x_min': 0, 'x_max': 10, 'y_min': 0}
Results for... alternate
time taken ... **0.060056925 sec**.
Answer: There are two ways of filling a structured array
(<http://docs.scipy.org/doc/numpy/user/basics.rec.html#filling-structured-arrays>)
- by row (or rows, with a list of tuples), and by field.
To fill by field, create the empty structured array and assign values by
field name:
In [19]: a=np.column_stack((id,x,y))       # same as your vstack().T
In [20]: Y=np.zeros(a.shape[0], dtype=dt)  # empty, ones, etc
In [21]: Y['ID'] = a[:,0]
In [22]: Y['Shape'] = a[:,1:]              # (2,) field takes a 2 column array
In [23]: Y
Out[23]:
array([(0, [8.0, 8.0]), (1, [8.0, 0.0]), (2, [6.0, 2.0]), (3, [8.0, 8.0]),
(4, [3.0, 2.0]), (5, [6.0, 1.0]), (6, [5.0, 6.0]), (7, [7.0, 7.0]),
(8, [6.0, 1.0]), (9, [6.0, 6.0])],
dtype=[('ID', '<i4'), ('Shape', '<f8', (2,))])
On the surface
arr = np.array(list(zip(id,np.hstack((x,y)))),dt)
looks like an OK way of constructing the list of tuples needed to fill the
array. But the result duplicates the values of `x` instead of using `y`:
`np.hstack((x,y))` concatenates the two arrays end to end, so `zip` pairs each
`id` with a single scalar drawn from `x`, and that scalar is then broadcast
across both slots of the (2,) `Shape` field.
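A tiny reproduction with made-up 2-element arrays shows the failure concretely:

```python
import numpy as np

id = np.array([0, 1])
x = np.array([7, 5])
y = np.array([6, 9])

# np.hstack((x, y)) is [7, 5, 6, 9]; zip stops after pairing the two
# ids, so only the x half is ever consumed and y never appears.
pairs = [(int(i), int(v)) for i, v in zip(id, np.hstack((x, y)))]
print(pairs)  # [(0, 7), (1, 5)]

# Each scalar is then broadcast across the (2,) 'Shape' field,
# producing the duplicated values seen in the question.
dt = np.dtype([('ID', '<i4'), ('Shape', '<f8', (2,))])
arr = np.array(pairs, dt)
print(arr['Shape'].tolist())  # [[7.0, 7.0], [5.0, 5.0]]
```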
You can take a view of an array like `a` if the `dtype` is compatible - the
data buffer for 3 int columns is laid out the same way as one with 3 int
fields.
a.view('i4,i4,i4')
But your dtype wants 'i4,f8,f8', a mix of 4 and 8 byte fields, and a mix of
int and float. The `a` buffer will have to be transformed to achieve that.
`view` can't do it. (don't even ask about .astype.)
* * *
corrected list of tuples method:
In [35]: np.array([(i,j) for i,j in zip(id,np.column_stack((x,y)))],dt)
Out[35]:
array([(0, [8.0, 8.0]), (1, [8.0, 0.0]), (2, [6.0, 2.0]), (3, [8.0, 8.0]),
(4, [3.0, 2.0]), (5, [6.0, 1.0]), (6, [5.0, 6.0]), (7, [7.0, 7.0]),
(8, [6.0, 1.0]), (9, [6.0, 6.0])],
dtype=[('ID', '<i4'), ('Shape', '<f8', (2,))])
The list comprehension produces a list like:
[(0, array([8, 8])),
(1, array([8, 0])),
(2, array([6, 2])),
....]
For each tuple in the list, the `[0]` goes in the first field of the dtype,
and `[1]` (a small array), goes in the 2nd.
The tuples could also be constructed with
[(i,[j,k]) for i,j,k in zip(id,x,y)]
* * *
dt1 = np.dtype([('ID','<i4'),('Shape',('<i4',(2,)))])
is a view compatible dtype (still 3 integers)
In [42]: a.view(dtype=dt1)
Out[42]:
array([[(0, [8, 8])],
[(1, [8, 0])],
[(2, [6, 2])],
[(3, [8, 8])],
[(4, [3, 2])],
[(5, [6, 1])],
[(6, [5, 6])],
[(7, [7, 7])],
[(8, [6, 1])],
[(9, [6, 6])]],
dtype=[('ID', '<i4'), ('Shape', '<i4', (2,))])
* * *
Conditional average in Python
Question: I am having a problem manipulating my Excel file in Python. I have a large
Excel file with data arranged by date/time. I would like to be able to average
the data for a specific time of day, over all the different days; i.e., to
create an average daily profile of the _gas_concentrations_.
Here is a sample of my excel file:
Decimal Day of year Decimal of day Gas concentration
133.6285 0.6285 46.51230
133.6493 0.6493 47.32553
133.6701 0.6701 49.88705
133.691 0.691 51.88382
133.7118 0.7118 49.524
133.7326 0.7326 50.37112
Basically I need a function, like the AVERAGEIF function in excel, that will
say something like "Average the _gas_concentrations_ when _decimal_of_day_ =x"
However I really have no idea how to do this. Currently I have got this far
import xlrd
import numpy as np
book= xlrd.open_workbook('TEST.xlsx')
level_1=book.sheet_by_index(0)
time_1=level_1.col_values(0, start_rowx=1, end_rowx=1088)
dectime_1=level_1.col_values(8, start_rowx=1, end_rowx=1088)
ozone_1=level_1.col_values(2, start_rowx=1, end_rowx=1088)
ozone_1 = [float(i) if i != 'NA' else 'NaN' for i in ozone_1]
**Edit**
I updated my script to include the following
ozone=np.array(ozone_1, float)
time=np.array(dectime_1)
a=np.column_stack((ozone, time))
b=np.where((a[:,0]<0.0035))
print b
**EDIT** For now I solved the problem by putting both variables into an
array, then making a smaller array with just the values I need to average -
a bit inefficient, but it works!
ozone=np.array(ozone_1, float)
time=np.array(dectime_1)
a=np.column_stack((ozone, time))
b=a[a[:,1]<0.0036]
c=np.nanmean(b[:,0])
Answer: You can use [numpy masked
array](http://docs.scipy.org/doc/numpy/reference/maskedarray.html "numpy
masked array").
import numpy as np
data_1 = np.ma.arange(10)
data_1 = np.ma.masked_where(<your if statement>, data_1)
data_1_mean = np.mean(data_1)
Hope that helps
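Applied to the sample rows from the question, the masked-array approach might look like the sketch below. The 0.68 cutoff is an arbitrary example window, not something from the question:

```python
import numpy as np

# Sample rows from the question: decimal-of-day and gas concentration.
dectime = np.array([0.6285, 0.6493, 0.6701, 0.691, 0.7118, 0.7326])
gas = np.array([46.51230, 47.32553, 49.88705, 51.88382, 49.524, 50.37112])

# Mask out every reading whose decimal-of-day falls OUTSIDE the window
# of interest; the mean then only sees the surviving values.
masked = np.ma.masked_where(dectime >= 0.68, gas)
print(masked.count())        # number of readings that survive the mask
print(float(masked.mean()))  # average of those readings only
```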
* * *
Python 2.7 & ANTLR4 : Make ANTLR throw exceptions on invalid input
Question: I want to catch errors like
line 1:1 extraneous input '\r\n' expecting {':', '/',}
line 1:1 mismatched input 'Vaasje' expecting 'Tafel'
I tried wrapping my functions in try-catch but, as expected, these errors are
just print statement and not exceptions. I've seen some examples of switching
on errors in the .g4 file, but all the examples are for Java, and I can't seem
to get it working.
Is it possible for ANTLR4 in Python to throw exceptions which I can catch?
Answer: I have looked through the Python classes and noticed that they don't have
the methods that the Java ones have for adding and removing an error listener;
this may be a bug in ANTLR. However, Python being Python, you are allowed to
modify the members directly without requiring a setter, as in the following
example.
I run the example with: antlr4 -Dlanguage=Python2 AlmostEmpty.g4
and then: python main.py
* * *
AlmostEmpty.g4
grammar AlmostEmpty;
animals: (CAT | DOG | SHEEP ) EOF;
WS: [ \n\r]+ -> skip;
CAT: [cC] [aA] [tT];
DOG: [dD] [oO] [gG];
SHEEP: [sS] [hH] [eE] [eE] [pP];
* * *
main.py
from antlr4 import *
import sys
from AlmostEmptyLexer import AlmostEmptyLexer
from AlmostEmptyParser import AlmostEmptyParser
from antlr4.error.ErrorListener import ErrorListener
class MyErrorListener( ErrorListener ):
    def __init__(self):
        super(MyErrorListener, self).__init__()
    def syntaxError(self, recognizer, offendingSymbol, line, column, msg, e):
        raise Exception("Oh no!!")
    def reportAmbiguity(self, recognizer, dfa, startIndex, stopIndex, exact, ambigAlts, configs):
        raise Exception("Oh no!!")
    def reportAttemptingFullContext(self, recognizer, dfa, startIndex, stopIndex, conflictingAlts, configs):
        raise Exception("Oh no!!")
    def reportContextSensitivity(self, recognizer, dfa, startIndex, stopIndex, prediction, configs):
        raise Exception("Oh no!!")
if __name__ == "__main__":
    inputStream = StdinStream( )
    lexer = AlmostEmptyLexer(inputStream)
    # Add your error listener to the lexer if required
    #lexer.removeErrorListeners()
    #lexer._listeners = [ MyErrorListener() ]
    stream = CommonTokenStream(lexer)
    parser = AlmostEmptyParser(stream)
    parser._listeners = [ MyErrorListener() ]
    tree = parser.animals()
* * *
Add a list of regions to Vimeo using PyVimeo in django
Question: I have a Django app in which I am using the
[`PyVimeo`](https://pypi.python.org/pypi/PyVimeo/0.3.0) module to connect to
`Vimeo` and upload videos, etc.
The actual Vimeo API to post the region data is
[here](https://developer.vimeo.com/api/playground/ondemand/pages/47753/regions/AL)
For example, I have the following data `[{u'country_name': u'CA'},
{u'country_name': u'US'}]` to send in a `PUT` request to the url
`https://api.vimeo.com/ondemand/pages/47753/regions`
From code I was trying to send the PUT request as below:
import vimeo
token = XXXXXXXXXXXXXXXXXX
VIMEO_KEY = XXXXXXXXXXXXXXXXXX
VIMEO_SECRET = XXXXXXXXXXXXXXXXXX
client = vimeo.VimeoClient(key=VIMEO_KEY, secret=VIMEO_SECRET, token=token)
url = 'https://api.vimeo.com/ondemand/pages/47753/regions'
regions_data = [{u'country_name': u'CA'}, {u'country_name': u'US'}]
result_data = client.put(url, regions_data)
Response was `400 Bad request`
When tried in the below way as indicated in the Vimeo API docs
client.put(url + 'CA')
Response
HTTP/1.1 201
Location: Array
Host: api.vimeo.com
But it was not reflected in the Distribution section of the video settings,
which remained `Worldwide` by default.
So how do I actually set a list of regions for an on-demand (VOD) page?
Answer: Try setting `country_code` instead of `country_name`
v = vimeo.VimeoClient(key=YOUR_VIMEO_KEY,
                      secret=YOUR_VIMEO_SECRET,
                      token=YOUR_VIMEO_TOKEN)
regions_data = [{'country_code': 'CA'}, {'country_code': 'US'}]
output = v.put('/ondemand/pages/mytestvod/regions', data=regions_data)
This should restrict distribution to only Canada and the US.
* * *
Apache Thrift Python 3 support
Question: I compiled my test.thrift file using:
thrift -gen py test.thrift
Then I tried to import the created files:
from test.ttypes import *
When I use Python 2.7 the import works but with Python 3.4 it raises
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/art/SerTest/addressThrift/gen-py/test/ttypes.py", line 11, in <module>
    from thrift.transport import TTransport
  File "/usr/local/lib/python3.4/dist-packages/thrift/transport/TTransport.py", line 20, in <module>
    from cStringIO import StringIO
ImportError: No module named 'cStringIO'
I tried to run `sudo python3 setup.py install` and got many exceptions, all
of which seem to be related to Python 2 vs 3 problems, for example:
  File "/usr/local/lib/python3.4/dist-packages/thrift/transport/TSSLSocket.py", line 99
    except socket.error, e:
                       ^
SyntaxError: invalid syntax
In addition, there is a warning that seems important:
/usr/lib/python3.4/distutils/dist.py:260: UserWarning: Unknown distribution option: 'use_2to3'
Googling for Thrift Python 3 support gives contradictory results.
Those say that there is no support:
[Does cql support python 3?](http://stackoverflow.com/questions/17390367/does-
cql-support-python-3)
<https://issues.apache.org/jira/browse/THRIFT-1857>
And here I understand from the subtext that it does:
[Thrift python 3.4 TypeError: string argument expected, got
'bytes'](http://stackoverflow.com/questions/31869321/thrift-
python-3-4-typeerror-string-argument-expected-got-bytes)
[Python 3.4 TypeError: input expected at most 1 arguments, got
3](http://stackoverflow.com/questions/30673185/python-3-4-typeerror-input-
expected-at-most-1-arguments-got-3?rq=1)
<https://issues.apache.org/jira/browse/THRIFT-2096>
So does Thrift support Python 3.X? If so what did I missed?
Answer: There is a better solution to this. Instead of waiting for official Python 3
support, why not use our Python implementation of Thrift?
<https://github.com/eleme/thriftpy>
It fully supports Python 3, PyPy, and PyPy3.
* * *
How to exchange one-time authorization code for access token with flask_oauthlib?
Question: I'm building an API with the [Python Flask framework](http://flask.pocoo.org/)
in which I now receive a "one-time authorization code" from an app which I
supposedly can exchange for an access and refresh token with the Gmail API
([described here](https://developers.google.com/gmail/api/auth/web-server)).
I've been using
[Oauthlib](http://oauthlib.readthedocs.org/en/latest/index.html) for regular
authorizations in the browser, which works perfectly fine. So I found [this
page](http://oauthlib.readthedocs.org/en/latest/oauth2/grants/authcode.html)
with some example code of how to implement this. The code in the example
starts off with `from your_validator import your_validator` and I'm
immediately stuck. I found something
[here](http://oauthlib.readthedocs.org/en/latest/oauth2/server.html#implement-
a-validator) about implementing a custom validator, but at the top of that
page it says that [flask_oauthlib](https://github.com/lepture/flask-oauthlib)
already implemented this.
Does anybody have an example how to exchange a one-time authorization code for
an access and a refresh token with flask_oauthlib? All tips are welcome!
Answer: I don't have an example using Oauthlib but here's one using oauth2client
library. <https://github.com/googleplus/gplus-quickstart-python>
Looking at flask_oauthlib,
[`handle_oauth2_response`](https://github.com/lepture/flask-
oauthlib/blob/master/flask_oauthlib/client.py#L619) seems to be the method
you'll need to use to exchange `code` with `access_token`.
* * *
Django ImportError: No module named middleware
Question: I am using Django version 1.8 and python 2.7. I am getting the following error
after running my project.
Traceback (most recent call last):
  File "C:\Python27\lib\wsgiref\handlers.py", line 85, in run
    self.result = application(self.environ, self.start_response)
  File "C:\Python27\lib\site-packages\django\contrib\staticfiles\handlers.py", line 63, in __call__
    return self.application(environ, start_response)
  File "C:\Python27\lib\site-packages\django\core\handlers\wsgi.py", line 170, in __call__
    self.load_middleware()
  File "C:\Python27\lib\site-packages\django\core\handlers\base.py", line 50, in load_middleware
    mw_class = import_string(middleware_path)
  File "C:\Python27\lib\site-packages\django\utils\module_loading.py", line 26, in import_string
    module = import_module(module_path)
  File "C:\Python27\lib\importlib\__init__.py", line 37, in import_module
    __import__(name)
ImportError: No module named middleware
[26/Aug/2015 20:34:29] "GET /favicon.ico HTTP/1.1" 500 59
This is my settings.py file
"""
Django settings for collageapp project.
Generated by 'django-admin startproject' using Django 1.8.
For more information on this file, see
https://docs.djangoproject.com/en/1.8/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/1.8/ref/settings/
"""
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
import os
APP_PATH = os.path.dirname(os.path.abspath(__file__))
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/1.8/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = '******************************************************'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = []
# Application definition
INSTALLED_APPS = (
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'manageApp',
    'django.contrib.sites',
    'allauth',
    'allauth.account',
    'allauth.socialaccount',
    'allauth.socialaccount.providers.facebook',
    'allauth.socialaccount.providers.google',
    'django.contrib.admindocs',
    'rest_framework',
)
SITE_ID = 1
MIDDLEWARE_CLASSES = (
    'django.middleware.common.CommonMiddleware',
    'django.middleware.csrf.CsrfViewMiddleware',
    'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
    'django.middleware.clickjacking.XFrameOptionsMiddleware',
    'django.middleware.security.SecurityMiddleware',
    'corsheaders.middleware.CorsMiddleware',
    'oauth2_provider.middleware.OAuth2TokenMiddleware',
)
ROOT_URLCONF = 'collageapp.urls'
CORS_ORIGIN_ALLOW_ALL = True
TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'DIRS': [],
        'APP_DIRS': True,
        'OPTIONS': {
            'context_processors': [
                'django.template.context_processors.debug',
                'django.template.context_processors.request',
                'django.contrib.auth.context_processors.auth',
                'django.contrib.messages.context_processors.messages',
                'allauth.account.context_processors.account',
                'allauth.socialaccount.context_processors.socialaccount'
            ],
        },
    },
]
WSGI_APPLICATION = 'collageapp.wsgi.application'
# Database
# https://docs.djangoproject.com/en/1.8/ref/settings/#databases
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'college_app',
        'USER': 'root',
        'PASSWORD': '',
        'HOST': 'localhost',  # Or an IP Address that your DB is hosted on
        'PORT': '3306',
    }
}
# Internationalization
# https://docs.djangoproject.com/en/1.8/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/1.8/howto/static-files/
STATIC_URL = '/static/'
I have tried searching for the error, but in vain. The same code works fine
on another machine.
Answer: Open up a python shell by running `python manage.py shell` in your project
directory.
Run the following commands **one at a time** in the python shell:
>>> from corsheaders.middleware import CorsMiddleware
>>> from oauth2_provider.middleware import OAuth2TokenMiddleware
>>> from django.contrib.auth.middleware import SessionAuthenticationMiddleware
One of the lines should give you an error like the following:
Traceback (most recent call last):
File "<console>", line 1, in <module>
ImportError: No module named middleware
The line that gives you that error is the missing module that is giving you
the problem.
To find the path where the modules are searched, do
>>> import sys; sys.path
* * *
Alternatively, if you don't know how to use the Python shell, you could just
remove the following lines from `settings.MIDDLEWARE_CLASSES` one at a time
until you don't get the error anymore:
'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
'corsheaders.middleware.CorsMiddleware',
'oauth2_provider.middleware.OAuth2TokenMiddleware',
* * *
Just reinstall the package that gave you the error.
`django.contrib.auth.middleware` -> `django`
`corsheaders.middleware` -> `corsheaders`
`oauth2_provider.middleware` -> `oauth2_provider`
* * *
hello world in wxPython gets no reaction at all, no frame, no return **SOLVED**
Question: This is my first foray into Python GUIs and I'm using thenewboston's
tutorial. On the first lesson, with a basic frame, I get an error that
`wx.PySimpleApp()` is deprecated, and I follow the instructions here to change
it to `wx.App(False)`. No errors come up, but also no frame. Here is the code:
#!/usr/bin/env python
import wx
class duckie(wx.Frame):
    def __init__(self, parent, id):
        wx.Frame.__init__(self, parent, id, 'Frame aka window', size=(300, 200))
if __name__ == '__main__':
    app = wx.App(False)
    frame = duckie(parent=None, id=-1)
    frame.Show()
    app.MainLoop
Originally I had everything spaced out (space after commas, before and after
operators, etc) but I changed it to more accurately follow the tutorial with
no difference in effect. Understandably, I'm a little upset that I can't even
get the Hello World to work here.
In case it's needed, the system is Ubuntu, everything is up to date through
pip.
For anyone else with this problem, I changed the id tag on line 10 from
frame=duckie(parent=None, id=-1)
to
frame=duckie(None, wx.ID_ANY)
Answer:
app.MainLoop
should be:
app.MainLoop()
'-1' and `wx.ID_ANY` are the same; try the following at a Python prompt:
import wx
wx.ID_ANY
The little sample I would probably do like this:
#!/usr/bin/env python
import wx
class Duckie(wx.Frame):
    def __init__(self, parent, *args, **kwargs):
        super(Duckie, self).__init__(parent, id=wx.ID_ANY,
                                     title='Frame aka window',
                                     size=(300, 200))
if __name__ == '__main__':
    app = wx.App(False)
    frame = Duckie(None)
    frame.Show()
    app.MainLoop()
The above code will pass PEP 8 (<https://www.python.org/dev/peps/pep-0008/>)
checking and will also show no errors when one checks one's code with e.g.
PyLint.
* * *
Sklearn joblib load function IO error from AWS S3
Question: I am trying to load a pkl dump of my classifier from scikit-learn.
The joblib dump does a much better compression than the cPickle dump for my
object so I would like to stick with it. However, I am getting an error when
trying to read the object from AWS S3.
Cases:
* Pkl object hosted locally: pickle.load works, joblib.load works
* Pkl object pushed to Heroku with app (load from static folder): pickle.load works, joblib.load works
* Pkl object pushed to S3: pickle.load works, joblib.load returns IOError. (testing from heroku app and tested from local script)
Note that the pkl objects for joblib and pickle are different objects dumped
with their respective methods. (i.e. joblib loads only joblib.dump(obj) and
pickle loads only cPickle.dump(obj).
Joblib vs cPickle code
# case 2, this works for joblib, object pushed to heroku
resources_dir = os.getcwd() + "/static/res/" # main resource directory
input = joblib.load(resources_dir + 'classifier.pkl')
# case 3, this does not work for joblib, object hosted on s3
aws_app_assets = "https://%s.s3.amazonaws.com/static/res/" % keys.AWS_BUCKET_NAME
classifier_url_s3 = aws_app_assets + 'classifier.pkl'
# does not work with raw url, IO Error
classifier = joblib.load(classifier_url_s3)
# urllib2, can't open instance
# TypeError: coercing to Unicode: need string or buffer, instance found
req = urllib2.Request(url=classifier_url_s3)
f = urllib2.urlopen(req)
classifier = joblib.load(urllib2.urlopen(classifier_url_s3))
# but works with a cPickle object hosted on S3
classifier = cPickle.load(urllib2.urlopen(classifier_url_s3))
My app works fine in case 2, but because of very slow loading, I wanted to try
and push all static files out to S3, particularly these pickle dumps. Is there
something inherently different about the way joblib loads vs pickle that would
cause this error?
This is my error
File "/usr/local/lib/python2.7/site-packages/sklearn/externals/joblib/numpy_pickle.py", line 409, in load
with open(filename, 'rb') as file_handle:
IOError: [Errno 2] No such file or directory: classifier url on s3
[Finished in 0.3s with exit code 1]
It is not a permissions issue as I've made all my objects on s3 public for
testing and the pickle.dump objects load fine. The joblib.dump object also
downloads if I directly enter the url into the browser
I could be completely missing something.
Thanks.
Answer: joblib.load() expects the name of a file present on the local filesystem.
Signature: joblib.load(filename, mmap_mode=None)
Parameters
-----------
filename: string
The name of the file from which to load the object
Moreover, making all your resources public might not be a good idea for other
assets, even if you don't mind the pickled model being accessible to the world.
It is rather simple to copy object from S3 to local filesystem of your worker
first:
from boto.s3.connection import S3Connection
from sklearn.externals import joblib
import os
s3_connection = S3Connection(AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
s3_bucket = s3_connection.get_bucket(keys.AWS_BUCKET_NAME)
local_file = '/tmp/classifier.pkl'
s3_bucket.get_key(aws_app_assets + 'classifier.pkl').get_contents_to_filename(local_file)
clf = joblib.load(local_file)
os.remove(local_file)
Hope this helped.
P.S. you can use this approach to pickle the entire scikit-learn pipeline, with
feature imputers too. Just beware of version conflicts of libraries between
training and predicting.
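For reference, the local round trip that `joblib.load` expects can be sketched like this; the dict is just a stand-in for a fitted estimator or pipeline, and the file name is illustrative:

```python
import os
import tempfile

try:
    import joblib                         # standalone package (newer installs)
except ImportError:
    from sklearn.externals import joblib  # bundled copy in older scikit-learn

model = {'coef': [0.1, 0.2]}  # stand-in for your fitted classifier/pipeline
path = os.path.join(tempfile.mkdtemp(), 'classifier.pkl')

joblib.dump(model, path, compress=3)  # compression keeps the dump small
restored = joblib.load(path)          # takes a filesystem path, not a URL
print(restored == model)  # True
```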
|
Making a post request in python for scraping
Question: My goal is to be able to access the data from a website after inputting
information in a field and hitting submit. I'm using Httpfox to grab which
values are needed to "post". I included a screenshot of that below the code.
#SECTION 1: import modules
import requests
#SECTION 2: setup variables
url = 'http://www.clarkcountynv.gov/Depts/assessor/Pages/PropertyRecords.aspx?H=redrock&P=assrrealprop/pcl.aspx'
ses = requests.session()
values = []
values.append({
'__LASTFOCUS' : '',
'__EVENTARGUMENT' : '',
'__EVENTTARGET' : 'Submit1',
'__VIEWSTATE' : '/wEPDwUJLTcyMjI2NjUwZBgBBR5fX0NvbnRyb2xzUmVxdWlyZVBvc3RCYWNrS2V5X18WAgUKY2hrQ3VycmVudAUKY2hrSGlzdG9yeUfXtwoelaE/eJmc1s9mHzvIqqwk',
'__VIEWSTATEGENERATOR' : '5889EE07',
'__EVENTVALIDATION' : '/wEWBQLnuv+BDALRqf+zDQK+zcmQBAK+zeWoCALVo8avDjkJwx8mhBoXL3mYGKBSY5lYBPxY',
'hdnInstance' : 'pcl',
'parcel' : '124-32-816-087', #this is what changes
'age' : 'pcl17',
'Submit1' : 'Submit'})
#SECTION 3: grab html text
r = ses.post(url, data = values)
r.content() #this either throws an error 'too many values to unpack' or gives me the content of the main page if i play around with the input values a little, not the redirected page which is my problem
[](http://i.stack.imgur.com/IqCFm.png)
Answer: You don't need to resort to a POST request when you can get the same result
with a simple GET.
Inspecting the final page highlights that an iframe is used to show the
actual search result.
You can get the result straight from this URL, by replacing "your-parcel-
number-here" with the desired value (In your example it was `124-32-816-087`).
>
> <http://sandgate.co.clark.nv.us/assrrealprop/ParcelDetail.aspx?hdnParcel=your-
> parcel-number-here>
Looks like no cookie is needed and hotlinking works (I tried that link in
firefox private mode).
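A hedged sketch of the resulting scrape: the URL pattern is the one above, the helper name and parcel number are illustrative, and the actual GET (commented out) needs network access:

```python
PARCEL_URL = ("http://sandgate.co.clark.nv.us/assrrealprop/"
              "ParcelDetail.aspx?hdnParcel={parcel}")

def parcel_page_url(parcel):
    """Build the detail-page URL for one parcel number."""
    return PARCEL_URL.format(parcel=parcel)

print(parcel_page_url('124-32-816-087'))

# Then, with network access:
# import requests
# html = requests.get(parcel_page_url('124-32-816-087')).text
```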
|
Google Datastore API Authentication in Python
Question: Authenticating requests, especially with Google's API's is so incredibly
confusing!
I'd like to make authorized HTTP POST requests through python in order to
query data from the datastore. I've got a service account and p12 file all
ready to go. I've looked at the examples, but it seems no matter which one I
try, I'm always unauthorized to make requests.
Everything works fine from the browser, so I know my permissions are all in
order. So I suppose my question is, how do I authenticate, and request data
securely from the Datastore API through python?
I am so lost...
Answer: You probably should not be using raw POST requests to use Datastore;
instead, use the [gcloud library](https://github.com/GoogleCloudPlatform/gcloud-python)
to do the heavy lifting for you.
I would also recommend the [Python getting started
page](https://cloud.google.com/python/), as it has some good tutorials.
Finally, I recorded a podcast where I go over the basics of [using Datastore
with Python](https://youtu.be/bMC_k0NBjKs?t=1494), check it out!
[Here is the code](https://github.com/thesandlord/samples/tree/master/google-
cloud-datastore/interactive-demo), and here is an example:
#Import libraries
from gcloud import datastore
import os
#The next few lines will set up your environment variables
#Replace "YOUR_PROJECT_ID_HERE" with the correct value in code.py
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "key.json"
projectID = "YOUR_PROJECT_ID_HERE"
os.environ["GCLOUD_TESTS_PROJECT_ID"] = projectID
os.environ["GCLOUD_TESTS_DATASET_ID"] = projectID
datastore.set_default_dataset_id(projectID)
#Let us build a message board / news website
#First, create a fake email for our fake user
email = "[email protected]"
#Now, create a 'key' for that user using the email
user_key = datastore.Key('User', email)
#Now create a entity using that key
new_user = datastore.Entity( key=user_key )
#Add some fields to the entity
new_user["name"] = unicode("Iam Fake")
new_user["email"] = unicode(email)
#Push entity to the Cloud Datastore
datastore.put( new_user )
#Get the user from datastore and print
print( datastore.get(user_key) )
This code is licensed under Apache v2
|
Using python to find specific pattern contained in a paragraph
Question: I'm trying to use python to go through a file, find a specific piece of
information and then print it to the terminal. The information I'm looking for
is contained in a block that looks something like this:
\\Version=EM64L-G09RevD.01\State=1-A1\HF=-1159.6991675\RMSD=4.915e-11\RMSF=1.175e-07\ZeroPoint=0.0353317\
I would like to be able to get the information `HF=-1159.6991675`. More
generally, I would like the script to copy and print
`\HF=WhateverTheNumberIs\`
I've managed to make scripts that are able to copy an entire line and print it
out to the terminal, but I am unsure how to accomplish this particular task.
Answer: My suggestion is to use regular expressions
([regex](https://docs.python.org/2/howto/regex.html)) in order to catch the
required pattern:
import re #for using regular expressions
s = open(<filename here>).read() #read the content of the file and hold it as a string to be scanned
p = re.compile(r"\\HF=[^\\]+") #the pattern as you described: \HF= up to (not including) the next backslash
print p.findall(s) #finds all occurrences and prints them
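As a self-contained check, the pattern can be exercised on the sample line from the question (the backslashes are doubled only because the text is written as a Python string literal):

```python
import re

text = ("\\Version=EM64L-G09RevD.01\\State=1-A1\\HF=-1159.6991675"
        "\\RMSD=4.915e-11\\RMSF=1.175e-07\\ZeroPoint=0.0353317\\")

# Capture just the number by putting the value part in a group.
matches = re.findall(r"\\HF=([^\\]+)", text)
print(matches)  # ['-1159.6991675']
```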
|
List files on device
Question: I'm learning Python and I'm trying to list a directory on a USB device from
Windows
import os
#dirname = "C:\\temp\\" # works fine
dirname = "\\mycomputer\\WALKMAN NWZ-B133 \\Storage Media\\Music\\"
x = os.listdir(dirname)
print x
There IS a space after B133
Only I get the error WindowsError: [Error 3] The system cannot find the path
specified: '\mycomputer\WALKMAN NWZ-B133 \Storage Media\Music\\*.*' Any ideas?
Answer: This type of player is connected via Media Transfer Protocol (MTP).
Looks like you need [PyMTP](https://pypi.python.org/pypi/PyMTP). Good luck.
|
Python: Displaying an object's implementation source
Question: I've been tasked with something a bit unusual and unexpectedly puzzling -
Display the source code of a particular class's implementation of a method.
In [1]: class Demonstration:
...: def cost():
...: return 42
...:
In [2]: class OtherDemo:
...: def cost():
...: return 43
...:
In this example, I'd want to find the text `def cost():` and the following
lines at the appropriate indent for either given class.
modules like `inspect` or `dis` are nearly what I'm looking for, but I want to
display the python code, such as is displayed during a traceback; not
bytecode.
I'm also not opposed to parsing the source files, either - is there a library
that text editors might use for autocompletion or indent calculations that
could find a specific method in a class?
Answer: It sounds like the `inspect` library is exactly what you need, in particular,
the function
[`getsourcelines`](https://docs.python.org/2/library/inspect.html#inspect.getsourcelines):
In [1]: def foo(x):
...: x += 3
...: x += 4
...: return x
...:
In [2]: import inspect
In [3]: inspect.getsourcelines(foo)
Out[3]: (['def foo(x):\n', ' x += 3\n', ' x += 4\n', ' return x\n'], 1)
In [4]: source_code = _
In [6]: print(''.join(source_code[0]))
def foo(x):
x += 3
x += 4
return x
From the docs:
> Return a list of source lines and starting line number for an object. The
> argument may be a module, class, method, function, traceback, frame, or code
> object. The source code is returned as a list of the lines corresponding to
> the object and the line number indicates where in the original source file
> the first line of code was found. An IOError is raised if the source code
> cannot be retrieved.
|
python how to serve multiple tcp clients with input from single udp port?
Question: I have a python TCP server that listens for incoming data requests. As soon as
someone connects to it, the server starts serving data to that client. The
data it serves comes in via UDP on some port.
The question is, how can I serve the same data coming in from a UDP port to
potentially multiple tcp clients?
Some background info:
1. A thread is created to listen for incoming UDP data.
2. A new thread is created every time a new TCP client connects.
I looked at the python Queue package, but it seems this is point-to-point and
limiting for my objective.
Any ideas?
Answer: You could hide the point-to-point connections in a class, for example:
from multiprocessing import Queue
from threading import Lock
class BroadcastChannel():
def __init__(self):
self._queues = []
self.lock = Lock()
def send(self, item):
self.lock.acquire()
for q in self._queues:
q.put(item)
self.lock.release()
def fork(self):
q = Queue()
self.lock.acquire()
self._queues.append(q)
self.lock.release()
return q
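A self-contained sketch of wiring this up. Since the clients here are threads, the thread-safe `queue.Queue` is enough (no `multiprocessing.Queue` needed), and `with`-blocks keep the locking tidy:

```python
import threading
try:
    from queue import Queue   # Python 3
except ImportError:
    from Queue import Queue   # Python 2

class BroadcastChannel(object):
    def __init__(self):
        self._queues = []
        self._lock = threading.Lock()

    def fork(self):
        """Give a new TCP-client thread its own private queue."""
        q = Queue()
        with self._lock:
            self._queues.append(q)
        return q

    def send(self, item):
        """Called by the UDP listener thread for every datagram."""
        with self._lock:
            for q in self._queues:
                q.put(item)

channel = BroadcastChannel()
client_a = channel.fork()
client_b = channel.fork()
channel.send('datagram-1')
a_item = client_a.get()
b_item = client_b.get()
print(a_item, b_item)  # both clients receive the same item
```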
|
Reading an image with OpenCV, altering the pixels and returning the new image
Question: I'm using Python, OpenCV and Numpy to read a black and white image. I then
iterate over each pixel using numpy.nditer and either change the pixel to 255
(if it is greater than 128), or change it to 0. Through some testing, I think
I have that part right. What I cannot figure out is how to use cv2.imshow to
'show' the altered image. Currently, it only seems to show the original image.
import cv2
import numpy as np
image = cv2.imread('someimage', cv2.IMREAD_GRAYSCALE)
it = np.nditer(image)
for (x) in it:
if x > 128:
x = 255
else:
x = 0
cv2.imshow('image',image)
cv2.waitKey(0)
cv2.destroyAllWindows()
Thank you for any advice.
Answer: You may want to look into the `np.where` function:
image[:] = np.where(image > 128, 255, 0)
You really shouldn't be looping through arrays unless absolutely necessary:
In [16]: %%timeit
....: a = np.random.randint(0, 255, (800,600))
....: a[:] = np.where(a > 128, 255, 0)
....:
10 loops, best of 3: 27.6 ms per loop
In [17]: %%timeit
....: image = np.random.randint(0, 255, (800,600))
....: it = np.nditer(image, op_flags=['readwrite'])
....: for (x) in it:
....: if x > 128:
....: x[...] = 255
....: else:
....: x[...] = 0
....:
1 loops, best of 3: 1.26 s per loop
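A tiny sanity check of the vectorised form on a toy array (pure NumPy, no image needed):

```python
import numpy as np

a = np.array([[10, 200],
              [128, 129]], dtype=np.uint8)
a[:] = np.where(a > 128, 255, 0)   # strictly greater than 128, as in the loop
print(a.tolist())  # [[0, 255], [0, 255]]
```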
|
Finding the corresponding sample fraction for a predicted response in classification trees Python 2.7
Question: I know how to fit a tree using `sklearn`. I also know how to use it for
prediction using either `predict` or `predict_proba`. However, for prediction
I want to get the (raw) sample fractions rather than the probability.
For example, in a fitted tree, two leaf nodes might both have probability of
0.2 for class A but one with 2/10 while the other with 400/2000. Now, if I use
this tree, I want to get something like [400,2000] or [2,10] rather than just
0.2.
`n_node_sample` and `value` attributes store such information in the fitted
tree object but I dont know how to extract the appropriate values from them in
prediction.
Thanks in advance.
Answer: You can use the tree's `tree.tree_.apply` method to find out which leaf the
point ends up in, and then use the `tree.tree_.value` array to check how many
samples of each class are in this leaf:
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
iris = load_iris()
tree = DecisionTreeClassifier(max_depth=2).fit(iris.data, iris.target)
leaf = tree.tree_.apply(iris.data[50:51].astype(np.float32))
print(leaf)
# output [3]
print(tree.tree_.value[leaf])
# output [[[ 0. 49. 5.]]]
print(tree.predict_proba(iris.data[50:51]))
# output [[ 0. 0.90740741 0.09259259]]
In the next version 0.17 `tree.tree_.apply` will be "public" as `tree.apply`
and will take care of the datatype conversion (to float32). See [the
docs](http://scikit-
learn.org/dev/modules/generated/sklearn.tree.DecisionTreeClassifier.html#sklearn.tree.DecisionTreeClassifier.apply).
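Building on that, one hedged way to get the `[2, 10]`-style numbers the question asks for: `n_node_samples` gives the leaf size and `value` the per-class breakdown (note that depending on the scikit-learn version, `value` may store raw counts or normalized fractions, so treat the exact numbers as version-dependent):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(iris.data, iris.target)

leaf = tree.apply(iris.data[50:51])[0]        # public since 0.17
n_in_leaf = tree.tree_.n_node_samples[leaf]   # total samples in this leaf
per_class = tree.tree_.value[leaf][0]         # per-class counts (or fractions)
print(n_in_leaf, per_class.tolist())
```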
|
ffmpeg in Python subprocess - Unable to find a suitable output format for 'pipe:'
Question: Trying to burn subs into video with ffmpeg via Python. Works fine in the
command line, but when calling from Python subprocess with:
p = subprocess.Popen('cd ~/Downloads/yt/; ffmpeg -i ./{video} -vf subtitles=./{subtitles} {out}.mp4'.format(video=vid.replace(' ', '\ '), subtitles=subs, out='out.mp4'), shell=True)
I get:
Unable to find a suitable output format for 'pipe:'
Full traceback:
'ffmpeg version 2.7.2 Copyright (c) 2000-2015 the FFmpeg developers
built with Apple LLVM version 6.1.0 (clang-602.0.53) (based on LLVM 3.6.0svn)
configuration: --prefix=/usr/local/Cellar/ffmpeg/2.7.2_1 --enable-shared --enable-pthreads --enable-gpl --enable-version3 --enable-hardcoded-tables --enable-avresample --cc=clang --host-cflags= --host-ldflags= --enable-opencl --enable-libx264 --enable-libmp3lame --enable-libvo-aacenc --enable-libxvid --enable-libfreetype --enable-libvpx --enable-libass --enable-libfdk-aac --enable-nonfree --enable-vda
libavutil 54. 27.100 / 54. 27.100
libavcodec 56. 41.100 / 56. 41.100
libavformat 56. 36.100 / 56. 36.100
libavdevice 56. 4.100 / 56. 4.100
libavfilter 5. 16.101 / 5. 16.101
libavresample 2. 1. 0 / 2. 1. 0
libswscale 3. 1.101 / 3. 1.101
libswresample 1. 2.100 / 1. 2.100
libpostproc 53. 3.100 / 53. 3.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from './OnHub - a router for the new way to Wi-Fi-HNnfHP7VDP8.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf56.36.100
Duration: 00:00:53.94, start: 0.000000, bitrate: 2092 kb/s
Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1920x1080 [SAR 1:1 DAR 16:9], 1961 kb/s, 23.98 fps, 23.98 tbr, 90k tbn, 47.95 tbc (default)
Metadata:
handler_name : VideoHandler
Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 125 kb/s (default)
Metadata:
handler_name : SoundHandler
[NULL @ 0x7fc07b077600] Unable to find a suitable output format for 'pipe:'
pipe:: Invalid argument'
Answer: I'm guessing the problem was that you had spaces in some argument that you
weren't escaping. You could just escape it, but this is a better way to do
what you're trying to do:
import os
directory_path = os.path.expanduser('~/Downloads/yt/')
video_path = 'path/to/video'
subtitles_path = 'path/to/subtitles'
outfile_path = 'out.mp4'
args = ['ffmpeg', '-i', video_path, '-vf',
'subtitles={}'.format(subtitles_path), outfile_path]
p = subprocess.Popen(args, cwd=directory_path)
The main difference is that you're not using `shell=True`, which is good
practice for security and other reasons, including the fact that you don't
have to worry about quoting arguments with spaces. Because it's not using
`shell=True`, you have to pass in the command line as a list of strings, one
element per argument, instead of one string like before.
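A quick self-contained demonstration of why the list form needs no quoting: an argument containing spaces reaches the child process intact (using the Python interpreter itself as a harmless stand-in for ffmpeg):

```python
import subprocess
import sys

out = subprocess.check_output(
    [sys.executable, '-c', 'import sys; print(sys.argv[1])', 'my video.mp4'])
arg_seen_by_child = out.decode().strip()
print(arg_seen_by_child)  # my video.mp4
```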
|
re.findall printing the full text instead of a line of text
Question: I got the following code:
import urllib
import re
html = urllib.urlopen("http://jshawl.com/python-playground/").read()
lines = [html]
for line in lines:
if re.findall("jesseshawl", line):
print line
My output when I run this code is the full website. How can I display
only the row where it found "jesseshawl"? It should return
something like:
[email protected]
And is there a way to not return all html tags when I run this?
My output:
<html>
<head></head>
<body>
<h1>Some images to download:</h1>
<img src='python.gif'/><br />
<img src='terminal.png' />
<hr />
<h1>Email addresses to extract:</h1>
[email protected]<br />
[email protected]<br />
<hr />
<h1>Login Form:</h1>
Login here:<br />
User: user<br />
Pass: pass
<form method="POST" action="login.php">
User: <input type="text" name="username" /><br />
Pass: <input type="password" name="password" /><br />
<input type="submit" />
</form>
<h1>Memorable Quotes</h1>
<ul>
<li></li>
</ul>
</body>
</html>
Answer: You are reading the whole page as one string, so it prints everything. You
have to read it line by line. There is no need for `findall`; you can use the `in` operator.
**Code:**
import urllib
import re
html = urllib.urlopen("http://jshawl.com/python-playground/").readlines()
for line in html :
if "jesseshawl" in line:
print line
**Output:**
[email protected]<br />
And if you don't want tags you could remove them using `sub`
**Code2:**
import urllib
import re
html = urllib.urlopen("http://jshawl.com/python-playground/").readlines()
for line in html :
if "jesseshawl" in line:
print re.sub("<[^>]*?>","",line)
**Output2:**
[email protected]
|
'module' object has no attribute '_strptime' with several threads Python
Question: I'm getting this error `'module' object has no attribute '_strptime'` but only
when I use several threads. When I only use one, it works fine. I'm using Python
2.7 x64. Here is the reduced function I'm calling:
import datetime
def get_month(time):
return datetime.datetime.strptime(time, '%Y-%m-%dT%H:%M:%S+0000').strftime("%B").lower()
Here is the complete traceback:
AttributeError: 'module' object has no attribute '_strptime'
Exception in thread Thread-22:
Traceback (most recent call last):
File "C:\Python27x64\lib\threading.py", line 810, in __bootstrap_inner
self.run()
File "C:\Python27x64\lib\threading.py", line 763, in run
self.__target(*self.__args, **self.__kwargs)
File "C:\file.py", line 81, in main
month=get_month(eventtime)
File "C:\file.py", line 62, in get_month
return datetime.datetime.strptime(time, '%Y-%m-%dT%H:%M:%S+0000').strftime("%B").lower()
AttributeError: 'module' object has no attribute '_strptime'
Answer: Just ran into this exact problem. It's a tricky one - took me an hour or so to
track it down. I tried launching the shell and entering in the following code:
import datetime
print(datetime.datetime.strptime("2015-4-4", "%Y-%m-%d"))
This worked fine. Then I tried it in a blank file in my workspace. This gave
the same error you described. I tried running it from the command line in my
workspace. Still gave the error. I then launched the shell from my workspace.
This time it gave the error in the shell environment. As it turned out, any
directory other than the one I was in worked fine.
The problem was that my project was a python calendar app, and my main file
was called "calendar.py". This conflicted with some native import, thus
creating the bizarre error.
In your case, I'd bet anything the problem is the name of your file:
"file.py". Call it something else, and you should be good to go.
|
Error on deploying Flask application using wsgi on apache2
Question: I am having a problem deploying a flask application on apache2 using mod_wsgi.
Error log and config files follow. I always get internal server error. This is
very similar to [How to solve import errors while trying to deploy Flask using
WSGI on Apache2](http://stackoverflow.com/questions/3696606/how-to-solve-
import-errors-while-trying-to-deploy-flask-using-wsgi-on-apache2) but for some
reason the solution proposed there did not work here.
apache error log
[Thu Aug 27 12:06:30.366817 2015] [:error] [pid 9330:tid 140623686452992] [remote 2.239.9.178:64904] mod_wsgi (pid=9330): Target WSGI script '/var/www/bitcones/bitcones.wsgi' cannot be loaded as Python module.
[Thu Aug 27 12:06:30.366867 2015] [:error] [pid 9330:tid 140623686452992] [remote 2.239.9.178:64904] mod_wsgi (pid=9330): Exception occurred processing WSGI script '/var/www/bitcones/bitcones.wsgi'.
[Thu Aug 27 12:06:30.366894 2015] [:error] [pid 9330:tid 140623686452992] [remote 2.239.9.178:64904] Traceback (most recent call last):
[Thu Aug 27 12:06:30.366913 2015] [:error] [pid 9330:tid 140623686452992] [remote 2.239.9.178:64904] File "/var/www/bitcones/bitcones.wsgi", line 4, in <module>
[Thu Aug 27 12:06:30.366969 2015] [:error] [pid 9330:tid 140623686452992] [remote 2.239.9.178:64904] from bitcones import bitcones as application
[Thu Aug 27 12:06:30.366981 2015] [:error] [pid 9330:tid 140623686452992] [remote 2.239.9.178:64904] File "/var/www/bitcones/bitcones/bitcones.py", line 6, in <module>
[Thu Aug 27 12:06:30.367045 2015] [:error] [pid 9330:tid 140623686452992] [remote 2.239.9.178:64904] from analysis import cone as _cone, flow
[Thu Aug 27 12:06:30.367056 2015] [:error] [pid 9330:tid 140623686452992] [remote 2.239.9.178:64904] File "/var/www/bitcones/bitcones/analysis/cone.py", line 5, in <module>
[Thu Aug 27 12:06:30.367121 2015] [:error] [pid 9330:tid 140623686452992] [remote 2.239.9.178:64904] from analysis.statistics import purity_statistics
[Thu Aug 27 12:06:30.367139 2015] [:error] [pid 9330:tid 140623686452992] [remote 2.239.9.178:64904] ImportError: No module named analysis.statistics
bitcones.wsgi
#!/usr/bin/python
import sys
sys.path.insert(0,"/var/www/bitcones")
from bitcones import bitcones as application
apache virtual host file
<VirtualHost *:80>
ServerName <my-server-name>
ServerAdmin <my-email>
WSGIDaemonProcess bitcones user=<my-username> group=<my-username> threads=5
WSGIScriptAlias / /var/www/bitcones/bitcones.wsgi
<Directory /var/www/bitcones>
WSGIProcessGroup bitcones
WSGIApplicationGroup %{GLOBAL}
Order deny,allow
Allow from all
</Directory>
part of my app tree (everything is under /var/www/bitcones/)
├── bitcones
│ ├── analysis
│ │ ├── <some_files>
│ │ └── statistics
│ │ │ ├── <some_files>
│ ├── bitcones.py
│ ├── static
│ │ ├── <some static content>
│ └── templates
│ └── <my_templates>.html
└── bitcones.wsgi
This should be sufficient to figure out why I'm having this import error. If
any other file/configuration is needed please ask. I'm loosing my mind.
Thanks!
EDIT: I just want to add that I was following this guide:
<http://flask.pocoo.org/docs/0.10/deploying/mod_wsgi/>
Answer: You set the Python version that mod_wsgi uses when you install libapache2-mod-
wsgi (python 2) or libapache2-mod-wsgi-py3 (python 3). I would guess you're on
Python 2 from what you describe, since using python 3 is a more deliberate
choice than 2. I don't think that's your problem, though. I think it's an
import problem, like Graham said.
I recommend using `from bitcones.analysis.statistics import purity_statistics`
for your import statement.
|
Use OrderedDict or ordered list? (novice)
Question: (Using Python 3.4.3) Here's what I want to do: I have a dictionary where the
keys are strings and the values are the number of times that string occurs in
file. I need to output which string(s) occur with the greatest frequency,
along with their frequencies (if there's a tie for the most-frequent, output
all of the most-frequent).
I had tried to use OrderedDict. I can create it fine, but I struggle to get it
to output specifically the most frequently occurring. I can keep trying, but
I'm not sure an OrderedDict is really what I should be using, since I'll never
need the actual OrderedDict once I've determined and output the most-frequent
strings and their frequency. A fellow student recommended an ordered list, but
I don't see how I'd preserve the link between the keys and values as I
currently have them.
Is OrderedDict the best tool to do what I'm looking for, or is there something
else? If it is, is there a way to filter/slice(or equivalent) the OrderedDict?
Answer: You can simply use `sorted` with a proper key function; in this case you can
use `operator.itemgetter(1)`, which sorts your items by value.
from operator import itemgetter
print sorted(my_dict.items(),key=itemgetter(1),reverse=True)
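Since the goal is only the top frequency (including ties), a full sort is not even needed: `collections.Counter` plus one pass does it, and the `OrderedDict` can be dropped entirely. A small sketch with made-up counts:

```python
from collections import Counter

counts = Counter({'spam': 3, 'eggs': 3, 'ham': 1})  # string -> occurrences
top = max(counts.values())
most_frequent = sorted(w for w, c in counts.items() if c == top)
print(most_frequent, top)  # ['eggs', 'spam'] 3
```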
|
Python pandas concatenate: join="inner" works on toy data, not on real data
Question: I'm working on topic modeling data where I have one data frame with a small
selection of topics and their scores for each document or author (called
"scores"), and another data frame with the top three words for all 250 topics
(called "words").
I'm trying to combine the two data frames in a way to have an extra column in
"scores", in which the top three words from "words" appear for each of the
topics included in "scores". This is useful for visualizing the data as a
heatmap, as `seaborn` or `pyplot` will pick up the labels automatically from
such a dataframe.
I have tried a wide variety of merge and concat commands, but do not get the
desired result. The strange thing is: what seems the most logical command,
according to my understanding of the relevant documentation and the examples
there (i.e. use `concat` on the two df with `axis=1` and `join="inner"`),
works on toy data but does not work on my real data.
Here is my toy data with the code I used to generate it and to do the merge:
import pandas as pd
## Defining the two data frames
scores = pd.DataFrame({'author1': ['1.00', '1.50'],
'author2': ['2.75', '1.20'],
'author3': ['0.55', '1.25'],
'author4': ['0.95', '1.3']},
index=[1, 3])
words = pd.DataFrame({'words': ['cadavre','fenêtre','musique','mariage']},
index=[0, 1, 2, 3])
## Inspecting the two dataframes
print("\n==scores==\n", scores)
print("\n==words==\n", words)
## Merging the dataframes
merged = pd.concat([scores, words], axis=1, join="inner")
## Check the result
print("\n==merged==\n", merged)
And this is the output, as expected:
==scores==
author1 author2 author3 author4
1 1.00 2.75 0.55 0.95
3 1.50 1.20 1.25 1.3
==words==
words
0 cadavre
1 fenêtre
2 musique
3 mariage
==merged==
author1 author2 author3 author4 words
1 1.00 2.75 0.55 0.95 fenêtre
3 1.50 1.20 1.25 1.3 mariage
This is exactly what I would like to accomplish with my real data. And
although the two dataframes seem no different from the test data, I get an
empty dataframe as the result of the merge.
Here are is a small example from my real data:
someScores (complete table):
blanche policier
108 0.003028 0.017494
71 0.002997 0.016956
115 0.029324 0.016127
187 0.004867 0.017631
122 0.002948 0.015118
firstWords (first 5 rows only; the index goes to 249, all index entries in
"someScores" have an equivalent in "firstwords"):
topicwords
0 château-pays-intendant (0)
1 esclave-palais-race (1)
2 linge-voisin-chose (2)
3 question-messieurs-réponse (3)
4 prince-princesse-monseigneur (4)
5 arbre-branche-feuille (5)
My merge command:
dataToPlot = pd.concat([someScores, firstWords], axis=1, join="inner")
And the resulting data frame (empty)!
Empty DataFrame
Columns: [blanche, policier, topicwords]
Index: []
I have tried many variants, like using `merge` instead or creating extra
columns replicating the indexes and then merging on those with `left_on` and
`right_on`, but then I either get the same result or I just get NaN in the
"topicwords" column.
Any hints and help would be greatly appreciated!
Answer: An inner join only returns rows whose index labels are present in both
dataframes. If the row labels of `someScores` (_108 71 115 187 122_) and
`firstWords` share no common value (for example because one index holds strings
such as `'108'` while the other holds integers), the result is an empty
dataframe.
Either set these indices correctly (matching values and dtype) or specify
different criteria for joining.
You can confirm the problem by checking for common values in both indexes:
someScores.index.intersection(firstWords.index)
For different strategies of joining refer
[documentation](http://pandas.pydata.org/pandas-docs/stable/merging.html).
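A toy reconstruction of the fix: once the row labels of the scores frame actually exist (with the same dtype) in the words frame's index, plain index alignment does the work. The topic-word strings here are placeholders:

```python
import pandas as pd

someScores = pd.DataFrame({'blanche': [0.003028, 0.029324]}, index=[108, 115])
firstWords = pd.DataFrame(
    {'topicwords': ['topic-%d' % i for i in range(250)]}, index=range(250))

dataToPlot = someScores.join(firstWords)  # aligns on the shared integer labels
print(dataToPlot['topicwords'].tolist())  # ['topic-108', 'topic-115']
```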
|
Write to csv python Horizontally append Each time
Question: I wrote this piece of code which scrapes Amazon for some elements using the page
URL. Now I want to add a CSV function which enables me to append CSV columns
horizontally with the following variables: (Date_time, price, Merchant,
Sellers_count). Each time I run the code these columns should be added on the right
without removing any existing columns. Here are the code and the table format to
which I want to append:
# -*- coding: cp1252 -*-
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.by import By
import requests, csv, time, urllib2, gspread, os, ast, datetime
from scrapy import Selector as s
from lxml import html
from random import randint
from oauth2client.client import SignedJwtAssertionCredentials
x = lambda x: source.xpath(x).extract()
links = ['http://www.amazon.com/dp/B00064NZCK',
'http://www.amazon.com/dp/B000CIU7F8',
'http://www.amazon.com/dp/B000H5839I',
'http://www.amazon.com/dp/B000LTLBHG',
'http://www.amazon.com/dp/B000SDLXKU',
'http://www.amazon.com/dp/B000SDLXNC',
'http://www.amazon.com/dp/B000SPHPWI',
'http://www.amazon.com/dp/B000UUMHRE']
driver = webdriver.Firefox()
#driver.set_page_load_timeout(30)
for Url in links:
try:
driver.get(Url)
except:
pass
time.sleep(randint(1,3))
try:
html = driver.page_source
source = s(text=html,type="html")
except:
pass
try:
Page_link = x('//link[@rel="canonical"]//@href')
except:
pass
try:
Product_Name = x('//span[@id="productTitle"]/text()')
except:
pass
Product_Name = str(Product_Name).encode('utf-8'); Product_Name = Product_Name.replace("[u'","").replace("']","")
try:
price = x('//span[@id="priceblock_ourprice"]//text()')
except:
pass
try:
Merchant = x('//div[@id="merchant-info"]//a//text()')
except:
pass
try:
Sellers_count = x('//span[@class="olp-padding-right"]//a/text()')
except:
pass
if Merchant == []:
Merchant = 'Amazon'
else:
Merchant = Merchant[0]
price = str(price).replace("[u'","").replace("']","")
if len(Sellers_count)>0:
Sellers_count = Sellers_count[0].encode('utf-8')
else:
Sellers_count = str(Sellers_count).encode('utf-8')
try:
Sellers_count = Sellers_count.replace("Â new",""); Sellers_count = int(Sellers_count)-1
except:
pass
if Sellers_count == []:
Sellers_count = str(Sellers_count).replace("[]","")
else:
Sellers_count = Sellers_count
Date_time = datetime.datetime.now().strftime('%Y-%m-%d_%H-%M-%S')
print Date_time, Product_Name, Url, price, Merchant, Sellers_count
The Existing table format I want to append to :-
ASIN ID PRODUCT URL
B00064NZCK MG-5690 BigMouth Inc Over The Hill Parking Privelege Permit http://www.amazon.com/dp/B00064NZCK
B000CIU7F8 BM1102 BigMouth Inc Pocket Disgusting Sounds Machine http://www.amazon.com/dp/B000CIU7F8
B000H5839I MG-4774 BigMouth Inc All Occasion Over The Hill Cane http://www.amazon.com/dp/B000H5839I
B000LTLBHG BM1234 BigMouth Inc Beer Belt / 6 Pack Holster(Black) http://www.amazon.com/dp/B000LTLBHG
B000SDLXKU BM1103 BigMouth Inc Covert Clicker http://www.amazon.com/dp/B000SDLXKU
B000SDLXNC BM1254 BigMouth Inc Inflatable John http://www.amazon.com/dp/B000SDLXNC
B000SPHPWI SO:AP Design Sense Generic Weener Kleener Soap http://www.amazon.com/dp/B000SPHPWI
B000UUMHRE MG-5305 BigMouth Inc Over the Hill Rectal Thermometer http://www.amazon.com/dp/B000UUMHRE
Answer: You have to read the CSV you already have and write a new file that contains the
columns you add. Here is an example:
    with open('your.csv', 'w') as out_file:
        with open('new.csv', 'r') as in_file:
            for line in in_file:
                out_file.write(line.rstrip('\n') + '\t' + Date_time + '\t' + Product_Name + '\n')
Obviously, you have to manage the header (first line, I suppose) and join the new
columns with the same delimiter your existing file uses (a tab is assumed above).
Hope I helped you
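A more robust variant of the same idea uses the csv module and joins the scraped values on the URL column instead of concatenating raw strings. This is only a sketch: the filenames, the tab delimiter, and the stand-in data are assumptions to adapt to your real files:

```python
import csv

# Build a tiny stand-in for the existing table (tab-separated here;
# adjust the delimiter and filenames to match your real files).
with open('new.csv', 'w') as f:
    csv.writer(f, delimiter='\t').writerows([
        ['B00064NZCK', 'MG-5690', 'Parking Permit',
         'http://www.amazon.com/dp/B00064NZCK'],
    ])

# Hypothetical scraped values keyed by URL, the column shared with the table.
scraped = {'http://www.amazon.com/dp/B00064NZCK': ['2015-09-01_12-00-00', '19.99']}

with open('new.csv') as in_file, open('your.csv', 'w') as out_file:
    reader = csv.reader(in_file, delimiter='\t')
    writer = csv.writer(out_file, delimiter='\t')
    for row in reader:
        # Append the scraped columns, or blanks when the URL was not scraped.
        writer.writerow(row + scraped.get(row[-1], ['', '']))
```

In the scraper loop you would fill `scraped` with one entry per `Url` instead of hard-coding it.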
|
Python, url parsing
Question: I have url e.g: "<http://www.nicepage.com/nicecat/something>" And I need parse
it, I use:
from urlparse import urlparse
url=urlparse("http://www.nicepage.com/nicecat/something")
#then I have:
#url.netloc() -- www.nicepage.com
#url.path() -- /nicecat/something
But I want to delete "www", and parse it little more. I would like to have
something like this:
#path_without_www -- nicepage.com
#list_of_path -- list_of_path[0] -> "nicecat", list_of_path[1] -> "something"
Answer: How about this:
import re
from urlparse import urlparse
url = urlparse('http://www.nicepage.com/nicecat/something')
    url = url._replace(netloc=re.sub(r'^(www\.)(.*)', r'\2', url.netloc))
The regex strips the 'www.' from the beginning of the netloc. From there you
can parse it more as you wish.
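For the second part of the question, splitting the path into components, no regex is needed at all. A small sketch (the try/except import keeps it working on both Python 2 and 3):

```python
try:
    from urlparse import urlparse  # Python 2
except ImportError:
    from urllib.parse import urlparse  # Python 3

url = urlparse('http://www.nicepage.com/nicecat/something')

# Strip a leading "www." without a regex.
netloc = url.netloc
if netloc.startswith('www.'):
    netloc = netloc[4:]

# Split the path into its components, dropping empty pieces.
list_of_path = [part for part in url.path.split('/') if part]

print(netloc)        # nicepage.com
print(list_of_path)  # ['nicecat', 'something']
```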
|
Improve reCaptcha 2.0 solving automation script (Selenium)
Question: I've written a python with selenium code to solve [new behaviour
captcha](http://scraping.pro/no-captcha-recaptcha-challenge/). But something
is lacking as to fully imitate user behaviour: the code works to locate and
click on a captcha, yet after that google sets up additional pictures check
[](http://i.stack.imgur.com/mStXU.jpg)
which is not easy to automate. How to improve the code to solve captcha
immediately without pictures check (letting google no hint of robot presence)?
[reCaptcha testing ground](http://tarex.ru/testdir/recaptcha/recaptcha.php).
## Python code
from time import sleep
from random import uniform
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.wait import WebDriverWait
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.support import expected_conditions as EC
# to imitate hovering
def hover(element):
hov = ActionChains(driver).move_to_element(element)
hov.perform()
# optional: adding www.hola.org proxy profile to FF (extention is installed on FF, Win 8)
ffprofile = webdriver.FirefoxProfile()
hola_file = '/Users/Igor/AppData/Roaming/Mozilla/Firefox/Profiles/7kcqxxyd.default-1429005850374/extensions/hola/hola_firefox_ext_1.9.354_www.xpi'
ffprofile.add_extension(hola_file)
# end of the optional part
driver = webdriver.Firefox(ffprofile)
url='http://tarex.ru/testdir/recaptcha/recaptcha.php'
# open new tab, also optional
driver.find_element_by_tag_name('body').send_keys(Keys.COMMAND + 't')
driver.get(url)
recaptchaFrame = WebDriverWait(driver, 10).until(
EC.presence_of_element_located((By.TAG_NAME ,'iframe'))
)
frameName = recaptchaFrame.get_attribute('name')
# move the driver to the iFrame...
driver.switch_to_frame(frameName)
# ************* locate CheckBox **************
CheckBox = WebDriverWait(driver, 10).until(
EC.presence_of_element_located((By.ID ,"recaptcha-anchor"))
)
# ************* hover CheckBox ***************
rand=uniform(1.0, 1.5)
print('\n\r explicit wait for ', rand , ' seconds...')
sleep(rand)
hover(CheckBox)
# ************* click CheckBox ***************
rand=uniform(0.5, 0.7)
print('\n\r explicit wait for ', rand , 'seconds...')
sleep(rand)
# making click on CheckBox...
clickReturn= CheckBox.click()
print('\n\r after click on CheckBox... \n\r CheckBox click result: ' , clickReturn)
Answer: You can't do that. I think the image challenge is triggered when too many
requests come from the same IP, so you can't bypass it from the script itself.
What you can do is use proxies to spread the requests across different IPs.
|
Python:requests.exceptions.ConnectionError: ('Connection aborted.', BadStatusLine("''",))
Question: I encounter this error when I'm trying to download a lot of pages from a
website. The script is pieced together and modified from several other scripts,
and I am rather unfamiliar with Python and programming.
The version of Python is 3.4.3 and the version of Requests is 2.7.0.
This is the script:
import requests
from bs4 import BeautifulSoup
import os.path
s = requests.session()
login_data = {'dest': '/','user': '******', 'pass': '******'}
header_info={'User-Agent':'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:38.0) Gecko/20100101 Firefox/38.0'}
url='http://www.oxfordreference.com/LOGIN'
s.post(url,data=login_data,headers=header_info)
for i in range(1,100):
downprefix='http://www.oxfordreference.com/view/10.1093/acref/9780198294818.001.0001/acref-9780198294818-e-'
downurl=downprefix+str(i)
r=s.get(downurl,headers=header_info,timeout=30)
if r.status_code==200:
soup=BeautifulSoup(r.content,"html.parser")
shorten=str(soup.find_all("div", class_="entryContent"))
fname='acref-9780198294818-e-'+str(i)+'.htm'
newname=os.path.join('shorten',fname)
htmfile=open(newname,'w',encoding="utf_8")
htmfile.write(shorten)
htmfile.close()
print('Success in '+str(i))
else:
print('Error in '+str(i))
errorfile=open('errors.txt','a',encoding="utf_8")
errorfile.write(str(i))
errorfile.write('\n')
errorfile.close()
The complete trackback is:
Traceback (most recent call last):
File "D:\Program Files (x86)\python343\lib\site-packages\requests\packages\urllib3\connectionpool.py", line 372, in _make_request
httplib_response = conn.getresponse(buffering=True)
TypeError: getresponse() got an unexpected keyword argument 'buffering'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "D:\Program Files (x86)\python343\lib\site-packages\requests\packages\urllib3\connectionpool.py", line 544, in urlopen
body=body, headers=headers)
File "D:\Program Files (x86)\python343\lib\site-packages\requests\packages\urllib3\connectionpool.py", line 374, in _make_request
httplib_response = conn.getresponse()
File "D:\Program Files (x86)\python343\lib\http\client.py", line 1171, in getresponse
response.begin()
File "D:\Program Files (x86)\python343\lib\http\client.py", line 351, in begin
version, status, reason = self._read_status()
File "D:\Program Files (x86)\python343\lib\http\client.py", line 321, in _read_status
raise BadStatusLine(line)
http.client.BadStatusLine: ''
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "D:\Program Files (x86)\python343\lib\site-packages\requests\adapters.py", line 370, in send
timeout=timeout
File "D:\Program Files (x86)\python343\lib\site-packages\requests\packages\urllib3\connectionpool.py", line 597, in urlopen
_stacktrace=sys.exc_info()[2])
File "D:\Program Files (x86)\python343\lib\site-packages\requests\packages\urllib3\util\retry.py", line 245, in increment
raise six.reraise(type(error), error, _stacktrace)
File "D:\Program Files (x86)\python343\lib\site-packages\requests\packages\urllib3\packages\six.py", line 309, in reraise
raise value.with_traceback(tb)
File "D:\Program Files (x86)\python343\lib\site-packages\requests\packages\urllib3\connectionpool.py", line 544, in urlopen
body=body, headers=headers)
File "D:\Program Files (x86)\python343\lib\site-packages\requests\packages\urllib3\connectionpool.py", line 374, in _make_request
httplib_response = conn.getresponse()
File "D:\Program Files (x86)\python343\lib\http\client.py", line 1171, in getresponse
response.begin()
File "D:\Program Files (x86)\python343\lib\http\client.py", line 351, in begin
version, status, reason = self._read_status()
File "D:\Program Files (x86)\python343\lib\http\client.py", line 321, in _read_status
raise BadStatusLine(line)
requests.packages.urllib3.exceptions.ProtocolError: ('Connection aborted.', BadStatusLine("''",))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "D:\stuff\Mdict\dict by me\odoa\newahktest\CrawlTest2.py", line 14, in <module>
r=s.get(downurl,headers=header_info,timeout=30)
File "D:\Program Files (x86)\python343\lib\site-packages\requests\sessions.py", line 477, in get
return self.request('GET', url, **kwargs)
File "D:\Program Files (x86)\python343\lib\site-packages\requests\sessions.py", line 465, in request
resp = self.send(prep, **send_kwargs)
File "D:\Program Files (x86)\python343\lib\site-packages\requests\sessions.py", line 573, in send
r = adapter.send(request, **kwargs)
File "D:\Program Files (x86)\python343\lib\site-packages\requests\adapters.py", line 415, in send
raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', BadStatusLine("''",))
Answer: The host you're talking to did not respond properly. This usually happens when
you try to connect to an https service using http, but there may be a lot of
other situations too.
Probably the best way to check what's going on is to get a network traffic
analyser (for example [wireshark](https://www.wireshark.org/)) and look at the
connection.
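If the failures are transient (the server occasionally drops the connection mid-stream), a common workaround is to retry the request a few times before giving up. A minimal sketch, shown with a stub instead of a live request so the pattern itself is clear:

```python
import time

def get_with_retries(fetch, attempts=3, delay=1):
    """Call fetch() up to `attempts` times, sleeping `delay` seconds
    after each failure; re-raise the last error if every attempt fails."""
    last_error = None
    for _ in range(attempts):
        try:
            return fetch()
        except IOError as exc:  # requests exceptions subclass IOError
            last_error = exc
            time.sleep(delay)
    raise last_error

# Demonstration with a stub that fails twice, then succeeds:
calls = {'n': 0}
def flaky():
    calls['n'] += 1
    if calls['n'] < 3:
        raise IOError('Connection aborted.')
    return 'ok'

print(get_with_retries(flaky, delay=0))  # ok
```

In the script above you would pass something like `lambda: s.get(downurl, headers=header_info, timeout=30)` as `fetch`; the exact exception to catch (`requests.exceptions.ConnectionError`) is my assumption based on the traceback.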
|
py.test to test Cython C API modules
Question: I'm trying to set up unit tests for a Cython module to test some functions
that do not have python interface. The first idea was to check if `.pyx` files
could directly be used by `py.test`'s test runner, but apparently it only
scans for `.py` files.
Second idea was to write the `test_*` methods in one Cython module, which
could then be imported into a plain `.py` file. Let say we have a `foo.pyx`
module with the contents we want to test:
cdef int is_true():
return False
then a `test_foo.pyx` module that uses the C API to test the `foo` module:
cimport foo
def test_foo():
assert foo.is_true()
and then import these in a plain `cython_test.py` module that would just
contain this line:
from foo_test import *
The `py.test` test runner does find `test_foo` this way, but then reports:
/usr/lib/python2.7/inspect.py:752: in getargs
raise TypeError('{!r} is not a code object'.format(co))
E TypeError: <built-in function test_foo> is not a code object
Is there any better way to test Cython C-API code using `py.test`?
Answer: So, in the end, I managed to get `py.test` to run tests directly from a
Cython-compiled `.pyx` files. However, the approach is a terrible hack devised
to make use of `py.test` Python test runner as much as possible. It might stop
working with any `py.test` version different from the one I prepared the hack
to work with (which is 2.7.2).
First thing was to defeat `py.test`'s focus on `.py` files. Initially,
`py.test` refused to import anything that didn't have a file with `.py`
extension. Additional problem was that `py.test` verifies whether the module's
`__FILE__` matches the location of the `.py` file. Cython's `__FILE__`
generally does not contain the name of the source file though. I had to
override this check. I don't know if this override breaks anything—all I can
say is that the tests seem to run well, but if you're worried, please consult
your local `py.test` developer. This part was implemented as a local
`conftest.py` file.
import _pytest
import importlib
class Module(_pytest.python.Module):
# Source: http://stackoverflow.com/questions/32250450/
def _importtestmodule(self):
# Copy-paste from py.test, edited to avoid throwing ImportMismatchError.
# Defensive programming in py.test tries to ensure the module's __file__
# matches the location of the source code. Cython's __file__ is
# different.
# https://github.com/pytest-dev/pytest/blob/2.7.2/_pytest/python.py#L485
path = self.fspath
pypkgpath = path.pypkgpath()
modname = '.'.join(
[pypkgpath.basename] +
path.new(ext='').relto(pypkgpath).split(path.sep))
mod = importlib.import_module(modname)
self.config.pluginmanager.consider_module(mod)
return mod
def collect(self):
# Defeat defensive programming.
# https://github.com/pytest-dev/pytest/blob/2.7.2/_pytest/python.py#L286
assert self.name.endswith('.pyx')
self.name = self.name[:-1]
return super(Module, self).collect()
def pytest_collect_file(parent, path):
# py.test by default limits all test discovery to .py files.
# I should probably have introduced a new setting for .pyx paths to match,
# for simplicity I am hard-coding a single path.
if path.fnmatch('*_test.pyx'):
return Module(path, parent)
Second major problem is that `py.test` uses Python's `inspect` module to check
names of function arguments of unit tests. Remember that `py.test` does that
to inject fixtures, which is a pretty nifty feature, worth preserving.
`inspect` does not work with Cython, and in general there seems to be no easy
way to make original `inspect` to work with Cython. Nor there is any other
good way to inspect Cython function's list of arguments. For now I decided to
make a small workaround where I'm wrapping all test functions in a pure Python
function with desired signature.
In addition to that, it seems that Cython automatically puts a `__test__`
attribute to each `.pyx` module. The way Cython does that interferes with
`py.test`, and needed to be fixed. As far as I know, `__test__` is an internal
detail of Cython not exposed anywhere, so it should not matter that we're
overwriting it. In my case, I put the following function into a `.pxi` file
for inclusion in any `*_test.pyx` file:
from functools import wraps
# For https://github.com/pytest-dev/pytest/blob/2.7.2/_pytest/python.py#L340
# Apparently Cython injects its own __test__ attribute that's {} by default.
# bool({}) == False, and py.test thinks the developer doesn't want to run
# tests from this module.
__test__ = True
def cython_test(signature=""):
''' Wrap a Cython test function in a pure Python call, so that py.test
can inspect its argument list and run the test properly.
Source: http://stackoverflow.com/questions/32250450/'''
if isinstance(signature, basestring):
code = "lambda {signature}: func({signature})".format(
signature=signature)
def decorator(func):
return wraps(func)(eval(code, {'func': func}, {}))
return decorator
# case when cython_test was used as a decorator directly, getting
# a function passed as `signature`
return cython_test()(signature)
After that, I could implement tests like:
include "cython_test_helpers.pxi"
from pytest import fixture
cdef returns_true():
return False
@cython_test
def test_returns_true():
assert returns_true() == True
@fixture
def fixture_of_true():
return True
@cython_test('fixture_of_true')
def test_fixture(fixture_of_true):
return fixture_of_true == True
If you decide to use the hack described above, please remember to leave
yourself a comment with a link to this answer—I'll try to keep it updated in
case better solutions are available.
|
How to make Python go back to asking for input
Question: So I want the program to go back to asking for the input once it has
completed.
I've asked this on reddit and gone through quite a few similar threads here,
and so far the answer seems to be loops: while true, perform x. But what is the
command for the program to go back to asking for the input on line 5?
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from time import sleep
card = input()
driver = webdriver.Firefox()
driver.get("http://www.mtgprice.com/")
elem = driver.find_element_by_xpath("//form/input")
elem.send_keys(card) # input
driver.implicitly_wait(5) # seconds
driver.find_element_by_class_name('btn-blue').click()
Answer: Everyone you asked is pretty much right. Additionally I'd put the chunk of
code in a function just because.
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from time import sleep
def web_stuff(card):
driver = webdriver.Firefox()
driver.get("http://www.mtgprice.com/")
elem = driver.find_element_by_xpath("//form/input")
elem.send_keys(card) # input
driver.implicitly_wait(5) # seconds
driver.find_element_by_class_name('btn-blue').click()
    while True:
        card = input()
        if not card:  # blank input ends the loop
            break
        web_stuff(card)
|
Scrapy 4xx/5xx error handling
Question: We're building a distributed system that uses Amazon's SQS to dispatch
messages to workers that run scrapy spiders based on the messages' contents.
We (obviously) only want to remove a message from the queue if its
corresponding spider has been run successfully, i.e. without encountering
4xx/5xx responses.
What I'd like to do is hook into scrapy's `signals` API to fire a callback
that deletes the message from the queue when the spider's closed successfully,
but I'm not sure whether that's actually the semantics of
`signals.spider_closed` (as opposed to "this spider has closed for literally
any reason.")
It's also not clear (at least to me) whether `signals.spider_error` is fired
when encountering an HTTP error code, or only when a Python error is raised
from within the spider.
Any suggestions?
Answer: `signals.spider_error` is raised when a Python error occurs during the spider-
crawl process. If the error occurs in the `spider_closed` signal handler then
`spider_error` is not raised.
A basic approach would be to have a signal handler extension which will
register to the `spider_closed` and `spider_error` events to handle the
statuses -- do not remove the URL from the queue if it contains a status
higher than 399 for example.
Then in these handler methods you can utilize the stats collected by the
spider to see if it was OK or not:
class SignalHandler(object):
@classmethod
def from_crawler(cls,crawler):
ext = cls()
crawler.signals.connect(ext.spider_error, signal=signals.spider_error)
crawler.signals.connect(ext.spider_closed, signal=signals.spider_closed)
return ext
def spider_error(self, failure, response, spider):
print "Error on {0}, traceback: {1}".format(response.url, failure.getTraceback())
def spider_closed(self, spider):
            if spider.crawler.stats.get_value('downloader/response_status_count/200') == spider.crawler.stats.get_value('downloader/response_count'):
                pass  # OK, all went fine
            # get_value() returns None for a missing key, so supply 0 as the default
            if spider.crawler.stats.get_value('downloader/response_status_count/404', 0) != 0 or spider.crawler.stats.get_value('downloader/response_status_count/503', 0) != 0:
                pass  # something went wrong
And of course do not forget to add your `SignalHandler` in the `settings.py`:
EXTENSIONS = {'myproject.extensions.signal_handler.SignalHandler': 599,}
There is another way of course which requires a bit more coding:
You could handle the status codes yourself with the `handle_httpstatus_list`
parameter of your spider. This allows your spider to handle a list of HTTP
status which would be ignored per default.
Summarizing an approach would be to handle the statuses you are interested in
in your spider and collect them into a `set`.
This would be the spider:
class SomeSpider(scrapy.Spider):
name = "somespider"
start_urls = {"http://stackoverflow.com/questions/25308231/liferay-6-2-lar-import-no-journalfolder-exists-with-the-primary-key-2"}
handle_httpstatus_list = [404, 503]
encountered = set()
def parse(self, response):
self.encountered.add(response.status)
# parse the response
This would be the extension's new method:
def spider_closed(self, spider):
if 404 in spider.encountered:
# handle 404
|
Python Cursor to Csv using csv.writer.writerows
Question: I'm currently trying to write the results of a MySQL select statement to a
csv.
I'm using MySQLdb to select data, which returns a cursor.
I'm then passing that cursor into a function that writes it to a csv:
def write_cursor_to_file(cursor, path, delimiter='|', quotechar='`'):
with open(path, 'wb') as f:
csv.writer(f, delimiter=delimiter, quoting=csv.QUOTE_ALL, \
quotechar=quotechar).writerows(cursor)
The issue I'm running into is one column in the mysql database can contain
paragraphs. The csv.writer is looking at that column and writing the break in
between the paragraphs as two separate rows.
EXAMPLE:
`26159`|`306`|`14448`|`XXXXXXXXXX`|`XXXXXXXXXXXX`|`1`|`2`|`387`|`67`|`XXXXXXX`|`|`|`|`|`2011-08-04
05:41:45`|`2015-06-03 18:38:04`|`2011-08-04 07:00:00`|`2011-08-06
06:59:00`|`0`|`1`|`0`|`0.0000000`|`1`|`-2`|`-2`|`-2`|`-2`|Lorem ipsum dolor
sit amet, consectetur adipiscing elit. In facilisis, enim sit amet interdum
ultricies, nisl elit aliquam justo, fermentum ullamcorper ligula nisl vitae
nisi. Proin semper nunc a magna elementum imperdiet. In hac habitasse platea
dictumst. Proin lobortis neque non nulla volutpat gravida. Phasellus lectus
lacus, vehicula vel felis ac, convallis dignissim quam. Mauris semper, enim
eget ultrices finibus, erat libero vehicula ante, vitae varius ex erat quis
ipsum. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nunc dignissim
venenatis euismod. Nam venenatis urna ac arcu efficitur, id lobortis ligula
elementum. Quisque eget sollicitudin erat. Lorem ipsum dolor sit amet,
consectetur adipiscing elit. Donec gravida velit at erat consequat ultrices.
Aenean tempus eros non nulla pellentesque faucibus. Integer laoreet placerat
sem eget porta. Quisque porttitor tortor in mollis mollis. Donec et auctor
lacus. Pellentesque rutrum, nibh non convallis dignissim, dui
m`|`XXXXXXXXX`|`1`|`0`|`XXXXXXXXXXXX`|`0.0000000`|``|`0.0000000`|`0`|`0.0000000`|`1`|`OO`|`0`|``|`0.0000000`|`XXXXXXXXXXX`|`150`|``|``|``|`0`|`0`|`0`|`0.0000000`|``|`0.0000000`|`0.0000000`|`0`|`0`|`XXX`|`0`|`0`|`0`|``|`XXXXXXXXXX`|`0`|`0`|`XXXXXX`
So instead of writing the above block of text into one column in the csv, the
column is ending at the end of the first paragraph and the next row starts
with the second paragraph. The desired output is to have both paragraphs in
one column on the same row.
Answer: The code works flawlessly and creates the output you desire. However, not all
spreadsheets can identify the backtick as an enclosing mark.
The problem you are seeing is that the csv import feature of the spreadsheet
program you are using does not identify the backtick mark (used in your code)
as a quote/field-enclosing mark. As you know, standard quote-enclosing marks
are single- and double-quotes.
To make this work, the csv import program you are using would need to learn
that backtick marks are the quote enclosing marks instead. LibreOffice Calc is
flexible enough to learn this, as shown in the screenshot below, and after
learning this it imports the csv precisely as you desire, with the entire
multi-line field in a single row/column. (I added another row of data in the
sample.)
[](http://i.stack.imgur.com/HOKNh.png)
As for Excel ... well ... yes, the code would need to be modified because the
backtick is an unrecognized quote-enclosing mark in Excel. See [this SuperUser
discussion](http://superuser.com/questions/733902/excel-how-to-make-text-qualifier-editable-or-add-a-new-entry-in-the-existing-lis) for more details on
that. Your sample text does not have quotes in it, so perhaps you can bend to
Excel's restrictions by using a more standard quote-enclosing mark and if
necessary escape out any enclosed quotations by double-quoting per the
SuperUser discussion.
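If switching to a standard quote character is an option for you, here is a hedged sketch of the writer with double quotes (Python 3 file handling shown; on Python 2 open with `'wb'` and drop the `newline` argument):

```python
import csv

rows = [
    (26159, 'First paragraph.\n\nSecond paragraph with an embedded " quote.'),
    (26160, 'single line'),
]

# Standard double quote as quotechar; embedded quotes are escaped by
# doubling them (doublequote=True is the csv module's default).
with open('out.csv', 'w', newline='') as f:
    csv.writer(f, delimiter='|', quoting=csv.QUOTE_ALL,
               quotechar='"').writerows(rows)

# Reading it back keeps the multi-line field in a single row/column.
with open('out.csv', newline='') as f:
    back = list(csv.reader(f, delimiter='|', quotechar='"'))
print(len(back))  # 2
```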
|
Patch a method outside python class
Question: I am interested in patching a method which is called by another method in one
file. Example - original.py file contains -
def A():
a = 10
b = 5
return a*b;
def B():
c = A()
return c* 10
I want to write unit test for this file , say call it test.py
import mock
import unittest
    class TestOriginal(unittest.TestCase):
def test_Original_method(self):
with patch(''):
How can I use patch and mock modules to test original.py. I want A() to always
return MagicMock() object instead of an integer.
Answer: You simply patch out the `A` global in the module under test. I'd use the
`@patch` decorator syntax here:
    from mock import patch
    import unittest
    import module_under_test

    class TestOriginal(unittest.TestCase):
@patch('module_under_test.A')
def test_Original_method(self, mocked_A):
mocked_A.return_value = 42
result = module_under_test.B()
mocked_A.assert_called_with()
self.assertEqual(result, 420)
This passes in the `MagicMock` mock object for `A()` as an extra argument to
the test method.
Note that we explicitly named the module here. You could also use
`patch.object()`, just naming the attribute on the module (which are your
module globals):
    class TestOriginal(unittest.TestCase):
@patch.object(module_under_test, 'A')
def test_Original_method(self, mocked_A):
mocked_A.return_value = 42
result = module_under_test.B()
mocked_A.assert_called_with()
self.assertEqual(result, 420)
You can still use a `with` statement too, of course:
    class TestOriginal(unittest.TestCase):
def test_Original_method(self):
with patch('module_under_test.A') as mocked_A:
mocked_A.return_value = 42
result = module_under_test.B()
mocked_A.assert_called_with()
self.assertEqual(result, 420)
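To see the whole round trip in one self-contained snippet, here is a sketch using `unittest.mock` (the Python 3 standard-library version of the `mock` package) that patches a function defined in the current module, which stands in for `module_under_test`:

```python
import sys
from unittest import mock  # Python 3 stdlib; the `mock` package has the same API

def A():
    return 50

def B():
    return A() * 10

# B looks up A in its module's globals, so patch A on that module object:
this_module = sys.modules[__name__]
with mock.patch.object(this_module, 'A', return_value=42) as mocked_A:
    result = B()

mocked_A.assert_called_with()
print(result)  # 420
print(B())     # 500: the patch is undone on leaving the with-block
```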
|
Allen Brain Institute - Mouse Connectivity API and Mouse Connectivity Cache examples
Question: I'm trying to follow the [Mouse Connectivity
sdk](http://alleninstitute.github.io/AllenSDK/connectivity.html) and get their
two examples working.
`pd` returns None for all projection signal density experiments. Why might this
be?
from allensdk.api.queries.mouse_connectivity_api import MouseConnectivityApi
mca = MouseConnectivityApi()
# get metadata for all non-Cre experiments
experiments = mca.experiment_source_search(injection_structures='root', transgenic_lines=0)
# download the projection density volume for one of the experiments
#pd = mca.download_projection_density('example.nrrd', experiments[0]['id'], resolution=25)
for exp in range(len(experiments)):
pd = mca.download_projection_density('example.nrrd', experiments[exp]['id'], resolution=25)
print(type(pd))
Results:
C:\Anaconda\python.exe C:/Users/user/PycharmProjects/MyFirstAllenBrain/Test.py
<type 'NoneType'>
<type 'NoneType'>
<type 'NoneType'>
<type 'NoneType'>
<type 'NoneType'>
<type 'NoneType'>
... etc
The thing, though, is that `experiments` does receive a value, so it appears
to be the case that the MouseConnectivityApi and nrrd (which I installed as
per the following [post](http://stackoverflow.com/questions/32240133/pip-install-pynrrd)) are working appropriately.
See here: [](http://i.stack.imgur.com/eXdHi.png)
Right, and now the second example
from allensdk.core.mouse_connectivity_cache import MouseConnectivityCache
# tell the cache class what resolution (in microns) of data you want to download
mcc = MouseConnectivityCache(resolution=25)
# use the ontology class to get the id of the isocortex structure
ontology = mcc.get_ontology()
isocortex = ontology['Isocortex']
# a list of dictionaries containing metadata for non-Cre experiments
experiments = mcc.get_experiments(injection_structure_ids=isocortex['id'])
# download the projection density volume for one of the experiments
pd = mcc.get_projection_density(experiments[0]['id'])
This is copied word for word from Allen and yet this is the error message I
get:
C:\Anaconda\python.exe C:/Users/user/PycharmProjects/MyFirstAllenBrain/Test.py
Traceback (most recent call last):
File "C:/Users/user/PycharmProjects/MyFirstAllenBrain/Test.py", line 14, in <module>
pd = mcc.get_projection_density(experiments[0]['id'])
File "C:\Users\user\AppData\Roaming\Python\Python27\site-packages\allensdk\core\mouse_connectivity_cache.py", line 170, in get_projection_density
return nrrd.read(file_name)
File "C:\Anaconda\lib\site-packages\nrrd.py", line 384, in read
data = read_data(header, filehandle, filename)
File "C:\Anaconda\lib\site-packages\nrrd.py", line 235, in read_data
mmap.MAP_PRIVATE, mmap.PROT_READ)
AttributeError: 'module' object has no attribute 'MAP_PRIVATE'
Process finished with exit code 1
Why might this occur?
And again as before (no need for another picture I assume), the `experiments`
variable does receive what appears to be values for each experiment.
Answer: Unfortunately this is a Windows-specific issue with the pynrrd/master github
repository right now. I know that one specific revision works:
<https://github.com/mhe/pynrrd/commit/3c0f3d577b0b435fb4825c14820322a574311af0>
To install this revision from a windows command prompt, you can:
> git clone https://github.com/mhe/pynrrd.git
> cd pynrrd
> git checkout 3c0f3d5
> cd ..
> pip install --upgrade pynrrd\
The backslash on the end is important. It tells pip to install from a local
path instead of checking PyPI.
Issue logged: <https://github.com/mhe/pynrrd/issues/18>
|
How can I process data after a specific line in python 2.6?
Question: I have a script that basically reads a text file and creates 8 lists. It works
perfectly if it reads the file from line 1. I need it to start reading the
text file from line 177 to line 352 (that is the last line).
This is my script and the change. I'm not getting any error but not any result
either. The program hangs there without response:
f = open("Output1.txt", "r")
    lines = [line.rstrip() for line in f if line != "\n"]  # Get all lines, strip newline chars, and remove lines that are just newlines.
NUM_LISTS = 8
groups = [[] for i in range(NUM_LISTS)]
listIndex = 0
for line in lines:
while line > 177: #here is the problem
if "Transactions/Sec for Group" not in line:
groups[listIndex].append(float(line))
listIndex += 1
if listIndex == NUM_LISTS:
listIndex = 0
value0 = groups[0]
value1 = groups[1]
value2 = groups[2]
value3 = groups[3]
value4 = groups[4]
value5 = groups[5]
value6 = groups[6]
value7 = groups[7]
json_file = 'json_global.json'
json_data = open(json_file)
data = json.load(json_data)
for var1 in range(0, 11):
a = value0[var1]
b = value1[var1]
c = value2[var1]
d = value3[var1]
e = value4[var1]
f = value5[var1]
g = value6[var1]
h = value7[var1]
var2 = var1 + 57
item = data[var2]['item']
cmd = data[var2]['command']
var1+= 1
    print item, cmd, a, b, c, d, e, f, g, h
Answer: `line` contains the contents of each line, not the line number. Even if it
did, this would still fail: the `while` condition never changes inside the loop,
so the body is either skipped entirely or repeats forever. Here's a way to do what you want:
for linenumber, line in enumerate(lines, 1):
if linenumber > 177:
do_stuff(line)
`enumerate()` takes an iterable and returns an iterable of `(index, item)`
tuples. The `1` argument tells it what index to start at; it defaults to `0`.
Adjust that and the number in `if linenumber > 177:` according to what you
want to do.
Another way of doing it is using `itertools.islice()`, as also mentioned by
[Anand S Kumar](https://stackoverflow.com/users/795990/anand-s-kumar) in [his
answer](https://stackoverflow.com/a/32262943/892383). Here's a version using
`islice()` that doesn't read the entire file into memory beforehand:
from itertools import islice
with open('Output1.txt', 'r') as f:
lines = (line.rstrip() for line in f if line != '\n')
for line in islice(lines, 177, None):
do_stuff(line)
That'll effectively slice the lines as if you had done `lines[177:]` (which is
another solution).
Note that you're not including lines that are only a newline, so line 177 in
the file is not the same line 177 in your program.
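To make the off-by-one behaviour concrete, a tiny self-contained check (with a stand-in list instead of the real file):

```python
from itertools import islice

# Stand-in for the file's stripped lines (352 of them, as in the question):
lines = ['line %d' % i for i in range(1, 353)]

# islice(lines, 177, None) skips the first 177 items, i.e. it starts at
# line 178, matching `if linenumber > 177` above.
selected = list(islice(lines, 177, None))
print(selected[0], len(selected))  # line 178 175
```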
|
How to know a group of dates are daily, weekly or monthly in Pandas Python?
Question: I have a dataframe in Pandas with the date as index. "YYYY-MM-DD" format. I
have a lot of rows in this dataframe which means a lot of date indexes.
For all of these dates, most of them are daily continuous, some of them are
weekly dates, some are yearly.
Example:
2015-01-05,
2015-01-06,
2015-01-07,
2015-01-08,
2015-01-09,
2015-01-16,
2015-01-23,
2015-01-30,
2015-02-28,
2015-03-30
So some of them are daily dates, possibly followed by several monthly, weekly, or
yearly dates.
**So how can I know in which dates duration, it is daily, weekly, monthly and
yearly?**
Remark: the daily one only have working day dates (Monday - Friday). For
Weekly dates, the Friday dates will be displayed. For Monthly/Quarterly/Yearly
dates, the last day of this month/quarter/year will be displayed.
Answer: Using the Pandas type [Timedelta](http://pandas.pydata.org/pandas-docs/stable/timedeltas.html).
> Starting in v0.15.0, we introduce a new scalar type Timedelta, which is a
> subclass of datetime.timedelta, and behaves in a similar manner, but allows
> compatibility with np.timedelta64 types as well as a host of custom
> representation, parsing, and attributes.
First value in col `days` is `NaN`, which is corrected by `df = df.fillna(1)`.
df['days'] = (df['date']-df['date'].shift()).dt.days
Other solutions for timedelta column `days` are:
df['days'] = (df['date']-df['date'].shift()).fillna(0)
df['days'] = df['date'].diff()
Then I consolidate different days of months and years and group by them.
import pandas as pd
import io
temp=u"""2015-01-05
2015-01-06
2015-01-07
2015-01-08
2015-01-09
2015-01-16
2015-01-23
2015-01-30
2015-02-28
2015-03-31
2015-04-30
2016-04-30"""
df = pd.read_csv(io.StringIO(temp), parse_dates=True, index_col=[0], header=None)
df['date'] = df.index
df['days'] = (df['date']-df['date'].shift()).dt.days
#normalize days of months for grouping to 30
#normalize days of years for grouping to 365
df['days_normalize'] = df['days'].replace([28,29,31], 30)
df['days_normalize'] = df['days_normalize'].replace(366, 365)
    #replace first value with 1 - corrects the unshifted first value
df = df.fillna(1)
print df
# date days days_normalize
#0
#2015-01-05 2015-01-05 1 1
#2015-01-06 2015-01-06 1 1
#2015-01-07 2015-01-07 1 1
#2015-01-08 2015-01-08 1 1
#2015-01-09 2015-01-09 1 1
#2015-01-16 2015-01-16 7 7
#2015-01-23 2015-01-23 7 7
#2015-01-30 2015-01-30 7 7
#2015-02-28 2015-02-28 29 30
#2015-03-31 2015-03-31 31 30
#2015-04-30 2015-04-30 30 30
#2016-04-30 2016-04-30 366 365
grouped = df.groupby("days_normalize")
for name, group in grouped:
print(group)
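Once the gaps are normalized, a small mapping can turn them into human-readable
frequency labels. A minimal self-contained sketch (the label names are my own
choice, not from the question):

```python
import pandas as pd

# Normalized day gaps as produced above: 1=daily, 7=weekly, 30=monthly, ...
s = pd.Series([1, 1, 7, 7, 30, 365])
freq_map = {1: 'daily', 7: 'weekly', 30: 'monthly', 90: 'quarterly', 365: 'yearly'}
print(s.map(freq_map).tolist())
# ['daily', 'daily', 'weekly', 'weekly', 'monthly', 'yearly']
```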
|
Comparing two variables with 'is' operator which are declared in one line in Python
Question: According to the [Documentation](https://docs.python.org/2/c-api/int.html):
> The current implementation keeps an array of integer objects for all
> integers between `-5` and `256`, when you create an int in that range you
> actually just get back a reference to the existing object. So it should be
> possible to change the value of `1`. I suspect the behaviour of Python in
> this case is undefined. :-)
So the following behaviors are normal.
>>> a = 256
>>> b = 256
>>> a is b
True
>>> c = 257
>>> d = 257
>>> c is d
False
But when i declare two variables like these, i am getting **True** -
>>> e = 258; f=258;
>>> e is f
True
I have checked the identity of the objects referenced by e and f-
>>> id(e)
43054020
>>> id(f)
43054020
They are same.
My question is what is happening when we are declaring e and f by separating
with semicolons? Why are they referencing to the same object (though the
values are out of the range of Python's array of integer objects) ?
It would be better, if you please explain it like you are explaining it to a
beginner.
Answer: This is _not_ an unexpected behavior, according to [Python Data
model](https://docs.python.org/2/reference/datamodel.html) it's an
implementation detail:
> Types affect almost all aspects of object behavior. Even the importance of
> object identity is affected in some sense: **for immutable types, operations
> that compute new values may actually return a reference to any existing
> object with the same type and value, while for mutable objects this is not
> allowed. E.g., after a = 1; b = 1, a and b may or may not refer to the same
> object with the value one, depending on the implementation** , but after c =
> []; d = [], c and d are guaranteed to refer to two different, unique, newly
> created empty lists. (Note that c = d = [] assigns the same object to both c
> and d.)
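As for why the single-line version shares the object: when both assignments are
compiled together (as they are on one `;`-separated line), CPython stores equal
constants only once in the resulting code object, so both names end up bound to
the same int. A small sketch of this implementation detail (behavior may differ
on other interpreters):

```python
# Compile both assignments as one unit, mimicking "e = 258; f = 258"
code = compile("e = 258; f = 258", "<demo>", "exec")
ns = {}
exec(code, ns)
print(ns["e"] is ns["f"])  # True on CPython: the constant 258 is stored once
```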
|
webbrowser.open_new_tab or webbrowser.open not working in ubuntu 14.04
Question: New tab with provided url is not opening in Ubuntu 14.04 Same code works in
Mac OS X Yosemite
I have flask installed on both Ubuntu 14.04 and Mac Yosemite Both have python
2.7.6 installed
Below is the source code:
import webbrowser
from flask import Flask
from flask import render_template
app = Flask(__name__)
@app.route('/', methods=['POST'])
def submit():
url = 'https://www.google.com'
webbrowser.open(url, new=0, autoraise=True)
return render_template("index.html")
if __name__ == '__main__':
app.debug = True
app.run()
I am accessing the flask app on Mac on port `5000` whereas on Ubuntu I am
accessing it on port `8080`
Let me know what more information I need to provide to help me debug.
* * *
After Debugging I think whether this behavior is because of SSL certificate
issue? In order to debug, I tried to create the environment on server same as
my local machine where it is working. BI stopped the apache web server on my
server and launched the flask app manually (so that I can access the page on
port 5000) and tried to launch the page using `http://127.0.0.1:5000` I
observed that the python logs in the terminal were erased and the screen
showed "`≪ ↑ ↓ Viewing[SSL] <Google Maps>`" in the bottom
Answer: Your current code does open a new browser window, but on the machine where your
server is running. If you want to open a new tab in the client's browser, you can use
the HTML attribute `target="_blank"` like this:
<a href="http://www.google.com/" target="_blank">Button</a>
|
How to use change desktop wallpaper using Python in Ubuntu 14.04 (with Unity)
Question: I tried this code:
import os
os.system("gsettings set org.gnome.desktop.background picture-uri file:///home/user/Pictures/wallpapers/X")
where `user` is my name and `X` is the picture.
But instead of changing the background to the given picture, it set the
default Ubuntu wallpaper.
What am I doing wrong?
Answer: First of all, make sure the file path is correct. Execute this line in a
terminal:
ls /home/user/Pictures/wallpapers/X
Did the file get listed? If so, move on to the next step.
Make sure that you know where the `gsettings` command is. In the terminal,
run:
which gsettings
That should get you the full path to `gsettings`. If nothing is displayed, the
directory containing `gsettings` isn't in [your `$PATH`
variable](http://stackoverflow.com/questions/14637979/how-to-permanently-set-
path-on-linux).
Let's say the path is `/usr/bin/gsettings`. Try to execute this in a terminal
(note the `file://` prefix, which the `picture-uri` key expects):
    /usr/bin/gsettings set org.gnome.desktop.background picture-uri file:///home/user/Pictures/wallpapers/X
If it works, pass the same string to `os.system()`:
    import os
    os.system("/usr/bin/gsettings set org.gnome.desktop.background picture-uri file:///home/user/Pictures/wallpapers/X")
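As a side note, `subprocess` is often preferred over `os.system()` because the
command is passed as a list and never goes through shell parsing. A sketch with
the same assumed paths (the call itself is left commented since it only makes
sense on a GNOME/Unity desktop):

```python
import subprocess

# Same assumed gsettings path and picture path as above.
cmd = [
    "/usr/bin/gsettings", "set",
    "org.gnome.desktop.background", "picture-uri",
    "file:///home/user/Pictures/wallpapers/X",
]
# subprocess.call(cmd) would run it and return the exit status
```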
|
Regex/Python - why is non capturing group captured in this case?
Question: Each element of this raw data array is parsed by regex
['\r\n\t\t\t\t\t\t',
'Monday, Tuesday, Wednesday, Thursday, Friday, Saturday:',
' 12:00 pm to 03:30 pm & 07:00 pm to 12:00 am\t\t\t\t\t',
'\r\n\t\t\t\t\t\t',
'Sunday:',
' 12:00 pm to 03:30 pm & 07:00 pm to 12:30 am\t\t\t\t\t']
This is my regex `(\\r|\\n|\\t)|(?:\D)(\:)`
<https://regex101.com/r/fV7wI2/1>
[](http://i.stack.imgur.com/NJZXO.png)
Please note that I'm trying to match the `:` after Saturday but not the `:` in
Time formats eg `12:00`
Although the above image classifies capturing/non capturing groups properly
on running `re.sub("(\\r|\\n|\\t)|(?:\D)(\:)",'',"Monday, Tuesday, Wednesday,
Thursday, Friday, Saturday:")`
returns
`'Monday, Tuesday, Wednesday, Thursday, Friday, Saturda'` (missing 'y' after
saturday)
instead of
`'Monday, Tuesday, Wednesday, Thursday, Friday, Saturday'`
why is this so?
Answer: You need to use a look-behind instead of a non-capturing group if you want to
check a substring for presence/absence, but exclude it from the match:
import re
s = "Monday, Tuesday, Wednesday, Thursday, Friday, Saturday:"
print(re.sub(r"[\r\n\t]|(?<!\d):",'',s))
# ^^^^^^^
# Result: Monday, Tuesday, Wednesday, Thursday, Friday, Saturday
See [IDEONE demo](https://ideone.com/UX5SNx)
Here, `(?<!\d)` only checks if the preceding character before a colon is not a
digit.
Also, alternation involves additional overhead, character class `[\r\n\t]` is
preferable, and you do not need any capturing groups (round brackets) since
you are not using them at all.
Also, please note that the regex is initialized with a raw string literal to
avoid overescaping.
Some [more details from Python _Regular Expression
Syntax_](https://docs.python.org/2/library/re.html#regular-expression-syntax)
regarding _non-capturing groups_ and _negative look-behinds_ :
> `(?<!...)`
> \- Matches if the current position in the string is not preceded by a match
> for `...`. This is called a **negative lookbehind assertion**. Similar to
> positive lookbehind assertions, the contained pattern must only match
> strings of some fixed length and shouldn’t contain group references.
> Patterns which start with negative lookbehind assertions may match at the
> beginning of the string being searched.
>
> `(?:...)`
> \- A non-capturing version of regular parentheses. Matches whatever regular
> expression is inside the parentheses, but the substring matched by the group
> cannot be retrieved after performing a match or referenced later in the
> pattern.
As look-behinds are _zero-width assertions_ (=expressions returning _true_ or
_false_ without moving the index any further in the string), they are exactly
what you need in this case where you want to _check_ but not _match_. A non-
capturing group will consume part of the string and thus will be part of the
match.
|
Tests in subdirectory
Question: In Django 1.8, I have an app with this setting:
app
|- tests/
| |- test_ook.py
| |- __init__.py
|- models.py
|- __init__.py
...
When I run `python manage.py test -v 2 app`, I get this error:
ImportError: 'tests' module incorrectly imported from '/home/.../site/app/tests'. Expected '/home/.../site/app'. Is this module globally installed?
_Why is this?_
_How can I fix it?_
Answer: This error generally occurs if you have `tests.py` or a `tests.pyc` file
present in the `app` i.e. at the same level as `tests` module.
Remove those files and it should work perfectly.
|
Untangle re.findall capturing groups: 'list' object has no attribute 'join'
Question: This function highlights certain keywords in a string with color. `Fore.CYAN`
and `Fore.RESET` are from the
[Colorama](https://pypi.python.org/pypi/colorama) module.
Is there a way to insert `regex` instead of the list `["This", "words",
"annotate"]`?
from colorama import Fore
def highlight(var, keywords):
replacement = Fore.CYAN + "\\1" + reset()
var = re.sub("(" + "|".join(map(re.escape, keywords)) + ")", replacement, var, re.I)
print var + Fore.RESET
string = "This string contains words to annotate"
highlight(string, ["This", "words", "annotate"])
I have tried this:
regex_keywords = re.findall(r"(This)|(Words)|(Annotate)", string, re.I)
highlight(string, regex_keywords)
Does not work, as it complains about `'list' object has no attribute 'join'`.
Answer: `highlight` needs any iterable of strings, but `re.findall` with three
capturing groups returns a list of tuples of strings. Remove the parentheses
and forget about the tuple:
regex_keywords = re.findall(r"This|Words|Annotate", string, re.I)
highlight(string, regex_keywords)
|
Increment Alphabet Python
Question: I have a list like this
['AX95', 'BD95']
I need to expand the list starting from `AX95` to `BD95` like this
['AX95', 'AY95', 'AZ95', 'BA95','BB95','BC95','BD95']
My current code works fine for single alphabets like
['A95', 'D95']
My code looks like this for now,
import re
def resolve(item):
start = int(re.search(r'\d+', item[0]).group())
end = int(re.search(r'\d+', item[1]).group())
print(item)
c = re.search(r'[a-zA-Z]+', item[0]).group()
d = re.search(r'[a-zA-Z]+', item[1]).group()
print(c, d)
for char in range(ord(c), ord(d) + 1):
yield chr(char) + str(end)
xx = resolve(['AX95', 'BD95'])
print(list(xx))
How to do this?
Answer: You cannot use `ord()` directly on multiple characters; it errors out
with -
    TypeError: ord() expected a character, but string of length 2 found
It would also be quite complicated to do this with a `for` loop and `range()`;
I would suggest using a `while` loop and checking until the start characters
become the end characters.
One way to do this is to take the last character: if it is `Z`, change it to
`A` and increment the character before it; otherwise take its `ord()`,
increment it by `1`, and get the new character using `chr()`.
Example algorithm that works on an arbitrary number of characters -
def resolve(item):
start = int(re.search(r'\d+', item[0]).group())
c = re.search(r'[a-zA-Z]+', item[0]).group()
d = re.search(r'[a-zA-Z]+', item[1]).group()
print(c, d)
s = c
yield s + str(start)
while s != d:
ls = len(s) - 1
news = ""
for i in range(ls,-1,-1):
c = s[i]
if c.upper() == 'Z':
news += 'A'
else:
news += chr(ord(c) + 1)
break
s = s[:i] + news[::-1]
yield s + str(start)
Example/Demo -
>>> def resolve(item):
... start = int(re.search(r'\d+', item[0]).group())
... c = re.search(r'[a-zA-Z]+', item[0]).group()
... d = re.search(r'[a-zA-Z]+', item[1]).group()
... print(c, d)
... s = c
... yield s + str(start)
... while s != d:
... ls = len(s) - 1
... news = ""
... for i in range(ls,-1,-1):
... c = s[i]
... if c.upper() == 'Z':
... news += 'A'
... else:
... news += chr(ord(c) + 1)
... break
... s = s[:i] + news[::-1]
... yield s + str(start)
...
>>>
>>> xx = resolve(['AX95', 'BD95'])
>>>
>>> print(list(xx))
AX BD
['AX95', 'AY95', 'AZ95', 'BA95', 'BB95', 'BC95', 'BD95']
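An alternative sketch of my own (not from the answer above): treat the letter
prefix as a base-26 number, Excel-column style, iterate numerically, and
convert back. The helper names are hypothetical:

```python
import re

def letters_to_num(s):
    # 'A' -> 1, 'Z' -> 26, 'AA' -> 27, ... (Excel-column numbering)
    n = 0
    for ch in s:
        n = n * 26 + (ord(ch) - ord('A') + 1)
    return n

def num_to_letters(n):
    s = ""
    while n:
        n, r = divmod(n - 1, 26)
        s = chr(ord('A') + r) + s
    return s

def expand(start, end):
    m1 = re.match(r'([A-Z]+)(\d+)', start)
    m2 = re.match(r'([A-Z]+)(\d+)', end)
    lo, hi = letters_to_num(m1.group(1)), letters_to_num(m2.group(1))
    # keep the numeric suffix of the start item, as the generator above does
    return [num_to_letters(i) + m1.group(2) for i in range(lo, hi + 1)]

print(expand('AX95', 'BD95'))
# ['AX95', 'AY95', 'AZ95', 'BA95', 'BB95', 'BC95', 'BD95']
```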
|
UnicodeEncodeError in Django Project
Question: Traceback:
File "/usr/local/lib/python2.7/dist-packages/django/core/handlers/base.py" in get_response
132. response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/usr/local/lib/python2.7/dist-packages/django/contrib/auth/decorators.py" in _wrapped_view
22. return view_func(request, *args, **kwargs)
File "/home/django/upgrademystartup/project/views.py" in create_or_edit_project
130. prj.save()
File "/usr/local/lib/python2.7/dist-packages/django/db/models/base.py" in save
710. force_update=force_update, update_fields=update_fields)
File "/usr/local/lib/python2.7/dist-packages/django/db/models/base.py" in save_base
738. updated = self._save_table(raw, cls, force_insert, force_update, using, update_fields)
File "/usr/local/lib/python2.7/dist-packages/django/db/models/base.py" in _save_table
800. for f in non_pks]
File "/usr/local/lib/python2.7/dist-packages/django/db/models/fields/files.py" in pre_save
315. file.save(file.name, file, save=False)
File "/usr/local/lib/python2.7/dist-packages/django/db/models/fields/files.py" in save
94. self.name = self.storage.save(name, content, max_length=self.field.max_length)
File "/usr/local/lib/python2.7/dist-packages/django/core/files/storage.py" in save
54. name = self.get_available_name(name, max_length=max_length)
File "/usr/local/lib/python2.7/dist-packages/django/core/files/storage.py" in get_available_name
90. while self.exists(name) or (max_length and len(name) > max_length):
File "/usr/local/lib/python2.7/dist-packages/django/core/files/storage.py" in exists
295. return os.path.exists(self.path(name))
File "/usr/lib/python2.7/genericpath.py" in exists
18. os.stat(path)
Exception Type: UnicodeEncodeError at /project/edit/8
Exception Value: 'ascii' codec can't encode characters in position 61-66: ordinal not in range(128)
wsgi.py:
import os
import sys
from django.core.wsgi import get_wsgi_application
reload(sys)
sys.setdefaultencoding("utf-8")
os.environ['LANG'] = 'en_US.UTF-8'
os.environ['LC_ALL'] = 'en_US.UTF-8'
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "upgrademystartup.settings")
application = get_wsgi_application()
locale page is:
getlocale: (None, None)
getdefaultlocale(): ('en_US', 'UTF-8')
fs_encoding: ANSI_X3.4-1968
sys default encoding: utf-8
As I'm using gunicorn. Server is **nginx** and configured charset to utf-8.
It's default Django image from DigitalOcean. Language is Russian. I've tried
almost every advice from similar questions.
Answer: It was the wrong approach to edit `wsgi.py`, because you need to set the
**LANG** and **LC_ALL** variables BEFORE you start the Django application.
As for the DigitalOcean Django image, you should open the gunicorn Upstart script
`/etc/gunicorn.d/gunicorn.p` and add two variables just before you start the
application:
    env LANG=en_US.UTF-8
    env LC_ALL=en_US.UTF-8
|
Curve fitting with broken power law in Python
Question: I'm trying to follow and re-use a piece of code (with my own data) suggested by
someone named @ThePredator (I couldn't comment on that thread since I don't
currently have the required reputation of 50). The full code is as follows:
import numpy as np # This is the Numpy module
from scipy.optimize import curve_fit # The module that contains the curve_fit routine
import matplotlib.pyplot as plt # This is the matplotlib module which we use for plotting the result
""" Below is the function that returns the final y according to the conditions """
def fitfunc(x,a1,a2):
y1 = (x**(a1) )[x<xc]
y2 = (x**(a1-a2) )[x>xc]
y3 = (0)[x==xc]
y = np.concatenate((y1,y2,y3))
return y
x = array([0.001, 0.524, 0.625, 0.670, 0.790, 0.910, 1.240, 1.640, 2.180, 35460])
y = array([7.435e-13, 3.374e-14, 1.953e-14, 3.848e-14, 4.510e-14, 5.702e-14, 5.176e-14, 6.0e-14,3.049e-14,1.12e-17])
""" In the above code, we have imported 3 modules, namely Numpy, Scipy and matplotlib """
popt,pcov = curve_fit(fitfunc,x,y,p0=(10.0,1.0)) #here we provide random initial parameters a1,a2
a1 = popt[0]
a2 = popt[1]
residuals = y - fitfunc(x,a1,a2)
chi-sq = sum( (residuals**2)/fitfunc(x,a1,a2) ) # This is the chi-square for your fitted curve
""" Now if you need to plot, perform the code below """
curvey = fitfunc(x,a1,a2) # This is your y axis fit-line
plt.plot(x, curvey, 'red', label='The best-fit line')
plt.scatter(x,y, c='b',label='The data points')
plt.legend(loc='best')
plt.show()
I'm having some problems running this code, and the errors I get are as follows:
y3 = (0)[x==xc]
    TypeError: 'int' object has no attribute '__getitem__'
and also:
xc is undefined
I don't see anything missing in the code (xc shouldn't have to be defined?).
Could the author (@ThePredator) or someone else with knowledge about this
please help me identify what I haven't seen.
* New version of code:
import numpy as np # This is the Numpy module
from scipy.optimize import curve_fit
import matplotlib.pyplot as plt
def fitfunc(x, a1, a2, xc):
if x.all() < xc:
y = x**a1
elif x.all() > xc:
y = x**(a1 - a2) * x**a2
else:
y = 0
return y
xc = 2
x = np.array([0.001, 0.524, 0.625, 0.670, 0.790, 0.910, 1.240, 1.640, 2.180, 35460])
y = np.array([7.435e-13, 3.374e-14, 1.953e-14, 3.848e-14, 4.510e-14, 5.702e-14, 5.176e-14, 6.0e-14,3.049e-14,1.12e-17])
popt,pcov = curve_fit(fitfunc,x,y,p0=(1.0,1.0))
a1 = popt[0]
a2 = popt[1]
residuals = y - fitfunc(x, a1, a2, xc)
chisq = sum((residuals**2)/fitfunc(x, a1, a2, xc))
curvey = [fitfunc(val, a1, a2, xc) for val in x] # y-axis fit-line
plt.plot(x, curvey, 'red', label='The best-fit line')
plt.scatter(x,y, c='b',label='The data points')
plt.legend(loc='best')
plt.show()
Answer: There are multiple errors/typos in your code.
1) You cannot use `-` in your variable names in Python (chi-square should be
`chi_square` for example)
2) You should `from numpy import array` or replace `array` with `np.array`.
Currently the name `array` is not defined.
3) `xc` is not defined, you should set it before calling `fitfunc()`.
4) `y3 = (0)[x==xc]` is not valid, should be (I think) `y3 =
np.zeros(len(x))[x==xc]` or `y3 = np.zeros(np.sum(x==xc))`
Your version of `fitfunc()` is also wrong, because concatenating the masked
pieces changes the order of the output values relative to `x`. What you want is:
def fit_function(x, a1, a2, xc):
if x < xc:
y = x**a1
elif x > xc:
y = x**(a1 - a2) * x**a2
else:
y = 0
return y
xc = 2 #or any value you want
curvey = [fit_function(val, a1, a2, xc) for val in x]
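For completeness, the same piecewise model can also be vectorized with
`np.piecewise()`, which lets `curve_fit` evaluate the whole `x` array at once
instead of a Python-level loop. A sketch under the same model assumptions
(`fitfunc_vec` is a hypothetical name):

```python
import numpy as np

def fitfunc_vec(x, a1, a2, xc):
    # Conditions cover x < xc and x > xc; the trailing 0.0 is the
    # default applied where neither holds, i.e. at x == xc
    return np.piecewise(
        x,
        [x < xc, x > xc],
        [lambda x: x**a1, lambda x: x**(a1 - a2) * x**a2, 0.0],
    )

out = fitfunc_vec(np.array([1.0, 2.0, 4.0]), 2.0, 1.0, 2.0)
# x<xc gives 1.0**2 = 1.0, x==xc gives 0.0, x>xc gives 4.0**1 * 4.0**1 = 16.0
print(out)
```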
|
python how to know which tag exactly is not closed in xml
Question: I have an xml, and I validate if it is really a good formatted xml like this:
try:
self.doc=etree.parse(attributesXMLFilePath)
except IOError:
error_message = "Error: Couldn't find attribute XML file path {0}".format(attributesXMLFilePath)
raise XMLFileNotFoundException(error_message)
except XMLSyntaxError:
error_message = "The file {0} is not a good XML file, recheck please".format(attributesXMLFilePath)
raise NotGoodXMLFormatException(error_message)
as you see, I am catching the XMLSyntaxError, which is an error from :
`from lxml.etree import XMLSyntaxError`
That works, but it only tells me that the file is not well-formed XML.
However, I want to ask you guys if there is a way to know which tag is wrong,
because in my situation, when I do this:
    <name>Marco</name1>
I get the error. Is there a way to know that the `name` tag hasn't been closed
yet?
### Update
after some people give me the idea of line and position,i came up with this
code:
class XMLFileNotFoundException(GeneralSpiderException):
def __init__(self, message):
super(XMLFileNotFoundException, self).__init__(message, self)
class GeneralSpiderException(Exception):
def __init__(self, message, e):
super(GeneralSpiderException, self).__init__(message+" \nline of Exception = {0}, position of Exception = {1}".format(e.lineno, e.position))
and I am still raising the error like this
raise XMLFileNotFoundException(error_message)
I get this error now
super(GeneralSpiderException, self).__init__(message+" \nline of Exception = {0}, position of Exception = {1}".format(e.lineno, e.position))
exceptions.AttributeError: 'XMLFileNotFoundException' object has no attribute 'lineno'
Answer: You can print the details of the error. For instance:
try:
self.doc = etree.parse(attributesXMLFilePath)
except XMLSyntaxError as e:
error_message = "The file {0} is not correct XML, {1}".format(attributesXMLFilePath, e.msg)
raise NotGoodXMLFormatException(error_message)
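If you also want the location of the failure, the caught exception already
carries it: `XMLSyntaxError` has `.lineno`, `.position` (a `(line, column)`
tuple) and `.msg` attributes, which is what the question's update was reaching
for, but on the parser exception rather than on a custom exception class. A
sketch (requires lxml):

```python
from lxml import etree

try:
    etree.fromstring("<name>Marco</name1>")
except etree.XMLSyntaxError as e:
    # .position is a (line, column) tuple; .msg is the parser's message,
    # e.g. an "Opening and ending tag mismatch" complaint naming the tags
    print(e.lineno, e.position, e.msg)
```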
|
How does one retrieve a c# byte array (Byte[]) from IronPython?
Question: I have a c# function that I can call from IronPython. The function returns a
byte array that I'd like to convert to a string for display and compare.
Python is telling me to pass the input parameter - `(out Byte[] DataOut)`, below -
as type `StrongBox[Array[Byte]]`, so I created `var` with
`clr.Reference[Array[Byte]]()`.
How do I convert this to a string?
namespace My_Library.My_Namespace
{
/// </summary>
public class My_App : OSI_Layer
{
public bool My_Function(out Byte[] DataOut)
{
// fill up DataOut with a string
return (Send(out DataOut));
}
// etc...
}
}
//////////////////////////
//
// IronPython
//
// From IronPython I...
>>>
>>> import clr
>>> clr.AddReferenceToFileAndPath('My_Library.dll')
>>> from My_Library.My_Namespace import My_App
>>> App = My_App()
>>>
>>> from System import Array, Byte
>>> var = clr.Reference[Array[Byte]]() # Create type StrongBox[Array[Byte]]"
>>>
>>> clr.Reference[Array[Byte]]
<type 'StrongBox[Array[Byte]]'>
>>>
>>> App.My_Function(var)
>>>
True
>>> var
<System.Byte[] object at 0x000000000000002B [System.Byte[]]>
>>>
>>> printable_var = System.BitConverter.ToString(var)
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    TypeError: expected Array[Byte], got StrongBox[Array[Byte]]
Answer: You need to pass in the `Value` of the box, not the box itself.
printable_var = System.BitConverter.ToString(var.Value)
|
Midrule in LaTeX output of Python Pandas
Question: I'm using Python Pandas.
I'm trying to automate the creation of LaTeX tables from excel workbooks. I so
far have the script complete to create the following dataframe:
Date Factor A Factor B Total
Person A 01/01/2015 A C $220m
Person B 01/02/2015 B D $439m
Total $659m
I can use Pandas `.to_latex()` command to create a booktabs table from this,
which is all fine.
My question is, is it possible to add a midrule just before the last row of
the dataframe above to the LaTeX output?
Answer: Since pandas' [`.to_latex()`](http://pandas.pydata.org/pandas-
docs/version/0.15.1/generated/pandas.DataFrame.to_latex.html) does not seem to
deliver such an option I would to this manually by some string handling:
import pandas as pd
import numpy as np
# use a DataFrame df with some sample data
df = pd.DataFrame(np.random.random((5, 5)))
# get latex string via `.to_latex()`
latex = df.to_latex()
# split lines into a list
latex_list = latex.splitlines()
# insert a `\midrule` at third last position in list (which will be the fourth last line in latex output)
latex_list.insert(len(latex_list)-3, '\midrule')
# join split lines to get the modified latex output string
latex_new = '\n'.join(latex_list)
Latex output without additional `\midrule`:
\begin{tabular}{lrrrrr}
\toprule
{} & 0 & 1 & 2 & 3 & 4 \\
\midrule
0 & 0.563803 & 0.962439 & 0.572583 & 0.567999 & 0.390899 \\
1 & 0.728756 & 0.452122 & 0.358927 & 0.426866 & 0.234689 \\
2 & 0.907841 & 0.622264 & 0.128458 & 0.098953 & 0.711350 \\
3 & 0.338298 & 0.576341 & 0.625921 & 0.139799 & 0.146484 \\
4 & 0.303568 & 0.495921 & 0.835966 & 0.583697 & 0.675465 \\
\bottomrule
\end{tabular}
Output with manually added `\midrule`:
\begin{tabular}{lrrrrr}
\toprule
{} & 0 & 1 & 2 & 3 & 4 \\
\midrule
0 & 0.563803 & 0.962439 & 0.572583 & 0.567999 & 0.390899 \\
1 & 0.728756 & 0.452122 & 0.358927 & 0.426866 & 0.234689 \\
2 & 0.907841 & 0.622264 & 0.128458 & 0.098953 & 0.711350 \\
3 & 0.338298 & 0.576341 & 0.625921 & 0.139799 & 0.146484 \\
\midrule
4 & 0.303568 & 0.495921 & 0.835966 & 0.583697 & 0.675465 \\
\bottomrule
\end{tabular}
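The string surgery above can be wrapped into a small reusable helper (my own
wrapper, assuming booktabs-style `.to_latex()` output where the last three
lines are the final row, `\bottomrule` and `\end{tabular}`):

```python
def add_midrule(latex, n=3):
    # Insert \midrule n lines from the end (default: just before the last row)
    lines = latex.splitlines()
    lines.insert(len(lines) - n, r'\midrule')
    return '\n'.join(lines)

# A tiny handmade booktabs table stands in for df.to_latex() output here
demo = ("\\begin{tabular}{lr}\n\\toprule\n"
        "a & 1 \\\\\nb & 2 \\\\\n"
        "\\bottomrule\n\\end{tabular}")
print(add_midrule(demo))
```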
|
cx_Freeze exe results in sqlalchemy.exc.NoSuchmoduleError with psycopg2 at run time
Question: Edit: What tools can I use to see what packages/file the executable is trying
to find when it tries to access the psycopg2 package? Perhaps that can help
profile where things are going wrong.
I have a python script that runs perfectly fine when run using the interpreter
yet when I freeze it, I am getting the error:
sqlalchemy.exc.NoSuchModuleError: Can't load plugin: sqlalchemy.dialects:postgresql.psycopg2
Because it runs fine through the interpreter and fails when frozen, I suspect
that something is wrong with my setup.py file.
#-*- coding: 'utf-8' -*-
from cx_Freeze import setup, Executable
import sys
# Dependencies are automatically detected, but it might need
# fine tuning.
# execute this file with the command: python setup.py build
buildOptions = dict(packages = ['ht_cg.ht_cg_objects'],
includes = ['ht_cg.ht_cg_objects'],
excludes = ['tkinter', 'PyQt4', 'matplotlib', 'tcl', 'scipy'],
include_files = ['./cg_source_prep/readme.txt', "./ht_cg_objects/ht_db_config.cfg"],
build_exe = 'build/source_density_exe')
sys.path.append('./')
sys.path.append('../')
executables = [
Executable(script = "cg_source_save.py",
initScript = None,
base='Console',
targetName = 'source_density_save.exe',
copyDependentFiles = True,
compress = True,
appendScriptToExe = True,
appendScriptToLibrary = True,
shortcutName="CG Source Density",
shortcutDir='DesktopFolder',
icon = "./cg_source_prep/archimedes.ico"
)
]
setup(name='Source_Density',
version = '1.0',
description = 'Source Density Uploader',
author = 'zeppelin_d',
author_email = 'zeppelin_d@email',
options = dict(build_exe = buildOptions),
executables = executables)
1. I've tried adding 'psycopg2' to the includes list and the packages list.
2. I've installed the [psycopg2 binary installer](http://www.stickpeople.com/projects/python/win-psycopg/).
3. I've made sure 'psycopg2' is not in the includes or packages list.
4. I've got the MS Visual C++ 2008 redistributable x64 and x86 packages installed.
5. [Mike Bayer says a similar error is due to the connection string not being what I think it is](http://stackoverflow.com/questions/15648814/sqlalchemy-exc-argumenterror-cant-load-plugin-sqlalchemy-dialectsdriver) but I am printing my conn string right before I create the engine and it is correct. It's only broken when frozen and the same connection string works fine otherwise.
6. I have another frozen script that runs without error. These two scripts use the same method to create the engine and metadata objects from sqlalchemy. I've tried copying the psycopg2 files from the working executable folder to the broken one with no results. I've tried copying over random dlls and no luck.
I'm reading the connection string from the ht_db_config.cfg file that is
base64 encoded but I print out the string just before attempting
sqlalchemy.create_engine() and it's correct. I also added a string literal as
the argument to the sqlalchemy.create_engine method and the frozen executable
fails. The actual output from the script that fails is:
postgresql+psycopg2://user_name:[email protected]:5432/ht_cg_prod
I've replaced the username and the password.
I've been trying to get this fixed for a couple of days. I'd be grateful for
any help. I'm running python 3.4, sqlalchemy 1.0.8, cx_freeze 4.3.4 and
psycopg2 2.6 as determined by 'conda list' on windows 8.1. Thanks.
Answer: I finally found an answer to this. On the [cx_freeze mailing
list](http://sourceforge.net/p/cx-freeze/mailman/cx-freeze-
users/thread/CAKiRVAS+XcSkc_817rfY3Ezsvuhy33CZjjNeY+6MikhC1Lz1Xg@mail.gmail.com/)
someone else had the same problem. The solution was to add
`'sqlalchemy.dialects.postgresql'` to my list of packages in the cx_freeze
build options.
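Applied to the `setup.py` above, the change amounts to one extra entry in the
build options; a sketch with all other keys omitted for brevity:

```python
# Only the 'packages' key changes; cx_Freeze will now bundle the dialect
# module that sqlalchemy loads dynamically at runtime.
buildOptions = dict(
    packages=['ht_cg.ht_cg_objects', 'sqlalchemy.dialects.postgresql'],
)
print(buildOptions['packages'])
```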
|
Python Tkinter Error
Question: I have tried to work with the Tkinter library, however, I keep getting this
message, and I don't know how to solve it.. I looked over the net but found
nothing to this specific error - I call the library like this:
from Tkinter import *
and I get this error -
TclError = Tkinter.TclError
AttributeError: 'module' object has no attribute 'TclError'
I have no clue what can I do now.. Thank you
full traceback:
Traceback (most recent call last):
File "C:/Users/Shoham/Desktop/MathSolvingProject/Solver.py", line 3, in <module>
from Tkinter import *
File "C:\Heights\PortableApps\PortablePython2.7.6.1\App\lib\lib- tk\Tkinter.py", line 41, in <module>
TclError = Tkinter.TclError
AttributeError: 'module' object has no attribute 'TclError'
Answer: You imported (mostly) everything from the module with `from Tkinter import *`.
That means that (mostly) everything in that module is now included in the
global namespace, and you no longer have to include the module name when you
refer to things from it. Thus, refer to `Tkinter`'s `TclError` object as
simply `TclError` instead of `Tkinter.TclError`.
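The same namespace effect can be seen with any module; a tiny illustration
using `math` instead of `Tkinter`:

```python
from math import *   # pulls sqrt, pi, ... into the current namespace

print(sqrt(9))  # 3.0 -- no "math." prefix needed
# print(math.sqrt(9))  # would be a NameError: 'math' itself was never bound
```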
|
Python re.findall fails at UTF-8 while rest of script succeeds
Question: I have this script that reads a large ammount of text files written in Swedish
(frequently with the åäö letters). It prints everything just fine from the
dictionary if I loop over `d` and `dictionary[]`. However, the regular
expression (from the raw input with `u'.*'` added) fails at returning utf-8
properly.
# -*- coding: utf8 -*-
from os import listdir
import re
import codecs
import sys
print "Välkommen till SOU-sök!"
search_word = raw_input("Ange sökord: ")
dictionary = {}
for filename in listdir("20tal"):
with open("20tal/" + filename) as currentfile:
text = currentfile.read()
dictionary[filename] = text
for d in dictionary:
result = re.findall(search_word + u'.*', dictionary[d], re.UNICODE)
if len(result) > 0:
print "Filnament är:\n %s \noch sökresultatet är:\n %s" % (d, result)
Edit: The output is as follows:
If I input:
katt
I get the following output:
Filnament är: Betänkande och förslag angående vissa ekonomiska spörsmål berörande enskilda järnvägar - SOU 1929:2.txt
och sökresultatet är:
    ['katter, r\xc3\xa4ntor m. m.', 'katter m- m., men exklusive r \xc3\xa4 nor m.', 'kattemedel subventionerar', 'av totalkostnaderna, ofta \xe2\x80\x94 med eller utan', 'kattas den nuvarande bilparkens kapitalv\xc3\xa4rde till 500 milj.
Here, the Filename `d` is printed correctly but not the result of the
`re.findall`
Answer: In Python `2.x`, printing a list shows its items escaped (via their
`repr()`) unless you loop through each item or join them first; maybe try
something such as this:
result = ', '.join(result)
if len(result) > 0:
print ( u"Filnament är:\n %s \noch sökresultatet är:\n %s" % (d, result.decode('utf-8')))
**Input** :
katt
**Result** :
katter, räntor m. m. katter m- m., men exklusive r ä nor m. kattemedel subventionerar av totalkostnaderna, ofta — med eller utan kattas den nuvarande bilparkens kapitalvärde till 500 milj
|
How do I append multiple CSV files using Pandas data structures in Python
Question: I have about 10 CSV files that I'd like to append into one file. My thought
was to assign the file names to numbered data_files, and then append them in a
while loop, but I'm having trouble updating the file to the next numbered
date_file in my loop. I keep getting errors related to "data_file does not
exist" and "cannot concatenate 'str' and 'int' objects". I'm not even sure if
this is a realistic approach to my problem. Any help would be appreciated.
import pandas as pd
path = '//pathname'
data_file1= path + 'filename1.csv'
data_file2= path + 'filename2.csv'
data_file3= path + 'filename3.csv'
data_file4= path + 'filename4.csv'
data_file5= path + 'filename5.csv'
data_file6= path + 'filename6.csv'
data_file7= path + 'filename7.csv'
df = pd.read_csv(data_file1)
x = 2
while x < 8:
data_file = 'data file' + str(x)
tmdDF = pd.read_csv(data_file)
df = df.append(tmpDF)
x += x + 1
Answer: Not quite sure what you're doing in terms of constructing that string
`data_file` within the loop. You can't address variables using a string of
their name. Also as noted by Paulo, you're not incrementing the indices
correctly either. Try the following code but note that for the purposes of
merely concatenating csv files, you certainly do not need pandas.
import pandas
filenames = ["filename1.csv", "filename2.csv", ...] # Fill in remaining files.
df = pandas.DataFrame()
for filename in filenames:
df = df.append(pandas.read_csv(filename))
# df is now a dataframe of all the csv's in filenames appended together
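Since the answer notes pandas isn't strictly needed for concatenation, here is a stdlib-only sketch (the file names and output path are placeholders) that streams each file's rows into one output:

```python
import csv

def concat_csvs(filenames, out_path):
    """Write every row of each input CSV, in order, to one output file."""
    with open(out_path, 'w', newline='') as out:
        writer = csv.writer(out)
        for name in filenames:
            with open(name, newline='') as f:
                for row in csv.reader(f):
                    writer.writerow(row)
```

Because it never holds more than one row in memory, this also scales to inputs far larger than RAM.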
|
Am I using pickle correctly?-Python
Question: I am a beginner in Python and therefore am not sure why I am receiving the
following error:
> TypeError: invalid file: []
for this line of code:
> usernamelist=open(user_names,'w')
I am trying to get an input of a username and password, write them to files,
and then read them.
Here is the rest of my code:
user_names=[]
passwords=[]
username=input('Please enter a username')
password=input('Please enter a password')
usernamelist=open(user_names,'w')
pickle.dump(userName,usernamelist)
usernamelist.close()
usernamelist=open(user_names,'r')
loadusernames=pickle.load(usernamelist)
passwordlist=open(passwords,'w')
pickle.dump(password,passwordlist)
passwordlist.close()
passwordlist=open(passwords,'r')
loadpasswords=pickle.load(passwordlist)
All answers would be appreciated. Thanks.
Answer: Based on your script, this may help. It creates a 'username.txt' and
'password.txt' to store input username and password.
I use python2.7; input behaves differently in python2.7 and python3.x.
"""
opf: output file
inf: input file
use with instead of .open .close: http://effbot.org/zone/python-with-statement.htm
for naming rules and coding style in Python: https://www.python.org/dev/peps/pep-0008/
"""
import pickle
username = raw_input('Please enter a username:\n')
password = raw_input('Please enter a password:\n')
with open('username.txt', 'wb') as opf:
pickle.dump(username, opf)
with open('username.txt') as inf:
load_usernames = pickle.load(inf)
print load_usernames
with open('password.txt', 'wb') as opf:
pickle.dump(password, opf)
with open('password.txt') as inf:
load_passwords = pickle.load(inf)
print load_passwords
|
how to make multiple bar plots one within another using matplotlib.pyplot
Question: With reference to the bar chart shown as answer in this link
[python matplotlib multiple
bars](http://stackoverflow.com/questions/14270391/python-matplotlib-multiple-
bars)
I would like to have green bar inside blue bar and both these bars inside red
bar. And yes it should not be stacked rather each of the bars should be of
different width.
Could anyone get me started with some clue. Thanks.
Answer: Using the example you reference, you can nest the bars with different widths
as shown below. Note that a bar can only be 'contained' within another bar if
its y value is smaller (i.e., see the third set of bars in the plot below).
The basic idea is to set `fill = False` for the bars so that they don't
obscure one another. You could also try making bars with semi-transparent (low
`alpha`) fill colours, but this tends to get pretty confusing--especially with
red, blue, and green all superposed.
import matplotlib.pyplot as plt
%matplotlib inline
from matplotlib.dates import date2num
import datetime
x = [datetime.datetime(2011, 1, 4, 0, 0),
datetime.datetime(2011, 1, 5, 0, 0),
datetime.datetime(2011, 1, 6, 0, 0)]
x = date2num(x)
y = [4, 9, 2]
z=[1,2,3]
k=[11,12,13]
ax = plt.subplot(111)
#first strategy is to use hollow bars with fill=False so that they can be reasonably superposed / contained within one another:
ax.bar(x, z,width=0.2,edgecolor='g',align='center', fill=False) #the green bar has the smallest width as it is contained within the other two
ax.bar(x, y,width=0.3,edgecolor='b',align='center', fill=False) #the blue bar has a greater width than the green bar
ax.bar(x, k,width=0.4,edgecolor='r',align='center', fill=False) #the widest bar encompasses the other two
ax.xaxis_date()
plt.show()
[](http://i.stack.imgur.com/iKxmd.png)
|
Displaying live scorecard on linux desktop
Question: I have written a python script which displays all the live matches scores. I
wish to display the score on my desktop rather than in terminal. I also wish
to update the score card every 5 minutes or so. Here is the python script:
import xml.etree.cElementTree as ET
import requests
tree = ET.fromstring(requests.get('http://www.cricbuzz.com/livecricketscore/home-score-matches.xml').text)
for elem in tree.iter('match'):
state = elem.find('state').text
footer = elem.find('footer').text
header = elem.find('header').text
print state,header,footer
[xml file used for parsing](http://www.cricbuzz.com/livecricketscore/home-
score-matches.xml)
How can I achieve the above?
Answer: You will need to build a GUI for this. There are a lot of libraries that
should help. A couple as example -
[PyQT4](https://pypi.python.org/pypi/PyQt4),
[Tkinter](https://wiki.python.org/moin/TkInter),
[easygui](http://easygui.sourceforge.net/) etc.
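As a concrete sketch of the Tkinter route (the function names, window layout, and 5-minute interval are all assumptions), the parsing from the question can be split into a formatting helper, with `root.after` used to re-poll the feed periodically:

```python
import xml.etree.ElementTree as ET

def format_scores(xml_text):
    """Turn the feed's XML into one display line per match."""
    tree = ET.fromstring(xml_text)
    lines = []
    for elem in tree.iter('match'):
        state = elem.find('state').text
        header = elem.find('header').text
        footer = elem.find('footer').text
        lines.append('%s | %s | %s' % (state, header, footer))
    return '\n'.join(lines)

def run_gui(fetch, interval_ms=300000):
    """Show the scores in a small window, refreshing every interval_ms (5 min)."""
    import tkinter as tk  # on Python 2: import Tkinter as tk
    root = tk.Tk()
    root.title('Live cricket scores')
    label = tk.Label(root, justify='left', font=('Monospace', 11))
    label.pack(padx=10, pady=10)

    def refresh():
        label.config(text=format_scores(fetch()))
        root.after(interval_ms, refresh)  # schedule the next refresh

    refresh()
    root.mainloop()
```

Wiring it to the real feed would then look like `run_gui(lambda: requests.get(url).text)`.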
|
Updating Attributes of Class as Parameters Changes: How to Keep Brokerage Account Class up-to-date?
Question: How does one keep the attributes of an instance of a class up-to-date if they
are changing moment to moment?
For example, I have defined a class describing my stock trading brokerage
account balances. I have defined a function which pings the brokerage API and
returns a JSON object with the current status of various parameters. The
status of these parameters are then set as attributes of a given instance.
import json
import requests
from ConfigParser import SafeConfigParser
class Account_Balances:
def Account_Balances_Update():
"""Pings brokerage for current status of target account"""
#set query args
endpoint = parser.get('endpoint', 'brokerage') + 'user/balances'
headers = {'Authorization': parser.get('account', 'Auth'), 'Accept': parser.get('message_format', 'accept_format')}
#send query
r = requests.get(endpoint, headers = headers)
response = json.loads(r.text)
return response
def __init__(self):
self.response = self.Account_Balances_Update()
self.parameterA = response['balances']['parameterA']
self.parameterB = response['balances']['parameterB']
As it stands, this code sets the parameters at the moment the instance is
created but they become static.
Presumably `parameterA` and `parameterB` are changing moment to moment so I
need to keep them up-to-date for any given instance when requested. Updating
the parameters requires rerunning the `Account_Balances_Update()` function.
What is the pythonic way to keep the attribute of a given instance of a class
up to date in a fast moving environment like stock trading?
Answer: Why not just creating an update method?
class Account_Balances:
@staticmethod
def fetch():
"""Pings brokerage for current status of target account"""
#set query args
endpoint = parser.get('endpoint', 'brokerage') + 'user/balances'
headers = {'Authorization': parser.get('account', 'Auth'), 'Accept': parser.get('message_format', 'accept_format')}
#send query
r = requests.get(endpoint, headers = headers)
response = json.loads(r.text)
balances = response['balances']
return balances['parameterA'], balances['parameterB']
def update(self):
self.parameterA, self.parameterB = self.fetch()
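Beyond an explicit `update()` call, another common pattern (a sketch, not part of the original answer) is to expose the values as properties that transparently re-fetch when the cached copy is older than some threshold; the attribute names mirror the question, and the 5-second default is an assumption:

```python
import time

class AccountBalances:
    """Sketch: re-fetch balances lazily whenever the cached copy is stale."""

    def __init__(self, fetcher, max_age=5.0):
        self._fetch = fetcher        # callable returning (parameterA, parameterB)
        self._max_age = max_age      # seconds a fetched value stays valid
        self._fetched_at = 0.0
        self._a = self._b = None

    def _refresh_if_stale(self):
        if time.time() - self._fetched_at > self._max_age:
            self._a, self._b = self._fetch()
            self._fetched_at = time.time()

    @property
    def parameterA(self):
        self._refresh_if_stale()
        return self._a

    @property
    def parameterB(self):
        self._refresh_if_stale()
        return self._b
```

Callers then always see reasonably fresh values without hammering the brokerage API on every attribute access.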
|
Can't install jpeg because conflicting ports are active: libjpeg-turbo
Question: I am running into an issue with libjpeg-turbo trying to install vsftpd with
Mac Ports. I'm running on OS X 10.10.5.
David-Laxers-MacBook-Pro:phoenix_pipeline davidlaxer$ conda -V
conda 3.16.0
David-Laxers-MacBook-Pro:phoenix_pipeline davidlaxer$ java -version
java version "1.8.0_05"
Java(TM) SE Runtime Environment (build 1.8.0_05-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.5-b02, mixed mode)
David-Laxers-MacBook-Pro:phoenix_pipeline davidlaxer$
David-Laxers-MacBook-Pro:phoenix_pipeline davidlaxer$ port -v
MacPorts 2.3.3
sudo port install vsftpd
Password:
---> Fetching archive for vsftpd
---> Attempting to fetch vsftpd-3.0.2_1.darwin_14.x86_64.tbz2 from http://packages.macports.org/vsftpd
---> Attempting to fetch vsftpd-3.0.2_1.darwin_14.x86_64.tbz2.rmd160 from http://packages.macports.org/vsftpd
---> Installing vsftpd @3.0.2_1
---> Activating vsftpd @3.0.2_1
To configure vsftpd edit /opt/local/etc/vsftpd.conf.
---> Cleaning vsftpd
---> Updating database of binaries
---> Scanning binaries for linking errors
---> Found 99 broken file(s), matching files to ports
---> Found 14 broken port(s), determining rebuild order
---> Rebuilding in order
tiff @4.0.4
gd2 @2.1.1 +x11
ghostscript @9.16 +x11
djvulibre @3.5.27
webp @0.4.3
jasper @1.900.1
gdk-pixbuf2 @2.31.6 +x11
opencv @3.0.0
lcms @1.19
libmng @1.0.10
netpbm @10.71.02 +x11
lcms2 @2.7
ImageMagick @6.9.0-0 +x11
poppler @0.35.0
Error: Unable to exec port: Can't install jpeg because conflicting ports are active: libjpeg-turbo
Error rebuilding tiff
while executing
"error "Error rebuilding $portname""
(procedure "revupgrade_scanandrebuild" line 395)
invoked from within
"revupgrade_scanandrebuild broken_port_counts $opts"
(procedure "macports::revupgrade" line 5)
invoked from within
"macports::revupgrade $opts"
(procedure "action_revupgrade" line 2)
invoked from within
"action_revupgrade $action $portlist $opts"
(procedure "action_target" line 96)
invoked from within
"$action_proc $action $portlist [array get global_options]"
(procedure "process_cmd" line 103)
invoked from within
"process_cmd $remaining_args"
invoked from within
"if { [llength $remaining_args] > 0 } {
# If there are remaining arguments, process those as a command
set exit_status [process_cmd $remaining..."
(file "/opt/local/bin/port" line 5268)
I had what could be a similar problem earlier today with python-goose:
David-Laxers-MacBook-Pro:python-goose davidlaxer$ anaconda search -t conda libjpg
Traceback (most recent call last):
File "/users/davidlaxer/anaconda/bin/anaconda", line 6, in <module>
sys.exit(main())
File "/Users/davidlaxer/anaconda/lib/python2.7/site-packages/binstar_client/scripts/cli.py", line 94, in main
description=__doc__, version=version)
File "/Users/davidlaxer/anaconda/lib/python2.7/site-packages/binstar_client/scripts/cli.py", line 60, in binstar_main
add_subparser_modules(parser, sub_command_module, 'conda_server.subcommand')
File "/Users/davidlaxer/anaconda/lib/python2.7/site-packages/clyent/__init__.py", line 117, in add_subparser_modules
for command_module in get_sub_commands(module):
File "/Users/davidlaxer/anaconda/lib/python2.7/site-packages/clyent/__init__.py", line 106, in get_sub_commands
this_module = __import__(module.__package__ or module.__name__, fromlist=names)
File "/Users/davidlaxer/anaconda/lib/python2.7/site-packages/binstar_client/commands/notebook.py", line 13, in <module>
from binstar_client.utils.notebook import Uploader, Downloader, parse, notebook_url, has_environment
File "/Users/davidlaxer/anaconda/lib/python2.7/site-packages/binstar_client/utils/notebook/__init__.py", line 10, in <module>
from .uploader import *
File "/Users/davidlaxer/anaconda/lib/python2.7/site-packages/binstar_client/utils/notebook/uploader.py", line 7, in <module>
from .data_uri import data_uri_from
File "/Users/davidlaxer/anaconda/lib/python2.7/site-packages/binstar_client/utils/notebook/data_uri.py", line 10, in <module>
from PIL import Image
File "/Users/davidlaxer/anaconda/lib/python2.7/site-packages/PIL/Image.py", line 63, in <module>
from PIL import _imaging as core
ImportError: dlopen(/Users/davidlaxer/anaconda/lib/python2.7/site-packages/PIL/_imaging.so, 2): Library not loaded: libjpeg.8.dylib
Referenced from: /Users/davidlaxer/anaconda/lib/python2.7/site-packages/PIL/_imaging.so
Reason: Incompatible library version: _imaging.so requires version 13.0.0 or later, but libjpeg.8.dylib provides version 12.0.0
Continuum Analytics gave me these instructions (which resolved the 'goose'
problem):
conda install -f pillow jpeg
I was unable to find any details of a jpeg package that has _imaging.so 13. My suggestion from here may be to reinstall goose. I did so by the following. first download the zip from https://github.com/grangier/python-goose and run the following individually.
conda create -n goose python=2.7 anaconda-client pillow lxml cssselect nltk
source activate goose
pip install -i https://pypi.anaconda.org/pypi/simple jieba
conda install -fc https://conda.anaconda.org/auto beautifulsoup4
Then move the contents of the python-goose-develop/ into the goose environment, similar to this.
cp ~/Downloads/python-goose-develop/* ~/anaconda/envs/goose
cd ~/anaconda/envs/goose
python setup.py install
In response to the 'answer' below from dsgfdg.
David-Laxers-MacBook-Pro:phoenix_pipeline davidlaxer$ sudo !!
sudo port install webp
---> Computing dependencies for webp
Error: Unable to execute port: Can't install jpeg because conflicting ports are active: libjpeg-turbo
I downloaded Pillow 2.3.1 and built it. It build successfully but then fails
the tests. Here's an excerpt.
David-Laxers-MacBook-Pro:Pillow-2.3.1 davidlaxer$ python Tests/run.py
--------------------------------------------------------------------
running test_000_sanity ...
=== error 256
Traceback (most recent call last):
File "Tests/test_000_sanity.py", line 5, in <module>
import PIL.Image
File "build/bdist.macosx-10.5-x86_64/egg/PIL/Image.py", line 53, in <module>
File "build/bdist.macosx-10.5-x86_64/egg/PIL/_imaging.py", line 7, in <module>
File "build/bdist.macosx-10.5-x86_64/egg/PIL/_imaging.py", line 6, in __bootstrap__
ImportError: dlopen(/Users/davidlaxer/.python-eggs/Pillow-2.3.1-py2.7-macosx-10.5-x86_64.egg-tmp/PIL/_imaging.so, 2): Library not loaded: libjpeg.8.dylib
Referenced from: /Users/davidlaxer/.python-eggs/Pillow-2.3.1-py2.7-macosx-10.5-x86_64.egg-tmp/PIL/_imaging.so
Reason: Incompatible library version: _imaging.so requires version 13.0.0 or later, but libjpeg.8.dylib provides version 12.0.0
running test_001_archive ...
=== error 256
Traceback (most recent call last):
File "Tests/test_001_archive.py", line 2, in <module>
import PIL.Image
File "build/bdist.macosx-10.5-x86_64/egg/PIL/Image.py", line 53, in <module>
File "build/bdist.macosx-10.5-x86_64/egg/PIL/_imaging.py", line 7, in <module>
File "build/bdist.macosx-10.5-x86_64/egg/PIL/_imaging.py", line 6, in __bootstrap__
ImportError: dlopen(/Users/davidlaxer/.python-eggs/Pillow-2.3.1-py2.7-macosx-10.5-x86_64.egg-tmp/PIL/_imaging.so, 2): Library not loaded: libjpeg.8.dylib
Referenced from: /Users/davidlaxer/.python-eggs/Pillow-2.3.1-py2.7-macosx-10.5-x86_64.egg-tmp/PIL/_imaging.so
Reason: Incompatible library version: _imaging.so requires version 13.0.0 or later, but libjpeg.8.dylib provides version 12.0.0
running test_file_bmp ...
=== error 256
Traceback (most recent call last):
File "Tests/test_file_bmp.py", line 3, in <module>
from PIL import Image
File "build/bdist.macosx-10.5-x86_64/egg/PIL/Image.py", line 53, in <module>
File "build/bdist.macosx-10.5-x86_64/egg/PIL/_imaging.py", line 7, in <module>
File "build/bdist.macosx-10.5-x86_64/egg/PIL/_imaging.py", line 6, in __bootstrap__
ImportError: dlopen(/Users/davidlaxer/.python-eggs/Pillow-2.3.1-py2.7-macosx-10.5-x86_64.egg-tmp/PIL/_imaging.so, 2): Library not loaded: libjpeg.8.dylib
Referenced from: /Users/davidlaxer/.python-eggs/Pillow-2.3.1-py2.7-macosx-10.5-x86_64.egg-tmp/PIL/_imaging.so
Reason: Incompatible library version: _imaging.so requires version 13.0.0 or later, but libjpeg.8.dylib provides version 12.0.0
running test_file_eps ...
=== error 256
Traceback (most recent call last):
File "Tests/test_file_eps.py", line 3, in <module>
from PIL import Image, EpsImagePlugin
File "build/bdist.macosx-10.5-x86_64/egg/PIL/Image.py", line 53, in <module>
File "build/bdist.macosx-10.5-x86_64/egg/PIL/_imaging.py", line 7, in <module>
File "build/bdist.macosx-10.5-x86_64/egg/PIL/_imaging.py", line 6, in __bootstrap__
ImportError: dlopen(/Users/davidlaxer/.python-eggs/Pillow-2.3.1-py2.7-macosx-10.5-x86_64.egg-tmp/PIL/_imaging.so, 2): Library not loaded: libjpeg.8.dylib
Referenced from: /Users/davidlaxer/.python-eggs/Pillow-2.3.1-py2.7-macosx-10.5-x86_64.egg-tmp/PIL/_imaging.so
Reason: Incompatible library version: _imaging.so requires version 13.0.0 or later, but libjpeg.8.dylib provides version 12.0.0
running test_file_fli ...
=== error 256
Traceback (most recent call last):
File "Tests/test_file_fli.py", line 3, in <module>
from PIL import Image
File "build/bdist.macosx-10.5-x86_64/egg/PIL/Image.py", line 53, in <module>
File "build/bdist.macosx-10.5-x86_64/egg/PIL/_imaging.py", line 7, in <module>
File "build/bdist.macosx-10.5-x86_64/egg/PIL/_imaging.py", line 6, in __bootstrap__
ImportError: dlopen(/Users/davidlaxer/.python-eggs/Pillow-2.3.1-py2.7-macosx-10.5-x86_64.egg-tmp/PIL/_imaging.so, 2): Library not loaded: libjpeg.8.dylib
Referenced from: /Users/davidlaxer/.python-eggs/Pillow-2.3.1-py2.7-macosx-10.5-x86_64.egg-tmp/PIL/_imaging.so
Reason: Incompatible library version: _imaging.so requires version 13.0.0 or later, but libjpeg.8.dylib provides version 12.0.0
running test_file_gif ...
=== error 256
Traceback (most recent call last):
File "Tests/test_file_gif.py", line 3, in <module>
from PIL import Image
File "build/bdist.macosx-10.5-x86_64/egg/PIL/Image.py", line 53, in <module>
File "build/bdist.macosx-10.5-x86_64/egg/PIL/_imaging.py", line 7, in <module>
File "build/bdist.macosx-10.5-x86_64/egg/PIL/_imaging.py", line 6, in __bootstrap__
...
ImportError: dlopen(/Users/davidlaxer/.python-
eggs/Pillow-2.3.1-py2.7-macosx-10.5-x86_64.egg-tmp/PIL/_imaging.so, 2):
Library not loaded: libjpeg.8.dylib Referenced from:
/Users/davidlaxer/.python-eggs/Pillow-2.3.1-py2.7-macosx-10.5-x86_64.egg-
tmp/PIL/_imaging.so Reason: Incompatible library version: _imaging.so requires
version 13.0.0 or later, but libjpeg.8.dylib provides version 12.0.0
\-------------------------------------------------------------------- *** 94
tests of 94 failed. David-Laxers-MacBook-Pro:Pillow-2.3.1 davidlaxer$
David-Laxers-MacBook-Pro:scraper davidlaxer$ python scraper.py
Traceback (most recent call last):
File "scraper.py", line 8, in <module>
from goose import Goose
File "/users/davidlaxer/anaconda/lib/python2.7/site-packages/goose/__init__.py", line 27, in <module>
from goose.crawler import CrawlCandidate
File "/users/davidlaxer/anaconda/lib/python2.7/site-packages/goose/crawler.py", line 31, in <module>
from goose.images.extractors import UpgradedImageIExtractor
File "/users/davidlaxer/anaconda/lib/python2.7/site-packages/goose/images/extractors.py", line 28, in <module>
from goose.images.utils import ImageUtils
File "/users/davidlaxer/anaconda/lib/python2.7/site-packages/goose/images/utils.py", line 26, in <module>
from PIL import Image
File "build/bdist.macosx-10.5-x86_64/egg/PIL/Image.py", line 53, in <module>
File "build/bdist.macosx-10.5-x86_64/egg/PIL/_imaging.py", line 7, in <module>
File "build/bdist.macosx-10.5-x86_64/egg/PIL/_imaging.py", line 6, in __bootstrap__
ImportError: dlopen(/Users/davidlaxer/.python-eggs/Pillow-2.3.1-py2.7-macosx-10.5-x86_64.egg-tmp/PIL/_imaging.so, 2): Library not loaded: libjpeg.8.dylib
Referenced from: /Users/davidlaxer/.python-eggs/Pillow-2.3.1-py2.7-macosx-10.5-x86_64.egg-tmp/PIL/_imaging.so
Reason: Incompatible library version: _imaging.so requires version 13.0.0 or later, but libjpeg.8.dylib provides version 12.0.0
Answer: The problem is that both the `libjpeg-turbo` and the `jpeg` port in MacPorts
provide `libjpeg.dylib` and corresponding headers (consequently, they conflict
with each other and cannot be installed simultaneously), but the `jpeg` port
ships `libjpeg.9.dylib`, which is ABI-incompatible with `libjpeg-turbo`'s
`libjpeg.8.dylib`.
The problem is not caused by the installation of the `vsftpd` port; rather,
MacPorts runs a sanity check after each installation or update called `rev-upgrade`
that checks for ABI compatibility issues such as referencing non-existent
or incompatible libraries. When it detects such issues (as in your
case), it attempts to rebuild the broken ports to fix the issues.
This sanity check determines that your `tiff` port is broken, because one of
its binaries or libraries links against `libjpeg.9.dylib` -- this file
doesn't exist because it is provided by the `jpeg` port, but you only have
`libjpeg-turbo` installed. The rebuild then attempts to install the `jpeg`
port as a dependency of `tiff` and fails because of the conflict.
You have multiple options at this point:
* Uninstall `libjpeg-turbo` and install `jpeg`. That should fix your issues, but is only an option if you don't explicitly want `libjpeg-turbo` for some reason.
* Uninstall all ports that explicitly depend on the `jpeg` port and rebuild those that will accept `libjpeg-turbo` as a provider of the `libjpeg.dylib` library; unfortunately, the `tiff` port is currently not among them:
$> port info --depends_lib tiff
depends_lib: port:jpeg, port:xz, port:zlib
This would contain `path:lib/libjpeg.dylib:jpeg` if it supported both
versions. A quick survey suggests only very few ports in MacPorts currently
support `libjpeg-turbo` to provide JPEG functionality. This may change in the
future (and in fact, there was a discussion about that recently on the mailing
lists).
**TL;DR:**
sudo port -f uninstall libjpeg-turbo
sudo port install jpeg
sudo port rev-upgrade
|
Python/Selenium: Not able to find dynamically-generated element (button), Error: "Element could not be found"
Question: I'm trying to post text and hyperlink combinations on multiple Facebook groups
for my online business promotion.
The problem with my code is: when I pass a hyperlink and some text to
`send_keys` and try to click a button, my script is not able to locate the
button and gives me an error: `element could not be found`.
When I try to pass text to `send_keys` without a hyperlink, I don't receive
this error.
Code:
import re
from random import randint
import os
import time
import sys
import csv
    from selenium import webdriver
    from selenium.webdriver.common.keys import Keys
    from selenium.webdriver.support.ui import Select

    driver = webdriver.Firefox()
    driver.get("https://www.facebook.com/")
    driver.find_element_by_id("email").clear()
    driver.find_element_by_id("email").send_keys("[email protected]")
    driver.find_element_by_id("pass").clear()
    driver.find_element_by_id("pass").send_keys("************")
    driver.find_element_by_class_name("uiButton").click()
    time.sleep(3)
with open('url1', 'r') as f:
for line in f:
time.sleep(randint(8,10))
driver.get(line)
try:
driver.find_element_by_class_name("uiTextareaAutogrow").click()
except:
pass
time.sleep(randint(3,6))
driver.find_element_by_class_name("uiTextareaAutogrow").send_keys("some text and hyderlink http://google.com ")
time.sleep(randint(10,15))
driver.find_element_by_xpath("//*[@id='u_0_1o']/div/div[5]/div/ul/li[2]/button/").click()
Note: I have URLs of groups collected in a file.
Please advise how this problem can be solved.
Answer: Instead of `find_element_by_xpath` you can try to use `WebDriverWait`. Syntax
is something like this -
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions

    WebDriverWait(driver, 10).until(expected_conditions.element_to_be_clickable((By.CSS_SELECTOR, '<CSS HERE>'))).click()
This will basically try calling `find_element_by_css_selector` every half a
second until timeout.
|
Checking number is prime in python, why check up to int(sqrt(n)-1)) not int(sqrt(n))
Question: new to Python here. I am trying to understand how this function works to check
prime numbers:
from itertools import count, islice
from math import sqrt
def is_prime(n):
if n < 2: return False
return all(n%i for i in islice(count(2), int(sqrt(n)-1)))
From what I understand you can check factors up to and **including** the
square root of n, so why is this only testing up to sqrt(n)-1? I'm also not
clear on the `return all` part for the function. n%i returns an int, the
remainder. So why does this expression evaluate as a bool? Any pointers on
this would be great. thanks!
Answer: Because the second argument to `islice` is a count (the number of items to
take), not the value to stop on: `islice(count(2), int(sqrt(n)-1))` yields
`int(sqrt(n)-1)` consecutive integers starting at 2, which runs up through
roughly `sqrt(n)`.

This would be far better written with `xrange(2, int(sqrt(n))+1)`.

The plus one here is to make the range inclusive at both ends, which xrange
normally is not.

As for the `return all` part: `n % i` evaluates to an integer, and Python
treats any nonzero integer as truthy, so `all(...)` is `False` exactly when
some remainder is 0 -- that is, when a divisor of n was found.
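Putting the answer's suggestion (checking up to and including `int(sqrt(n))` with a plain range) into a full function, a sketch might look like:

```python
from math import sqrt

def is_prime(n):
    """Trial division up to and including int(sqrt(n))."""
    if n < 2:
        return False
    # a nonzero remainder for every candidate divisor means n is prime
    return all(n % i for i in range(2, int(sqrt(n)) + 1))
```

`range` works on both Python 2 and 3, so no `xrange` is needed.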
|
Create Matrix from a csv file - Python
Question: I am trying to read some numbers from a .csv file and store them into a matrix
using Python. The input file looks like this
> Input File
>
>
> B,1
> A,1
> A,1
> B,1
> A,3
> A,2
> B,1
> B,2
> B,2
>
The input is to be manipulated to a matrix like -
> Output File
>
>
> 1 2 3
> A 2 1 1
> B 3 2 0
>
Here, the first column of the input file becomes the row, second column
becomes the column and the value is the count of the occurrence. How should I
implement this? The size of my input file is huge (1000000 rows) and hence
there can be large number of rows (anywhere between 50 to 10,000) and columns
(from 1 to 50)
Answer: With pandas, it becomes easy, almost in just 3 lines
import pandas as pd
df = pd.read_csv('example.csv', names=['label', 'value'])
# >>> df
# label value
# 0 B 1
# 1 A 1
# 2 A 1
# 3 B 1
# 4 A 3
# 5 A 2
# 6 B 1
# 7 B 2
# 8 B 2
s = df.groupby(['label', 'value']).size()
# >>> s
# label value
# A 1 2
# 2 1
# 3 1
# B 1 3
# 2 2
# dtype: int64
# ref1: http://stackoverflow.com/questions/15751283/converting-a-pandas-multiindex-dataframe-from-rows-wise-to-column-wise
# ref2: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.unstack.html
m = s.unstack()
# >>> m
# value 1 2 3
# label
# A 2 1 1
# B 3 2 NaN
# Below are optional: just to make it look more like what you want
m.columns.name = None
m.index.name = None
m = m.fillna(0)
print m
# 1 2 3
# A 2 1 1
# B 3 2 0
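If pandas isn't available, or you'd rather stream the 1,000,000 rows without loading them all at once, the same table can be built in one pass with the stdlib `csv` and `collections` modules -- a sketch with the question's sample rows inlined:

```python
import csv
from collections import Counter, defaultdict

def count_matrix(lines):
    """Map each row label to a Counter of its column values."""
    counts = defaultdict(Counter)
    for label, value in csv.reader(lines):
        counts[label][value] += 1
    return counts

rows = ['B,1', 'A,1', 'A,1', 'B,1', 'A,3', 'A,2', 'B,1', 'B,2', 'B,2']
m = count_matrix(rows)
# m['A'] counts value 1 twice and values 2 and 3 once each;
# m['B'] counts value 1 three times and value 2 twice
```

Passing an open file object instead of `rows` streams the real input one line at a time, and a missing (label, value) pair simply reads back as 0 from the Counter.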
|
Python bit list to byte list
Question: I have a long 1-dimensional list of integer 1's and 0's, representing 8-bit
binary bytes. What is a _neat_ way to create a new list from that, containing
the integer bytes.
Being familiar with C, but new to Python, I've coded it in the way I'd do it
with C: an elaborate structure that loops though each bit. However, I'm aware
that the whole point of Python over C is that such things can usually be done
compactly and elegantly, and that I should learn how to do that. Maybe using
list comprehension?
This works, but suggestions for a more "Pythonic" way would be appreciated:
#!/usr/bin/env python2
bits = [1,0,0,1,0,1,0,1,0,1,1,0,1,0,1,1,1,1,1,0,0,1,1,1]
bytes = []
byt = ""
for bit in bits:
byt += str(bit)
if len(byt) == 8:
bytes += [int(byt, 2)]
byt = ""
print bytes
$ bits-to-bytes.py
[149, 107, 231]
Answer: You can slice the list into chunks of 8 elements and map the subelements to
str:
[int("".join(map(str, bits[i:i+8])), 2) for i in range(0, len(bits), 8)]
You could split it up into two parts mapping and joining once:
mapped = "".join(map(str, bits))
[int(mapped[i:i+8], 2) for i in range(0, len(mapped), 8)]
Or using iter and borrowing from the grouper
[recipe](https://docs.python.org/2/library/itertools.html#recipes) in
itertools:
it = iter(map(str, bits))
[int("".join(sli), 2) for sli in zip(*iter([it] * 8))]
`iter(map(str, bits))` maps the content of bits to str and creates an
[iterator](http://anandology.com/python-practice-book/iterators.html),
`zip(*iter([it] * 8))` groups the elements into groups of 8 subelements.
Each `zip(*iter..` consumes eight subelements from our iterator so we always
get sequential groups, it is the same logic as the slicing in the first code
we just avoid the need to slice.
As Sven commented, for lists not divisible by `n` you will lose data using zip
similarly to your original code, you can adapt the grouper recipe I linked to
handle those cases:
    from itertools import zip_longest  # on Python 2, use izip_longest instead
    bits = [1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0]
    it = iter(map(str, bits))
    print([int("".join(sli), 2) for sli in zip_longest(*iter([it] * 8), fillvalue="")])
    [149, 107, 231, 2] # using just zip would be [149, 107, 231]
The `fillvalue=""` means we pad the odd length group with empty string so we
can still call `int("".join(sli), 2)` and get correct output as above where we
are left with `1,0` after taking `3 * 8` chunks.
In your own code `bytes += [int(byt, 2)]` could simply become
`bytes.append(int(byt, 2))`
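For completeness, a sketch that skips the string round-trip altogether and packs each byte with bit shifts (most significant bit first, as in the question; any trailing partial group is simply dropped here, unlike the `zip_longest` variant):

```python
def bits_to_bytes(bits):
    """Pack groups of 8 bits (MSB first) into integer bytes using shifts."""
    out = []
    # stop at the last full group of 8; a trailing partial byte is discarded
    for i in range(0, len(bits) - len(bits) % 8, 8):
        byte = 0
        for bit in bits[i:i + 8]:
            byte = (byte << 1) | bit
        out.append(byte)
    return out

bits = [1,0,0,1,0,1,0,1, 0,1,1,0,1,0,1,1, 1,1,1,0,0,1,1,1]
# bits_to_bytes(bits) -> [149, 107, 231]
```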
|
Error while dumping out data from sqlite3
Question: I have used **sqlite3_connection.iterdump()** method to dump the sqlite3 the
database.
I have written a module in python that dumps out the sqlite3 tables. The
module works fine if I run it locally in my machine.
And, After creating a python package of the module using pyinstaller, if I try
to dump out the database it gives an error saying
"ImportError: No module named sqlite3.dump"
Any idea how I can solve this issue. Or is there any alternative to get the
sqlite3 dump.
Here is what I'm following to dump the database.
#Export database
def export_database(self):
database_string = ""
for line in self.conn.iterdump():
database_string += '%s\n' % (line)
return database_string
#Import database
def import_database(self, database_string):
self.cursor.executescript(database_string)
Answer: Please verify that you have the file `hooks/hook-sqlite3.py` under your
_PyInstaller_ installation directory. If you don't have it please install the
[latest _PyInstaller_
version](https://github.com/pyinstaller/pyinstaller/wiki#downloads).
* * *
If you're unable to install the latest version, create the file `hook-
sqlite3.py` with the following content:
from PyInstaller.hooks.hookutils import collect_submodules
hiddenimports = collect_submodules('sqlite3')
When building, supply the path to the directory in which you placed the hook
file as a value to the `--additional-hooks-dir` argument, as follows:
--additional-hooks-dir=<path_to_directory_of_hook_file>
* * *
As per the comment below, it seems that adding `--hidden-import=sqlite3` while
building works as well.
|
Ipython notebook on 2 columns
Question: I'd like to have the cells of a Python notebook laid out on 2 columns, so I can
write annotations next to code (for example, instead of inserting 2 cells
below, I would insert a cell on the right and a cell below on the left). I know
that it's possible to use custom CSS for changing the appearance (e.g.
<https://github.com/nsonnad/base16-ipython-
notebook/blob/master/ipython-3/output/base16-3024-dark.css>); is it possible
also for the layout?
On the other hand, I found an example of how to use the css for creating a
table layout (<https://pixelsvsbytes.com/2012/02/this-css-layout-grid-is-no-
holy-grail/>), but not being very familiar with CSS I don't understand if this
can be applied to an unknown number of equal blocks (unknown because they are
generated interactively by the user). For reference, here is how currrently
looks like:
[](http://i.stack.imgur.com/tbxij.png)
Answer: You could just change the cells to markdown or raw and make them float right.
from IPython.core.display import HTML
HTML("<style> div.code_cell{width: 75%;float: left;}"
+"div.text_cell{width: 25%;float: right;}"
+"div.text_cell div.prompt {display: none;}</style>")
Now when you enter a cell and want it on the right press esc-r (or m).
Esc deselects the cell and allows the notebook to process commands; r is the
command to make a raw cell.
[](http://i.stack.imgur.com/hDWGn.jpg)
|
Best data structure to use in python to store a 3 dimensional cube of named data
Question: I would like some feedback on my choice of data structure. I have a 2D X-Y
grid of current values for a specific voltage value. I have several voltage
steps and have organized the data into a cube of X-Y-Voltage. I illustrated
the axes here: <http://imgur.com/FVbluwB>.
I currently use numpy arrays in python dictionaries for the different kind of
transistors I am sweeping. I'm not sure if this is the best way to do this.
I've looked at Pandas, but am also not sure if this is a good job for Pandas.
Was hoping someone could help me out, so I could learn to be pythonic! The
code to generate some test data and the end structure is below.
Thank you!
import numpy as np
#make test data
test__transistor_data0 = {"SNMOS":np.random.randn(3,256,256),"SPMOS":np.random.randn(4,256,256), "WPMOS":np.random.randn(6,256,256),"WNMOS":np.random.randn(6,256,256)}
test__transistor_data1 = {"SNMOS":np.random.randn(3,256,256), "SPMOS":np.random.randn(4,256,256), "WPMOS":np.random.randn(6,256,256), "WNMOS":np.random.randn(6,256,256)}
test__transistor_data2 = {"SNMOS":np.random.randn(3,256,256), "SPMOS":np.random.randn(4,256,256), "WPMOS":np.random.randn(6,256,256), "WNMOS":np.random.randn(6,256,256)}
test__transistor_data3 = {"SNMOS":np.random.randn(3,256,256), "SPMOS":np.random.randn(4,256,256), "WPMOS":np.random.randn(6,256,256), "WNMOS":np.random.randn(6,256,256)}
quadrant_data = {"ne":test__transistor_data0,"nw":test__transistor_data1,"sw":test__transistor_data2,"se":test__transistor_data3}
Answer: It may be worth checking out [xray](http://xray.readthedocs.org/en/stable/why-
xray.html), which is like (and partially based on) `pandas`, but designed for
N-dimensional data.
Its two fundamental containers are a `DataArray`, which is a labeled ND array,
and a `Dataset`, which is a container of `DataArray`s.
In [29]: s1 = xray.DataArray(np.random.randn(3,256,256), dims=['voltage', 'x', 'y'])
In [30]: s2 = xray.DataArray(np.random.randn(3,256,256), dims=['voltage', 'x', 'y'])
In [32]: ds = xray.Dataset({'SNMOS': s1, 'SPMOS': s2})
In [33]: ds
Out[33]:
<xray.Dataset>
Dimensions: (voltage: 3, x: 256, y: 256)
Coordinates:
* voltage (voltage) int64 0 1 2
* x (x) int64 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 ...
* y (y) int64 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 ...
Data variables:
SPMOS (voltage, x, y) float64 -1.363 2.446 0.3585 -0.8243 -0.814 ...
SNMOS (voltage, x, y) float64 1.07 2.327 -1.435 0.4011 0.2379 2.07 ...
Both containers have a lot of nice functionality (see the docs), for example,
if you wanted to know the max value of `x` for each transistor at the first
voltage level, it'd be something like this:
In [39]: ds.sel(voltage=0).max(dim='x').max()
Out[39]:
<xray.Dataset>
Dimensions: ()
Coordinates:
*empty*
Data variables:
SPMOS float64 4.175
SNMOS float64 4.302
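For comparison, the same "max over the X-Y grid at the first voltage" lookup against the question's plain dict-of-NumPy-arrays layout might look like the sketch below (shapes shrunk and values random, so only the shape of the code matters):

```python
import numpy as np

# Rebuild a small version of the question's structure:
# transistor name -> cube of shape (voltage, x, y)
rng = np.random.RandomState(0)
data = {"SNMOS": rng.randn(3, 8, 8), "SPMOS": rng.randn(4, 8, 8)}

# Max value over the whole X-Y grid at the first voltage step, per transistor.
# With bare arrays the axis meanings live only in comments (or your head),
# which is exactly what xray's named dimensions fix.
maxes = {name: cube[0].max() for name, cube in data.items()}
print(maxes)
```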
|
Pymongo threading error while connecting to remote server from google app engine
Question: I have deployed a Flask application on Google App Engine. I am connecting to
MongoDB hosted at google compute engine using pymongo.
Here is my snippet:
from pymongo import MongoClient, ASCENDING, DESCENDING
serveraddress = 'my_server_address'
client = MongoClient(serveraddress, 27017)
db = client['MasterData']
MJCollection = db['StoredJsons']
print MJCollection.count()
This prints the count, but then the process stops with the following error:
Thread running after request. Creation traceback:
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/runtime/runtime.py", line 152, in HandleRequest
error)
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/runtime/wsgi.py", line 329, in HandleRequest
return WsgiRequest(environ, handler_name, url, post_data, error).Handle()
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/runtime/wsgi.py", line 240, in Handle
handler = _config_handle.add_wsgi_middleware(self._LoadHandler())
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/runtime/wsgi.py", line 299, in _LoadHandler
handler, path, err = LoadObject(self._handler)
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/runtime/wsgi.py", line 85, in LoadObject
obj = __import__(path[0])
File "/base/data/home/apps/s~appname-frontend/1.386781073242991356/main.py", line 2, in <module>
from dbHandler import get, update
File "/base/data/home/apps/s~appname-frontend/1.386781073242991356/dbHandler.py", line 9, in <module>
client = MongoClient(serveraddress, 27017)
File "/base/data/home/apps/s~appname-frontend/1.386781073242991356/pymongo/mongo_client.py", line 372, in __init__
executor.open()
File "/base/data/home/apps/s~appname-frontend/1.386781073242991356/pymongo/periodic_executor.py", line 64, in open
thread.start()
File "/base/data/home/runtimes/python27/python27_dist/lib/python2.7/threading.py", line 505, in start
_start_new_thread(self.__bootstrap, ())
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/runtime/runtime.py", line 82, in StartNewThread
return base_start_new_thread(Run, ())
Answer: IMHO this configuration is not going to work. Pymongo MongoClient creates a
couple of threads when connected for monitoring purposes etc. GAE wouldn't
allow that.
The reason you are getting the exception on `MJCollection.count()` and not on
`client = MongoClient(serveraddress, 27017)` is that this is when
`MongoClient` actually tries to connect.
Your alternatives are :
* Use mongoDB's REST API but then you will get penalized on functionality and speed.
* Move your Flask app to Compute Engine
|
Python smtplib Name or service not known
Question: I was making a simple daemon in Python which takes a mail queue and delivers
the messages to the recipients. Everything is working pretty well except for
`smtplib`, which is actually the most important part.
# What happens?
When I'm running the script I'm getting the following error:
root@vagrant-ubuntu-trusty-64:/mailer/tests# python daemon_run.py
[Errno -2] Name or service not known
From what I found on the internet, this error occurs when the client can't
connect to the SMTP server. Most users suggested fixes on postman, which I
don't use since I take advantage of Google's services.
# The code
headers = "\r\n".join(["from: " + "[email protected]",
"subject: " + "Testing",
"to: " + "[email protected]",
"mime-version: 1.0",
"content-type: text/html"])
content = headers + "\r\n\r\n" + template_content
server = smtplib.SMTP('smtp.google.com', 587)
server.ehlo()
server.starttls()
server.login('[email protected]', 'pass')
server.sendmail('[email protected]', '[email protected]', content)
server.close()
Please note that I'm using exactly the same login details in PHPMailer, which
actually works.
Any ideas?
Answer: It seems like the good old typo hit again. Gmail's SMTP server is
`smtp.gmail.com`, not `smtp.google.com`.
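For reference, a corrected sketch of the question's snippet (the hostname is the only substantive change; the addresses and password are placeholders, and the actual send is only defined, not executed here):

```python
import smtplib

GMAIL_SMTP_HOST = "smtp.gmail.com"  # not smtp.google.com
GMAIL_SMTP_PORT = 587

def build_message(sender, recipient, subject, html_body):
    # Assemble the headers exactly as the question does, CRLF-separated.
    headers = "\r\n".join(["from: " + sender,
                           "subject: " + subject,
                           "to: " + recipient,
                           "mime-version: 1.0",
                           "content-type: text/html"])
    return headers + "\r\n\r\n" + html_body

def send(sender, password, recipient, content):
    server = smtplib.SMTP(GMAIL_SMTP_HOST, GMAIL_SMTP_PORT)
    server.ehlo()
    server.starttls()
    server.login(sender, password)
    server.sendmail(sender, recipient, content)
    server.quit()  # quit() sends a proper SMTP QUIT; close() just drops the socket

msg = build_message("[email protected]", "[email protected]",
                    "Testing", "<b>hello</b>")
```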
|
Separate get request and database hit for each post to get like status
Question: So I am trying to make a social network on Django. Like any other social
network users get the option to like a post, and each of these likes are
stored in a model that is different from the model used for posts that show up
in the news feed. Now I have tried two choices to get the like status on the
go.
## 1.Least database hits:
Make one SQL query and get the like entry for every post id, if it exists. Now
I use a custom Django template tag to see if the like entry for the current
post exists in the QuerySet, by searching an array that contains the like
statuses of all posts. This way I use the database to get all values and
search for a particular value from the list using Python.
## 2.Separate Database Query for each post:
Here I use the same custom template tag, but rather than searching through a
QuerySet I use the MySQL database for most of the heavy lifting. I use
`model.objects.get()` for each entry.
Which is the more efficient approach? Also, I was planning on getting another
database server; can this change the choice if network latency is only around
0.1 ms?
Is there any way that I can get these like statuses on the go as boolean values
along with all the posts in a single DB query?
## An example query for the first method can be like
Let post_list be the post QuerySet
models.likes.objects.filter(user=current_user,post__in = post_list)
Answer: This is not a direct answer to your question, but I hope it is useful
nonetheless.
> and each of these likes are stored in a model that is different from the
> model used for news feed
I think you have a design issue here. It is better if you create a model that
describes a post, and then add a field `users_that_liked_it` as a [many-to-many
relationship](https://docs.djangoproject.com/en/1.8/topics/db/examples/many_to_many/)
to your user model. Then, you can do something like `post.users_that_liked_it`
and get a queryset of all users that liked your post.
In my eyes you should also avoid putting logic in templates as much as
possible. They are simply not made for it. Logic belongs into the model class,
or, if it is dependent on the page visited, in the view. (As a rule of thumb).
Lastly, if performance is your main worry, you probably shouldn't be using
Django anyway. It is just not that fast. What Django gives you is the ability
to write clean, concise code. This is much more important for a new project
than performance. Ask yourself: How many (personal) projects fail because
their performance is bad? And how many fail because the creator gets caught in
messy code?
Here is my advice: Favor clarity over performance. Especially in a young
project.
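Back on the question's method 1: whichever model layout you pick, the in-Python membership test is cheap if you turn the single query's result into a `set` first. A minimal sketch in plain Python (the commented-out query and its model names are hypothetical):

```python
# Suppose one query returned the ids of the posts the current user liked:
#   liked_ids = set(Like.objects.filter(user=current_user, post__in=post_list)
#                                .values_list('post_id', flat=True))
# Pairing each post with a boolean is then an O(1)-per-post set lookup:
post_ids = [101, 102, 103, 104]
liked_ids = {102, 104}  # stands in for the single DB query's result
like_status = [(pid, pid in liked_ids) for pid in post_ids]
print(like_status)  # [(101, False), (102, True), (103, False), (104, True)]
```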
|
Plotting a graph in python
Question: I'm new to Python and want to plot a point on a graph.
X_cord=int(raw_input("Enter the x-coordinate"))
Y_cord=int(raw_input("Enter the y-coordinate"))
I could just figure out this much.
Answer: Have a look at [matplotlib](http://matplotlib.org), a 2D plotting library for
Python. For your code, this could work:
import matplotlib.pyplot as plt # I include a module which contains the plotting functionality I need: https://docs.python.org/2/tutorial/modules.html
plt.plot(X_cord, # here go the X coordinates
Y_cord, # here go the Y coordinates
marker='x', # as I'm plotting only one point here, I'd like to make it extra visible
markersize=10 # by choosing a nice marker shape ('x') and large size
)
plt.show() # this shows the current plot in a pop-up window
If you would like to immediately save the figure as an image, you can also
choose to replace the last line by
plt.savefig("my_first_plot.pdf", bbox_inches='tight')
plt.close()
_Edit:_ I spelled it out a bit more, but the main suggestion is to get to know
Python better (ample tutorials on the web, this is not the place for it) and
to read the matplotlib documentation if you want to know more about plotting.
Hope this helps, or feel free to post specific problems you are having.
|
What can axis names be used for in python pandas?
Question: I was excited when I learned that it is possible to name the axes of pandas
data structures (panels, in particular). I named my axes, and now some plots
are labelled and the axis names show up in `mypanel.axes`.
So then I thought, hm, seems like I should be able to use my axes names in
place of `items`, `major_axis`, and `minor_axis`. I tried
`mypanel.transpose('ground_type', 'volcano', 'date')` and I was sad when that
didn't work. I couldn't find any general documentation on pandas axis names.
So, my question is: what are the intended uses for axis names?
Help me get excited about them again!
Answer: `Panel` is a bit less developed compared to other `pandas` structures, so as
far as I'm aware, named axes can't be used for much. That `transpose` use-case
seems reasonable, may be worth making an issue.
Two alternatives to consider - one is to store your `Panel` as a `DataFrame`
with a `MultiIndex`, which has better support for named levels. For example:
In [29]: from pandas.io.data import DataReader
In [30]: pnl = DataReader(['GOOG','AAPL'], 'yahoo')
In [31]: pnl.major_axis.name = 'date'
In [32]: pnl.minor_axis.name = 'ticker'
In [33]: pnl.items.name = 'measure'
In [34]: df = pnl.to_frame()
In [35]: df.unstack(level='ticker').stack(level='measure')
Out[35]:
ticker AAPL GOOG
date measure
2010-01-04 Open 2.134300e+02 NaN
High 2.145000e+02 NaN
Low 2.123800e+02 NaN
Close 2.140100e+02 NaN
....
Another would be to look at [`xray`](http://xray.readthedocs.org/en/stable/)
which is essentially a ND extension of the ideas in `pandas`. It has the
concept of naming axes built in at a deeper level.
|
Debugging error 500 issues in Python EVE
Question: What is the best way to debug error 500 issues in Python EVE on the resources?
I'm having a problem with my PATCH method in one of my item end points. Is
there an options to get more verbose error or catching the exceptions with the
proper info before we get the error 500.
My database is MongoDB and I'm using Cerberus styled schema.
Answer: If you switch debug mode on, you will get the exception message within the body
of the response. Just set `DEBUG = True` in your settings, or run the application
like this:
from eve import Eve
app = Eve()
app.run(debug=True)
Furthermore, if you really want to dig in, you could clone the repository and
install from it (`pip install -e <path to the repo>`). Then you can set your
own breakpoints directly in the source code.
|
Why does Django Queryset say: TypeError: Complex aggregates require an alias?
Question: I have a Django class as follows:
class MyModel(models.Model):
my_int = models.IntegerField(null=True, blank=True,)
created_ts = models.DateTimeField(default=datetime.utcnow, editable=False)
When I run the following queryset, I get an error:
>>> from django.db.models import Max, F, Func
>>> MyModel.objects.all().aggregate(Max(Func(F('created_ts'), function='UNIX_TIMESTAMP')))
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "MyVirtualEnv/lib/python2.7/site-packages/django/db/models/query.py", line 297, in aggregate
raise TypeError("Complex aggregates require an alias")
TypeError: Complex aggregates require an alias
How do I adjust my queryset so that I don't get this error? I want to find the
instance of MyModel with the latest created_ts. And I want to use the
`UNIX_TIMESTAMP` function to get it.
Answer: I'm not running the same Django version, but I think this'll work:
MyModel.objects.all().aggregate(latest=Max(Func(F('created_ts'), function='UNIX_TIMESTAMP')))
Note the `latest` keyword argument in there. That's the key (I think, again I
can't test this).
|
Find the superblock on disk
Question: I have to write a Python script for work. My script must print all devices
that meet some conditions. One of these conditions is the superblock: a device
must have a superblock.
other conditions:
1. any partitions is not mounted - DONE
2. any partition is not in raid - DONE
3. uuid is not in fstab - DONE
4. arr uuid is in mdadm.conf - DONE
5. device has superblock - ?????
Is there anyone who has some idea how to do it? I have to confess that I don't
have any. It's not necessary to do it in Python. Is there ANY way to check
this? :)
Thank you very much.
Answer: You can grep the output of `dumpe2fs device_name` for the existence of
_"superblock at"_.
Here's an example on my Centos 5 linux system:
>>> import shlex, subprocess
>>> filesystems = ['/dev/mapper/VolGroup00-LogVol00', '/dev/vda1', 'tmpfs']
>>> for fs in filesystems:
... command = '/sbin/dumpe2fs ' + fs
... p = subprocess.Popen(shlex.split(command),stdout=subprocess.PIPE,stderr=subprocess.STDOUT)
... output = p.communicate()[0]
... if 'superblock at' in output:
... print "{fs} has superblock".format(fs=fs)
... else:
... print "No superblock found for {fs}".format(fs=fs)
...
/dev/mapper/VolGroup00-LogVol00 has superblock
/dev/vda1 has superblock
No superblock found for tmpfs
More information on [dumpe2fs](http://linux.die.net/man/8/dumpe2fs):
<http://linux.die.net/man/8/dumpe2fs>
|
python filter and sort list of orderedict from xml2dict
Question: I have a question on sorting the xml2dict output (Python 2.7). I have XML like
this:
<schedule>
<layout file="12" fromdt="2015-07-25 00:42:35" todt="2015-09-02 02:54:14" scheduleid="34" priority="0" dependents="30.jpg,38.mp4,39.mp4"/>
<layout file="10" fromdt="2015-08-25 00:42:32" todt="2015-09-02 02:54:03" scheduleid="34" priority="1" dependents="30.jpg,38.mp4,39.mp4"/>
</schedule>
which I import into Python with the following code:
dict_schedule_xml = xmlFileToDict(filename)
layoutList.append(dict_schedule_xml['schedule']['layout'])
`layoutList` then looks like below:
[[OrderedDict([(u'@file', u'12'), (u'@fromdt', u'2015-07-24 00:42:35'), (u'@todt', u'2015-09-02 02:54:14'), (u'@scheduleid', u'34'), (u'@priority', u'0'), (u'@dependents', u'30.jpg,38.mp4,39.mp4')]), OrderedDict([(u'@file', u'10'), (u'@fromdt', u'2015-08-25 00:42:32'), (u'@todt', u'2015-09-02 02:54:03'), (u'@scheduleid', u'34'), (u'@priority', u'1'), (u'@dependents', u'30.jpg,38.mp4,39.mp4')])]]
and I want to filter out the unwanted entries by executing the following:
dict_filtered_layout_list = [s for s in layoutList if (onAirSchedule(s['@fromdt'],s['@todt']))]
dict_filtered_layout_list = sorted(dict_filtered_layout_list, key=lambda k: k['@priority'], reverse = True)
using the following self-defined function:
def onAirSchedule(fromdt, todt):
dt_now = datetime.now()
fromdt = datetime.strptime(fromdt,'%Y-%m-%d %H:%M:%S')
todt = datetime.strptime(todt,'%Y-%m-%d %H:%M:%S')
return (fromdt < dt_now < todt)
I wonder why every time I run this I get the error:
File "xibo_reader.py", line 91, in on_modified
dict_filtered_layout_list = [s for s in layoutList if (onAirSchedule(s['@fromdt'],s['@todt']))]
TypeError: list indices must be integers, not str
I would like some help on this, as I don't know much about OrderedDict
behaviour...
Answer: From the update, `layoutList` looks like a list of lists of OrderedDicts, so
when you directly try `for s in layoutList if
(onAirSchedule(s['@fromdt'],s['@todt']))`, it causes an error, as lists cannot
be indexed with strings, only integers.
If you want `dict_filtered_layout_list` to be a single list of OrderedDict
with the filtered values, then you can iterate over the list of lists inside
`layoutList` to create that single list.
Example -
dict_filtered_layout_list = [s for list_orderedDict in layoutList for s in list_orderedDict if (onAirSchedule(s['@fromdt'],s['@todt']))]
dict_filtered_layout_list = sorted(dict_filtered_layout_list, key=lambda k: k['@priority'], reverse = True)
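A self-contained sketch of that nested comprehension (plain dicts stand in for the OrderedDicts, and the dates are built relative to now so the filter deterministically keeps one entry):

```python
from datetime import datetime, timedelta

FMT = '%Y-%m-%d %H:%M:%S'

def on_air(fromdt, todt):
    # Same check as the question's onAirSchedule
    now = datetime.now()
    return datetime.strptime(fromdt, FMT) < now < datetime.strptime(todt, FMT)

past = (datetime.now() - timedelta(days=1)).strftime(FMT)
future = (datetime.now() + timedelta(days=1)).strftime(FMT)

layoutList = [[  # a list of lists, as xml2dict produced it
    {'@file': '12', '@fromdt': past, '@todt': future, '@priority': '0'},
    {'@file': '10', '@fromdt': future, '@todt': future, '@priority': '1'},  # not yet on air
]]

filtered = [s for sub in layoutList for s in sub
            if on_air(s['@fromdt'], s['@todt'])]
filtered = sorted(filtered, key=lambda k: k['@priority'], reverse=True)
print([s['@file'] for s in filtered])  # ['12']
```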
|
Broken python after Mac OS X update
Question: After an update of OS X Yosemite 10.10.5 my Python install has blown up. I am
not using brew, macports, conda or EPD here, but a native Python build. While
it was perfectly functional before, it now seems to have lost track of the
installed packages. I try to start an ipython session and it returns
$ ipython
Traceback (most recent call last):
File "/System/Library/Frameworks/Python.framework/Versions/2.7/bin/ipython", line 7, in <module>
from IPython import start_ipython
ImportError: No module named IPython
Then I resort to checking whether I can re-install ipython but my pip also
went missing:
$ sudo pip install ipython
Traceback (most recent call last):
File "/System/Library/Frameworks/Python.framework/Versions/2.7/bin/pip", line 7, in <module>
from pip import main
ImportError: No module named pip
So could it be that pip disappeared?
$ sudo easy_install install pip
Traceback (most recent call last):
File "/System/Library/Frameworks/Python.framework/Versions/2.7/bin/easy_install", line 5, in <module>
from pkg_resources import load_entry_point
ImportError: No module named pkg_resources
I have tried solutions to this [last
problem](http://stackoverflow.com/questions/7446187/no-module-named-pkg-resources)
reported before, but they do not seem to work.
So it seems that Python has lost track of itself. Anyone, clues of what may
have gone wrong and how can I fix this? Thanks!
Answer: It is not quite clear what you mean by saying that you installed "a native
Python build". But in any case, it seems you wrote files of your installation
into the system directory `/System/...`, which is not a good thing to do,
since these are controlled by OSX and e.g. may be overwritten by a system
update.
Apparently, that is then what happened. You updated OSX, it clobbered your
changes to `/System/...` and thereby messed up your Python installation.
Note that AFAIK, with OSX 10.11 it will no longer even be possible to write
into system directories, so you will need a different setup anyway.
A suggestion to avoid/fix this is simple: Use one of the standard ways to
install your Python stack -- my personal recommendation is either macports or
anaconda.
|
How can I mock/patch an associative array in python
Question: I have a module with a dictionary as associative array to implement a kind-of
switch statement.
def my_method1():
return "method 1"
def my_method2():
return "method 2"
map_func = {
'0': my_method1,
'1': my_method2
}
def dispatch(arg):
return map_func[arg]()
How can I mock my_method1 in tests? I've tried the following without success:
import my_module as app
@patch('my_module.my_method1')
def test_mocking_sample(self, my_mock):
my_mock.return_value = 'mocked'
assert_equal('mocked',app.dispatch('0'))
Any idea?
Answer: This piece of
[patch](http://www.voidspace.org.uk/python/mock/patch.html#where-to-patch)
documentation says the following:
> patch works by (temporarily) changing the object that a name points to with
> another one. There can be many names pointing to any individual object, so
> for patching to work you must ensure that you patch the name used by the
> system under test.
Basically, your dispatcher won't see it, as the mapping is built to reference
the original method, before the patch is applied.
The simplest thing you can do to make it mockable is to fold the mapping into
the `dispatch` function:
def dispatch(arg):
return {
'0': my_method1,
'1': my_method2
}[arg]()
This does have the downside that it rebuilds that mapping every time you call
it, so it will be slower.
Trying to get a bit clever, it seems that Python lets you swap out the actual
code of a function, like so:
>>> f = lambda: "foo"
>>> a = f
>>> g = lambda: "bar"
>>> f.func_code = g.func_code
>>> a()
'bar'
I won't recommend that you do it this way, but maybe you can find a mocking
framework that supports something similar.
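Another option that leaves the module-level mapping in place is `patch.dict` (from `unittest.mock` on Python 3, or the standalone `mock` package on Python 2), which temporarily swaps entries in the dict itself, so the dispatcher's lookup finds the replacement. A sketch with the module collapsed inline:

```python
from unittest import mock

def my_method1():
    return "method 1"

map_func = {'0': my_method1}

def dispatch(arg):
    return map_func[arg]()

# Patch the dict entry rather than the name my_method1:
# the mapping is what dispatch actually reads.
with mock.patch.dict(map_func, {'0': lambda: 'mocked'}):
    assert dispatch('0') == 'mocked'

assert dispatch('0') == 'method 1'  # original entry restored on exit
```

In a real test you would patch by dotted name instead, e.g. `@mock.patch.dict('my_module.map_func', {'0': ...})`.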
|
Transpose multiple variables in rows to columns depending on a groupby using pandas
Question: This refers to a question answered before using SAS: [SAS - transpose
multiple variables in rows to
columns](http://stackoverflow.com/questions/25384634/sas-transpose-multiple-variables-in-rows-to-columns)
The new thing is that the length of the variables is not two but varies. Here
is an example:
acct la ln seq1 seq2
0 9999 20.01 100 1 10
1 9999 19.05 1 1 10
2 9999 30.00 1 1 10
3 9999 26.77 100 2 11
4 9999 24.96 1 2 11
5 8888 38.43 218 3 20
6 8888 37.53 1 3 20
My desired output is:
acct la ln seq1 seq2 la0 la1 la2 la3 ln0 ln1 ln2
5 8888 38.43 218 3 20 38.43 37.53 NaN NaN 218 1 NaN
0 9999 20.01 100 1 10 20.01 19.05 30 NaN 100 1 1
3 9999 26.77 100 2 11 26.77 24.96 NaN NaN 100 1 NaN
In SAS I could use proc summary which is fairly simple however I want to get
it done in Python since I can't use SAS any longer.
I already have a solution, which I can reuse for my problems, but I was
wondering if there is an easier option in pandas which I didn't see. Here is my
solution. It would be interesting if someone has a faster approach!
# write multiple row to col based on groupby
import pandas as pd
from pandas import DataFrame
import numpy as np
data = DataFrame({
"acct": [9999, 9999, 9999, 9999, 9999, 8888, 8888],
"seq1": [1, 1, 1, 2, 2, 3, 3],
"seq2": [10, 10, 10, 11, 11, 20, 20],
"la": [20.01, 19.05, 30, 26.77, 24.96, 38.43, 37.53],
"ln": [100, 1, 1, 100, 1, 218, 1]
})
# group the variables by some classes
grouped = data.groupby(["acct", "seq1", "seq2"])
def rows_to_col(column, size):
# create head and contain to iterate through the groupby values
head = []
contain = []
for i,j in grouped:
head.append(i)
contain.append(j)
# transpose the values in contain
contain_transpose = []
for i in range(0,len(contain)):
contain_transpose.append(contain[i][column].tolist())
# determine the longest list of a sublist
length = len(max(contain_transpose, key = len))
# assign missing values to sublist smaller than longest list
for i in range(0, len(contain_transpose)):
if len(contain_transpose[i]) != length:
contain_transpose[i].extend(["NaN"] * (length - len(contain_transpose[i])))
# create columns for the transposed column values
for i in range(0, len(contain)):
for j in range(0, size):
contain[i][column + str(j)] = np.nan
# assign the transposed values to the column
for i in range(0, len(contain)):
for j in range(0, length):
contain[i][column + str(j)] = contain_transpose[i][j]
# now always take the first values of the grouped group
concat_list = []
for i in range(0, len(contain)):
concat_list.append(contain[i][:1])
return pd.concat(concat_list) # concate the list
# fill in column name and expected size of the column
data_la = rows_to_col("la", 4)
data_ln = rows_to_col("ln", 3)
# merge the two data frames together
cols_use = data_ln.columns.difference(data_la.columns)
data_final = pd.merge(data_la, data_ln[cols_use], left_index=True, right_index=True, how="outer")
data_final.drop(["la", "ln"], axis = 1)
Answer: Note that:
In [58]:
print grouped.la.apply(lambda x: pd.Series(data=x.values)).unstack()
0 1 2
acct seq1 seq2
8888 3 20 38.43 37.53 NaN
9999 1 10 20.01 19.05 30
2 11 26.77 24.96 NaN
and:
In [59]:
print grouped.ln.apply(lambda x: pd.Series(data=x.values)).unstack()
0 1 2
acct seq1 seq2
8888 3 20 218 1 NaN
9999 1 10 100 1 1
2 11 100 1 NaN
Therefore:
In [60]:
df2 = pd.concat((grouped.la.apply(lambda x: pd.Series(data=x.values)).unstack(),
grouped.ln.apply(lambda x: pd.Series(data=x.values)).unstack()),
keys= ['la', 'ln'], axis=1)
print df2
la ln
0 1 2 0 1 2
acct seq1 seq2
8888 3 20 38.43 37.53 NaN 218 1 NaN
9999 1 10 20.01 19.05 30 100 1 1
2 11 26.77 24.96 NaN 100 1 NaN
The only problem is that the column index are `MultiIndex`. If we don't want
it, we can transform them to `la0....` by:
df2.columns = map(lambda x: x[0]+str(x[1]), df2.columns.tolist())
I don't know what do you think. But I prefer the `SAS` `PROC TRANSPOSE` syntax
for better readability. `Pandas` syntax is concise but less readable in this
particular case.
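The final column-flattening step can be seen in isolation below (note that on Python 3, `map` returns an iterator, so a list comprehension or `list(...)` is needed when assigning to `df.columns`):

```python
import pandas as pd

# A tiny frame with the same two-level column structure as df2 above
cols = pd.MultiIndex.from_product([['la', 'ln'], [0, 1, 2]])
df = pd.DataFrame([[1, 2, 3, 4, 5, 6]], columns=cols)

# Flatten ('la', 0) -> 'la0', and so on
df.columns = [name + str(pos) for name, pos in df.columns.tolist()]
print(df.columns.tolist())  # ['la0', 'la1', 'la2', 'ln0', 'ln1', 'ln2']
```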
|
__pycache__ folder executes each time i run any other file in the folder
Question: I am learning python and I have a tutorial folder with 5 or 6 python files.
One of them contained regex functions say `file_regex.py`. The problem is when
I execute any other file in the folder, always `file_regex.py` is executed
thus giving the output of `file_regex.py`. I am not importing file_regex.py in
any of the other files.
`file_regex.py`
> import re
>
> sentence = ''' Jessica_is_27 year7s old, whereas John is 12 but Stephanie is
> 12 and Marco is 50 years '''
>
> ages = re.findall(r'\d{1,3}', sentence)
>
> names = re.findall('[A-Z][a-z]+', sentence) test = re.findall('\W',
> sentence)
>
> print(ages)
>
> print(names)
>
> print(test)
This is because of the `__pycache__` folder that is created, which has a `.pyc`
file for `file_regex.py`.
`regex.cpython-23.pyc`
> � Vo�U�@sjddlZdZejde�Zejde�Zejde�Zee�ee�ee�dS)�NzX Jessica_is_27 year7s old,
> whereas John is 12 but Stephanie is 12 and Marco is 50 years
> z\d{1,3}z[A-Z][a-z]+z\W)�re�sentence�findallZages�names�test�print�rr�!/home/sivasurya/tutorial/regex.py�s
I have two questions:
1. Why is the `__pycache__` folder created only for the `file_regex.py` file?
2. How can I delete the `__pycache__` folder, or otherwise solve this problem? (I tried compiling the Python file with the `python -B file1.py` command, which didn't work.)
P.S: I work in miniconda environment (python 3.x), if that helps
Answer: > This is because of the `__pycache__` folder . . .
This is incorrect. The existence of the `__pycache__` folder has nothing to do
with whether a file is run or not. It simply holds the compiled files, nothing
else.
If your `file_regex.py` keeps being executed it is because the other files
have
import file_regex
or
from file_regex import ...
in them.
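If you want a file to stay importable without its top-level code running, the standard fix is to put the script part behind an `if __name__ == "__main__":` guard. A small demonstration that writes a guarded module to disk and imports it:

```python
import importlib.util
import os
import tempfile

module_src = '''\
executed = []

def main():
    executed.append("ran")

if __name__ == "__main__":  # true only when run as a script, not on import
    main()
'''

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "guarded.py")
    with open(path, "w") as f:
        f.write(module_src)
    # Import the file the long way so we control its module name
    spec = importlib.util.spec_from_file_location("guarded", path)
    mod = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(mod)

print(mod.executed)  # [] -- main() did not run on import
```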
|
Bokeh: pass vars to CustomJS for Widgets
Question: A nice thing about Bokeh is that callbacks can be specified from the Python
layer that result in actions on the JavaScript level without the need for
bokeh-server. So one can create interactive widgets that run in a browser
without an IPython or Bokeh server running.
The 0.9.3. documentation gives an example that I can reproduce in an ipython
notebook:
<http://bokeh.pydata.org/en/latest/docs/user_guide/interaction.html#cutomjs-for-widgets>
from bokeh.io import vform
from bokeh.models import CustomJS, ColumnDataSource, Slider
from bokeh.plotting import figure, output_file, show
output_file("callback.html")
x = [x*0.005 for x in range(0, 200)]
y = x
source = ColumnDataSource(data=dict(x=x, y=y))
plot = figure(plot_width=400, plot_height=400)
plot.line('x', 'y', source=source, line_width=3, line_alpha=0.6)
callback = CustomJS(args=dict(source=source), code="""
var data = source.get('data');
var f = cb_obj.get('value')
x = data['x']
y = data['y']
for (i = 0; i < x.length; i++) {
y[i] = Math.pow(x[i], f)
}
source.trigger('change');
""")
slider = Slider(start=0.1, end=4, value=1, step=.1, title="power", callback=callback)
layout = vform(slider, plot)
show(layout)
I want to adapt code like this to create some simple online assignments. My
question is how can I pass other variables from python to javascript directly
without invoking a Slider. For example suppose I want the Javascript to
become:
y[i] = Math.pow(x[i], A*f)
where the A was defined in an ipython code cell above (for example A = 10).
It's easy enough to define 'var A = 10' in the javascript but I'd like to set
the value of A and other variables in python and then pass them into this
javascript. Is there a way?
Answer: As of Bokeh 0.9.3 you can only pass "Bokeh Models" (e.g. things like data
sources and renderers), not arbitrary python objects. But we are working on
extending bokeh documents with a simple namespace concept that can be mirrored
easily.
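One workaround that needs no server and no extra Bokeh machinery: since `code` is just a string, you can bake the Python value into the JavaScript source before constructing the `CustomJS`. The sketch below only builds the string (Bokeh itself isn't imported here); the doubled braces are plain `str.format` escaping:

```python
A = 10  # defined in an earlier notebook cell

js_template = """
var data = source.get('data');
var f = cb_obj.get('value');
var A = {A};
var x = data['x'];
var y = data['y'];
for (var i = 0; i < x.length; i++) {{
    y[i] = Math.pow(x[i], A * f);
}}
source.trigger('change');
"""

js_code = js_template.format(A=A)
# then: callback = CustomJS(args=dict(source=source), code=js_code)
print("var A = 10;" in js_code)  # True
```

The downside is that the value is frozen at the time the HTML is generated; changing `A` later requires regenerating the plot.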
|
Returning columns conditionally in pandas
Question: I've read quite a few questions and answers on using indexing with pandas in
python, but I can't work out how to return columns conditionally. For
instance, consider the following:
import pandas as pd
df = pd.DataFrame([[0,1,1],[0,0,1],[0,0,0]], columns=['a','b','c'])
print(df)
a b c
0 0 1 1
1 0 0 1
2 0 0 0
How would I ask pandas to give me the column names such that the value in row
zero of that column is 1? Anticipated output: `['b','c']`
How would I ask pandas to give me the column names such that the values in rows
zero and one of that column are 1? Anticipated output: `['c']`
I'm comfortable using `.loc` to find all rows where specific conditions are
true, but this is slightly different.
Answer: @PadraicCunningham provided a solution in the comments. It was as simple as:
df.columns[df.loc[0] == 1]
If I want to select only columns that meet two (or more criteria), that can be
listed in statements like the following:
df.columns[(df.loc[:1] == 1).all()]
df.columns[(df.loc[[0, 2]] == 1).all()]
The former returns columns in which all but the last row takes value 1. The
latter returns columns in which the first and third row (labeled 0 and 2) take
value 1. My error was trying to do this with `df.loc`.
|
Combine key and mouse button events in wxpython panel using matplotlib
Question: In a `wxPython` panel I want to use `matplotlib's`
[Lasso](http://matplotlib.org/api/widgets_api.html?highlight=lasso#matplotlib.widgets.Lasso)
widget. In my implementation `Lasso` is used in three different
functionalities. Nevertheless, in order to accomplish these functionalities I
must combine key events with mouse button events.
By default, the initial `Lasso` function can be used by pressing the left
mouse button. So, for my first functionality I press the left mouse button,
select a region of interest and do something with the points included. For the
second functionality, I would like to select another region of interest and do
something else with these points.I am trying to do that by pressing
`shift`+`LMB`. Finally, in my third functionality I would like to do something
else with the selected points by pressing `Ctrl`+`LMB`.
Schematically, I would like to do the following:
if left mouse button is pressed:
use Lasso and with the included points do the 1st functionality
if shift is press and left mouse button is pressed:
use Lasso and with the included points do the 2nd functionality
if ctrl is press and lmb is also pressed :
call Lasso and with the included points do the 3rd functionality
Unfortunately, I cannot achieve my goal and I get the following error: `if
self.shift_is_held == True: AttributeError: object has no attribute
'shift_is_held'`. It seems that the key events are not being recognized, while
in other cases the `widgetlock` call seems not to make the axes available.
Here are parts of my code:
def scatter(self,data):
#create scatter plot
fig = self._view_frame.figure
fig.clf()
Data = []
self.lmans = []
self.points_colors = []
Data2 = []
self.lmans_2 = []
Data3 = []
self.lmans_2 = []
ax = fig.add_subplot('111')
datum = []
datum_2 = []
datum_3 = []
for x,y in zip(self.data[:,0],self.data[:,1]):
datum.append(Datum(x,y))
datum_2.append(Datum2(x,y))
datum_3.append(Datum3(x,y))
ax.scatter(self.data[:, 0], self.data[:, 1], s=150, marker='d')
ax.set_xlim((min(self.data[:,0]), max(self.data[:,0])))
ax.set_ylim((min(self.data[:,1]), max(self.data[:,1])))
ax.set_aspect('auto')
Data.append(datum)
Data2.append(datum_2)
Data3.append(datum_3)
lman = self.LassoManager(ax, datum, Data)
lman_2 = self.LassoManager2(ax, datum_2, Data2)
lman_3 = self.LassoManager3(ax, datum_3, Data3)
fig.canvas.draw()
self.lmans.append(lman)
self.lmans_2.append(lman_2)
self.lmans_3.append(lman_3)
fig.canvas.mpl_connect('axes_enter_event', self.enter_axes)
class Lasso(AxesWidget):
"""Selection curve of an arbitrary shape.
The selected path can be used in conjunction with
:func:`~matplotlib.path.Path.contains_point` to select data points
from an image.
Unlike :class:`LassoSelector`, this must be initialized with a starting
point `xy`, and the `Lasso` events are destroyed upon release.
Parameters:
*ax* : :class:`~matplotlib.axes.Axes`
The parent axes for the widget.
*xy* : array
Coordinates of the start of the lasso.
*callback* : function
Whenever the lasso is released, the `callback` function is called and
passed the vertices of the selected path.
"""
def __init__(self, ax, xy, callback=None, useblit=True):
AxesWidget.__init__(self, ax)
self.useblit = useblit and self.canvas.supports_blit
if self.useblit:
self.background = self.canvas.copy_from_bbox(self.ax.bbox)
x, y = xy
self.verts = [(x, y)]
self.line = Line2D([x], [y], linestyle='-', color='black', lw=2)
self.ax.add_line(self.line)
self.callback = callback
self.connect_event('button_release_event', self.onrelease)
self.connect_event('motion_notify_event', self.onmove)
def onrelease(self, event):
if self.ignore(event):
return
if self.verts is not None:
self.verts.append((event.xdata, event.ydata))
if len(self.verts) > 2:
self.callback(self.verts)
self.ax.lines.remove(self.line)
self.verts = None
self.disconnect_events()
def onmove(self, event):
if self.ignore(event):
return
if self.verts is None:
return
if event.inaxes != self.ax:
return
if event.button != 1:
return
self.verts.append((event.xdata, event.ydata))
self.line.set_data(list(zip(*self.verts)))
if self.useblit:
self.canvas.restore_region(self.background)
self.ax.draw_artist(self.line)
self.canvas.blit(self.ax.bbox)
else:
self.canvas.draw_idle()
def enter_axes(self, event):
self.idfig = event.inaxes.colNum
self._view_frame.figure.canvas.mpl_connect('button_press_event', self.onpress_2)
self._view_frame.figure.canvas.mpl_connect('key_press_event', self.onkey_press_2)
self._view_frame.figure.canvas.mpl_connect('key_release_event', self.onkey_release_2)
self._view_frame.figure.canvas.mpl_connect('button_press_event', self.onpress_3)
self._view_frame.figure.canvas.mpl_connect('key_press_event', self.onkey_press_3)
self._view_frame.figure.canvas.mpl_connect('key_release_event', self.onkey_release_3)
def LassoManager(self, ax, data, Data):
self.axes = ax
self.canvas = ax.figure.canvas
self.data = data
self.Data = Data
self.Nxy = len(data)
# self.facecolors = [d.color for d in data]
self.xys = [(d.x, d.y) for d in data]
fig = ax.figure
#self.cid = self.canvas.mpl_connect('button_press_event', self.onpress)
def callback(self, verts):
#facecolors = self.facecolors#collection.get_facecolors()
#colorin = colorConverter.to_rgba('red')
#colorout = colorConverter.to_rgba('blue')
p = path.Path(verts)
self.ind = p.contains_points([(d.x, d.y) for d in self.Data[self.where.colNum]])
self._view_frame.figure.canvas.mpl_connect('button_press_event', self.onpress)
#Functionality 1
self.canvas.draw_idle()
self.canvas.widgetlock.release(self.lasso)
del self.lasso
def onpress(self,event):
if self.canvas.widgetlock.locked(): return
if event.inaxes is None: return
self.lasso = Lasso(event.inaxes, (event.xdata, event.ydata), self.callback)
self.where = event.inaxes
# acquire a lock on the widget drawing
self.canvas.widgetlock(self.lasso)
def onkey_press_2(self,event):
if event.key =='shift':
self.merge_is_held = True
def onkey_press_3(self,event):
if event.key == 'control':
self.split_is_held = True
def onkey_release_2(self, event):
if event.key == 'shift':
self.merge_is_held = False
def onkey_release_3(self, event):
if event.key == 'control':
self.split_is_held = False
def LassoManagerMerge(self, ax, data2, Data2):
self.axes = ax
self.canvas = ax.figure.canvas
self.data2 = data2
self.Data2 = Data2
self.Nxy = len(dataMerge)
self.facecolors_2 = [d.color for d in data2]
# print "facecolors",self.facecolors
self.xys = [(d.x, d.y) for d in data2]
# print "xys",self.xys
fig = ax.figure
self.collection_2 = RegularPolyCollection(
fig.dpi, 6, sizes=(0,),
facecolors=self.facecolors_2,
offsets = self.xys,
transOffset = ax.transData)
ax.add_collection(self.collection_2)
def callback_2(self, verts):
self.facecolors_2 = self.collection_2.get_facecolors()
#colorin = colorConverter.to_rgba('red')
#colorout = colorConverter.to_rgba('blue')
p = path.Path(verts)
self.ind = p.contains_points([(d.x, d.y) for d in self.Data2[self.where.colNum]])
#Functionality 2
self.canvas.draw_idle()
self.canvas.widgetlock.release(self.lasso_2)
del self.lasso_2
def onpress_2(self, event):
if self.canvas.widgetlock.locked(): return
if event.inaxes is None: return
if event.button == 1:
if self.shift_is_held == True:
print "Shift pressed"
self.lasso_2 = Lasso(event.inaxes, (event.xdata, event.ydata), self.callback_2)
print "Shift pressed"
self.where = event.inaxes
self.canvas.widgetlock(self.lasso_2)
def LassoManager3(self, ax, data3, Data3):
self.axes = ax
self.canvas = ax.figure.canvas
self.data3 = data3
self.Data3 = Data3
self.Nxy = len(data3)
self.facecolors_3= [d.color for d in data3]
# print "facecolors",self.facecolors
self.xys = [(d.x, d.y) for d in data3]
# print "xys",self.xys
fig = ax.figure
self.collection_3 = RegularPolyCollection(
fig.dpi, 6, sizes=(0,),
facecolors=self.facecolors_3,
offsets = self.xys,
transOffset = ax.transData)
ax.add_collection(self.collection_3)
def callback_3(self, verts):
self.facecolors_3 = self.collection_3.get_facecolors()
#colorin = colorConverter.to_rgba('red')
#colorout = colorConverter.to_rgba('blue')
p = path.Path(verts)
self.ind = p.contains_points([(d.x, d.y) for d in self.Data3[self.where.colNum]])
#Functionality 3
self.canvas.draw_idle()
self.canvas.widgetlock.release(self.lasso_3)
del self.lasso_3
def onpress_3(self, event):
if self.canvas.widgetlock.locked(): return
if event.inaxes is None: return
if event.button == 1:
if self.split_is_held == True:
print " Split pressed"
self.lasso_3 = Lasso(event.inaxes, (event.xdata, event.ydata), self.callback_3)
print "Split pressed"
self.where = event.inaxes
# acquire a lock on the widget drawing
self.canvas.widgetlock(self.lasso_3)
Any suggestions?
Answer: I'm not entirely sure what you're doing wrong because your code looks
incomplete. I think your general idea was correct, but you seem to have been
mixing up classes and trying to access the property `shift_is_held` from the
wrong class or something.
I wrote this simple example using the `lasso_example.py` code from matplotlib
examples. I did run into some complications trying to use the `control` key.
When I try to drag with the mouse using the control key, the Lasso manager
becomes unresponsive (including in the original code from matplotlib). I could
not figure out why, so I used the `shift` and `alt` keys as modifiers in the
present code.
You'll see that the logic of what action to perform depending on which key is
held down at the moment when you release the lasso is performed in
`LassoManager.callback()`
import logging
import matplotlib
from matplotlib.widgets import Lasso
from matplotlib.colors import colorConverter
from matplotlib.collections import RegularPolyCollection
from matplotlib import path
import matplotlib.pyplot as plt
from numpy.random import rand
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
class Datum(object):
colorin = colorConverter.to_rgba('red')
colorShift = colorConverter.to_rgba('cyan')
colorCtrl = colorConverter.to_rgba('pink')
colorout = colorConverter.to_rgba('blue')
def __init__(self, x, y, include=False):
self.x = x
self.y = y
if include:
self.color = self.colorin
else:
self.color = self.colorout
class LassoManager(object):
def __init__(self, ax, data):
self.axes = ax
self.canvas = ax.figure.canvas
self.data = data
self.Nxy = len(data)
facecolors = [d.color for d in data]
self.xys = [(d.x, d.y) for d in data]
fig = ax.figure
self.collection = RegularPolyCollection(
fig.dpi, 6, sizes=(100,),
facecolors=facecolors,
offsets = self.xys,
transOffset = ax.transData)
ax.add_collection(self.collection)
self.cid = self.canvas.mpl_connect('button_press_event', self.onpress)
self.keyPress = self.canvas.mpl_connect('key_press_event', self.onKeyPress)
self.keyRelease = self.canvas.mpl_connect('key_release_event', self.onKeyRelease)
self.lasso = None
self.shiftKey = False
self.ctrlKey = False
def callback(self, verts):
logging.debug('in LassoManager.callback(). Shift: %s, Ctrl: %s' % (self.shiftKey, self.ctrlKey))
facecolors = self.collection.get_facecolors()
p = path.Path(verts)
ind = p.contains_points(self.xys)
for i in range(len(self.xys)):
if ind[i]:
if self.shiftKey:
facecolors[i] = Datum.colorShift
elif self.ctrlKey:
facecolors[i] = Datum.colorCtrl
else:
facecolors[i] = Datum.colorin
else:
facecolors[i] = Datum.colorout
self.canvas.draw_idle()
self.canvas.widgetlock.release(self.lasso)
del self.lasso
def onpress(self, event):
if self.canvas.widgetlock.locked():
return
if event.inaxes is None:
return
self.lasso = Lasso(event.inaxes, (event.xdata, event.ydata), self.callback)
# acquire a lock on the widget drawing
self.canvas.widgetlock(self.lasso)
def onKeyPress(self, event):
logging.debug('in LassoManager.onKeyPress(). Event received: %s (key: %s)' % (event, event.key))
if event.key == 'alt+alt':
self.ctrlKey = True
if event.key == 'shift':
self.shiftKey = True
def onKeyRelease(self, event):
logging.debug('in LassoManager.onKeyRelease(). Event received: %s (key: %s)' % (event, event.key))
if event.key == 'alt':
self.ctrlKey = False
if event.key == 'shift':
self.shiftKey = False
if __name__ == '__main__':
data = [Datum(*xy) for xy in rand(100, 2)]
ax = plt.axes(xlim=(0,1), ylim=(0,1), autoscale_on=False)
lman = LassoManager(ax, data)
plt.show()
|
May I use groupby to solve this case in python?
Question: I have a redis database that it's receiving data from Arduino every ten
seconds.
Now, I want to make six ten-second data calculate one sixty-second data and
then get avg, max, min of six ten-second data as follow.
import json
a = [u'{"id":"proximity_sensor1","tstamp":1440643570238,"avg":15.0,"coefVariation":0.0,"anom":0,"max":15.0,"min":15.0,"sample_size":10}',
u'{"id":"proximity_sensor1","tstamp":1440643580307,"avg":15.0,"coefVariation":0.0,"anom":0,"max":15.0,"min":15.0,"sample_size":9}',
u'{"id":"proximity_sensor1","tstamp":1440643590242,"avg":15.0,"coefVariation":0.0,"anom":0,"max":15.0,"min":15.0,"sample_size":9}',
u'{"id":"proximity_sensor1","tstamp":1440643590242,"avg":15.0,"coefVariation":0.0,"anom":0,"max":15.0,"min":15.0,"sample_size":8}',
u'{"id":"proximity_sensor1","tstamp":1440643590242,"avg":15.0,"coefVariation":0.0,"anom":0,"max":15.0,"min":15.0,"sample_size":9}',
u'{"id":"proximity_sensor1","tstamp":1440643590242,"avg":15.0,"coefVariation":0.0,"anom":0,"max":15.0,"min":15.0,"sample_size":9}']
a = map(lambda x: json.loads(x), a)
#print a
def abc(aaa):
for index in range(0, len(aaa), 6):
abc = aaa[index:(index+6)]
tuples = [('avg', 'max', 'min')]
avg = sum(map(lambda x: x['avg'], abc))/6
min_ = min(map(lambda x: x['min'], abc))
max_ = max(map(lambda x: x['max'], abc))
yield [avg, max_, min_]
print list(abc(a))
I am thinking whether it has better method to solve or not. If I use
`itertools.groupby`, may I solve it faster? or anyone has good idea to
simplify calculated process?
Answer: Normally `itertools.groupby` is used with a key function that groups elements
by some condition. Since in your case you have no such condition and simply
want to group every 6 consecutive elements together, I don't think
`itertools.groupby` would give any benefit here.
That being said there are some other improvements I can suggest -
1. You can use the `key` argument of the max/min functions rather than your current map/lambda approach. Example -
max_ = max(abc, key= lambda x:x['max'])['max']
Similarly for min() function.
2. Also, I think it would be more readable to use a list comprehension for sum() rather than map/lambda. Example -
avg = sum([x['avg'] for x in abc])/6
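Putting both suggestions together, the generator could be sketched like this (the name `summarize` and the `chunk` parameter are my own; dividing by `len(group)` rather than a hard-coded 6 also keeps the average correct for a short final group):

```python
def summarize(records, chunk=6):
    # group every `chunk` consecutive readings and yield their summary
    for index in range(0, len(records), chunk):
        group = records[index:index + chunk]
        avg = sum(r['avg'] for r in group) / float(len(group))
        max_ = max(group, key=lambda r: r['max'])['max']
        min_ = min(group, key=lambda r: r['min'])['min']
        yield {'avg': avg, 'max': max_, 'min': min_}
```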
|
Grab 2 items in 1 string?
Question: I'm not smart with this as u can tell. I'm looking to grab 2 things with 1
line.
eg
<a href="(URL TO GRAB)">(TITLE TO GRAB)</a>
<a href="(URL TO GRAB)" rel="nofollow">(TITLE TO GRAB)</a>
The Urls and Titles always begin with http or https
<a href="http(s)://www.whatever.com/1.html">http(s)://www.whatever.com/1.html</a>
<a href="http(s)://www.whatever.com/2.html" rel="nofollow">http(s)://www.whatever.com/2.html</a>
I have tried the following with replace method to eliminate `rel="nofollow">`
with a blank space but there are 130 some other `re="variables'>` and I only
want the `"nofollow"` one and don't want to write 130 some replace.
item_infos=<a href="([^"]+)"([^"]+)</a>
item_order=url.tmp|title.tmp
item_skill=rss
It's python for kodi/xbmc scraping reddit.
Edit: Thanks for your help folks. I am currently using the one provided by Jon
item_infos=<a href="([^"]+)"[^>]*>([^<]+)</a>
Seems to work but i wont know till the thread is updated later. Thanks again
:)
Answer: You can use a HTML parser such as BeautifulSoup. Here's an example:
from bs4 import BeautifulSoup
html = '''<a href="http(s)://www.whatever.com/1.html">http(s)://www.whatever.com/1.html</a>
<a href="http(s)://www.whatever.com/2.html" rel="nofollow">http(s)://www.whatever.com/2.html</a>'''
soup = BeautifulSoup(html)
print 'href :', soup.a['href']
print 'title :', soup.a.text
for tag in soup.find_all('a'):
print 'href: {}, title: {}'.format(tag['href'], tag.text)
**Output**
href : http(s)://www.whatever.com/1.html
title : http(s)://www.whatever.com/1.html
href: http(s)://www.whatever.com/1.html, title: http(s)://www.whatever.com/1.html
href: http(s)://www.whatever.com/2.html, title: http(s)://www.whatever.com/2.html
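For comparison, the pattern from the question's edit can be exercised directly with Python's `re` module; `findall` returns one `(url, title)` tuple per link:

```python
import re

html = '''<a href="http://www.whatever.com/1.html">http://www.whatever.com/1.html</a>
<a href="http://www.whatever.com/2.html" rel="nofollow">http://www.whatever.com/2.html</a>'''

# the accepted pattern: group 1 is the href, group 2 the link text
pattern = r'<a href="([^"]+)"[^>]*>([^<]+)</a>'
for url, title in re.findall(pattern, html):
    print(url, title)
```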
|
Arbitrary host name resolution in Ansible
Question: Is there a way to resolve an arbitrary string as a host name in Ansible
`group_vars` file or in a Jinja2 template used by Ansible? Let's say, I want
to define a variable in `global_vars/all` that would contain one of the
several IP addresses that `www.google.com` resolves into. In this example, I
used `www.google.com` just as an example of a string that ca be resolved into
multiple IP addresses and yet I cannot use Ansible `hostvars` for the address
because I cannot ssh into it.
I tried to wire in Pythonic `socket.gethostbyname()` but could not get the
syntax right. At most, my variable became a literal "socket.gethostbyname('my-
host-1')".
I know I can fall back to a shell script and to take advantage of the tools
available in shell but I'd like to see if there is an elegant way to
accomplish this in Ansible.
The more gory details of the question are that I need to populate Postgres HBA
configuration file with IP addresses of the permitted hosts. I cannot use
their host names because the target deployment does not have reverse DNS that
is required for host name based HBA.
I really wish Postgres resolved the names in the configuration file and
matched it against the client's IP address instead of doing a reverse lookup
of the client's IP address and then matching the strings of host names. But
this is too much to expect and too long to wait. I need a workaround for now,
and I'd like to stay within Ansible for that, not having to offload this into
an external script.
Thank you for reading this far!
Answer: You can create a lookup plugin for this:
**Ansible 1.x:**
import ansible.utils as utils
import ansible.errors as errors
import socket
class LookupModule(object):
def __init__(self, basedir=None, **kwargs):
self.basedir = basedir
def run(self, terms, inject=None, **kwargs):
if not isinstance(terms, basestring):
raise errors.AnsibleError("ip lookup expects a string (hostname)")
return [socket.gethostbyname(terms)]
**Ansible 2.x:**
import ansible.utils as utils
import ansible.errors as errors
from ansible.plugins.lookup import LookupBase
import socket
class LookupModule(LookupBase):
def __init__(self, basedir=None, **kwargs):
self.basedir = basedir
def run(self, terms, variables=None, **kwargs):
hostname = terms[0]
if not isinstance(hostname, basestring):
raise errors.AnsibleError("ip lookup expects a string (hostname)")
return [socket.gethostbyname(hostname)]
Save this relative to your playbook as `lookup_plugins/ip.py`.
Then use it as `{{ lookup('ip', 'www.google.com') }}`
* [More about lookups](http://docs.ansible.com/ansible/playbooks_lookups.html)
* [(Not so much) More about creating lookup plugins](http://docs.ansible.com/ansible/developing_plugins.html#lookup-plugins)
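The only non-Ansible machinery in either plugin is `socket.gethostbyname`, which can be sanity-checked on its own (`localhost` is used here so the check needs no external DNS):

```python
import socket

# the single call both plugin variants wrap; returns one IPv4 address
print(socket.gethostbyname('localhost'))
```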
|
Userena raising RemovedInDjango19Warning
Question: I'm using the userena app in my django project; when I run `python manage.py
migrate`, it raises the warning below:
> /usr/local/lib/python2.7/dist-packages/userena/utils.py:133:
> RemovedInDjango19Warning: django.db.models.get_model is deprecated.
> profile_mod = get_model(*settings.AUTH_PROFILE_MODULE.rsplit('.', 1))
>
> /usr/local/lib/python2.7/dist-packages/django/db/models/**init**.py:55:
> RemovedInDjango19Warning: The utilities in django.db.models.loading are
> deprecated in favor of the new application loading system. from . import
> loading
How to fix this issue?
BTW, I just found that the latest build on github(<https://github.com/bread-and-
pepper/django-userena>) is failing, so is this project no longer maintained?
Answer: You can [open a pull request](https://guides.github.com/activities/forking/)
on [userena's github repo](https://github.com/bread-and-pepper/django-userena),
or, if that's difficult, you can wait until the maintainers or other
contributors fix this and then update the package version.
There are a lot of similar warnings from third-party apps, but it's better to
help out if possible.
|
Python is saying that a variable has not been asigned when it has
Question:
import random
import time
name=input("Wecome to the game what is your name")
print(("This is a numbers game"),(name),("you will be playing against the computer."))
print("The idea of the game is to get closer to 21 to the computer without going over 21"),
ready="N"
ready=input("Are you ready to play the game yet?").lower
if ready=="yes" or ready=="y":
score=0
while (score)<21 and (ready == "Y" or ready == "Yes" or ready =="YES" or ready == "yes" or ready =="y"):
player1=random.randint(1,21)
score=(score+player1)
time.sleep(3)
print(("you have scored"),(score))
if score <21:
ready=input("Do you want to add more to your score?")
if score>21: *THE PROBLEMS ON THIS LINE HERE*
print("Sorry Over 21 , The Computer Wins!")
else:
print("ok Well done let us see what the computer can do with their turn")
computerscore=0
while(computerscore)<21 and (computerscore)<(score):
computer=random.randint(1,21)
computerscore=(computerscore+computer)
time.sleep(3)
print(("The computer has scored"),(computerscore))
if (computerscore)<=21 and (computerscore)>(score):
print("Sorry the computer wins")
else:
print("You win well done")
break
I get an error that says if score>21: NameError: name 'score' is not defined
but I have put score=0 so isn't that defining it?
Answer: I am guessing you are typing the value `yes` correctly; the issue may be
because of this line -
ready=input("Are you ready to play the game yet?").lower
You are assigning a reference to the `lower` method to `ready`; instead, you
should call `lower()` and assign its return value to `ready`. Example -
ready=input("Are you ready to play the game yet?").lower()
* * *
Also, if you want your code to work when you do not input `yes` for `ready`,
you should set `score=0` before the `if` condition `if ready=="yes" or
ready=="y":`, so that `score` is defined on every path.
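The difference between `.lower` and `.lower()` is easy to see interactively: without the parentheses you get the bound method object itself, which never compares equal to `"yes"`:

```python
s = "Yes"
print(s.lower)    # a bound method object, not a string
print(s.lower())  # the lowercased string: yes
```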
|
How do I prevent python from freezing to work on a large number equation?
Question: Because it takes too much time to calculate for A, I'll want the calculation
to stop and have the program continue on to calculate for B. It would also be
helpful to know what error this is called.
A = 999999999999999999**999999999999999999
B = 9**9
The program still freezes when using threading.Timer()
import threading
import time
a = 1
def zzz():
global a
print('restarting')
a = 0
threading.Timer(1.0,zzz).start()
a = 1
threading.Timer(1.0, zzz).start()
while 1:
while a == 1:
h = 999999999999999**999999999999999
Answer: I believe the problem has been solved: adding ".0" to one of the numbers turns
the arithmetic into floating point, so Python immediately raises an
`OverflowError` for `999999999999999999.0**999999999999999999` (the result is
too large for a float) instead of grinding through the exact integer
computation; the error can be caught and ignored with try/except.
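A minimal sketch of that approach (the `float('inf')` placeholder for the too-large result is my own choice):

```python
# forcing one operand to float makes Python raise OverflowError right
# away instead of computing the enormous exact integer
try:
    A = 999999999999999999.0 ** 999999999999999999
except OverflowError:
    A = float('inf')

B = 9 ** 9
print(A, B)
```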
|
Edit python global variable defined in different file
Question: I am using Python 2.7. I want to store a variable so that I can run a script
without defining the variable in that script. I think global variables are the
way to do this although I am open to correction.
I have defined a global variable in `file1.py`:
def init():
global tvseries
tvseries = ['Murder I Wrote','Top Gear']
In another file, `file2.py`, I can call this variable:
import file1
file1.init()
print file1.tvseries[0]
If I edit the value of file1.tvseries (`file1.tvseries[0] = 'The Bill'`) in
file2.py this is not stored. How can I edit the value of file1.tvseries in
file2.py so that this edit is retained?
**EDIT: Provide answer**
Using `pickle`:
import pickle
try:
tvseries = pickle.load(open("save.p","rb"))
except:
tvseries = ['Murder I Wrote','Top Gear']
print tvseries
tvseries[0] = 'The Bill'
print tvseries
pickle.dump(tvseries,open("save.p", "wb"))
Using `json`:
import json
try:
tvseries = json.load(open("save.json"))
tvseries = [s.encode('utf-8') for s in tvseries]
except:
tvseries = ['Murder I Wrote','Top Gear']
print tvseries
tvseries[0] = str('The Bill')
print tvseries
json.dump(tvseries,open("save.json", "w"))
Both these files return `['Murder I Wrote','Top Gear']['The Bill','Top Gear']`
when run the first time and `['The Bill','Top Gear']['The Bill','Top Gear']`
when run the second time.
Answer: Try this. Create a file called **tvseries** with these contents:
Murder I Wrote
Top Gear
**file1.py** :
with open("tvseries", "r") as f:
tvseries = map(str.strip, f.readlines())
def save(tvseries):
with open("tvseries", "w") as f:
f.write("\n".join(tvseries))
**file2.py** :
import file1
print file1.tvseries[0]
file1.tvseries.append("Dr Who")
file1.save(file1.tvseries)
I've moved the contents of your `init` function out to module level since I
don't see any need for it to exist. When you `import file1`, any code at module
level is run automatically - eliminating the need for you to manually call
`file1.init()`. I've also changed the code to populate `tvseries` by reading
from a simple text file called **tvseries** containing a list of tv series,
and added a `save` function in **file1.py** which writes the contents of its
argument to the file **tvseries**.
|
Wrong pip in conda env
Question: I have a conda env called birdid.
While working in the env (i.e. I did `source activate bird_dev`), showing the
list of the packages give
(bird_dev)...$ conda list
# packages in environment at /home/jul/Development/miniconda/envs/bird_dev:
#
...
pep8 1.6.2 py27_0
pip 7.1.2 py27_0
pixman 0.26.2 0
...
but when trying to see what `pip` is used I get
(bird_dev)...$ which pip
/usr/local/bin/pip
while the correct `python` is found
(bird_dev)...$ which python
/home/jul/Development/miniconda/envs/bird_dev/bin/python
Anybody can help?
**Edit: more details about the installed versions**
Check which -a pip
(bird_dev)...$ which -a pip
/usr/local/bin/pip
/usr/bin/pip
The version in `/usr/bin/pip` is quite old.
(bird_dev)...$ /usr/bin/pip -V
pip 1.5.4 from /usr/lib/python2.7/dist-packages (python 2.7)
(bird_dev)....$ /usr/local/bin/pip -V
pip 6.1.1 from /usr/local/lib/python2.7/dist-packages (python 2.7)
There is actually no pip in the env
$ ll /home/jul/Development/miniconda/envs/bird_dev/bin/ | grep pip
returns nothing
there is one pip in `/home/jul/Development/miniconda/bin/pip`
$ /home/jul/Development/miniconda/bin/pip -V
pip 6.1.1 from /usr/local/lib/python2.7/dist-packages (python 2.7)
but it is not the version listed by `conda list`, and it is a python script
(!)
$ cat /home/jul/Development/miniconda/bin/pip
#!/home/jul/Development/miniconda/bin/python
if __name__ == '__main__':
import sys
from pip import main
sys.exit(main())
**Edit: echo $PATH**
(bird_dev)...$ echo $PATH
/home/jul/Development/miniconda/envs/bird_dev/bin:/home/jul/torch/install/bin:/home/jul/torch/install/bin:/home/jul/torch/install/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
**Edit: try to force install**
(bird_dev)...$ conda install --force pip
Fetching package metadata: ....
Solving package specifications: .
Package plan for installation in environment /home/jul/Development/miniconda/envs/bird_dev:
The following packages will be UPDATED:
pip: 7.1.2-py27_0 --> 7.1.2-py27_0
Proceed ([y]/n)? y
[ COMPLETE ]|##################################################################################################################################################################################| 100%
Extracting packages ...
[ COMPLETE ]|##################################################################################################################################################################################| 100%
Unlinking packages ...
[ COMPLETE ]|##################################################################################################################################################################################| 100%
Linking packages ...
[ COMPLETE ]|##################################################################################################################################################################################| 100%
(bird_dev)...$ which pip
/home/jul/Development/miniconda/envs/bird_dev/bin/pip
(bird_dev)...$ /home/jul/Development/miniconda/envs/bird_dev/bin/pip -V
pip 6.1.1 from /usr/local/lib/python2.7/dist-packages (python 2.7)
(bird_dev)...$ cat /home/jul/Development/miniconda/envs/bird_dev/bin/pip
#!/home/jul/Development/miniconda/envs/bird_dev/bin/python
if __name__ == '__main__':
import sys
from pip import main
sys.exit(main())
Weird.
Answer: You probably have `PYTHONPATH` set. I would recommend unsetting it, and
removing any lines from `~/.bashrc` that set it. It will cause any of your
conda environments' Pythons to look in that location before themselves.
|
How to count numbers in a list from a csv file filtering out words and commas
Question: So im fairly new to python and im looking to do a few things:
1. Display the number of numbers in the row
2. Display the average of the numbers in the row
3. Display the name of the row
4. No use of libraries such as import csv
the raw csv file has 3 rows
TTT TTTT, 21.72, 29.3, 20.08, 29.98, 29.85
DDDD, 57.51, 47.59
WWWW, 75.0, 82.43, 112.11, 89.93, 103.19, 80.6, 89.93, 103.19, 8
I have been at this for hours now and I simply can't get it; my best attempt is
the following
with open('test1.csv', newline='') as f:
content = f.readlines()
print (content)
for line in content:
entry = line.split(',')
entry = line.split()
print (entry)
nums = 0
list1avg = 0
list1=[]
for c in entry:
if is_number(c):
print ("is num "+c)
list1.append(c)
nums+=1
else:
print(c)
for i in list1:
list1avg = list1avg +1
print(list1avg)
print(nums)
print("The average is "+list1avg/nums)
print("Total numbers in this row is " +nums)
i know its a mess but any help is appreciated.
Answer: To get the number of numbers per row together with the row name, you could do
something like this (note that `is_number` still has to be defined - for
example by wrapping `float()` in a try/except):

    with open("csv_exmp.csv") as f:
        items = [line.strip().split(',') for line in f]

    for item in items:
        numbers_amount = sum(map(is_number, item))
        print numbers_amount, " in row:", item[0]

If you want to compute the average etc., just accumulate the data in some
variables outside of the for-loop.
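A fuller sketch covering the row name, the count, and the average (it parses an inline string here; swap in `open('test1.csv')` to read the real file, and note the `is_number` helper the question assumes is defined with the usual `float()`-in-`try` idiom):

```python
data = """TTT TTTT, 21.72, 29.3, 20.08, 29.98, 29.85
DDDD, 57.51, 47.59
WWWW, 75.0, 82.43, 112.11, 89.93, 103.19, 80.6, 89.93, 103.19, 8"""

def is_number(s):
    try:
        float(s)
        return True
    except ValueError:
        return False

for line in data.splitlines():
    fields = [p.strip() for p in line.split(',')]
    name = fields[0]
    values = [float(p) for p in fields[1:] if is_number(p)]
    print(name, len(values), sum(values) / len(values))
```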
|
swig python interfacing to function using void **
Question: BACKGROUND. I have an API (third party provided) consisting of C header files
and a shared library. I have managed to create a shell script for the build
environment, along with a simple interface file for swig. I am trying to make
this API accessible to an IPython environment such that I don't have to
compile C code all the time to communicate with the associated hardware that
leverages this API for I/O.
PROBLEM. The first function call I need to make creates a board handle (some
arbitrary "object" that is used for all other function calls on the C side).
The function accepts a void **; presumably the underlying function mallocs
memory with some internal structure and lets the other functions access that
memory through the handle. Anyhow, I can't seem to interface to this properly
from Python due to the lack of support for void *, and I receive a TypeError.
The offending C code snippet, with typedef's/defines extracted from the
underlying header files is:
#define WD_PVOID void*
typedef WD_PVOID WD_BOARD;
typedef WD_UINT32 WD_RetCode;
#define WD_EXPORT extern
#define WD_CHAR8 char
#define WD_UINT32 unsigned int
#---------------------------------------
//prototype
WD_EXPORT WD_RetCode wd_CreateBoardHandle( WD_BOARD *pBoardHandle, const WD_CHAR8 *pUrl );
//interpreted prototype
//extern unsigned int wd_CreateBoardHandle( void* *pBoardHandle, const char *pUrl );
A third party provided provided example (written in C) uses the function as so
(I removed superfluous stuff) :
int main(int argc, char *argv [])
{
WD_RetCode rc;
Vhdl_Example_Opts cmdOpts = VHDL_EXAMPLE_DEFAULTS;
char urlBoard[VHDL_SHORT_STRING_LENGTH];
WD_BOARD BoardHandle;
sprintf(urlBoard, "/%s/%s/wildstar7/board%d", cmdOpts.hostVal, cmdOpts.boardDomain, cmdOpts.boardSlot);
rc = wd_CreateBoardHandle(&BoardHandle,urlBoard);
}
and lastly, my watered down swig interface file (I have been trying swig
typedef's and *OUTPUT with no success):
%module wdapi
%{
#include "wd_linux_pci.h"
#include "wd_types.h"
#include "wd_errors.h"
%}
%import "wd_linux_pci.h"
%import "wd_types.h"
%import "wd_errors.h"
%include <typemaps.i>
WD_EXPORT WD_RetCode wd_CreateBoardHandle( WD_BOARD *pBoardHandle, const WD_CHAR8 *pUrl );
WD_EXPORT WD_RetCode wd_OpenBoard( WD_BOARD BoardHandle );
What I would like to be able to do is to call that function in python as so:
rslt,boardHandle = wdapi.wd_CreateBoardHandle("/foo/bar/etc")
Please let me know if I can provide any other information and I greatly
appreciate your help/guidance towards a solution! I have spent days trying to
review other similar issues posted.
EDIT. I manipulated some typedefs from other posts with similar issues. I am
now able to call the functions and receive both a value in rslt and
boardHandle as an object; however, it appears the rslt value is gibberish.
Here is the new swig interface file (any thoughts as to the problem?):
%module wdapi
%{
#include "wd_linux_pci.h"
#include "wd_types.h"
#include "wd_errors.h"
%}
%import "wd_linux_pci.h"
%import "wd_types.h"
%import "wd_errors.h"
%include <python/typemaps.i>
%typemap(argout) WD_BOARD *pBoardHandle
{
PyObject *obj = PyCObject_FromVoidPtr( *$1, NULL );
$result = PyTuple_Pack(2, $result, obj);
}
%typemap(in,numinputs=0) WD_BOARD *pBoardHandle (WD_BOARD temp)
{
$1 = &temp;
}
%typemap(in) WD_BOARD {
$1 = PyCObject_AsVoidPtr($input);
}
WD_EXPORT WD_RetCode wd_CreateBoardHandle( WD_BOARD *pBoardHandle, const WD_CHAR8 *pUrl );
WD_EXPORT WD_RetCode wd_OpenBoard( WD_BOARD BoardHandle );
WD_EXPORT WD_RetCode wd_DeleteBoardHandle( WD_BOARD BoardHandle );
WD_EXPORT WD_RetCode wd_IsBoardPresent( const WD_CHAR8 *pUrl, WD_BOOL *OUTPUT );
Answer: I resolved my own question. The edited swig interface file, listed above in my
original post, turned out to correct my issue. It turns out that somewhere along
the way, I had mangled the input to my function call in Python, and the error code
returned was "undefined" from the API.
On another note, while investigating other options, I also found "ctypes",
which brought me to a solution first. Rather than dealing with wrapper code
and building a 2nd shared library (that calls another), ctypes allowed me to
access it directly and was much easier. I will still evaluate which I will
move forward with. The ctypes Python code is listed below for comparison (see
the C code example I listed in the original post):
from ctypes import cdll
from ctypes import CDLL
from ctypes import c_void_p
from ctypes import addressof
from ctypes import byref
import sys
#Update Library Path for shared API library
sys.path.append('/usr/local/lib');
#Load the API and make accessible to Python
cdll.LoadLibrary("libwdapi.so")
wdapi = CDLL("libwdapi.so")
#Create the url for the board
urlBoard='/<server>/<boardType>/<FGPAType>/<processingElement>'
#Lets create a void pointer for boardHandle object
pBoardHandle=c_void_p()
#now create & open the board
rtn = wdapi.wd_CreateBoardHandle(byref(pBoardHandle),urlBoard)
if (rtn) :
print "Error"
else :
print "Success"
|
Python Get/POST http request
Question: My knowledge of Python is very limited; however, I know my question is a
fairly simple one about how to send a GET/POST request. I'm trying to create a simple
program for the (to-be-released) LaMatric. It displays info coming from a GET
request on a dot-matrix-like screen. I would like to connect it with
EventGhost and then be able to send all kinds of info (weather, reminders.... and
so on) to the screen. On the website they provide you with this code to get
started, but I'm not sure how to convert that to Python.
curl -X POST \
-H "Accept: application/json" \
-H "X-Access-Token: <MY TOKEN>" \
-H "Cache-Control: no-cache" \
-d '{
"frames": [
{
"index": 0,
"text": "<TEXT GOES HERE>",
"icon": null
}
]
}' \
https://developer.lametric.com......(API)
Answer: It would look something like:
import requests
headers = {
'Accept': 'application/json',
'X-Access-Token': 'TOKEN',
'Cache-Control': 'no-cache'
}
payload = {
"frames": [
{
"index": 0,
"text": "<TEXT GOES HERE>",
"icon": "null"
}
]
}
requests.post('https://developer.lametric.com......', headers=headers, data=payload)
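One caveat worth noting: `requests` form-encodes a dict passed via `data=`, while the curl command above sends a raw JSON body. A minimal sketch of the JSON route (the token and URL are the placeholders from the question):

```python
import json

headers = {
    'Accept': 'application/json',
    'X-Access-Token': 'TOKEN',          # placeholder token
    'Cache-Control': 'no-cache',
}

# Use Python's None for JSON null, not the string "null".
payload = {
    "frames": [
        {"index": 0, "text": "<TEXT GOES HERE>", "icon": None}
    ]
}

# With requests, the `json=` keyword serializes the dict and sets
# Content-Type: application/json, matching the curl -d '{...}' body:
#   requests.post('https://developer.lametric.com......',
#                 headers=headers, json=payload)
# `data=payload` would instead form-encode the dict and mangle the
# nested "frames" list.

# What goes over the wire with json=payload:
body = json.dumps(payload)
print(body)
```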
|
Regex doesnt match the groups (Python)
Question: On my administration page I have a list of accounts with various values that I
want to capture, like id, name, type, etc. On Regex101 it captures all the
values perfectly with the "g" and "s" modifiers active. This is what I am trying
to do:
def extract_accounts(src):
list_accounts = []
try:
pattern = re.compile(r'''id=(?P<id>.*?)&serverzone=.\">(?P<name>[a-zA-Z].*?)<\/a>.*?75px;\">(?P<level>.*?)<\/td>.*?75px;.*?75px;\">(?P<type>.*?)<\/td>.*?Open!''', re.X)
print type(pattern)
match = pattern.match(src)
print match, "type=", type(match)
name = match.group("name")
print "name", name
ids = match.group("id")
level = match.group("level")
type = match.group("type")
#list_accounts.append(name, ids, level, type)
#print ("id=", ids, ", name=",name," level=", level, " type=", type)
except Exception as e:
print (e)
But somehow I get this:
<type '_sre.SRE_Pattern'>
None type= <type 'NoneType'>
'NoneType' object has no attribute 'group'
I don't have a clue what I'm doing wrong. Basically what I want is to put the
things that I grab from each line into a list = [(name1, id1, level1, type1),
(name2, id2, level2, type2), ...] and so on. Thanks in advance for any help.
Answer: You should be capturing groups by their group number. I have changed the
regular expression completely and implemented it like so:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import re
def main():
sample_data = '''
<tr style="background-color: #343222;">
<td style="width: 20px;"><img src="/images/Star.png" style="border: 0px;" /></td>
<td><a target="_top" href="adminzone.php?id=2478&serverid=1">Mike</a></td>
<td style="text-align: center;width: 75px;">74</td>
<td>•Evolu†ion•</td>
<td style="text-align: center;width: 100px;">1635</td>
<td style="text-align: center;width: 75px;">40,826</td>
<td style="text-align: center;width: 75px;">User</td>
<td style="width: 100px;"><a target="_top" href="href="adminzone.php"><strong>Open!</strong></a></td>
</tr>
<tr style="background-color: #3423323;">
<td style="width: 20px;"><img src="/images/Star.png" style="border: 0px;" /></td>
<td><a target="_top" href="adminzone.php?suid=24800565&serverid=1">John</a></td>
<td style="text-align: center;width: 75px;">70</td>
<td>•Evolu†ion•</td>
<td style="text-align: center;width: 100px;">9167</td>
<td style="text-align: center;width: 75px;">36,223</td>
<td style="text-align: center;width: 75px;">Admin</td>
<td style="width: 100px;"><a style="color: #00DD19;" target="_top" href="adminzone.php?id=248005&serverid=1"><strong>Open!</strong></a></td>
'''
matchObj = re.search('id=(.*)&serverid=.">(.*)<\\/a><\\/td>\\n.*?75px;\\">(.+)<\\/td>\\n.*\\n.*\\n.*75px;\\">(.+)<\\/td>\\n.*75px;\\">(.+)<\\/td>', sample_data, re.X)
if matchObj:
user_id = matchObj.group(1)
name = matchObj.group(2)
level = matchObj.group(3)
user_type = matchObj.group(4)
print user_id, name, level, user_type
if __name__ == '__main__':
main()
Output: `2478 Mike 74 40,826`
The above should give you a basic idea. Just in case you might be wondering,
`group(0)` is the entire match.
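To build the list of tuples the question asks for, `re.finditer` with named groups is a cleaner route. A minimal sketch on simplified sample rows (the HTML and the pattern here are illustrative, not the exact admin-page markup):

```python
import re

sample = '''
<td><a href="adminzone.php?id=2478&serverid=1">Mike</a></td><td class="lvl">74</td><td class="type">User</td>
<td><a href="adminzone.php?id=24800565&serverid=1">John</a></td><td class="lvl">70</td><td class="type">Admin</td>
'''

pattern = re.compile(
    r'id=(?P<id>\d+)&serverid=\d+">(?P<name>\w+)</a></td>'
    r'<td class="lvl">(?P<level>\d+)</td>'
    r'<td class="type">(?P<type>\w+)</td>'
)

# finditer scans the whole string, yielding one match per row --
# unlike match(), which is anchored to the very start of the string
# (the likely reason the original code got None back).
accounts = [(m.group('name'), m.group('id'), m.group('level'), m.group('type'))
            for m in pattern.finditer(sample)]
print(accounts)
```

Note that `pattern.match(src)` only succeeds if the pattern matches at position 0, which an HTML page almost never does; `search` or `finditer` is what you want here.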
|
dbus-send version in python
Question: I have a working dbus-send invocation:
# OBJECT INTERFACE .MEMBER CONTENT
dbus-send --system --dest=org.bluez /org/bluez/hci0 org.bluez.Adapter.SetMode string:discoverable
Now I am trying to do the same in python, but since pitful documentation and
despite me trying all thinkable permutations all I get are errors on the
**last** step.
import dbus
bus = dbus.SystemBus()
hci0 = bus.get_object('org.bluez', '/org/bluez/hci0')
# everything good so far
# v1
hci0_setmode = hci0.get_dbus_method('SetMode', 'org.bluez.Adapter')
hci0_setmode('discoverable')
# v2
iface = dbus.Interface(hci0, 'org.bluez.Adapter')
iface.SetMode('discoverable')
# v3
iface = dbus.Interface(hci0, 'org.bluez.Adapter')
hci0_setmode =iface.get_dbus_method('SetMode', 'org.bluez.Adapter')
hci0_setmode('discoverable')
Whatever I do, the error is:
dbus.exceptions.DBusException: org.freedesktop.DBus.Error.UnknownMethod: Method "SetMode" with signature "s" on interface "org.bluez.Adapter" doesn't exist
I have not found a way to list which methods with which signatures exist, and
besides, this error message seemingly contradicts the initial dbus-send
invocation, which proves that "org.bluez.Adapter.SetMode(s)" exists.
Answer: I found the solution by looking at the api:
dbus-send --system --dest=org.bluez --type=method_call --print-reply /org/bluez/hci0 org.freedesktop.DBus.Introspectable.Introspect
and here is the python code:
import dbus
bus = dbus.SystemBus()
hci0 = bus.get_object('org.bluez', '/org/bluez/hci0')
props = dbus.Interface(hci0, 'org.freedesktop.DBus.Properties')
props.Set('org.bluez.Adapter1', 'Discoverable', True)
I am still not sure why the initial dbus-send command even works. The only
reference to SetMode I can find elsewhere is here:
<http://svn.openmoko.org/developers/erin_yueh/bt/bt_adapter.py>.
|
Python socket security
Question: I plan to use socket (<https://docs.python.org/2/library/socket.html#example>)
to allow a simple piece of software I'm writing to be clustered across multiple
computers, and I'm wondering what security risks there are with using socket.
I know that open ports CAN be vulnerable depending on the software behind
them, and I know how to sanitize input data and such to prevent buffer-overflow-type
attacks and code injection, but are there any major security
vulnerabilities in using socket? Is there a way to make it secure for prolonged
use of the port?
I am just starting to delve into programming that involves networking, so if I
have the wrong idea entirely please let me know.
Answer: Since the Python language takes care of all of the memory management and
variable typing for us, buffer overflow vulnerabilities are off the table
(barring vulnerabilities in the Python core itself; for example,
<https://hackerone.com/reports/55017>).
Another major thing is the Secure Sockets Layer. You should use SSL on your socket
mechanism. Depending on the data that is going to be transmitted over the
network (sockets), SSL may be the most important security measure of your
application.
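As a concrete sketch of the SSL point, here is how a client socket can be wrapped with the standard-library `ssl` module (host and port are placeholders; a server would instead build its context with a certificate and key):

```python
import socket
import ssl

# Client-side context with sane defaults: certificate verification
# and hostname checking are both enabled.
context = ssl.create_default_context()

def connect_secure(host, port):
    """Open a TCP connection and wrap it in TLS; the caller closes it."""
    raw = socket.create_connection((host, port))
    return context.wrap_socket(raw, server_hostname=host)

# Usage (needs a network, so left commented out):
# conn = connect_secure('example.com', 443)
# conn.sendall(b'GET / HTTP/1.0\r\nHost: example.com\r\n\r\n')
# print(conn.recv(4096))
```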
|
What is the Matlab install directory on 64bit or how to get it in Python?
Question: I have Matlab2013b on my system at :
> C:\Program Files\MATLAB\R2013b\bin
I am writing Python script that searches for Matlab.exe first at this location
and then at location for 64 bit. The Python script will be run on a server
which might have a 64 bit instead of 32 bit. So, I need to search for both
locations.
Since I don't have 64 bit version on my machine, I don't know the location.
I am speculating it will be:
> C:\Program Files (x86)\MATLAB\R2013b\bin
But can anybody confirm that?
Thanks
sedy
Answer: You can read out the environment variable **PATH** (I'd say it's quite sure
_MATLAB_ it's included) and use some string operations to get the Matlab path:
import os
path = os.environ.get('path')
pathlist = path.split(';')
matlabpath = [s for s in pathlist if all(x in s for x in ['MATLAB','R','bin'])]
print(matlabpath)
this way you don't need to _speculate_, which I would consider a generally bad
programming practice.
* * *
In my case there is also a toolbox, **polyspace**, on the same path; you need
to exclude that:
matlabpath = [s for s in pathlist if all(x in s for x in ['MATLAB','R','bin']) and not 'polyspace' in s]
There may be other toolboxes writing themselves into **path** \- it could be
cumbersome to exclude all of them, so the easiest would be to just return the
shortest of all Matlab-related paths:
import os
path = os.environ.get('path')
pathlist = path.split(';')
matlabpath = min([s for s in pathlist if 'MATLAB' in s], key=len)
print(matlabpath)
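If you would rather check the two fixed install locations directly, here is a small sketch (the `R2013b` folder comes from the question; note that on 64-bit Windows, `Program Files (x86)` holds 32-bit installs, so a 64-bit MATLAB lives under plain `Program Files`):

```python
import os

# Candidate install roots; R2013b is the version from the question.
candidates = [
    r'C:\Program Files\MATLAB\R2013b\bin',
    r'C:\Program Files (x86)\MATLAB\R2013b\bin',
]

def find_matlab(paths):
    """Return the first directory containing matlab.exe, or None."""
    for p in paths:
        if os.path.isfile(os.path.join(p, 'matlab.exe')):
            return p
    return None

print(find_matlab(candidates))
```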
|
label.configure works sometimes why?
Question: Part of my code is as follows:
def get_songs():
label6.configure(text='Wait')
os.system('/home/norman/my-startups/grabsongs')
label6.configure(text='Done')
The label is not updated at the first `.configure()` but is at the second one.
Except if I cause a deliberate error immediately after the first one at which
point it is updated and then the program terminates. The system call takes
about 2 minutes to complete so it isn't as if there isn't time to display the
first one.
I am using Python 2.7.6
Does anyone know why please?
Answer: I'm going to guess you're using `Tkinter`. If so, as @albert just suggested,
you'll want to call `label.update_idletasks()` or `label.update()` to tell
Tkinter to refresh the display.
As a very crude example to reproduce your problem, let's make a program that
will:
1. Wait 1 second
2. Do something (sleep for 2 seconds) and update the text to "wait"
3. Display "done" afterwards
For example:
import Tkinter as tk
import time
root = tk.Tk()
label = tk.Label(root, text='Not waiting yet')
label.pack()
def do_stuff():
label.configure(text='Wait')
time.sleep(2)
label.configure(text='Done')
label.after(1000, do_stuff)
tk.mainloop()
Notice that "Wait" will never be displayed.
To fix that, let's call `update_idletasks()` after initially setting the text:
import Tkinter as tk
import time
root = tk.Tk()
label = tk.Label(root, text='Not waiting yet')
label.pack()
def do_stuff():
label.configure(text='Wait')
label.update_idletasks()
time.sleep(2)
label.configure(text='Done')
label.after(1000, do_stuff)
tk.mainloop()
* * *
As far as why this happens, it actually is because Tkinter doesn't have time
to update the label.
Calling `configure` doesn't automatically force a refresh of the display, it
just queues one the next time things are idle. Because you immediately call
something that will halt execution of the mainloop (calling an executable and
forcing python to halt until it finishes), Tkinter never gets a chance to
process the changes to the label.
Notice that while the gui displays "Wait" (while your process/sleep is
running) it won't respond to resizing, etc. Python has halted execution until
the other process finishes running.
To get around this, consider using `subprocess.Popen` (or something similar)
instead of `os.system`. You'll then need to periodically poll the returned pipe
to see if the subprocess has finished.
As an example (I'm also moving this into a class to keep the scoping from
getting excessively confusing):
import Tkinter as tk
import subprocess
class Application(object):
def __init__(self, parent):
self.parent = parent
self.label = tk.Label(parent, text='Not waiting yet')
self.label.pack()
self.parent.after(1000, self.do_stuff)
def do_stuff(self):
self.label.configure(text='Wait')
self._pipe = subprocess.Popen(['/bin/sleep', '2'])
self.poll()
def poll(self):
if self._pipe.poll() is None:
self.label.after(100, self.poll)
else:
self.label.configure(text='Done')
root = tk.Tk()
app = Application(root)
tk.mainloop()
The key difference here is that we can resize/move/interact with the window
while we're waiting for the external process to finish. Also note that we
never needed to call `update_idletasks`/`update`, as Tkinter now does have
idle time to update the display.
|
Python-Returning to a specific point in the code
Question: So I'm writing a little bit of code as a fun project that will randomly
generate a subject for me to study each day. But once a subject has appeared
once I don't want it to appear for the rest of the week. To do this I'm using
a list. Basically, when it picks the subject it adds it to the list and the
next time it checks to see if it's already on the list, and if so, I want it
to return to the random number generator. How do I do this? Here's my code.
import random
import time
import datetime
#Subject veto list
x=[]
# Messages to instil, um, enthusiasm.
if datetime.date.today().strftime("%A") == "Monday":
response = input("Here we go again. Are you ready? ")
elif datetime.date.today().strftime("%A") == "Tuesday" or "Wednesday" or "Thursday":
response = input("Are you ready? ")
elif datetime.date.today().strftime("%A") == "Friday":
response = input("Half day! Are you ready? ")
elif datetime.date.today().strftime("%A") == "Saturday" or "Sunday":
response = input("It's the weekend! Are you ready? ")
# Random picking of subject to study. Also adds subject to veto list for rest of week.
if response == "Yes":
subject = random.randint(1, 7)
print("Today you are studying...")
time.sleep(3)
if subject == (1):
"Englsh" in x
print("English")
x.extend([English])
elif subject == (2):
"Irish" in x
print("Irish")
x.extend([Irish])
elif subject == (3):
"Maths" in x
print("Maths")
x.extend([Maths])
elif subject == (4):
"French" in x
print("French")
x.extend([French])
elif subject == (5):
"Physics" in x
print("Physics")
x.extend([Physics])
elif subject == (6):
"Chemistry" in x
print("Chemistry")
x.extend([Chemistry])
elif subject == (7):
"History" in x
print("History")
x.extend([History])
Answer: This function will let you choose a random thing from a list based on the
weekday, without repeating* and without having to store anything between runs.
The only potential issue would be if Python's PRNG was changed mid-week :P.
import datetime
import itertools
import random
def random_choice_per_date(iterable, date=None):
choices = list(itertools.islice(iterable, 7))
date = date or datetime.date.today()
year, week, weekday = date.isocalendar()
rand = random.Random((year, week)) # Seed PRNG with (year, week)
rand.shuffle(choices)
index = weekday % len(choices)
return choices[index]
This is generalizable beyond dates, but I didn't want to complicate the code.
* _It **will** repeat if the iterable has fewer than seven items. It also only uses the first seven items at most._
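A quick usage sketch with the seven subjects from the question. (One adaptation: seeding `random.Random` with a tuple relies on `hash()`, which Python 3.11+ no longer accepts as a seed, so this sketch seeds with a string instead.)

```python
import datetime
import itertools
import random

def random_choice_per_date(iterable, date=None):
    choices = list(itertools.islice(iterable, 7))
    date = date or datetime.date.today()
    year, week, weekday = date.isocalendar()
    # Seed with a string: tuple seeds stopped working in Python 3.11.
    rand = random.Random('%d-%d' % (year, week))
    rand.shuffle(choices)
    return choices[weekday % len(choices)]

subjects = ['English', 'Irish', 'Maths', 'French',
            'Physics', 'Chemistry', 'History']

# Deterministic for a given date: rerunning can't change today's pick.
d = datetime.date(2015, 6, 3)
assert random_choice_per_date(subjects, d) == random_choice_per_date(subjects, d)

# Over one ISO week (Mon 2015-06-01 .. Sun 2015-06-07) every subject
# comes up exactly once, since the shuffle is fixed for the whole week.
week = [random_choice_per_date(subjects,
                               datetime.date(2015, 6, 1) + datetime.timedelta(days=i))
        for i in range(7)]
assert sorted(week) == sorted(subjects)
```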
|
Reverse each iterable in a list using functional Python
Question: I have a list of strings and lists. I want to reverse each iterable in my list
using `map`. My current setup is as follows:
forwards_list = [
'abc',
'def',
[1, 2, 3, 4, 5],
]
def reverse(item):
object_type = type(item)
return object_type(reversed(item))
backwards_list = list(map(reverse, forwards_list))
print(backwards_list)
**Output:**
['<reversed object at 0x000000000A29A5C0>',
'<reversed object at 0x000000000A29A5C0>',
[5, 4, 3, 2, 1]]
**Desired Output:**
['cba', 'fed', [5, 4, 3, 2, 1]]
The problem seems to be that `list(reversed([]))` returns `[]`, whereas
`str(reversed(''))` returns `'<reversed object at 0x000000000A335898>' `. Can
anybody help?
Answer: If you're content to work only with sequences, you could use slicing rather
than `reversed`:
def reverse(item):
return item[::-1]
In general, sequences will return an object of the same type when slicing:
>>> 'foo'[::-1]
'oof'
>>> ['foo', 'bar', 'baz'][::-1]
['baz', 'bar', 'foo']
>>> ('foo', 'bar', 'baz')[::-1]
('baz', 'bar', 'foo')
There are a few types however where this certainly _won't_ work (e.g. `dict`).
However, `reversed` requires sequences too (or objects that have a
[`__reversed__`](https://docs.python.org/2/reference/datamodel.html#object.__reversed__)
method).
>>> reversed(dict.fromkeys([1,2,3], None))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: argument to reversed() must be a sequence
You could handle this last case by type-checking and combining with your
original solution:
import collections # Need `collections.abc` on python3.4+ IIRC
def reverse(item):
if isinstance(item, collections.Sequence):
return item[::-1]
obj_type = type(item)
return obj_type(reversed(item))
Of course, even that won't work all (most?) the time since there is no rule
(or even convention) to say that the constructor of an object must accept a
reversed iterator generated from another instance of that type :0).
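Putting the slicing version against the original input from the question gives the desired output:

```python
def reverse(item):
    """Reverse any sequence via slicing, preserving its type."""
    return item[::-1]

forwards_list = [
    'abc',
    'def',
    [1, 2, 3, 4, 5],
]

backwards_list = list(map(reverse, forwards_list))
print(backwards_list)  # ['cba', 'fed', [5, 4, 3, 2, 1]]
```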
|