Dataset columns:
title: string (length 10 to 172)
question_id: int64 (469 to 40.1M)
question_body: string (length 22 to 48.2k)
question_score: int64 (-44 to 5.52k)
question_date: string (length 20)
answer_id: int64 (497 to 40.1M)
answer_body: string (length 18 to 33.9k)
answer_score: int64 (-38 to 8.38k)
answer_date: string (length 20)
tags: sequence
Understanding Inheritance in python
40,080,783
<p>I am learning OOP in Python.</p> <p>I am struggling to understand why this is not working as I intended:</p> <pre><code>class Patent(object): """An object to hold patent information in Specific format""" def __init__(self, CC, PN, KC=""): self.cc = CC self.pn = PN self.kc = KC class USPatent(Patent): """"Class for holding information of uspto patents in Specific format""" def __init__(self, CC, PN, KC=""): Patent.__init__(self, CC, PN, KC="") pt1 = Patent("US", "20160243185", "A1") pt2 = USPatent("US", "20160243185", "A1") pt1.kc Out[168]: 'A1' pt2.kc Out[169]: '' </code></pre> <p>What obvious mistake am I making so that I am not able to get <code>kc</code> in the <code>USPatent</code> instance?</p>
2
2016-10-17T07:34:11Z
40,080,883
<p>You are passing in an empty string:</p> <pre><code>Patent.__init__(self, CC, PN, KC="") </code></pre> <p>That calls the <code>Patent.__init__()</code> method setting <code>KC</code> to <code>""</code>, always.</p> <p>Pass in whatever value of <code>KC</code> you received instead:</p> <pre><code>class USPatent(Patent): """"Class for holding information of uspto patents in Specific format""" def __init__(self, CC, PN, KC=""): Patent.__init__(self, CC, PN, KC=KC) </code></pre> <p>Within <code>USPatent.__init__()</code>, <code>KC</code> is just another variable, just like <code>self</code>, <code>CC</code> and <code>PN</code>. It is either set to <code>""</code> already, or to whatever was passed in when you call <code>USPatent(...)</code> with arguments. You simply want to call the <code>Patent.__init__()</code> method passing on all the values you have.</p> <p>You can drop the keyword argument syntax from the call too:</p> <pre><code>Patent.__init__(self, CC, PN, KC) </code></pre>
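<p>As an aside, a minimal sketch of the same fix using <code>super()</code> (assuming Python 3), which saves repeating the parent class name:</p> <pre><code>class USPatent(Patent):
    """Class for holding information of USPTO patents in a specific format"""
    def __init__(self, CC, PN, KC=""):
        super().__init__(CC, PN, KC)   # forwards whatever KC was actually passed in
</code></pre>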
4
2016-10-17T07:39:52Z
[ "python", "inheritance" ]
Understanding Inheritance in python
40,080,783
<p>I am learning OOP in Python.</p> <p>I am struggling to understand why this is not working as I intended:</p> <pre><code>class Patent(object): """An object to hold patent information in Specific format""" def __init__(self, CC, PN, KC=""): self.cc = CC self.pn = PN self.kc = KC class USPatent(Patent): """"Class for holding information of uspto patents in Specific format""" def __init__(self, CC, PN, KC=""): Patent.__init__(self, CC, PN, KC="") pt1 = Patent("US", "20160243185", "A1") pt2 = USPatent("US", "20160243185", "A1") pt1.kc Out[168]: 'A1' pt2.kc Out[169]: '' </code></pre> <p>What obvious mistake am I making so that I am not able to get <code>kc</code> in the <code>USPatent</code> instance?</p>
2
2016-10-17T07:34:11Z
40,080,893
<p>The line</p> <pre><code>Patent.__init__(self, CC, PN, KC="") </code></pre> <p>should be</p> <pre><code>Patent.__init__(self, CC, PN, KC) </code></pre> <p>The former sets the argument named "KC" to the value <code>""</code> (the empty string) using the keyword-argument syntax. What you want is to pass the value of the variable <code>KC</code> instead.</p>
1
2016-10-17T07:40:31Z
[ "python", "inheritance" ]
Establishing a socket connection between computers?
40,080,811
<p>I was learning about networking and I have some trouble understanding what went wrong.</p> <p>I created a Client and a Server script:</p> <p>Server:</p> <pre><code>import socket s = socket.socket() host = socket.gethostname() port = 12345 s.bind((host,port)) s.listen(5) while True: c, addr = s.accept() print ("Got connection from: " ,addr) c.send("Thanks".encode('utf-8')) # c.sendto(("Thank you for connection").encode('utf-8'), addr) c.close() </code></pre> <p>and Client:</p> <pre><code>import socket s=socket.socket() host=socket.gethostname() port = 12345 s.connect((host,port)) c=s.recv(1024) print (c) s.close </code></pre> <p>When I am trying to run from my computer I have no problem (both scripts) But when I am running the Client from another Computer, the following error pops up for the Client: <code>ConnectionRefuseError: WinError10061 No connection could be made because the target machine actively refused it</code>.</p> <p>Got any idea what could fix this?</p>
0
2016-10-17T07:35:53Z
40,080,962
<p>The problem was that I wasn't referring to the server's IP when running from another computer. I fixed it by setting the server's IP in the client script, like this: <code>host = "10.x.x.x"</code></p> <p>Sorry for creating a useless question!</p>
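<p>For reference, a minimal sketch of the corrected client; the address below is a placeholder for the server machine's actual LAN IP:</p> <pre><code>import socket

SERVER_IP = "10.0.0.5"   # placeholder: replace with the server's real address
PORT = 12345

s = socket.socket()
s.connect((SERVER_IP, PORT))
data = s.recv(1024)
print(data.decode('utf-8'))
s.close()
</code></pre>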
0
2016-10-17T07:45:04Z
[ "python", "sockets", "networking", "communication" ]
Separate system of coordinates for x and y
40,080,850
<p>I am using matplotlib for plotting in my project. I have a time series on my chart and I would like to add a text annotation. However I would like it to be floating like this: x dimension of the text would be bound to data (e.g. certain date on x-axis like 2015-05-04) and y dimension bound to Axes coordinates system (e.g. top of the Axes object). Could you please help me accomplish something like this?</p>
0
2016-10-17T07:37:58Z
40,080,939
<p>It seems like I found the solution: one should use a blended transformation: <a href="http://matplotlib.org/users/transforms_tutorial.html#blended-transformations" rel="nofollow">http://matplotlib.org/users/transforms_tutorial.html#blended-transformations</a></p>
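<p>For reference, a minimal sketch of such a blended transformation (x bound to data coordinates, y bound to axes coordinates); the plotted data and annotation text are placeholders:</p> <pre><code>import matplotlib.pyplot as plt
import matplotlib.transforms as mtransforms

fig, ax = plt.subplots()
ax.plot(range(10))

# x follows the data, y is in axes coordinates (0 = bottom, 1 = top)
trans = mtransforms.blended_transform_factory(ax.transData, ax.transAxes)
ax.text(5, 0.95, "annotation", transform=trans, ha="center", va="top")

plt.show()
</code></pre>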
0
2016-10-17T07:43:41Z
[ "python", "matplotlib", "text", "transformation", "axes" ]
Logging with hug and waitress
40,080,937
<p>I want to add logging to my Python <a href="http://www.hug.rest" rel="nofollow">hug</a> REST app. I couldn't find any way to do it when serving the app through the <code>hug</code> command (via <code>hug -f app.py</code>), therefore I am trying to combine hug with <a href="https://github.com/Pylons/waitress/" rel="nofollow">waitress</a>.</p> <p>My minimal app structure in the file <code>app.py</code> looks like:</p> <pre><code>import logging logger = logging.getLogger(__name__) import hug . . . @hug.get() def func(detail): logger.debug("debug func") . . . </code></pre> <p>And I serve this with a waitress script <code>run.py</code>:</p> <pre><code>import logging import waitress import app logger = logging.getLogger('waitress') logger.setLevel(logging.DEBUG) logger.debug("logger set to DEBUG") waitress.serve(app.__hug_wsgi__) </code></pre> <p>When I execute <code>python run.py</code> in a console, the app spins up nicely and the results of <code>func</code> are served back; however, I cannot see the debug messages from inside <code>func</code> ("debug func") <em>or</em> from <code>run.py</code> ("logger set to DEBUG") in the console.</p> <p>What is going wrong and how can I fix it? (I'm happy to use another (Windows-capable) WSGI server if that's easier.)</p>
1
2016-10-17T07:43:40Z
40,081,533
<p>You have to configure the <code>logging</code> module. Take a look at the <a href="https://docs.python.org/2/library/logging.config.html" rel="nofollow">documentation for <code>logging.config</code></a> (in particular <code>dictConfig</code> and <code>fileConfig</code>). As a start, to test whether it works, you can simply call</p> <pre><code>logging.basicConfig() </code></pre> <p>in <code>app.py</code> before you do any logging. This will make the output of all channels go to <code>sys.stderr</code>.</p> <p>Don't forget to set the level to <code>DEBUG</code> too, e.g. <code>logging.basicConfig(level=logging.DEBUG)</code>, if you want the debug messages there to be visible.</p>
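<p>A minimal sketch of <code>app.py</code> with the configuration applied before any logging calls (the route is the one from the question; the format string is just an example):</p> <pre><code>import logging

logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s %(name)s %(levelname)s %(message)s",
)
logger = logging.getLogger(__name__)

import hug

@hug.get()
def func(detail):
    logger.debug("debug func")
</code></pre>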
1
2016-10-17T08:19:04Z
[ "python", "logging", "wsgi", "waitress", "hug" ]
How could I clean up this pattern drawing code?
40,081,081
<p>so I am trying to draw a simple pattern, kind of two parted. The way it is supposed to look is:</p> <pre><code>**........* *.*.......* *..*......* *...*.....* .........* ........* .......* ......* </code></pre> <p>As of now, I have the bottom part finished but it isn't very clean at all, it is very bulky and there has to be a way to smooth it out, and make it quicker and more concise, for the top part, I still need to figure out how to add in the *'s moving across, I had a few ideas but they all failed haha.</p> <p>So what I have as of now is:</p> <pre><code> x = 8 while x &gt; 4: for c in range(0,1): print('*', end='') for r in range(0,10): print('.', end='') print('*') x = x-1 </code></pre> <p>This is the top part as of now, it works to get the * on either side of the periods, my bottom part however is really messy and I just think there has to be a way to make it quicker:</p> <pre><code>while x == 4: for c in range(0,9): print('.', end='') print('*') x = x-1 while x == 3: for c in range(0,8): print('.', end='') print('*') x = x-1 while x == 2: </code></pre> <p>it keeps going like that until while x == 1: and that is that, but is there a way to compress that code into something quicker, and how exactly do I go about adding in the *s going side to side on the top 4 rows? I'm not asking for answers per say, other than the bottom side - just a point in the right direction, I prefer to learn rather than just be given the answer.</p>
0
2016-10-17T07:52:38Z
40,081,472
<pre><code>'.'*6 </code></pre> <p>means <code>......</code> (dot six times) so instead of </p> <pre><code>for c in range(0,9): print('.', end='') </code></pre> <p>just do</p> <pre><code>print('.'*9,end='') </code></pre>
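<p>For completeness, a compact sketch that reproduces the whole pattern shown in the question (top: a <code>*</code> walking right between the two border stars; bottom: a shrinking line of dots), assuming the row widths given there:</p> <pre><code># top four rows: '*', i dots, '*', (8 - i) dots, '*'
for i in range(4):
    print('*' + '.' * i + '*' + '.' * (8 - i) + '*')

# bottom four rows: n dots followed by a single '*'
for n in range(9, 5, -1):
    print('.' * n + '*')
</code></pre>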
0
2016-10-17T08:14:52Z
[ "python", "loops", "while-loop", "range" ]
Pandas: convert unicode elem in column to list
40,081,109
<p>I have dataframe</p> <pre><code>category dictionary Classified [u'\u043e', u'\u0441', u'\u043a', u'\u043f\u043e', u'\u0443', u'avito', u'\u043e\u0431', u'\u043d\u0438', u'\u043e\u0431\u044a\u044f\u0432\u043b\u0435\u043d\u0438\u044f', u'%8f-', u'\u0434\u043e', u'\u0435\u0449\u0435', u'\u043f\u0440\u0438', u'000', u'\u0436\u0435', u'\u043a\u0430\u043a', u'\u0441\u043e', u'\u0438\u043b\u0438', u'\u0442\u043e\u0432\u0430\u0440', u'\u0442\u0430\u043a', u'\u043e\u0431\u044a\u044f\u0432\u043b\u0435\u043d\u0438\u0435', u'\u043e\u0431\u044a\u044f\u0432\u043b\u0435\u043d\u0438\u0439', u'\u0443\u0441\u043b\u0443\u0433', u'\u0441\u0430\u0439\u0442', u'[\u043a\u0430\u043a', u'\u043b\u0438', u'\u043e\u0431\u044a\u0435\u043a\u0442', u'avito.', u'###', u'[\u043e', u'\u0444\u043e\u0442\u043e', u'\u0442\u043e\u043b\u044c\u043a\u043e', u'-avito-', u'\u0434\u043e\u043b\u0436\u043d\u043e', u'\u043d\u0438\u043c'] Search [u'\u0441', u'\u0443', u'\u043a', u'\u043f\u043e', u'\u043e', u'\u043e\u0431', u'\u0432\u0430\u043c', u'\u0432\u0430\u0448', u'\u0437\u0430\u043f\u0440\u043e\u0441\u044b', u'\u044f\u043d\u0434\u0435\u043a\u0441', u'\u0430\u0432\u0442\u043e\u043c\u0430\u0442\u0438\u0447\u0435\u0441\u043a\u0438\u0435', u'\u0440\u0430\u0437', u'\u0432\u043e\u0437\u043c\u043e\u0436\u043d\u043e,', u'\u0441\u0438\u043c\u0432\u043e\u043b\u044b', u'\u044d\u0442\u043e\u043c', u'\u0438\u043b\u0438', u'\u0442\u0430\u043a', u'\u0435\u0441\u043b\u0438', u'\u0431\u0440\u0430\u0443\u0437\u0435\u0440\u0435', u'\u0432\u0430\u0448\u0435\u0433\u043e', u'\u0432\u0430\u0448\u0435\u043c', u'\u0432\u0438\u0440\u0443\u0441\u043d\u043e\u0439', u'\u0432\u043e\u0441\u043f\u043e\u043b\u044c\u0437\u0443\u0439\u0442\u0435\u0441\u044c', u'\u043a\u043e\u043c\u043f\u044c\u044e\u0442\u0435\u0440', u'\u043c\u043e\u0436\u0435\u0442', u'\u043d\u0430\u043f\u0440\u0438\u043c\u0435\u0440,', u'\u043d\u0430\u0448\u0435\u0439', u'\u043d\u0435\u043e\u0431\u0445\u043e\u0434\u0438\u043c\u043e', u'\u043f\u043e\u0436\u0430\u043b\u0443\u0439\u0441\u0442\u0430,', u'\u043f\u043e\u0438\u0441\u043a\u0443.', u'\u0440\u0435\u043a\u043e\u043c\u0435\u043d\u0434\u0443\u0435\u043c', u'\u0441\u043b\u0443\u0447\u0430\u0435', u'\u0447\u0442\u043e\u0431\u044b', u'an', u'.ru'] Агрегатор [u'\u0441', u'\u0443', u'\u043a', u'\u043f\u043e', u'\u043e', u'\u043e\u0431', u'\u0432\u0430\u043c', u'\u0432\u0430\u0448', u'\u0437\u0430\u043f\u0440\u043e\u0441\u044b', u'\u0430\u0432\u0442\u043e\u043c\u0430\u0442\u0438\u0447\u0435\u0441\u043a\u0438\u0435', u'\u044f\u043d\u0434\u0435\u043a\u0441', u'\u0440\u0430\u0437', u'\u0432\u043e\u0437\u043c\u043e\u0436\u043d\u043e,', u'\u0441\u0438\u043c\u0432\u043e\u043b\u044b', u'\u044d\u0442\u043e\u043c', u'\u0442\u0430\u043a', u'![](//yastatic.net/lego/_/la6qi18z8lwgnzdsar1qy1gwcwo.gif)', u'\u0438\u043b\u0438', u'\u0431\u0440\u0430\u0443\u0437\u0435\u0440\u0435', u'\u0432\u0430\u0448\u0435\u0433\u043e', u'\u0432\u0430\u0448\u0435\u043c', u'\u0432\u0438\u0440\u0443\u0441\u043d\u043e\u0439', u'\u0432\u043e\u0441\u043f\u043e\u043b\u044c\u0437\u0443\u0439\u0442\u0435\u0441\u044c', u'\u0435\u0441\u043b\u0438', u'\u043a\u043e\u043c\u043f\u044c\u044e\u0442\u0435\u0440', u'\u043c\u043e\u0436\u0435\u0442', u'\u043d\u0430\u043f\u0440\u0438\u043c\u0435\u0440,', u'\u043d\u0430\u0448\u0435\u0439', u'\u043d\u0435\u043e\u0431\u0445\u043e\u0434\u0438\u043c\u043e', u'\u043f\u043e\u0436\u0430\u043b\u0443\u0439\u0441\u0442\u0430,', u'\u043f\u043e\u0438\u0441\u043a\u0443.', u'\u0440\u0435\u043a\u043e\u043c\u0435\u043d\u0434\u0443\u0435\u043c', 
u'\u0441\u043b\u0443\u0447\u0430\u0435', u'\u0447\u0442\u043e\u0431\u044b', u'\u043d\u0438'] Аксессуары [u'\u0441', u'\u043a', u'g3', u'lg', u'mypads.ru', u'\u043e', u'[\u0447\u0435\u0445\u043b\u044b', u'\u0442\u0435\u043b\u0435\u0444\u043e\u043d', u'dual', u'lte', u'\u0442\u0435\u043b\u0435\u0444\u043e\u043d\u043e\u0432', u'mac-set', u'\u043f\u043e', u'\u0443', u'\u043f\u043b\u0430\u043d\u0448\u0435\u0442\u043e\u0432', u'-dual-', u'one', u'\u0441\u0430\u043c\u043e\u0432\u044b\u0432\u043e\u0437', u'/g3', u'd855/d856/d858', u'|', u'mah', u'[\u0434\u043b\u044f', u'\u0440.', u'sony', u'\u043c\u043e\u0441\u043a\u0432\u0435', u'\u0442\u0435\u043b\u0435\u0444\u043e\u043d\u0430', u'"\u0441\u0430\u043c\u043e\u0432\u044b\u0432\u043e\u0437', u'"\u043f\u0443\u043d\u043a\u0442', u'dopolnitelnoj-batarei-dlya-telefo-lg', u'\u0441\u0430\u043c\u043e\u0432\u044b\u0432\u043e\u0437\u0430', u'asus', u'circle', u'quick', u'\u0431\u0435\u0441\u043f\u0440\u043e\u0432\u043e\u0434\u043d\u0430\u044f'] Информационный ресурс [u'\u043e', u'\u0441', u'\u043a', u'apple', u'\u0443', u'\u043f\u043e', u'apple.com', u'|', u'app', u'store', u'iphone', u'](', u'mail.ru', u'\u043c', u'ipad', u'\u0432\u043e', u'\u0441\u043e', u'\u0433', u'image', u'id', u'\u0434\u043e', u'one', u'\u0442\u0435', u'\u043e\u0431', u'itunes', u'phone', u'le', u'ip', u'apple.', u'\u043f\u0440\u0438', u'\u0436\u0435', u'se', u'os', u'###', u'![](http://store.storeimages.cdn-apple.com/4662/as-'] Магазины производителей [u'\u043e', u'\u0441', u'\u043a', u'](/', u'\u0443', u'lenovo', u'\u043f\u043e', u'\u0441\u043e', u'\u0437\u0430\u043a\u0430\u0437', u'\u043f\u0440\u0438', u'lenovo](/', u'[\u0430\u043a\u0441\u0435\u0441\u0441\u0443\u0430\u0440\u044b', u'\u0432\u0441\u0435', u'\u0438\u043b\u0438', u'[](javascript:void\\(0\\)', u'\u043f\u043a', u'![](/local/templates/lenovo/images/close-buy-dialog.png)', u'\u0434\u043e\u0441\u0442\u0430\u0432\u043a\u0430', u'###', u'\u0438\u043d\u0442\u0435\u0440\u043d\u0435\u0442-\u043c\u0430\u0433\u0430\u0437\u0438\u043d', u'[yoga', u'\u0442\u043e\u0432\u0430\u0440', u'\u0440\u0443\u0431.', u'\u0440\u0435\u0433\u0438\u0441\u0442\u0440\u0430\u0446\u0438\u044f', u'\u043f\u0440\u043e\u0434\u0443\u043a\u0442\u044b', u'\u0444\u043e\u0440\u0443\u043c\u0430', u'\u043f\u043e\u043a\u0443\u043f\u043a\u0438', u'\u043f\u043e\u0434\u0434\u0435\u0440\u0436\u043a\u0430', u'\u043e\u0444\u043e\u0440\u043c\u0438\u0442\u044c', u'\u0433\u0430\u0440\u0430\u043d\u0442\u0438\u044f', u'lenovo.', u'\u0440\u0430\u043c\u043a\u0430\u0445', u'\u043f\u043e\u0434\u0434\u0435\u0440\u0436\u043a\u0438', u'\u0432\u043e\u0439\u0442\u0438', u'\u043c\u0435\u043d\u044f'] Монобрендовые страницы с информацией [u'\u043e', u'\u0441', u'\u043a', u'apple', u'\u0443', u'iphone', u'appleinsider.ru', u'mac', u'os', u'\u043f\u043e', u'ios', u'\u0436\u0435', u'macos', u'\u043f\u0440\u0438', u'\u0436', u'[\u043e', u'\u043d\u0438', u'\u043b\u0438', u'macdigger', u'macosworld.ru', u'\u0434\u043e', u'](http://macosworld.ru', u'\u0442\u0430', u'macdigger.', u'macdigger.ru', u'](http://macosworld.ru/', u'](http://www.macdigger.ru', u'login.php?redirect_to=http%3a%2f%2fappleinsider.ru%2fios%2fbystro-', u'razryazhaetsya-iphone-vy-mozhete-emu-pomoch.html)', u'\u043d\u043e\u044f\u0431\u0440\u044c', u'\u2212', u'2015\u0433](http://appleinsider.ru/ios/bystro-razryazhaetsya-iphone-vy-', u'[\u043e\u0442\u0432\u0435\u0442\u0438\u0442\u044c](http://appleinsider.ru/wp-', u'\u043f\u0440\u0438\u043b\u043e\u0436\u0435\u043d\u0438\u044f', u'\u0432\u043e'] Онлайн-магазин [u'\u043e', 
u'\u0441', u'\u043a', u'\u0443', u'\u043f\u043e', u'\u0433', u'se', u'\u0434\u043e', u'\u0447', u'](', u'\u0441\u043e', u'\u043d\u0438', u'\u043e\u0431', u'\u0432\u043e', u'\u0442\u0435', u'\u0441\u043c\u0430\u0440\u0442\u0444\u043e\u043d', u'smart', u'ip', u'\u043f\u0440\u0438', u'\u043b\u0438', u'\u043e\u0441', u'pro', u'apple', u'phone', u'|', u'aliexpress', u'ebay', u'\u0430\u043a\u0441\u0435\u0441\u0441\u0443\u0430\u0440\u044b', u'dns', u'\u043b', u'\u0436\u0435', u'\u0433\u0431', u'3d', u'\u043a\u0430\u0440\u0442', u'](//static.mvideo.ru/assets/img/stub.gif)'] Официальные и монобрендовые магазины [u'\u043e', u'\u0441', u'\u043a', u'microsoft', u'\u0443', u'lumia', u'pro', u'sim', u'microsoftstore.ru', u'\u0440.', u'\u043d\u0438', u'\u043f\u043e', u'dual', u'\u0432\u043e', u'\u043e\u0441', u'office', u'3g', u'\u043b\u0438', u'de', u'nokia', u'hp', u'\u0438\u043b\u0438', u'xbox', u'![microsoft', u'windows', u'06', u'\u043f\u0440\u0438', u'lenovo', u'one', u'\u0447\u0435\u0440\u043d\u044b\u0439', u'361254', u'\u043d\u0443', u'[microsoft', u'spectre', u'###'] Ремонт [u'\u043e', u'\u0441', u'\u043a', u'mcrf', u'\u0443', u'|', u'mcrf.ru', u'\u043f\u043e', u'ati', u'sony', u'os', u'--', u'se', u'\u0441\u043e', u'\u0432\u043e', u'alva', u'android', u'htc', u'xperia', u'mi', u'](//www.mcrf.ru/forum/styles/mcrf/images/reputation/reputation_highpos.gif)', u'nics', u'\u043b\u0438', u'\u0436\u0435', u'box', u'![alva', u'###', u'\u0442\u0435\u043b', u'\u0434\u043e', u'\u043f\u0440\u0438', u'ns', u'![nics', u'param', u'wp', u'\u0440\u0443\u0431.'] Сайты производителей [u'\u043e', u'\u0441', u'\u043a', u'microsoft', u'\u0443', u'store', u'|', u'\u043f\u043e', u'windows', u'you', u'one', u'search', u'samsung', u'](javascript:void\\(0\\))', u'lg', u'](', u'](https://www.microsoft.com', u'\u0432\u043e', u'xbox', u'office', u'###', u'support', u'htc', u'account', u'\u0434\u043e', u'javascript', u'&amp;amp;', u'apps', u'+7', u're', u'games', u'mobile', u'phone', u'the', u'lg.com'] Телеком [u'\u043e', u'\u0441', u'\u043a', u'tele2', u'tele2.', u'\u043f\u043e', u'\u0434\u043e', u'\u0440\u0435\u0433\u0438\u043e\u043d', u'\u0443', u'\u043d\u043e\u043c\u0435\u0440\u0430', u'\u0437\u0430\u043a\u0430\u0437\u0430', u'\u0440\u0443\u0431.', u'|', u'###', u'\u0442\u0430\u0440\u0438\u0444', u'\u0438\u043d\u0442\u0435\u0440\u043d\u0435\u0442-\u043c\u0430\u0433\u0430\u0437\u0438\u043d', u'![\u0441\u0430\u0440\u0430\u0442\u043e\u0432', u'\u0442\u0432', u'\u043f\u0435\u0440\u0435\u0439\u0442\u0438', u'\u043d\u043e\u043c\u0435\u0440\u043e\u0432', u'\u0431\u0438\u043b\u0430\u0439\u043d', u'\u0438\u043d\u0442\u0435\u0440\u043d\u0435\u0442', u'[\u0441\u043c\u0430\u0440\u0442\u0444\u043e\u043d\u044b', u'\u0433\u0431', u'[\u043f\u0435\u0440\u0435\u043d\u043e\u0441', u'\u0437\u0432\u043e\u043d\u043e\u043a', u'\u043e\u0431\u043b\u0430\u0441\u0442\u044c', u'\u0440\u0435\u0433\u0438\u043e\u043d\u0430', u'\u0441\u0430\u0440\u0430\u0442\u043e\u0432\u0441\u043a\u0430\u044f', u'\u0442\u0432\u043e\u0439', u'####', u'\u0441\u043c\u0430\u0440\u0442\u0444\u043e\u043d', u'[\u0438\u043d\u0444\u043e\u0440\u043c\u0430\u0446\u0438\u044f](/info)', u'\u043f\u043b\u044e\u0441', u'\u043e\u0431'] Форумы и отзывы [u'\u043e', u'\u0441', u'\u043a', u'\u0443', u'\u043f\u043e', u'\u0434\u043e', u'\u043e\u0442\u0437\u044b\u0432', u'\u0432\u043e', u'\u043d\u0438', u'\u0442\u0435', u'\u043e\u0431', u'\u0431', u'\u0436\u0435', u'"\u0438\u043d\u0444\u043e\u0440\u043c\u0430\u0446\u0438\u044f', 
u'\u043f\u043e\u043b\u044c\u0437\u043e\u0432\u0430\u0442\u0435\u043b\u0435."', u'de', u'\u0435', u'\u043c', u'\u0444\u043e\u0442\u043e', u'\u043f\u043e\u043b', u'\u0436', u'\u043a\u043e', u'\u0438\u043c', u'\u0432\u0441\u0435', u'\u0441\u043e', u'|', u'\u043f\u0440\u0438', u'\u0432\u043e\u043b\u043e\u0441', u'\u0440', u'\u0447\u0438\u0442\u0430\u0442\u044c', u'_3', u'\u0431\u044b', u'\u044d\u0442\u043e', u'\u043e\u0442\u0437\u044b\u0432\u044b', u'\u0442\u0430\u043a'] </code></pre> <p>I need to convert column <code>dictionary</code> to lists. These are unicode. I try</p> <pre><code>lsts = df.dictionary.values.tolist() for lst in lsts: [unicode(i) for i in lst.strip('[]').split(',')] </code></pre> <p>But it doesn't help.</p>
0
2016-10-17T07:53:38Z
40,081,967
<p>Try this:</p> <pre><code>rlst = [] for lst in lsts: ls0 = lst.strip('[] ').split(',') rlst.append([unicode(l.lstrip(' u\'').rstrip('\'')) for l in ls0]) </code></pre> <p><code>rlst</code> is your result as a list of lists of unicode strings.</p>
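<p>If the <code>dictionary</code> column actually stores string representations of Python lists (rather than list objects), a simpler sketch is to let <code>ast.literal_eval</code> parse them; the column name is taken from the question:</p> <pre><code>import ast

df['dictionary'] = df['dictionary'].apply(ast.literal_eval)
lsts = df['dictionary'].tolist()   # list of lists of unicode strings
</code></pre>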
0
2016-10-17T08:45:22Z
[ "python", "pandas", "unicode" ]
Testing List of Lists to remove unwanted lists based on a constraint
40,081,231
<p>I have a list of lists. The lists are made up of people from certain areas, if the lists have too many people from a certain area I would like to remove the list from the set of lists. The lists are lengths of 9</p> <pre><code>list=[[["Aarat","California"], ["Aaron","California"], ["Abba","California"], ["Abaddon","California"], ["Abner","Nevada"], ["Abram","Nevada"], ["Abraham","Nevada"], ["Absalom","Nevada"], ["Adullam","Utah"]], ......, [["Abital","California"], ["Abitub","California"], ["Absalom","Nevada"], ["Accad","Nevada"], ["Agar","Utah"], ["Agee","Utah"], ["Aijeleth-Shahar","New Mexico"], ["Ain","New Mexico"], ["Amram","Washington"]]] Cities=["California","Nevada","Utah","New Mexico","Idaho","Washington"] denk=[] for city in Cities: den=[] for i in list: a=i[0] b=i[1] c=i[2] d=i[3] e=i[4] f=i[5] g=i[6] h=i[7] k=i[8] if a==city: ab=1 if b==city: ac=1 if c==city: ad=1 if d==city: ae=1 if e==city: af=1 if f==city: ag=1 if g==city: ah=1 if h==city: ai=1 if k==city: aj=1 if (ab+ac+ad+ae+af+ag+ah+ai+aj)&gt;3: den.append(1) if (ab+ac+ad+ae+af+ag+ah+ai+aj)&lt;4: den.append(0) denk.append(sum(den)) finalList=[] for i, j in enumerate(denk): if j == 0: finalList.append(list[i]) </code></pre> <p>I attempt to count the amount of people from the city, if the amount of people is greater than 3 I try to append a 1, if not 0. I only do this so i can sum up the amount of times the list goes over the quota. </p> <pre><code>Cities=["California","Nevada","Utah","New Mexico","Idaho","Washington"] [["Aarat","California"], ["Aaron","California"], ["Abba","California"], ["Abaddon","California"], ["Abner","Nevada"], ["Abram","Nevada"], ["Abraham","Nevada"], ["Absalom","Nevada"], ["Adullam","Utah"]] </code></pre> <p>In testing this particular list the testing to see how many people are from California would make den=1 because there are more than 3 people from California. The next city, Nevada, would also make den=1, and so on.... den=[1,1,0,0,0,0] denk=[2] So this list gets thrown out</p> <pre><code>[["Abital","California"], ["Abitub","California"], ["Absalom","Nevada"], ["Accad","Nevada"], ["Agar","Utah"], ["Agee","Utah"], ["Aijeleth-Shahar","New Mexico"], ["Ain","New Mexico"], ["Amram","Washington"]] </code></pre> <p>Doing the same here yields den=0 for each city in Cities, den=[0,0,0,0,0,0], denk=[0] so the list will be accepted.</p> <p>The finalList should not have any lists that have too many people from one place.</p>
1
2016-10-17T08:00:58Z
40,081,552
<p>Say you start with something like:</p> <pre><code>list=[[["Aarat","California"], ["Aaron","California"], ["Abba","California"], ["Abaddon","California"], ["Abner","Nevada"], ["Abram","Nevada"], ["Abraham","Nevada"], ["Absalom","Nevada"], ["Adullam","Utah"]],[["Abital","California"], ["Abitub","California"], ["Absalom","Nevada"], ["Accad","Nevada"], ["Agar","Utah"], ["Agee","Utah"], ["Aijeleth-Shahar","New Mexico"], ["Ain","New Mexico"], ["Amram","Washington"]]] </code></pre> <p>To find the distribution within each second-level list, you could use list comprehension and <a href="https://docs.python.org/2/library/collections.html#collections.Counter" rel="nofollow"><code>collections.Counter</code></a>:</p> <pre><code>import collections &gt;&gt;&gt; [collections.Counter(e[1] for e in l) for l in list] [Counter({'California': 4, 'Nevada': 4, 'Utah': 1}), Counter({'California': 2, 'Nevada': 2, 'New Mexico': 2, 'Utah': 2, 'Washington': 1})] </code></pre> <p>To find the most common count within each second-level list, you could use</p> <pre><code>&gt;&gt;&gt; [collections.Counter(e[1] for e in l).most_common(1)[0][1] for l in list] [4, 2] </code></pre> <p>So, to retain only second-level lists where the most common count is at most, say, 3, you could just use</p> <pre><code>&gt;&gt;&gt; [l for l in list if collections.Counter(e[1] for e in l).most_common(1)[0][1] &lt;= 3] [[['Abital', 'California'], ['Abitub', 'California'], ['Absalom', 'Nevada'], ['Accad', 'Nevada'], ['Agar', 'Utah'], ['Agee', 'Utah'], ['Aijeleth-Shahar', 'New Mexico'], ['Ain', 'New Mexico'], ['Amram', 'Washington']]] </code></pre>
1
2016-10-17T08:20:20Z
[ "python", "algorithm", "list" ]
How to create 2 gram shingles?
40,081,237
<p>I have this code which i got from some tutorial-:</p> <pre><code>list1 = [['hello','there','you','too'],['hello','there','you','too','there'],['there','you','hello']] def get_shingle(size,f): #shingles = set() for i in range (0,len(f)-2+1): yield f[i:i+2] #shingles1 = set(get_shingle(list1[0],2)) #shingles2 = set(get_shingle(list1[1],2)) shingles1 = set(get_shingle(2,list1[0])) shingles2 = set(get_shingle(2,list1[1])) print shingles1 print shingles2 print "done" </code></pre> <p>When i try to run this code i am getting an error -: </p> <pre><code>Traceback (most recent call last): File "E:\Research\Shingle Method\create_shingle.py", line 10, in &lt;module&gt; shingles1 = set(get_shingle(2,list1[0])) TypeError: unhashable type: 'list' </code></pre> <p>If list1 is set then the error does not come. But i cannot convert list1 into set as <strong>it removes duplicate words</strong> and also i need it to be a list for my major code which processes a huge text file in the form of list. Why am i getting this 'unhashable list'? Can't we pass list as argument?</p>
2
2016-10-17T08:01:13Z
40,081,519
<p>Because the generator yields <em>lists</em>, and lists are unhashable, so they cannot be put into a set; that is what triggers the error.</p> <p>You can make your code work by a simple alteration: collect the results in a list instead.</p> <pre><code>shingles1 = get_shingle(2,list1[0]) lst = [x for x in shingles1] </code></pre> <p>This will give you all the bigrams from <code>list1[0]</code> and put them into <code>lst</code>.</p>
1
2016-10-17T08:17:42Z
[ "python" ]
How to create 2 gram shingles?
40,081,237
<p>I have this code which i got from some tutorial-:</p> <pre><code>list1 = [['hello','there','you','too'],['hello','there','you','too','there'],['there','you','hello']] def get_shingle(size,f): #shingles = set() for i in range (0,len(f)-2+1): yield f[i:i+2] #shingles1 = set(get_shingle(list1[0],2)) #shingles2 = set(get_shingle(list1[1],2)) shingles1 = set(get_shingle(2,list1[0])) shingles2 = set(get_shingle(2,list1[1])) print shingles1 print shingles2 print "done" </code></pre> <p>When i try to run this code i am getting an error -: </p> <pre><code>Traceback (most recent call last): File "E:\Research\Shingle Method\create_shingle.py", line 10, in &lt;module&gt; shingles1 = set(get_shingle(2,list1[0])) TypeError: unhashable type: 'list' </code></pre> <p>If list1 is set then the error does not come. But i cannot convert list1 into set as <strong>it removes duplicate words</strong> and also i need it to be a list for my major code which processes a huge text file in the form of list. Why am i getting this 'unhashable list'? Can't we pass list as argument?</p>
2
2016-10-17T08:01:13Z
40,081,589
<p>The issue lies in the fact that your get_shingle() function yields <code>lists</code>. Lists are not hashable, which is needed to build a set. You can easily solve this by yielding a tuple (which is hashable), instead of a list.</p> <p>Transform the following line in your code:</p> <pre><code>yield tuple(f[i:i+2]) </code></pre> <p>This will result in the following:</p> <pre><code>list1 = [['hello','there','you','too'],['hello','there','you','too','there'],['there','you','hello']] def get_shingle(size,f): #shingles = set() print(f) for i in range (0,len(f)-2+1): yield tuple(f[i:i+2]) shingles1 = { i for i in get_shingle(2,list1[0])} print(shingles1) </code></pre> <p>and outputs:</p> <pre><code>['hello', 'there', 'you', 'too'] {('you', 'too'), ('hello', 'there'), ('there', 'you')} </code></pre>
1
2016-10-17T08:22:41Z
[ "python" ]
How to create 2 gram shingles?
40,081,237
<p>I have this code which i got from some tutorial-:</p> <pre><code>list1 = [['hello','there','you','too'],['hello','there','you','too','there'],['there','you','hello']] def get_shingle(size,f): #shingles = set() for i in range (0,len(f)-2+1): yield f[i:i+2] #shingles1 = set(get_shingle(list1[0],2)) #shingles2 = set(get_shingle(list1[1],2)) shingles1 = set(get_shingle(2,list1[0])) shingles2 = set(get_shingle(2,list1[1])) print shingles1 print shingles2 print "done" </code></pre> <p>When i try to run this code i am getting an error -: </p> <pre><code>Traceback (most recent call last): File "E:\Research\Shingle Method\create_shingle.py", line 10, in &lt;module&gt; shingles1 = set(get_shingle(2,list1[0])) TypeError: unhashable type: 'list' </code></pre> <p>If list1 is set then the error does not come. But i cannot convert list1 into set as <strong>it removes duplicate words</strong> and also i need it to be a list for my major code which processes a huge text file in the form of list. Why am i getting this 'unhashable list'? Can't we pass list as argument?</p>
2
2016-10-17T08:01:13Z
40,081,924
<p>The generator yields lists, and the elements of a <code>set</code> must be hashable (immutable), which lists are not.</p> <p>So something like this will work: turn each yielded list into a tuple before building the set.</p> <pre><code>shingles1 = set(tuple(x) for x in get_shingle(2, list1[0])) </code></pre>
1
2016-10-17T08:42:43Z
[ "python" ]
Matplotlib odd subplots
40,081,489
<p>I have to plot a figure with 11 subpots as you can see below. But as it is an odd number, i dont know how to deal the subplot (4,3,12) to remove it... and place the 2 last plots on the center Moreover i would like to increse the subplot size as the space is too important. The code is below.</p> <p><a href="https://i.stack.imgur.com/c6JK1.png" rel="nofollow"><img src="https://i.stack.imgur.com/c6JK1.png" alt="enter image description here"></a></p> <p>The code is :</p> <pre><code>plt.close() fig, axes = plt.subplots(nrows=4, ncols=3) plt.tight_layout(pad=0.05, w_pad=0.001, h_pad=2.0) ax1 = plt.subplot(431) # creates first axis ax1.set_xticks([]) ax1.set_yticks([]) ax1.tick_params(labelsize=8) i1 = ax1.imshow(IIIm,cmap='hot',extent=(0,2000,0,2000),vmin=-0.2,vmax=-0.1) i11 = ax1.plot((0,600),(1000,1000),'k-',linewidth=3) cb1=plt.colorbar(i1,ax=ax1,ticks=[-0.2,-0.15,-0.1],fraction=0.046, pad=0.04,format='%.3f') cb1.ax.tick_params(labelsize=8) ax1.set_title("$n = -3$", y=1.05, fontsize=12) ax2 = plt.subplot(432) # creates second axis ax2.set_xticks([]) ax2.set_yticks([]) i2=ax2.imshow(IIm,cmap='hot',extent=(0,2000,0,2000),vmin=-0.1,vmax=0.1) i22 = ax2.plot((0,600),(1000,1000),'k-',linewidth=3) ax2.set_title("$n = -2$", y=1.05, fontsize=12) ax2.set_xticklabels([]) ax2.set_yticklabels([]) cb2=plt.colorbar(i2,ax=ax2,ticks=[-0.1,0.0,0.1],fraction=0.046, pad=0.04,format='%.3f') cb2.ax.tick_params(labelsize=8) ax3 = plt.subplot(433) # creates first axis ax3.set_xticks([]) ax3.set_yticks([]) i3 = ax3.imshow(Im,cmap='hot',extent=(0,2000,0,2000),vmin=-1,vmax=-0.2) i33 = ax3.plot((0,600),(1000,1000),'k-',linewidth=3) ax3.set_title("$n = -1$ ", y=1.05, fontsize=12) cb3=plt.colorbar(i3,ax=ax3,ticks=[-1,-0.6,-0.2],fraction=0.046, pad=0.04,format='%.3f') ax3.set_xticklabels([]) ax3.set_yticklabels([]) cb3.ax.tick_params(labelsize=8) #plt.gcf().tight_layout() #plt.tight_layout(pad=0.05, w_pad=0.001, h_pad=2.0) ax1 = plt.subplot(434) # creates first axis ax1.set_xticks([]) ax1.set_yticks([]) ax1.tick_params(labelsize=8) i1 = ax1.imshow(ZV_0_modeI,extent=(0,2000,0,2000),cmap=plt.cm.hot,origin="lower", vmin=-1, vmax=1) i11 = ax1.plot((0,600),(1000,1000),'k-',linewidth=3) cb1=plt.colorbar(i1,ax=ax1,ticks=[-1,0, 1],fraction=0.046, pad=0.04,format='%.2f') cb1.ax.tick_params(labelsize=8) ax1.set_title("$ n = 0$", y=1.05, fontsize=12) ax2 = plt.subplot(435) # creates second axis ax2.set_xticks([]) ax2.set_yticks([]) i2=ax2.imshow(I,cmap='hot',extent=(0,2000,0,2000), vmin=-1, vmax=1) i22 = ax2.plot((0,600),(1000,1000),'k-',linewidth=3) ax2.set_title("$n = 1$", y=1.05, fontsize=12) ax2.set_xticklabels([]) ax2.set_yticklabels([]) cb2=plt.colorbar(i2,ax=ax2,fraction=0.046, pad=0.04,ticks=[-1,0,1],format='%.2f') cb2.ax.tick_params(labelsize=8) ax3 = plt.subplot(436) # creates first axis ax3.set_xticks([]) ax3.set_yticks([]) i3 = ax3.imshow(II,cmap='hot',extent=(0,2000,0,2000),vmin=-1,vmax=1) i33 = ax3.plot((0,600),(1000,1000),'k-',linewidth=3) ax3.set_title("$n = 2$ ", y=1.05, fontsize=12) cb3=plt.colorbar(i3,ax=ax3,fraction=0.046, pad=0.04,ticks=[-1.,0,1.],format='%.2f') ax3.set_xticklabels([]) ax3.set_yticklabels([]) cb3.ax.tick_params(labelsize=8) plt.gcf().tight_layout() plt.tight_layout(pad=0.05, w_pad=0.001, h_pad=2.0) ax1 = plt.subplot(437) # creates first axis ax1.set_xticks([]) ax1.set_yticks([]) ax1.tick_params(labelsize=8) i1 = ax1.imshow(III,cmap=plt.cm.hot,origin="lower",extent=(0,2000,0,2000),vmin=-1, vmax=1) i11 = ax1.plot((0,600),(1000,1000),'k-',linewidth=3) cb1=plt.colorbar(i1,ax=ax1,ticks=[-1,0, 
1],fraction=0.046, pad=0.04,format='%.2f') cb1.ax.tick_params(labelsize=8) ax1.set_title("$ n = 3$", y=1.05, fontsize=12) ax2 = plt.subplot(438) # creates second axis ax2.set_xticks([]) ax2.set_yticks([]) i2=ax2.imshow(IV,cmap='hot',extent=(0,2000,0,2000), vmin=-1, vmax=1) i22 = ax2.plot((0,600),(1000,1000),'k-',linewidth=3) ax2.set_title("$n = 4$", y=1.05, fontsize=12) ax2.set_xticklabels([]) ax2.set_yticklabels([]) cb2=plt.colorbar(i2,ax=ax2,fraction=0.046, pad=0.04,ticks=[-1,0,1],format='%.2f') cb2.ax.tick_params(labelsize=8) ax3 = plt.subplot(439) # creates first axis ax3.set_xticks([]) ax3.set_yticks([]) i3 = ax3.imshow(V,cmap='hot',extent=(0,2000,0,2000),vmin=-1,vmax=1) i33 = ax3.plot((0,600),(1000,1000),'k-',linewidth=3) ax3.set_title("$n = 5$ ", y=1.05, fontsize=12) cb3=plt.colorbar(i3,ax=ax3,fraction=0.046, pad=0.04,ticks=[-1.,0,1.],format='%.2f') ax3.set_xticklabels([]) ax3.set_yticklabels([]) cb3.ax.tick_params(labelsize=8) plt.gcf().tight_layout() plt.tight_layout(pad=0.05, w_pad=0.001, h_pad=2.0) ax1 = plt.subplot(4,3,10) # creates first axis ax1.set_xticks([]) ax1.set_yticks([]) ax1.tick_params(labelsize=8) i1 = ax1.imshow(VI,cmap=plt.cm.hot,origin="lower",extent=(0,2000,0,2000),vmin=-1, vmax=1) i11 = ax1.plot((0,600),(1000,1000),'k-',linewidth=3) cb1=plt.colorbar(i1,ax=ax1,ticks=[-1,0, 1],fraction=0.046, pad=0.04,format='%.2f') cb1.ax.tick_params(labelsize=8) ax1.set_title("$ n = 6$", y=1.05, fontsize=12) ax2 = plt.subplot(4,3,11) # creates second axis ax2.set_xticks([0]) ax2.set_yticks([]) i2=ax2.imshow(VII,cmap='hot',extent=(0,2000,0,2000), vmin=-1, vmax=1) i22 = ax2.plot((0,600),(1000,1000),'k-',linewidth=3) ax2.set_title("$n = 7$", y=1.05, fontsize=12) ax2.set_xticklabels([]) ax2.set_yticklabels([]) cb2=plt.colorbar(i2,ax=ax2,fraction=0.046, pad=0.04,ticks=[-1,0,1],format='%.2f') cb2.ax.tick_params(labelsize=8) plt.savefig('filtre.png', dpi=250,bbox_inches='tight', pad_inches=0.1) plt.show() </code></pre>
1
2016-10-17T08:16:13Z
40,081,771
<p>One way of achieving what you require is to use matplotlibs <a href="http://matplotlib.org/users/gridspec.html" rel="nofollow">subplot2grid</a> feature. Using this you can set the total size of the grid (4,3 in your case) and choose to only plot data in certain subplots in this grid. Below is a simplified example:</p> <pre><code>import matplotlib.pyplot as plt x = [1,2] y = [3,4] ax1 = plt.subplot2grid((4, 3), (0, 0)) ax2 = plt.subplot2grid((4, 3), (0, 1)) ax3 = plt.subplot2grid((4, 3), (0, 2)) ax4 = plt.subplot2grid((4, 3), (1, 0)) ax5 = plt.subplot2grid((4, 3), (1, 1)) ax6 = plt.subplot2grid((4, 3), (1, 2)) ax7 = plt.subplot2grid((4, 3), (2, 0)) ax8 = plt.subplot2grid((4, 3), (2, 1)) ax9 = plt.subplot2grid((4, 3), (2, 2)) ax10 = plt.subplot2grid((4, 3), (3, 0)) ax11 = plt.subplot2grid((4, 3), (3, 1)) plt.subplots_adjust(wspace = 0.3, hspace = 0.3) #make the figure look better ax1.plot(x,y) ax2.plot(x,y) ax3.plot(x,y) ax4.plot(x,y) ax5.plot(x,y) ax6.plot(x,y) ax7.plot(x,y) ax8.plot(x,y) ax9.plot(x,y) ax10.plot(x,y) ax11.plot(x,y) plt.show() </code></pre> <p>This produces the figure:</p> <p><a href="https://i.stack.imgur.com/S3bUu.png" rel="nofollow"><img src="https://i.stack.imgur.com/S3bUu.png" alt="enter image description here"></a></p>
1
2016-10-17T08:33:52Z
[ "python", "matplotlib", "subplot" ]
Python AppEngine MapReduce
40,081,642
<p>i have created a pretty simple MapReduce pipeline, but i am having a cryptic:</p> <p><code>PipelineSetupError: Error starting production.cron.pipelines.ItemsInfoPipeline(*(), **{})#a741186284ed4fb8a4cd06e38921beff:</code></p> <p>when i try to start it. This is the pipeline code:</p> <pre><code>class ItemsInfoPipeline(base_handler.PipelineBase): """ """ def run(self): output = yield mapreduce_pipeline.MapreducePipeline( job_name="items_job", mapper_spec="production.cron.mappers.items_info_mapper", input_reader_spec="mapreduce.input_readers.DatastoreInputReader", mapper_params={ "input_reader": { "entity_kind": "production.models.Transaction" } } ) yield ItemsInfoStorePipeline(output) class ItemsInfoStorePipeline(base_handler.PipelineBase): """ """ def run(self, statistics): print statistics return "OK" </code></pre> <p>Of course i have double checked that the mapper path is right, and take into account that ItemsInfoStorePipeline is not doing anything because i am focusing the have the pipeline started, which is not happening. </p> <p>It is all triggered by a Flask view, the following:</p> <pre><code>class ItemsInfoMRJob(views.MethodView): """ It's based on transacions. """ def get(self): """ :return: """ pipeline = ItemsInfoPipeline() pipeline.start() redirect_url = "%s/status?root=%s" % (pipeline.base_path, pipeline.pipeline_id) return flask.redirect(redirect_url) </code></pre> <p>I am using <code>GoogleAppEngineMapReduce==1.9.22.0</code></p> <p>Thanks for any help.</p> <p><strong>UPDATE</strong></p> <p>The above code works once deployed.</p> <p><strong>UPDATE 2</strong></p> <p>Apparently there's more people dealing with this:</p> <p><a href="https://github.com/GoogleCloudPlatform/appengine-mapreduce/issues/103" rel="nofollow">https://github.com/GoogleCloudPlatform/appengine-mapreduce/issues/103</a></p>
0
2016-10-17T08:25:32Z
40,093,196
<p>This was driving me nuts today. When I run it in PyCharm I don't get these errors; on the command line I did:</p> <pre><code>python /usr/local/google_appengine/dev_appserver.py app.yaml --port 8000 --host localhost --admin_port=8080 --port=8000 </code></pre> <p>instead of:</p> <pre><code>dev_appserver.py app.yaml --port 8000 --host localhost --admin_port=8080 --port=8000 </code></pre> <p>... and now I don't get the error.</p>
0
2016-10-17T18:24:45Z
[ "python", "google-app-engine", "mapreduce", "google-app-engine-python" ]
Getting low test accuracy using Tensorflow batch_norm function
40,081,697
<p>I am using the official Batch Normalization (BN) function (<a href="https://github.com/tensorflow/tensorflow/blob/b826b79718e3e93148c3545e7aa3f90891744cc0/tensorflow/contrib/layers/python/layers/layers.py#L100" rel="nofollow">tf.contrib.layers.batch_norm()</a>) of Tensorflow on the MNIST data. I use the following code for adding BN:</p> <pre><code>local4_bn = tf.contrib.layers.batch_norm(local4, is_training=True) </code></pre> <p>During testing, I change "is_training=False" in the above line of code and observe only 20% accuracy. However, it gives ~99% accuracy if I use the above code also for testing (i.e., keeping is_training=True) with a batch size of 100 images. This observation indicates that the <em>exponential moving average and variance</em> computed by <a href="https://github.com/tensorflow/tensorflow/blob/b826b79718e3e93148c3545e7aa3f90891744cc0/tensorflow/contrib/layers/python/layers/layers.py#L100" rel="nofollow">batch_norm()</a> are probably incorrect or I am missing something in my code.</p> <p>Can anyone please answer about the solution of the above problem.</p>
1
2016-10-17T08:28:39Z
40,083,061
<p>You get ~99% accuracy when you test your model with <code>is_training=True</code> only because of the batch size of 100. If you change the batch size to 1 your accuracy will decrease.</p> <p>This is due to the fact that you're computing the exponential moving average and variance for the input batch and then you're (batch-)normalizing the layer's output using these values.</p> <p>The <code>batch_norm</code> function has the parameter <code>variables_collections</code> that helps you store the computed moving average and variance during the train phase and reuse them during the test phase.</p> <p>If you define a collection for these variables, then the <code>batch_norm</code> layer will use them during the testing phase, instead of calculating new values.</p> <p>Therefore, if you change your batch normalization layer definition to</p> <pre><code>local4_bn = tf.contrib.layers.batch_norm(local4, is_training=True, variables_collections=["batch_norm_non_trainable_variables_collection"]) </code></pre> <p>the layer will store the computed variables into the <code>"batch_norm_non_trainable_variables_collection"</code> collection.</p> <p>In the test phase, when you pass the <code>is_training=False</code> parameter, the layer will re-use the computed values that it finds in the collection.</p> <p>Note that the moving average and the variance are not trainable parameters and therefore, if you save only your model's trainable parameters in the checkpoint files, you have to manually add the non-trainable variables stored in the previously defined collection.</p> <p>You can do it when you create the <code>Saver</code> object:</p> <pre><code>saver = tf.train.Saver(tf.trainable_variables() + tf.get_collection_ref("batch_norm_non_trainable_variables_collection") + otherlistofvariables) </code></pre> <p>In addition, since batch normalization can limit the expressive power of the layer it is applied to (because it restricts the range of the values), you should enable the network to learn the parameters <code>gamma</code> and <code>beta</code> (the affine transformation coefficients described in the <a href="https://arxiv.org/abs/1502.03167" rel="nofollow">paper</a>), which allow the network to learn an affine transformation that increases the representation power of the layer.</p> <p>You can enable the learning of these parameters by setting the corresponding arguments of the <code>batch_norm</code> function to <code>True</code>, in this way:</p> <pre><code>local4_bn = tf.contrib.layers.batch_norm( local4, is_training=True, center=True, # beta scale=True, # gamma variables_collections=["batch_norm_non_trainable_variables_collection"]) </code></pre>
1
2016-10-17T09:42:34Z
[ "python", "tensorflow" ]
Delete first n digits from a column
40,081,709
<p>I have a pandas dataframe (roughly 7000 rows) that looks as follows:</p> <pre><code>Col1 Col2 12345 1234 678910 6789 </code></pre> <p>I would like to delete the first 4 digits from Col1, so as to end up with:</p> <pre><code>Col1 Col2 5 1234 10 6789 </code></pre> <p>Or just separate the first column into 2 columns.</p>
0
2016-10-17T08:29:28Z
40,081,768
<p>Separating first column into two new ones:</p> <pre><code>In [5]: df[['New1','New2']] = (df['Col1'].astype(str) .str.extract(r'(\d{4})(\d+)', expand=True) .astype(int)) In [6]: df Out[6]: Col1 Col2 New1 New2 0 12345 1234 1234 5 1 678910 6789 6789 10 In [9]: df.dtypes Out[9]: Col1 int64 Col2 int64 New1 int32 New2 int32 dtype: object </code></pre> <p><strong>NOTE:</strong> this solution will work with Pandas version 0.18.0+</p>
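<p>If you only want to drop the first four digits (the first option in the question) rather than keep both parts, a minimal sketch is:</p> <pre><code>df['Col1'] = df['Col1'].astype(str).str[4:].astype(int)
</code></pre>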
3
2016-10-17T08:33:29Z
[ "python", "database", "pandas", "dataframe" ]
I can't iterate over line_styles (Matplotlib)
40,081,848
<p>Plot generates different colors for each lines but I also need to generate different line_styles for the graph. After searching for some information, I found itertools module. Yet I can't generate plot with the error: <strong>There is no Line2D property "shape_list".</strong></p> <pre><code>import itertools from glob import glob import numpy as np import matplotlib.pyplot as plt import matplotlib as mpl shape_list = ["square", "triangle", "circle", "pentagon", "star", "octagon"] # loop over all files in the current directory ending with .txt for fname in glob("*.txt"): # read file, skip header (1 line) and unpack into 3 variables WL, ABS, T = np.genfromtxt(fname, skip_header=1, unpack=True) g = itertools.cycle(shape_list) plt.plot(WL, T, label=fname[0:3],shape_list = g.__next__()) plt.xlabel('Wavelength (nm)') plt.xlim(200,1000) plt.ylim(0,100) plt.ylabel('Transmittance (%)') mpl.rcParams.update({'font.size': 12}) plt.legend(loc=4,prop={'size':10}) plt.grid(True) #plt.legend(loc='lower center') plt.savefig('Transmittance', dpi=600) </code></pre>
0
2016-10-17T08:38:25Z
40,082,148
<p>I think that <code>g = itertools.cycle(shape_list)</code> should go outside the loop.</p> <p>Also see <a href="http://matplotlib.org/api/markers_api.html#module-matplotlib.markers" rel="nofollow">here</a> for valid markers. What you probably want is <code>plt.plot(WL, T, label=fname[0:3], marker=g.__next__())</code></p>
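<p>Putting both points together, a sketch of the loop; the marker codes below are the Matplotlib equivalents of the shape names in the question:</p> <pre><code>import itertools

# 's' square, '^' triangle, 'o' circle, 'p' pentagon, '*' star, '8' octagon
markers = itertools.cycle(['s', '^', 'o', 'p', '*', '8'])

for fname in glob("*.txt"):
    WL, ABS, T = np.genfromtxt(fname, skip_header=1, unpack=True)
    plt.plot(WL, T, label=fname[0:3], marker=next(markers))
</code></pre>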
1
2016-10-17T08:55:20Z
[ "python", "matplotlib" ]
I can't iterate over line_styles (Matplotlib)
40,081,848
<p>Plot generates different colors for each lines but I also need to generate different line_styles for the graph. After searching for some information, I found itertools module. Yet I can't generate plot with the error: <strong>There is no Line2D property "shape_list".</strong></p> <pre><code>import itertools from glob import glob import numpy as np import matplotlib.pyplot as plt import matplotlib as mpl shape_list = ["square", "triangle", "circle", "pentagon", "star", "octagon"] # loop over all files in the current directory ending with .txt for fname in glob("*.txt"): # read file, skip header (1 line) and unpack into 3 variables WL, ABS, T = np.genfromtxt(fname, skip_header=1, unpack=True) g = itertools.cycle(shape_list) plt.plot(WL, T, label=fname[0:3],shape_list = g.__next__()) plt.xlabel('Wavelength (nm)') plt.xlim(200,1000) plt.ylim(0,100) plt.ylabel('Transmittance (%)') mpl.rcParams.update({'font.size': 12}) plt.legend(loc=4,prop={'size':10}) plt.grid(True) #plt.legend(loc='lower center') plt.savefig('Transmittance', dpi=600) </code></pre>
0
2016-10-17T08:38:25Z
40,082,194
<p>The markers you can use with <code>plot</code> are defined <a href="http://matplotlib.org/api/markers_api.html" rel="nofollow">in the documentation</a>.</p> <p>To change the marker style, use the <code>marker=</code> argument in the call to <code>plot()</code>, e.g.:</p> <pre><code>plt.plot(WL, T, label=fname[0:3], marker=g.__next__()) </code></pre>
1
2016-10-17T08:57:34Z
[ "python", "matplotlib" ]
How can I show a flyout in GTK 3?
40,081,856
<p>I started creating GTK3 apps in Python a few days ago.<br> I was wondering how to create a flyout menu like the one gedit shows when the Open button is clicked.</p> <p>Thank you!</p> <p><a href="https://i.stack.imgur.com/NMjzF.png" rel="nofollow"><img src="https://i.stack.imgur.com/NMjzF.png" alt="Flyout in gedit"></a></p>
1
2016-10-17T08:38:42Z
40,100,121
<p>You're looking for a <a href="http://lazka.github.io/pgi-docs/index.html#Gtk-3.0/classes/MenuButton.html#Gtk.MenuButton" rel="nofollow"><code>Gtk.MenuButton</code></a> with the <code>use_popover</code> property set to <code>True</code>.</p>
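<p>A rough sketch (untested here) of a <code>Gtk.MenuButton</code> with an attached popover in PyGObject; the window and label contents are placeholders:</p> <pre><code>import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk

win = Gtk.Window(title="Popover demo")
button = Gtk.MenuButton(label="Open")

popover = Gtk.Popover()
box = Gtk.Box(orientation=Gtk.Orientation.VERTICAL, margin=10)
box.pack_start(Gtk.Label(label="Recently used files would go here"), False, False, 0)
box.show_all()
popover.add(box)

button.set_popover(popover)
win.add(button)
win.connect("destroy", Gtk.main_quit)
win.show_all()
Gtk.main()
</code></pre>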
0
2016-10-18T05:22:08Z
[ "python", "gtk3" ]
how to disable window7 minimize ability
40,081,882
<p>I want to disable the minimize ability of all windows in Windows 7. I have used <strong>SetWindowLong</strong> in Python win32gui.</p> <pre><code>from win32gui import * def disablemin(hwnd,HWMD): SetWindowLong(hwnd,win32con.GWL_STYLE,GetWindowLong(hwnd,win32con.GWL_STYLE) &amp; ~win32con.WS_MINIMIZEBOX) EnumWindows(disablemin, 0) </code></pre> <p>But it only made the minimize buttons unusable. I find some windows can still be minimized. How can I solve this?</p>
0
2016-10-17T08:40:04Z
40,087,113
<p>There are a few problems with your idea.</p> <ol> <li>You're trying to alter the behavior of other windows, windows that aren't yours. This is always a bad idea.</li> <li>You try to alter the window styles once. This isn't necessarily sufficient; they can be restored by the victim process.</li> <li>Even if the window style is altered, it doesn't mean the actual minimize box is gone. This just removes the one created by the OS; manually-drawn icons might still be present.</li> <li>Even if the minimize box is gone, this won't stop calls to <code>ShowWindow(SW_MINIMIZE)</code></li> </ol>
0
2016-10-17T13:01:15Z
[ "python", "c++", "pywin32" ]
Saving image in python
40,081,960
<p>I'm new to Python. What I want to do is read in an image, convert it to grey values and save it.</p> <p>This is what I have so far:</p> <pre><code># construct the argument parser and parse the arguments ap = argparse.ArgumentParser() ap.add_argument("-i", "--image", required=True, help="Path to the image") args = vars(ap.parse_args()) #Greyvalue image im = Image.open(args["image"]) im_grey = im.convert('LA') # convert to grayscale </code></pre> <p>Now my problem is how to save it. As far as I understood there are a lot of different modules (I am using Python 2.7). Could someone give me an example?</p> <p>Thanks</p>
0
2016-10-17T08:44:46Z
40,081,991
<p>I don't know what you mean about "lots of different modules". You're presumably using Pillow; you opened the image via <code>Image.open</code> and assigned the converted image to <code>im_grey</code>, so now you have an instance of Image which has <a href="http://pillow.readthedocs.io/en/3.4.x/reference/Image.html#PIL.Image.Image.save" rel="nofollow">a save method</a>:</p> <pre><code>im_grey.save('path/to/new/filename') </code></pre>
0
2016-10-17T08:46:38Z
[ "python", "image", "save" ]
Saving image in python
40,081,960
<p>I'm new to Python. What I want to do is read in an image, convert it to grey values and save it.</p> <p>This is what I have so far:</p> <pre><code># construct the argument parser and parse the arguments ap = argparse.ArgumentParser() ap.add_argument("-i", "--image", required=True, help="Path to the image") args = vars(ap.parse_args()) #Greyvalue image im = Image.open(args["image"]) im_grey = im.convert('LA') # convert to grayscale </code></pre> <p>Now my problem is how to save it. As far as I understood there are a lot of different modules (I am using Python 2.7). Could someone give me an example?</p> <p>Thanks</p>
0
2016-10-17T08:44:46Z
40,082,036
<p><strong>Method 1: save method</strong></p> <pre><code>im_grey.save('greyscale.png') </code></pre> <p>Use Image_object.save() method</p> <p><strong>Method 2: imsave method</strong></p> <pre><code>import matplotlib.image as mpimg mpimg.imsave("greyscale.png", im_grey) </code></pre>
1
2016-10-17T08:49:17Z
[ "python", "image", "save" ]
How to clean a string using python regular expression
40,082,056
<p>I have the following string which I have to clean:</p> <pre><code>#import re addr="abcd&amp;^fhj" problemchars = re.compile(r'[=\+/&amp;&lt;&gt;;\'"\?%#$@\,\. \t\r\n]') re.search(problemchars,addr) </code></pre>
-1
2016-10-17T08:50:12Z
40,082,109
<p>In that case use <code>re.sub</code> searching <code>\W</code> (non-alphanum) and replacing by nothing.</p> <pre><code>import re addr="abcd&amp;^fhj" print(re.sub("\W","",addr)) </code></pre> <p>(<code>"\W+"</code> works too, but not sure it would be more performant) </p>
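<p>If you would rather keep the character class you already compiled, the same idea works with the compiled pattern directly:</p> <pre><code>cleaned = problemchars.sub("", addr)   # removes only the characters listed in the class ('^' is not among them)
print(cleaned)
</code></pre>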
1
2016-10-17T08:53:46Z
[ "python", "regex" ]
How to clean a string using python regular expression
40,082,056
<p>I have the following string which I have to clean:</p> <pre><code>#import re addr="abcd&amp;^fhj" problemchars = re.compile(r'[=\+/&amp;&lt;&gt;;\'"\?%#$@\,\. \t\r\n]') re.search(problemchars,addr) </code></pre>
-1
2016-10-17T08:50:12Z
40,082,282
<p>you could use the filter function as well if you don't want to go with regex</p> <pre><code>line = "abcd&amp;^fhj" line = filter(str.isalpha, line) print line # Change for python3 </code></pre> <p>Output : </p> <pre><code>abcdfhj </code></pre> <p>Edit: For python 3 you could change the print statement like this</p> <pre><code>print(''.join(list(line))) </code></pre>
0
2016-10-17T09:02:45Z
[ "python", "regex" ]
How to faster compute the count frequency of words in a large words list with python and be a dictionary
40,082,114
<p>There is a very long word list; its length is about 360000. I want to get each word's frequency and store the result in a dictionary.</p> <p>For example:</p> <pre><code>{'I': 50, 'good': 30,.......} </code></pre> <p>Since the word list is large, I found it takes a lot of time to compute. Do you have a faster method to accomplish this?</p> <p>My code, so far, is the following:</p> <pre><code> dict_pronoun = dict([(i, lst_all_tweet_noun.count(i)) for i in lst_all_tweet_noun]) sorted(dict_pronoun) </code></pre>
0
2016-10-17T08:54:02Z
40,082,491
<p>You are doing several things wrong here:</p> <ul> <li><p>You are building a huge list first, then turning that list object into a dictionary. There is no need to use the <code>[..]</code> list comprehension; just dropping the <code>[</code> and <code>]</code> would turn it into a much more memory-efficient generator expression.</p></li> <li><p>You are using <code>dict()</code> with a loop instead of a <code>{keyexpr: valueexpr for ... in ...}</code> dictionary comprehension; this would avoid a generator expression altogether and go straight to building a dictionary.</p></li> <li><p>You are using <code>list.count()</code>; this does a <strong>full scan</strong> of the list for every element. You turned a linear scan to count N items into an O(N**2) quadratic problem. You could simply increment an integer in the dictionary each time you find the key is already present, and set the value to 1 otherwise, but there are better options (see below).</p></li> <li><p>The <code>sorted()</code> call is busy-work; it returns a sorted list of keys that is then discarded again. Dictionaries are not sortable, and sorting would not produce a dictionary again at any rate.</p></li> </ul> <p>Use a <a href="https://docs.python.org/3/library/collections.html#collections.Counter" rel="nofollow"><code>collections.Counter()</code> object</a> here to do your counting; it uses a linear scan:</p> <pre><code>from collections import Counter dict_pronoun = Counter(lst_all_tweet_noun) </code></pre> <p>A <code>Counter</code> has a <a href="https://docs.python.org/3/library/collections.html#collections.Counter.most_common" rel="nofollow"><code>Counter.most_common()</code> method</a> which will efficiently give you output <em>sorted by counts</em>, which is what I suspect you wanted to achieve with the <code>sorted()</code> call.</p> <p>For example, to get the top K elements (where K is smaller than N, the size of the dictionary), a <code>heapq</code> is used to get you those elements in O(NlogK) time (avoiding a full O(NlogN) sort).</p>
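<p>For example, a short sketch printing the ten most frequent words and their counts:</p> <pre><code>from collections import Counter

dict_pronoun = Counter(lst_all_tweet_noun)
for word, count in dict_pronoun.most_common(10):
    print(word, count)
</code></pre>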
9
2016-10-17T09:13:08Z
[ "python", "performance", "list", "python-3.x", "dictionary" ]
python create .pgm file
40,082,165
<p>Well first I have to mention that I read the material on this page including : <a href="http://stackoverflow.com/questions/12374937/create-binary-pbm-pgm-ppm">Create binary PBM/PGM/PPM</a></p> <p>I also read the page explaining the .pgm file format<a href="http://netpbm.sourceforge.net/doc/pgm.html" rel="nofollow">.pgm file format</a>. I know that there is a difference between .pgm "raw" format and .pgm "plain" format. I also know that these files are being created as 8-bit (allowing integer values between 0-255) or 16-bit (allowing integer values between 0-65535) binary files.</p> <p>Non of these information could yet help me to write a clean piece of code that creates a plain .pgm file in either 8-bit of 16-bit formats.</p> <p>Here I attach my python script. This code results in a file with distorted (integer) values! </p> <pre><code>import numpy as np # define the width (columns) and height (rows) of your image width = 20 height = 40 p_num = width * height arr = np.random.randint(0,255,p_num) # open file for writing filename = 'test.pgm' fout=open(filename, 'wb') # define PGM Header pgmHeader = 'P5' + ' ' + str(width) + ' ' + str(height) + ' ' + str(255) + '\n' pgmHeader_byte = bytearray(pgmHeader,'utf-8') # write the header to the file fout.write(pgmHeader_byte) # write the data to the file img = np.reshape(arr,(height,width)) for j in range(height): bnd = list(img[j,:]) bnd_str = np.char.mod('%d',bnd) bnd_str = np.append(bnd_str,'\n') bnd_str = [' '.join(bnd_str)][0] bnd_byte = bytearray(bnd_str,'utf-8') fout.write(bnd_byte) fout.close() </code></pre> <p>As the result of this code a .pgm file is being created where data are completely changed (as if squeezed into the (10-50) range) I would appreciate any comment / corrections on this code. </p>
0
2016-10-17T08:56:12Z
40,083,491
<p>First, your code has a missing opening <code>'</code> for <code>\n'</code> in the statement <code>pgmHeader = 'P5' + ...</code>. Second, there is no <code>fout = open(filename, 'wb')</code>. The main problem is that you use <code>ASCII</code> format to encode the pixel data; you should use <code>binary</code> format to encode it (because you used the magic number 'P5'):</p> <pre><code>for j in range(height): bnd = list(img[j,:]) fout.write(bytearray(bnd)) # for 8-bit data only </code></pre>
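<p>For completeness, here is a minimal sketch that writes a whole 8-bit raw (P5) PGM in one go; the random data and file name are placeholders, and it assumes a NumPy recent enough to have <code>tobytes()</code>:</p>
<pre><code>import numpy as np

width, height = 20, 40
img = np.random.randint(0, 256, (height, width)).astype(np.uint8)

with open('test.pgm', 'wb') as fout:
    fout.write(('P5 %d %d 255\n' % (width, height)).encode('ascii'))
    fout.write(img.tobytes())   # raw 8-bit pixel data, row by row
</code></pre>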
0
2016-10-17T10:02:34Z
[ "python", "image", "format", "pgm" ]
IndexError: list index out of range when composing dict from 2 column csv
40,082,321
<p>My script that crunches numbers stored in csv gets the numbers into a dict from the csv like this:</p> <pre><code>fide_rating_file = fide_csv_rating_file.read() fide_rating_file = fide_rating_file.split("\n") fide_rating_file2 = [f for f in fide_rating_file if len(f) &gt; 0] fide_rating_file3 = [f.split(",") for f in fide_rating_file2] fide_ratings = {f[0]: f[1] for f in fide_rating_file3} </code></pre> <p>This is probably not a pythonic way to work with csv (suggestions are welcome), I succesfully ran the script on tens of csv files. Now I am getting a traceback telling me that:</p> <pre><code>File "...script.py", line 76, in script fide_ratings = {f[0]: f[1] for f in fide_rating_file3} File "...script.py", line 76, in &lt;dictcomp&gt; fide_ratings = {f[0]: f[1] for f in fide_rating_file3} IndexError: list index out of range </code></pre> <p>The csv looks like this (this is just an excerpt, is way too long to post here):</p> <pre><code>1701991,2383 1407589,2188 1401815,2451 1411802,1913 1406248,2068 504599,2134 2252465,2099 </code></pre> <p>The fact that it ran without errors on other files would suggest that this particular csv is corrupted. I checked for all inconsistencies I thought of compared to the rest of the files and found none. Thats why I am asking about the script here.</p>
1
2016-10-17T09:04:55Z
40,082,395
<p>You only get sublists containing at least two items if there is a comma on every line:</p> <pre><code>fide_rating_file3 = [f.split(",") for f in fide_rating_file2] # ^^^^^^^^^^^^ </code></pre> <p>You may test for the presence of commas before splitting, or clean out lines without commas in the pre-processing stage:</p> <pre><code>fide_rating_file = [f.strip().split(",") for f in fide_csv_rating_file if ',' in f] fide_ratings = {f[0]: f[1] for f in fide_rating_file} </code></pre> <p>On another note, it would be better if you actually used the <a href="https://docs.python.org/2/library/csv.html" rel="nofollow"><code>csv</code></a> module from the standard library for parsing your file content; see the sketch below.</p>
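<p>A minimal sketch of that approach, replacing the manual parsing and skipping rows that do not have two fields (it assumes <code>fide_csv_rating_file</code> is a freshly opened file object):</p>
<pre><code>import csv

reader = csv.reader(fide_csv_rating_file)
fide_ratings = {row[0]: row[1] for row in reader if len(row) &gt;= 2}
</code></pre>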
0
2016-10-17T09:08:21Z
[ "python" ]
IndexError: list index out of range when composing dict from 2 column csv
40,082,321
<p>My script that crunches numbers stored in csv gets the numbers into a dict from the csv like this:</p> <pre><code>fide_rating_file = fide_csv_rating_file.read() fide_rating_file = fide_rating_file.split("\n") fide_rating_file2 = [f for f in fide_rating_file if len(f) &gt; 0] fide_rating_file3 = [f.split(",") for f in fide_rating_file2] fide_ratings = {f[0]: f[1] for f in fide_rating_file3} </code></pre> <p>This is probably not a pythonic way to work with csv (suggestions are welcome), I succesfully ran the script on tens of csv files. Now I am getting a traceback telling me that:</p> <pre><code>File "...script.py", line 76, in script fide_ratings = {f[0]: f[1] for f in fide_rating_file3} File "...script.py", line 76, in &lt;dictcomp&gt; fide_ratings = {f[0]: f[1] for f in fide_rating_file3} IndexError: list index out of range </code></pre> <p>The csv looks like this (this is just an excerpt, is way too long to post here):</p> <pre><code>1701991,2383 1407589,2188 1401815,2451 1411802,1913 1406248,2068 504599,2134 2252465,2099 </code></pre> <p>The fact that it ran without errors on other files would suggest that this particular csv is corrupted. I checked for all inconsistencies I thought of compared to the rest of the files and found none. Thats why I am asking about the script here.</p>
1
2016-10-17T09:04:55Z
40,085,105
<p>Per everyone's advice, I used the standard library csv module:</p> <pre><code>dict(csv.reader(fide_csv_rating_file)) </code></pre> <p>The following error traceback pointed me to the incorrect lines; I just removed them with a regex find and replace and then successfully created the dict.</p> <p>Thanks guys.</p>
0
2016-10-17T11:23:05Z
[ "python" ]
Weird namespace behaviour
40,082,617
<p>When trying to run the following code:</p> <pre><code>i = 0 def truc(): print (i) if (False): i = 0 truc() </code></pre> <p>it yields an UnboundLocalError, but</p> <pre><code>i = 0 def truc(): print (i) #if (False): i = 0 truc() </code></pre> <p>doesn't.</p> <p>Is that a wanted behaviour ?</p> <p>Is there a way to modify the value of a variable without creating a new one ? I could use a dict of one element. It works but it seems ugly:</p> <pre><code>i = {0 : 0} def truc(): print (i[0]) if (False): i[0] = 0 truc() </code></pre> <p>Isn't it a better solution ?</p>
0
2016-10-17T09:20:06Z
40,082,728
<pre><code>i = 0 def truc(): global i print (i) if (False): i = 0 truc() </code></pre> <p>To assign to the outer-scope variable <code>i</code> from inside the function, it should be declared as global.</p>
-1
2016-10-17T09:24:57Z
[ "python" ]
Weird namespace behaviour
40,082,617
<p>When trying to run the following code:</p> <pre><code>i = 0 def truc(): print (i) if (False): i = 0 truc() </code></pre> <p>it yields an UnboundLocalError, but</p> <pre><code>i = 0 def truc(): print (i) #if (False): i = 0 truc() </code></pre> <p>doesn't.</p> <p>Is that a wanted behaviour ?</p> <p>Is there a way to modify the value of a variable without creating a new one ? I could use a dict of one element. It works but it seems ugly:</p> <pre><code>i = {0 : 0} def truc(): print (i[0]) if (False): i[0] = 0 truc() </code></pre> <p>Isn't it a better solution ?</p>
0
2016-10-17T09:20:06Z
40,082,747
<p>just add</p> <pre><code>global i </code></pre> <p>at the beginning of the method <code>truc()</code> to declare that <code>i</code> is a global variable</p> <pre><code>def truc(): global i if (False): i = 0 </code></pre> <p>Take a look at this <a href="https://docs.python.org/3/faq/programming.html#why-am-i-getting-an-unboundlocalerror-when-the-variable-has-a-value" rel="nofollow">topic in Python's FAQ</a> for more information.</p>
0
2016-10-17T09:25:48Z
[ "python" ]
Weird namespace behaviour
40,082,617
<p>When trying to run the following code:</p> <pre><code>i = 0 def truc(): print (i) if (False): i = 0 truc() </code></pre> <p>it yields an UnboundLocalError, but</p> <pre><code>i = 0 def truc(): print (i) #if (False): i = 0 truc() </code></pre> <p>doesn't.</p> <p>Is that a wanted behaviour ?</p> <p>Is there a way to modify the value of a variable without creating a new one ? I could use a dict of one element. It works but it seems ugly:</p> <pre><code>i = {0 : 0} def truc(): print (i[0]) if (False): i[0] = 0 truc() </code></pre> <p>Isn't it a better solution ?</p>
0
2016-10-17T09:20:06Z
40,083,039
<p>You'll have to add <code>global i</code> to the function.</p> <pre><code>i = 0 def truc(): global i if (False): i = 0 </code></pre> <p>Other ways to handle this problem are:</p> <p>Capitalizing <code>i</code>, which by convention marks it as a module-level constant that the function only reads and never reassigns; that doesn't seem appropriate in your case.</p> <p>Taking <code>i</code> as an argument (and returning the new value). This makes the code less messy and easier to debug later; see the sketch below.</p>
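<p>A small sketch of the argument-passing variant, purely for illustration (it keeps the names from the question):</p>
<pre><code>i = 0

def truc(i):
    print(i)
    if False:
        i = 0        # only rebinds the local parameter
    return i

i = truc(i)          # keep whatever value the function produced
</code></pre>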
-1
2016-10-17T09:40:57Z
[ "python" ]
PDB BioPython- extracting the coordinates
40,082,685
<p>I am new to Python, is there any function in BioPython to calculate the vector of an atom given a PDB file, passing the coordinates as its input?</p> <p>[OR]</p> <p>Is there a BioPython function to extract the coordinates separately from a PDB file?</p>
0
2016-10-17T09:23:18Z
40,095,551
<p>There is such a method for atoms, and it's called <a href="http://biopython.org/DIST/docs/api/Bio.PDB.Atom.Atom-class.html#get_vector" rel="nofollow">get_vector()</a>.</p> <pre><code>from Bio.PDB import PDBParser p = PDBParser() s = p.get_structure("4K5Y", "4K5Y.pdb") for chains in s: for chain in chains: for residue in chain: for atom in residue: print(atom.get_vector()) </code></pre> <p>After that, you have some methods available for each <code>Vector</code> object documented <a href="http://biopython.org/DIST/docs/api/Bio.PDB.Vector%27.Vector-class.html" rel="nofollow">here</a>.</p>
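<p>If you only need the raw coordinates rather than <code>Vector</code> objects, the <code>Atom</code> class also has <code>get_coord()</code>, which returns a NumPy array; a short sketch using the structure <code>s</code> from above:</p>
<pre><code>for atom in s.get_atoms():        # convenience iterator over all atoms
    x, y, z = atom.get_coord()    # plain array of floats
    print(x, y, z)
</code></pre>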
0
2016-10-17T20:59:17Z
[ "python", "biopython" ]
Installing python software in virtual environment gives 'Permission denied' error
40,082,688
<p>I am trying to install a piece of python software on our server(<a href="http://integronfinder.readthedocs.io/en/v1.5/" rel="nofollow">http://integronfinder.readthedocs.io/en/v1.5/</a>). However, I am not the server administrator and cannot run the command under <code>sudo</code>, as I get a 'permission denied' error when I try it. I tried circumventing the problem through creating a virtual environment and installing the program there, but I still get the same error!</p> <pre><code>(my_root) [user1@server Integron_Finder-1.5]$ python setup.py install running install running build running build_scripts changing mode of build/scripts-2.7/integron_finder from 664 to 775 running install_scripts moving build/scripts-2.7/integron_finder.tmp -&gt; build/scripts-2.7/integron_finder copying build/scripts-2.7/integron_finder -&gt; /home/user1/.conda/envs/my_root/bin changing mode of /home/user1/.conda/envs/my_root/bin/integron_finder to 775 running install_data creating /usr/share/integron_finder error: could not create '/usr/share/integron_finder': Permission denied </code></pre> <p>Installing the software in a virtual environment is also what the developers suggest for users that do not have administrator rights. Can someone tell me what I am doing wrong and how I could try to fix it?</p>
0
2016-10-17T09:23:26Z
40,082,894
<p>There isn't much you can do other than trying to get sudo permission in some way or another.</p>
0
2016-10-17T09:33:06Z
[ "python", "install", "virtualenv", "permission-denied" ]
Complex pivoting in pandas
40,082,726
<p>I have a dataframe like:</p> <pre><code> In [4]: df Out[4]: A B C D E F G 0 apple orange 10 20 cat rat 10 1 apple orange 10 20 cat rat 20 2 grapes banana 22 34 dog frog 34 3 grapes banana 22 34 dog frog 40 4 grapes banana 22 34 dog frog 67 5 kiwi avocado 90 89 ant fox 76 6 apple orange 10 20 cat rat 10 7 cherry date 56 91 tiger lion 65 </code></pre> <p>My desired output is like:</p> <pre><code>In [3]: df Out[3]: A B C D E F G_1 G_2 G_3 0 apple orange 10 20 cat rat 10 20 10 1 grapes banana 22 34 dog frog 34 40 67 2 kiwi avocado 90 89 ant fox 76 0 0 3 cherry date 56 91 tiger lion 65 0 0 </code></pre> <p>I'm confused and tried a lot with <code>pivot_table</code> but could not figure how to add additional columns depending on values. Thanks for your reply.<br> <strong>EDIT</strong> I found a method using groupby but it works only if it is unique:</p> <pre><code>df.groupby(['A','B','C','D','E','F'])['G'].unique() Out[26]: A B C D E F apple orange 10 20 cat rat [10, 20] cherry date 56 91 tiger lion [65] grapes banana 22 34 dog frog [34, 40, 67] kiwi avocado 90 89 ant fox [76] </code></pre> <p>then I will have to split the list into separate columns.</p> <p>Suppose if I have two duplicated rows then still I would like to add the value in <code>G</code> as separate column as shown in desired output.How can I include the duplicated values in separate columns.</p>
1
2016-10-17T09:24:54Z
40,082,961
<p>Here's one way</p> <pre><code>In [237]: dff = (df.groupby(['A','B','C','D','E','F'])['G'].unique() .....: .apply(pd.Series, 1).fillna(0)) In [238]: dff.columns = ['G_%s' % (x+1) for x in dff.columns] In [239]: dff Out[239]: G_1 G_2 G_3 A B C D E F apple orange 10 20 cat rat 10.0 20.0 0.0 cherry date 56 91 tiger lion 65.0 0.0 0.0 grapes banana 22 34 dog frog 34.0 40.0 67.0 kiwi avocado 90 89 ant fox 76.0 0.0 0.0 </code></pre>
2
2016-10-17T09:36:32Z
[ "python", "pandas", "numpy" ]
Python - Write to csv from a particular column number
40,082,743
<p>I have a csv file Temp.csv that I am copying over to another csv file Final.csv using the following piece of code.</p> <pre><code>dirname = os.path.dirname(os.path.abspath(__file__)) csvfilename = os.path.join(dirname, 'Final.csv') tempfile = os.path.join(dirname, 'Temp.csv') with open(csvfilename, 'wb') as output_file: writer = csv.writer(output_file, delimiter=',') writer.writerow(["Title","This"]) writer.writerow([]) with open(tempfile, 'r') as data_file: for line in data_file: line = line.replace('\n', '') row = line.split(",") writer.writerow(row) </code></pre> <p>The last 5 lines of the code are going to write to the Final.csv file from column A. I want them to write to final.csv from column D. Also I have multiple columns in the Temp.csv file. What is the best solution for this? Using python 2.7.10.</p>
1
2016-10-17T09:25:39Z
40,082,868
<p>You can try the following; note that <code>writerow</code> has to be called on the csv <code>writer</code> object, not on the <code>csvfilename</code> string:</p> <pre><code>with open(tempfile, 'r') as data_file: for line in data_file: line = line.replace('\n', '') row = line.split(",") writer.writerow([row[0]+","+row[3]]) </code></pre>
0
2016-10-17T09:31:42Z
[ "python", "csv" ]
Python - Write to csv from a particular column number
40,082,743
<p>I have a csv file Temp.csv that I am copying over to another csv file Final.csv using the following piece of code.</p> <pre><code>dirname = os.path.dirname(os.path.abspath(__file__)) csvfilename = os.path.join(dirname, 'Final.csv') tempfile = os.path.join(dirname, 'Temp.csv') with open(csvfilename, 'wb') as output_file: writer = csv.writer(output_file, delimiter=',') writer.writerow(["Title","This"]) writer.writerow([]) with open(tempfile, 'r') as data_file: for line in data_file: line = line.replace('\n', '') row = line.split(",") writer.writerow(row) </code></pre> <p>The last 5 lines of the code are going to write to the Final.csv file from column A. I want them to write to final.csv from column D. Also I have multiple columns in the Temp.csv file. What is the best solution for this? Using python 2.7.10.</p>
1
2016-10-17T09:25:39Z
40,083,024
<p><strong>EDIT</strong> Answering comment. You just need to prepend 3 empty strings to your <code>row</code></p> <pre><code>dirname = os.path.dirname(os.path.abspath(__file__)) csvfilename = os.path.join(dirname, 'Final.csv') tempfile = os.path.join(dirname, 'Temp.csv') with open(csvfilename, 'wb') as output_file: writer = csv.writer(output_file, delimiter=',') writer.writerow(["Title","This"]) writer.writerow([]) with open(tempfile, 'r') as data_file: for line in data_file: line = line.replace('\n', '') row = line.split(",") # there are better ways to do this but this is the most straight forward new_row = ['', '', ''] new_row.extend(row) writer.writerow(new_row) </code></pre>
1
2016-10-17T09:39:58Z
[ "python", "csv" ]
slicing series of panels
40,082,844
<p>I have a simple dataframe:</p> <pre><code>&gt;&gt;&gt; df = pd.DataFrame(np.random.randint(0,5,(20, 2)), columns=['col1','col2']) &gt;&gt;&gt; df['ind1'] = list('AAAAAABBBBCCCCCCCCCC') &gt;&gt;&gt; df.set_index(['ind1'], inplace=True) &gt;&gt;&gt; df col1 col2 ind1 A 0 4 A 1 2 A 1 0 A 4 1 A 1 3 A 0 0 B 0 4 B 2 0 B 3 1 B 0 3 C 1 3 C 2 1 C 4 0 C 4 0 C 4 1 C 3 0 C 4 4 C 0 2 C 0 2 C 1 2 </code></pre> <p>I am trying to get the rolling correlation coefficient of its two columns:</p> <pre><code>&gt;&gt;&gt; df.groupby(level=0).rolling(3,min_periods=1).corr() ind1 A &lt;class 'pandas.core.panel.Panel'&gt; Dimensions: ... B &lt;class 'pandas.core.panel.Panel'&gt; Dimensions: ... C &lt;class 'pandas.core.panel.Panel'&gt; Dimensions: ... dtype: object </code></pre> <p>The problem is that the result is series of panels:</p> <pre><code>&gt;&gt;&gt; type(df.groupby(level=0).rolling(3,min_periods=1).corr()) pandas.core.series.Series </code></pre> <p>I am able to get desired coefficient for each row separately...</p> <pre><code>&gt;&gt;&gt; df.groupby(level=0).rolling(3,min_periods=1).corr()['A'] &lt;class 'pandas.core.panel.Panel'&gt; Dimensions: 10 (items) x 2 (major_axis) x 2 (minor_axis) Items axis: C to C Major_axis axis: col1 to col2 Minor_axis axis: col1 to col2 &gt;&gt;&gt; df.groupby(level=0).rolling(3,min_periods=1).corr().loc['A'].ix[2] col1 col2 col1 1.000000 -0.866025 col2 -0.866025 1.000000 &gt;&gt;&gt; df.groupby(level=0).rolling(3,min_periods=1).corr().loc['A'].ix[2,'col1','col2'] -0.86602540378443849 </code></pre> <p>...but I don't know how to slice the result (series of panels) in order to assign the results as a column to existing dataframe. Something like:</p> <pre><code>df['cor_coeff'] = df.groupby(level=0).rolling(3,min_periods=1).corr()['some slicing'] </code></pre> <p>Any clues? Or a better way to get rolling correlation coefficients?</p>
3
2016-10-17T09:30:24Z
40,090,424
<p>Your problem is that <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.window.Rolling.corr.html" rel="nofollow"><code>.corr()</code></a> is being called without specifying the <code>other</code> argument. Even though your dataframe only has two columns, Pandas doesn't know which correlation you actually want, so it calculates all possible correlations (<code>col1</code> x <code>col1</code>, <code>col1</code> x <code>col2</code>, <code>col2</code> x <code>col1</code>, <code>col2</code> x <code>col2</code>) and gives the results to you in a 2x2 datastructure. If you want to get the results from one correlation, you need to specify the correlation you want by setting the base column and the <code>other</code> column. If you weren't using <code>groupby</code> you'd just do it this way:</p> <pre><code>df['col1'].rolling(min_periods=1, window=3).corr(other=g['col2']) </code></pre> <p>Since you're using <code>groupby</code>, you need to nest it in an <code>apply</code> clause with a lambda function (or you could move it into a separate function if you preferred):</p> <pre><code>df.groupby(level=0).apply(lambda g: g['col1'].rolling(min_periods=1, window=3).corr(other=g['col2'])) </code></pre>
2
2016-10-17T15:35:29Z
[ "python", "pandas", "slice" ]
Error when using sympy's solver on polynomials with complex coefficients (4th deg)
40,082,924
<p>Trying to solve a 4th degree polynomial equation with sympy, I arrived at some difficulties. My code and the equation i'm trying to solve:</p> <pre><code>import sympy as sym from sympy import I sym.init_printing() k = sym.Symbol('k') t, sigma ,k0, L , V = sym.symbols('t, sigma, k0, L,V') x4 = ( -t**2 + 2*I * t / sigma**2 + 1/sigma**4) x3 = ( -2*I * t * k0 / sigma**2 - 2*k0 / sigma**4) x2 = ( L**2 + k0 **2 / sigma **4 + t**2 * V - 2 * I * t * V / sigma**2 -V/sigma**4) x1 = (2*I * V * k0 / sigma**2 + 2*k0 * V / sigma **4) x0 = (2*I*k0*t*V / sigma**2 - k0 **2 *V / sigma**4) expr = x4 * k**4 + x3 * k**3 + x2 * k**2 + x1 * k + x0 expr2 = expr.subs({k0 :2 , sigma : .2 , L : 1, V:1}) sym.solvers.solve(expr2,k) </code></pre> <p>Output:</p> <pre><code>Traceback (most recent call last): File "&lt;ipython-input-4-e1ce7d8c9531&gt;", line 1, in &lt;module&gt; sols = sym.solvers.solve(expr2,k) File "/usr/local/lib/python2.7/dist-packages/sympy/solvers /solvers.py", line 1125, in solve solution = nfloat(solution, exponent=False) File "/usr/local/lib/python2.7/dist-packages/sympy/core/function.py", line 2465, in nfloat return type(expr)([nfloat(a, n, exponent) for a in expr]) File "/usr/local/lib/python2.7/dist-packages/sympy/core/function.py", line 2499, in nfloat lambda x: isinstance(x, Function))) File "/usr/local/lib/python2.7/dist-packages/sympy/core/basic.py", line 1087, in xreplace value, _ = self._xreplace(rule) File "/usr/local/lib/python2.7/dist-packages/sympy/core/basic.py", line 1095, in _xreplace return rule[self], True File "/usr/local/lib/python2.7/dist-packages/sympy/core/rules.py", line 59, in __getitem__ return self._transform(key) File "/usr/local/lib/python2.7/dist-packages/sympy/core/function.py", line 2498, in &lt;lambda&gt; lambda x: x.func(*nfloat(x.args, n, exponent)), File "/usr/local/lib/python2.7/dist-packages/sympy/core/function.py", line 2465, in nfloat return type(expr)([nfloat(a, n, exponent) for a in expr]) File "/usr/local/lib/python2.7/dist-packages/sympy/core/function.py", line 2465, in nfloat return type(expr)([nfloat(a, n, exponent) for a in expr]) TypeError: __new__() takes exactly 3 arguments (2 given) </code></pre> <p>And I really can't make anything out of it. I am not so sure what's causing this, I "tested" this solver for more compact polynomials and it worked well.</p>
1
2016-10-17T09:34:39Z
40,142,028
<p>Looks like you can work around the issue by using <code>solve(expr2, k, rational=False)</code>. </p>
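<p>Applied to the snippet from the question, that would be (same symbols as above):</p>
<pre><code>sols = sym.solvers.solve(expr2, k, rational=False)
print(sols)
</code></pre>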
2
2016-10-19T21:45:10Z
[ "python", "sympy", "solver" ]
MySQL python: Cursor returns the variable name
40,082,968
<p>I saw before if someone ask the same but i didn't see a similar question. </p> <pre><code>def get_session(Var = '', Key=''): conn = connection() cursor = conn.cursor() print "Var: "+Var cursor.execute("Select %(var)s FROM users WHERE sessions = %(key)s",{'var':Var,'key':Key}) if(cursor.rowcount == 1): name = cursor.fetchone() print name return name </code></pre> <p>The output is like:</p> <pre><code>get_session(Var = 'NAME', Key='asdf123') </code></pre> <blockquote> <p>Var:NAME</p> <p>('NAME',)</p> </blockquote> <pre><code>+----+----------+----------+---------------------------+ | ID | NAME | PASS | sessions | +----+----------+----------+---------------------------+ | 1 | Potato | test | asdf123 | | 2 | asdf123 | test2 | asdasd | +----+----------+----------+---------------------------+ </code></pre> <p>The correct output must be:</p> <pre><code>get_session(Var = 'NAME', Key='asdf123') </code></pre> <blockquote> <p>Var:NAME</p> <p>('Potato',)</p> </blockquote> <p>or</p> <pre><code>get_session(Var = 'PASS', Key='asdf123') </code></pre> <blockquote> <p>Var:PASS</p> <p>('test',)</p> </blockquote> <p>I supposed that the main problem is that mysql detects the name with quotes but i don't know how to fix it. </p>
0
2016-10-17T09:36:49Z
40,083,105
<p>You can try:</p> <pre><code>cursor.execute("Select %s FROM users WHERE sessions = '%s'" %("Name","asdf123")) </code></pre> <p>So the query becomes:</p> <pre><code>Select Name FROM users WHERE sessions = 'asdf123' </code></pre> <p>You can use variables as well, e.g.:</p> <pre><code>name_of_col = "Name" session_name = "asdf123" cursor.execute("Select %s FROM users WHERE sessions = '%s'" %(name_of_col,session_name )) </code></pre>
0
2016-10-17T09:45:00Z
[ "python", "mysql", "string", "cursor" ]
MySQL python: Cursor returns the variable name
40,082,968
<p>I saw before if someone ask the same but i didn't see a similar question. </p> <pre><code>def get_session(Var = '', Key=''): conn = connection() cursor = conn.cursor() print "Var: "+Var cursor.execute("Select %(var)s FROM users WHERE sessions = %(key)s",{'var':Var,'key':Key}) if(cursor.rowcount == 1): name = cursor.fetchone() print name return name </code></pre> <p>The output is like:</p> <pre><code>get_session(Var = 'NAME', Key='asdf123') </code></pre> <blockquote> <p>Var:NAME</p> <p>('NAME',)</p> </blockquote> <pre><code>+----+----------+----------+---------------------------+ | ID | NAME | PASS | sessions | +----+----------+----------+---------------------------+ | 1 | Potato | test | asdf123 | | 2 | asdf123 | test2 | asdasd | +----+----------+----------+---------------------------+ </code></pre> <p>The correct output must be:</p> <pre><code>get_session(Var = 'NAME', Key='asdf123') </code></pre> <blockquote> <p>Var:NAME</p> <p>('Potato',)</p> </blockquote> <p>or</p> <pre><code>get_session(Var = 'PASS', Key='asdf123') </code></pre> <blockquote> <p>Var:PASS</p> <p>('test',)</p> </blockquote> <p>I supposed that the main problem is that mysql detects the name with quotes but i don't know how to fix it. </p>
0
2016-10-17T09:36:49Z
40,083,190
<p>The problem was the quotes that the parameter substitution put in automatically around the column name, so the column name is concatenated into the query string while the session value stays a bound parameter:</p> <pre><code>def get_session(Var = '', Key=''): conn = connection() cursor = conn.cursor() sql = "Select "+ Var+" FROM users WHERE sessions = %(key)s" print "Var: "+Var cursor.execute(sql,{'key':Key}) print str(cursor.rowcount) if(cursor.rowcount == 1): name = cursor.fetchone() print name else: name = None cursor.close() conn.close() return name </code></pre> <p>Thanks to all for responding.</p>
0
2016-10-17T09:48:20Z
[ "python", "mysql", "string", "cursor" ]
Nesting a string inside a list n times ie list of a list of a list
40,083,007
<pre><code>def nest(x, n): a = [] for i in range(n): a.append([x]) return a print nest("hello", 5) </code></pre> <p>This gives an output</p> <pre><code>[['hello'], ['hello'], ['hello'], ['hello'], ['hello']] </code></pre> <p>The desired output is </p> <pre><code>[[[[["hello"]]]]] </code></pre>
4
2016-10-17T09:39:05Z
40,083,051
<p>instead of appending you sould wrap <code>x</code> and call recursively the method till call number is lesser than <code>n</code></p> <pre><code>def nest(x, n): if n &lt;= 0: return x else: return [nest(x, n-1)] </code></pre>
2
2016-10-17T09:41:54Z
[ "python", "list", "python-2.7", "nested" ]
Nesting a string inside a list n times ie list of a list of a list
40,083,007
<pre><code>def nest(x, n): a = [] for i in range(n): a.append([x]) return a print nest("hello", 5) </code></pre> <p>This gives an output</p> <pre><code>[['hello'], ['hello'], ['hello'], ['hello'], ['hello']] </code></pre> <p>The desired output is </p> <pre><code>[[[[["hello"]]]]] </code></pre>
4
2016-10-17T09:39:05Z
40,083,055
<p>Every turn through the loop you are adding to the list. You want to be further nesting the list, not adding more stuff onto it. You could do it something like this:</p> <pre><code>def nest(x, n): for _ in range(n): x = [x] return x </code></pre> <p>Each turn through the loop, <code>x</code> has another list wrapped around it.</p>
5
2016-10-17T09:42:23Z
[ "python", "list", "python-2.7", "nested" ]
Nesting a string inside a list n times ie list of a list of a list
40,083,007
<pre><code>def nest(x, n): a = [] for i in range(n): a.append([x]) return a print nest("hello", 5) </code></pre> <p>This gives an output</p> <pre><code>[['hello'], ['hello'], ['hello'], ['hello'], ['hello']] </code></pre> <p>The desired output is </p> <pre><code>[[[[["hello"]]]]] </code></pre>
4
2016-10-17T09:39:05Z
40,083,670
<p>Here is a pythonic recursion approach:</p> <pre><code>In [8]: def nest(x, n): ...: return nest([x], n-1) if n else x </code></pre> <p>DEMO:</p> <pre><code>In [9]: nest(3, 4) Out[9]: [[[[3]]]] In [11]: nest("Stackoverflow", 7) Out[11]: [[[[[[['Stackoverflow']]]]]]] </code></pre>
1
2016-10-17T10:11:54Z
[ "python", "list", "python-2.7", "nested" ]
Adding a table after a Python matplotlib basemap
40,083,108
<p>I am creating a map using matplotlib basemap and I want to add a table underneath it (say 4 colums, 4 rows) with text in the cells (the text and table is not linked in any way to the basemap). I have not been able to do so with subplots. This is saved as a 1 page pdf. Any suggestions?</p> <pre><code>from mpl_toolkits.basemap import Basemap import matplotlib matplotlib.use('Agg') import matplotlib.pyplot as plt fig = plt.figure(figsize=(11.69*2, 8.27*2), dpi=120) fig.add_axes([0.1,0.1,0.8,0.8]) map = Basemap(projection='merc', lat_0=57, lon_0=-135, resolution = 'l', area_thresh = 10000, llcrnrlon=-110, llcrnrlat=-50, urcrnrlon=150, urcrnrlat=60) # A lot of map calls drawing the map plt.savefig('map.pdf', bbox_inches='tight') </code></pre>
0
2016-10-17T09:45:08Z
40,117,650
<p>Create two subplots and add a table to the second axes object; see the documentation for <code>Axes.table()</code> and the sketch further below.</p> <p><a href="http://matplotlib.org/api/axes_api.html?highlight=table#matplotlib.axes.Axes.table" rel="nofollow">http://matplotlib.org/api/axes_api.html?highlight=table#matplotlib.axes.Axes.table</a></p> <blockquote> <p>table(**kwargs) Add a table to the current axes.</p> <p>Call signature:</p> <p>table(cellText=None, cellColours=None, cellLoc='right', colWidths=None, rowLabels=None, rowColours=None, rowLoc='left', colLabels=None, colColours=None, colLoc='center', loc='bottom', bbox=None): Returns a matplotlib.table.Table instance. </p> <p>For finer grained control over tables, use the Table class and add it to the axes with add_table().</p> </blockquote>
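<p>A rough sketch of that idea; the table contents are placeholders and most of the map drawing calls are left out, so treat it as a starting point rather than a tested solution:</p>
<pre><code>import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap

fig, (ax_map, ax_tab) = plt.subplots(2, 1, figsize=(11.69*2, 8.27*2),
                                     gridspec_kw={'height_ratios': [3, 1]})

# draw the map into the first axes
m = Basemap(projection='merc', lat_0=57, lon_0=-135, resolution='l',
            area_thresh=10000, llcrnrlon=-110, llcrnrlat=-50,
            urcrnrlon=150, urcrnrlat=60, ax=ax_map)
m.drawcoastlines()

# put a 4x4 text table into the second axes
cell_text = [['r%dc%d' % (r, c) for c in range(4)] for r in range(4)]
ax_tab.axis('off')
ax_tab.table(cellText=cell_text, colLabels=['A', 'B', 'C', 'D'], loc='center')

plt.savefig('map.pdf', bbox_inches='tight')
</code></pre>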
0
2016-10-18T20:38:19Z
[ "python", "matplotlib", "matplotlib-basemap" ]
Consecutive elements in a Sparse matrix row
40,083,118
<p>I am working on a sparse matrix stored in COO format. What would be the fastest way to get the number of consecutive elements per each row.</p> <p>For example consider the following matrix:</p> <pre><code>a = [[0,1,2,0],[1,0,0,2],[0,0,0,0],[1,0,1,0]] </code></pre> <p>Its COO representation would be </p> <pre><code> (0, 1) 1 (0, 2) 2 (1, 0) 1 (1, 3) 2 (3, 0) 1 (3, 2) 1 </code></pre> <p>I need the result to be <code>[1,2,0,2]</code>. The first row contains two Non-zero elements that lies nearby. Hence its a group or set. In the second row we have two non-zero elements,but they dont lie nearby, and hence we can say that it forms two groups. The third row there are no non-zeroes and hence no groups. The fourth row has again two non-zeroes but separated by zeroes nad hence we consider as two groups. It would be like the number of clusters per row. Iterating through the rows are an option but only if there is no faster solution. Any help in this regard is appreciated. </p> <p>Another simple example: consider the following row:</p> <pre><code>[1,2,3,0,0,0,2,0,0,8,7,6,0,0] </code></pre> <p>The above row should return <code>[3]</code> sine there are three groups of non-zeroes getting separated by zeroes.</p>
0
2016-10-17T09:45:26Z
40,091,662
<p>Convert it to a dense array, and apply your logic row by row.</p> <ul> <li>you want the number of groups per row</li> <li>zeros count when defining groups</li> <li>row iteration is faster with arrays</li> </ul> <p>In <code>coo</code> format your matrix looks like:</p> <pre><code>In [623]: M=sparse.coo_matrix(a) In [624]: M.data Out[624]: array([1, 2, 1, 2, 1, 1]) In [625]: M.row Out[625]: array([0, 0, 1, 1, 3, 3], dtype=int32) In [626]: M.col Out[626]: array([1, 2, 0, 3, 0, 2], dtype=int32) </code></pre> <p>This format does not implement row indexing; <code>csr</code> and <code>lil</code> do</p> <pre><code>In [627]: M.tolil().data Out[627]: array([[1, 2], [1, 2], [], [1, 1]], dtype=object) In [628]: M.tolil().rows Out[628]: array([[1, 2], [0, 3], [], [0, 2]], dtype=object) </code></pre> <p>So the sparse information for the 1st row is a list of nonzero data values, <code>[1,2]</code>, and list of their column numbers, <code>[1,2]</code>. Compare that with the row of the dense array, <code>[0, 1, 2, 0]</code>. Which is easier to analyze?</p> <p>Your first task is to write a function that analyzes one row. I haven't studied your logic enough to say whether the dense form is better than the sparse one or not. It is easy to get the column information from the dense form with <code>M.A[0,:].nonzero()</code>.</p> <p>In your last example, I can get the nonzero indices:</p> <pre><code>In [631]: np.nonzero([1,2,3,0,0,0,2,0,0,8,7,6,0,0]) Out[631]: (array([ 0, 1, 2, 6, 9, 10, 11], dtype=int32),) In [632]: idx=np.nonzero([1,2,3,0,0,0,2,0,0,8,7,6,0,0])[0] In [633]: idx Out[633]: array([ 0, 1, 2, 6, 9, 10, 11], dtype=int32) In [634]: np.diff(idx) Out[634]: array([1, 1, 4, 3, 1, 1], dtype=int32) </code></pre> <p>We may be able to get the desired count from the number of <code>diff</code> values <code>&gt;1</code>, though I'd have to look at more examples to define the details.</p> <p>Extension of the analysis to multiple rows depends on first thoroughly understanding the single row case.</p>
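<p>Putting that idea together, a sketch of counting the groups per row from the dense form; it reproduces <code>[1, 2, 0, 2]</code> for the example matrix:</p>
<pre><code>import numpy as np

def groups_in_row(row):
    idx = np.flatnonzero(row)
    if idx.size == 0:
        return 0
    # every gap larger than 1 between consecutive nonzero columns starts a new group
    return int(np.count_nonzero(np.diff(idx) &gt; 1)) + 1

counts = [groups_in_row(row) for row in M.toarray()]
</code></pre>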
1
2016-10-17T16:47:07Z
[ "python", "python-2.7", "matrix", "scipy", "sparse-matrix" ]
Where is the image being uploaded?
40,083,181
<p>I have created a custom User model in django:</p> <pre><code>class CustomUser(AbstractBaseUser, PermissionsMixin): email = models.EmailField(_('email address'), max_length=254, unique=True) first_name = models.CharField(_('first name'), max_length=30) image = models.ImageField(_('profile image'), upload_to='userimages/', default = 'user_default.jpeg') </code></pre> <p>Now I have created a serializer for django-rest-framework to register a new user:</p> <pre><code>class UserRegistrationSerializer(serializers.ModelSerializer): password = serializers.CharField(style={'input_type': 'password'}, write_only=True, validators=settings.get('PASSWORD_VALIDATORS')) image = serializers.ImageField(max_length=None, use_url=True) class Meta: model = User fields = tuple(User.REQUIRED_FIELDS) + ( User.USERNAME_FIELD, User._meta.pk.name, 'image', 'password', ) def create(self, validated_data): if settings.get('SEND_ACTIVATION_EMAIL'): with transaction.atomic(): user = User.objects.create_user(**validated_data) user.is_active = False user.save(update_fields=['is_active']) else: user = User.objects.create_user(**validated_data) return user </code></pre> <p>My media settings are:</p> <pre><code>import os BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) MEDIA_ROOT = os.path.join(BASE_DIR, 'media') MEDIA_URL = '/media/' </code></pre> <p>My 'urls.py':</p> <pre><code>urlpatterns = [ url(r'^admin/', admin.site.urls), url(r'^', include('app.urls')), ]+ static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT) </code></pre> <p>Now when I 'POST' on the /register/ endpoint, the User gets created and I am able to see the image in the browser from the url in admin panel. But when I look into my project directory, the image is just not there. I am not able to figure out the image location. I am pretty sure that the image gets uploaded in my computer as I tried uploading it from some other device on LAN and was able to access it (from the browser)after disconnecting from the other computer.</p> <p>Please help.</p>
-1
2016-10-17T09:48:00Z
40,083,377
<p>You're correct that user-uploaded files should be stored in MEDIA_ROOT.</p> <p>You can check this location by firing up the Django shell and inspecting the setting:</p> <pre><code>$ python manage.py shell &gt;&gt;&gt; from django.conf import settings &gt;&gt;&gt; settings.MEDIA_ROOT &lt;&lt; snip: the media root location &gt;&gt; </code></pre>
0
2016-10-17T09:56:32Z
[ "python", "django", "image", "django-rest-framework" ]
Where is the image being uploaded?
40,083,181
<p>I have created a custom User model in django:</p> <pre><code>class CustomUser(AbstractBaseUser, PermissionsMixin): email = models.EmailField(_('email address'), max_length=254, unique=True) first_name = models.CharField(_('first name'), max_length=30) image = models.ImageField(_('profile image'), upload_to='userimages/', default = 'user_default.jpeg') </code></pre> <p>Now I have created a serializer for django-rest-framework to register a new user:</p> <pre><code>class UserRegistrationSerializer(serializers.ModelSerializer): password = serializers.CharField(style={'input_type': 'password'}, write_only=True, validators=settings.get('PASSWORD_VALIDATORS')) image = serializers.ImageField(max_length=None, use_url=True) class Meta: model = User fields = tuple(User.REQUIRED_FIELDS) + ( User.USERNAME_FIELD, User._meta.pk.name, 'image', 'password', ) def create(self, validated_data): if settings.get('SEND_ACTIVATION_EMAIL'): with transaction.atomic(): user = User.objects.create_user(**validated_data) user.is_active = False user.save(update_fields=['is_active']) else: user = User.objects.create_user(**validated_data) return user </code></pre> <p>My media settings are:</p> <pre><code>import os BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) MEDIA_ROOT = os.path.join(BASE_DIR, 'media') MEDIA_URL = '/media/' </code></pre> <p>My 'urls.py':</p> <pre><code>urlpatterns = [ url(r'^admin/', admin.site.urls), url(r'^', include('app.urls')), ]+ static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT) </code></pre> <p>Now when I 'POST' on the /register/ endpoint, the User gets created and I am able to see the image in the browser from the url in admin panel. But when I look into my project directory, the image is just not there. I am not able to figure out the image location. I am pretty sure that the image gets uploaded in my computer as I tried uploading it from some other device on LAN and was able to access it (from the browser)after disconnecting from the other computer.</p> <p>Please help.</p>
-1
2016-10-17T09:48:00Z
40,084,118
<p>If what you want is just to know your folder's location, you can run the following command (on Linux-based systems):</p> <pre><code>sudo find / -name userimages </code></pre>
1
2016-10-17T10:34:03Z
[ "python", "django", "image", "django-rest-framework" ]
Replace comma with dot Pandas
40,083,266
<p>Given the following array, I want to replace commas with dots:</p> <pre><code>array(['0,140711', '0,140711', '0,0999', '0,0999', '0,001', '0,001', '0,140711', '0,140711', '0,140711', '0,140711', '0,140711', '0,140711', 0L, 0L, 0L, 0L, '0,140711', '0,140711', '0,140711', '0,140711', '0,140711', '0,1125688', '0,140711', '0,1125688', '0,140711', '0,1125688', '0,140711', '0,1125688', '0,140711', '0,140711', '0,140711', '0,140711', '0,140711', '0,140711', '0,140711', '0,140711', '0,140711', '0,140711', '0,140711', '0,140711', '0,140711', '0,140711', '0,140711', '0,140711', '0,140711', '0,140711', '0,140711', '0,140711'], dtype=object) </code></pre> <p>I've been trying different ways but I can`t figure out how to do this. Also, I have as a DataFrame but can't apply the function:</p> <pre><code>df 1-8 1-7 H0 0,140711 0,140711 H1 0,0999 0,0999 H2 0,001 0,001 H3 0,140711 0,140711 H4 0,140711 0,140711 H5 0,140711 0,140711 H6 0 0 H7 0 0 H8 0,140711 0,140711 H9 0,140711 0,140711 H10 0,140711 0,1125688 H11 0,140711 0,1125688 H12 0,140711 0,1125688 H13 0,140711 0,1125688 H14 0,140711 0,140711 H15 0,140711 0,140711 H16 0,140711 0,140711 H17 0,140711 0,140711 H18 0,140711 0,140711 H19 0,140711 0,140711 H20 0,140711 0,140711 H21 0,140711 0,140711 H22 0,140711 0,140711 H23 0,140711 0,140711 df.applymap(lambda x: str(x.replace(',','.'))) </code></pre> <p>Any suggestion? Thanks</p>
1
2016-10-17T09:51:27Z
40,083,822
<p>You need to assign the result of your operation back, as the operation isn't in place; besides, you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.apply.html" rel="nofollow"><code>apply</code></a> or <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.stack.html" rel="nofollow"><code>stack</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.unstack.html" rel="nofollow"><code>unstack</code></a> with the vectorised <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.replace.html#pandas.Series.str.replace" rel="nofollow"><code>str.replace</code></a> to do this more quickly:</p> <pre><code>In [5]: df.apply(lambda x: x.str.replace(',','.')) Out[5]: 1-8 1-7 H0 0.140711 0.140711 H1 0.0999 0.0999 H2 0.001 0.001 H3 0.140711 0.140711 H4 0.140711 0.140711 H5 0.140711 0.140711 H6 0 0 H7 0 0 H8 0.140711 0.140711 H9 0.140711 0.140711 H10 0.140711 0.1125688 H11 0.140711 0.1125688 H12 0.140711 0.1125688 H13 0.140711 0.1125688 H14 0.140711 0.140711 H15 0.140711 0.140711 H16 0.140711 0.140711 H17 0.140711 0.140711 H18 0.140711 0.140711 H19 0.140711 0.140711 H20 0.140711 0.140711 H21 0.140711 0.140711 H22 0.140711 0.140711 H23 0.140711 0.140711 In [4]: df.stack().str.replace(',','.').unstack() Out[4]: 1-8 1-7 H0 0.140711 0.140711 H1 0.0999 0.0999 H2 0.001 0.001 H3 0.140711 0.140711 H4 0.140711 0.140711 H5 0.140711 0.140711 H6 0 0 H7 0 0 H8 0.140711 0.140711 H9 0.140711 0.140711 H10 0.140711 0.1125688 H11 0.140711 0.1125688 H12 0.140711 0.1125688 H13 0.140711 0.1125688 H14 0.140711 0.140711 H15 0.140711 0.140711 H16 0.140711 0.140711 H17 0.140711 0.140711 H18 0.140711 0.140711 H19 0.140711 0.140711 H20 0.140711 0.140711 H21 0.140711 0.140711 H22 0.140711 0.140711 H23 0.140711 0.140711 </code></pre> <p>The key thing here is to assign the result back:</p> <p><code>df = df.stack().str.replace(',','.').unstack()</code></p>
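<p>If the end goal is to work with these as numbers (which the comma-versus-dot issue suggests, though the question doesn't say so, and assuming every cell is a string as in the output above), you can chain a conversion onto the same expression:</p>
<pre><code>df = df.stack().str.replace(',', '.').unstack().astype(float)
</code></pre>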
1
2016-10-17T10:19:55Z
[ "python", "pandas" ]
python. How to redirect Maya history?
40,083,308
<p>All Maya script logs and errors printed in history tab. This is output from all commands and python scripts.</p> <p>For better debugging scripts I want all the logs were sent somewhere on the server. How to intercept and send the output to your script. Then I will do all that is necessary, and the output is either a remote console, or somewhere in the files on the server. </p> <p>The task to intercept the output. How to do it?</p> <p><a href="https://i.stack.imgur.com/uvjsl.jpg" rel="nofollow"><img src="https://i.stack.imgur.com/uvjsl.jpg" alt="enter image description here"></a></p>
0
2016-10-17T09:53:13Z
40,083,783
<p>It sounds like you need a real-time error tracker like <a href="https://sentry.io/welcome/" rel="nofollow">Sentry</a>. Sentry provides <a href="https://docs.sentry.io/clients/python/integrations/logging/" rel="nofollow">logging</a> integrations made exactly for this purpose: they ship client-side logs to a server with richer error/debug handling.</p> <p>Here is an example of <a href="http://danostrov.com/2013/02/06/rerouting-the-maya-script-editor-to-a-terminal-and-other-places/" rel="nofollow">rerouting</a> the Maya Script Editor to a terminal.</p>
3
2016-10-17T10:17:55Z
[ "python", "debugging", "logging", "output", "maya" ]
python. How to redirect Maya history?
40,083,308
<p>All Maya script logs and errors printed in history tab. This is output from all commands and python scripts.</p> <p>For better debugging scripts I want all the logs were sent somewhere on the server. How to intercept and send the output to your script. Then I will do all that is necessary, and the output is either a remote console, or somewhere in the files on the server. </p> <p>The task to intercept the output. How to do it?</p> <p><a href="https://i.stack.imgur.com/uvjsl.jpg" rel="nofollow"><img src="https://i.stack.imgur.com/uvjsl.jpg" alt="enter image description here"></a></p>
0
2016-10-17T09:53:13Z
40,092,916
<p>You can also redirect Script Editor history using Maya's <code>scriptEditorInfo</code> command found <a href="http://help.autodesk.com/cloudhelp/2017/ENU/Maya-Tech-Docs/CommandsPython/scriptEditorInfo.html" rel="nofollow">here</a>:</p> <p>An example usage of this would be something like:</p> <pre><code>import maya.cmds as cmds outfile = r'/path/to/your/outfile.txt' # begin output capture cmds.scriptEditorInfo(historyFilename=outfile, writeHistory=True) # stop output capture cmds.scriptEditorInfo(writeHistory=False) </code></pre> <p>There is also <code>cmdFileOutput</code> which you can either call interactively or enable/disable via a registry entry to <code>MAYA_CMD_FILE_OUTPUT</code>, documentation <a href="http://help.autodesk.com/cloudhelp/2017/ENU/Maya-Tech-Docs/CommandsPython/cmdFileOutput.html" rel="nofollow">here</a></p> <p>Lastly, you can augment Maya start using the <code>-log</code> flag to write the Output Window text to another location. With this however, you do not get the Script Editor output, but could be all you desire given what it is you are trying to log.</p>
2
2016-10-17T18:06:17Z
[ "python", "debugging", "logging", "output", "maya" ]
How exactly is SELECT blah, blah FROM table stored in a python file
40,083,401
<pre><code>query_string = 'SELECT item_id, item_name, description, item_price FROM valve_list' </code></pre> <p>* valve_list is a database table.</p> <p>The code above is in a python file, how exactly does it stores the data? is it a list containing a list for each item in my database table? like this?</p> <pre><code>query_string = [[item_id, item_name, description, item_price], [item_id, item_name, description, item_price], [item_id, item_name, description, item_price], [item_id, item_name, description, item_price]] </code></pre> <p>How exactly is this stored?</p>
-2
2016-10-17T09:57:46Z
40,083,601
<p>At the moment "query_string" is just a plain Python string; nothing has been fetched or stored at all. You typically need to pass the string to an object that has a connection to the database and executes the query on it, typically a cursor. The cursor then fetches a result set, which can be iterated over to produce the database output; a sketch follows below.</p>
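<p>A minimal sketch with the standard library's <code>sqlite3</code> module; the database file name is made up for illustration:</p>
<pre><code>import sqlite3

conn = sqlite3.connect('valves.db')
cursor = conn.cursor()
cursor.execute(query_string)    # only now does the database run the query
rows = cursor.fetchall()        # a list of tuples, one tuple per table row
for item_id, item_name, description, item_price in rows:
    print(item_id, item_name, description, item_price)
conn.close()
</code></pre>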
0
2016-10-17T10:09:02Z
[ "python", "mysql", "sqlite", "sqlite3" ]
Accessing to a dict on a json in python
40,083,438
<p>I am trying to assign a value of a dict copied in JSON to a variable in my code. This is the dictionary copied on the .json:</p> <pre><code>"Monetarios": [{"MIFID_NO_CURR_RISK":"B1"},{"MIFID_CURR_RISK":"B2"}], "Monetario Dinamico": [{"MIFID_NO_CURR_RISK":"B1"},{"MIFID_CURR_RISK":"B2"}], "Renta Fija Corto Plazo": [{"MIFID_NO_CURR_RISK":"B1"},{"MIFID_CURR_RISK":"B2"}], "Garantizados de RF": [{"MIFID_NO_CURR_RISK":"B1"},{"MIFID_CURR_RISK":"B2"}], "Renta Fija Largo Plazo": [{"MIFID_NO_CURR_RISK":"B1"},{"MIFID_CURR_RISK":"B2"}] </code></pre> <p>And i am trying to show on screen for example the B1 of MIFID NO CURR RISK from "Renta Fija Corto Plazo"</p> <p>I do this and works fine :</p> <pre><code>carga_dict['Renta Fija Corto Plazo'] Out[56]: [{u'MIFID_NO_CURR_RISK': u'B1'}, {u'MIFID_CURR_RISK': u'B2'}] </code></pre> <p>But then I do this and i get an error</p> <pre><code>carga_dict['Renta Fija Corto Plazo']['MIFID_NO_CURR_RISK'] Traceback (most recent call last): File "&lt;ipython-input-57-46b56ce8491a&gt;", line 1, in &lt;module&gt; carga_dict['Renta Fija Corto Plazo']['MIFID_NO_CURR_RISK'] TypeError: list indices must be integers, not str </code></pre> <p>Thank you all</p>
-1
2016-10-17T09:59:56Z
40,083,473
<pre><code>carga_dict['Renta Fija Corto Plazo'][0]['MIFID_NO_CURR_RISK'] </code></pre> <p><code>carga_dict['Renta Fija Corto Plazo']</code> is a list of dictionaries, so you first have to select a list element by its index and then look up the appropriate key.</p>
0
2016-10-17T10:01:21Z
[ "python", "json", "dictionary" ]
Mocking an object retaining its functionality in Python
40,083,528
<p>I'm testing the behavior of an object. In the sample code below, I check if the method <code>bar2</code> is called.</p> <p>The debug print shows that yes, <code>bar2</code> is called. Unfortunately, the <code>mock</code>-library doesn't track this call.</p> <p>What should I do to make <code>mock</code> to notice such "internal" calls?</p> <pre><code>import mock class Foo: def bar1(self): print "in bar1" self.bar2() def bar2(self): print "in bar2" m = mock.Mock(wraps = Foo()) m.bar1() print "Method calls:", m.method_calls m.bar2.assert_called_with() </code></pre> <p>Output:</p> <pre><code>in bar1 in bar2 Method calls: [call.bar1()] ... AssertionError: Expected call: bar2() Not called </code></pre> <p><strong>Update</strong></p> <p>I've accepted the answer with patching as the methodically correct, but would like also to show my direct approach:</p> <pre><code>obj = Foo() m = mock.Mock(wraps = obj) obj.bar2 = m.bar2 # Patch the object manually m.bar1() m.bar2.assert_called_with() </code></pre>
0
2016-10-17T10:04:47Z
40,083,608
<p>You don't. You mocked out <code>Foo()</code>, so the implementation is gone; the whole point of mocking is to replace the class. </p> <p>If you are testing <code>Foo</code> itself, don't mock the class. You can mock individual methods instead:</p> <pre><code>with mock.patch('__main__.Foo.bar2') as bar2_mock: f = Foo() f.bar1() print "Method calls:", bar2_mock.method_calls bar2_mock.assert_called_with() </code></pre> <p>This outputs:</p> <pre><code>in bar1 Method calls: [] </code></pre> <p>as now only <code>bar2</code> has been mocked.</p>
4
2016-10-17T10:09:29Z
[ "python", "mocking" ]
why python not crash in this case?
40,083,647
<p>For the example in this page: <a href="https://wiki.python.org/moin/CrashingPython#Exhausting_Resources" rel="nofollow">https://wiki.python.org/moin/CrashingPython#Exhausting_Resources</a> Why the case can't be reproduced in my python 2.7 Why it can make python crash?</p> <pre><code>$ python Python 2.4.2 (#2, Sep 30 2005, 21:19:01) [GCC 4.0.2 20050808 (prerelease) (Ubuntu 4.0.1-4ubuntu8)] on linux2 Type "help", "copyright", "credits" or "license" for more information. &gt;&gt;&gt; f = lambda: None &gt;&gt;&gt; for i in xrange(1000000): ... f = f.__call__ ... &gt;&gt;&gt; del f Segmentation fault </code></pre>
-7
2016-10-17T10:10:55Z
40,083,776
<p>This was simply a bug, see <a href="http://bugs.python.org/issue532646" rel="nofollow">issue #532646</a>.</p> <p>All software has bugs, and the Python project is no exception. It can't be reproduced in 2.7 because the bug was found and fixed.</p> <p>Specifically, the page you found documents various such crashing bugs, and it states so <em>at the top</em>:</p> <blockquote> <p>While a lot of effort has gone into making it difficult or impossible to crash the Python interpreter in normal usage, there are lots fairly easy ways to crash the interpreter. The BDFL pronounced recently on the python-dev mailing list:</p> <blockquote> <p>I'm not saying it's uncrashable. I'm saying that if you crash it, it's a bug unless proven harebrained.</p> </blockquote> </blockquote> <p>Any known, still outstanding bugs are added to the <a href="https://hg.python.org/cpython/file/2.7/Lib/test/crashers" rel="nofollow">crashers test suite</a>. If you follow back the history on those tests, you'll be able to find more crashers and the versions of Python they apply to. Most require obscure setups like the one in your question.</p>
0
2016-10-17T10:17:41Z
[ "python" ]
I want to crawl data from 1 to 10 pages automatically from website.How can i do it?
40,083,648
<pre><code>import requests from bs4 import BeautifulSoup My_Url = "http://questions.consumercomplaints.in/page/2" Data = requests.get(My_Url) Soup = BeautifulSoup(Data.content) head_id = Soup.find_all({"div":"href"}) len(head_id) for i in head_id: print i.text </code></pre> <p>From above code i scrapped (reviews/complaints) from web page 2. How do i craw data automatically from Page 3 to 10 (<a href="http://questions.consumercomplaints.in/page/3" rel="nofollow">http://questions.consumercomplaints.in/page/3</a>) </p>
-1
2016-10-17T10:10:58Z
40,083,706
<p>Why not surround your function in a ranged for loop?</p> <pre><code>import requests from bs4 import BeautifulSoup for page in range(3,11): My_Url = "http://questions.consumercomplaints.in/page/" + str(page) Data = requests.get(My_Url) Soup = BeautifulSoup(Data.content) head_id = Soup.find_all({"div":"href"}) len(head_id) for i in head_id: print i.text </code></pre> <p>Have a look at how the range function works <a href="http://pythoncentral.io/pythons-range-function-explained/" rel="nofollow">here</a>.</p>
0
2016-10-17T10:13:39Z
[ "python", "python-2.7", "web-scraping", "web-crawler", "ipython" ]
Alternative to deepcopy a lit of BitVectors in python
40,083,846
<p>In my project, I am using <code>deepcopy()</code> function to create a deep copy a list of <code>BitVector</code>s. Unfortunately, its taking a lot of time. </p> <pre><code>a = [&lt;BitVector obj at 0x000...&gt;, &lt;BitVector obj at 0x000...&gt;, &lt;BitVector obj at 0x000...&gt; ...] </code></pre> <p>On changing <code>a</code>, I do not want <code>b</code> to reflect the changes. </p> <pre><code>b = deepcopy(a) </code></pre> <p>But the above equation is taking a lot of time. What alternative should I use for better performance?</p>
0
2016-10-17T10:21:11Z
40,086,450
<p>The <code>BitVector</code> implementation is pure Python and copying could be optimised substantially. Further, the provided implementation of <code>BitVector.deep_copy</code> is substantially slower than <code>copy.deepcopy</code>. Here is an implementation of a deep copy that is ~10 times faster on my machine.</p> <pre><code>def bitvector_copy(bitvector): new = BitVector.__new__(BitVector) new.__dict__ = { "size": bitvector.size, # size is an int and immutable "vector": bitvector.vector[:], # vector is an array, this is enough to get a deepcopy # the copy will be disassociated from any file it originated from # this emulates how BitVector.deep_copy works "filename": None, "FILEIN": None, "FILEOUT": None } return new </code></pre> <p>To copy your list you would now do:</p> <pre><code>new_list = [bitvector_copy(vec) for vec in old_list] </code></pre> <p>For most use cases that should be enough. However, this isn't an exact deep copy. Whilst all the new <code>BitVector</code>s are independent, it causes problems if the list contains references to the same bit vector, e.g.:</p> <pre><code>old_list = [BitVector(size=8)] * 2 assert old_list[0] is old_list[1] new_list = [bitvector_copy(vec) for vec in old_list] assert new_list[0] is new_list[1] # AssertionError! </code></pre> <p>With a few modifications you can adapt the copy function to work with <code>copy.deepcopy</code> and have the function return a true deep copy. This does slow the copying down a little bit though.</p> <pre><code>def bitvector_deepcopy(self, memo): if id(self.vector) in memo: vector = memo[id(self.vector)] else: vector = memo[id(self.vector)] = self.vector[:] new = BitVector.__new__(BitVector) new.__dict__ = { "size": self.size, "vector": vector, "filename": None, "FILEIN": None, "FILEOUT": None } return new BitVector.__deepcopy__ = bitvector_deepcopy </code></pre>
1
2016-10-17T12:30:47Z
[ "python", "python-3.x", "deep-copy" ]
tornado.locks.Lock release
40,083,970
<p>The tornado example gives the following example for locks:</p> <pre><code>&gt;&gt;&gt; from tornado import gen, locks &gt;&gt;&gt; lock = locks.Lock() &gt;&gt;&gt; &gt;&gt;&gt; @gen.coroutine ... def f(): ... with (yield lock.acquire()): ... # Do something holding the lock. ... pass ... ... # Now the lock is released. </code></pre> <p>Do you need to release the lock manually after the with or is that the purpose of using the with statement in that block? If this is the case why is there a separate release() and does this function need to be yielded?</p>
0
2016-10-17T10:27:45Z
40,085,656
<p>Right, the <code>with</code> statement ensures the Lock is released, no need to call <code>release</code> yourself.</p> <p><code>release</code> is naturally non-blocking -- your coroutine can complete a call to <code>release</code> without waiting for any other coroutines -- therefore <code>release</code> does not require a <code>yield</code> statement. You can determine that for yourself by noticing the return value of <code>release</code> is <code>None</code>, whereas the return value of <code>acquire</code> is a Future.</p>
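<p>For comparison, here is a sketch of the manual form without the <code>with</code> block, which is where an explicit <code>release()</code> comes in:</p>
<pre><code>@gen.coroutine
def f():
    yield lock.acquire()     # acquiring may have to wait, hence the yield
    try:
        pass                 # do something holding the lock
    finally:
        lock.release()       # non-blocking, so no yield is needed
</code></pre>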
1
2016-10-17T11:50:22Z
[ "python", "tornado" ]
Remove duplicates from a list of list based on a subset of each list
40,084,023
<p>I wrote a function to remove "duplicates" from a list of list.</p> <p>The elements of my list are: </p> <pre><code>[ip, email, phone number]. </code></pre> <p>I would like to remove the sublists that got the same EMAIL and PHONE NUMBER, I don't really care about the IP address.</p> <p>The solution that I currently use is :</p> <pre><code>def remove_duplicate_email_phone(data): for i in range(len(data)): for j in reversed(range(i+1,len(data))): if data[i][1] == data[j][1] and data[i][2] == data[j][2] : data.pop(j) return data </code></pre> <p>I would like to optimize this. It took more than 30 minutes to get the result.</p>
1
2016-10-17T10:29:59Z
40,084,082
<p>Your approach does a full scan for each and every element in the list, making it take O(N**2) (quadratic) time. The <code>list.pop(index)</code> is also expensive as everything following <code>index</code> is moved up, making your solution approach O(N**3) cubic time.</p> <p>Use a set and add <code>(email, phonenumber)</code> tuples to it to check if you already have seen that pair; testing containment against a set takes O(1) constant time, so you can clean out dupes in O(N) total time:</p> <pre><code>def remove_duplicate_email_phone(data): seen = set() cleaned = [] for ip, email, phone in data: if (email, phone) in seen: continue cleaned.append([ip, email, phone]) seen.add((email, phone)) return cleaned </code></pre> <p>This produces a <em>new</em> list, the old list is left untouched.</p>
4
2016-10-17T10:32:37Z
[ "python", "performance", "list", "python-3.x" ]
Remove duplicates from a list of list based on a subset of each list
40,084,023
<p>I wrote a function to remove "duplicates" from a list of list.</p> <p>The elements of my list are: </p> <pre><code>[ip, email, phone number]. </code></pre> <p>I would like to remove the sublists that got the same EMAIL and PHONE NUMBER, I don't really care about the IP address.</p> <p>The solution that I currently use is :</p> <pre><code>def remove_duplicate_email_phone(data): for i in range(len(data)): for j in reversed(range(i+1,len(data))): if data[i][1] == data[j][1] and data[i][2] == data[j][2] : data.pop(j) return data </code></pre> <p>I would like to optimize this. It took more than 30 minutes to get the result.</p>
1
2016-10-17T10:29:59Z
40,084,874
<p>Another solution might be to use <code>groupby</code>.</p> <pre><code>from itertools import groupby from operator import itemgetter deduped = [] data.sort(key=itemgetter(1,2)) for k, v in groupby(data, key=itemgetter(1,2)): deduped.append(list(v)[0]) </code></pre> <p>or using a list comprehension:</p> <pre><code>deduped = [next(v) for k, v in groupby(data, key=itemgetter(1,2))] </code></pre>
0
2016-10-17T11:09:23Z
[ "python", "performance", "list", "python-3.x" ]
Remove duplicates from a list of list based on a subset of each list
40,084,023
<p>I wrote a function to remove "duplicates" from a list of list.</p> <p>The elements of my list are: </p> <pre><code>[ip, email, phone number]. </code></pre> <p>I would like to remove the sublists that got the same EMAIL and PHONE NUMBER, I don't really care about the IP address.</p> <p>The solution that I currently use is :</p> <pre><code>def remove_duplicate_email_phone(data): for i in range(len(data)): for j in reversed(range(i+1,len(data))): if data[i][1] == data[j][1] and data[i][2] == data[j][2] : data.pop(j) return data </code></pre> <p>I would like to optimize this. It took more than 30 minutes to get the result.</p>
1
2016-10-17T10:29:59Z
40,085,224
<p>Another approach could be to use a <code>Counter</code>. Note that this keeps only the rows whose (email, phone) pair occurs exactly once, rather than keeping one copy of each duplicate:</p> <pre><code>from collections import Counter data = [(1, "a@example.com", 1234), (1, "a@example.com", 1234), (2, "b@example.com", 1234)] counts = Counter([i[1:] for i in data]) print([i for i in data if counts[i[1:]] == 1]) # Get unique </code></pre>
0
2016-10-17T11:28:55Z
[ "python", "performance", "list", "python-3.x" ]
Background color of disabled Spinner
40,084,041
<p>I'm trying to set the background color of the Spinner, if disabled.</p> <p>Here is what I tried in my kv-file:</p> <pre><code>&lt;MySpinner@Spinner&gt;: background_normal: '' background_disabled_normal: '' disabled_color: (0, 0, 0, 1) color: (0, 0, 0, 1) background_disabled_color: (1,1,1,1) background_color: (0.62,0.67,0.72,1) </code></pre> <p>Obviously the <code>background_disabled_color</code> is not the right parameter. But what should I use instead?</p>
1
2016-10-17T10:30:44Z
40,089,959
<p>It inherits from <code>Button</code>, therefore if it's not in the <code>spinner.py</code> file, it'll be in <a href="https://github.com/kivy/kivy/blob/master/kivy/uix/button.py#L90" rel="nofollow"><code>button.py</code></a></p> <p>You can see that <code>Button</code> uses images for the background and with <code>background_color</code> it's only tinted, yet there's no <code>background_disabled_color</code> (afaik). The background works like this - you set <code>background_color</code> and if the widget is disabled, it tints the default background image for disabled (which is little bit <a href="https://github.com/kivy/kivy/blob/master/kivy/data/images/defaulttheme-0.png" rel="nofollow">darker</a>):</p> <pre><code>Button: text: 'jump' disabled: True # background_disabled_normal: '' # allow to see the behavior w/o default disabled bg background_color: (1,0,0,1) </code></pre> <p>To achieve another color for disabled widget than the default <code>background_color</code> you need to change the <code>background_color</code> when the <code>Button</code> is disabled (in your case <code>Spinner</code>):</p> <pre><code>from kivy.lang import Builder from kivy.base import runTouchApp from kivy.uix.boxlayout import BoxLayout Builder.load_string(''' &lt;Test&gt;: Spinner: id: special values: [str(i) for i in range(10)] size_hint_y: None text: 'jump' disabled: True #background_disabled_normal: '' background_color: (1,0,0,1) if not self.disabled else (0,1,0,1) Button: on_release: special.disabled = not special.disabled ''') class Test(BoxLayout): pass runTouchApp(Test()) </code></pre> <p>Note that this won't work for the <code>DropDown</code>-like children, because those use <a href="https://github.com/kivy/kivy/blob/master/kivy/uix/spinner.py#L48" rel="nofollow">different</a> class, so to change them you'll need to change properties of that class.</p>
1
2016-10-17T15:11:58Z
[ "python", "kivy" ]
Iterating over an image in python
40,084,246
<p>I would like to iterate over an image in python. This is my code so far:</p> <pre><code>def imageIteration(greyValueImage): for (x,y), pixel in np.ndenumerate(greyValueImage): vals = greyValueImage[x, y] print(vals) </code></pre> <p>The problem here ist that I get the following exception:</p> <pre><code>for (x,y), pixel in np.ndenumerate(greyValueImage): ValueError: too many values to unpack </code></pre> <p>Now my question is what is the fastest way to solve this? Do I really need to split the image into a few peaces, but taking this step how do I get the count of the necessary loops without trying out?</p> <p>Thanks for Your ideas </p> <p>P.s. im = Image.open(args["image"]) im_grey = im.convert('LA') # convert to grayscale</p>
0
2016-10-17T10:39:26Z
40,084,317
<p>You can't unpack like that. Just do</p> <pre><code>def imageIteration(greyValueImage): for index, pixel in np.ndenumerate(greyValueImage): x, y, _ = index vals = greyValueImage[x, y] print(vals) </code></pre> <p>Because your converted image has a channel axis, <code>ndenumerate</code> yields an index tuple with three entries (x, y, channel) together with the value, so unpacking the index into just <code>(x, y)</code> fails. See <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndenumerate.html" rel="nofollow">http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndenumerate.html</a></p>
1
2016-10-17T10:43:20Z
[ "python", "image", "loops", "for-loop", "iteration" ]
Iterating over an image in python
40,084,246
<p>I would like to iterate over an image in python. This is my code so far:</p> <pre><code>def imageIteration(greyValueImage): for (x,y), pixel in np.ndenumerate(greyValueImage): vals = greyValueImage[x, y] print(vals) </code></pre> <p>The problem here ist that I get the following exception:</p> <pre><code>for (x,y), pixel in np.ndenumerate(greyValueImage): ValueError: too many values to unpack </code></pre> <p>Now my question is what is the fastest way to solve this? Do I really need to split the image into a few peaces, but taking this step how do I get the count of the necessary loops without trying out?</p> <p>Thanks for Your ideas </p> <p>P.s. im = Image.open(args["image"]) im_grey = im.convert('LA') # convert to grayscale</p>
0
2016-10-17T10:39:26Z
40,084,406
<p>What is the value of greyValueImage?</p> <p>There should be nothing wrong with your code, you can of course unpack the values of a tuple in a for loop.</p>
0
2016-10-17T10:48:03Z
[ "python", "image", "loops", "for-loop", "iteration" ]
Iterating over an image in python
40,084,246
<p>I would like to iterate over an image in python. This is my code so far:</p> <pre><code>def imageIteration(greyValueImage): for (x,y), pixel in np.ndenumerate(greyValueImage): vals = greyValueImage[x, y] print(vals) </code></pre> <p>The problem here ist that I get the following exception:</p> <pre><code>for (x,y), pixel in np.ndenumerate(greyValueImage): ValueError: too many values to unpack </code></pre> <p>Now my question is what is the fastest way to solve this? Do I really need to split the image into a few peaces, but taking this step how do I get the count of the necessary loops without trying out?</p> <p>Thanks for Your ideas </p> <p>P.s. im = Image.open(args["image"]) im_grey = im.convert('LA') # convert to grayscale</p>
0
2016-10-17T10:39:26Z
40,085,250
<p>You can of course just use a for loop to iterate over each pixel. An example is shown below:</p> <pre><code># assuming greyValueImage is a numpy array rows, cols = greyValueImage.shape # get the shape of the array for x in range (cols): for y in range (rows): vals = greyValueImage[y,x] print (vals) </code></pre> <p>Using a simple array for an example:</p> <pre><code>random_array = np.random.randint(0,2,(3,4)) rows, cols = random_array.shape print (random_array) for x in range(cols): for y in range(rows): print (random_array[y,x]) </code></pre> <p>Running the example produces the output:</p> <pre><code>[[0 1 1 0] [0 0 1 0] [1 0 1 0]] 0 0 1 1 0 0 1 1 1 0 0 0 </code></pre>
0
2016-10-17T11:30:44Z
[ "python", "image", "loops", "for-loop", "iteration" ]
python: importing libpci raises SyntaxError
40,084,353
<p>I just installed libpci on my machine: </p> <pre><code>$ pip2.7 install libpci </code></pre> <p>And tried to run this: </p> <pre><code>#!/usr/local/bin/python2.7 import libpci print('hello libpci') </code></pre> <p>but this raises the following syntax error:</p> <pre><code>Traceback (most recent call last): File "./test.py", line 2, in &lt;module&gt; import libpci File "/usr/local/lib/python2.7/site-packages/libpci/__init__.py", line 26, in &lt;module&gt; from libpci.wrapper import LibPCI File "/usr/local/lib/python2.7/site-packages/libpci/wrapper.py", line 26, in &lt;module&gt; from libpci._functions import pci_alloc File "/usr/local/lib/python2.7/site-packages/libpci/_functions.py", line 39 def pci_alloc() -&gt; ctypes.POINTER(pci_access): ^ SyntaxError: invalid syntax </code></pre> <p>How is it possible to have SyntaxError raised in libpci?<br> Is it because I am missing some dependencies? </p>
1
2016-10-17T10:45:23Z
40,084,421
<p>The <a href="https://pypi.python.org/pypi/libpci" rel="nofollow"><code>libpci</code> project</a> requires Python 3.4 or newer. From the project tags:</p> <blockquote> <pre><code>Categories [...] Programming Language :: Python :: 3 Programming Language :: Python :: 3.4 </code></pre> </blockquote> <p>The syntax error is thrown because the project uses <a href="https://www.python.org/dev/peps/pep-3107/" rel="nofollow">annotations</a>, a Python 3 feature, to configure the <code>ctypes</code> layer, see the <a href="https://github.com/zyga/libpci/blob/master/libpci/_native.py#L54" rel="nofollow"><code>_ctypes_metadata()</code> function</a>.</p>
3
2016-10-17T10:48:34Z
[ "python", "python-2.7" ]
Strange behavior of urllib.request in two different cases while webscrapping
40,084,385
<p>I am trying to download bhavcopy of NSE EoD data for last 15 years using web scrapping through urllib.request.</p> <p>I am seeing that urllib.request is behaving strangely, it is working in one case but in another case it is throwing me error 403 Access Denies ..</p> <p>I used the HTTP Headers for masking but in one case it is failing ..</p> <p>Here is the code</p> <pre><code>import urllib.request def downloadCMCSV(year="2001",mon="JAN",dd="01"): #baseurl = "https://www.nseindia.com" headers = {'Host':'www.nseindia.com:443', 'Accept':'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8', 'Accept-Encoding':'gzip, deflate, sdch, br', 'Accept-Language':'en-US,en;q=0.8', #'Cookie':'NSE-TEST-1=1809850378.20480.0000; pointer=1; sym1=ONGC; pointerfo=1; underlying1=ONGC; instrument1=FUTSTK; optiontype1=-; expiry1=27OCT2016; strikeprice1=-', 'Cookie':'NSE-TEST-1=1809850378.20480.0000; pointer=1; sym1=ONGC; pointerfo=1; underlying1=ONGC; instrument1=FUTSTK; optiontype1=-; expiry1=27OCT2016; strikeprice1=-; JSESSIONID=B4CA0543FF4C33FD9EA9D18B95238DB4', 'Referer':'Referer: https://www.nseindia.com/products/content/equities/equities/archieve_eq.htm', 'Upgrade-Insecure-Requests':'1', 'User-Agent':'Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.143 Safari/537.36'} filename = "cm%s%s%sbhav.csv" % (dd,mon,year) urlcm = "https://www.nseindia.com/content/historical/EQUITIES/%s/%s/%s.zip" % (year, mon, filename) print(urlcm) request = urllib.request.Request(urlcm, headers = headers) #print(dir(request)) #print(request.headers) try: response = urllib.request.urlopen(request) except urllib.error.HTTPError as e: if e.code == 404: print("Bhavcopy not available for", year, mon, dd) return print(e.code) print(e.read()) return if response.code == 200: print("The response is good", response.length) if __name__ == "__main__": #getAll() downloadCMCSV('2001','JAN', '01') downloadCMCSV('2016','JAN', '01') </code></pre> <p>Got Output as below </p> <pre><code>https://www.nseindia.com/content/historical/EQUITIES/2001/JAN/cm01JAN2001bhav.csv.zip 403 b'&lt;HTML&gt;&lt;HEAD&gt;\n&lt;TITLE&gt;Access Denied&lt;/TITLE&gt;\n&lt;/HEAD&gt;&lt;BODY&gt;\n&lt;H1&gt;Access Denied&lt;/H1&gt;\n \nYou don\'t have permission to access "http&amp;#58;&amp;#47;&amp;#47;www&amp;#46;nseindia&amp;#46;com&amp;#47;content&amp;#47;historical&amp;#47;EQUITIES&amp;#47;2001&amp;#47;JAN&amp;#47;cm01JAN2001bhav&amp;#46;csv&amp;#46;zip" on this server.&lt;P&gt;\nReference&amp;#32;&amp;#35;18&amp;#46;33210f17&amp;#46;1476700779&amp;#46;13b4f615\n&lt;/BODY&gt;\n&lt;/HTML&gt;\n' https://www.nseindia.com/content/historical/EQUITIES/2016/JAN/cm01JAN2016bhav.csv.zip The response is good 58943 </code></pre> <p>Could you guys please help me what mistake I am doing ?</p>
0
2016-10-17T10:47:07Z
40,092,289
<p>Pass a <em>user-agent</em>, <code>'Accept': '*/*'</code> and the <em>referer</em> headers:</p> <pre><code>from urllib import request url = "https://www.nseindia.com/content/historical/EQUITIES/2001/JAN/cm01JAN2001bhav.csv.zip" r = request.Request(url, headers={'User-Agent': 'mybot', 'Accept': '*/*', "Referer": "https://www.nseindia.com/products/content/equities/equities/archieve_eq.htm"}) print(request.urlopen(r)) </code></pre> <p>You don't need cookies or anything else set.</p>
0
2016-10-17T17:26:21Z
[ "python", "url", "web-scraping", "urllib", "http-status-code-403" ]
pyexcel export error "No content, file name. Nothing is given"
40,084,424
<p>I'm using <code>django-pyexcel</code> to export data from the website, but when I go to the export URL I get the error:</p> <blockquote> <p>Exception Type: IOError</p> <p>Exception Value: No content, file name. Nothing is given</p> </blockquote> <p>The code to export the data was copied from the <a href="http://django-excel.readthedocs.io/en/latest/#handle-data-export" rel="nofollow">example</a> given in the documentation:</p> <pre><code>return excel.make_response_from_a_table(Question, 'xls', file_name="sheet") </code></pre>
0
2016-10-17T10:48:39Z
40,084,501
<p>The problem turned out to be the file format used, <code>xls</code> in this case.</p> <p>I had only installed the <code>xlsx</code> (<code>pyexcel-xlsx</code>) processor so it did not know how to handle the <code>xls</code> file format.</p> <p>The exception message could have been a bit better as I spent ages trying to figure out if there was a problem with the filename I'd supplied.</p>
0
2016-10-17T10:51:31Z
[ "python", "django", "pyexcel", "django-excel" ]
Reuse lexer object in antlr4 python target
40,084,581
<p>I have a simple antlr4 lexer, the following script works,</p> <pre><code> lexer = MyLexer(InputStream(argv[1])) stream = CommonTokenStream(lexer) parser = MyParser(stream) tree = parser.query() v = MyVisitor() v.visit(tree) </code></pre> <p>But I'm wondering if I can reuse the <code>MyLexer</code> class object?</p> <p>If so, how can I re-set the input string? </p>
0
2016-10-17T10:55:53Z
40,101,520
<p>It's possible by setting an input stream in the lexer (could even be the same as before) via <code>lexer.setInputStream()</code>. Then also re-set the lexer in the parser (can also be the same) via <code>parser.setTokenSource()</code>. Finally call <code>stream.reset()</code> and <code>parser.reset()</code> if you re-use them too.</p> <p>FYI: if you even want to re-use the input stream you can call <code>input.load(text)</code> on it to load the new input.</p>
0
2016-10-18T06:56:37Z
[ "python", "antlr", "antlr4" ]
AttributeError: module 'ctypes.wintypes' has no attribute 'create_unicode_buffer'
40,084,734
<p>This is a part of my code:</p> <pre><code>import os def _get_appdata_path(): import ctypes from ctypes import wintypes, windll CSIDL_APPDATA = 26 _SHGetFolderPath = windll.shell32.SHGetFolderPathW _SHGetFolderPath.argtypes = [wintypes.HWND, ctypes.c_int, wintypes.HANDLE, wintypes.DWORD, wintypes.LPCWSTR] path_buf = wintypes.create_unicode_buffer(wintypes.MAX_PATH) result = _SHGetFolderPath(0, CSIDL_APPDATA, 0, 0, path_buf) return path_buf.value </code></pre> <p>But when I call <code>wintypes.create_unicode_buffer()</code> I get the error:</p> <pre><code>AttributeError: module 'ctypes.wintypes' has no attribute 'create_unicode_buffer' </code></pre> <p>I am using Python 3.5.1. What could I do?</p>
0
2016-10-17T11:03:20Z
40,085,812
<p>Use ctypes.create_unicode_buffer, which works in both PY2 and PY3. It was only accidentally in wintypes in PY2 due to its use of <code>from ctypes import *</code>. :D</p>
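<p>Applied to the snippet from the question, only the buffer line needs to change; a minimal sketch:</p> <pre><code>import ctypes
from ctypes import wintypes

path_buf = ctypes.create_unicode_buffer(wintypes.MAX_PATH)
</code></pre>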
2
2016-10-17T11:58:40Z
[ "python", "compiler-errors", "ctypes", "python-3.5" ]
Download csv file from https tableau server with python using request and basic authentication
40,084,814
<p>I have a tableau 10.0 server. I have a view at <code>https://example.com/views/SomeReport/Totals</code>. In a browser, if I navigate to <code>https://example.com/views/SomeReport/Totals.csv</code> I am prompted to login and then the view is downloaded as a csv file. </p> <p>Im trying to download this same file using python </p> <p>Details</p> <ul> <li>tableau 10.0 </li> <li>python 2.7 </li> <li>tableau running on windows</li> <li>python running on a mac OSX El capitan</li> </ul> <p>Here is what I tried:</p> <pre><code> tableau_file_path = '{tableau_path}{tableau_view}{file_ext}'.format(**cur_file) local_path = cur_file['local_path'] user = 'xxxx' password = 'xxxx' # print tableau_file_path # outputs https://example.com/views/SomeReport/Totals.csv # print local_path # outputs ../generated_resources/Some Folder Name/Totals.csv r = requests.get(tableau_file_path, auth=HTTPBasicAuth(user, password), stream=True, timeout=5) # also tried auth=(user, password) print r.status_code # outputs 401 if r.status_code == 200: with open(local_path, 'wb') as f: f.write(r.content) </code></pre> <p>I feel like this <em>should</em> work, but it does not. I get <code>401</code> for the status</p> <p>If I remove the <code>.csv</code> from the path, it works and I do get the actual tableau dashboard page. But I need the csv file not the html from the tableau dashboard </p> <p>I've copied the username and password directly from my code and pasted them into the form that pops up in my browser and the creds work seemingly ruling out issues with wrong creds.</p> <h3>This works for normal webpages, but does not work when hitting the url prompts a file download. How can I make this work with file downloads?</h3> <p><strong>Note</strong> I am open to other methods of downloading the file with python</p>
0
2016-10-17T11:07:29Z
40,085,848
<p>I'm gonna go out on a limb here and say that Tableau is not doing HTTP basic authentication, but rather has its own method. So when you try to access the thing through requests, it rejects it because it doesn't know how to auth like that.</p> <p>When you access it as a human, it knows to redirect you to their login page and do the whole "login flow" and finally redirect you to the proper download page. However, requests clearly identifies itself as a robot, using a special User-Agent string (something like "python-requests" I believe), which is why the server decides to return a straight up 401.</p> <p>Depending on your use case, you could log in manually, see what cookies are being set by Tableau, and include those in your request. You might also change your user agent to a browser one (they start with "Mozilla/5.0" in the vast majority of cases), and see if you can download the login page. If that's the case, you could reverse-engineer some of their login process - enough to find out how they submit the user and password, and how they redirect to the page you want. After that, you'll most likely make a POST to their login infrastructure with that data and the required redirect.</p>
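<p>A rough sketch of the cookie route described above. The header and cookie values are placeholders: the real cookie name and value have to be copied from your browser after a manual login, and Tableau's login flow may differ between versions, so treat this as a starting point rather than a working recipe:</p> <pre><code>import requests

headers = {'User-Agent': 'Mozilla/5.0'}                   # pretend to be a browser
cookies = {'SESSION_COOKIE_NAME': 'value-from-browser'}   # hypothetical cookie name/value

r = requests.get(tableau_file_path, headers=headers, cookies=cookies,
                 stream=True, timeout=5)
if r.status_code == 200:
    with open(local_path, 'wb') as f:
        f.write(r.content)
</code></pre>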
1
2016-10-17T12:00:18Z
[ "python", "python-requests", "basic-authentication", "tableau-server" ]
Python RegEx in-text slash escape
40,084,822
<p>I apologize for the incoherent title, but it's hard to come up with one in this situation.</p> <p>I have a bunch of texts and (almost) always they start either like this:</p> <pre><code>Word (Foo) - Main Text </code></pre> <p>or this:</p> <pre><code>Word (Foo/Bar) - Main Text </code></pre> <p>I want to remove everything before the <code>Main Text</code>, but it seems like the <code>/</code> character is messing up the regex I have.</p> <p>So far I have this: <code>re.search('^[^)]*/*\)(.*)$', my_text)</code></p> <p>I've tested it on the regex101 site, and it should work on both instances (either with or without the <code>/</code>) However, when I plug it in my Python code, it returns a <code>NoneType</code> when it encounters a <code>/</code>. What am I missing?</p>
0
2016-10-17T11:07:36Z
40,084,955
<p>Do:</p> <pre><code>^[^-]*-\s*(.*) </code></pre> <p>Now only captured group is your desired portion.</p> <ul> <li><p><code>^[^-]*</code> matches substring upto first <code>-</code></p></li> <li><p><code>-</code> matches a literal <code>-</code>, <code>\s*</code> matches zero or more whitespace</p></li> <li><p>The only captured group <code>(.*)</code> matches rest of the string</p></li> </ul> <p><strong>Example:</strong></p> <pre><code>In [10]: s = 'Word (Foo/Bar) - Main Text' In [11]: re.search(r'^[^-]*-\s*(.*)', s).group(1) Out[11]: 'Main Text' </code></pre>
2
2016-10-17T11:14:10Z
[ "python", "regex" ]
(Python) Using modulo to get the remainder from changing secs, to hrs and minutes
40,084,887
<p>I'm currently new to learning python and stumbled upon this problem: </p> <blockquote> <p>Exercise 2-7 Get the Time Write an algorithm that reads the amount of time in seconds and then displays the equivalent hours, minutes and remaining seconds. • One hour corresponds to 60 minutes. • One minute corresponds to 60 seconds.</p> </blockquote> <p>Here's how I wrote it:</p> <pre><code>def amount_time(t): h = t//3600 m = int((t/3600 - h)*60) s = int(((t/3600 - h)*60 - m)*60) print("Number of hours:",h) print("Number of minutes:",m) print("Number of Seconds:", s) amount_time(5000) </code></pre> <p>I was told there was an easier way to write it using modulo to get the remainder for the minutes and seconds, could someone help? Thanks!</p>
0
2016-10-17T11:10:19Z
40,085,098
<p>This is just "out of my head", because I got no testing system I could use right now.</p> <pre><code>def amount_time(t): print("Number of Seconds:", t % 60) print("Number of Minutes:", (t // 60) % 60) print("Number of Hours:", (t // 3600)) </code></pre> <p>What "t % 60" does: </p> <ol> <li>take t, divide it by 60</li> <li>remove everthing left of the dot</li> <li>multiply with 60</li> </ol> <p>With numbers:</p> <ol> <li>5000 / 60 = 83.33333</li> <li>=> 0.33333</li> <li>0.33333 * 60 = 20</li> </ol>
0
2016-10-17T11:22:41Z
[ "python", "modulo" ]
(Python) Using modulo to get the remainder from changing secs, to hrs and minutes
40,084,887
<p>I'm currently new to learning python and stumbled upon this problem: </p> <blockquote> <p>Exercise 2-7 Get the Time Write an algorithm that reads the amount of time in seconds and then displays the equivalent hours, minutes and remaining seconds. • One hour corresponds to 60 minutes. • One minute corresponds to 60 seconds.</p> </blockquote> <p>Here's how I wrote it:</p> <pre><code>def amount_time(t): h = t//3600 m = int((t/3600 - h)*60) s = int(((t/3600 - h)*60 - m)*60) print("Number of hours:",h) print("Number of minutes:",m) print("Number of Seconds:", s) amount_time(5000) </code></pre> <p>I was told there was an easier way to write it using modulo to get the remainder for the minutes and seconds, could someone help? Thanks!</p>
0
2016-10-17T11:10:19Z
40,085,115
<p>Your calculation of the minutes doesn't work correctly, because it relies on the fractional part of the hours. You could compute the minutes (and milliseconds) from the remaining whole seconds instead:</p> <pre><code>m = (t - h * 3600) // 60 s = int(t - h*3600 - m*60) ms = int((t - h*3600 - m*60 - s) * 1000) # milliseconds </code></pre> <p>and there are better ways to build the result string, for example:</p> <pre><code>print '{}:{}:{}.{}'.format(h,m,s,ms) </code></pre> <p>You will find more formatting options here:</p> <p><a href="https://docs.python.org/2/library/string.html" rel="nofollow">Python documentation for string formatting</a></p>
0
2016-10-17T11:23:22Z
[ "python", "modulo" ]
How do I properly plot data extracted from a scope as .csv file
40,084,895
<p>I just extracted a .csv file from a scope, which shows how a signal changes in the time duration of 6 seconds. Problem is that I can't come up with a proper way of plotting this signal, without things getting mushed together.</p> <p>Mushed together like this:</p> <p><a href="https://i.stack.imgur.com/XYYa9.png" rel="nofollow"><img src="https://i.stack.imgur.com/XYYa9.png" alt="Matplot screen shoot"></a></p> <p>The .csv file is <a href="https://ufile.io/12a6" rel="nofollow">here</a>.</p> <p>The signal goes through four stages, which I want to show with the plot, without it being to mushed together? How do I plot it?</p> <p>More info about the signal:</p> <p>The signal is a PWM signal that changes frequency. Either plotting the PWM signal vs. time, or the frequency of the PWM signal vs time, would be plot representation I would be /seeking for. </p>
2
2016-10-17T11:10:48Z
40,085,620
<p>I don't know if this was intended, but the output of your signal simply oscillates between 0 and 1. I tried</p> <pre><code>import numpy as np a = np.genfromtxt('test.csv', delimiter=',') #Using numpy to directly read csv file into numpy array. Also, I renamed the csv file to test.csv a=a[1:] #To remove the headers print(np.nonzero(a[::2, 1])) print(np.nonzero(a[1::2, 1]-1)) </code></pre> <p>both yield empty lists, indicating all even positions have the value 1 and all odd indices have 0. That is why you see the mushed up plot.</p> <p>So taking the mean of 1/4th the x-span(for four stages) and rendering it would be meaningless. So the only option you would be left with is to plot only a very small sub-section of your csv file</p> <p>That can be easily done with just (assuming you want to plot from the 10th to the 20th value)</p> <pre><code>plt.plot( a[10:20,0], a[10:20,1]) </code></pre> <p><a href="https://i.stack.imgur.com/e546v.png" rel="nofollow"><img src="https://i.stack.imgur.com/e546v.png" alt="enter image description here"></a></p> <p>An observation: although y-value jumps between 0 and 1 alternatively, x-value does not change in any pattern</p>
0
2016-10-17T11:48:22Z
[ "python", "matlab", "csv", "plot", "gnuplot" ]
How do I properly plot data extracted from a scope as .csv file
40,084,895
<p>I just extracted a .csv file from a scope, which shows how a signal changes in the time duration of 6 seconds. Problem is that I can't come up with a proper way of plotting this signal, without things getting mushed together.</p> <p>Mushed together like this:</p> <p><a href="https://i.stack.imgur.com/XYYa9.png" rel="nofollow"><img src="https://i.stack.imgur.com/XYYa9.png" alt="Matplot screen shoot"></a></p> <p>The .csv file is <a href="https://ufile.io/12a6" rel="nofollow">here</a>.</p> <p>The signal goes through four stages, which I want to show with the plot, without it being to mushed together? How do I plot it?</p> <p>More info about the signal:</p> <p>The signal is a PWM signal that changes frequency. Either plotting the PWM signal vs. time, or the frequency of the PWM signal vs time, would be plot representation I would be /seeking for. </p>
2
2016-10-17T11:10:48Z
40,086,324
<p>As a rudimentary analysis, you could do the following using a <code>deque</code> to process a sliding window of values:</p> <pre><code>from collections import deque import csv import matplotlib import matplotlib.pyplot as plt maxlen = 20 window = deque(maxlen=maxlen) with open('12a6-data_extracted_2.csv') as f_input: csv_input = csv.reader(f_input, skipinitialspace=True) header = next(csv_input) freq = [] x = [] for v1, v2 in csv_input: v1 = float(v1) window.append(v1) if len(window) == maxlen: x.append(v1) freq.append(maxlen / ((window[-1] - window[0]))) plt.plot(x, freq) plt.show() </code></pre> <p>This would give you an output looking like:</p> <p><a href="https://i.stack.imgur.com/SmIBH.png" rel="nofollow"><img src="https://i.stack.imgur.com/SmIBH.png" alt="Frequency"></a></p>
2
2016-10-17T12:24:25Z
[ "python", "matlab", "csv", "plot", "gnuplot" ]
How do I properly plot data extracted from a scope as .csv file
40,084,895
<p>I just extracted a .csv file from a scope, which shows how a signal changes in the time duration of 6 seconds. Problem is that I can't come up with a proper way of plotting this signal, without things getting mushed together.</p> <p>Mushed together like this:</p> <p><a href="https://i.stack.imgur.com/XYYa9.png" rel="nofollow"><img src="https://i.stack.imgur.com/XYYa9.png" alt="Matplot screen shoot"></a></p> <p>The .csv file is <a href="https://ufile.io/12a6" rel="nofollow">here</a>.</p> <p>The signal goes through four stages, which I want to show with the plot, without it being to mushed together? How do I plot it?</p> <p>More info about the signal:</p> <p>The signal is a PWM signal that changes frequency. Either plotting the PWM signal vs. time, or the frequency of the PWM signal vs time, would be plot representation I would be /seeking for. </p>
2
2016-10-17T11:10:48Z
40,088,759
<h2>Zoomable plot with Gnuplot</h2> <p>Your problem is that you are plotting this with linear interpolation, with Gnuplot you plot digital data with the <code>with steps</code> style. If you use the <code>wxt</code> terminal (some other terminals also work) you get a zoomable plot, e.g.:</p> <pre class="lang-gnuplot prettyprint-override"><code>set term wxt set key above plot 'foo.csv' with steps title columnhead </code></pre> <p>Results in:</p> <p><img src="http://i.imgur.com/WqjEsi7.png" alt="Properly &quot;stepped&quot; pwm signal"></p> <p>Or to plot a subsection of the data:</p> <pre class="lang-gnuplot prettyprint-override"><code>set term wxt set key above set datafile separator comma plot [1.7:1.8] [-.1:1.1] 'foo.csv' with steps title columnhead" </code></pre> <p>Output:</p> <p><img src="http://i.imgur.com/nXBF0Lb.png" alt="Zoom in at 1.7 to 1.8"></p> <h2>Determine the PWM frequency with awk</h2> <p>As each line in your dataset represents a state switch, the switch frequency can be calculated by counting the switches and dividing by the difference in timestamps. This can be expressed in awk like this:</p> <ol> <li>Split data into <code>winsz</code> chunks</li> <li>For each chunk output <code>winsz / delta_t</code></li> </ol> <p>Note I ignore the first two lines of the csv file.</p> <pre><code>winsz=10 # Ignore heading and the first data point tail -n+3 foo.csv | # Chunk data into winsz blocks awk -F, 'NR % winsz == 0 { printf "\n" } 1' winsz=$winsz | # Output winsz awk -F, 'NF &gt; 2 { print winsz / ($(NF-1) - $1)}' RS= winsz=$winsz &gt; foo-freq.txt </code></pre> <p>Here is a sample of <code>foo-freq.txt</code>:</p> <pre><code>6.294413875000000 1237.47 6.303694208333334 1194.89 6.313335750000000 1150.17 6.323380625000000 1103.85 6.333885375000000 1055.28 6.344918833333334 1004.47 6.356571500000000 950.826 6.368958500000001 894.181 6.382239458333333 833.608 6.396642625000000 768.32 </code></pre> <p>You can plot this with the following Gnuplot code:</p> <pre class="lang-gnuplot prettyprint-override"><code>set xlabel 'Time (s)' set ylabel 'Frequency (Hz)' plot 'foo-freq.txt' with lines </code></pre> <p>Result:</p> <p><img src="http://i.imgur.com/3kW1v3W.png" alt="PWM frequency plot"></p> <h2>Determine the duty cycle with awk</h2> <p>I know you did not ask for it, but here is how you can determine the duty cycle of the PWM with <code>awk</code>. You probably need to use GNU awk with multi-precision support, as you have 14 decimals in the sample timestamps.</p> <p><em>duty-cycle.awk</em></p> <pre class="lang-awk prettyprint-override"><code>NR == 1 { start_time = time_stamp = $1 next } # Count the length of time the signal is 0 and 1 respectiviely $2 == 0 { len0 += $1-time_stamp } $2 == 1 { len1 += $1-time_stamp } # Remember previous timestamp { time_stamp = $1 } # How frequently to calculate and output the duty cycle NR % winsz == 0 { delta_t = time_stamp - start_time print len0 / delta_t, len1 / delta_t start_time = time_stamp len0 = len1 = 0 } </code></pre> <p>Run it like this on your data:</p> <pre><code>tail -n+3 foo.csv | awk -M -F, -f duty-cycle.awk winsz=50 &gt; duty-cycle.txt </code></pre> <p>Result:</p> <p><img src="http://i.imgur.com/GYyjHrp.png" alt="Plot of the calculated duty cycle"></p> <h2>Datafile for future reference</h2> <p>I have uploaded the datafile to a <a href="http://toggle.be/stackexchange/foo.csv" rel="nofollow">separate location</a> for future reference.</p>
2
2016-10-17T14:14:39Z
[ "python", "matlab", "csv", "plot", "gnuplot" ]
Taking subarrays from numpy array with given stride
40,084,931
<p>Lets say I have a Python Numpy array array a.</p> <pre><code>numpy.array([1,2,3,4,5,6,7,8,9,10,11].) </code></pre> <p>I want to create a matrix of sub sequences from this array of length 5 with stride 3. The results matrix hence will look as follows:</p> <pre><code>numpy.array([[1,2,3,4,5],[4,5,6,7,8],[7,8,9,10,11]]) </code></pre> <p>One possible way of implementing this would be using a for-loop.</p> <pre><code>result_matrix = np.zeros((3, 5)) for i in range(0, len(a), 3): result_matrix[i] = a[i:i+5] </code></pre> <p>Is there a cleaner way to implement this is Numpy?</p>
4
2016-10-17T11:12:49Z
40,085,052
<p><strong>Approach #1 :</strong> Using <a href="https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html" rel="nofollow"><code>broadcasting</code></a> -</p> <pre><code>nrows = ((a.size-L)//S)+1 out = a[S*np.arange(nrows)[:,None] + np.arange(L)] </code></pre> <p><strong>Approach #2 :</strong> Using more efficient <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.strides.html" rel="nofollow"><code>NumPy strides</code></a> -</p> <pre><code>n = a.strides[0] out = np.lib.stride_tricks.as_strided(a, shape=(nrows,L), strides=(S*n,n)) </code></pre> <p>Sample run -</p> <pre><code>In [183]: a Out[183]: array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]) In [184]: L = 5 # length In [185]: S = 3 # stride In [186]: nrows = ((a.size-L)//S)+1 In [187]: a[S*np.arange(nrows)[:,None] + np.arange(L)] Out[187]: array([[ 1, 2, 3, 4, 5], [ 4, 5, 6, 7, 8], [ 7, 8, 9, 10, 11]]) In [188]: n = a.strides[0] In [189]: np.lib.stride_tricks.as_strided(a, shape=(nrows,L), strides=(S*n,n)) Out[189]: array([[ 1, 2, 3, 4, 5], [ 4, 5, 6, 7, 8], [ 7, 8, 9, 10, 11]]) </code></pre>
5
2016-10-17T11:20:00Z
[ "python", "numpy", "vectorization" ]
How can you write a dict lookup to a csv from mysql with python
40,085,056
<p>I write this csv from a mysql db table in this way:</p> <pre><code>with open('outfile','w') as f:
    writer = csv.writer(f)
    for row in cursor.fetchall():
        writer.writerow(row)
</code></pre> <p>This will output this csv:</p> <pre><code>34,0.0,2016-04-07 14:51:16
8,59.08,2016-04-07 16:55:26
55,207.76,2016-04-08 06:24:42
37,247.14,2016-04-08 07:50:02
40,255.35,2016-04-08 07:58:51
49,480.26,2016-04-08 08:11:31
</code></pre> <p>And I have created this lookup dict like this:</p> <pre><code>machine_lookup = {}
for row in rows:
    machine_lookup[row['id']] = str(row['mr_machine_id'])
</code></pre> <p>This dict looks like:</p> <pre><code>{34: '1137', 8: '1125', 55: '1139', 37: '1140', 40: '1124', 40: '1138'}
</code></pre> <p>How can I edit my initial csv creator:</p> <pre><code>with open('outfile','w') as f:
    writer = csv.writer(f)
    for row in cursor.fetchall():
        writer.writerow(row)
</code></pre> <p>so that the first column is replaced using the dict lookup, and the output csv file looks like this:</p> <pre><code>1137,0.0,2016-04-07 14:51:16
1125,59.08,2016-04-07 16:55:26
1139,207.76,2016-04-08 06:24:42
1140,247.14,2016-04-08 07:50:02
1124,255.35,2016-04-08 07:58:51
1138,480.26,2016-04-08 08:11:31
</code></pre>
1
2016-10-17T11:20:19Z
40,086,082
<p>Each row is a sequence, so you can replace its first element (or build a new list) before writing it out. For example:</p> <pre><code>row[0] = value </code></pre> <p>In your code, using the lookup dict you already built:</p> <pre><code>with open('outfile','w') as f:
    writer = csv.writer(f)
    for row in cursor.fetchall():
        row = list(row)                      # fetchall() may give tuples, so make it mutable
        row[0] = machine_lookup[row[0]]      # swap the id for the machine id
        writer.writerow(row)
</code></pre>
0
2016-10-17T12:12:22Z
[ "python", "mysql" ]
Why won't my Python script run using Docker?
40,085,168
<p>I need to use Tensorflow on my Windows machine. I have installed Docker, and following these two tutorials (<a href="https://runnable.com/docker/python/dockerize-your-python-application" rel="nofollow">https://runnable.com/docker/python/dockerize-your-python-application</a> and <a href="https://civisanalytics.com/blog/engineering/2014/08/14/Using-Docker-to-Run-Python/" rel="nofollow">https://civisanalytics.com/blog/engineering/2014/08/14/Using-Docker-to-Run-Python/</a>), I am trying to run my Python script. My Dockerfile is nearly identical to the one in the first tutorial, except that instead of installing pystrich, I'm installing Tensorflow. I've successfully made a Docker image called python-stuff, and I've made a script called my_script.py which just imports Tensorflow and then prints Hello world.</p> <p>When I run the command <code>docker run python-stuff python my_script.py</code>, I don't get any errrors, but the script does not produce any output. Any ideas?</p> <p>EDIT: My Dockerfile:</p> <pre><code>FROM python:3 ADD my_script.py / RUN pip3 install --upgrade https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.9.0-cp35-cp35m-linux_x86_64.whl CMD ["python", "./my_script.py"] </code></pre> <p>Running <code>docker logs python-stuff</code> gives <code>Error: No such container: python-stuff</code></p>
-1
2016-10-17T11:26:26Z
40,087,989
<p>If you want to see inline output, try adding the <code>--tty</code> and <code>--interactive</code> (or <code>-ti</code> for short) options. This will give you the stdout from your container on the console, and lets you interact with your script via stdin.</p> <p>The other way to run is with <code>--detach</code>, which will run the container in the background. If you do this, Docker will print the container ID to the console, and you can then run <code>docker logs ${ID}</code> (replacing <code>${ID}</code> with the ID that was printed) to see the output your script has written to stdout so far. If you want to avoid using the long, generated ID, you can add the <code>--name foo</code> option to give the container a name which can be used in commands like <code>docker logs foo</code>.</p>
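<p>For example, using the image name from the question:</p> <pre><code>docker run -ti python-stuff python my_script.py     # output shows up directly in the console
docker run --detach --name foo python-stuff         # runs in the background
docker logs foo                                     # read what the script has printed so far
</code></pre>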
1
2016-10-17T13:40:02Z
[ "python", "windows", "docker", "tensorflow" ]
Why won't my Python script run using Docker?
40,085,168
<p>I need to use Tensorflow on my Windows machine. I have installed Docker, and following these two tutorials (<a href="https://runnable.com/docker/python/dockerize-your-python-application" rel="nofollow">https://runnable.com/docker/python/dockerize-your-python-application</a> and <a href="https://civisanalytics.com/blog/engineering/2014/08/14/Using-Docker-to-Run-Python/" rel="nofollow">https://civisanalytics.com/blog/engineering/2014/08/14/Using-Docker-to-Run-Python/</a>), I am trying to run my Python script. My Dockerfile is nearly identical to the one in the first tutorial, except that instead of installing pystrich, I'm installing Tensorflow. I've successfully made a Docker image called python-stuff, and I've made a script called my_script.py which just imports Tensorflow and then prints Hello world.</p> <p>When I run the command <code>docker run python-stuff python my_script.py</code>, I don't get any errrors, but the script does not produce any output. Any ideas?</p> <p>EDIT: My Dockerfile:</p> <pre><code>FROM python:3 ADD my_script.py / RUN pip3 install --upgrade https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.9.0-cp35-cp35m-linux_x86_64.whl CMD ["python", "./my_script.py"] </code></pre> <p>Running <code>docker logs python-stuff</code> gives <code>Error: No such container: python-stuff</code></p>
-1
2016-10-17T11:26:26Z
40,104,804
<p>I fixed it! The problem was simply the './' in the CMD line in the Dockerfile. Removing this and building it again solved the problem.</p>
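<p>In other words, the last line of the Dockerfile from the question becomes:</p> <pre><code>CMD ["python", "my_script.py"]
</code></pre>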
0
2016-10-18T09:42:39Z
[ "python", "windows", "docker", "tensorflow" ]
How to smooth lines in a figure in python?
40,085,300
<p>So with the code below I can plot a figure with 3 lines, but they are angular. Is it possible to smooth the lines? </p> <pre><code>import matplotlib.pyplot as plt import pandas as pd # Dataframe consist of 3 columns df['year'] = ['2005, 2005, 2005, 2015, 2015, 2015, 2030, 2030, 2030'] df['name'] = ['A', 'B', 'C', 'A', 'B', 'C', 'A', 'B', 'C'] df['weight'] = [80, 65, 88, 65, 60, 70, 60, 55, 65] fig,ax = plt.subplots() # plot figure to see how the weight develops through the years for name in ['A','B','C']: ax.plot(df[df.name==name].year,df[df.name==name].weight,label=name) ax.set_xlabel("year") ax.set_ylabel("weight") ax.legend(loc='best') </code></pre>
0
2016-10-17T11:33:00Z
40,115,645
<p>You should apply interpolation on your data and it shouldn't be "linear". Here I applied the "cubic" interpolation using scipy's <code>interp1d</code>. Also, note that for using cubic interpolation your data should have at least 4 points. So I added another year 2031 and another value too all weights (I got the new weight value by subtracting 1 from the last value of weights): </p> <p>Here's the code: </p> <pre><code>import matplotlib.pyplot as plt import pandas as pd from scipy.interpolate import interp1d import numpy as np # df['year'] = ['2005, 2005, 2005, 2015, 2015, 2015, 2030, 2030, 2030'] # df['name'] = ['A', 'B', 'C', 'A', 'B', 'C', 'A', 'B', 'C'] # df['weight'] = [80, 65, 88, 65, 60, 70, 60, 55, 65] df1 = pd.DataFrame() df1['Weight_A'] = [80, 65, 60 ,59] df1['Weight_B'] = [65, 60, 55 ,54] df1['Weight_C'] = [88, 70, 65 ,64] df1.index = [2005,2015,2030,2031] ax = df1.plot.line() ax.set_title('Before interpolation') ax.set_xlabel("year") ax.set_ylabel("weight") f1 = interp1d(df1.index, df1['Weight_A'],kind='cubic') f2 = interp1d(df1.index, df1['Weight_B'],kind='cubic') f3 = interp1d(df1.index, df1['Weight_C'],kind='cubic') df2 = pd.DataFrame() new_index = np.arange(2005,2031) df2['Weight_A'] = f1(new_index) df2['Weight_B'] = f2(new_index) df2['Weight_C'] = f3(new_index) df2.index = new_index ax2 = df2.plot.line() ax2.set_title('After interpolation') ax2.set_xlabel("year") ax2.set_ylabel("weight") plt.show() </code></pre> <p>And the results: </p> <p><a href="https://i.stack.imgur.com/QnDvl.png" rel="nofollow"><img src="https://i.stack.imgur.com/QnDvl.png" alt="Before interpolation"></a> <a href="https://i.stack.imgur.com/aUJGs.png" rel="nofollow"><img src="https://i.stack.imgur.com/aUJGs.png" alt="After interpolation"></a></p>
0
2016-10-18T18:37:23Z
[ "python", "pandas", "matplotlib", "plot", "lines" ]
Extract each object after edge detection using sobel filter
40,085,314
<p>Sobel Edge detection:<br> <img src="https://i.stack.imgur.com/bbgpD.png" alt="Sobel Edge detection"></p> <p>Original Image:<br> <img src="https://i.stack.imgur.com/L8yvq.jpg" alt="Original Image"></p> <p>I have used sobel edge detection technique to identify the boundaries of each object in the given image. How can I extract the objects in the original image using these boundaries. We can ignore the objects with smaller pixel count.</p>
0
2016-10-17T11:33:26Z
40,087,750
<p>What you are after is called image segmentation. Your case looks particularly difficult because of the low contrast between the furniture elements, and because of texture and shadows.</p> <p>You will also realize that you need to define what you call an "object", and that it is all but impossible to isolate the pieces of furniture in this scene.</p> <p>More bad news: neither Sobel nor Canny will be good enough to address this, as the true edges will be discontinuous in places and there will be many false responses.</p> <p>In my opinion, the current state of the art does not allow this problem to be solved.</p>
1
2016-10-17T13:30:10Z
[ "python", "opencv", "image-processing", "image-segmentation", "skimage" ]
Extract each object after edge detection using sobel filter
40,085,314
<p>Sobel Edge detection:<br> <img src="https://i.stack.imgur.com/bbgpD.png" alt="Sobel Edge detection"></p> <p>Original Image:<br> <img src="https://i.stack.imgur.com/L8yvq.jpg" alt="Original Image"></p> <p>I have used sobel edge detection technique to identify the boundaries of each object in the given image. How can I extract the objects in the original image using these boundaries. We can ignore the objects with smaller pixel count.</p>
0
2016-10-17T11:33:26Z
40,087,908
<h1>What</h1> <p>You want to use <code>cv2.findContours()</code> with <a href="http://docs.opencv.org/2.4/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html?highlight=findcontours#findcontours" rel="nofollow">hierarchy enabled</a>. Try <code>cv2.RETR_CCOMP</code>.</p> <h1>Why</h1> <p>That will try to find the regions that can be filled in. Or, more precisely, holes in closed contours. Since the Sobel filter returned soft edges, we want to detect the closed contours among the soft edges, which will simply be the edges themselves. The <strong>holes</strong> are the objects. </p> <h1>How</h1> <p>You'll get <code>contours</code>, a list of point arrays, and <code>hierarchy</code>, an array of 4-tuples. If hierarchy[i][3] is not -1, then contours[i] has a parent, and is therefore a hole, since RETR_CCOMP only allows 2 levels.</p> <p>I should note we've been working on the image segmentation problem for 50 years and no one has a great solution. You will find that this approach is often unreliable for arbitrary scenes. </p>
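<p>A rough sketch of the above. The file name and threshold value are placeholders, and note that <code>findContours</code> returns a different number of values depending on the OpenCV version, which the <code>[-2:]</code> slice papers over:</p> <pre><code>import cv2

edges = cv2.imread('sobel_edges.png', cv2.IMREAD_GRAYSCALE)    # placeholder file name
_, binary = cv2.threshold(edges, 50, 255, cv2.THRESH_BINARY)   # threshold value is a guess
contours, hierarchy = cv2.findContours(binary, cv2.RETR_CCOMP,
                                       cv2.CHAIN_APPROX_SIMPLE)[-2:]
# hierarchy[0][i][3] is the parent index, -1 when the contour has no parent
holes = [c for c, h in zip(contours, hierarchy[0]) if h[3] != -1]
big_holes = [c for c in holes if cv2.contourArea(c) &gt; 500]     # ignore small pixel counts
</code></pre>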
0
2016-10-17T13:36:55Z
[ "python", "opencv", "image-processing", "image-segmentation", "skimage" ]
Size of list and string
40,085,333
<p>I am trying to fetch the size of list in bytes and also size of string in bytes.</p> <p>If we see the output below for the code, size of list is shown as <code>52 bytes</code>, where as when I join the list and check the size, output is <code>35 bytes</code>. At last I tried to get size of string <code>"Iamtestingsize"</code> the output was again <code>35 bytes</code>. So, size of string after "join" and also size of string "Iamtestingsize" is same.</p> <p>I have 2 questions here:</p> <p>1) why is the size of list showing a different output ? also, please let me know if you have any idea on how to fetch the size of contents of list ?</p> <p>2) I thought, 1 byte == 1 character and I was expecting size of strings msgstr and string will show as 14 bytes instead of 35. Please let me know if am missing anything here ?</p> <p>3) when I perform len() on list and strings, for msgstr and string - 14 was returned whereas length of list returned 4 which is as I expected. </p> <pre><code>import sys list = ['I', 'am', 'testing', 'size'] print sys.getsizeof(list) msgstr = "".join(list) print "size of msgstr is " + str(sys.getsizeof(msgstr)) print msgstr string = "Iamtestingsize" print "size of str is " + str(sys.getsizeof(string)) print len(list) print len(msgstr) print len(string) Output: 52 size of msgstr is 35 Iamtestingsize size of str is 35 4 14 14 </code></pre> <p>Note: I am using python 2.7</p>
0
2016-10-17T11:34:38Z
40,085,512
<ol> <li><p>A list (any list) data structure requires additional maintenance overhead to keep elements inside it. This overhead is reflected in the difference of the results of <code>getsizeof</code>.</p></li> <li><p>Python strings are a <a href="https://docs.python.org/2.7/library/stdtypes.html#typesseq" rel="nofollow">text sequence type - str</a>, and not C strings or anything alike. Same as Python lists, there is associated metadata involved, beyond the contents of the string alone:</p></li> </ol> <hr> <pre><code>Python 2.7.10 (default, Jul 30 2016, 18:31:42) [GCC 4.2.1 Compatible Apple LLVM 8.0.0 (clang-800.0.34)] on darwin Type "help", "copyright", "credits" or "license" for more information. &gt;&gt;&gt; import sys &gt;&gt;&gt; sys.getsizeof(b'asd') 40 &gt;&gt;&gt; sys.getsizeof('asd') 40 &gt;&gt;&gt; sys.getsizeof(u'asd') 56 </code></pre> <hr> <ol start="3"> <li>The length of a string is, intuitively, defined as the number of characters in it.</li> </ol>
2
2016-10-17T11:43:10Z
[ "python", "python-2.7" ]
Aggregating a dataframe to give the sum of values
40,085,339
<p>I tried to group data based on multiple fields but the groupby is not coming in a perfect dataframe</p> <pre><code>Comgroup Cd Geo IMT Best Remit Name Q1 Q2 Q3 Q4 Total IT Product AP ANZ AVNET 0 0 0 823.09 823.09 IT Product AG ANZ TOSHIBA 0 0 5065.4 237060.72 242126.12 IT Product EMEA ANZ LENOVO 126664.47 0 0 113285.78 239950.25 IT Product AP ANZ LENOVO 196154.85 0 1122.15 10252.13 207529.13 </code></pre> <p>I wrote the code as below and was the grouping has come in a totally different pattern. </p> <pre><code>f = {'Q1':['sum'] , 'Q2':['sum'] , 'Q3':['sum'] , 'Q4':['sum'], 'Total':['sum']} total_spendfinal = total_spendfinal.groupby(['Geo','IMT','Spend Type','Spend List']).agg(f) </code></pre> <p><a href="https://i.stack.imgur.com/TjFNx.jpg" rel="nofollow">Image</a> of how the dataframe looks like is attached. </p>
1
2016-10-17T11:34:50Z
40,087,968
<p>You need to call <code>total_spendfinal.reset_index(inplace=True)</code> after the <code>groupby</code>/<code>agg</code> step, so that the grouping columns (<code>Geo</code>, <code>IMT</code>, and so on) move out of the index and become ordinary columns again.</p>
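<p>A sketch of how that fits into the code from the question. The scalar <code>'sum'</code> values are an extra tweak, not part of the answer above, to avoid the extra level of column names that <code>['sum']</code> creates:</p> <pre><code>f = {'Q1': 'sum', 'Q2': 'sum', 'Q3': 'sum', 'Q4': 'sum', 'Total': 'sum'}
total_spendfinal = total_spendfinal.groupby(['Geo', 'IMT', 'Spend Type', 'Spend List']).agg(f)
total_spendfinal.reset_index(inplace=True)   # group keys become ordinary columns again
</code></pre>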
0
2016-10-17T13:39:22Z
[ "python", "pandas", "dataframe" ]
Which scipy function should I use to interpolate from a rectangular grid to regularly spaced rectangular grid in 2D?
40,085,367
<p>I pretty new to python, and I'm looking for the most efficient pythonic way to interpolate from a grid to another one. The original grid is a structured grid (the terms regular or rectangular grid are also used), and the spacing is not uniform. The new grid is a regularly spaced grid. Both grids are 2D. For now it's ok using a simple interpolation method like linear interpolation, pheraps in the future I could try something different like bicubic.</p> <p>I'm aware that there are methods to interpolate from an irregular grid to a regular one, however given the regular structure of my original grid, more efficient methods should be available.</p> <p>After searching in the scipy docs, I have found 2 methods that seem to fit my case: <code>scipy.interpolate.RegularGridInterpolator</code> and <code>scipy.interpolate.RectBivariateSpline</code>. I don't understand the difference between the two functions, which one should I use? Is the difference purely in the interpolation methods? Also, while the non-uniform spacing of the original grid is explicitly mentioned in <code>RegularGridInterpolator</code>, <code>RectBivariateSpline</code> doc is silent about it.</p> <p>Thanks,</p> <p>Andrea</p>
0
2016-10-17T11:36:40Z
40,087,542
<h1>RectBivariateSpline</h1> <p>Imagine your grid as a canyon, where the high values are peaks and the low values are valleys. The bivariate spline is going to try to fit a thin sheet over that canyon to interpolate. This will still work on irregularly spaced input, as long as the x and y arrays you supply are also irregularly spaced, and everything still lies on a rectangular grid.</p> <h1>RegularGridInterpolator</h1> <p>Same canyon, but now we'll linearly interpolate the surrounding gridpoints to interpolate. We'll assume the input data is regularly spaced to save some computation. It sounds like this won't work for you. </p> <h1>Now What?</h1> <p>Both of these map 2D to 1D. It sounds like you have an input space with irregular, but rectangularly spaced sample points, and an output space with regularly spaced sample points. You might just try LinearNDInterpolator; since you're in 2D it won't be that much more expensive.</p> <p>If you're trying to interpolate a mapping between two 2D things, you'll want to do two interpolations, one that interpolates (x1, y1) -> x2 and one that interpolates (x1, y1) -> y2. </p> <p>Vstacking the output of those will give you an array of points in your output space. </p> <p>I don't know of a more efficient method in scipy for taking advantage of the expected structure of the interpolation output, given an irregular grid input.</p>
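<p>For the scalar case in the question (values on a non-uniform rectangular grid, resampled onto a regular one), a minimal sketch of the RectBivariateSpline route; the grid values below are made up, and <code>kx=ky=1</code> gives the linear interpolation mentioned in the question:</p> <pre><code>import numpy as np
from scipy.interpolate import RectBivariateSpline

# non-uniformly spaced, but rectangular, source grid (made-up numbers)
x = np.array([0.0, 0.5, 1.5, 3.0])
y = np.array([0.0, 1.0, 1.2, 2.0, 4.0])
z = np.random.rand(len(x), len(y))          # values sampled on that grid

spline = RectBivariateSpline(x, y, z, kx=1, ky=1)

# regularly spaced target grid
xi = np.linspace(x[0], x[-1], 50)
yi = np.linspace(y[0], y[-1], 80)
zi = spline(xi, yi)                         # shape (50, 80), evaluated on the product grid
</code></pre>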
0
2016-10-17T13:21:24Z
[ "python", "numpy", "scipy", "grid" ]
Find pd.DataFrame cell that has the value of None
40,085,390
<p>I have a <code>pd.DataFrame</code> that looks like this:</p> <pre><code>index A B 0 apple bear 0 axis None 0 ant None 0 avocado None </code></pre> <p>and my goal is to simple drop those row if they have <code>None</code> in the B column.</p> <p>I tried <code>df[df['B'] != None]</code> but pandas returns me the exact same DataFrame without deleting the rows. And <code>df[df['B'] == None]</code> gives only an empty DataFrame.</p> <p>Is this the correct way of trying to match the None value? I am not sure what is going on but <code>test = None</code> and then <code>test == None</code> works fine (returns True). Any thoughts are appreciated!</p>
0
2016-10-17T11:37:42Z
40,085,459
<p>You could try this:</p> <pre><code>df.dropna(subset=["B"]) </code></pre>
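<p>If you would rather keep the boolean-indexing style from the question, compare with <code>notnull()</code>; pandas treats <code>None</code> as a missing value, and missing values never compare equal to anything, which is why both <code>== None</code> and <code>!= None</code> behave unexpectedly:</p> <pre><code>df = df[df['B'].notnull()]
</code></pre>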
0
2016-10-17T11:40:27Z
[ "python", "pandas", "dataframe" ]
Accessing dictionary keys&values via list of keys and values
40,085,536
<p>This is going to be a very embarrassing first post from me - just coming back to coding in Python after learning basics ~6 months ago and not using it since then.</p> <p>I am coding a Blackjack game in object oriented approach, and defined a Deck object as shown below:</p> <pre><code>class Deck(object): suits = ["spades", "clubs", "diamonds", "hearts"] ranks = ["2", "3", "4", "5", "6", "7", "8", "9", "10", "Jack", "Queen", "King", "Ace"] def __init__(self): self.spades = {"2": 0, "3": 0, "4": 0, "5": 0, "6": 0, "7": 0, "8": 0, "9": 0, "10": 0, "Jack": 0, "Queen": 0, "King": 0, "Ace": 0} self.clubs = {"2": 0, "3": 0, "4": 0, "5": 0, "6": 0, "7": 0, "8": 0, "9": 0, "10": 0, "Jack": 0, "Queen": 0, "King": 0, "Ace": 0} self.diamonds = {"2": 0, "3": 0, "4": 0, "5": 0, "6": 0, "7": 0, "8": 0, "9": 0, "10": 0, "Jack": 0, "Queen": 0, "King": 0, "Ace": 0} self.hearts = {"2": 0, "3": 0, "4": 0, "5": 0, "6": 0, "7": 0, "8": 0, "9": 0, "10": 0, "Jack": 0, "Queen": 0, "King": 0, "Ace": 0} </code></pre> <p>Afterwards, I wanted to create a neat method which would initiate this deck by filling all 4 dictionaries holding count of ranks in each suit, by writing something like below (that's the reason for class attributes I defined above:</p> <pre><code>def initialize_deck(self): for suit in self.suits: for rank in self.ranks: self.suit[rank] = 1 </code></pre> <p>The problem is, the code just does not do what I think it should be doing (as in e.g.: pulling first suit from self.suits list, using it as a name of the dictionary iterating over all its keys and setting their associated values to 1). Instead I just get an error "AttributeError: 'Deck' object has no attribute 'suit'".</p> <p>What am I doing wrong in here? Is there a neat way of writing what I have in mind with 2 nested loops, instead of writing it like below?</p> <pre><code>def initialize_deck(self): for rank in self.ranks: self.spades[rank] = 1 for rank in self.ranks: self.clubs[rank] = 1 for rank in self.ranks: self.diamonds[rank] = 1 for rank in self.ranks: self.hearts[rank] = 1 </code></pre> <p>Thanks in advance for answers to what I know is probably a very basic problem.</p> <p>Cheers</p>
1
2016-10-17T11:44:06Z
40,085,612
<p>You can use <code>getattr</code> function do get attribute from object</p> <pre><code>def initialize_deck(self): for suit in self.suits: suit_dict = getattr(self, suit) for rank in self.ranks: suit_dict[rank] = 1 </code></pre> <p>Or you can do it using <code>setattr</code></p> <pre><code>def initialize_deck(self): rank_map = {key: 1 for key in self.ranks} for suit in self.suits: setattr(self, suit, rank_map.copy()) </code></pre>
1
2016-10-17T11:48:07Z
[ "python", "dictionary", "iteration" ]
How to use Python main() function in GAE (Google App Engine)?
40,085,542
<p>I'd like to use a <code>main()</code> function in my GAE code <br>(note: the code below is just a minimal demonstration for a much larger program, hence the need for a <code>main()</code>).</p> <p>If I use the following code, it performs as expected:</p> <pre><code>import webapp2 class GetHandler(webapp2.RequestHandler): def get(self): self.response.headers['Content-Type'] = 'text/plain' self.response.write('in GET') class SetHandler(webapp2.RequestHandler): def get(self): self.response.headers['Content-Type'] = 'text/plain' self.response.write('in SET') app = webapp2.WSGIApplication([ ('/get', GetHandler), ('/set', SetHandler), ], debug=True) </code></pre> <p>where my <code>app.yaml</code> is:</p> <pre><code>runtime: python27 api_version: 1 threadsafe: true handlers: - url: /.* script: main.app </code></pre> <p><strong>However</strong>, I cannot figure out how to implement a <code>main()</code> function, and still have <code>app</code> act as it does in the code at the top. Namely, the following:</p> <pre><code># with main() import webapp2 class GetHandler(webapp2.RequestHandler): def get(self): self.response.headers['Content-Type'] = 'text/plain' self.response.write('in GET') class SetHandler(webapp2.RequestHandler): def get(self): self.response.headers['Content-Type'] = 'text/plain' self.response.write('in SET') def main(): app = webapp2.WSGIApplication([ ('/get', GetHandler), ('/set', SetHandler), ], debug=True) if __name__ == '__main__': main() </code></pre> <p>gives the following error for <code>http://localhost:8080/get:</code></p> <pre><code>$ dev_appserver.py . INFO 2016-10-17 11:29:30,962 devappserver2.py:769] Skipping SDK update check. INFO 2016-10-17 11:29:31,059 api_server.py:205] Starting API server at: http://localhost:45865 INFO 2016-10-17 11:29:31,069 dispatcher.py:197] Starting module "default" running at: http://localhost:8080 INFO 2016-10-17 11:29:31,073 admin_server.py:116] Starting admin server at: http://localhost:8000 ERROR 2016-10-17 11:29:37,461 wsgi.py:263] Traceback (most recent call last): File "/home/.../sdk/platform/google_appengine/google/appengine/runtime/wsgi.py", line 240, in Handle handler = _config_handle.add_wsgi_middleware(self._LoadHandler()) File "/home/.../sdk/platform/google_appengine/google/appengine/runtime/wsgi.py", line 302, in _LoadHandler raise err ImportError: &lt;module 'main' from '/home/.../main.pyc'&gt; has no attribute app INFO 2016-10-17 11:29:37,496 module.py:788] default: "GET /get HTTP/1.1" 500 - </code></pre> <h2>Edit 1</h2> <p>Trying:</p> <pre><code># with main() import webapp2 app = webapp2.RequestHandler() class GetHandler(webapp2.RequestHandler): def get(self): self.response.headers['Content-Type'] = 'text/plain' self.response.write('in GET') class SetHandler(webapp2.RequestHandler): def get(self): self.response.headers['Content-Type'] = 'text/plain' self.response.write('in SET') def main(): global app app = webapp2.WSGIApplication([ ('/get', GetHandler), ('/set', SetHandler), ], debug=True) return app if __name__ == '__main__': app = main() </code></pre> <p>Results in:</p> <pre><code>INFO 2016-10-17 12:30:34,751 module.py:402] [default] Detected file changes: /home/openstack/googleAppEngine/fastsimon/task2/task2_with_main/main.py ERROR 2016-10-17 12:30:42,328 wsgi.py:279] Traceback (most recent call last): File "/home/openstack/googleAppEngine/google-cloud-sdk/platform/google_appengine/google/appengine/runtime/wsgi.py", line 267, in Handle result = handler(dict(self._environ), self._StartResponse) TypeError: 
'RequestHandler' object is not callable INFO 2016-10-17 12:30:42,335 module.py:788] default: "GET /get HTTP/1.1" 500 - </code></pre>
0
2016-10-17T11:44:21Z
40,087,671
<p>GAE apps are not designed to be standalone apps, a <code>main()</code> function doesn't make a lot of sense for them.</p> <p>Basically GAE apps are really just collections of handler code and rules/configurations designed to extend and customize the behaviour of the generic GAE infra/sandbox code so that it behaves your app. You can see that from your backtrace - other code is invoking your handler code (and the stack before reaching your code can be <em>a lot deeper</em> than that).</p> <p>In your particular case the <code>app</code> variable <strong>must</strong> be a global in <code>main.py</code> to match the <code>script: main.app</code> config line in the <code>app.yaml</code> config file. This is what the traceback is about.</p> <p>As for organizing the code for huge apps, there are other ways of doing it:</p> <ul> <li><p>splitting the app in multiple <a href="https://cloud.google.com/appengine/docs/python/an-overview-of-app-engine#services_the_building_blocks_of_app_engine" rel="nofollow">modules/services</a>, each with their own <code>app.yaml</code> config file. For example: <a href="http://stackoverflow.com/questions/34110178/can-a-default-module-in-a-google-app-engine-app-be-a-sibling-of-a-non-default-mo">Can a default module in a Google App Engine app be a sibling of a non-default module in terms of folder structure?</a></p></li> <li><p>splitting a service/module into multiple "scripts" - primary entry points into the <code>app.yaml</code> file similar to your <code>main.py</code> file, each with their own <code>app</code> config` - which really are just mappers between routes and handlers. For example: <a href="http://stackoverflow.com/questions/39349590/app-engine-throws-404-not-found-for-any-path-but-root">App Engine throws 404 Not Found for any path but root</a></p></li> <li><p>splitting the handlers for one <code>app</code> mapper into multiple files, using webapp2's lazy loaded handler technique. Examples:</p> <ul> <li><a href="http://stackoverflow.com/questions/33453441/app-engine-few-big-scripts-or-many-small-ones/33457227#33457227">App Engine: Few big scripts or many small ones?</a></li> <li><a href="http://stackoverflow.com/questions/34635012/what-determines-start-up-time-of-dynamic-instance-and-can-it-vary-between-weeks/34637044#34637044">What determines start up time of dynamic instance and can it vary between weeks if code is same</a> </li> </ul></li> </ul> <p>In an extreme case a <code>main.py</code> file could contain <em>just</em> the <code>app</code> variable - that is really the only requirement.</p>
1
2016-10-17T13:26:25Z
[ "python", "python-2.7", "google-app-engine", "main", "wsgi" ]
How to use Python main() function in GAE (Google App Engine)?
40,085,542
<p>I'd like to use a <code>main()</code> function in my GAE code <br>(note: the code below is just a minimal demonstration for a much larger program, hence the need for a <code>main()</code>).</p> <p>If I use the following code, it performs as expected:</p> <pre><code>import webapp2 class GetHandler(webapp2.RequestHandler): def get(self): self.response.headers['Content-Type'] = 'text/plain' self.response.write('in GET') class SetHandler(webapp2.RequestHandler): def get(self): self.response.headers['Content-Type'] = 'text/plain' self.response.write('in SET') app = webapp2.WSGIApplication([ ('/get', GetHandler), ('/set', SetHandler), ], debug=True) </code></pre> <p>where my <code>app.yaml</code> is:</p> <pre><code>runtime: python27 api_version: 1 threadsafe: true handlers: - url: /.* script: main.app </code></pre> <p><strong>However</strong>, I cannot figure out how to implement a <code>main()</code> function, and still have <code>app</code> act as it does in the code at the top. Namely, the following:</p> <pre><code># with main() import webapp2 class GetHandler(webapp2.RequestHandler): def get(self): self.response.headers['Content-Type'] = 'text/plain' self.response.write('in GET') class SetHandler(webapp2.RequestHandler): def get(self): self.response.headers['Content-Type'] = 'text/plain' self.response.write('in SET') def main(): app = webapp2.WSGIApplication([ ('/get', GetHandler), ('/set', SetHandler), ], debug=True) if __name__ == '__main__': main() </code></pre> <p>gives the following error for <code>http://localhost:8080/get:</code></p> <pre><code>$ dev_appserver.py . INFO 2016-10-17 11:29:30,962 devappserver2.py:769] Skipping SDK update check. INFO 2016-10-17 11:29:31,059 api_server.py:205] Starting API server at: http://localhost:45865 INFO 2016-10-17 11:29:31,069 dispatcher.py:197] Starting module "default" running at: http://localhost:8080 INFO 2016-10-17 11:29:31,073 admin_server.py:116] Starting admin server at: http://localhost:8000 ERROR 2016-10-17 11:29:37,461 wsgi.py:263] Traceback (most recent call last): File "/home/.../sdk/platform/google_appengine/google/appengine/runtime/wsgi.py", line 240, in Handle handler = _config_handle.add_wsgi_middleware(self._LoadHandler()) File "/home/.../sdk/platform/google_appengine/google/appengine/runtime/wsgi.py", line 302, in _LoadHandler raise err ImportError: &lt;module 'main' from '/home/.../main.pyc'&gt; has no attribute app INFO 2016-10-17 11:29:37,496 module.py:788] default: "GET /get HTTP/1.1" 500 - </code></pre> <h2>Edit 1</h2> <p>Trying:</p> <pre><code># with main() import webapp2 app = webapp2.RequestHandler() class GetHandler(webapp2.RequestHandler): def get(self): self.response.headers['Content-Type'] = 'text/plain' self.response.write('in GET') class SetHandler(webapp2.RequestHandler): def get(self): self.response.headers['Content-Type'] = 'text/plain' self.response.write('in SET') def main(): global app app = webapp2.WSGIApplication([ ('/get', GetHandler), ('/set', SetHandler), ], debug=True) return app if __name__ == '__main__': app = main() </code></pre> <p>Results in:</p> <pre><code>INFO 2016-10-17 12:30:34,751 module.py:402] [default] Detected file changes: /home/openstack/googleAppEngine/fastsimon/task2/task2_with_main/main.py ERROR 2016-10-17 12:30:42,328 wsgi.py:279] Traceback (most recent call last): File "/home/openstack/googleAppEngine/google-cloud-sdk/platform/google_appengine/google/appengine/runtime/wsgi.py", line 267, in Handle result = handler(dict(self._environ), self._StartResponse) TypeError: 
'RequestHandler' object is not callable INFO 2016-10-17 12:30:42,335 module.py:788] default: "GET /get HTTP/1.1" 500 - </code></pre>
0
2016-10-17T11:44:21Z
40,114,504
<p>Seems that the solution was quite simple (it kept eluding me because it hid in plain sight): <code>__name__</code> is <strong>main</strong> and not <strong>__main__</strong>!</p> <p>In short, the following code utilises a <code>main()</code> as expected:</p> <pre><code># with main() import webapp2 class GetHandler(webapp2.RequestHandler): def get(self): self.response.headers['Content-Type'] = 'text/plain' self.response.write('in GET') class SetHandler(webapp2.RequestHandler): def get(self): self.response.headers['Content-Type'] = 'text/plain' self.response.write('in SET') def main(): global app app = webapp2.WSGIApplication([ ('/get', GetHandler), ('/set', SetHandler), ], debug=True) # Note that it's 'main' and not '__main__' if __name__ == 'main': main() </code></pre>
0
2016-10-18T17:28:24Z
[ "python", "python-2.7", "google-app-engine", "main", "wsgi" ]
python 2.7 unicode greek characters using matplotlib
40,085,589
<p>I have this code to plot a figure:</p> <pre><code>plt.plot(x,y, color = "blue", label = u'\u03C9') </code></pre> <p>The unicode doesn't work in the plot label, but it works when I try</p> <pre><code>print(u'\u03C9') </code></pre> <p>Thanks in advance.</p>
0
2016-10-17T11:47:04Z
40,085,718
<p>Run <code>plt.legend()</code> to show the labels.</p> <p>You can see a working version here: <a href="https://gist.github.com/davidbrai/742de3cb7def6da8f125e4572e691ee4" rel="nofollow">https://gist.github.com/davidbrai/742de3cb7def6da8f125e4572e691ee4</a></p>
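<p>A minimal self-contained sketch (the data points are made up just to have something to plot):</p> <pre><code>import matplotlib.pyplot as plt

x = [1, 2, 3]
y = [1, 4, 9]

plt.plot(x, y, color='blue', label=u'\u03C9')  # label is the Greek omega
plt.legend()  # without this call the label is never drawn
plt.show()
</code></pre>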
0
2016-10-17T11:53:19Z
[ "python", "matplotlib", "unicode", "label" ]
Appending multiple elements with etree, how to write each element on new line?
40,085,606
<p>I am adding some elements to some nodes in a a graphml file using Python and etree. I have two lists of strings with some data which I want to write to my .graphml file. I have managed to do this but when using the .append() function it writes the two new elements on the same line. Is there a good way to get a line separation between these new elements while writing them in the same loop?</p> <p>I have the following dataset:</p> <pre><code>&lt;?xml version="1.0" encoding="UTF-8"?&gt; &lt;graphml xmlns="http://graphml.graphdrawing.org/xmlns"&gt; &lt;node id="node1"&gt; &lt;data key="label"&gt;node1&lt;/data&gt; &lt;data key="degree"&gt;6&lt;/data&gt; &lt;/node&gt; &lt;node id="node2"&gt; &lt;data key="label"&gt;node2&lt;/data&gt; &lt;data key="degree"&gt;32&lt;/data&gt; &lt;/node&gt; &lt;node id="node3"&gt; &lt;data key="label"&gt;node3&lt;/data&gt; &lt;data key="degree"&gt;25&lt;/data&gt; &lt;/node&gt; &lt;/graphml&gt; </code></pre> <p>and two lists containing years:</p> <pre><code>lastActive["2013","2014","2015"] lastRelated["2012","2014","2011"] </code></pre> <p>Using the following code to append the lists as elements in the dataset:</p> <pre><code>for node in root: #checks if correct node for index, i in enumerate(nameOfNode): if i == node[0].text: #create and add lastRelated element lastRelated = Element('data') lastRelated.set('key', 'lastRelated') node.append(lastRelated) lastRelated.text = lastRelated[index] #create and add lastActive element lastActive = Element('data') lastActive.set('key', 'lastActive') node.append(lastActive) lastActive.text = lastActive[index] updatedText = etree.tostring(node) #write to file file = open('dataset.graphml', 'wb') file.write(updatedText) file.close() </code></pre> <p>The following results are:</p> <pre><code> &lt;node id="node1"&gt; &lt;data key="label"&gt;node1&lt;/data&gt; &lt;data key="degree"&gt;6&lt;/data&gt; &lt;data key="lastActive"&gt;2015&lt;/data&gt;&lt;data key="lastRelated"&gt;2011&lt;/data&gt;&lt;/node&gt; </code></pre> <p>I would like it to be structured as:</p> <pre><code> &lt;node id="node1"&gt; &lt;data key="label"&gt;node1&lt;/data&gt; &lt;data key="degree"&gt;6&lt;/data&gt; &lt;data key="lastActive"&gt;2015&lt;/data&gt; &lt;data key="lastRelated"&gt;2011&lt;/data&gt; &lt;/node&gt; </code></pre> <p>Anyone got a solution for this?</p>
0
2016-10-17T11:47:48Z
40,092,623
<p>You should be able to get the wanted output by providing a suitable value for the <a href="https://docs.python.org/2.7/library/xml.etree.elementtree.html#xml.etree.ElementTree.Element.tail" rel="nofollow"><code>tail</code></a> property on the new elements. The <code>tail</code> is text that comes after an element's end tag and before the following element's start tag.</p> <pre><code>... thetail = "\n " lastRelated.tail = thetail lastActive.tail = thetail updatedText = etree.tostring(node) ... </code></pre>
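<p>A small self-contained sketch of the idea, with made-up element values rather than the full script from the question:</p> <pre><code>import xml.etree.ElementTree as etree
from xml.etree.ElementTree import Element

node = Element('node')
node.set('id', 'node1')
node.text = '\n    '   # whitespace before the first child

child = Element('data')
child.set('key', 'lastActive')
child.text = '2015'
child.tail = '\n'      # text written after the child's end tag
node.append(child)

print(etree.tostring(node))
# &lt;node id="node1"&gt;
#     &lt;data key="lastActive"&gt;2015&lt;/data&gt;
# &lt;/node&gt;
</code></pre>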
0
2016-10-17T17:47:18Z
[ "python", "xml", "elementtree", "graphml" ]
Using regex to match words and numbers with whitespaces
40,085,728
<p>I am trying to create a regex pattern that would match the following:</p> <pre><code>Updated x word word </code></pre> <p><code>x</code> = being a number, e.g. 2</p> <p>I have tried the following:</p> <pre><code>(Updated\s\d\s\w\s\w) </code></pre> <p>For example, I would like: <code>Updated 2 mins ago</code>, to match.</p> <p>But it doesn't seem to work: <a href="http://regexr.com/3ef05" rel="nofollow">http://regexr.com/3ef05</a></p>
1
2016-10-17T11:54:02Z
40,085,767
<p>You need to use <a href="http://www.regular-expressions.info/repeat.html" rel="nofollow">quantifiers</a> to show that the digit and word groups can consist of more than one character:</p> <pre><code>(Updated\s\d+\s\w+\s\w+) </code></pre> <p>This works: <a href="http://regexr.com/3ef08" rel="nofollow">http://regexr.com/3ef08</a></p>
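<p>A quick way to check the pattern from Python itself (the test string is just an example):</p> <pre><code>import re

pattern = re.compile(r'Updated\s\d+\s\w+\s\w+')

match = pattern.search('Updated 2 mins ago')
if match:
    print(match.group())  # 'Updated 2 mins ago'
</code></pre>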
1
2016-10-17T11:56:09Z
[ "python", "regex", "python-2.7" ]
Using regex to match words and numbers with whitespaces
40,085,728
<p>I am trying to create a regex pattern that would match the following:</p> <pre><code>Updated x word word </code></pre> <p><code>x</code> = being a number, e.g. 2</p> <p>I have tried the following:</p> <pre><code>(Updated\s\d\s\w\s\w) </code></pre> <p>For example, I would like: <code>Updated 2 mins ago</code>, to match.</p> <p>But it doesn't seem to work: <a href="http://regexr.com/3ef05" rel="nofollow">http://regexr.com/3ef05</a></p>
1
2016-10-17T11:54:02Z
40,085,771
<p>Try this:</p> <pre><code>(Updated\s\d+\s\w+\s\w+) </code></pre> <p>The <code>+</code> means "one or more characters of this type", which is probably what you need here.</p> <p>See it here: <a href="http://regexr.com/3ef0b" rel="nofollow">http://regexr.com/3ef0b</a></p>
1
2016-10-17T11:56:21Z
[ "python", "regex", "python-2.7" ]
Median of each columns in python
40,085,735
<p>I am trying to find median of each column of my list in python. Here is my code snippet-</p> <pre><code>X_train1=np.array(X_train1).astype(np.float) median1= X_train1.median(axis=0) </code></pre> <p>But I am getting the following error-</p> <pre><code>AttributeError: 'numpy.ndarray' object has no attribute 'median' </code></pre> <p>Here is my X_train1 array-</p> <pre><code>[[ 100. 100. 100. ..., 100. 100. 100. ] [ 91.56786232 96.62190102 98.08459941 ..., 100.4891341 99.60223361 93.26315789] [ 92.90859973 97.64075269 103.1123983 ..., 96.08483893 99.20446722 86.42105263] ..., [ 1193.656511 43.95921162 204.9478628 ..., 260.0710993 196.0911803 12.53894737] [ 1199.482215 44.61122178 207.833733 ..., 266.2309527 196.7031286 12.66526316] [ 1226.497073 44.64553811 209.5192855 ..., 267.744815 199.6481297 13.01894737]] </code></pre>
0
2016-10-17T11:54:13Z
40,085,787
<p>You should use the module-level function <code>np.median</code>: unlike <code>mean</code>, a NumPy array has no <code>median</code> method, which is what the <code>AttributeError</code> is telling you. e.g.:</p> <pre><code>np.median(X_train1,axis=0) </code></pre>
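<p>For example, a small self-contained check (the array here is made up):</p> <pre><code>import numpy as np

X_train1 = np.array([[100.0, 100.0],
                     [ 92.9,  97.6],
                     [ 91.5,  96.6]])

median1 = np.median(X_train1, axis=0)  # median of each column
print(median1)                         # [ 92.9  97.6]
</code></pre>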
3
2016-10-17T11:57:02Z
[ "python", "numpy" ]
Importing dictionaries into python script?
40,085,759
<p>So i am creating a very basic text based game, however i want the answers the program gives to be relatively correct in response to different inputs. Instead of having to tell the program to write a specific thing for each different answer, i want to be able to import a dictionary of types into the script and have keywords for the different groups of words (like negative and positive connotations). Is this possible? Everything i have read hasn't really answered my question or done what i wanted...</p> <p>my mock up scripts below:</p> <p>Diction included:</p> <pre><code>a=raw_input("Are you well?") if a=='yes': print("Good to hear.") elif a=='yep": print("Good to hear!") #I don't want to put it all in manually or use 'or' elif a=='no': print("That's not good!") elif a=='nope': print("That's not good!") else: print("Oh.") </code></pre> <p>with dictionary:</p> <h1>mydiction.py</h1> <pre><code>NegConns=('no','nope','never','No','Nope') #'no' etc. is what the user inputs, NegConns is the keyword PosConns=('yes','sure','okay','fine','good') </code></pre> <h1>testmod.py</h1> <pre><code>import mydiction question=raw_input("How are you?") if question== NegConns: print("That's not good.") elif question==PosConns: print("Good to hear.") else: print("oh.") </code></pre> <p>So basically if the input is negative the program is sympathetic, if the input is positive the program congratulates. I'm not sure if this is possible for exactly what i'm wanting or if i'm going about this the wrong way, and i can't seem to find help so i'm putting it out there... Thanks in advance.</p>
1
2016-10-17T11:55:35Z
40,085,932
<p>This is almost correct, except you want to tweak your import statement a bit:</p> <p><code>from mydiction import NegConns, PosConns </code></p> <p>and change your equality tests to array-membership tests, like so:</p> <p><code>if question in NegConns:</code> or, <code>if question in PosConns: </code></p> <p>Also, you should check out <a href="https://docs.python.org/3/tutorial/controlflow.html#default-argument-values" rel="nofollow">https://docs.python.org/3/tutorial/controlflow.html#default-argument-values</a>. The example code snippet looks almost exactly like the problem you're trying to solve.</p> <p>Additionally, consider using dictionaries or sets instead of lists/tuples. You should get the benefit of O(1) lookup while using the <code>in</code> operator on dictionaries/sets.</p>
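<p>Putting those pieces together, <em>testmod.py</em> would look roughly like this (keeping the names from the question):</p> <pre><code>from mydiction import NegConns, PosConns

question = raw_input("How are you?")

if question in NegConns:
    print("That's not good.")
elif question in PosConns:
    print("Good to hear.")
else:
    print("oh.")
</code></pre>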
1
2016-10-17T12:04:22Z
[ "python", "python-2.7" ]
Importing dictionaries into python script?
40,085,759
<p>So i am creating a very basic text based game, however i want the answers the program gives to be relatively correct in response to different inputs. Instead of having to tell the program to write a specific thing for each different answer, i want to be able to import a dictionary of types into the script and have keywords for the different groups of words (like negative and positive connotations). Is this possible? Everything i have read hasn't really answered my question or done what i wanted...</p> <p>my mock up scripts below:</p> <p>Diction included:</p> <pre><code>a=raw_input("Are you well?") if a=='yes': print("Good to hear.") elif a=='yep": print("Good to hear!") #I don't want to put it all in manually or use 'or' elif a=='no': print("That's not good!") elif a=='nope': print("That's not good!") else: print("Oh.") </code></pre> <p>with dictionary:</p> <h1>mydiction.py</h1> <pre><code>NegConns=('no','nope','never','No','Nope') #'no' etc. is what the user inputs, NegConns is the keyword PosConns=('yes','sure','okay','fine','good') </code></pre> <h1>testmod.py</h1> <pre><code>import mydiction question=raw_input("How are you?") if question== NegConns: print("That's not good.") elif question==PosConns: print("Good to hear.") else: print("oh.") </code></pre> <p>So basically if the input is negative the program is sympathetic, if the input is positive the program congratulates. I'm not sure if this is possible for exactly what i'm wanting or if i'm going about this the wrong way, and i can't seem to find help so i'm putting it out there... Thanks in advance.</p>
1
2016-10-17T11:55:35Z
40,086,122
<p>I think that's OK, but you could create a list with the negative and positive answers and check membership with the <code>in</code> operator in the if statement. See how it works:</p> <pre><code>negativeAnswers = ['no', 'nope', "i don't think so"] positiveAnswers = ['yeah', 'yes', 'of course'] question = raw_input('How are you?') if question.lower() in positiveAnswers: print('Nice to hear that!') elif question.lower() in negativeAnswers: print('Oh') else: print("Sorry. I didn't understand you. :(") </code></pre> <p>I hope it helps you.</p>
0
2016-10-17T12:14:20Z
[ "python", "python-2.7" ]
Importing dictionaries into python script?
40,085,759
<p>So i am creating a very basic text based game, however i want the answers the program gives to be relatively correct in response to different inputs. Instead of having to tell the program to write a specific thing for each different answer, i want to be able to import a dictionary of types into the script and have keywords for the different groups of words (like negative and positive connotations). Is this possible? Everything i have read hasn't really answered my question or done what i wanted...</p> <p>my mock up scripts below:</p> <p>Diction included:</p> <pre><code>a=raw_input("Are you well?") if a=='yes': print("Good to hear.") elif a=='yep": print("Good to hear!") #I don't want to put it all in manually or use 'or' elif a=='no': print("That's not good!") elif a=='nope': print("That's not good!") else: print("Oh.") </code></pre> <p>with dictionary:</p> <h1>mydiction.py</h1> <pre><code>NegConns=('no','nope','never','No','Nope') #'no' etc. is what the user inputs, NegConns is the keyword PosConns=('yes','sure','okay','fine','good') </code></pre> <h1>testmod.py</h1> <pre><code>import mydiction question=raw_input("How are you?") if question== NegConns: print("That's not good.") elif question==PosConns: print("Good to hear.") else: print("oh.") </code></pre> <p>So basically if the input is negative the program is sympathetic, if the input is positive the program congratulates. I'm not sure if this is possible for exactly what i'm wanting or if i'm going about this the wrong way, and i can't seem to find help so i'm putting it out there... Thanks in advance.</p>
1
2016-10-17T11:55:35Z
40,086,803
<p>Let's say your directory structure is:</p> <pre><code>--\my_project # Your directory containing the files -| mydiction.py -| testmod.py </code></pre> <p>Add an <code>__init__.py</code> in the <em>my_project</em> directory to make it a Python <a href="https://docs.python.org/2/tutorial/modules.html" rel="nofollow">package</a>. Now your project structure will look like:</p> <pre><code>--\my_project -| __init__.py -| mydiction.py -| testmod.py </code></pre> <p>Now, in order to <code>import</code> from <em>mydiction.py</em> into <em>testmod.py</em>, add this to <em>testmod.py</em>:</p> <pre><code>from mydiction import NegConns, PosConns </code></pre>
0
2016-10-17T12:47:57Z
[ "python", "python-2.7" ]
Python : Data extraction from xml file
40,085,783
<p>Given the following file of <code>xml</code> which stores the values of vehicles trip info. How can I generate a cumulative traveled distance over each time step as a .text file. There is no specific order in the xml, it's all random. </p> <pre><code>&lt;tripinfos&gt; &lt;tripinfo id="1" depart="1.00" arrival="2" duration="1.00" distance="3"/&gt; &lt;tripinfo id="5" depart="2.00" arrival="4" duration="2.00" distance="5"/&gt; &lt;tripinfo id="10" depart="5.00" arrival="8" duration="3.00" distance="1"/&gt; &lt;tripinfo id="3" depart="3.00" arrival="6" duration="3.00" distance="2"/&gt; &lt;tripinfo id="8" depart="8.00" arrival="10" duration="2.00" distance="4"/&gt; &lt;/tripinfos&gt; </code></pre> <p>output.textfile</p> <pre><code>0 //Time step #0 0 3 3 8 8 10 10 11 11 15 </code></pre>
0
2016-10-17T11:56:54Z
40,086,713
<p>I'm sure there's a better solution that uses an xml library, but here's an easy one</p> <pre><code>import numpy as np a = open('file.xml') lines = a.readlines() my_arr = np.zeros((len(lines)-2,2)) for i in range(len(lines[1:-1])): contents=lines[i+1].split('\"') my_arr[i,0]=(eval(contents[5])) my_arr[i,1]=(eval(contents[9])) #Now sort according to arrival times my_arr = (my_arr[my_arr[:,0].argsort()]) print(my_arr) final_output=[] cum_dist=0 last_index=0 for i in range(int(my_arr[-1,0])+1): if(i == my_arr[last_index,0]): cum_dist+=my_arr[last_index,1] last_index+=1 final_output.append(int(cum_dist)) print(final_output) np.savetxt('outputfile.txt',np.array(final_output), newline=',',fmt='%s') a.close() </code></pre> <p>Your output is </p> <pre><code>[[ 2. 3.] [ 4. 5.] [ 6. 2.] [ 8. 1.] [ 10. 4.]] [0, 0, 3, 3, 8, 8, 10, 10, 11, 11, 15] </code></pre>
0
2016-10-17T12:43:40Z
[ "python", "xml", "python-2.7" ]
Python : Data extraction from xml file
40,085,783
<p>Given the following file of <code>xml</code> which stores the values of vehicles trip info. How can I generate a cumulative traveled distance over each time step as a .text file. There is no specific order in the xml, it's all random. </p> <pre><code>&lt;tripinfos&gt; &lt;tripinfo id="1" depart="1.00" arrival="2" duration="1.00" distance="3"/&gt; &lt;tripinfo id="5" depart="2.00" arrival="4" duration="2.00" distance="5"/&gt; &lt;tripinfo id="10" depart="5.00" arrival="8" duration="3.00" distance="1"/&gt; &lt;tripinfo id="3" depart="3.00" arrival="6" duration="3.00" distance="2"/&gt; &lt;tripinfo id="8" depart="8.00" arrival="10" duration="2.00" distance="4"/&gt; &lt;/tripinfos&gt; </code></pre> <p>output.textfile</p> <pre><code>0 //Time step #0 0 3 3 8 8 10 10 11 11 15 </code></pre>
0
2016-10-17T11:56:54Z
40,089,430
<p>I ended up using a dict to store each arrival time, and since I know the max arrival time, I can initialize the keys with a range.</p> <pre><code>import xml.etree.ElementTree as ET filepath = r'tripinfo.xml' tree = ET.parse(filepath) root = tree.getroot() mydict = {k:[] for k in range(7202)} for trip in root.iter('tripinfo'): arrived = int(float(trip.get('arrival'))) distance = float(trip.get('distance')) mydict[arrived].append(distance) mysum = 0 outputfilepath = 'travelledDuration.txt' outputfile = open(outputfilepath, 'a') for i in range(7202): distanceList = mydict[i] mysum += sum(distanceList) outputfile.write(str(mysum)+"\n") outputfile.close() </code></pre>
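<p>As a side note, hard-coding <code>range(7202)</code> ties the script to one simulation length; a <code>defaultdict</code> keyed by arrival time avoids that. A sketch of the same logic (same element and attribute names as above, file names assumed):</p> <pre><code>from collections import defaultdict
import xml.etree.ElementTree as ET

tree = ET.parse('tripinfo.xml')
mydict = defaultdict(list)
max_arrival = 0

for trip in tree.getroot().iter('tripinfo'):
    arrived = int(float(trip.get('arrival')))
    mydict[arrived].append(float(trip.get('distance')))
    max_arrival = max(max_arrival, arrived)

mysum = 0
with open('travelledDuration.txt', 'w') as outputfile:  # 'w' rewrites the file each run
    for i in range(max_arrival + 1):
        mysum += sum(mydict[i])
        outputfile.write(str(mysum) + "\n")
</code></pre>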
0
2016-10-17T14:45:32Z
[ "python", "xml", "python-2.7" ]
Python : Data extraction from xml file
40,085,783
<p>Given the following file of <code>xml</code> which stores the values of vehicles trip info. How can I generate a cumulative traveled distance over each time step as a .text file. There is no specific order in the xml, it's all random. </p> <pre><code>&lt;tripinfos&gt; &lt;tripinfo id="1" depart="1.00" arrival="2" duration="1.00" distance="3"/&gt; &lt;tripinfo id="5" depart="2.00" arrival="4" duration="2.00" distance="5"/&gt; &lt;tripinfo id="10" depart="5.00" arrival="8" duration="3.00" distance="1"/&gt; &lt;tripinfo id="3" depart="3.00" arrival="6" duration="3.00" distance="2"/&gt; &lt;tripinfo id="8" depart="8.00" arrival="10" duration="2.00" distance="4"/&gt; &lt;/tripinfos&gt; </code></pre> <p>output.textfile</p> <pre><code>0 //Time step #0 0 3 3 8 8 10 10 11 11 15 </code></pre>
0
2016-10-17T11:56:54Z
40,091,042
<p>Here I offer a partial solution to your problem.</p> <pre><code># import some packages from numpy import array import xml.etree.ElementTree as et # init some lists ids=[] depart=[] arrival=[] duration=[] distance=[] # prep the xml document xmltxt = """ &lt;root&gt; &lt;tripinfo id="1" depart="1.00" arrival="2" duration="1.00" distance="3"/&gt; &lt;tripinfo id="5" depart="2.00" arrival="4" duration="2.00" distance="5"/&gt; &lt;tripinfo id="3" depart="3.00" arrival="6" duration="3.00" distance="2"/&gt; &lt;tripinfo id="10" depart="5.00" arrival="8" duration="3.00" distance="1"/&gt; &lt;tripinfo id="8" depart="8.00" arrival="10" duration="2.00" distance="4"/&gt; &lt;/root&gt; """ # parse the xml text xmldoc = et.fromstring(xmltxt) # extract and output tripinfo attributes # collect them into lists for item in xmldoc.iterfind('tripinfo'): att=item.attrib ids.append(int(att['id'])) depart.append(float(att['depart'])) arrival.append(float(att['arrival'])) duration.append(float(att['duration'])) distance.append(float(att['distance'])) # put lists into an np.array # and transpose it arr=array([ids, depart, arrival, duration, distance]).T # sort array by 'depart' column. (index=1) arr = arr[arr[:,1].argsort()] sumdist=0 dept=0 print "depart: %s; Sum_dist= %s" % ( dept, sumdist ) for ea in arr: sumdist += ea[4] # distance dept = ea[1] # depart # get 'arrival', 'duration' here, so that # you can use them to manipulate and get your exact solution print "depart: %s; Sum_dist= %s" % ( dept, sumdist ) </code></pre> <p>The output is</p> <pre><code>depart: 0; Sum_dist= 0 depart: 1.0; Sum_dist= 3.0 depart: 2.0; Sum_dist= 8.0 depart: 3.0; Sum_dist= 10.0 depart: 5.0; Sum_dist= 11.0 depart: 8.0; Sum_dist= 15.0 </code></pre>
0
2016-10-17T16:09:11Z
[ "python", "xml", "python-2.7" ]