Dataset schema (one row per question/answer pair):

title            string (lengths 10 to 172)
question_id      int64 (469 to 40.1M)
question_body    string (lengths 22 to 48.2k)
question_score   int64 (-44 to 5.52k)
question_date    string (length 20)
answer_id        int64 (497 to 40.1M)
answer_body      string (lengths 18 to 33.9k)
answer_score     int64 (-38 to 8.38k)
answer_date      string (length 20)
tags             sequence
I have a numpy array and an array of indices; how can I access these positions at the same time?
40,038,557
<p>For example, I have a numpy array like this:</p> <pre><code>a = array([[1, 2, 3], [4, 3, 2]]) </code></pre> <p>and an index array like this, selecting the positions of the max values:</p> <pre><code>max_idx = array([[0, 2], [1, 0]]) </code></pre> <p>How can I access these positions at the same time, to modify them? Something like <code>a[max_idx] = 0</code>, giving the following:</p> <pre><code>array([[1, 2, 0], [0, 3, 2]]) </code></pre>
1
2016-10-14T08:31:36Z
40,039,444
<p>NumPy supports <a href="http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#advanced-indexing" rel="nofollow">advanced indexing</a> like this (with <code>b</code> standing for your <code>max_idx</code>):</p> <pre><code>a[b[:, 0], b[:, 1]] = 0 </code></pre> <p>The code above fits your requirement.</p> <p>If <code>b</code> is more than 2-D, a better way is:</p> <pre><code>a[np.split(b, 2, axis=1)] </code></pre> <p><a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.split.html" rel="nofollow">np.split</a> will split the ndarray into columns.</p>
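<p>For illustration, here is the first approach applied end to end to the arrays from the question (a minimal sketch):</p> <pre><code>import numpy as np

a = np.array([[1, 2, 3],
              [4, 3, 2]])
max_idx = np.array([[0, 2],
                    [1, 0]])  # each row is a (row, col) pair

# advanced indexing: one array of row indices, one of column indices
a[max_idx[:, 0], max_idx[:, 1]] = 0
print(a)
# [[1 2 0]
#  [0 3 2]]
</code></pre>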
0
2016-10-14T09:17:54Z
[ "python", "arrays", "numpy" ]
Using Python to remove paratext (or 'noise') from txt files
40,038,596
<p>I am in the process of preparing a corpus of text files, consisting of 170 Dutch novels. I am a literary scholar and relatively new to Python, and also to programming in general. What I am trying to do is write a Python script that removes everything from each .txt file that does NOT belong to the actual content of the novel (i.e. the story). Things I want to remove are: added biographies of the author, blurbs, and other pieces of information that come with converting an ePub to .txt. </p> <p>My idea is to manually decide, for every .txt file, at which line the actual content of the novel begins and where it ends. I am using the following block of code for the purpose of removing all information in the .txt file that is not contained between those two line numbers:</p> <pre><code>def removeparatext(inputFilename, outputFilename): inputfile = open(inputFilename,'rt', encoding='utf-8') outputfile = open(outputFilename, 'w', encoding='utf-8') for line_number, line in enumerate(inputfile, 1): if line_number &gt;= 80 and line_number &lt;= 2741: outputfile.write(inputfile.readline()) inputfile.close() outputfile.close() removeparatext(inputFilename, outputFilename) </code></pre> <p>The numbers 80 and 2741 are the start and end numbers for the actual content of one specific novel. However, the output file only has the text removed BEFORE line number 80; it still contains everything AFTER line number 2741. I do not understand why. Perhaps I am not using the enumerate() function in the right way. </p> <p>Another thing is that I would like to get rid of all unnecessary spaces in the .txt file. But the .strip() method does not seem to work when I implement it in this block of code. </p> <p>Could anyone give me a suggestion as to how to solve this problem? Many thanks in advance!</p>
1
2016-10-14T08:33:35Z
40,038,646
<p><code>enumerate</code> already provides the <em>line</em> alongside its index, so you don't need to call <code>readline</code> on the file object again; doing so advances the file a second time, effectively reading it at double pace:</p> <pre><code>for line_number, line in enumerate(inputfile, 1): if line_number &gt;= 80 and line_number &lt;= 2741: outputfile.write(line) # ^^^^ </code></pre> <hr> <p>As an alternative to using <code>enumerate</code> and iterating through the entire file, you may consider <em>slicing</em> the file object using <a href="https://docs.python.org/2/library/itertools.html#itertools.islice" rel="nofollow"><code>itertools.islice</code></a> which takes the start and stop indices, and then writing the <em>sliced sequence</em> to the output file using <a href="http://python-reference.readthedocs.io/en/latest/docs/file/writelines.html" rel="nofollow"><code>writelines</code></a>:</p> <pre><code>from itertools import islice def removeparatext(inputFilename, outputFilename): inputfile = open(inputFilename,'rt', encoding='utf-8') outputfile = open(outputFilename, 'w', encoding='utf-8') # use writelines to write sliced sequence of lines outputfile.writelines(islice(inputfile, 79, 2741)) # indices start from zero inputfile.close() outputfile.close() </code></pre> <hr> <p>In addition, you can <em>open</em> files and leave the closing/cleanup to Python by using a context manager <em>with</em> the <code>with</code> statement. See <a href="http://stackoverflow.com/questions/9282967/how-to-open-a-file-using-the-open-with-statement">How to open a file using the open with statement</a>.</p> <pre><code>from itertools import islice def removeparatext(inputFilename, outputFilename): with open(inputFilename,'rt', encoding='utf-8') as inputfile,\ open(outputFilename, 'w', encoding='utf-8') as outputfile: # use writelines to write sliced sequence of lines outputfile.writelines(islice(inputfile, 79, 2741)) removeparatext(inputFilename, outputFilename) </code></pre>
1
2016-10-14T08:36:08Z
[ "python", "enumerate", "data-cleaning" ]
Type 'set': difference between __str__ and printing directly
40,038,618
<pre><code>In [1]: import sys In [2]: sys.version_info Out[2]: sys.version_info(major=3, minor=5, micro=2, releaselevel='final', serial=0) In [3]: b=set([10,20,40,32,67,40,20,89,300,400,15]) In [4]: b Out[4]: {10, 15, 20, 32, 40, 67, 89, 300, 400} </code></pre> <pre><code>In [1]: import sys In [2]: sys.version_info Out[2]: sys.version_info(major=2, minor=7, micro=12, releaselevel='final', serial=0) In [3]: b=set([10,20,40,32,67,40,20,89,300,400,15]) In [4]: b Out[4]: set([32, 67, 40, 10, 300, 15, 400, 20, 89]) </code></pre> <p>Why is there this difference between Python 2 and 3?</p>
1
2016-10-14T08:34:37Z
40,038,641
<p>Because the <code>{...}</code> syntax <a href="https://docs.python.org/2/whatsnew/2.7.html#python-3-1-features" rel="nofollow">wasn't introduced until Python 2.7</a>, and by that time the <code>set([...])</code> <code>repr()</code> format was already established.</p> <p>So to keep existing Python 2 code that may have relied on the <code>set([...])</code> representation working, the <code>repr()</code> wasn't changed in the 2.x series. Python 3 had <code>{...}</code> notation for sets from the start.</p>
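<p>A short illustration of the two behaviours (session output abbreviated):</p> <pre><code># Python 2.7: the literal is accepted, but repr() keeps the old form
&gt;&gt;&gt; s = {10, 20, 30}
&gt;&gt;&gt; s
set([10, 20, 30])

# Python 3: the literal form is also used for the repr()
&gt;&gt;&gt; {10, 20, 30}
{10, 20, 30}
</code></pre>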
5
2016-10-14T08:35:57Z
[ "python", "python-3.x", "set", "python-2.x" ]
if-else statement in one-line Python
40,038,799
<p>I have some code:</p> <pre><code>a_part = [2001, 12000] b_part = [1001, 2000] c_part = [11, 1000] d_part = [1, 10] data = range(1, 12000) labels = [a_part, b_part, c_part, d_part] sizes = [] # --- for part in labels: sum = 0 for each in data: sum += each if each &gt;= part[0] and each &lt;= part[1] else 0 # error # sum += each if each &gt;= part[0] and each &lt;= part[1] sizes.append(sum) print(sizes) </code></pre> <p>And I rewrote this to be more Pythonic:</p> <pre><code>sizes = [sum(x for x in data if low&lt;=x&lt;=high) for low,high in labels] # error # sizes = [sum(x for x in data if low&lt;=x&lt;=high else 0) for low else 0,high in labels] print(sizes) </code></pre> <p>I found that in the first snippet I can't leave out <code>else 0</code> while the second example can't contain <code>else 0</code>.</p> <p>What is the difference <code>else 0</code> makes between these examples?</p>
1
2016-10-14T08:43:58Z
40,038,870
<p>This is not the same syntax at all. The first one is a conditional expression, Python's equivalent of C's ternary <code>(a ? b : c)</code>, written <code>b if a else c</code>; it wouldn't make any sense not to have the else clause there. The second one is a comprehension (here, a generator expression), and the purpose of its if clause is to filter the elements of the iterable.</p>
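<p>A minimal sketch of the two forms side by side:</p> <pre><code>x, y = 5, 10
value = x if x &gt; y else y                      # conditional expression: must yield a value
evens = [n for n in range(10) if n % 2 == 0]   # comprehension: the if only filters
</code></pre>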
1
2016-10-14T08:47:30Z
[ "python" ]
if-else statement in one-line Python
40,038,799
<p>I have some code:</p> <pre><code>a_part = [2001, 12000] b_part = [1001, 2000] c_part = [11, 1000] d_part = [1, 10] data = range(1, 12000) labels = [a_part, b_part, c_part, d_part] sizes = [] # --- for part in labels: sum = 0 for each in data: sum += each if each &gt;= part[0] and each &lt;= part[1] else 0 # error # sum += each if each &gt;= part[0] and each &lt;= part[1] sizes.append(sum) print(sizes) </code></pre> <p>And I rewrote this to be more Pythonic:</p> <pre><code>sizes = [sum(x for x in data if low&lt;=x&lt;=high) for low,high in labels] # error # sizes = [sum(x for x in data if low&lt;=x&lt;=high else 0) for low else 0,high in labels] print(sizes) </code></pre> <p>I found that in the first snippet I can't leave out <code>else 0</code> while the second example can't contain <code>else 0</code>.</p> <p>What is the difference <code>else 0</code> makes between these examples?</p>
1
2016-10-14T08:43:58Z
40,038,872
<p>You have two very different things here.</p> <p>In the first you have an expression, and are using a <a href="https://docs.python.org/3/reference/expressions.html#conditional-expressions" rel="nofollow"><em>conditional expression</em></a> to produce the value; that requires an <code>else</code> because an expression always needs to produce <em>something</em>.</p> <p>For example, if you wrote:</p> <pre><code>sum += each if each &gt;= part[0] and each &lt;= part[1] # removing "else 0" </code></pre> <p>then what would be added to the sum if the test was false?</p> <p>In the second you have a <a href="https://docs.python.org/3/reference/expressions.html#generator-expressions" rel="nofollow"><em>generator expression</em></a>, and the <code>if</code> is part of the possible parts (called <a href="https://docs.python.org/3/reference/expressions.html#displays-for-lists-sets-and-dictionaries" rel="nofollow"><code>comp_if</code> in the grammar</a>), next to (nested) <code>for</code> loops. Like an <code>if ...:</code> statement, it filters what elements in the sequence are used, and that doesn't need to produce a value in the <em>false</em> case; you would not be filtering otherwise.</p> <p>To bring that back to your example:</p> <pre><code>sum(x for x in data if low&lt;=x&lt;=high) </code></pre> <p>when the <code>if</code> test is false, that <code>x</code> is just omitted from the loop and not summed. You'd do the same thing in the first example with:</p> <pre><code>if each &gt;= part[0] and each &lt;= part[1]: # only add to `sum` if true sum += each </code></pre>
4
2016-10-14T08:47:30Z
[ "python" ]
if-else statement in one-line Python
40,038,799
<p>I have some code:</p> <pre><code>a_part = [2001, 12000] b_part = [1001, 2000] c_part = [11, 1000] d_part = [1, 10] data = range(1, 12000) labels = [a_part, b_part, c_part, d_part] sizes = [] # --- for part in labels: sum = 0 for each in data: sum += each if each &gt;= part[0] and each &lt;= part[1] else 0 # error # sum += each if each &gt;= part[0] and each &lt;= part[1] sizes.append(sum) print(sizes) </code></pre> <p>And I rewrote this to be more Pythonic:</p> <pre><code>sizes = [sum(x for x in data if low&lt;=x&lt;=high) for low,high in labels] # error # sizes = [sum(x for x in data if low&lt;=x&lt;=high else 0) for low else 0,high in labels] print(sizes) </code></pre> <p>I found that in the first snippet I can't leave out <code>else 0</code> while the second example can't contain <code>else 0</code>.</p> <p>What is the difference <code>else 0</code> makes between these examples?</p>
1
2016-10-14T08:43:58Z
40,038,898
<pre><code>x if y else z </code></pre> <p>This is a conditional ("inline if") expression, which produces a value. It must always produce a value, so it must contain an <code>else</code> clause.</p> <pre><code>[x for x in y if z] </code></pre> <p>This is a list comprehension. The <code>if</code> here acts as a filter for the loop. It's equivalent to <code>for x in y: if z: ...</code>.</p> <p>To have an <code>else</code> in there, you put the inline-if expression in place of <code>x</code>:</p> <pre><code>[x if y else z for foo in bar] </code></pre>
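<p>A small sketch contrasting selecting values with filtering items:</p> <pre><code>data = [3, -1, 4, -1, 5]
clipped = [x if x &gt;= 0 else 0 for x in data]   # if/else picks a value per item
positives = [x for x in data if x &gt;= 0]        # if filters items out
# clipped   -&gt; [3, 0, 4, 0, 5]
# positives -&gt; [3, 4, 5]
</code></pre>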
1
2016-10-14T08:49:03Z
[ "python" ]
Loop with very large number (2^64) hangs; how to iterate faster?
40,038,810
<p>I'm using Python 2.7. I have a simple loop like the one given below:</p> <pre><code>while i &lt; math.pow(2,64): do_something() i += 1000 </code></pre> <p>But this code runs for far too long. I have heard that Python has scientific libraries for working with very large numbers, like scipy and numpy, but I haven't used them before. Is there any way to iterate over that range faster?</p>
-1
2016-10-14T08:44:39Z
40,039,011
<p><strong>EDIT</strong>: corrected base conversion from 2 to 10 (thanks Nick A!)</p> <p>It's not clear what <code>do_something()</code> does, but I don't think this is a sensible approach. If you have to loop over <code>2^54</code> items, it simply won't ever stop, because it takes so long. Let's do some math.</p> <p>Say your loop is this:</p> <pre><code>while i &lt; math.pow(2, 54): i += 1 </code></pre> <p>You do nothing in the cycle, you just increment <code>i</code> and cycle <code>2^54</code> times.</p> <blockquote> <p>You increment by 1000, which is roughly <code>2^10</code> (1024). The cycles will be <code>2^64 / 2^10 = ~2^54</code>. In my example I increment by 1, so that's why I cycle to <code>2^54</code> and not <code>2^64</code>.</p> </blockquote> <p>You run this single-threaded on a 4 GHz CPU (4*10^9 cycles per second).</p> <p>Now we divide these two numbers and get the seconds it takes to compute:</p> <pre><code>2^64 / 1000 / (4 * 10^9) = ~4611686 </code></pre> <p>(I've divided <code>2^64</code> by 1000, not 1024, and rounded down.) Let's see how long those seconds are:</p> <pre><code>76861 minutes, or 1281 hours, or 53 days </code></pre> <p>So, on a single-core CPU it takes 53 days (!) just to count and cycle.</p> <blockquote> <p>On an 8-core CPU it would still take ~7 days... but you have to be able to implement <code>do_something()</code> for parallel execution, which is not always possible (it totally depends on what it actually does). And parallelization introduces some overhead.</p> </blockquote> <p>Now add in your <code>do_something()</code> function and you can imagine the general idea:</p> <blockquote> <p>If an algorithm does not terminate in a relatively short time, in practice it is as if it never ends; just by watching it you cannot tell a very slow program from one that loops forever (compare the halting problem).</p> </blockquote> <p>In other words <strong>your code is wrong because it does not end in time</strong>.</p> <p>Say you switch your computer off once a month. Anything that takes longer than a month would be plain wrong.</p> <p>More about the <a href="https://en.wikipedia.org/wiki/Halting_problem" rel="nofollow">halting problem here</a>.</p> <hr> <p><strong>EDIT</strong>: I'm wrong! It takes longer than that.</p> <p>Your program cannot just increment <code>i</code>; it must also test whether <code>i</code> is still less than <code>2^54</code> and only keep looping while it is. The minimal pseudo-assembly would be:</p> <pre><code> MOV 2 ^ 54, i loop: DEC i JNZ loop # Jump Not Zero </code></pre> <p>You need at least 2 instructions in the loop.</p> <blockquote> <p>x86 CPUs actually have special instructions that do all three steps at once (like <a href="http://stackoverflow.com/a/1756322/3227403">LOOP, LOOPE, LOOPNE</a>). From a comment in the link we know it takes longer than the <code>DEC/JNZ</code> combination, which takes 2 cycles.</p> </blockquote> <p>This means the computation will take twice as long:</p> <pre><code>single core: ~106 days octa-core: ~13 days </code></pre> <p>But that's without doing anything useful.</p> <p>Even if you do very simple stuff in <code>do_something()</code> it easily <strong>BLOATS</strong> those numbers up, even to years and beyond...</p>
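<p>The estimate above is easy to reproduce (a back-of-envelope sketch, assuming one loop step per CPU cycle):</p> <pre><code>iterations = 2**64 // 1000        # loop steps when incrementing i by 1000
cycles_per_second = 4 * 10**9     # a single 4 GHz core
seconds = iterations / float(cycles_per_second)
print(seconds / 3600 / 24)        # ~53 days, before do_something() even runs
</code></pre>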
6
2016-10-14T08:55:43Z
[ "python", "numpy", "scipy", "largenumber" ]
Getting the whole text of a td in Python (lxml)
40,038,814
<p>I'm trying to get the whole text that is contained in this td:</p> <p>Example:</p> <pre><code>&lt;td&gt; &lt;p&gt;Some Text&lt;/p&gt; &lt;a&gt;SAMPLE&lt;/a&gt; &lt;table&gt; &lt;tbody&gt; &lt;tr&gt; &lt;td&gt;something&lt;/td&gt; .... &lt;/tr&gt; ... &lt;/tbody&gt; &lt;/table&gt; ... &lt;/td&gt; </code></pre> <p>There are lots of tags inside this td, which makes it hard for me. Even tables are contained.</p> <p>In FirePath (Firefox) I can simply attach //text()</p> <blockquote> <p>.//*[@id='Testcases__list']/table/tbody/tr/td//text()</p> </blockquote> <p>But in Python code the //text() part throws me an error, using the lxml library</p> <pre><code>Traceback (most recent call last): File "D:\pythonscripts\Bachelor\TestMain.py", line 52, in &lt;module&gt; print tr.findall('./td[6]//text()')[0].text File "src\lxml\lxml.etree.pyx", line 1563, in lxml.etree._Element.findall (src\lxml\lxml.etree.c:56897) File "C:\Python27\lib\site-packages\lxml\_elementpath.py", line 304, in findall return list(iterfind(elem, path, namespaces)) File "C:\Python27\lib\site-packages\lxml\_elementpath.py", line 277, in iterfind selector = _build_path_iterator(path, namespaces) File "C:\Python27\lib\site-packages\lxml\_elementpath.py", line 260, in _build_path_iterator selector.append(ops[token[0]](_next, token)) KeyError: '()' </code></pre> <p>How can I get the whole text of that td in Python?</p>
0
2016-10-14T08:44:53Z
40,038,927
<p>If it is a website, you want BeautifulSoup! <a href="https://www.crummy.com/software/BeautifulSoup/" rel="nofollow">https://www.crummy.com/software/BeautifulSoup/</a></p> <p>Something like this: </p> <pre><code>import requests from bs4 import BeautifulSoup r = requests.get("Your_Link") soup = BeautifulSoup(r.content) print soup.find('td').text </code></pre> <p>This parses the page, finds the first td tag, and returns all the text inside it. If this tag is a child of another tag, you need to look at the parent and child relationships so that you can navigate to this particular tag and print its information.</p>
-1
2016-10-14T08:50:46Z
[ "python", "html", "xpath", "lxml" ]
Getting the whole text of a td in Python (lxml)
40,038,814
<p>I'm trying to get the whole text that is contained in this td:</p> <p>Example:</p> <pre><code>&lt;td&gt; &lt;p&gt;Some Text&lt;/p&gt; &lt;a&gt;SAMPLE&lt;/a&gt; &lt;table&gt; &lt;tbody&gt; &lt;tr&gt; &lt;td&gt;something&lt;/td&gt; .... &lt;/tr&gt; ... &lt;/tbody&gt; &lt;/table&gt; ... &lt;/td&gt; </code></pre> <p>There are lots of tags inside this td, which makes it hard for me. Even tables are contained.</p> <p>In FirePath (Firefox) I can simply attach //text()</p> <blockquote> <p>.//*[@id='Testcases__list']/table/tbody/tr/td//text()</p> </blockquote> <p>But in Python code the //text() part throws me an error, using the lxml library</p> <pre><code>Traceback (most recent call last): File "D:\pythonscripts\Bachelor\TestMain.py", line 52, in &lt;module&gt; print tr.findall('./td[6]//text()')[0].text File "src\lxml\lxml.etree.pyx", line 1563, in lxml.etree._Element.findall (src\lxml\lxml.etree.c:56897) File "C:\Python27\lib\site-packages\lxml\_elementpath.py", line 304, in findall return list(iterfind(elem, path, namespaces)) File "C:\Python27\lib\site-packages\lxml\_elementpath.py", line 277, in iterfind selector = _build_path_iterator(path, namespaces) File "C:\Python27\lib\site-packages\lxml\_elementpath.py", line 260, in _build_path_iterator selector.append(ops[token[0]](_next, token)) KeyError: '()' </code></pre> <p>How can I get the whole text of that td in Python?</p>
0
2016-10-14T08:44:53Z
40,039,117
<p>Here is code that strips the tags with a regular expression:</p> <pre><code>from lxml import etree from lxml.html import tostring import re TAG_RE = re.compile(r'&lt;[^&gt;]+&gt;') tree = etree.HTML(''' &lt;td&gt; &lt;p&gt;Some Text&lt;/p&gt; &lt;a&gt;SAMPLE&lt;/a&gt; &lt;table&gt; &lt;tbody&gt; &lt;tr&gt; &lt;td&gt;something&lt;/td&gt; .... &lt;/tr&gt; ... &lt;/tbody&gt; &lt;/table&gt; ... &lt;/td&gt; ''') print TAG_RE.sub('',tostring(tree.xpath("//td")[0])) </code></pre>
-1
2016-10-14T09:01:29Z
[ "python", "html", "xpath", "lxml" ]
Getting the whole text of a td in Python (lxml)
40,038,814
<p>I'm trying to get the whole text that is contained in this td:</p> <p>Example:</p> <pre><code>&lt;td&gt; &lt;p&gt;Some Text&lt;/p&gt; &lt;a&gt;SAMPLE&lt;/a&gt; &lt;table&gt; &lt;tbody&gt; &lt;tr&gt; &lt;td&gt;something&lt;/td&gt; .... &lt;/tr&gt; ... &lt;/tbody&gt; &lt;/table&gt; ... &lt;/td&gt; </code></pre> <p>There are lots of tags inside this td, which makes it hard for me. Even tables are contained.</p> <p>In FirePath (Firefox) I can simply attach //text()</p> <blockquote> <p>.//*[@id='Testcases__list']/table/tbody/tr/td//text()</p> </blockquote> <p>But in Python code the //text() part throws me an error, using the lxml library</p> <pre><code>Traceback (most recent call last): File "D:\pythonscripts\Bachelor\TestMain.py", line 52, in &lt;module&gt; print tr.findall('./td[6]//text()')[0].text File "src\lxml\lxml.etree.pyx", line 1563, in lxml.etree._Element.findall (src\lxml\lxml.etree.c:56897) File "C:\Python27\lib\site-packages\lxml\_elementpath.py", line 304, in findall return list(iterfind(elem, path, namespaces)) File "C:\Python27\lib\site-packages\lxml\_elementpath.py", line 277, in iterfind selector = _build_path_iterator(path, namespaces) File "C:\Python27\lib\site-packages\lxml\_elementpath.py", line 260, in _build_path_iterator selector.append(ops[token[0]](_next, token)) KeyError: '()' </code></pre> <p>How can I get the whole text of that td in Python?</p>
0
2016-10-14T08:44:53Z
40,052,997
<p>You should be using <em>.xpath</em>, not <em>findall</em> (findall supports only a limited subset of XPath, which is why <code>//text()</code> raises the error):</p> <pre><code>tr.xpath("//*[@id='Testcases__list']/table/tbody/tr/td//text()") </code></pre> <p>To just get the first td:</p> <pre><code> tr.xpath("(//*[@id='Testcases__list']/table/tbody/tr/td)[1]/text()") </code></pre> <p>I would also verify that the source actually has a <em>tbody</em> element; often it is added by the browser and is not in the actual source.</p> <p>You can also use <em>text_content</em> to get all the text as one string:</p> <pre><code>tr.xpath("(//*[@id='Testcases__list']/table/tbody/tr/td)[1]")[0].text_content() </code></pre>
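<p>A self-contained sketch of those calls on the question's markup (wrapped in a table here so the HTML parser keeps the <code>td</code> in place):</p> <pre><code>from lxml import html

doc = html.fromstring(
    "&lt;table&gt;&lt;tr&gt;&lt;td&gt;&lt;p&gt;Some Text&lt;/p&gt;&lt;a&gt;SAMPLE&lt;/a&gt;"
    "&lt;table&gt;&lt;tr&gt;&lt;td&gt;something&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;")

td = doc.xpath("//td")[0]     # the outer td
print(td.xpath(".//text()"))  # every text node, however deeply nested
print(td.text_content())      # all of it joined into one string
</code></pre>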
1
2016-10-14T22:43:08Z
[ "python", "html", "xpath", "lxml" ]
Animate matshow function in matplotlib
40,039,112
<p>I have a matrix which is time-dependent and I want to plot its evolution as an animation. </p> <p>My code is the following:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt from matplotlib.animation import FuncAnimation n_frames = 3 # Number of files we have generated data = np.empty(n_frames, dtype=object) # Stores the data # Read all the data for k in range(n_frames): data[k] = np.loadtxt("frame"+str(k)) fig = plt.figure() plot = plt.matshow(data[0]) def init(): plot.set_data(data[0]) return plot def update(j): plot.set_data(data[j]) return [plot] anim = FuncAnimation(fig, update, init_func = init, frames=n_frames, interval = 30, blit=True) plt.show() </code></pre> <p>However, when I run it I always obtain the following error: <code>draw_artist can only be used after an initial draw which caches the render</code>. I don't know where this error comes from and have no idea how to solve it. I have read <a href="http://stackoverflow.com/questions/10429556/animate-quadratic-grid-changes-matshow">this answer</a> and also <a href="https://datasciencelab.wordpress.com/tag/matplotlib/" rel="nofollow">this article</a> but still don't know why my code is not working.</p> <p>Any help is appreciated, thank you!</p>
1
2016-10-14T09:01:12Z
40,039,691
<p>You're very close to a working solution. Either change</p> <pre><code>plot = plt.matshow(data[0]) </code></pre> <p>to</p> <pre><code>plot = plt.matshow(data[0], fignum=0) </code></pre> <p>or use</p> <pre><code>plot = plt.imshow(data[0]) </code></pre> <p>instead.</p> <hr> <p>The problem with using <code>plt.matshow(data[0])</code> here is that it <a href="https://github.com/matplotlib/matplotlib/blob/master/lib/matplotlib/pyplot.py#L2241" rel="nofollow">creates a new figure</a> if the <code>fignum</code> parameter is left blank (i.e. equal to <code>None</code> by default). Since <code>fig = plt.figure()</code> is called and that <code>fig</code> is passed to <code>FuncAnimation</code>, you end up with two figures: one with the result of <code>plt.matshow</code>, and the other, blank one being drawn on by <code>FuncAnimation</code>. The figure that <code>FuncAnimation</code> is drawing on never receives the initial draw, so it raises</p> <pre><code>AttributeError: draw_artist can only be used after an initial draw which caches the render </code></pre>
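<p>For reference, a minimal self-contained version of the fixed script (random data stands in for the frame files):</p> <pre><code>import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

data = [np.random.rand(10, 10) for _ in range(3)]

fig = plt.figure()
plot = plt.matshow(data[0], fignum=0)   # draw into fig instead of a new figure

def update(j):
    plot.set_data(data[j])
    return [plot]

anim = FuncAnimation(fig, update, frames=len(data), interval=300, blit=True)
plt.show()
</code></pre>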
1
2016-10-14T09:29:02Z
[ "python", "animation", "matplotlib" ]
ASCII codec can't encode character u'\u2013'
40,039,212
<p>I have a little Python code in QGIS which opens objects. The problem I have is that in the directory name there is a character (an underscore-like character) that cannot be encoded. The error is:</p> <blockquote> <p>Traceback (most recent call last): File "", line 1, in UnicodeEncodeError: 'ascii' codec can't encode character u'\u2013' in position 10: ordinal not in range(128)</p> </blockquote> <p>My little code is:</p> <pre><code>import os; from os import startfile; proj = QgsProject.instance(); UriFile = str(proj.fileName()); img = '[% "pad" %]'; Path = str(os.path.dirname(UriFile)); startfile(Path+img) </code></pre> <p>My programming skills are limited, so please help me adjust this small script to overcome the problem.</p>
-1
2016-10-14T09:07:06Z
40,043,279
<p>Please bear with my cellphone typing skills ;).</p> <p>There are different encodings that you can use to make Python decode strings, e.g. <code>unicode(some_bytes, 'utf-16')</code> in Python 2 (or <code>str(some_bytes, 'utf-16')</code> in Python 3) should be able to decode what you need it to. </p> <p>To help you understand, though, I would use an IDE (e.g. PyCharm), run the program up to this line in debug mode, and then evaluate each step of the line to see what it returns right now. </p>
-1
2016-10-14T12:33:30Z
[ "python", "unicode" ]
ASCII codec can't encode character u'\u2013'
40,039,212
<p>I have a little Python code in QGIS which opens objects. The problem I have is that in the directory name there is a character (an underscore-like character) that cannot be encoded. The error is:</p> <blockquote> <p>Traceback (most recent call last): File "", line 1, in UnicodeEncodeError: 'ascii' codec can't encode character u'\u2013' in position 10: ordinal not in range(128)</p> </blockquote> <p>My little code is:</p> <pre><code>import os; from os import startfile; proj = QgsProject.instance(); UriFile = str(proj.fileName()); img = '[% "pad" %]'; Path = str(os.path.dirname(UriFile)); startfile(Path+img) </code></pre> <p>My programming skills are limited, so please help me adjust this small script to overcome the problem.</p>
-1
2016-10-14T09:07:06Z
40,044,179
<p>I assume:</p> <ul> <li>you are using a Python 2 version</li> <li><code>QgsProject.instance().fileName()</code> is a unicode string containing an EN DASH (Unicode char U+2013: &#8211;), which looks like a normal dash (Unicode char U+2D: -) but exists neither in ASCII nor in most common 8-bit character sets.</li> </ul> <p>The error is then normal: in Python 2 the conversion of a unicode string to a plain 8-bit string uses the ASCII character set.</p> <p>Workaround:<br> You can use an explicit encoding asking to use a <em>replace</em> character for unmapped ones:</p> <pre><code>UriFile = proj.fileName().encode('ascii', 'replace') </code></pre> <p>At least you will see where the offending characters occur.</p> <p>Solution:</p> <p>You should either use full unicode processing (and use Python 3) or make sure that all strings processed are representable in your current character set (often latin1).</p> <p>Alternatively, if it makes sense in your use case, you could try to use the UTF-8 encoding, which can represent any Unicode character in 1 to 4 bytes:</p> <pre><code>UriFile = proj.fileName().encode('utf8') </code></pre>
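<p>A quick Python 2 session illustrating both the failure and the two encodings (output abbreviated, for illustration only):</p> <pre><code>&gt;&gt;&gt; name = u'report \u2013 final'        # contains an EN DASH
&gt;&gt;&gt; str(name)                            # implicit ASCII encoding
UnicodeEncodeError: 'ascii' codec can't encode character u'\u2013' ...
&gt;&gt;&gt; name.encode('ascii', 'replace')      # lossy, but shows where it occurs
'report ? final'
&gt;&gt;&gt; name.encode('utf8')                  # lossless
'report \xe2\x80\x93 final'
</code></pre>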
2
2016-10-14T13:17:33Z
[ "python", "unicode" ]
ASCII codec can't encode character u'\u2013'
40,039,212
<p>I have a little Python code in QGIS which opens objects. The problem I have is that in the directory name there is a character (an underscore-like character) that cannot be encoded. The error is:</p> <blockquote> <p>Traceback (most recent call last): File "", line 1, in UnicodeEncodeError: 'ascii' codec can't encode character u'\u2013' in position 10: ordinal not in range(128)</p> </blockquote> <p>My little code is:</p> <pre><code>import os; from os import startfile; proj = QgsProject.instance(); UriFile = str(proj.fileName()); img = '[% "pad" %]'; Path = str(os.path.dirname(UriFile)); startfile(Path+img) </code></pre> <p>My programming skills are limited, so please help me adjust this small script to overcome the problem.</p>
-1
2016-10-14T09:07:06Z
40,074,519
<p>Thanks for the answers.</p> <p>I found the solution: replacing <code>str</code> with <code>unicode</code> in the Python code, see below.</p> <pre><code>import os; from os import startfile; proj = QgsProject.instance(); UriFile = unicode(proj.fileName()); img = '[% "pad" %]'; Path = unicode(os.path.dirname(UriFile)); startfile(Path+img) </code></pre>
0
2016-10-16T19:28:44Z
[ "python", "unicode" ]
No such option, click version 6.6
40,039,243
<p>Using <a href="http://click.pocoo.org/5/" rel="nofollow">http://click.pocoo.org/5/</a></p> <p>I have the command defined below; when I run it, the <code>--missing</code> option is passed through correctly (I can see the value), but I still get <code>Error: no such option: --missing</code> in the terminal and the command fails. </p> <p>What am I doing wrong here, exactly? The code below has had some information stripped from it to make it less overwhelming, but the logic is the same.</p> <pre><code>@cli.group() def migrator(): """Migrator from existing HEPData System to new Version""" @migrator.command() @with_appcontext @click.option('--missing', is_flag=True, help='...') @click.option('--start', '-s', type=int, default=None, help='...') @click.option('--end', '-e', default=None, type=int, help='...') @click.option('--date', '-d', type=str, default=None, help='...') def migrate(start, end, missing, date=None): """ Migrates all content... """ if missing: ids = get_missing_records() else: ids = get_all_ids_in_current_system(date) print("Found {} ids to load.".format(len(ids))) if start is not None: _slice = slice(int(start), end) ids = ids[_slice] print("Sliced, going to load {} records.".format(len(ids))) print(ids) load_files(ids) </code></pre>
1
2016-10-14T09:08:26Z
40,082,322
<p>I found the issue. It had little to do with click itself: the <code>get_missing_records()</code> function is actually another CLI command, so the <code>missing</code> parameter was subsequently passed through to it as well, and <code>get_missing_records()</code> obviously knows nothing about that parameter. So, all solved.</p>
0
2016-10-17T09:04:58Z
[ "python", "click", "command-line-interface" ]
Games console troubleshooting code using tkinter (Python)
40,039,322
<p>I need to create a games-console troubleshooting program using tkinter, but I have no idea where to start! Can you recommend what code to use, or any good tutorial websites?</p>
-2
2016-10-14T09:12:06Z
40,053,775
<p>Some good tutorials include:</p> <p><a href="https://www.tutorialspoint.com/python3/python_gui_programming.htm" rel="nofollow">https://www.tutorialspoint.com/python3/python_gui_programming.htm</a></p> <p><a href="http://www.python-course.eu/python_tkinter.php" rel="nofollow">http://www.python-course.eu/python_tkinter.php</a></p> <p><a href="https://pythonprogramming.net/tkinter-python-3-tutorial-adding-buttons/" rel="nofollow">https://pythonprogramming.net/tkinter-python-3-tutorial-adding-buttons/</a></p> <p>Also, some example code:</p> <p>Note: the first two are in python 2 so switch the line <code>import Tkinter</code> to <code>import tkinter</code></p> <p><a href="https://github.com/ripexz/python-tkinter-minesweeper" rel="nofollow">https://github.com/ripexz/python-tkinter-minesweeper</a> Comment out <code>import tkMessageBox</code></p> <p><a href="https://github.com/orbitbreak/tkinter-calc" rel="nofollow">https://github.com/orbitbreak/tkinter-calc</a></p> <p><a href="https://github.com/Innoviox/HammerHead" rel="nofollow">https://github.com/Innoviox/HammerHead</a></p>
0
2016-10-15T00:28:44Z
[ "python", "tkinter" ]
I need help with for or while loops
40,039,357
<p>Create a scripts which will print the following, using for or while loops:</p> <pre><code> # # ## ## ### ### #### #### ##### ##### ###### ###### ####### ####### ######## ######## ######## ######## ####### ####### ###### ###### ##### ##### #### #### ### ### ## ## # # </code></pre> <p>so this is pretty much my "Homework", can someone help me? i dont get it... i guess i wasn't listening to lection.. i have few more but im not posting them cuz i wanna do it by my self </p>
-3
2016-10-14T09:13:50Z
40,039,463
<p>You should understand what a loop does.</p> <ul> <li><strong>For</strong>: you repeat a similar action X times.</li> <li><strong>While</strong>: you repeat a similar action as long as a condition holds.</li> </ul> <p>If you want your exercise to work, you only need to run one loop X times, and then run another loop X times.</p> <p>The first loop, for example, will print "#" X times, then a space, then "#" X times again; each time you pass through the loop, X has to be incremented (see the sketch below).</p> <p><strong>Now, go update your post with some code so we can help you understand!</strong></p>
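<p>As a hint rather than a full solution: string repetition builds one row at a time, so inside the loop you only need something like this:</p> <pre><code>n = 3
row = '#' * n + ' ' + '#' * n   # '### ###'
print(row)
</code></pre>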
1
2016-10-14T09:18:59Z
[ "python" ]
How to change date in pandas dataframe
40,039,457
<p>I have a dataframe like the one below:</p> <pre><code> day 0 2016-07-12 1 2016-08-13 2 2016-09-14 3 2016-10-15 4 2016-11-01 dtype:datetime64 </code></pre> <p>I would like to set the day of every date to 1, like below:</p> <pre><code> day 0 2016-07-01 1 2016-08-01 2 2016-09-01 3 2016-10-01 4 2016-11-01 </code></pre> <p>I tried</p> <pre><code>df.day.dt.day=1 </code></pre> <p>but it didn't work. How can I transform it?</p>
1
2016-10-14T09:18:26Z
40,039,602
<p>You can use <code>numpy</code>: first convert to a <code>numpy array</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.values.html" rel="nofollow"><code>values</code></a> and then convert to <code>datetime64[M]</code> with <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.astype.html" rel="nofollow"><code>astype</code></a>, which is the fastest solution:</p> <pre><code>df['day'] = df['day'].values.astype('datetime64[M]') print (df) day 0 2016-07-01 1 2016-08-01 2 2016-09-01 3 2016-10-01 4 2016-11-01 </code></pre> <hr> <p>Another, slower solution:</p> <pre><code>df['day'] = df['day'].map(lambda x: pd.datetime(x.year, x.month, 1)) print (df) day 0 2016-07-01 1 2016-08-01 2 2016-09-01 3 2016-10-01 4 2016-11-01 </code></pre> <p><strong>Timings</strong>:</p> <pre><code>#[50000 rows x 1 columns] df = pd.concat([df]*10000).reset_index(drop=True) def f(df): df['day'] = df['day'].values.astype('datetime64[M]') return df print (f(df)) In [281]: %timeit (df['day'].map(lambda x: pd.datetime(x.year, x.month, 1))) 10 loops, best of 3: 160 ms per loop In [282]: %timeit (f(df)) 100 loops, best of 3: 4.38 ms per loop </code></pre>
2
2016-10-14T09:25:16Z
[ "python", "pandas" ]
Python return value
40,039,504
<p>I have a global variable <code>global_a</code> and a function that determines its value:</p> <pre><code># long Syntax def assignement_a(): global_a = len(xyz) return global_a </code></pre> <p>The point is to return the value to the caller and simultaneously assign it to the global, to be saved for further use, so I need something like this:</p> <pre><code># shorter syntax def assignement_a(): return global_a = len(xyz) </code></pre> <p>Is there a short syntax solution in Python?</p>
-1
2016-10-14T09:21:09Z
40,039,721
<p>You can assign to the global variable from the caller function or statement, for example:</p> <pre><code>global_a=0 var1=0 xyz="stackoverflow" def assignement_a(): return len(xyz) def test(): global global_a global var1 var1=global_a=assignement_a() print(var1) test() print(var1) print(global_a) </code></pre> <p>Here the <code>test</code> function is the caller: the return value is assigned to the global variable and can then be used for further processing.</p>
0
2016-10-14T09:30:43Z
[ "python", "function", "return-value" ]
Python return value
40,039,504
<p>I have a global variable <code>global_a</code> and a function that determines its value:</p> <pre><code># long Syntax def assignement_a(): global_a = len(xyz) return global_a </code></pre> <p>The point is to return the value to the caller and simultaneously assign it to the global, to be saved for further use, so I need something like this:</p> <pre><code># shorter syntax def assignement_a(): return global_a = len(xyz) </code></pre> <p>Is there a short syntax solution in Python?</p>
-1
2016-10-14T09:21:09Z
40,039,872
<p>Just for the record, the correct version of the "long syntax" is:</p> <pre><code>def assignement_a(): global global_a global_a = len(xyz) return global_a </code></pre> <p>As for the "short syntax", I don't see why it would be necessary here. Maybe it could be made shorter using a lambda, but that might just obscure the purpose of this function. Furthermore, using globals is not really a best practice anyway, which is why Python requires you to be explicit about it (that, and the way Python handles variable scope).</p>
1
2016-10-14T09:37:01Z
[ "python", "function", "return-value" ]
string.format() stops working correctly in my script
40,039,516
<p>I have this format string:</p> <pre><code>print "{u: &lt;16} {l: &gt;16} {lse: &lt;19} {active: &lt;12}".format(...) </code></pre> <p>which works fine in the Python console, but when run in my program, it prints <code>&lt;19</code> (literally) for the <code>lse</code> part. When I remove the <code>&lt;19</code> in the format specifier for <code>lse</code>, it does work...</p> <p>My data is fine, because when just using plain <code>{u}</code>, etc., the correct data is printed.</p> <p><strong>Update:</strong> the <code>lse</code> field is a <code>datetime</code> field. How can I print a <code>datetime</code> field using the format specifiers?</p>
2
2016-10-14T09:21:29Z
40,039,924
<p>A <code>datetime</code> object does not abide by the usual format specifier rules: its <code>__format__</code> method hands the format spec to <code>strftime</code>, so a spec like <code> &lt;19</code>, which contains no <code>%</code> directives, is simply echoed literally. When the value is cast to <code>str</code> first, the normal string alignment rules apply:</p> <pre><code>print "{d: &lt;19}".format(d=str(datetime.datetime.now())) </code></pre>
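<p>A small sketch of both options in Python 2: pass the spec through to <code>strftime</code>, or cast first and use string alignment (the wider field makes the padding visible):</p> <pre><code>import datetime

now = datetime.datetime.now()
print "{:%Y-%m-%d %H:%M}".format(now)   # the spec is handed to strftime
print "{: &lt;30}|".format(str(now))       # cast first for normal padding
</code></pre>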
1
2016-10-14T09:39:53Z
[ "python", "format" ]
Pandas Delete rows from dataframe based on condition
40,039,531
<p>Consider this code:</p> <pre><code>from StringIO import StringIO import pandas as pd txt = """a, RR 10, 1asas 20, 1asasas 30, 40, asas 50, ayty 60, 2asas 80, 3asas""" frame = pd.read_csv(StringIO(txt), skipinitialspace=True) print frame,"\n\n\n" l=[] for i,j in frame[~ frame['RR'].str.startswith("1", na=True)]['RR'].iteritems(): if j.startswith(('2','3')): if frame[frame['RR'].str.startswith("1", na=False)]['RR'].str.match("1"+j[1:], as_indexer = True).any(): l.append(i) else: if frame[frame['RR'].str.startswith("1", na=False)]['RR'].str.match("1"+j, as_indexer = True).any(): l.append(i) frame = frame.drop(frame.index[l]) print frame </code></pre> <p>What I am doing here is:</p> <p>1) Loop through the dataframe to drop any <code>RR</code> which already has <code>1RR</code> in the dataframe.</p> <p>2) If <code>RR</code> has <code>2 or 3</code> at the start, then drop it if that <code>RR</code> has <code>1RR[1:]</code> in the dataframe.</p> <p>3) If <code>RR</code> starts with <code>1</code> or is <code>NaN</code>, do not touch it.</p> <p>The code works fine, but this <code>dataframe</code> will have up to 1 million entries and I don't think this code is optimised. As I have just started with <code>pandas</code>, I have limited knowledge. Is there any way to achieve this without <code>iteration</code>? Does <code>pandas</code> have any built-in utility to do this?</p>
2
2016-10-14T09:22:13Z
40,045,991
<p><strong>First</strong>, keep all strings starting with <code>1</code> or <code>nan</code>:</p> <pre><code>keep = frame['RR'].str.startswith("1", na=True) keep1 = keep[keep] # will be used at the end </code></pre> <p><strong>Second</strong>, keep strings starting with <code>2</code> or <code>3</code> that are not in the first dataframe <code>rr1</code>:</p> <pre><code>rr1 = frame.loc[frame['RR'].str.startswith("1", na=False), 'RR'] keep2 = ~frame.loc[ (frame['RR'].str.startswith("2")) | (frame['RR'].str.startswith("3")), 'RR' ].str.slice(1).isin(rr1.str.slice(1)) </code></pre> <p><strong>Third</strong>, keep other strings that are not in <code>rr1</code> after adding a leading <code>1</code>:</p> <pre><code>import numpy as np keep3 = ~("1" + frame.loc[ ~frame['RR'].str.slice(0,1).isin([np.nan, "1", "2", "3"]), 'RR' ]).isin(rr1) </code></pre> <p><strong>Finally</strong>, put everything together:</p> <pre><code>frame[pd.concat([keep1, keep2, keep3]).sort_index()] </code></pre>
1
2016-10-14T14:45:10Z
[ "python", "python-2.7", "pandas" ]
Pylatex error when generate PDF file - No such file or directory
40,039,763
<p>I just want to use PyLaTeX to generate a PDF file. I looked at the basic example and re-ran the script, but it raised the error: OSError: [Errno 2] No such file or directory.</p> <p>Here is my script:</p> <pre><code>import sys from pylatex import Document, Section, Subsection, Command from pylatex.utils import italic, NoEscape def fill_document(doc): """Add a section, a subsection and some text to the document. :param doc: the document :type doc: :class:`pylatex.document.Document` instance """ with doc.create(Section('A section')): doc.append('Some regular text and some ') doc.append(italic('italic text. ')) with doc.create(Subsection('A subsection')): doc.append('Also some crazy characters: $&amp;#{}') if __name__ == '__main__': reload(sys) sys.setdefaultencoding('utf8') # Basic document doc = Document() fill_document(doc) doc.generate_pdf("full") doc.generate_tex() </code></pre> <p>And the error:</p> <pre><code>Traceback (most recent call last): File "/Users/code/Test Generator/Generate.py", line 34, in &lt;module&gt; doc.generate_pdf("full") File "/Library/Python/2.7/site-packages/pylatex/document.py", line 227, in generate_pdf raise(os_error) OSError: [Errno 2] No such file or directory </code></pre> <p>Can someone help me? :-D Thanks a lot.</p>
0
2016-10-14T09:32:36Z
40,140,312
<p>Based on the code around the error, you're probably missing a LaTeX compiler:</p> <pre><code>compilers = ( ('latexmk', latexmk_args), ('pdflatex', []) ) </code></pre> <p>Try installing one; on Debian/Ubuntu:</p> <pre><code>apt-get install latexmk </code></pre> <p>(Your traceback shows macOS paths; there, <code>latexmk</code> ships with a TeX distribution such as MacTeX.)</p>
0
2016-10-19T19:49:02Z
[ "python", "nosuchfileexception" ]
Pandas + HDF5 Panel data storage for large data
40,039,776
<p>As part of my research, I am searching for a good storage design for my panel data. I am using pandas for all in-memory operations. I've had a look at the following two questions/contributions, <a href="http://stackoverflow.com/questions/14262433/large-data-work-flows-using-pandas">Large Data Work flows using Pandas</a> and <a href="http://stackoverflow.com/questions/23863553/query-hdf5-in-pandas">Query HDF5 Pandas</a>, as they come closest to my set-up. However, I have a couple of questions left. First, let me define my data and some requirements:</p> <ol> <li>Size: I have around 800 dates, 9000 IDs and up to 200 variables. Hence, flattening the panel (along dates and IDs) corresponds to 7.2mio rows and 200 columns. This might all fit in memory or not; let's assume it does not. Disk space is not an issue.</li> <li>Variables are typically calculated once, but updates/changes probably happen from time to time. Once updates occur, old versions don't matter anymore.</li> <li>New variables are added from time to time, mostly one at a time only.</li> <li>New rows are not added.</li> <li>Querying takes place. For example, often I need to select only a certain date range like <code>date&gt;start_date &amp; date&lt;end_date</code>. But some queries need to consider rank conditions on dates. For example, get all data (i.e. columns) where <code>rank(var1)&gt;500 &amp; rank(var1)&lt;1000</code>, where rank is as of date.</li> </ol> <p>The objective is to achieve fast reading/querying of data. Data writing is not so critical.</p> <p>I thought of the following HDF5 design:</p> <ol> <li>Follow the groups_map approach (of <a href="http://stackoverflow.com/questions/14262433/large-data-work-flows-using-pandas">1</a>) to store variables in different tables. Limit the number of columns for each group to 10 (to avoid large memory loads when updating single variables, see point 3).</li> <li>Each group represents one table, where I use the multi-index based on dates &amp; ids for each table stored.</li> <li>Create an update function, to update variables. The function loads the table with all (10) columns to memory as a df, deletes the table on the disk, replaces the updated variable in the df and saves the table from memory back to disk.</li> <li>Create an add function: add var1 to a group with less than 10 columns, or create a new group if required. Saving is similar to 3: load the current group to memory, delete the table on disk, add the new column and save it back to disk.</li> <li>Calculate ranks as of date for relevant variables and add them to disk storage as rank_var1, which should reduce the query to simply <code>rank_var1 &gt; 500 &amp; rank_var1&lt;1000</code>.</li> </ol> <p>I have the following questions:</p> <ol> <li>Updating an HDFTable, I suppose I have to delete the entire table in order to update a single column?</li> <li>When to use 'data_columns', or should I simply assign True in HDFStore.append()?</li> <li>If I want to query based on the condition <code>rank_var1 &gt; 500 &amp; rank_var1&lt;1000</code>, but I need columns from other groups: can I enter the index received from the rank_var1 condition into the query to get other columns based on this index (the index is a multi-index with date and ID)? Or would I need to loop this index by date and then chunk the IDs, similarly to what is proposed in <a href="http://stackoverflow.com/questions/23863553/query-hdf5-in-pandas">2</a>, and repeat the procedure for each group where I need it?
Alternatively, (a) I could add to each groups table rank columns, but it seems extremely inefficient in terms of disk-storage. Note, the number of variables where rank filtering is relevant is limited (say 5). Or (b) I could simply use the df_rank received from the rank_var1 query and use in-memory operations via <code>df_rank.merge(df_tmp, left_index=True, right_index=True, how='left')</code> and loop through groups (df_tmp) where I select the desired columns.</li> <li>Say I have some data in different frequencies. Having different group_maps (or different storages) for different freq is the way to go I suppose?</li> <li>Copies of the storage might be used on win/ux systems. I assume it is perferctly compatible, anything to consider here?</li> <li>I plan to use <code>pd.HDFStore(str(self.path), mode='a', complevel=9, complib='blosc')</code>. Any concerns regarding complevel or complib?</li> </ol> <p>I've started to write up some code, once I have something to show I'll edit and add it if desired. Please, let me know if you need any more information.</p> <p>EDIT I here a first version of my storage class, please adjust path at bottom accordingly. Sorry for the length of the code, comments welcome</p> <pre><code>import pandas as pd import numpy as np import string class LargeDFStorage(): # TODO add index features to ensure correct indexes # index_names = ('date', 'id') def __init__(self, h5_path, groups_map): """ Parameters ---------- h5_path: str hdf5 storage path groups_map: dict where keys are group_names and values are dict, with at least key 'columns' where the value is list of column names. A special group_name is reserved for group_name/key "query", which can be used as queering and conditioning table when getting data, see :meth:`.get`. """ self.path = str(h5_path) self.groups_map = groups_map self.column_map = self._get_column_map() # if desired make part of arguments self.complib = 'blosc' self.complevel = 9 def _get_column_map(self): """ Calc the inverse of the groups_map/ensures uniqueness of cols Returns ------- dict: with cols as keys and group_names as values """ column_map = dict() for g, value in self.groups_map.items(): if len(set(column_map.keys()) &amp; set(value['columns'])) &gt; 0: raise ValueError('Columns have to be unique') for col in value['columns']: column_map[col] = g return column_map @staticmethod def group_col_names(store, group_name): """ Returns all column names of specific group Parameters ---------- store: pd.HDFStore group_name: str Returns ------- list: of all column names in the group """ if group_name not in store: return [] # hack to get column names, straightforward way!? return store.select(group_name, start=0, stop=0).columns.tolist() @staticmethod def stored_cols(store): """ Collects all columns stored in HDF5 store Parameters ---------- store: pd.HDFStore Returns ------- list: a list of all columns currently in the store """ stored_cols = list() for x in store.items(): group_name = x[0][1:] stored_cols += LargeDFStorage.group_col_names(store, group_name) return stored_cols def _find_groups(self, columns): """ Searches all groups required for covering columns Parameters ---------- columns: list list of valid columns Returns ------- list: of unique groups """ groups = list() for column in columns: groups.append(self.column_map[column]) return list(set(groups)) def add_columns(self, df): """ Adds columns to storage for the first time. 
If columns should be updated use(use :meth:`.update` instead) Parameters ---------- df: pandas.DataFrame with new columns (not yet stored in any of the tables) Returns ------- """ store = pd.HDFStore(self.path, mode='a' , complevel=self.complevel, complib=self.complib) # check if any column has been stored already if df.columns.isin(self.stored_cols(store)).any(): store.close() raise ValueError('Some cols are already in the store') # find all groups needed to store the data groups = self._find_groups(df.columns) for group in groups: v = self.groups_map[group] # select columns of current group in df select_cols = df.columns[df.columns.isin(v['columns'])].tolist() tmp = df.reindex(columns=select_cols, copy=False) # set data column to False only in case of query data dc = None if group=='query': dc = True stored_cols = self.group_col_names(store,group) # no columns in group (group does not exists yet) if len(stored_cols)==0: store.append(group, tmp, data_columns=dc) else: # load current disk data to memory df_grp = store.get(group) # remove data from disk store.remove(group) # add new column(s) to df_disk df_grp = df_grp.merge(tmp, left_index=True, right_index=True, how='left') # save old data with new, additional columns store.append(group, df_grp, data_columns=dc) store.close() def _query_table(self, store, columns, where): """ Selects data from table 'query' and uses where expression Parameters ---------- store: pd.HDFStore columns: list desired data columns where: str a valid select expression Returns ------- """ query_cols = self.group_col_names(store, 'query') if len(query_cols) == 0: store.close() raise ValueError('No data to query table') get_cols = list(set(query_cols) &amp; set(columns)) if len(get_cols) == 0: # load only one column to minimize memory usage df_query = store.select('query', columns=query_cols[0], where=where) add_query = False else: # load columns which are anyways needed already df_query = store.select('query', columns=get_cols, where=where) add_query = True return df_query, add_query def get(self, columns, where=None): """ Retrieve data from storage Parameters ---------- columns: list/str list of columns to use, or use 'all' if all columns should be retrieved where: str a valid select statement Returns ------- pandas.DataFrame with all requested columns and considering where """ store = pd.HDFStore(str(self.path), mode='r') # get all columns in stored in HDFStorage stored_cols = self.stored_cols(store) if columns == 'all': columns = stored_cols # check if all desired columns can be found in storage if len(set(columns) - set(stored_cols)) &gt; 0: store.close() raise ValueError('Column(s): {}. 
not in storage'.format( set(columns)- set(stored_cols))) # get all relevant groups (where columns are taken from) groups = self._find_groups(columns) # if where query is defined retrieve data from storage, eventually # only index of df_query might be used if where is not None: df_query, add_df_query = self._query_table(store, columns, where) else: df_query, add_df_query = None, False # dd collector df = list() for group in groups: # skip in case where was used and columns used from if where is not None and group=='query': continue # all columns which are in group but also requested get_cols = list( set(self.group_col_names(store, group)) &amp; set(columns)) tmp_df = store.select(group, columns=get_cols) if df_query is None: df.append(tmp_df) else: # align query index with df index from storage df_query, tmp_df = df_query.align(tmp_df, join='left', axis=0) df.append(tmp_df) store.close() # if any data of query should be added if add_df_query: df.append(df_query) # combine all columns df = pd.concat(df, axis=1) return df def update(self, df): """ Updates data in storage, all columns have to be stored already in order to be accepted for updating (use :meth:`.add_columns` instead) Parameters ---------- df: pd.DataFrame with index as in storage, and column as desired Returns ------- """ store = pd.HDFStore(self.path, mode='a' , complevel=self.complevel, complib=self.complib) # check if all column have been stored already if df.columns.isin(self.stored_cols(store)).all() is False: store.close() raise ValueError('Some cols have not been stored yet') # find all groups needed to store the data groups = self._find_groups(df.columns) for group in groups: dc = None if group=='query': dc = True # load current disk data to memory group_df = store.get(group) # remove data from disk store.remove(group) # update with new data group_df.update(df) # save updated df back to disk store.append(group, group_df, data_columns=dc) store.close() class DataGenerator(): np.random.seed(1282) @staticmethod def get_df(rows=100, cols=10, freq='M'): """ Simulate data frame """ if cols &lt; 26: col_name = list(string.ascii_lowercase[:cols]) else: col_name = range(cols) if rows &gt; 2000: freq = 'Min' index = pd.date_range('19870825', periods=rows, freq=freq) df = pd.DataFrame(np.random.standard_normal((rows, cols)), columns=col_name, index=index) df.index.name = 'date' df.columns.name = 'ID' return df @staticmethod def get_panel(rows=1000, cols=500, items=10): """ simulate panel data """ if items &lt; 26: item_names = list(string.ascii_lowercase[:cols]) else: item_names = range(cols) panel_ = dict() for item in item_names: panel_[item] = DataGenerator.get_df(rows=rows, cols=cols) return pd.Panel(panel_) def main(): # Example of with DataFrame path = 'D:\\fc_storage.h5' groups_map = dict( a=dict(columns=['a', 'b', 'c', 'd', 'k']), query=dict(columns=['e', 'f', 'g', 'rank_a']), ) storage = LargeDFStorage(path, groups_map=groups_map) df = DataGenerator.get_df(rows=200000, cols=15) storage.add_columns(df[['a', 'b', 'c', 'e', 'f']]) storage.update(df[['a']]*3) storage.add_columns(df[['d', 'g']]) print(storage.get(columns=['a','b', 'f'], where='f&lt;0 &amp; e&lt;0')) # Example with panel and rank condition path2 = 'D:\\panel_storage.h5' storage_pnl = LargeDFStorage(path2, groups_map=groups_map) panel = DataGenerator.get_panel(rows=800, cols=2000, items=24) df = panel.to_frame() df['rank_a'] = df[['a']].groupby(level='date').rank() storage_pnl.add_columns(df[['a', 'b', 'c', 'e', 'f']]) storage_pnl.update(df[['a']]*3) 
storage_pnl.add_columns(df[['d', 'g', 'rank_a']]) print(storage_pnl.get(columns=['a','b','e', 'f', 'rank_a'], where='f&gt;0 &amp; e&gt;0 &amp; rank_a &lt;100')) if __name__ == '__main__': main() </code></pre>
1
2016-10-14T09:33:03Z
40,041,761
<p>It's a bit difficult to answer those questions without particular examples...</p> <blockquote> <p>Updating HDFTable, I suppose I have to delete the entire table in order to update a single column?</p> </blockquote> <p>AFAIK yes, unless you are storing single columns separately, but it will be done automatically; you just have to write your DF/Panel back to HDF Store.</p> <blockquote> <p>When to use 'data_columns', or should I simply assign True in HDFStore.append()?</p> </blockquote> <p><code>data_columns=True</code> - will index <strong>all</strong> your columns - IMO it's a waste of resources unless you are going to use <strong>all</strong> columns in the where parameter (i.e. if all columns should be indexed). I would specify there only those columns that will be used often for searching in the <code>where=</code> clause. Consider those columns as indexed columns in a database table.</p> <blockquote> <p>If I want to query based on condition of rank_var1 > 500 &amp; rank_var1&lt;1000, but I need columns from other groups. Can I enter the index received from the rank_var1 condition into the query to get other columns based on this index (the index is a multi-index with date and ID)?</p> </blockquote> <p>I think we would need some reproducible sample data and examples of your queries in order to give a reasonable answer...</p> <blockquote> <p>Copies of the storage might be used on win/ux systems. I assume it is perferctly compatible, anything to consider here?</p> </blockquote> <p>Yes, it should be fully compatible.</p> <blockquote> <p>I plan to use pd.HDFStore(str(self.path), mode='a', complevel=9, complib='blosc'). Any concerns regarding complevel or complib?</p> </blockquote> <p>Test it with <strong>your data</strong> - results might depend on dtypes, number of unique values, etc. You may also want to consider the <code>lzo</code> complib - it might be faster in some use-cases. Check <a href="http://www.pytables.org/usersguide/optimization.html" rel="nofollow">this</a>. Sometimes a high <code>complevel</code> doesn't give you a better compression ratio, but will be slower (see results of <a href="http://stackoverflow.com/a/37012035/5741205">my old comparison</a>).</p>
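<p>A minimal sketch of the <code>data_columns</code> advice above (column names are illustrative):</p> <pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame({'rank_var1': np.arange(2000),
                   'val': np.random.randn(2000)})

store = pd.HDFStore('demo.h5', mode='w', complevel=9, complib='blosc')
store.append('data', df, data_columns=['rank_var1'])  # index only this column
subset = store.select('data', where='rank_var1 &gt; 500 &amp; rank_var1 &lt; 1000')
store.close()
</code></pre>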
1
2016-10-14T11:12:45Z
[ "python", "database", "pandas", "hdf5" ]
Python Ternary Recursion
40,039,958
<p>I'm creating two functions one, that returns the ternary representation for a base 10 number, and one that returns the base 10 representation for a ternary number using recursion. For example 52 would return 1221. Right now, I have this down, but I'm not sure how to make it. I'm mostly confused with the aspect of the 2 in ternary representation and how to implement that into code.</p> <pre><code>def numToTernary(n): '''Precondition: integer argument is non-negative. Returns the string with the ternary representation of non-negative integer n. If n is 0, the empty string is returned.''' if n==0: return '' if n&lt;3: return str(n) return numToTernary(n//3)+ </code></pre>
-1
2016-10-14T09:41:25Z
40,040,245
<p>You were nearly there with your code. This should do the trick, according to <a href="http://stackoverflow.com/questions/2088201/integer-to-base-x-system-using-recursion-in-python">this question</a>.</p> <p>However, you will have to do the search for "0" outside this function: as it was done in your code, the "0" digits were not skipped in the output, and a number that should have output "120011" would have instead output "1211" for example.</p> <pre><code>def numToTernary(n): '''Precondition: integer argument is non-negative. Returns the string with the ternary representation of non-negative integer n. If n is 0, the empty string is returned.''' if n&lt;3: return str(n) return numToTernary(n//3)+str(n%3) </code></pre>
0
2016-10-14T09:54:56Z
[ "python", "recursion", "ternary" ]
Python Ternary Recursion
40,039,958
<p>I'm creating two functions one, that returns the ternary representation for a base 10 number, and one that returns the base 10 representation for a ternary number using recursion. For example 52 would return 1221. Right now, I have this down, but I'm not sure how to make it. I'm mostly confused with the aspect of the 2 in ternary representation and how to implement that into code.</p> <pre><code>def numToTernary(n): '''Precondition: integer argument is non-negative. Returns the string with the ternary representation of non-negative integer n. If n is 0, the empty string is returned.''' if n==0: return '' if n&lt;3: return str(n) return numToTernary(n//3)+ </code></pre>
-1
2016-10-14T09:41:25Z
40,040,835
<p>So the big idea with all base change is the following:</p> <p>You take a <code>number n</code> written in <code>base b</code> as this 123. That means <code>n</code> in <code>base 10</code> is equal to <code>1*b² + 2*b + 3</code> . So convertion from <code>base b</code> to <code>base 10</code> is straigtforward: you take all digits and multiply them by the base at the right power. </p> <p>Now the for the reverse operation: you have a <code>number n</code> in <code>base 10</code> and want to turn it in <code>base b</code>. The operation is simply a matter of calculating each digit in the new base. (I'll assume my result has only three digits for the following example) So I am looking for d2,d1,d0 the digits in <code>base b</code> of n. I know that <code>d2*b² + d1*b + d0 = n</code>. That means that <code>(d2*b + d1)*b + d0 = n</code> so we recognize the result of an euclidian division where d0 is the remainder of the euclidian division of n by d : <code>d0=n%d</code>. We have identified d0 as the remainder so the expression in parentheses is the <code>quotien q</code>, <code>q=n//b</code> so we have a new equation to solve using the exact same method (hence the recursion) <code>d2*b + d1 = q</code>.</p> <p>And all that translate to the code you almost had :</p> <pre><code>def numToTernary(n): '''Precondition: integer argument is non-negative. Returns the string with the ternary representation of non-negative integer n. If n is 0, the empty string is returned.''' if n==0: return '' if n&lt;3: return str(n) return numToTernary(n//3)+str(n%3) print(numToTernary(10)) Out[1]: '101' </code></pre>
0
2016-10-14T10:22:50Z
[ "python", "recursion", "ternary" ]
Python using logging module than print
40,040,129
<p>I want to use a logging module to replace the <code>print()</code>, Would appreciate some suggestions on how o get this done.</p> <p><strong>Code in Context</strong></p> <pre><code>from bs4 import BeautifulSoup, Tag import requests from pprint import pprint import sys import logging from logging.config import fileConfig fileConfig("logging.conf") url = "http://hortonworks.com/careers/open-positions/" response = requests.get(url) if response.status_code != 200: print("Request failed with http code {}".format(response.status_code)) sys.exit(1) soup = BeautifulSoup(response.text, "html.parser") jobs = [] div_main = soup.select("div#careers_list") for div in div_main: for element in div: if isinstance(element, Tag) and "class" in element.attrs: if "department_title" in element.attrs["class"]: department_title = element.get_text().strip() elif "career" in element.attrs["class"]: location = element.select("div.location")[0].get_text().strip() title = element.select("div.title")[0].get_text().strip() job = { "job_location": location, "job_title": title, "job_dept": department_title } jobs.append(job) logging.info(jobs) </code></pre> <p>I replaced the <code>pprint</code> with the logging.info.</p> <p>my logging.conf file</p> <pre><code>[loggers] keys=root [handlers] keys=hand01 [formatters] keys=form01 [logger_root] level=DEBUG handlers=hand01 [handler_hand01] level=DEBUG class=StreamHandler args=(sys.stdout,) formatter=form01 [formatter_form01] format= %(processName)s %(asctime)s %(pathname)s %(levelname)-9s %(message)s datefmt=%Y-%m-%d %H:%M:%S class=logging.Formatter </code></pre> <p>Is this the right approach?</p>
-2
2016-10-14T09:49:28Z
40,045,723
<p>What you want to achieve is still not quite clear to me - specially, why you want to use a logger here. It looks like your script's goal is to output the jobs listing ? If yes, printing it to stdout was probably the RightThing(tm) to do. </p> <p>As a general rule: for a command line application, stdout is for normal program output, stderr for error reporting / debug trace stuff. </p> <p>Logging is usually used for complex application (GUI, daemons etc), and by well written library code, with the library code <em>only</em> logging, and leaving the logging configuration part to the application (because the library code cannot know in which context it will be used), and it is used for reporting (errors, warnings, debug trace...) so the admins and or developers can analyse the application's behaviour and eventually trace problems when something goes wrong. </p> <p>IOW, using logging in a =~ 40 lines quick script with not even a function in it really doesn't make sense, and using it to send the script's output to stdout is actually a wtf (please pardon my french). </p> <p>Now if what you wanted was insight on "how" to use the logging module that's another question. </p> <p>First, as I stated above, one of the points of the <code>logging</code> module is to decouple logging configuration (which should be done by the application's entry point, as soon as possible, and can be done in anyway that please the application's developper - using <code>basicConfig</code>, <code>dictConfig</code> or plain manual configuration) from logging use (which happens inside the application's and libraries code). Typically this means you have a very simple main script (the "application entry point") that merely set up logging etc, checks environment variables / command line arguments etc, imports the application code (which itself will import required libs) and run the application.</p> <p>In this context, each module (either application modules or library module) will ask for it's own logger, canonically using the module's name (ie <code>myapp.mypackage.mymodule</code>) as provided by the <code>__name__</code> magic variable as logger name. This will build a "tree" of logger (with, from root to leaf, 'root' -> 'myapp' -> 'mypackage' -> 'mymodule'), each of which can be configured specifically or relie on it's parent logger config. Then, it's up to you as the module's developper to log what you think will be relevant, using the appropriate level. </p>
0
2016-10-14T14:32:32Z
[ "python", "logging", "beautifulsoup", "python-requests" ]
How to use different env for different projects?
40,040,221
<p>I have two projects running on a server. I'm storing AWS_SECRET in env.</p> <p>I set those env in my ~/.bash_profile. How do I make sure that one project gets the correct key? Can I set env only on project scope?</p> <p>Thanks.</p>
0
2016-10-14T09:53:44Z
40,040,370
<p>On thing you can do is set the environment variables on runtime.</p> <p>Example:</p> <pre><code>$ AWS_SECRET="Secret1" run_server1 arg1 arg2 arg3 $ AWS_SECRET="Secret2" run_server2 arg1 arg2 arg3 </code></pre>
0
2016-10-14T10:00:17Z
[ "python", "unix" ]
How to use different env for different projects?
40,040,221
<p>I have two projects running on a server. I'm storing AWS_SECRET in env.</p> <p>I set those env in my ~/.bash_profile. How do I make sure that one project gets the correct key? Can I set env only on project scope?</p> <p>Thanks.</p>
0
2016-10-14T09:53:44Z
40,040,467
<p><a href="https://github.com/kennethreitz/autoenv" rel="nofollow">Autoenv</a> is built for this for this exact purpose:</p> <pre><code>pip install autoenv echo 'AWS_SECRET="Secret1"' &gt; ./project1/.env echo 'AWS_SECRET="Secret2"' &gt; ./project2/.env </code></pre>
0
2016-10-14T10:06:11Z
[ "python", "unix" ]
How to use different env for different projects?
40,040,221
<p>I have two projects running on a server. I'm storing AWS_SECRET in env.</p> <p>I set those env in my ~/.bash_profile. How do I make sure that one project gets the correct key? Can I set env only on project scope?</p> <p>Thanks.</p>
0
2016-10-14T09:53:44Z
40,040,661
<p>For the AWS CLI (<code>aws</code>) specifically, you can use "<a href="http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html#cli-multiple-profiles" rel="nofollow">profiles</a>", which you can select with the <code>--profile</code> argument. For example, in your <code>~/.aws/credentials</code>:</p> <pre><code>[default] aws_access_key_id=AKIAIOSFODNN7EXAMPLE aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY [user2] aws_access_key_id=AKIAI44QH8DHBEXAMPLE aws_secret_access_key=je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEY </code></pre> <p>Then use <code>aws --profile user2</code> to select the non-default profile as a one-off, or set the environment variable <code>AWS_DEFAULT_PROFILE=user2</code> in your shell to make it permanent. This might be easier than setting <code>AWS_SECRET</code> directly, because profile names are actually something you can remember :)</p> <p>Besides credentials, profiles also let you switch other settings, like <code>region</code>.</p>
1
2016-10-14T10:15:11Z
[ "python", "unix" ]
Creating Weighted Directed Graph in Python based on User Input
40,040,304
<p>I need to create something like this to represent a directed weighted graph based on user input - </p> <pre><code>graph = { 'a': {'b': 1, 'c': 4}, 'b': {'c': 3, 'd': 2, 'e': 2}, 'c': {}, 'd': {'b': 1, 'c': 5}, 'e': {'d': -2} } </code></pre> <p>So far,</p> <pre><code>import pprint graph = {} values = {} v = int(input("Enter number of vertices: ")) print("Enter vertices(keys) : ") for i in range(v): graph.setdefault(input()) edges = {} for x in graph: edges.setdefault(x) for i in graph: graph[i] = edges print("Enter weights: ") for i in graph: print(i) for j in graph[i]: var = input() graph[i][j] = var pprint.pprint(graph) </code></pre> <p>I tried but for some reason, it is replacing the previously read weights with last read weights. Any solutions? </p>
0
2016-10-14T09:57:29Z
40,042,334
<p>Do you have an indentation error?</p> <p>Instead of </p> <pre><code>for i in graph: print(i) for j in graph[i]: var = input() graph[i][j] = var </code></pre> <p>You maybe intended to write</p> <pre><code>for i in graph: print(i) for j in graph[i]: var = input() graph[i][j] = var </code></pre> <p>?</p>
0
2016-10-14T11:42:15Z
[ "python", "python-3.x", "dictionary", "graph-theory" ]
Creating Weighted Directed Graph in Python based on User Input
40,040,304
<p>I need to create something like this to represent a directed weighted graph based on user input - </p> <pre><code>graph = { 'a': {'b': 1, 'c': 4}, 'b': {'c': 3, 'd': 2, 'e': 2}, 'c': {}, 'd': {'b': 1, 'c': 5}, 'e': {'d': -2} } </code></pre> <p>So far,</p> <pre><code>import pprint graph = {} values = {} v = int(input("Enter number of vertices: ")) print("Enter vertices(keys) : ") for i in range(v): graph.setdefault(input()) edges = {} for x in graph: edges.setdefault(x) for i in graph: graph[i] = edges print("Enter weights: ") for i in graph: print(i) for j in graph[i]: var = input() graph[i][j] = var pprint.pprint(graph) </code></pre> <p>I tried but for some reason, it is replacing the previously read weights with last read weights. Any solutions? </p>
0
2016-10-14T09:57:29Z
40,043,068
<pre><code>for i in graph: graph[i] = edges </code></pre> <p>You're assigning the same dict (<code>edges</code>) to each key of <code>graph</code>. Therefore, when you assign a value to any of them, you're assigning that value to <em>all</em> of them. It looks like what you actually want is <em>copies</em> of <code>edges</code>. In this case, since you haven't assigned any mutable values to <code>edges</code>, a shallow copy is sufficient:</p> <pre><code>for i in graph: graph[i] = edges.copy() </code></pre>
1
2016-10-14T12:23:20Z
[ "python", "python-3.x", "dictionary", "graph-theory" ]
Pip module installation issue
40,040,356
<p>I'm having difficulty installing modules using pip for python. The exact error message is:</p> <pre><code>Could not find a version that satisfies the requirement shapefile (from versions: DistributionNotFound: No matching distribution found for shapefile </code></pre> <p>When I type:</p> <pre><code>pip install -vvv shapefile </code></pre> <p>I get a 404 status code saying: </p> <pre><code>Could not fetch URL https://pypi.python.org/simple/shapefile/: 404 Client Error </code></pre> <p>I've browsed around and have seen there is a config file that allows you to change where pip installs modules from. However, I can't find this file in my /.pip folder.</p> <p>Does anyone know how I would go about fixing my pip configuration so that I can install packages?</p>
0
2016-10-14T09:59:53Z
40,040,415
<p>Here is a list that matches your module name: <a href="https://pypi.python.org/pypi?%3Aaction=search&amp;term=shapefile&amp;submit=search" rel="nofollow">PyPI search result for shapefile</a></p> <p>Maybe the module is called <code>pyshapefile</code> not just <code>shapefile</code>.</p> <pre><code>pip install pyshapefile </code></pre>
2
2016-10-14T10:02:34Z
[ "python", "pip" ]
get row and column names of n maximum values in dataframe
40,040,364
<p>For the dataframe </p> <pre><code>import pandas as pd df=pd.DataFrame({'col1':[1,2],'col2':[4,5]},index=['row1','row2']) print df col1 col2 row1 1 4 row2 2 5 </code></pre> <p>I want to get the row name and the col name of the 2 maximum values and the according maximum values, such that the resulting expression returns something like this: </p> <pre><code>[(row1,col2,4)(row2,col2,5)] </code></pre> <p>Whats the most concise way to do that in pandas?</p>
1
2016-10-14T10:00:08Z
40,040,613
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.stack.html" rel="nofollow"><code>stack</code></a> for creating <code>Series</code>, then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.nlargest.html" rel="nofollow"><code>Series.nlargest</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reset_index.html" rel="nofollow"><code>reset_index</code></a> and last create <code>tuples</code> by list comprehension:</p> <pre><code>print (df) col1 col2 row1 1 4 row2 2 5 df1 = df.stack().nlargest(2).reset_index() print (df1) level_0 level_1 0 0 row2 col2 5 1 row1 col2 4 tuples = [tuple(x) for x in df1.values] print (tuples) [('row2', 'col2', 5), ('row1', 'col2', 4)] </code></pre>
1
2016-10-14T10:12:50Z
[ "python", "pandas" ]
Print elements of ndarray in their storage order
40,040,379
<p>To check whether my assumptions on memory layout are correct, I'd sometimes like to print elements of an <code>ndarray</code> exactly in the storage order in memory.</p> <p>I know <code>flatten</code>, <code>ravel</code>, <code>flat</code>, <code>flatiter</code> but I'm still not sure which function will <strong>truly</strong> reflect the memory order?</p>
0
2016-10-14T10:00:54Z
40,046,499
<p>Probably <code>ravel</code> will suit your needs, if you use the <code>order='K'</code> option. From the <a href="http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.ravel.html" rel="nofollow">docs</a>:</p> <blockquote> <p>order : {‘C’,’F’, ‘A’, ‘K’}, optional</p> <p>[...] ‘K’ means to read the elements in the order they occur in memory, except for reversing the data when strides are negative.[...]</p> </blockquote> <p>If you just want to learn more about the memory layout of an array without printing all the elements, you can look at its <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.strides.html" rel="nofollow"><code>strides</code> attribute</a>.</p>
3
2016-10-14T15:10:44Z
[ "python", "numpy" ]
How to send messages with Elixir porcelain to a python script?
40,040,417
<p>I'm trying to learn how to do interop from Elixir with the <a href="https://github.com/alco/porcelain" rel="nofollow" title="porcelain module">porcelain module</a>.</p> <p>So I made this simple example:</p> <p>I have an Elixir function that looks like this:</p> <pre><code>defmodule PythonMessenger do alias Porcelain.Process, as: Proc alias Porcelain.Result def test_messages do proc = %Proc{pid: pid} = Porcelain.spawn_shell("python ./python_scripts/reply_to_elixir.py", in: :receive, out: {:send, self()}) Proc.send_input(proc, "Greetings from Elixir\n") data = receive do {^pid, :data, :out, data} -&gt; data end IO.inspect data Proc.send_input(proc, "Elixir: I heard you said \"#{data}\"\n") data = receive do {^pid, :data, data} -&gt; data end IO.inspect data Proc.send_input(proc, "Please quit\n") data = receive do {^pid, :data, data} -&gt; data end IO.inspect data end end </code></pre> <p>and a python script that looks like this:</p> <pre><code>import sys while 1: line = sys.stdin.readline() if "quit" in line: print("Quitting, bye for now") sys.exit() print(line) </code></pre> <p>but this does not work. The python script never exits. If a read just one line like:</p> <pre><code> line = sys.stdin.readline() </code></pre> <p>it works just fine.</p> <p>So whats the problem, any ideas?</p>
0
2016-10-14T10:02:54Z
40,040,746
<p>You need to pass <code>-u</code> to disable buffering in <code>sys.stdin.readline()</code>. You won't see this when running the program interactively, but you will see it when the program is spawned without a TTY. Because of the default buffering, the Python process was not printing anything for a short message like <code>"Greetings from Elixir\n"</code>, and because of the <code>receive</code> expression, the Elixir code was blocking forever, waiting for the Python process to print something.</p> <p>From <code>man python</code>:</p> <pre><code> -u Force stdin, stdout and stderr to be totally unbuffered. On systems where it matters, also put stdin, stdout and stderr in binary mode. Note that there is internal buffering in xread- lines(), readlines() and file-object iterators ("for line in sys.stdin") which is not influ- enced by this option. To work around this, you will want to use "sys.stdin.readline()" inside a "while 1:" loop. </code></pre> <p>You also had some mistakes in the 2nd and 3rd <code>receive</code> patterns. Here's the code that works for me:</p> <pre><code>defmodule PythonMessenger do alias Porcelain.Process, as: Proc alias Porcelain.Result def test_messages do proc = %Proc{pid: pid} = Porcelain.spawn_shell("python -u ./a.py", in: :receive, out: {:send, self()}) Proc.send_input(proc, "Greetings from Elixir\n") data = receive do {^pid, :data, :out, data} -&gt; data end IO.inspect data Proc.send_input(proc, "Elixir: I heard you said \"#{data}\"\n") data = receive do {^pid, :data, :out, data} -&gt; data end IO.inspect data Proc.send_input(proc, "Please quit\n") data = receive do {^pid, :data, :out, data} -&gt; data end IO.inspect data end end PythonMessenger.test_messages </code></pre> <p>Output:</p> <pre><code>"Greetings from Elixir\n\n" "Elixir: I heard you said \"Greetings from Elixir\n\n\n\n" "\"\n\n" </code></pre>
2
2016-10-14T10:18:34Z
[ "python", "elixir" ]
Can sentences be formed AFTER raw text is converted into nltk.Text?
40,040,423
<p>Usual way of converting file data into nltk.Text seems as follows:</p> <pre><code>f=open('my-file.txt','rU') raw=f.read() tokens = nltk.word_tokenize(raw) text = nltk.Text(tokens) </code></pre> <p>Now, 'text' (the nltk.Text object) is just a list of words. How can I get a list of sentences from it? Basically wish to split 'text' into list of sentences. How?</p>
0
2016-10-14T10:03:07Z
40,041,123
<p>you can use the 'sent_tokenize()' and 'line_tokenize()' methods on the raw text (depends if you want to split lines (\n), sentences (by '.', '?', etc) or both):</p> <pre><code>f=open('my-file.txt','rU') raw=f.read() # tokenize lines first and then by sentence markers sents = [nltk.tokenize.sent_tokenize(l) for l in nltk.tokenize.line_tokenize(raw)] tokens = nltk.word_tokenize(raw) text = nltk.Text(tokens) </code></pre> <p>an alternative would be to use just one of the options:</p> <pre><code>sents = nltk.tokenize.sent_tokenize(l) # tokenize only by sentence markers </code></pre> <p>or </p> <pre><code>sents = nltk.tokenize.line_tokenize(l) # tokenize only by line markers </code></pre> <p>and to answer your original question - you can't really tokenize sentences after converting a raw text to the nltk.Text() structure. Of course you can do some workaround to make it, but it will be far from pretty (and far from useful, assuming you have the original raw text).</p>
0
2016-10-14T10:37:10Z
[ "python", "nltk" ]
Scraping Pantip Forum using BeautifulSoup
40,040,427
<p>I'm trying to scrape some forum posts from <a href="http://pantip.com/tag/Isuzu" rel="nofollow">http://pantip.com/tag/Isuzu</a></p> <p>One such page is <a href="http://pantip.com/topic/35647305" rel="nofollow">http://pantip.com/topic/35647305</a></p> <p>I want to get each post text along with its author and timestamp into a csv file.</p> <p>I'm using Beautiful Soup, but admittedly I'm a complete beginner at python and web scraping. The code that I have right now gets the required fields, but only for the first post. I need information for all posts on that thread. I tried <em>soup.find_all()</em> and <em>soup.select()</em>, but I'm not getting the desired results.</p> <p>Here's the code I'm using:</p> <pre><code>from bs4 import BeautifulSoup import urllib2 print "Reading URL..." url = urllib2.urlopen("http://pantip.com/topic/35647305") content = url.read() soup = BeautifulSoup(content, "html.parser") print "Finding desired HTML..." table = soup.select("abbr.timeago") print "\nScraped HTML is:" print table text = BeautifulSoup(str(table).strip(),"html.parser").get_text().encode("utf-8").replace("\n", "") print "\nScraped text is:\n" + text </code></pre> <p>Any clues as to what I'm doing wrong would be deeply appreciated. Also, any suggestions as to how this could be done in a better, cleaner way are welcome.</p> <p>As mentioned, I'm a beginner, so please don't mind any stupid mistakes. :-)</p> <p>Thanks!</p>
1
2016-10-14T10:03:25Z
40,044,738
<p>The comments are rendered using an Ajax request:</p> <pre><code>import requests from bs4 import BeautifulSoup params = {"tid": "35647305", # in the url "type": "3"} with requests.Session() as s: s.headers.update({"User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.143 Safari/537.36", "X-Requested-With": "XMLHttpRequest"}) r = (s.get("http://pantip.com/forum/topic/render_comments", params=params)) data = r.json() # data["comments"] contains what you want </code></pre> <p>Which will give you all the data. So all you need is to pass the <em>tid</em> from each url and update the tid in the params dict.</p>
0
2016-10-14T13:44:13Z
[ "python", "html", "web-scraping", "beautifulsoup" ]
Using Python to create a (random) sample of n words from text files
40,040,453
<p>For my PhD project I am evaluating all existing Named Entity Recogition Taggers for Dutch. In order to check the precision and recall for those taggers I want to manually annotate all Named Entities in a random sample from my corpus. That manually annotated sample will function as the 'gold standard' to which I will compare the results of the different taggers. </p> <p>My corpus consists of 170 Dutch novels. I am writing a Python script to generate a random sample of a specific amount of words for each novel (which I will use to annotate afterwards). All novels will be stored in the same directory. The following script is meant to generate for each novel in that directory a random sample of n-lines:</p> <pre><code>import random import os import glob import sys import errno path = '/Users/roelsmeets/Desktop/libris_corpus_clean/*.txt' files = glob.glob(path) for text in files: try: with open(text, 'rt', encoding='utf-8') as f: # number of lines from txt file random_sample_input = random.sample(f.readlines(),100) except IOError as exc: # Do not fail if a directory is found, just ignore it. if exc.errno != errno.EISDIR: raise # This block of code writes the result of the previous to a new file random_sample_output = open("randomsample", "w", encoding='utf-8') random_sample_input = map(lambda x: x+"\n", random_sample_input) random_sample_output.writelines(random_sample_input) random_sample_output.close() </code></pre> <p>There are two problems with this code:</p> <ol> <li><p>Currently, I have put two novels (.txt files) in the directory. But the code only outputs a random sample for one of each novels. </p></li> <li><p>Currently, the code samples a random amount of LINES from each .txt file, but I prefer to generate a random amount of WORDS for each .txt file. Ideally, I would like to generate a sample of, say, the first or last 100 words of each of the 170 .txt-files. In that case, the sample won't be random at all; but thus far, I couldn't find a way to create a sample without using the random library.</p></li> </ol> <p>Could anyone give a suggestion how to solve both problems? I am still new to Python and programming in general (I am a literary scholar), so I would be pleased to learn different approaches. Many thanks in advance!</p>
3
2016-10-14T10:05:02Z
40,040,701
<p>You just have to split your lines into words, store them somewhere, and then, after having read all of your files and stored their words, pick 100 with <code>random.sample</code>. It it what I did in the code below. However, I am not quite sure if it is able to deal with 170 novels, since it will likely result in a lot of memory usage.</p> <pre><code>import random import os import glob import sys import errno path = '/Users/roelsmeets/Desktop/libris_corpus_clean/*.txt' files = glob.glob(path) words = [] for text in files: try: with open(text, 'rt', encoding='utf-8') as f: # number of lines from txt file for line in f: for word in line.split(): words.append(word) except IOError as exc: # Do not fail if a directory is found, just ignore it. if exc.errno != errno.EISDIR: raise random_sample_input = random.sample(words, 100) # This block of code writes the result of the previous to a new file random_sample_output = open("randomsample", "w", encoding='utf-8') random_sample_input = map(lambda x: x+"\n", random_sample_input) random_sample_output.writelines(random_sample_input) random_sample_output.close() </code></pre> <p>In the above code, the more words a novel has, the more likely is to be represented in the output sample. That may or may not be the desired behaviour. If you want each novel to have the same ponderation, you can select, let's say, 100 words from it to add in the <code>words</code> variable, and then select 100 hundred words from there at the end. It will also have the side effect of using a lot less memory, since only one novel will be stored at a time.</p> <pre><code>import random import os import glob import sys import errno path = '/Users/roelsmeets/Desktop/libris_corpus_clean/*.txt' files = glob.glob(path) words = [] for text in files: try: novel = [] with open(text, 'rt', encoding='utf-8') as f: # number of lines from txt file for line in f: for word in line.split(): novel.append(word) words.append(random.sample(novel, 100)) except IOError as exc: # Do not fail if a directory is found, just ignore it. if exc.errno != errno.EISDIR: raise random_sample_input = random.sample(words, 100) # This block of code writes the result of the previous to a new file random_sample_output = open("randomsample", "w", encoding='utf-8') random_sample_input = map(lambda x: x+"\n", random_sample_input) random_sample_output.writelines(random_sample_input) random_sample_output.close() </code></pre> <p>Third version, this one will deal with sentences instead of words, and keep the punctuation. Also, each book has the same "weight" on the final sentences kept, regardless of its size. Keep in mind that the sentence detection is done by an algorithm that is quite clever, but not infallible.</p> <pre><code>import random import os import glob import sys import errno import nltk.data path = '/home/clement/Documents/randomPythonScripts/data/*.txt' files = glob.glob(path) sentence_detector = nltk.data.load('tokenizers/punkt/dutch.pickle') listOfSentences = [] for text in files: try: with open(text, 'rt', encoding='utf-8') as f: fullText = f.read() listOfSentences += [x.replace("\n", " ").replace(" "," ").strip() for x in random.sample(sentence_detector.tokenize(fullText), 30)] except IOError as exc: # Do not fail if a directory is found, just ignore it. 
if exc.errno != errno.EISDIR: raise random_sample_input = random.sample(listOfSentences, 15) print(random_sample_input) # This block of code writes the result of the previous to a new file random_sample_output = open("randomsample", "w", encoding='utf-8') random_sample_input = map(lambda x: x+"\n", random_sample_input) random_sample_output.writelines(random_sample_input) random_sample_output.close() </code></pre>
2
2016-10-14T10:16:40Z
[ "python", "text", "random", "nlp", "named-entity-recognition" ]
Using Python to create a (random) sample of n words from text files
40,040,453
<p>For my PhD project I am evaluating all existing Named Entity Recogition Taggers for Dutch. In order to check the precision and recall for those taggers I want to manually annotate all Named Entities in a random sample from my corpus. That manually annotated sample will function as the 'gold standard' to which I will compare the results of the different taggers. </p> <p>My corpus consists of 170 Dutch novels. I am writing a Python script to generate a random sample of a specific amount of words for each novel (which I will use to annotate afterwards). All novels will be stored in the same directory. The following script is meant to generate for each novel in that directory a random sample of n-lines:</p> <pre><code>import random import os import glob import sys import errno path = '/Users/roelsmeets/Desktop/libris_corpus_clean/*.txt' files = glob.glob(path) for text in files: try: with open(text, 'rt', encoding='utf-8') as f: # number of lines from txt file random_sample_input = random.sample(f.readlines(),100) except IOError as exc: # Do not fail if a directory is found, just ignore it. if exc.errno != errno.EISDIR: raise # This block of code writes the result of the previous to a new file random_sample_output = open("randomsample", "w", encoding='utf-8') random_sample_input = map(lambda x: x+"\n", random_sample_input) random_sample_output.writelines(random_sample_input) random_sample_output.close() </code></pre> <p>There are two problems with this code:</p> <ol> <li><p>Currently, I have put two novels (.txt files) in the directory. But the code only outputs a random sample for one of each novels. </p></li> <li><p>Currently, the code samples a random amount of LINES from each .txt file, but I prefer to generate a random amount of WORDS for each .txt file. Ideally, I would like to generate a sample of, say, the first or last 100 words of each of the 170 .txt-files. In that case, the sample won't be random at all; but thus far, I couldn't find a way to create a sample without using the random library.</p></li> </ol> <p>Could anyone give a suggestion how to solve both problems? I am still new to Python and programming in general (I am a literary scholar), so I would be pleased to learn different approaches. Many thanks in advance!</p>
3
2016-10-14T10:05:02Z
40,040,763
<p>A few suggestions:</p> <p>Take random sentences, not words or lines. NE taggers will work much better if input is grammatical sentences. So you need to use a sentence splitter.</p> <p>When you iterate over the files, <code>random_sample_input</code> contains lines from only the last file. You should move the block of code that writes the selected content to a file inside the for-loop. You can then write the selected sentences to either one file or into separate files. E.g.:</p> <pre><code>out = open("selected-sentences.txt", "w") for text in files: try: with open(text, 'rt', encoding='utf-8') as f: sentences = sentence_splitter.split(f.read()) for sentence in random.sample(sentences, 100): print &gt;&gt; out, sentence except IOError as exc: # Do not fail if a directory is found, just ignore it. if exc.errno != errno.EISDIR: raise out.close() </code></pre> <p>[edit] Here is how you should be able to use an NLTK sentence splitter:</p> <pre><code>import nltk.data sentence_splitter = nltk.data.load("tokenizers/punkt/dutch.pickle") text = "Dit is de eerste zin. Dit is de tweede zin." print sentence_splitter.tokenize(text) </code></pre> <p>Prints:</p> <pre><code>["Dit is de eerste zin.", "Dit is de tweede zin."] </code></pre> <p>Note you'd need to download the Dutch tokenizer first, using <code>nltk.download()</code> from the interactive console.</p>
3
2016-10-14T10:19:26Z
[ "python", "text", "random", "nlp", "named-entity-recognition" ]
Using Python to create a (random) sample of n words from text files
40,040,453
<p>For my PhD project I am evaluating all existing Named Entity Recogition Taggers for Dutch. In order to check the precision and recall for those taggers I want to manually annotate all Named Entities in a random sample from my corpus. That manually annotated sample will function as the 'gold standard' to which I will compare the results of the different taggers. </p> <p>My corpus consists of 170 Dutch novels. I am writing a Python script to generate a random sample of a specific amount of words for each novel (which I will use to annotate afterwards). All novels will be stored in the same directory. The following script is meant to generate for each novel in that directory a random sample of n-lines:</p> <pre><code>import random import os import glob import sys import errno path = '/Users/roelsmeets/Desktop/libris_corpus_clean/*.txt' files = glob.glob(path) for text in files: try: with open(text, 'rt', encoding='utf-8') as f: # number of lines from txt file random_sample_input = random.sample(f.readlines(),100) except IOError as exc: # Do not fail if a directory is found, just ignore it. if exc.errno != errno.EISDIR: raise # This block of code writes the result of the previous to a new file random_sample_output = open("randomsample", "w", encoding='utf-8') random_sample_input = map(lambda x: x+"\n", random_sample_input) random_sample_output.writelines(random_sample_input) random_sample_output.close() </code></pre> <p>There are two problems with this code:</p> <ol> <li><p>Currently, I have put two novels (.txt files) in the directory. But the code only outputs a random sample for one of each novels. </p></li> <li><p>Currently, the code samples a random amount of LINES from each .txt file, but I prefer to generate a random amount of WORDS for each .txt file. Ideally, I would like to generate a sample of, say, the first or last 100 words of each of the 170 .txt-files. In that case, the sample won't be random at all; but thus far, I couldn't find a way to create a sample without using the random library.</p></li> </ol> <p>Could anyone give a suggestion how to solve both problems? I am still new to Python and programming in general (I am a literary scholar), so I would be pleased to learn different approaches. Many thanks in advance!</p>
3
2016-10-14T10:05:02Z
40,040,953
<p>This solves both problems:</p> <pre><code>import random import os import glob import sys import errno path = '/Users/roelsmeets/Desktop/libris_corpus_clean/*.txt' files = glob.glob(path) with open("randomsample", "w", encoding='utf-8') as random_sample_output: for text in files: try: with open(text, 'rt', encoding='utf-8') as f: # number of lines from txt file random_sample_input = random.sample(f.read().split(), 10) except IOError as exc: # Do not fail if a directory is found, just ignore it. if exc.errno != errno.EISDIR: raise # This block of code writes the result of the previous to a new file random_sample_input = map(lambda x: x + "\n", random_sample_input) random_sample_output.writelines(random_sample_input) </code></pre>
1
2016-10-14T10:28:43Z
[ "python", "text", "random", "nlp", "named-entity-recognition" ]
Error while Adding buttons dynamically to Screen in ScreenManager Kivy
40,040,526
<p>I am trying to add buttons dynamicaly to screen. I have the following error when I run the app. Please help me resolve the issue.</p> <blockquote> <p>Traceback (most recent call last): File "main.py", line 174, in screenManager.add_widget( HomeScreen( name = 'homeScreen' ) ) File "main.py", line 162, in <strong>init</strong> self.add_widget(btn) File "/Applications/Kivy.app/Contents/Resources/kivy/kivy/uix/floatlayout.py", line 111, in add_widget pos=self._trigger_layout, AttributeError: 'HomeScreen' object has no attribute '_trigger_layout'</p> </blockquote> <p>Here is my main.py</p> <pre><code>class HomeScreen(Screen): def __init__(self, **kwargs): for i in range(80): btn = Button(text=str(i), size=(90, 90), size_hint=(None, None)) self.add_widget(btn) # Screen Manager screenManager = ScreenManager( transition = FadeTransition() ) # Add all screens to screen manager #screenManager.add_widget( UsernameScreen( name = 'usernameScreen' ) ) #screenManager.add_widget( PasswordScreen( name = 'passwordScreen' ) ) #screenManager.add_widget( LevelTwoScreen( name = 'levelTwoScreen' ) ) #screenManager.add_widget( LevelTwoScreen( name = 'levelThreeScreen' ) ) screenManager.add_widget( HomeScreen( name = 'homeScreen' ) ) class ThreeLevelAuthApp(App): def build(self): return screenManager if __name__ == '__main__': ThreeLevelAuthApp().run() </code></pre> <p>kivy file</p> <pre><code>&lt;HomeScreen&gt;: ScrollView: size_hint: None, None size: 400, 400 pos_hint: { 'center_x': 0.5,'center_y': 0.5 } do_scroll_x: False GridLayout: cols: 6 padding: 20 spacing: 20 size_hint: None, None width: 400 </code></pre>
0
2016-10-14T10:08:56Z
40,053,198
<p>Let's start with <code>__init__</code>:</p> <pre><code>class HomeScreen(Screen): def __init__(self, **kwargs): for i in range(80): btn = Button(text=str(i), size=(90, 90), size_hint=(None, None)) self.add_widget(btn) </code></pre> <p>although this looks fine and is called when you make an instance, there's one basic thing missing - <a class='doc-link' href="http://stackoverflow.com/documentation/python/419/classes/1399/basic-inheritance#t=201610142247298133988"><code>super()</code></a>. You need <code>super()</code> to initialize the <code>Screen</code> first, so that it has all the required variables and methods that make it a class with real behavior and therefore makes you able to actually <em>inherit</em> the behavior.</p> <p>Note that the <code>Screen</code> itself is a <a href="https://kivy.org/docs/api-kivy.uix.relativelayout.html" rel="nofollow"><code>RelativeLayout</code></a> and you'll need to handle additional stuff such as positioning and/or sizing if you won't use a layout that does that for you.</p> <pre><code>import random class HomeScreen(Screen): def __init__(self, **kwargs): super(HomeScreen, self).__init__(**kwargs) for i in range(80): btn = Button(text=str(i), size=(90, 90), size_hint=(None, None), pos=[random.randint(0,500), random.randint(0,500)]) self.add_widget(btn) </code></pre>
0
2016-10-14T23:04:43Z
[ "python", "kivy" ]
Automating file upload using Selenium and pywinauto
40,040,564
<p>I am trying to automate a file uplad in a form. The form works as follows: - some data insert - click on add attachment button - windows dialogue window appears - select the file - open it</p> <p><strong>I am using python, Selenium webdriver and pywinauto module.</strong></p> <p>Similar approach was described <a href="http://stackoverflow.com/questions/21236183/upload-a-file-using-webdriver-pywinauto">here</a> but it only deals with file name and not with path to it.</p> <p>Sending keys to the element with Selenium is not possible because there is no textbox that would contain the path. I have tried using AutoIT with the followin code:</p> <pre><code>$hWnd = WinWaitActive("[REGEXPTITLE:Otev.*|Op.*)]") If WinGetTitle($hWnd) = "Open" then Send(".\Gandalf") Send("{ENTER}") Else Send(".\Gandalf") Send("{ENTER}") EndIf </code></pre> <p>The code basically waits for window with title Open or Otevrit (in CZ) to appear and then do the magic. This code is compiled into an .exe and ran at appropriate moment.</p> <p>The code works fine and does the upload, but I <strong>can not alter the path to file</strong>. This is neccessary if I want run my code on any computer. The mobility of the code is neccessary because it is a part of a desktop application for running Selenium tests. </p> <p>The window which I am trying to handle is:</p> <p><a href="https://i.stack.imgur.com/GHmcW.png" rel="nofollow"><img src="https://i.stack.imgur.com/GHmcW.png" alt="enter image description here"></a></p> <p>Basically I would like to input my path string and open the file location. After that I would input the file name and open it (perform the upload). Currently my code looks like:</p> <pre><code> # click on upload file button: WebDriverWait(self.driver, 10).until(EC.presence_of_element_located((By.XPATH, "//*[@class=\"qq-upload- button-selector qq-upload-button\"]"))).click() # wait two seconds for dialog to appear: time.sleep(2) # start the upload dialogWindow = pywinauto.application.Application() windowHandle = pywinauto.findwindows.find_windows(title=u'Open', class_name='#32770')[0] window = dialogWindow.window_(handle=windowHandle) # this is the element that I would like to access (not sure) toolbar = window.Children()[41] # send keys: toolbar.TypeKeys("path/to/the/folder/") # insert file name: window.Edit.SetText("Gandalf.jpg") # click on open: window["Open"].Click() </code></pre> <p>I am not sure where my problem is. Input the file name is no problem and I can do it with:</p> <pre><code>window.Edit.SetText("Gandalf.jpg") </code></pre> <p>But For some reason I can't do the same with the path element. <strong>I have tried setting focus on it and clicking</strong> but the code fails.</p> <p>Thank you for help.</p> <p><strong>BUTTON HTML:</strong></p> <pre><code>&lt;div class="qq-upload-button-selector qq-upload-button" style="position: relative; overflow: hidden; direction: ltr;"&gt; &lt;div&gt;UPLOAD FILE&lt;/div&gt; &lt;input qq-button-id="8032e5d2-0f73-4b7b-b64a-e125fd2a9aaf" type="file" name="qqfile" style="position: absolute; right: 0px; top: 0px; font-family: Arial; font-size: 118px; margin: 0px; padding: 0px; cursor: pointer; opacity: 0; height: 100%;"&gt;&lt;/div&gt; </code></pre>
3
2016-10-14T10:10:34Z
40,041,063
<p>Try following code and let me know in case of any issues:</p> <pre><code>WebDriverWait(self.driver, 10).until(EC.element_to_be_clickable((By.XPATH, '//input[@type="file"][@name="qqfile"]'))).send_keys("/path/to/Gandalf.jpg") </code></pre> <p>P.S. You should replace string <code>"/path/to/Gandalf.jpg"</code> with actual path to file</p>
2
2016-10-14T10:34:10Z
[ "python", "selenium", "file-upload", "selenium-webdriver", "pywinauto" ]
Probabilities are ignored in numpy
40,040,569
<p>I'm trying to use the numpy library to sample a character from a distribution, but it seems to ignore the probabilities I give in input. I have a probability array, which just to test I set to </p> <pre><code>vec_p=[0,0,1,0,0] </code></pre> <p>and a character array </p> <pre><code>vec_c=[a,b,c,d,e] </code></pre> <p>If I do</p> <pre><code>numpy.random.choice(vec_c,10,vec_p) </code></pre> <p>I would expect to get </p> <pre><code>cccccccccc </code></pre> <p>since the other probabilities are all zero, but it just gives me random values ignoring the vec_p array. Am I doing something wrong?</p>
1
2016-10-14T10:10:50Z
40,040,671
<p>Passing the parameters as keyword arguments gives the correct results:</p> <pre><code>&gt;&gt;&gt; import numpy as np &gt;&gt;&gt; vec_p = [0,0,1,0,0] &gt;&gt;&gt; num = np.arange(5) &gt;&gt;&gt; np.random.choice(num, size=10, p=vec_p) array([2, 2, 2, 2, 2, 2, 2, 2, 2, 2]) </code></pre>
2
2016-10-14T10:15:30Z
[ "python", "arrays", "numpy" ]
I am deploying Django App on heroku, after successful deployment , I am getting operational Error
40,040,670
<pre><code>OperationalError at /admin/login/ could not connect to server: No such file or directory Is the server running locally and accepting connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"? Request Method: POST Django Version: 1.10b1 Exception Type: OperationalError Exception Value: could not connect to server: No such file or directory Is the server running locally and accepting connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"? Exception Location: /app/.heroku/python/lib/python2.7/site-packages/psycopg2/__init__.py in connect, line 164 Python Executable: /app/.heroku/python/bin/python Python Version: 2.7.12 Python Path: ['/app', '/app/.heroku/python/bin', '/app', '/app/.heroku/python/lib/python27.zip', '/app/.heroku/python/lib/python2.7', '/app/.heroku/python/lib/python2.7/plat-linux2', '/app/.heroku/python/lib/python2.7/lib-tk', '/app/.heroku/python/lib/python2.7/lib-old', '/app/.heroku/python/lib/python2.7/lib-dynload', '/app/.heroku/python/lib/python2.7/site-packages', '/app/.heroku/python/lib/python2.7/site-packages/setuptools-25.2.0-py2.7.egg', '/app/.heroku/python/lib/python2.7/site-packages/pip-8.1.2-py2.7.egg'] </code></pre>
-1
2016-10-14T10:15:27Z
40,044,438
<p>There is a problem in your database settings. You have to modify database settings to be able to deploy to Heroku. I have never used it, but this page explains how to configure database for Django: <a href="https://devcenter.heroku.com/articles/django-app-configuration" rel="nofollow">https://devcenter.heroku.com/articles/django-app-configuration</a></p>
1
2016-10-14T13:30:43Z
[ "python", "django", "python-2.7", "heroku" ]
set_cookie() missing 1 required positional argument: 'self'
40,040,726
<p>In Django, Im trying to render a template and send a cookie at the same time with this code:</p> <pre><code>template = loader.get_template('list.html') context = {'documents': documents, 'form': form} if ('user') not in request.COOKIES: id_user = ''.join(random.SystemRandom().choice(string.ascii_uppercase + string.ascii_lowercase + string.digits) for _ in range(30)) HttpResponse.set_cookie(key='user', value=id_user, max_age=63072000) return HttpResponse(template.render(context, request)) </code></pre> <p>But I get the Error:</p> <blockquote> <p>TypeError at /myapp/list/</p> <p>set_cookie() missing 1 required positional argument: 'self'</p> </blockquote> <p>I have checked the <a href="https://docs.djangoproject.com/en/1.10/ref/request-response/" rel="nofollow">documentation</a>, but I dont find the solution. Help me please :)</p>
1
2016-10-14T10:17:42Z
40,040,791
<p>Close - HttpResponse is the class, not the instance of the class. The last line is creating one and returning it - so your earlier line needs to act on that instance...</p> <p>try (untested code):</p> <pre><code>myResponse = HttpResponse(template.render(context, request)) myResponse.set_cookie(...) return myResponse </code></pre>
3
2016-10-14T10:20:39Z
[ "python", "django", "httpcookie" ]
Manual progress bar python
40,040,908
<p>Im trying to make a code so that it displays all numbers from 1 to 100 to show as its loading something.</p> <pre><code>for i in range(101): self.new = Label(self.label_progress, text=i) time.sleep(1) self.new.place(in_= self.label_progress) if i == 100: self.new1=Label(self.label_progress, text="the download is complete") self.new1.place(in_=self.label_progress, x=50) </code></pre> <p>but it seems like it doesnt want to show each number untill the loop is complete, at the end it just shows a 100. Any suggestions on how to fix it?</p>
0
2016-10-14T10:26:20Z
40,041,597
<p><code>tkinter</code> has <code>mainloop()</code> which runs all the time and does many thing - ie. it updates data in widget, redraws widgets, executes your function. When <code>mainloop()</code> executes your function then it can't updates/redraws widgets till your function stop running. You can use <code>root.update()</code> in your function to force tkinter to update/readraw widgets.</p> <p>Or you can use </p> <pre><code>root.after(miliseconds, function_name) </code></pre> <p>to periodically execute function which will update <code>Label</code> . And between executions <code>tkinter</code> will have time to update/redraw widgets.</p> <p>Examples which use <code>after</code> to update time in <code>Label</code>:</p> <p><a href="https://github.com/furas/my-python-codes/tree/master/tkinter/timer-using-after" rel="nofollow">https://github.com/furas/my-python-codes/tree/master/tkinter/timer-using-after</a></p> <hr> <p><strong>BTW:</strong> <code>tkinter</code> has <code>ttk.Progressbar</code></p> <p>Example code: <a href="http://stackoverflow.com/a/24770800/1832058">http://stackoverflow.com/a/24770800/1832058</a></p> <p><img src="http://i.stack.imgur.com/WNzEZ.png" alt="enter image description here"></p>
2
2016-10-14T11:03:13Z
[ "python", "loops", "tkinter" ]
A class instance initializing with previously initialized attributes
40,040,915
<p>I have a slight complication with my code. I want the pirate attribute to take the value True if the other two attributes are higher than some number when summed up and multiplied by some factor.</p> <p>For instance, maybe I want the pirate attribute to be True only if social*0.6 + fixed is greater than 5, and false otherwise.</p> <pre><code>import random class consumer(object): """Initialize consumers""" def __init__(self, fixed, social,pirate): self.social = social self.fixed = fixed self.pirate = pirate """Create an array of people""" for x in range(1,people): consumerlist.append(consumer(random.uniform(0,10),random.uniform(0,10),True)) pass </code></pre>
0
2016-10-14T10:26:38Z
40,041,005
<p>You need to store the random values for <code>fixed</code> and <code>social</code> and then use them for the comparison that generates <code>pirate</code>:</p> <pre><code>for x in range(1,people): fixed = random.uniform(0,10) social = random.uniform(0,10) pirate = (social * 0.6 + fixed) &gt; 5 # boolean consumerlist.append(consumer(fixed, social, pirate)) </code></pre> <hr> <p>That pass in your for is redundant</p>
0
2016-10-14T10:31:24Z
[ "python" ]
A class instance initializing with previously initialized attributes
40,040,915
<p>I have a slight complication with my code. I want the pirate attribute to take the value True if the other two attributes are higher than some number when summed up and multiplied by some factor.</p> <p>For instance, maybe I want the pirate attribute to be True only if social*0.6 + fixed is greater than 5, and false otherwise.</p> <pre><code>import random class consumer(object): """Initialize consumers""" def __init__(self, fixed, social,pirate): self.social = social self.fixed = fixed self.pirate = pirate """Create an array of people""" for x in range(1,people): consumerlist.append(consumer(random.uniform(0,10),random.uniform(0,10),True)) pass </code></pre>
0
2016-10-14T10:26:38Z
40,041,204
<p>In response to Moses answer: Using a calculated property is safer than calculating the pirate value at initialization only. When decorating a method with the @property attribute, it acts as a property (you don't have to use brackets as is the case for methods), which is always up to date when the social member is changed afterwards.</p> <pre><code>class Consumer(object): def __init__(self, fixed, social): self.fixed = fixed self.social = social @property def pirate(self): return self.social * 0.6 + self.fixed &gt; 5 consumer1 = Consumer(1, 12) print("Value of pirate attribute: " + str(consumer1.pirate)) </code></pre>
1
2016-10-14T10:41:30Z
[ "python" ]
python script is not running without sudo - why?
40,040,961
<p>This is my python script which downloads the most recent image from my S3 bucket. When I run this script using <code>sudo python script.py</code> it does as expected, but not when I run it as <code>python script.py</code>. In this case, the script finishes cleanly without exceptions or process lockup, but there is no image file.</p> <p>Why does this happen? Is there something I could do at boto's library end or any other thing?</p> <pre><code>import boto import logging def s3_download(): bucket_name = 'cloudadic' conn = boto.connect_s3('XXXXXX', 'YYYYYYY') bucket = conn.get_bucket(bucket_name) for key in bucket.list('ocr/uploads'): try: l = [(k.last_modified, k) for k in bucket] key = sorted(l, cmp=lambda x, y: cmp(x[0], y[0]))[-1][1] res = key.get_contents_to_filename(key.name) except: logging.info(key.name + ":" + "FAILED") if __name__ == "__main__": s3_download() </code></pre>
0
2016-10-14T10:29:17Z
40,041,548
<p>Presumably the problem is that you try to store things somewhere your user doesn't have permission to. The second issue is that your script is hiding the error. The <code>except</code> block completely ignores what sort of exceptions occur (and of course consumes them so you never see them), and uses <code>logging.info</code> which by default doesn't show; that should probably be at least a warning and would be better if it showed something about what went wrong. By the way, odds are you shouldn't be posting S3 authentication keys here. </p>
1
2016-10-14T10:59:57Z
[ "python", "boto", "sudo" ]
python script is not running without sudo - why?
40,040,961
<p>This is my python script which downloads the most recent image from my S3 bucket. When I run this script using <code>sudo python script.py</code> it does as expected, but not when I run it as <code>python script.py</code>. In this case, the script finishes cleanly without exceptions or process lockup, but there is no image file.</p> <p>Why does this happen? Is there something I could do at boto's library end or any other thing?</p> <pre><code>import boto import logging def s3_download(): bucket_name = 'cloudadic' conn = boto.connect_s3('XXXXXX', 'YYYYYYY') bucket = conn.get_bucket(bucket_name) for key in bucket.list('ocr/uploads'): try: l = [(k.last_modified, k) for k in bucket] key = sorted(l, cmp=lambda x, y: cmp(x[0], y[0]))[-1][1] res = key.get_contents_to_filename(key.name) except: logging.info(key.name + ":" + "FAILED") if __name__ == "__main__": s3_download() </code></pre>
0
2016-10-14T10:29:17Z
40,043,253
<p>Always install <a href="https://virtualenv.pypa.io/en/stable/" rel="nofollow">virtual environment</a> before you start development in python. </p> <p>The issue you face is typical python newbie problem : use <code>sudo</code> to install all the pypi package. </p> <p>When you do <code>sudo pip install boto3</code>, it will install pypi package to system workspace and only accessible by <code>sudo</code>. In such case, <code>sudo python script.py</code> will works, because it has the rights to access the package.</p> <p>To resolve this issues and isolation of development environment (so you don't pollute different project with differnt pypi package), python developer will install python virtual environment (from above link), then use <code>mkvirtualenv</code> to create your project workspace, run <code>pip install</code> and <code>python setup.py install</code> to install required package to the environment, then you can run python without the <code>sudo python</code>.</p> <p>Python virtualenv is also deployed inside production environment for the same reason.</p> <p><strong>Important Note</strong> : avoid boto and boto2. AWS no longer support them nor with bug fixing(AWS is not officially support boto2, use it at your own risk). Switch to boto3. </p> <hr> <p>For the exceptional handling issues, @Yann Vernier has mentioned it. And exception error will not logged by <code>logging.info</code>. You may try with <code>logging.debug</code> or simple use <code>raise</code> to raise the actual exception error.</p>
0
2016-10-14T12:32:18Z
[ "python", "boto", "sudo" ]
python script is not running without sudo - why?
40,040,961
<p>This is my Python script, which downloads the most recent image from my S3 bucket. When I run it using <code>sudo python script.py</code> it works as expected, but not when I run it as <code>python script.py</code>. In this case, the script finishes cleanly without exceptions or process lockup, but there is no image file.</p> <p>Why does this happen? Is there something I could do at boto's library end, or anything else?</p> <pre><code>import boto import logging def s3_download(): bucket_name = 'cloudadic' conn = boto.connect_s3('XXXXXX', 'YYYYYYY') bucket = conn.get_bucket(bucket_name) for key in bucket.list('ocr/uploads'): try: l = [(k.last_modified, k) for k in bucket] key = sorted(l, cmp=lambda x, y: cmp(x[0], y[0]))[-1][1] res = key.get_contents_to_filename(key.name) except: logging.info(key.name + ":" + "FAILED") if __name__ == "__main__": s3_download() </code></pre>
0
2016-10-14T10:29:17Z
40,044,137
<p>As @Nearoo suggested in the comments, use <code>except Exception as inst</code> to catch exceptions.</p> <p>Catching the exception like this</p> <pre><code>except Exception as inst: print(type(inst)) print(inst.args) print(inst) </code></pre> <p>should surface this error if the script is run with Python 3.x. If you ran your script with Python 2.7, this error wouldn't occur.</p> <p>You probably have multiple versions of Python on your system, which would explain the difference in behavior between <code>python script.py</code> and <code>sudo python script.py</code>; for the same reason, @mootmoot's answer suggests using <code>virtualenv</code>.</p> <p><code>'cmp' is an invalid keyword argument for this function</code>.</p> <p>Now if you google this error, you will find that <code>cmp</code> has been removed in Python 3.x, and the suggestion is to use <code>key</code> instead.</p> <p>Add this import</p> <pre><code>from functools import cmp_to_key </code></pre> <p>and replace with these</p> <pre><code>key2 = sorted(l, key = cmp_to_key(lambda x,y: (x[0] &gt; y[0]) - (x[0] &lt; y[0])))[-1][1] res = key2.get_contents_to_filename(key2.name) </code></pre> <p><code>(x[0] &gt; y[0]) - (x[0] &lt; y[0])</code> is an alternative to <code>cmp(x[0], y[0])</code>.</p> <p><code>cmp</code> is replaced by <code>key</code>, with <code>cmp_to_key</code> coming from the <code>functools</code> library.</p> <p><a href="http://stackoverflow.com/questions/20202418/why-is-the-cmp-parameter-removed-from-sort-sorted-in-python3-0">Check this out</a></p>
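<p>Since the comparison here only orders the tuples by their first element (the last-modified timestamp), a simpler sketch, assuming <code>l</code> is the list of <code>(last_modified, key)</code> tuples from the question, skips <code>cmp_to_key</code> entirely and works on both Python 2 and 3:</p> <pre><code>l = [(k.last_modified, k) for k in bucket]
key2 = sorted(l, key=lambda x: x[0])[-1][1]   # newest object ends up last after sorting
res = key2.get_contents_to_filename(key2.name)
</code></pre>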
1
2016-10-14T13:15:21Z
[ "python", "boto", "sudo" ]
How to find instances that DON'T have a tag using Boto3
40,040,985
<p>I'm trying to find instances that DON'T have a certain tag.</p> <p>For instance, I want all instances that don't have the Foo tag. I also want instances whose Foo tag value is not equal to Bar.</p> <p>This is what I'm doing now:</p> <pre><code>import boto3 def aws_get_instances_by_name(name): """Get EC2 instances by name""" ec2 = boto3.resource('ec2') instance_iterator = ec2.instances.filter( Filters=[ { 'Name': 'tag:Name', 'Values': [ name, ] }, { 'Name': 'tag:Foo', 'Values': [ ] }, ] ) return instance_iterator </code></pre> <p>This is returning nothing.</p> <p>What is the correct way?</p>
1
2016-10-14T10:30:22Z
40,054,287
<p>Here's some code that will display the <code>instance_id</code> for instances <strong>without</strong> a particular tag (note that <code>i.tags</code> is <code>None</code> for instances with no tags at all, so it needs a guard):</p> <pre><code>import boto3

instances = [i for i in boto3.resource('ec2', region_name='ap-southeast-2').instances.all()]

# Print instance_id of instances that do not have a Tag of Key='Foo'
for i in instances:
    if 'Foo' not in [t['Key'] for t in (i.tags or [])]:
        print i.instance_id
</code></pre>
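<p>For the second requirement from the question (instances whose Foo tag is not equal to Bar), a sketch along the same lines, reusing the <code>instances</code> list from above:</p> <pre><code># Build a plain dict of tags per instance; instances missing the Foo tag
# also qualify, since tags.get('Foo') returns None for them
for i in instances:
    tags = {t['Key']: t['Value'] for t in (i.tags or [])}
    if tags.get('Foo') != 'Bar':
        print i.instance_id
</code></pre>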
0
2016-10-15T02:17:20Z
[ "python", "amazon-ec2", "boto3" ]
Optimize Python code. Optimize Pandas apply. Numba slower than pure python
40,041,076
<p>I'm facing a huge bottleneck where I apply a method() to each row in a Pandas DataFrame. The execution time is on the order of 15-20 minutes.</p> <p>Now, the code I use is as follows:</p> <pre><code>def FillTarget(self, df): backup = df.copy() target = list(set(df['ACTL_CNTRS_BY_DAY'])) df = df[~df['ACTL_CNTRS_BY_DAY'].isnull()] tmp = df[df['ACTL_CNTRS_BY_DAY'].isin(target)] tmp = tmp[['APPT_SCHD_ARVL_D', 'ACTL_CNTRS_BY_DAY']] tmp.drop_duplicates(subset='APPT_SCHD_ARVL_D', inplace=True) t1 = dt.datetime.now() backup['ACTL_CNTRS_BY_DAY'] = backup.apply(self.ImputeTargetAcrossSameDate,args=(tmp, ), axis=1) # backup['ACTL_CNTRS_BY_DAY'] = self.compute_(tmp, backup) t2 = dt.datetime.now() print("Time for the bottleneck is ", (t2-t1).microseconds) print("step f") return backup </code></pre> <p>And the ImputeTargetAcrossSameDate() method is as follows:</p> <pre><code>def ImputeTargetAcrossSameDate(self, x, tmp): ret = tmp[tmp['APPT_SCHD_ARVL_D'] == x['APPT_SCHD_ARVL_D']] ret = ret['ACTL_CNTRS_BY_DAY'] if ret.empty: r = 0 else: r = ret.values r = r[0] return r </code></pre> <p>Is there any way to optimize this apply() call to reduce the overall time? Note that I'll have to run this process on a DataFrame which stores data for 2 years. Running it on 15 days of data took 15-20 minutes, while for 1 month of data it was still executing after more than 45 minutes, at which point I had to force-stop the process; on the full dataset this will be a huge problem.</p> <p>Also note that I came across a few examples at <a href="http://pandas.pydata.org/pandas-docs/stable/enhancingperf.html" rel="nofollow">http://pandas.pydata.org/pandas-docs/stable/enhancingperf.html</a> introducing numba to optimize code, and the following is my numba implementation:</p> <p>Statement to call the numba method:</p> <pre><code>backup['ACTL_CNTRS_BY_DAY'] = self.compute_(tmp, backup) </code></pre> <p>Compute method for numba:</p> <pre><code>@numba.jit def compute_(self, df1, df2): n = len(df2) result = np.empty(n, dtype='float64') for i in range(n): d = df2.iloc[i] result[i] = self.apply_ImputeTargetAcrossSameDate_method(df1['APPT_SCHD_ARVL_D'].values, df1['ACTL_CNTRS_BY_DAY'].values, d['APPT_SCHD_ARVL_D'], d['ACTL_CNTRS_BY_DAY']) return result </code></pre> <p>This is a wrapper method which replaces Pandas' apply to call the impute method on each row. The impute method using numba is as follows:</p> <pre><code>@numba.jit def apply_ImputeTargetAcrossSameDate_method(self, df1col1, df1col2, df2col1, df2col2): dd = np.datetime64(df2col1) idx1 = np.where(df1col1 == dd)[0] if idx1.size == 0: idx1 = idx1 else: idx1 = idx1[0] val = df1col2[idx1] if val.size == 0: r = 0 else: r = val return r </code></pre> <p>I ran both the normal apply() method and the numba method on 5 days of data, with the following results:</p> <pre><code>With Numba: 749805 microseconds With DF.apply() 484603 microseconds. </code></pre> <p>As you can see, numba is slower, which should not happen, so in case I'm missing something, let me know so that I can optimize this piece of code.</p> <p>Thanks in advance</p> <p><strong>Edit 1</strong> As requested, a data snippet (the top 20 rows) is added below. Before:</p> <pre><code> APPT_SCHD_ARVL_D ACTL_CNTRS_BY_DAY 919 2020-11-17 NaN 917 2020-11-17 NaN 916 2020-11-17 NaN 915 2020-11-17 NaN 918 2020-11-17 NaN 905 2014-06-01 NaN 911 2014-06-01 NaN 913 2014-06-01 NaN 912 2014-06-01 NaN 910 2014-06-01 NaN 914 2014-06-01 NaN 908 2014-06-01 NaN 906 2014-06-01 NaN 909 2014-06-01 NaN 907 2014-06-01 NaN 898 2014-05-29 NaN 892 2014-05-29 NaN 893 2014-05-29 NaN 894 2014-05-29 NaN 895 2014-05-29 NaN </code></pre> <p>After:</p> <pre><code>APPT_SCHD_ARVL_D ACTL_CNTRS_BY_DAY 919 2020-11-17 0.0 917 2020-11-17 0.0 916 2020-11-17 0.0 915 2020-11-17 0.0 918 2020-11-17 0.0 905 2014-06-01 0.0 911 2014-06-01 0.0 913 2014-06-01 0.0 912 2014-06-01 0.0 910 2014-06-01 0.0 914 2014-06-01 0.0 908 2014-06-01 0.0 906 2014-06-01 0.0 909 2014-06-01 0.0 907 2014-06-01 0.0 898 2014-05-29 0.0 892 2014-05-29 0.0 893 2014-05-29 0.0 894 2014-05-29 0.0 895 2014-05-29 0.0 </code></pre> <p>What does the method do? In the data example above, you can see some dates are repeated and the values against them are NaN. If all the rows having the same date have the value NaN, the method replaces them with 0. But there are some cases, for example <strong>2014-05-29</strong>, where there are 10 rows with the same date and only 1 row against that date with an actual value (let's say 10). Then the method populates all values against that particular date with 10 instead of NaN.</p> <p>Example:</p> <pre><code>898 2014-05-29 NaN 892 2014-05-29 NaN 893 2014-05-29 NaN 894 2014-05-29 10 895 2014-05-29 NaN </code></pre> <p>The above then becomes:</p> <pre><code>898 2014-05-29 10 892 2014-05-29 10 893 2014-05-29 10 894 2014-05-29 10 895 2014-05-29 10 </code></pre>
0
2016-10-14T10:34:57Z
40,045,544
<p>This is a bit of a rushed solution because I'm about to leave for the weekend, but it works.</p> <p>Input Dataframe:</p> <pre><code>index APPT_SCHD_ARVL_D ACTL_CNTRS_BY_DAY 919 2020-11-17 NaN 917 2020-11-17 NaN 916 2020-11-17 NaN 915 2020-11-17 NaN 918 2020-11-17 NaN 905 2014-06-01 NaN 911 2014-06-01 NaN 913 2014-06-01 NaN 912 2014-06-01 NaN 910 2014-06-01 NaN 914 2014-06-01 NaN 908 2014-06-01 NaN 906 2014-06-01 NaN 909 2014-06-01 NaN 907 2014-06-01 NaN 898 2014-05-29 NaN 892 2014-05-29 NaN 893 2014-05-29 NaN 894 2014-05-29 10 895 2014-05-29 NaN 898 2014-05-29 NaN </code></pre> <p>The code:</p> <pre><code>tt = df[pd.notnull(df.ACTL_CNTRS_BY_DAY)].APPT_SCHD_ARVL_D.unique() vv = df[pd.notnull(df.ACTL_CNTRS_BY_DAY)] for i,_ in df.iterrows(): if df.ix[i,"APPT_SCHD_ARVL_D"] in tt: df.ix[i,"ACTL_CNTRS_BY_DAY"] = vv[vv.APPT_SCHD_ARVL_D == df.ix[i,"APPT_SCHD_ARVL_D"]]["ACTL_CNTRS_BY_DAY"].values[0] df = df.fillna(0.0) </code></pre> <p>Basically there is no need to <code>apply</code> a function. What I'm doing here is:</p> <ul> <li>Get all unique dates with a value that is not null. -&gt; <code>tt</code></li> <li>Create a dataframe of only the non-null values. -&gt; <code>vv</code></li> <li>Iterate over all rows and test if the date in each row is present in <code>tt</code>.</li> <li>If true take the value from <code>vv</code> where the date in <code>df</code> is the same and assign it to <code>df</code>.</li> <li>Then fill all other null values with <code>0.0</code>.</li> </ul> <p>Iterating over rows isn't fast, but I hope it's faster than your old code. If I had more time I would think of a solution without iteration, maybe on Monday.</p> <p>EDIT: Solution without iteration, using <code>pd.merge()</code> instead:</p> <pre><code>dg = df[pd.notnull(df.ACTL_CNTRS_BY_DAY)].groupby("APPT_SCHD_ARVL_D").first()["ACTL_CNTRS_BY_DAY"].to_frame().reset_index() df = pd.merge(df,dg,on="APPT_SCHD_ARVL_D",how='outer').rename(columns={"ACTL_CNTRS_BY_DAY_y":"ACTL_CNTRS_BY_DAY"}).drop("ACTL_CNTRS_BY_DAY_x",axis=1).fillna(0.0) </code></pre> <p>Your data implies that there's at most one non-null value in <code>ACTL_CNTRS_BY_DAY</code> per date, so I'm using <code>first()</code> in the <code>groupby</code> to pick the only value that exists.</p>
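<p>Another sketch of the same idea, under the same assumption of at most one non-null value per date: a groupby transform broadcasts the first non-null value to every row of its date group in one line, with no Python-level loop:</p> <pre><code># GroupBy.first() skips NaN, so transform('first') fills each date group
# with that date's single known value (or NaN if the group has none)
df['ACTL_CNTRS_BY_DAY'] = (df.groupby('APPT_SCHD_ARVL_D')['ACTL_CNTRS_BY_DAY']
                             .transform('first')
                             .fillna(0.0))
</code></pre>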
1
2016-10-14T14:24:01Z
[ "python", "pandas", "optimization", "jit", "numba" ]
How to quickly build a graph with Numpy?
40,041,205
<p>In order to use PyStruct to perform image segmentation (by means of inference [1]), I first need to build a graph whose nodes correspond to pixels and whose edges are the links between these pixels.</p> <p>I have thus written a function, which works, to do so:</p> <pre><code> def create_graph_for_pystruct(mrf, betas, nb_labels): M, N = mrf.shape b1, b2, b3, b4 = betas edges = [] pairwise = np.zeros((nb_labels, nb_labels)) # loop over rows for i in range(M): # loop over columns for j in range(N): # get rid of pixels belonging to image's borders if i!=0 and i!=M-1 and j!=0 and j!=N-1: # get the current linear index current_linear_ind = i * N + j # retrieve its neighborhood (yield a list of tuple (row, col)) neigh = np.array(getNeighborhood(i, j, M, N)) # convert neighbors indices to linear ones neigh_linear_ind = neigh[:, 0] * N + neigh[:, 1] # add edges [edges.append((current_linear_ind, n)) for n in neigh_linear_ind] mat1 = b1 * np.eye(nb_labels) mat2 = b2 * np.eye(nb_labels) mat3 = b3 * np.eye(nb_labels) mat4 = b4 * np.eye(nb_labels) pairwise = np.ma.dstack((pairwise, mat1, mat1, mat2, mat2, mat3, mat3, mat4, mat4)) return np.array(edges), pairwise[:, :, 1:] </code></pre> <p>However, it is slow and I am wondering where I can improve my function in order to speed it up. [1] <a href="https://pystruct.github.io/generated/pystruct.inference.inference_dispatch.html" rel="nofollow">https://pystruct.github.io/generated/pystruct.inference.inference_dispatch.html</a></p>
0
2016-10-14T10:41:30Z
40,052,094
<p>Here is a code suggestion that should run much faster (in numpy one should favor vectorisation over for-loops). I build the whole output in a single pass using vectorisation, and I used the helpful <code>np.ogrid</code> to generate the xy coordinates.</p> <pre><code>def new(mrf, betas, nb_labels): M, N = mrf.shape b1, b2, b3, b4 = betas mat1,mat2,mat3,mat4 = np.array([b1,b2,b3,b4])[:,None,None]*np.eye(nb_labels)[None,:,:] pairwise = np.array([mat1, mat1, mat2, mat2, mat3, mat3, mat4, mat4]*((M-2)*(N-2))).transpose() m,n=np.ogrid[0:M,0:N] a,b,c= m[0:-2]*N+n[:,0:-2],m[1:-1]*N+n[:,0:-2],m[2: ]*N+n[:,0:-2] d,e,f= m[0:-2]*N+n[:,1:-1],m[1:-1]*N+n[:,1:-1],m[2: ]*N+n[:,1:-1] g,h,i= m[0:-2]*N+n[:,2: ],m[1:-1]*N+n[:,2: ],m[2: ]*N+n[:,2: ] center_index = e edges_index = np.stack([a,b,c,d,f,g,h,i]) edges=np.empty(list(edges_index.shape)+[2]) edges[:,:,:,0]= center_index[None,:,:] edges[:,:,:,1]= edges_index edges=edges.reshape(-1,2) return edges,pairwise </code></pre> <p><strong>Timing and comparison test:</strong></p> <pre><code>import timeit args=(np.empty((40,50)), [1,2,3,4], 10) f1=lambda : new(*args) f2=lambda : create_graph_for_pystruct(*args) edges1, pairwise1 = f1() edges2, pairwise2 = f2() #outputs are not exactly identical: the order isn't the same #I sort both to compare the results edges1 = edges1[np.lexsort(np.fliplr(edges1).T)] edges2 = edges2[np.lexsort(np.fliplr(edges2).T)] print("edges identical ?",(edges1 == edges2).all()) print("pairwise identical ?",(pairwise1 == pairwise2).all()) print("new : ",timeit.timeit(f1,number=1)) print("old : ",timeit.timeit(f2,number=1)) </code></pre> <p><strong>Output:</strong></p> <pre><code>edges identical ? True pairwise identical ? True new : 0.015270026000507642 old : 4.611805051001284 </code></pre> <p>Note: I had to guess what was in the <code>getNeighborhood</code> function</p>
1
2016-10-14T21:14:18Z
[ "python", "numpy" ]
creating a csv file from a function result, python
40,041,267
<p>I am using this pdf to csv function from {<a href="http://stackoverflow.com/questions/25665/python-module-for-converting-pdf-to-text">Python module for converting PDF to text</a>} and I was wondering how I can now export the result to a csv file on my drive? I tried adding this in the function</p> <pre><code>with open('C:\location', 'wb') as f: writer = csv.writer(f) for row in data: writer.writerow(row) </code></pre> <p>but the resulting csv file has one character per row, not the rows I see when printing data in Python.</p>
1
2016-10-14T10:45:00Z
40,041,972
<p>If you are printing a single character per row, then what you have is a string. Your loop</p> <pre><code>for row in data: </code></pre> <p>translates to</p> <pre><code>for character in string: </code></pre> <p>so you need to break your string up into the chunks you want written on a single row. You might be able to use something like <code>data.split()</code> but it's hard to say without seeing more of your code and data.</p> <p>In response to your comment: yes, you can just dump the data to a CSV... If it adheres to the rules of CSV. If your data is separated by commas, with each row terminated by a newline, then you can just write your data to a file.</p> <pre><code>with open ("file.csv",'w') as f: f.write(data) </code></pre> <p>This will ONLY work if your data adheres to the rules of csv.</p>
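<p>A minimal sketch of that splitting approach, assuming the extracted text has one record per newline and comma-separated fields (both are assumptions, since the actual layout of <code>data</code> isn't shown):</p> <pre><code>import csv

with open('file.csv', 'w') as f:
    writer = csv.writer(f)
    for line in data.split('\n'):           # one record per line (assumed)
        writer.writerow(line.split(','))    # comma-separated fields (assumed)
</code></pre>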
1
2016-10-14T11:22:41Z
[ "python", "csv", "pdf" ]
Convert custom strings to datetime format
40,041,398
<p>I have a list of date-time data strings that looks like this:</p> <pre><code>list = ["2016-08-02T09:20:32.456Z", "2016-07-03T09:22:35.129Z"] </code></pre> <p>I want to convert this to the example format (for the first item):</p> <pre><code>"8/2/2016 9:20:32 AM" </code></pre> <p>I tried this:</p> <pre><code>from datetime import datetime for time in list: date = datetime.strptime(time, '%Y %m %d %I %R') </code></pre> <p>But this returns an error:</p> <blockquote> <p>ValueError: 'R' is a bad directive in format '%Y %m %d %I %R'</p> </blockquote> <p>Thanks for any help!</p>
0
2016-10-14T10:52:20Z
40,041,552
<pre><code>s = "2016-08-02T09:20:32.456Z" d = datetime.strptime(s, "%Y-%m-%dT%H:%M:%S.%fZ") </code></pre> <p>The use <code>d.strftime</code> (<a href="https://docs.python.org/3.0/library/datetime.html#id1" rel="nofollow">https://docs.python.org/3.0/library/datetime.html#id1</a>).</p>
1
2016-10-14T11:00:05Z
[ "python", "date", "converter" ]
Convert custom strings to datetime format
40,041,398
<p>I have a list of date-time data strings that looks like this:</p> <pre><code>list = ["2016-08-02T09:20:32.456Z", "2016-07-03T09:22:35.129Z"] </code></pre> <p>I want to convert this to the example format (for the first item):</p> <pre><code>"8/2/2016 9:20:32 AM" </code></pre> <p>I tried this:</p> <pre><code>from datetime import datetime for time in list: date = datetime.strptime(time, '%Y %m %d %I %R') </code></pre> <p>But this returns an error:</p> <blockquote> <p>ValueError: 'R' is a bad directive in format '%Y %m %d %I %R'</p> </blockquote> <p>Thanks for any help!</p>
0
2016-10-14T10:52:20Z
40,041,720
<p>You have several problems in your code:</p> <ul> <li><code>%R</code> doesn't seem to be a <a href="https://docs.python.org/2/library/datetime.html#strftime-and-strptime-behavior" rel="nofollow">correct directive</a> (that's your error; I think you are looking for <code>%p</code>).</li> <li>the format in strptime describes your input, not the output</li> <li>you are using <code>list</code> and <code>time</code> as variable names, which is bad ;]</li> </ul> <p>So, you could first convert your time to a proper datetime object:</p> <pre><code>&gt;&gt;&gt; t='2016-08-02T09:20:32.456Z' &gt;&gt;&gt; d=datetime.strptime(t, "%Y-%m-%dT%H:%M:%S.%fZ") &gt;&gt;&gt; d datetime.datetime(2016, 8, 2, 9, 20, 32, 456000) </code></pre> <p>And then convert this object to a string with strftime:</p> <pre><code>&gt;&gt;&gt; datetime.strftime(d, "%m/%d/%Y %I:%M:%S %p") '08/02/2016 09:20:32 AM' </code></pre>
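<p>Putting both steps together for the question's whole list (a sketch; note this keeps the zero-padding, unlike the question's exact target format):</p> <pre><code>times = ["2016-08-02T09:20:32.456Z", "2016-07-03T09:22:35.129Z"]
for t in times:
    d = datetime.strptime(t, "%Y-%m-%dT%H:%M:%S.%fZ")
    print(datetime.strftime(d, "%m/%d/%Y %I:%M:%S %p"))  # e.g. '08/02/2016 09:20:32 AM'
</code></pre>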
1
2016-10-14T11:10:23Z
[ "python", "date", "converter" ]
XML Processing not working
40,041,410
<p>I am trying to extract data from a sensor (it communicates with "xml type" strings) and convert it to csv. With my current code I already write xml files, but the data comes in single rows (from root to /root, that is).</p> <p>I don't know if this is the reason, but I get an <strong>elementtree.parse error: junk after document element</strong>. In everything I've read so far, the problem was in the xml construction (more than one root, no root, etc.), so I'm a bit at a loss with my case.</p> <p>Logged in xml file:</p> <pre><code>&lt;li820&gt;&lt;data&gt;&lt;celltemp&gt;5.1120729e1&lt;/celltemp&gt;&lt;cellpres&gt;9.7705745e1&lt;/cellpres&gt;&lt;co2&gt;7.7808494e2&lt;/co2&gt;&lt;co2abs&gt;5.0983281e-2&lt;/co2abs&gt;&lt;ivolt&gt;1.1380004e1&lt;/ivolt&gt;&lt;raw&gt;2726238,1977386&lt;/raw&gt;&lt;/data&gt;&lt;/li820&gt; &lt;li820&gt;&lt;data&gt;&lt;celltemp&gt;5.1120729e1&lt;/celltemp&gt;&lt;cellpres&gt;9.7684698e1&lt;/cellpres&gt;&lt;co2&gt;7.7823929e2&lt;/co2&gt;&lt;co2abs&gt;5.0991268e-2&lt;/co2abs&gt;&lt;ivolt&gt;1.1380004e1&lt;/ivolt&gt;&lt;raw&gt;2725850,1976922&lt;/raw&gt;&lt;/data&gt;&lt;/li820&gt; &lt;li820&gt;&lt;data&gt;&lt;celltemp&gt;5.1120729e1&lt;/celltemp&gt;&lt;cellpres&gt;9.7705745e1&lt;/cellpres&gt;&lt;co2&gt;7.7797288e2&lt;/co2&gt;&lt;co2abs&gt;5.0977463e-2&lt;/co2abs&gt;&lt;ivolt&gt;1.1373291e1&lt;/ivolt&gt;&lt;raw&gt;2726166,1977001&lt;/raw&gt;&lt;/data&gt;&lt;/li820&gt; </code></pre> <p>Content of one of the previous rows (in tree view):</p> <pre><code>&lt;li820&gt; &lt;data&gt; &lt;celltemp&gt;1.9523970e1&lt;/celltemp&gt; &lt;cellpres&gt;9.8993663e1&lt;/cellpres&gt; &lt;co2&gt;3.5942180e4&lt;/co2&gt; &lt;co2abs&gt;4.0364418e-1&lt;/co2abs&gt; &lt;ivolt&gt;1.1802978e1&lt;/ivolt&gt; &lt;raw&gt;2789123,1884335&lt;/raw&gt; &lt;/data&gt; &lt;/li820&gt; </code></pre> <p>Error:</p> <pre><code>Traceback (most recent call last): File "licor_read.py", line 96, in &lt;module&gt; tree = et.parse(file_xml) # Set XML Parser File "/usr/lib/python2.7/xml/etree/ElementTree.py", line 1182, in parse tree.parse(source, parser) File "/usr/lib/python2.7/xml/etree/ElementTree.py", line 656, in parse parser.feed(data) File "/usr/lib/python2.7/xml/etree/ElementTree.py", line 1642, in feed self._raiseerror(v) File "/usr/lib/python2.7/xml/etree/ElementTree.py", line 1506, in _raiseerror raise err xml.etree.ElementTree.ParseError: junk after document element: line 2, column 0 </code></pre> <p>My code:</p> <pre><code>import os, sys, subprocess import time, datetime import serial import string import glob import csv import xml.etree.ElementTree as et from xml.etree.ElementTree import XMLParser, XML, fromstring, tostring from os import path from bs4 import BeautifulSoup as bs #------------------------------------------------------------- #------------------ Open configurations ---------------------- #------------------------------------------------------------- ############ # Settings # ############ DEBUG = True LOG = True FREQ = 1 PORT = '/dev/ttyUSB0' BAUD = 9600 PARITY = 'N' STOPBIT = 1 BYTE_SZ = 8 TIMEOUT = 5.0 log_dir = 'logs/' out_dir = 'export/' fname_xml = 'licor820-data-{}.xml'.format(datetime.datetime.now()) # DO NOT touch the {} brackets fname_csv = 'licor820-data-{}.csv'.format(datetime.datetime.now()) # isLooping = 20 # Nr of data extractions isHeader = True # Do not touch if data headers are required isBegin = False #------------------------------------------------------------- #----- Better know what you are doing from this point -------- #-------------------------------------------------------------
################## # Initialisation # ################## file_xml = os.path.join(log_dir, fname_xml) # Define path and file name file_csv = os.path.join(out_dir, fname_csv) # fp_xml = open(file_xml, 'w') # Open writing streams fp_csv = open(file_csv, 'w') # try: buff = serial.Serial(PORT, BAUD, BYTE_SZ, PARITY, STOPBIT, TIMEOUT) # Open Serial connection except Exception as e: if DEBUG: print ("ERROR: {}".format(e)) sys.exit("Could not connect to the Licor") csv_writer = csv.writer(fp_csv) # Define CSV writer instruct_head = [] # '' ################ # Main program # ################ while isLooping : # Define nr of refreshed data extracted #os.system('clear') print('RAW/XML in progress... ' + str(isLooping)) # Debug this loop if(isBegin is False) : # Verify presence of the &lt;licor&gt; tag while(buff.readline()[0] is not '&lt;' and buff.readline()[1] is not 'l') : raw_output = buff.readline() # Jump the lines readed until &lt;licor&gt; isBegin = True raw_output = buff.readline() xml_output = raw_output print(xml_output) fp_xml.write(xml_output) # Write from serial port to xml isLooping -= 1 fp_xml.close() tree = et.parse(file_xml) # Set XML Parser root = tree.getroot() # '' for instruct_row in root.findall('li820'): # XML to CSV buffer instruct = [] if isHeader is True: # Buffering header celltemp = instruct_row.find('celltemp').tag instruct_head.append(celltemp) cellpres = instruct_row.find('cellpres').tag instruct_head.append(cellpres) co2 = instruct_row.find('co2').tag instruct_head.append(co2) co2abs = instruct_row.find('co2abs').tag instruct_head.append(co2abs) ivolt = instruct_row.find('ivolt').tag instruct_head.append(ivolt) raw = instruct_row.find('raw').tag instruct_head.append(raw) csv_writer.writerow(instruct_head) # Write header isHeader = False celltemp = instruct_row.find('celltemp').text # Buffering data instruct.append(celltemp) cellpres = instruct_row.find('cellpres').text instruct.append(cellpres) co2 = instruct_row.find('co2').text instruct.append(co2) co2abs = instruct_row.find('co2abs').text instruct.append(co2abs) ivolt = instruct_row.find('ivolt').text instruct.append(ivolt) raw = instruct_row.find('raw').text instruct.append(raw) csv_writer.writerow(instruct) # Write data''' csv_writer.close() fp_csv.close() os.system('clear') print('Job done. \nSaved at : ./' + file_xml + '\nAnd at ./' + file_csv + '\n') </code></pre>
1
2016-10-14T10:52:37Z
40,045,479
<p>You should open your input file with 'read' instead of 'write'; otherwise you will empty your file when you run your code.</p> <pre><code>fp_xml = open(file_xml, 'r');
</code></pre> <p>Besides, there is a better way to get all the elements. You don't need to know the names of all the tags ahead of time.</p> <pre><code>header = []
isHeader = True
for instruct_row in root.getchildren():     # XML to CSV buffer
    instruct = []
    for item in instruct_row.getchildren():
        if isHeader is True:
            header.append(item.tag)
        instruct.append(item.text)
    if isHeader is True:
        csv_writer.writerow(header)         # write the header row once
        isHeader = False
    csv_writer.writerow(instruct)           # write data
fp_csv.close()
</code></pre> <p>My input xml is below:</p> <pre><code>&lt;li820&gt; &lt;data&gt;&lt;celltemp&gt;5.1120729e1&lt;/celltemp&gt;&lt;cellpres&gt;9.7705745e1&lt;/cellpres&gt;&lt;co2&gt;7.7808494e2&lt;/co2&gt;&lt;co2abs&gt;5.0983281e-2&lt;/co2abs&gt;&lt;ivolt&gt;1.1380004e1&lt;/ivolt&gt;&lt;raw&gt;2726238,1977386&lt;/raw&gt;&lt;/data&gt; &lt;data&gt;&lt;celltemp&gt;5.1120729e1&lt;/celltemp&gt;&lt;cellpres&gt;9.7705745e1&lt;/cellpres&gt;&lt;co2&gt;7.7808494e2&lt;/co2&gt;&lt;co2abs&gt;5.0983281e-2&lt;/co2abs&gt;&lt;ivolt&gt;1.1380004e1&lt;/ivolt&gt;&lt;raw&gt;2726238,1977386&lt;/raw&gt;&lt;/data&gt; &lt;/li820&gt; </code></pre> <p>Finally you can see the data in your csv file.</p>
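<p>One detail worth making explicit: <code>et.parse()</code> only accepts a single root element, and the logger in the question writes one complete <code>&lt;li820&gt;</code> document per reading, which is exactly what triggers the "junk after document element" error. A hedged sketch of wrapping the logged lines in a synthetic root before parsing (the root name is arbitrary):</p> <pre><code>with open(file_xml, 'r') as fp_xml:
    wrapped = '&lt;readings&gt;' + fp_xml.read() + '&lt;/readings&gt;'  # synthetic single root
root = et.fromstring(wrapped)
for data in root.iter('data'):   # every &lt;data&gt; block across all &lt;li820&gt; documents
    print([item.text for item in data])
</code></pre>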
0
2016-10-14T14:19:36Z
[ "python", "xml", "csv" ]
How to stop or interrupt a function in python 3 with Tkinter
40,041,440
<p>I started programming in python just a few months ago and I really love it. It's so intuitive and fun to start with.</p> <p>Starting point: I have a linux machine which runs python 3.2.3. I have three buttons on the GUI to start a function with, and one to stop that process or the processes (that's the idea).</p> <p>The source is as follows:</p> <pre><code>def printName1(event): while button5 != True: print('Button 1 is pressed') time.sleep(3) # just for simulation purposes to get reaction time for stopping return print('STOP button is pressed') def StopButton(): button5 = True </code></pre> <p>I have tried while, and try with except, but the main problem is that the GUI (tkinter) is not responding while the process is running. It stores the input and runs it after the first function (printName1) is finished. I also looked here at stackoverflow, but the solutions didn't work properly for me and had the same issues with interrupting. I apologize for this (maybe) basic question, but I am very new to python and spent a few days searching and trying.</p> <p>Is there a way to do that? Can the solution maybe be made with threading? But how? Any advice/help is really appreciated.</p> <p>Thanks a lot!</p>
0
2016-10-14T10:54:34Z
40,046,134
<p>Use <code>threading.Event</code>:</p> <pre><code>import threading
import time

class ButtonHandler(threading.Thread):
    def __init__(self, event):
        threading.Thread.__init__(self)
        self.event = event

    def run(self):
        while not self.event.is_set():
            print("Button 1 is pressed!")
            time.sleep(3)
        print("Button stop")

myEvent = threading.Event()

# Button and root come from your existing tkinter GUI
# The start button
Button(root, text="start", command=lambda: ButtonHandler(myEvent).start()).pack()

# Say this is the exit button
Button(root, text="stop", command=lambda: myEvent.set()).pack()
</code></pre>
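<p>One follow-up detail: a <code>threading.Event</code> stays set once triggered, so if the start button should work again after a stop, clear the flag before launching a new handler thread. A sketch:</p> <pre><code>def start_handler():
    myEvent.clear()                  # reset the flag left over from a previous stop
    ButtonHandler(myEvent).start()

Button(root, text="start", command=start_handler).pack()
</code></pre>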
0
2016-10-14T14:52:35Z
[ "python", "tkinter" ]
How to import only in parts of the code? [Python]
40,041,469
<p>I use webbrowser only in one part of my code, and it seems wasteful to import it at the beginning of the script. Is there a way to import it only in the part of the code where I need to use it? Thanks!</p>
0
2016-10-14T10:55:52Z
40,041,543
<p>Yes. You move the import statement from the top of the script to the point in your script where you want it. For example, you would move the line below from the top of your script to somewhere inside the code.</p> <pre><code>import your_library </code></pre> <p>Be warned though: if the import is in a function, the import statement runs every time you call that function. Python caches loaded modules in <code>sys.modules</code>, so only the first call pays the full load cost, but the repeated lookups can still add a little time to the overall running of the program.</p> <p>Let me know if you meant a different question.</p>
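<p>A minimal sketch of that lazy-import pattern for the <code>webbrowser</code> case from the question (the function name is just illustrative):</p> <pre><code>def open_docs(url):
    import webbrowser   # loaded on first call, then served from sys.modules
    webbrowser.open(url)
</code></pre>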
0
2016-10-14T10:59:35Z
[ "python", "performance", "python-2.7", "libraries" ]
Python multiple search based on enabled or not
40,041,477
<p>Everyone, hello!</p> <p>I'm currently trying to find several strings of text over telnet, using Telnetlib (<a href="https://docs.python.org/2/library/telnetlib.html" rel="nofollow">https://docs.python.org/2/library/telnetlib.html</a>) in Python 2.7.</p> <p>So far, the snippet I'm using that works great is:</p> <pre><code> while True: r = tn.read_some() if any(x in r for x in [string1, string2]): action </code></pre> <p>The issue I'm facing is that I have several more strings (roughly 3 or so in total). Depending on whether they are enabled, I would like to include them in the if condition.</p> <p>The enabling/disabling is set via ConfigParser in a config.ini file, with options named like string1_enable = yes/no.</p> <p>The longest route (and the only one I can think of right now) is:</p> <pre><code>if string1_enable == "yes" and string2_enable == "no" and string3_enable == "no": s1 = (x in r for x in [string1]) if string1_enable == "no" and string2_enable == "yes" and string3_enable == "no": s2 = (x in r for x in [string2]) </code></pre> <p>This would of course be a disaster to look at, impossibly long to go through, and I wouldn't even know how to properly implement it. Thus, any help would be really appreciated. Thanks!</p>
0
2016-10-14T10:56:07Z
40,042,559
<p>You can build a list based on the config:</p> <pre><code>strings = [] if string1_enable == "yes": strings.append(string1) if string2_enable == "yes": strings.append(string2) if string3_enable == "yes": strings.append(string3) while True: r = tn.read_some() if any(x in r for x in strings): action </code></pre>
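<p>If the flags come straight from ConfigParser, a slightly more scalable sketch (assuming variable names that mirror the question's) builds the same list in one pass:</p> <pre><code>candidates = [(string1_enable, string1),
              (string2_enable, string2),
              (string3_enable, string3)]
strings = [s for enabled, s in candidates if enabled == "yes"]
</code></pre>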
1
2016-10-14T11:54:13Z
[ "python", "if-statement" ]
Python C extension: Extract parameter from the engine
40,041,498
<p>I would like to use Python to perform Mathematics tests on my functions. A typical program that can gain access to Python is this:</p> <pre><code>#include &lt;iostream&gt; #include &lt;string&gt; #include &lt;Python.h&gt; int RunTests() { Py_Initialize(); PyRun_SimpleString("a=5"); PyRun_SimpleString("b='Hello'"); PyRun_SimpleString("c=1+2j"); PyRun_SimpleString("d=[1,3,5,7,9]"); //question here Py_Finalize(); return 0; } </code></pre> <p>My question is: How can I extract the parameters <code>a,b,c,d</code> to <code>PyObject</code>s?</p>
0
2016-10-14T10:57:06Z
40,041,694
<p><code>PyRun_SimpleString()</code> executes the code in the context of the <code>__main__</code> module. You can retrieve a reference to this module using <code>PyImport_AddModule()</code>, get the globals dictionary from this module and look up variables:</p> <pre><code>PyObject *main = PyImport_AddModule("__main__"); PyObject *globals = PyModule_GetDict(main); PyObject *a = PyDict_GetItemString(globals, "a"); </code></pre> <p>Instead of using this approach, you might be better off creating a new <code>globals</code> dictionary and using <code>PyRun_String()</code> to execute code in the context of that <code>globals</code> dict. Note that the start token must be <code>Py_eval_input</code> to get the expression's value back as the return value; with <code>Py_single_input</code> the call would just return <code>None</code>:</p> <pre><code>PyObject *globals = PyDict_New(); PyObject *a = PyRun_String("5", Py_eval_input, globals, globals); </code></pre> <p>This way, you don't need to first store the result of your expression in some variable and then extract it from the global scope of <code>__main__</code>. You can still use variables to store intermediate results, which can then be extracted from <code>globals</code> as above.</p>
0
2016-10-14T11:09:14Z
[ "python", "c++", "c", "python-c-extension" ]
Python Dictionaries
40,041,542
<p>I am stuck with a school project. The following function should set the name of a room in a given dictionary. The rooms dictionary should stay as it is, because other functions use it. The function should randomly set the name of each room so it can be displayed later in another function.</p> <pre><code>import string import random names = ['This', 'happens', 'all', 'the', 'time'] for key, value in rooms.items(): # do something with value rooms[value]["name"] = random.choice(names) names.remove(rooms[value]["name"]) room_1 = {"name" : "", "description" : """ """, "exits" : {"east": "Second" , "south":"Fourth"}, "items" : []} room_2 = {"name": "", "description" : """ """, "exits" : {"west": "First" , "south":"Fifth" , "east":"Third"}, "items" : []} rooms = {"First" : room_1, "Second" : room_2,} </code></pre>
-1
2016-10-14T10:59:25Z
40,041,838
<p>To make the script you posted work, you need to:</p> <ol> <li><p>Move the <code>for</code> loop to the end of the script (after you set the value of <code>rooms</code>). It operates on the <code>rooms</code> variable, which must be defined first.</p></li> <li><p>Fix the <code>for</code> loop. If you want to modify a value of a dictionary, access it with <code>rooms[key]</code>, not <code>rooms[value]</code>.</p></li> </ol> <p>Something like this:</p> <pre><code>import random names = ['This', 'happens', 'all', 'the', 'time'] rooms = { "First": {}, "Second": {}, } for key in rooms: # do something with value name = random.choice(names) rooms[key]["name"] = name names.remove(name) </code></pre>
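<p>An equivalent sketch that avoids mutating <code>names</code> while still picking distinct names, using <code>random.sample</code> to draw one name per room up front:</p> <pre><code>import random

names = ['This', 'happens', 'all', 'the', 'time']
picks = random.sample(names, len(rooms))   # distinct names, one per room
for key, name in zip(rooms, picks):
    rooms[key]["name"] = name
</code></pre>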
0
2016-10-14T11:16:08Z
[ "python", "python-3.x", "dictionary" ]
How to create index for dic in mongodb with python?
40,041,547
<p>I have some objects like:</p> <pre><code>{ 'id':1 'claims':{ 'c1':{xxxxx}, 'c8':{xxxxx}, 'c20':{xxxxx} } } </code></pre> <p>If I use <code>d.create_index([('claims', 1)])</code> directly, it will use the whole <code>{'c1':{xxxxx},'c8':{xxxxx},'c20':{xxxxx}}</code> as the index, which is not what I want. I just want to use 'c1', 'c8', 'c20' as keys. I read in the documentation that if it is an array like:</p> <pre><code>{ 'id':1 'claims':[ 'c1':{xxxxx}, 'c8':{xxxxx}, 'c20':{xxxxx} ] } </code></pre> <p>it would be possible. I want to know if this can be done when it is a dict, or how I can convert it into an array.</p>
0
2016-10-14T10:59:54Z
40,041,987
<p>The way of doing that is as follows:</p> <pre><code>db.collection.createIndex( { 'claims.c1':1, 'claims.c8':1, 'claims.c20':1 } ) </code></pre>
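<p>Since the question is tagged python, the same compound index via pymongo would be a sketch like this (the database and collection names are just placeholders):</p> <pre><code>from pymongo import MongoClient, ASCENDING

coll = MongoClient().mydb.mycollection   # hypothetical db/collection
coll.create_index([('claims.c1', ASCENDING),
                   ('claims.c8', ASCENDING),
                   ('claims.c20', ASCENDING)])
</code></pre>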
1
2016-10-14T11:23:39Z
[ "python", "arrays", "mongodb", "dictionary", "indexing" ]
How is the output.xml file generated in robot framework
40,041,661
<p>I am currently working on adding support for execution of Robot scripts from our home-grown Automation Framework. I understand that Robot, by default, generates the output.xml file upon the execution of the Robot scripts.</p> <p>So as to maintain uniformity, I am exploring the option of using the Robot Logging module for our custom automation scripts as well. On checking the source code, I see there is a Logger class under the <code>robot.output</code> directory which logs messages on the console. However, I want to generate the same log and report files as is done for the Robot scripts. For that, I need to know how the <code>output.xml</code> is generated and how it works.</p> <p>Can someone point me to the correct module/direction to move forward on this?</p>
-2
2016-10-14T11:07:21Z
40,042,942
<p><code>robot/running/model.py</code> defines a class named <code>TestSuite</code>. In that class definition is a method named <code>run</code> which is responsible for running the test. As part of its initialization it creates an instance of <code>Output</code>, which is the xml logger. This logger is defined in the file <code>robot/output/output.py</code>.</p>
0
2016-10-14T12:16:07Z
[ "python", "xml", "python-2.7", "robotframework" ]
Converting pandas dataframe to csv
40,041,757
<p><a href="https://i.stack.imgur.com/ZlUc1.png" rel="nofollow"><img src="https://i.stack.imgur.com/ZlUc1.png" alt="enter image description here"></a> I have the dataframe above and I wish to convert it into a csv file.<br> I am currently using <code>df.to_csv('my_file.csv')</code> to convert it but I want to leave 3 blanks columns. For the rows of the file above I have the following procedure. </p> <pre><code>dirname = os.path.dirname(os.path.abspath(__file__)) csvfilename = os.path.join(dirname, 'MainFile.csv') with open(csvfilename, 'wb') as output_file: writer = csv.writer(output_file, delimiter=',') writer.writerow([]) writer.writerow(["","Amazon","Weekday","Weekend"]) writer.writerow(["","Ebay",wdvad,wevad]) writer.writerow(["","Kindle",wdmpv,wempv]) writer.writerow([]) </code></pre> <p>I want to incorporate the data frame right after the blanks space with three blank columns. How can I add the dataframe in continuation to the existing csv file so that I can also add more rows with data after the dataframe. </p>
2
2016-10-14T11:12:29Z
40,048,515
<p>Consider outputting the data frame initially as-is to a temp file. Then, during creation of the <em>MainCSV</em>, read in the temp file, iteratively writing its lines, then destroy the temp file. Also, prior to writing the dataframe to csv, create the three blank columns.</p> <p>The code below assumes you want two things: 1) three blank columns, and 2) the dataframe values written below the Amazon/Ebay/Kindle header rows. The example data uses random normal values, and the scalar values for <em>wdvad</em>, <em>wevad</em>, <em>wdmpv</em>, <em>wempv</em> are the string literals of their names:</p> <pre><code>import csv, os import numpy as np import pandas as pd # TEMP DF CSV dirname = os.path.dirname(os.path.abspath(__file__)) df = pd.DataFrame([np.random.normal(loc=3.0, scale=1.0, size=24)*1000 for i in range(7)], index=['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday']) df['Blank1'], df['Blank2'], df['Blank3'] = None, None, None df.to_csv(os.path.join(dirname, 'temp.csv')) # OUTPUT TEMP DF CSV # MAIN CSV csvfilename = os.path.join(dirname, 'MainFile.csv') tempfile = os.path.join(dirname, 'temp.csv') wdvad = 'wdvad'; wevad = 'wevad'; wdmpv = 'wdmpv'; wempv = 'wempv' with open(csvfilename, 'w', newline='') as output_file: writer = csv.writer(output_file) writer.writerow([""]) writer.writerow(["","Amazon","Weekday","Weekend"]) writer.writerow(["","Ebay",wdvad,wevad]) writer.writerow(["","Kindle",wdmpv,wempv]) writer.writerow([""]) with open(tempfile, 'r') as data_file: for line in data_file: line = line.replace('\n', '') row = line.split(",") writer.writerow(row) os.remove(tempfile) # DESTROY TEMP DF CSV </code></pre> <p><strong>Output</strong></p> <p><a href="https://i.stack.imgur.com/qVOCL.png" rel="nofollow"><img src="https://i.stack.imgur.com/qVOCL.png" alt="CSV File Output"></a></p>
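<p>A sketch that skips the temp file entirely: since <code>DataFrame.to_csv</code> also accepts an open file handle, the header rows and the dataframe can be written to the same stream in one pass (same variables as above):</p> <pre><code>with open(csvfilename, 'w', newline='') as output_file:
    writer = csv.writer(output_file)
    writer.writerow([""])
    writer.writerow(["", "Amazon", "Weekday", "Weekend"])
    writer.writerow(["", "Ebay", wdvad, wevad])
    writer.writerow(["", "Kindle", wdmpv, wempv])
    writer.writerow([""])
    df.to_csv(output_file)   # the dataframe (with its blank columns) lands below the rows above
</code></pre>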
1
2016-10-14T17:07:33Z
[ "python", "csv", "pandas" ]
python selenium how to deal with absence of an element
40,041,809
<p>So I'm trying to translate all the reviews on tripadvisor, saving both the original (non-translated) comments and the translated comments (from Portuguese to English).</p> <p>The scraper first selects the Portuguese comments to be displayed, then converts them into English one by one, saving the translated comments in com_ and the expanded non-translated comments in expanded_comments.</p> <p>The problem now is that comments which are already in English don't have any "Google Translate" widget inside them. But even when there isn't one, I still want to at least save these comments as English. However, I'm unable to handle the absence of the element.</p> <p>Basically, the <code>save_comments(driver)</code> function is where it's happening.</p> <pre><code>from selenium import webdriver from selenium.webdriver.common.by import By import time from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC com_=[] expanded_comments=[] date_=[] driver = webdriver.Chrome("C:\Users\shalini\Downloads\chromedriver_win32\chromedriver.exe") driver.maximize_window() from bs4 import BeautifulSoup def expand_reviews(driver): # TRYING TO EXPAND REVIEWS (&amp; CLOSE A POPUP) try: driver.find_element_by_class_name("moreLink").click() except: print "err" try: driver.find_element_by_class_name("ui_close_x").click() except: print "err2" try: driver.find_element_by_class_name("moreLink").click() except: print "err3" def save_comments(driver): # SELECTING ALL GOOGLE-TRANSLATOR links gt= driver.find_elements(By.CSS_SELECTOR,".googleTranslation&gt;.link") # NOW PRINTING TRANSLATED COMMENTS for i in gt: try: driver.execute_script("arguments[0].click()",i) #com=driver.find_element_by_class_name("ui_overlay").text com= driver.find_element_by_xpath(".//span[@class = 'ui_overlay ui_modal ']//div[@class='entry']") com_.append(com.text) time.sleep(5) driver.find_element_by_class_name("ui_close_x").click().perform() time.sleep(5) except Exception as e: pass #AS PER user : BREAKS_SOFTWARE if len(gt)==0: print "ERR" # ITERATING THROUGH ALL 200 tripadvisor webpages and saving comments &amp; translated comments for i in range(56,58): page=i*10 url="https://www.tripadvisor.com/Airline_Review-d8729164-Reviews-Cheap-Flights-or"+str(page)+"-TAP-Portugal#REVIEWS" driver.get(url) wait = WebDriverWait(driver, 10) if i==0: # SELECTING PORTUGUESE COMMENTS ONLY # Run for one time then iterate over pages try: langselction = wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, "span.sprite-date_picker-triangle"))) langselction.click() driver.find_element_by_xpath("//div[@class='languageList']//li[normalize-space(.)='Englsih first']").click() time.sleep(5) except Exception as e: print e save_comments(driver) </code></pre>
0
2016-10-14T11:14:50Z
40,042,197
<pre><code>if len(gt) == 0:
    # gt is a plain Python list returned by find_elements, so use len(), not .size()
    # insert code here to extract the English comments
</code></pre>
0
2016-10-14T11:35:11Z
[ "python", "selenium", "web-scraping" ]
Choosing between pandas, OOP classes, and dicts (Python)
40,041,845
<p>I have written a program that reads a couple of .csv files (they are not large, a couple of thousand rows each), I do some data cleaning and wrangling, and this is what the final structure of each .csv file looks like (fake data for illustration purposes only).</p> <pre><code>import pandas as pd data = [[112233,'Rob',99],[445566,'John',88]] managers = pd.DataFrame(data) managers.columns = ['ManagerId','ManagerName','ShopId'] print managers ManagerId ManagerName ShopId 0 112233 Rob 99 1 445566 John 88 data = [[99,'Shop1'],[88,'Shop2']] shops = pd.DataFrame(data) shops.columns = ['ShopId','ShopName'] print shops ShopId ShopName 0 99 Shop1 1 88 Shop2 data = [[99,2000,3000,4000],[88,2500,3500,4500]] sales = pd.DataFrame(data) sales.columns = ['ShopId','Year2010','Year2011','Year2012'] print sales ShopId Year2010 Year2011 Year2012 0 99 2000 3000 4000 1 88 2500 3500 4500 </code></pre> <p>Then I use <code>xlsxwriter</code> and <code>reportlab</code> Python packages for creating custom Excel sheets and .pdf reports while iterating the data frames. Everything looks great, and all of the named packages do their job really well.</p> <p>My concern though is that I feel that my code is getting hard to maintain, as I need to access the same data frame rows multiple times in multiple calls.</p> <p>Say I need to get manager names that are responsible for shops which had sales of more than 1500 in the year 2010. My code is filled with calls like this:</p> <pre><code>managers[managers['ShopId'].isin(sales[sales['Year2010'] &gt; 1500]['ShopId'])]['ManagerName'].values &gt;&gt;&gt; array(['Rob', 'John'], dtype=object) </code></pre> <p>I think it is hard to see what is going on while reading this line of code. I could create multiple intermediate variables, but this would add multiple lines of code.</p> <p>How common is it to sacrifice database normalization ideology and merge all the pieces into a single data frame to get more maintainable code? There are obviously cons to having a single data frame, as it might get messy when trying to merge other data frames that might be needed later on. Merging them of course leads to data redundancy, as the same manager can be assigned to multiple shops.</p> <pre><code>df = managers.merge(sales,how='left',on='ShopId').merge(shops,how='left',on='ShopId') print df ManagerId ManagerName ShopId Year2010 Year2011 Year2012 ShopName 0 112233 Rob 99 2000 3000 4000 Shop1 1 445566 John 88 2500 3500 4500 Shop2 </code></pre> <p>At least this call gets smaller:</p> <pre><code>df[df['Year2010'] &gt; 1500]['ManagerName'].values &gt;&gt;&gt; array(['Rob', 'John'], dtype=object) </code></pre> <p>Maybe pandas is the wrong tool for this kind of job?</p> <p>C# developers at the office frown at me and tell me to use classes, but then I will have a bunch of methods like <code>get_manager_sales(managerid)</code> and so forth. Iterating class instances for reporting also sounds troublesome, as I would need to implement some sorting and indexing (which I get for free with <code>pandas</code>).</p> <p>A dictionary would work, but it also makes it difficult to modify existing data, do merges, etc. The syntax doesn't get much better either.</p> <pre><code>data_dict = df.to_dict('records') [{'ManagerId': 112233L, 'ManagerName': 'Rob', 'ShopId': 99L, 'ShopName': 'Shop1', 'Year2010': 2000L, 'Year2011': 3000L, 'Year2012': 4000L}, {'ManagerId': 445566L, 'ManagerName': 'John', 'ShopId': 88L, 'ShopName': 'Shop2', 'Year2010': 2500L, 'Year2011': 3500L, 'Year2012': 4500L}] </code></pre> <p>Get manager names that are responsible for shops which had sales of more than 1500 in the year 2010:</p> <pre><code>[row['ManagerName'] for row in data_dict if row['Year2010'] &gt; 1500] &gt;&gt;&gt; ['Rob', 'John'] </code></pre> <p><strong>In this particular case with the data I operate with, should I go all the way with <code>pandas</code>, or is there another way to write cleaner code while taking advantage of the power of <code>pandas</code>?</strong></p>
5
2016-10-14T11:16:30Z
40,042,236
<p>I would choose Pandas, because it's much faster, has an excellent and extremely rich API, a source code looks much cleaner and better, etc.</p> <p>BTW the following line can be easily rewritten:</p> <pre><code>managers[managers['ShopId'].isin(sales[sales['Year2010'] &gt; 1500]['ShopId'])]['ManagerName'].values </code></pre> <p>as:</p> <pre><code>ShopIds = sales.ix[sales['Year2010'] &gt; 1500, 'ShopId'] managers.query('ShopId in @ShopIds')['ManagerName'].values </code></pre> <p>IMO it's pretty easy to read and understand</p> <p>PS you may also want to store your data in a <code>SQL</code>-able database and use SQL or to store it in HDF Store and use <code>where</code> parameter - in both cases you can benefit from indexing "search" columns</p>
2
2016-10-14T11:37:11Z
[ "python", "class", "oop", "pandas", "dictionary" ]
Choosing between pandas, OOP classes, and dicts (Python)
40,041,845
<p>I have written a program that reads a couple of .csv files (they are not large, a couple of thousand rows each), I do some data cleaning and wrangling, and this is what the final structure of each .csv file looks like (fake data for illustration purposes only).</p> <pre><code>import pandas as pd data = [[112233,'Rob',99],[445566,'John',88]] managers = pd.DataFrame(data) managers.columns = ['ManagerId','ManagerName','ShopId'] print managers ManagerId ManagerName ShopId 0 112233 Rob 99 1 445566 John 88 data = [[99,'Shop1'],[88,'Shop2']] shops = pd.DataFrame(data) shops.columns = ['ShopId','ShopName'] print shops ShopId ShopName 0 99 Shop1 1 88 Shop2 data = [[99,2000,3000,4000],[88,2500,3500,4500]] sales = pd.DataFrame(data) sales.columns = ['ShopId','Year2010','Year2011','Year2012'] print sales ShopId Year2010 Year2011 Year2012 0 99 2000 3000 4000 1 88 2500 3500 4500 </code></pre> <p>Then I use <code>xlsxwriter</code> and <code>reportlab</code> Python packages for creating custom Excel sheets and .pdf reports while iterating the data frames. Everything looks great, and all of the named packages do their job really well.</p> <p>My concern though is that I feel that my code is getting hard to maintain, as I need to access the same data frame rows multiple times in multiple calls.</p> <p>Say I need to get manager names that are responsible for shops which had sales of more than 1500 in the year 2010. My code is filled with calls like this:</p> <pre><code>managers[managers['ShopId'].isin(sales[sales['Year2010'] &gt; 1500]['ShopId'])]['ManagerName'].values &gt;&gt;&gt; array(['Rob', 'John'], dtype=object) </code></pre> <p>I think it is hard to see what is going on while reading this line of code. I could create multiple intermediate variables, but this would add multiple lines of code.</p> <p>How common is it to sacrifice database normalization ideology and merge all the pieces into a single data frame to get more maintainable code? There are obviously cons to having a single data frame, as it might get messy when trying to merge other data frames that might be needed later on. Merging them of course leads to data redundancy, as the same manager can be assigned to multiple shops.</p> <pre><code>df = managers.merge(sales,how='left',on='ShopId').merge(shops,how='left',on='ShopId') print df ManagerId ManagerName ShopId Year2010 Year2011 Year2012 ShopName 0 112233 Rob 99 2000 3000 4000 Shop1 1 445566 John 88 2500 3500 4500 Shop2 </code></pre> <p>At least this call gets smaller:</p> <pre><code>df[df['Year2010'] &gt; 1500]['ManagerName'].values &gt;&gt;&gt; array(['Rob', 'John'], dtype=object) </code></pre> <p>Maybe pandas is the wrong tool for this kind of job?</p> <p>C# developers at the office frown at me and tell me to use classes, but then I will have a bunch of methods like <code>get_manager_sales(managerid)</code> and so forth. Iterating class instances for reporting also sounds troublesome, as I would need to implement some sorting and indexing (which I get for free with <code>pandas</code>).</p> <p>A dictionary would work, but it also makes it difficult to modify existing data, do merges, etc. The syntax doesn't get much better either.</p> <pre><code>data_dict = df.to_dict('records') [{'ManagerId': 112233L, 'ManagerName': 'Rob', 'ShopId': 99L, 'ShopName': 'Shop1', 'Year2010': 2000L, 'Year2011': 3000L, 'Year2012': 4000L}, {'ManagerId': 445566L, 'ManagerName': 'John', 'ShopId': 88L, 'ShopName': 'Shop2', 'Year2010': 2500L, 'Year2011': 3500L, 'Year2012': 4500L}] </code></pre> <p>Get manager names that are responsible for shops which had sales of more than 1500 in the year 2010:</p> <pre><code>[row['ManagerName'] for row in data_dict if row['Year2010'] &gt; 1500] &gt;&gt;&gt; ['Rob', 'John'] </code></pre> <p><strong>In this particular case with the data I operate with, should I go all the way with <code>pandas</code>, or is there another way to write cleaner code while taking advantage of the power of <code>pandas</code>?</strong></p>
5
2016-10-14T11:16:30Z
40,042,283
<p>Creating classes that operate on dataframes is not a good idea, because it'll hide away the fact that you're using a data frame, and open the way to very bad decisions (like iterating over a dataframe with a <code>for</code> loop).</p> <p>Solution 1: Denormalize the data. You don't have to keep your data in a normal form. Normal form is preferable when you have to keep your entries consistent throughout the database. This is not a database; you don't do constant inserts, updates and deletes. So just denormalize it, and work with one large dataframe, as it's clearly more convenient and better suits your needs.</p> <p>Solution 2: Use a database. You can dump your data into a SQLite database (pandas has a built-in function for that), and execute all kinds of crazy queries on it. In my personal opinion, SQL queries are much more readable than the stuff you posted. If you do this kind of analysis regularly, and the data structure remains the same, this may be a preferable solution. You can dump data into a db, and then use SQLAlchemy to work with it.</p> <p>Solution 3. Create your own dataframe. You can inherit from <code>pandas.DataFrame</code> and add custom methods to it. You need to dig into the guts of <code>pandas</code> for that, though, to see how to implement those methods. This way you can create, for example, custom methods of accessing certain parts of a dataframe.</p> <p>Unless you know pandas really well, I'd go for solutions 1 or 2. If you need more flexibility, and the data manipulation is different every time, use 1. If you need to execute roughly the same analysis every time, use 2 (especially if your data analysis code is a part of a bigger application).</p> <p>Also, I don't understand why "adding more lines of code" is bad. By breaking up a huge one-liner into many expressions, you don't increase the <em>actual</em> complexity, and you decrease the <em>perceived</em> complexity. Maybe all you need to do is just refactor your code, and pack some operations into reusable functions?</p>
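<p>A sketch of the SQLite route from Solution 2, using pandas' built-in <code>to_sql</code> together with <code>read_sql_query</code> (the database file and table names are just illustrative):</p> <pre><code>import sqlite3
import pandas as pd

conn = sqlite3.connect('shops.db')
managers.to_sql('managers', conn, index=False)
sales.to_sql('sales', conn, index=False)

# the example query from the question, expressed as a readable join
query = """SELECT m.ManagerName
           FROM managers m JOIN sales s ON m.ShopId = s.ShopId
           WHERE s.Year2010 &gt; 1500"""
print(pd.read_sql_query(query, conn))
</code></pre>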
2
2016-10-14T11:39:46Z
[ "python", "class", "oop", "pandas", "dictionary" ]
How can I make sure the content of a tkinter entry is saved on FocusOut?
40,041,902
<p>I have an app that uses the <code>&lt;FocusOut&gt;</code> binding to automatically save the edits in an <code>Entry</code> to a list.</p> <p>There is no problem saving the <code>Entry</code> text when using <code>TAB</code> to navigate through the Entries or when I click on another Entry, but if I change the text in one Entry and then mouse-click on a <code>ListBox</code> in another frame, <code>&lt;FocusOut&gt;</code> doesn't fire on the last selected Entry and the information in it is not registered.</p> <p>How can I avoid this without resorting to a <code>Save</code> button on the GUI? For every selection in the <code>ListBox</code> there are different <code>Entry</code> boxes, so the user would have to press the <code>Save</code> button numerous times. I would like to avoid that.</p>
-2
2016-10-14T11:19:18Z
40,042,945
<h3>Realtime edit / save text instead</h3> <p>It looks like you want to get the updated text in realtime. What I do in such a case is use the <code>'&lt;KeyRelease&gt;'</code> binding. Simple, effective, entry-specific, and it works instantly.</p> <p>In concept:</p> <pre class="lang-py prettyprint-override"><code>from tkinter import *   # provides Tk and Entry

win = Tk()

def dosomething(*args):
    # update the corresponding text anywhere, save your text, whatever
    print(entry.get())

entry = Entry()
entry.bind("&lt;KeyRelease&gt;", dosomething)
entry.pack()

win.mainloop()
</code></pre> <p><a href="https://i.stack.imgur.com/oDHl1.png" rel="nofollow"><img src="https://i.stack.imgur.com/oDHl1.png" alt="enter image description here"></a></p> <p>In action:</p> <pre class="lang-none prettyprint-override"><code>M Mo Mon Monk Monke Monkey Monkey Monkey e Monkey ea Monkey eat Monkey eats Monkey eats Monkey eats b Monkey eats ban Monkey eats ban Monkey eats bana Monkey eats banan Monkey eats banana </code></pre>
0
2016-10-14T12:16:12Z
[ "python", "tkinter", "entry", "focusout" ]
Nicer command line parsing in Python
40,042,071
<p>Using argparse, I have created a small script that contains a command line parser for my analysis program, which is part of a self-made python package. It works perfectly, but I don't really like how it is controlled.</p> <p>This is how the code looks in the script itself:</p> <pre><code>def myAnalysis(): parser = argparse.ArgumentParser(description=''' lala''') parser.add_argument('-d', '--data',help='') parser.add_argument('-e', '--option_1', help='', default=False, required=False) parser.add_argument('-f', '--option_2', help='', default=False, required=False) # combine parsed arguments args = parser.parse_args() # code here </code></pre> <p>In addition to this, there is some more in the setup file of the analysis package:</p> <pre><code>entry_points={ 'console_scripts': [ 'py_analysis = edit.__main__:myAnalysis' ] </code></pre> <p>As I said, this works without any problems. To analyze some data I have to use</p> <pre><code>py_analysis --data path_to_data_file </code></pre> <p>Sometimes, I need some of the options. For this it may look like</p> <pre><code>py_analysis --data path_to_data_file --option_1 True --option_2 True </code></pre> <p>For my personal taste, this is kind of ugly. I would prefer something like</p> <pre><code>py_analysis path_to_data_file --option_1 --option_2 </code></pre> <p>I am pretty sure this is possible. I just don't know how.</p>
1
2016-10-14T11:28:47Z
40,042,224
<p>Use the <em>store_true</em> action:</p> <pre><code>parser.add_argument('-e', '--option_1', help='', default=False, action='store_true') </code></pre> <p>Then just adding <em>--option_1</em> to the command line will set its value to <em>True</em>.</p>
5
2016-10-14T11:36:26Z
[ "python", "argparse" ]
Nicer command line parse python
40,042,071
<p>Using argparse, I have created a small script that contains a command line parser for my analysis program, which is part of a self-made Python package. It works perfectly, but I don't really like how it is controlled.</p> <p>This is how the code looks in the script itself</p> <pre><code>def myAnalysis(): parser = argparse.ArgumentParser(description=''' lala''') parser.add_argument('-d', '--data',help='') parser.add_argument('-e', '--option_1', help='', default=False, required=False) parser.add_argument('-f', '--option_2', help='', default=False, required=False) # combine parsed arguments args = parser.parse_args()code here </code></pre> <p>In addition to this there is some more in the setup file of the analysis package</p> <pre><code>entry_points={ 'console_scripts': [ 'py_analysis = edit.__main__:myAnalysis' ] </code></pre> <p>As I said, this works without any problems. To analyze some data I have to use</p> <pre><code>py_analysis --data path_to_data_file </code></pre> <p>Sometimes, I need some of the options. For this it may look like</p> <pre><code>py_analysis --data path_to_data_file --option_1 True --option_2 True </code></pre> <p>In my personal taste, this is kind of ugly. I would prefer something like</p> <pre><code>py_analysis path_to_data_file --option_1 --option_2 </code></pre> <p>I am pretty sure this is possible. I just don't know how.</p>
1
2016-10-14T11:28:47Z
40,042,275
<p>To have a positional argument instead of an option, replace:</p> <pre><code>parser.add_argument('-d', '--data',help='') </code></pre> <p>by:</p> <pre><code>parser.add_argument('data_file', help='') </code></pre>
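<p>Putting both changes together gives exactly the invocation you asked for, <code>py_analysis path_to_data_file --option_1 --option_2</code>. A sketch, with the argument names taken from the question:</p> <pre><code>import argparse

parser = argparse.ArgumentParser(description='lala')
parser.add_argument('data_file', help='path to the data file')
parser.add_argument('-e', '--option_1', action='store_true')
parser.add_argument('-f', '--option_2', action='store_true')

args = parser.parse_args(['path_to_data_file', '--option_1', '--option_2'])
print(args.data_file, args.option_1, args.option_2)
# path_to_data_file True True
</code></pre>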
1
2016-10-14T11:39:15Z
[ "python", "argparse" ]
Apache virtualenv and mod_wsgi : ImportError : No module named 'django'
40,042,096
<p>I'm having issues running django and apache2/mod_wsgi. This is my current setup:</p> <pre><code>Ubuntu: 16.0 Apache: 2.4.18 Python: 3.5 Django: 1.10 </code></pre> <p>I have installed a virtualenv inside my django project for user 'carma'. Structure is:</p> <pre><code>/home/carma/mycarma |- manage.py static mycarma |__init__.py |settings.py |urls.py |wsgi.py mycarmanev bin include lib </code></pre> <p>This is the content of <strong>/etc/apache2/sites-available/000-default.conf</strong></p> <pre><code>&lt;VirtualHost *:80&gt; Alias /static /home/carma/mycarma/static &lt;Directory /home/carma/mycarma/static&gt; Require all granted &lt;/Directory&gt; &lt;Directory /home/carma/mycarma/mycarma&gt; &lt;Files wsgi.py&gt; Require all granted &lt;/Files&gt; &lt;/Directory&gt; WSGIDaemonProcess mycarma python-path=/home/carma/mycarma/ python-home=/home/carma/mycarma/mycarmavirtuale$ WSGIProcessGroup mycarma WSGIScriptAlias / /home/carma/mycarma/mycarma/wsgi.py </code></pre> <p></p> <p>This is the content of <strong>wsgi.py</strong></p> <pre><code>import os,sys from django.core.wsgi import get_wsgi_application DJANGO_PATH = os.path.join(os.path.abspath(os.path.dirname(__file__)), '..') sys.path.append(DJANGO_PATH) os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mycarma.settings") application = get_wsgi_application() </code></pre> <p>And I have already given permissions:</p> <pre><code>sudo chown -R www-data:www-data /home/carma/mycarma/mycarmaenv sudo chown -R www-data:www-data /home/carma/mycarma </code></pre> <p>The problem comes when I try to access the url of my server, checking the apache log this is the issue:</p> <pre><code>[wsgi:error] [pid 25183] mod_wsgi (pid=25183): Target WSGI script '/home/carma/mycarma/mycarma/wsgi.py' cannot be loaded as Python module. [wsgi:error] [pid 25183] mod_wsgi (pid=25183): Exception occurred processing WSGI script '/home/carma/mycarma/mycarma/wsgi.py'. [wsgi:error] [pid 25183] Traceback (most recent call last): [wsgi:error] [pid 25183] File "/home/carma/mycarma/mycarma/wsgi.py", line 12, in &lt;module&gt; [wsgi:error] [pid 25183] from django.core.wsgi import get_wsgi_application [wsgi:error] [pid 25183] ImportError: No module named 'django' </code></pre> <p>I have read all the possible discussions here and outside, found also <a href="http://stackoverflow.com/questions/38756969/apache-with-virtualenv-and-mod-wsgi-importerror-no-module-named-django">this thread</a> which expose exactly the same problem but nothing worked for me.</p> <p>Any help is appreciated thanks!</p>
1
2016-10-14T11:29:41Z
40,042,403
<p>I think it is a typo: your directory tree shows <code>mycarmanev</code>, your <code>chown</code> commands use <code>mycarmaenv</code>, and the Apache config points at the truncated <code>mycarmavirtuale$</code>. The <code>python-home</code> option must point at the directory where the virtualenv actually lives, e.g.:</p> <pre><code>WSGIDaemonProcess mycarma python-path=/home/carma/mycarma/ python-home=/home/carma/mycarma/mycarmaenv </code></pre>
3
2016-10-14T11:45:55Z
[ "python", "django", "apache", "mod-wsgi" ]
User defined legend in python
40,042,223
I have this plot in which some areas between curves are filled by definition. Is there any way to include them in the legend? Especially where those filled areas overlap and a new, different color appears. <p>Or is there a possibility to define an arbitrary legend regardless of the curves' data? <a href="https://i.stack.imgur.com/NtJ74.png" rel="nofollow"><img src="https://i.stack.imgur.com/NtJ74.png" alt="enter image description here"></a></p>
3
2016-10-14T11:36:22Z
40,066,329
<p>Using <code>fill_between</code> to plot your data will automatically include the filled area in the legend.</p> <p>To include the areas where the two datasets overlap, you can combine the legend handles from both datasets into a single legend handle.</p> <p>As pointed out in the comments, you can also define any arbitrary legend handle with a proxy.</p> <p>Finally, you can define exactly what handles and labels you want to appear in the legend, regardless of the data plotted in your graph.</p> <p>See the MWE below that illustrates the points stated above:</p> <pre><code>import matplotlib.pyplot as plt import numpy as np plt.close('all') # Generate some data: x = np.random.rand(50) y = np.arange(len(x)) # Plot data: fig, ax = plt.subplots(figsize=(11, 4)) fillA = ax.fill_between(y, x-0.25, 0.5, color='darkolivegreen', alpha=0.65, lw=0) fillB = ax.fill_between(y, x, 0.5, color='indianred', alpha=0.75, lw=0) linec, = ax.plot(y, np.zeros(len(y))+0.5, color='blue', lw=1.5) linea, = ax.plot(y, x, color='orange', lw=1.5) lineb, = ax.plot(y, x-0.25, color='black', lw=1.5) # Define an arbitrary legend handle with a proxy: rec1 = plt.Rectangle((0, 0), 1, 1, fc='blue', lw=0, alpha=0.25) # Generate the legend: handles = [linea, lineb, linec, fillA, fillB, (fillA, fillB), rec1, (fillA, fillB, rec1)] labels = ['a', 'b', 'c', 'A', 'B', 'A+B', 'C', 'A+B+C'] ax.legend(handles, labels, loc=2, ncol=4) ax.axis(ymin=-1, ymax=2) plt.show() </code></pre> <p><a href="https://i.stack.imgur.com/e3hqM.png" rel="nofollow"><img src="https://i.stack.imgur.com/e3hqM.png" alt="enter image description here"></a></p>
1
2016-10-16T02:51:03Z
[ "python", "matplotlib", "legend-properties" ]
User defined legend in python
40,042,223
I have this plot in which some areas between curves are filled by definition. Is there any way to include them in the legend? Especially where those filled areas overlap and a new, different color appears. <p>Or is there a possibility to define an arbitrary legend regardless of the curves' data? <a href="https://i.stack.imgur.com/NtJ74.png" rel="nofollow"><img src="https://i.stack.imgur.com/NtJ74.png" alt="enter image description here"></a></p>
3
2016-10-14T11:36:22Z
40,069,079
<p>Yes, you are absolutely right ian_itor, tacaswell and Jean-Sébastien: a user-defined legend seems to be the only solution. In addition, I gave the proxy lines a larger <em>linewidth</em> so those areas are distinguishable from the curves, and playing with <em>alpha</em> got the right color.</p> <pre><code>handles, labels = ax.get_legend_handles_labels() display = (0,1,2,3,4) overlap_1 = plt.Line2D((0,1),(0,0), color='firebrick', linestyle='-', linewidth=15, alpha=0.85) overlap_2 = plt.Line2D((0,1),(0,0), color='darkolivegreen', linestyle='-', linewidth=15, alpha=0.65) overlap_3 = plt.Line2D((0,1),(0,0), color='indianred', linestyle='-', linewidth=15, alpha=0.75) ax.legend([handle for i,handle in enumerate(handles) if i in display]+[overlap_1, overlap_2, overlap_3], [label for i,label in enumerate(labels) if i in display]+['D','F','G']) </code></pre> <p><a href="https://i.stack.imgur.com/WeOw1.png" rel="nofollow"><img src="https://i.stack.imgur.com/WeOw1.png" alt="enter image description here"></a></p>
0
2016-10-16T10:04:33Z
[ "python", "matplotlib", "legend-properties" ]
'type' object is not iterable in using django_enums
40,042,345
<p>I tried to use <a href="https://pypi.python.org/pypi/django-enums/" rel="nofollow">django_enums</a> and got an error with <strong>list(cls)</strong> when making migrations:</p> <pre><code>(venv:SFS)rita@rita-notebook:~/Serpentarium/ServiceForServices/project/serviceforservices$ python manage.py makemigrations Traceback (most recent call last): File "manage.py", line 22, in &lt;module&gt; execute_from_command_line(sys.argv) File "/home/rita/Serpentarium/ServiceForServices/venv/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 367, in execute_from_command_line utility.execute() File "/home/rita/Serpentarium/ServiceForServices/venv/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 341, in execute django.setup() File "/home/rita/Serpentarium/ServiceForServices/venv/local/lib/python2.7/site-packages/django/__init__.py", line 27, in setup apps.populate(settings.INSTALLED_APPS) File "/home/rita/Serpentarium/ServiceForServices/venv/local/lib/python2.7/site-packages/django/apps/registry.py", line 108, in populate app_config.import_models(all_models) File "/home/rita/Serpentarium/ServiceForServices/venv/local/lib/python2.7/site-packages/django/apps/config.py", line 199, in import_models self.models_module = import_module(models_module_name) File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module __import__(name) File "/home/rita/Serpentarium/ServiceForServices/project/serviceforservices/service/models.py", line 84, in &lt;module&gt; class EmployeesStatus(models.Model): File "/home/rita/Serpentarium/ServiceForServices/project/serviceforservices/service/models.py", line 86, in EmployeesStatus status = enum.EnumField(StatusEnum, default=StatusEnum.AT) File "/home/rita/Serpentarium/ServiceForServices/venv/local/lib/python2.7/site-packages/django_enums/enum.py", line 53, in __init__ kwargs['max_length'] = self.enum.get_max_length() File "/home/rita/Serpentarium/ServiceForServices/venv/local/lib/python2.7/site-packages/django_enums/enum.py", line 39, in get_max_length return len(max(list(cls), key=(lambda x: len(x.key))).key) TypeError: 'type' object is not iterable </code></pre> <p>I installed django-enums, enum and six:</p> <pre><code>(venv:SFS)rita@rita-notebook:~/Serpentarium/ServiceForServices/project/serviceforservices$ pip install django-enums (venv:SFS)rita@rita-notebook:~/Serpentarium/ServiceForServices/project/serviceforservices$ pip install enum (venv:SFS)rita@rita-notebook:~/Serpentarium/ServiceForServices/project/serviceforservices$ pip install six </code></pre> <p>Using in Models.py:</p> <pre><code>... from django_enums import enum ... class StatusEnum(enum.Enum): __order__ = 'AT BT BC AC' # for python 2 AT = (u'АВ', u'Активен временно') BT = (u'БВ', u'Блокирован временно') BC = (u'БП', u'Блокирован постоянно') AC = (u'АП', u'Активен постоянно') class EmployeesStatus(models.Model): name = models.CharField(max_length = 128) status = enum.EnumField(StatusEnum, default=StatusEnum.AT) </code></pre> <p>It seems like the project is alive; it was also updated for Django 1.10 and is allegedly compatible with Python 2. So, what am I doing wrong?</p>
2
2016-10-14T11:42:58Z
40,046,073
<p>It looks like you are trying to use the backported enum from the 3.4 stdlib, but you installed <code>enum</code> -- you need to install <code>enum34</code>.</p>
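<p>A quick way to verify the fix after uninstalling <code>enum</code> and installing <code>enum34</code>: with the backport, iterating the class (which <code>django-enums</code> does internally via <code>list(cls)</code>) works, while the old <code>enum</code> package raises exactly the <code>TypeError</code> from your traceback. A minimal check:</p> <pre><code>from enum import Enum  # resolves to the enum34 backport on Python 2

class StatusEnum(Enum):
    AT = 'AT'
    BT = 'BT'

print(list(StatusEnum))  # [&lt;StatusEnum.AT: 'AT'&gt;, &lt;StatusEnum.BT: 'BT'&gt;]
</code></pre>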
1
2016-10-14T14:49:37Z
[ "python", "django", "enums" ]
Two Hours Mongodb Aggregation
40,042,406
<p>Here is my sample Data:</p> <pre><code>{ "_id": { "$oid": "5654a8f0d487dd1434571a6e" }, "ValidationDate": { "$date": "2015-11-24T13:06:19.363Z" }, "DataRaw": " WL 00100100012015-08-28 02:44:17+0000+ 16.81 8.879 1084.00", "ReadingsAreValid": true, "locationID": " WL 001", "Readings": { "pH": { "value": 8.879 }, "SensoreDate": { "value": { "$date": "2015-08-28T02:44:17.000Z" } }, "temperature": { "value": 16.81 }, "Conductivity": { "value": 1084 } }, "HMAC":"ecb98d73fcb34ce2c5bbcc9c1265c8ca939f639d791a1de0f6275e2d0d71a801" </code></pre> <p>}</p> <p>I am trying to group average values by two hours interval and have the following aggregation query.</p> <pre><code>Query = [{"$unwind":"$Readings"}, {'$group' : { "_id": { "year": { "$year": "$Readings.SensoreDate.value" }, "dayOfYear": { "$dayOfYear": "$Readings.SensoreDate.value" }, "interval": { "$subtract": [ { "$hour": "$Readings.SensoreDate.value"}, { "$mod": [{ "$hour": "$Readings.SensoreDate.value"},2]} ] } }}, 'AverageTemp' : { '$avg' : '$Readings.temperature.value'}, "AveragePH": {"$avg" : "$Readings.pH.value"}, "AverageConduc": {"$avg" : "$Readings.Conductivity.value"}} , {"$limit":10}] </code></pre> <p>This gives me an error saying <code>A pipeline stage specification object must contain exactly one field.</code> and I have done all research but can't get the desired results.</p>
1
2016-10-14T11:46:09Z
40,042,606
<p>After some formatting, your present aggregation pipeline looks like:</p> <pre><code>Query = [ { "$unwind": "$Readings" }, { '$group' : { "_id": { "year": { "$year": "$Readings.SensoreDate.value" }, "dayOfYear": { "$dayOfYear": "$Readings.SensoreDate.value" }, "interval": { "$subtract": [ { "$hour": "$Readings.SensoreDate.value"}, { "$mod": [ { "$hour": "$Readings.SensoreDate.value" }, 2 ] } ] } } }, 'AverageTemp' : { '$avg' : '$Readings.temperature.value' }, "AveragePH": { "$avg" : "$Readings.pH.value" }, "AverageConduc": { "$avg" : "$Readings.Conductivity.value" } }, { "$limit": 10 } ] </code></pre> <p>with which mongo is complaining </p> <blockquote> <p>A pipeline stage specification object must contain exactly one field.</p> </blockquote> <p>because it's failing to recognise the misplaced fields</p> <pre><code>'AverageTemp' : { '$avg' : '$Readings.temperature.value' }, "AveragePH": { "$avg" : "$Readings.pH.value" }, "AverageConduc": { "$avg" : "$Readings.Conductivity.value" } </code></pre> <p>A correct pipeline should have these fields within the <strong><code>$group</code></strong> pipeline stage, so a working pipeline follows:</p> <pre><code>Query = [ { "$unwind": "$Readings" }, { "$group" : { "_id": { "year": { "$year": "$Readings.SensoreDate.value" }, "dayOfYear": { "$dayOfYear": "$Readings.SensoreDate.value" }, "interval": { "$subtract": [ { "$hour": "$Readings.SensoreDate.value"}, { "$mod": [ { "$hour": "$Readings.SensoreDate.value" }, 2 ] } ] } }, "AverageTemp" : { "$avg" : "$Readings.temperature.value" }, "AveragePH": { "$avg" : "$Readings.pH.value" }, "AverageConduc": { "$avg" : "$Readings.Conductivity.value" } } }, { "$limit": 10 } ] </code></pre>
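<p>The corrected pipeline can then be run from Python with pymongo, roughly like this (the connection defaults and the database/collection names below are placeholders):</p> <pre><code>from pymongo import MongoClient

client = MongoClient()                   # default localhost:27017
collection = client['mydb']['readings']  # substitute your db/collection

for doc in collection.aggregate(Query):
    print(doc)
</code></pre>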
1
2016-10-14T11:56:30Z
[ "javascript", "python", "arrays", "mongodb", "pymongo-3.x" ]
python regular expressions using the re module can you write a regex which looks inside the result of another regex?
40,042,524
<p>I want to write a regular expression which pulls out the following from between (but not including) <code>&lt;p&gt;</code> and <code>&lt;/p&gt;</code> tags:</p> <p>it begins with the word "Cash", has some stuff I don't want, then a number such as $336,008.</p> <p>From:</p> <blockquote> <p><code>&lt;p&gt;Cash nbs $13,000&lt;/p&gt;</code></p> </blockquote> <p><strong>I want "Cash" and, if available (maybe 0 or nothing), the number.</strong></p> <p>What I have so far gets me everything inside of the <code>&lt;p&gt;</code> tags, including the tags themselves.</p> <pre><code>\&lt;p\&gt;(|.*Cash.*|.*Total.*\$\d+.*)\&lt;\/p\&gt; </code></pre>
0
2016-10-14T11:52:23Z
40,043,001
<p>If you're using <code>re.findall</code>, just put <code>()</code> around the part you want to capture:</p> <pre><code>&gt;&gt;&gt; import re &gt;&gt;&gt; test = '&lt;p&gt;Cash nbs $13,000&lt;/p&gt;' &gt;&gt;&gt; re.findall(r'&lt;p&gt;(Cash).+\$[\d,]+&lt;/p&gt;', test) ['Cash'] </code></pre>
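<p>If you also want the dollar amount when it is present, one hedged variant uses an optional second capture group; group 2 comes back as <code>None</code> when there is no number:</p> <pre><code>import re

test = '&lt;p&gt;Cash nbs $13,000&lt;/p&gt;'
# group 1: the word; group 2: the dollar amount, if any
m = re.search(r'&lt;p&gt;(Cash)\b[^&lt;]*?(\$[\d,]+)?&lt;/p&gt;', test)
if m:
    print(m.group(1), m.group(2))  # Cash $13,000
</code></pre>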
0
2016-10-14T12:19:22Z
[ "python", "regex" ]
Efficiently change order of numpy array
40,042,573
<p>I have a 3-dimensional numpy array. The dimensions can go up to 128 x 64 x 8192. What I want to do is change the order along the first dimension by interchanging elements pairwise. </p> <p>The only idea I had so far is to create a list of the indices in the correct order.</p> <pre><code>order = [1,0,3,2...127,126] data_new = data[order] </code></pre> <p>I fear that this is not very efficient, but I have no better idea so far.</p>
2
2016-10-14T11:54:34Z
40,042,659
<p>You could reshape to split the first axis into two axes, such that latter of those axes is of length <code>2</code> and then flip the array along that axis with <code>[::-1]</code> and finally reshape back to original shape.</p> <p>Thus, we would have an implementation like so -</p> <pre><code>a.reshape(-1,2,*a.shape[1:])[:,::-1].reshape(a.shape) </code></pre> <p>Sample run -</p> <pre><code>In [170]: a = np.random.randint(0,9,(6,3)) In [171]: order = [1,0,3,2,5,4] In [172]: a[order] Out[172]: array([[0, 8, 5], [4, 5, 6], [0, 0, 2], [7, 3, 8], [1, 6, 3], [2, 4, 4]]) In [173]: a.reshape(-1,2,*a.shape[1:])[:,::-1].reshape(a.shape) Out[173]: array([[0, 8, 5], [4, 5, 6], [0, 0, 2], [7, 3, 8], [1, 6, 3], [2, 4, 4]]) </code></pre> <p>Alternatively, if you are looking to efficiently create those constantly flipping indices <code>order</code>, we could do something like this -</p> <pre><code>order = np.arange(data.shape[0]).reshape(-1,2)[:,::-1].ravel() </code></pre>
4
2016-10-14T11:59:12Z
[ "python", "numpy" ]
why windll.user32.GetWindowThreadProcessID can't find the function?
40,042,596
<p>I'm reading <em>Black Hat Python</em> and in chapter 8 I find that "user32.GetWindowThreadProcessID(hwnd,byref(pid))" doesn't work, just like the picture shows.</p> <p>It seems that Python can't find <em>GetWindowThreadProcessID</em>, but it can find <em>GetForegroundWindow</em>, which is also exported from user32.dll.</p> <p>I also tried "windll.LoadLibrary("user32.dll")", but it still doesn't work.</p> <p>Thank you!</p>
0
2016-10-14T11:55:55Z
40,043,277
<p>The function exists in every user32.dll since Windows 2000, but its exported name is <code>GetWindowThreadProcessId</code>, with a lowercase <code>d</code> at the end. ctypes attribute lookup passes the name verbatim to <code>GetProcAddress</code>, so it is case-sensitive and <code>GetWindowThreadProcessID</code> cannot be resolved. With the exact spelling it works:</p> <pre><code>import ctypes import ctypes.wintypes pid = ctypes.wintypes.DWORD() hwnd = ctypes.windll.user32.GetForegroundWindow() print( ctypes.windll.user32.GetWindowThreadProcessId(hwnd, ctypes.byref(pid)) ) print( pid.value ) </code></pre>
0
2016-10-14T12:33:28Z
[ "python" ]
pythonic way of initialization of dict in a for loop
40,042,615
<p>Is there a Pythonic way of initializing a dictionary?</p> <pre><code>animals = ["dog","cat","cow"] for x in animals: primers_pos[x]={} </code></pre> <p>Is there something like</p> <pre><code>(primers_pos[x]={} for x in animals) </code></pre>
0
2016-10-14T11:56:55Z
40,042,640
<p>You can use a dictionary comprehension (supported in Python 2.7+):</p> <pre><code>&gt;&gt;&gt; animals = ["dog", "cat", "cow"] &gt;&gt;&gt; {x: {} for x in animals} {'dog': {}, 'cow': {}, 'cat': {}} </code></pre>
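<p>One caveat worth knowing: <code>dict.fromkeys(animals, {})</code> looks equivalent, but it shares a <em>single</em> dict object across all keys, which is why the comprehension is the right tool here:</p> <pre><code>animals = ["dog", "cat", "cow"]

d = dict.fromkeys(animals, {})   # one shared inner dict!
d['dog']['legs'] = 4
print(d['cat'])                  # {'legs': 4} -- probably not what you want

d = {x: {} for x in animals}     # fresh dict per key
d['dog']['legs'] = 4
print(d['cat'])                  # {}
</code></pre>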
2
2016-10-14T11:58:05Z
[ "python", "dictionary" ]
pythonic way of initialization of dict in a for loop
40,042,615
<p>Is there a Pythonic way of initializing a dictionary?</p> <pre><code>animals = ["dog","cat","cow"] for x in animals: primers_pos[x]={} </code></pre> <p>Is there something like</p> <pre><code>(primers_pos[x]={} for x in animals) </code></pre>
0
2016-10-14T11:56:55Z
40,042,838
<p>You may also use <em>collections.defaultdict</em>:</p> <pre><code>from collections import defaultdict primers_pos = defaultdict(dict) </code></pre> <p>Then whenever you reference <em>primers_pos</em> with a new key, a dictionary is created automatically:</p> <pre><code>primers_pos['cat']['Fluffy'] = 'Nice kitty' </code></pre> <p>This single line creates <em>{'Fluffy': 'Nice kitty'}</em> as the value of key 'cat' in the dictionary <em>primers_pos</em>.</p>
0
2016-10-14T12:09:37Z
[ "python", "dictionary" ]
How to convert an string with array form to an array?
40,042,628
<p>I got a string in this form:</p> <pre><code>payload = ["Text 1", "Text 2"] </code></pre> <p>I want to use <code>Text 2</code> as an object. How can I return it? </p> <p><strong>UPDATE</strong> I'm making a function which returns a <a href="https://developers.facebook.com/docs/messenger-platform/send-api-reference/receipt-template" rel="nofollow">generic template</a> for the Facebook API. The first <code>payload</code> works well, but I want to return a <code>string</code> and an <code>object</code> in the second <code>payload</code> (result):</p> <pre><code>button = [ { "type": "postback", "title": "Buy item", "payload": "Buy this item: " + (product['id']) }, { "type": "postback", "title": "Add to wishlist", "payload": result } ] </code></pre> <p>My second payload should look like this:</p> <pre><code>payload = { 'Order_item', product['title'] } </code></pre> <p>I got the error <code>[buttons][1][payload] must be a UTF-8 encoded string</code>, so I converted it, and it returns a <strong>STRING</strong> in this form: <code>["Order_item", "Ledverlichting Fiets Blauw"]</code></p> <p>I want that when a Facebook user clicks on a postback (Add to wishlist), the <code>product['title']</code> value is saved in the Django database. And <code>product['title']</code> is the <code>Text 2</code> in the question above.</p>
0
2016-10-14T11:57:26Z
40,042,717
<p>You need to split the string, then trim, and keep splitting/trimming to get all the parts you want into a list:</p> <pre><code>s = 'payload = ["Text 1", "Text 2"]' # take the part after '=' and drop the surrounding brackets items = s.split("=", 1)[1].strip()[1:-1] # then trim the quotes off each element l = [item.strip()[1:-1] for item in items.split(",")] </code></pre> <p>Then you have a list of strings that you can iterate, take the n-th item of, and so on:</p> <pre><code>the_item = l[1] return the_item </code></pre>
3
2016-10-14T12:02:13Z
[ "python", "arrays", "django", "string" ]
How to convert an string with array form to an array?
40,042,628
<p>I got a string in this form:</p> <pre><code>payload = ["Text 1", "Text 2"] </code></pre> <p>I want to use <code>Text 2</code> as an object. How can I return it? </p> <p><strong>UPDATE</strong> I'm making a function which returns a <a href="https://developers.facebook.com/docs/messenger-platform/send-api-reference/receipt-template" rel="nofollow">generic template</a> for the Facebook API. The first <code>payload</code> works well, but I want to return a <code>string</code> and an <code>object</code> in the second <code>payload</code> (result):</p> <pre><code>button = [ { "type": "postback", "title": "Buy item", "payload": "Buy this item: " + (product['id']) }, { "type": "postback", "title": "Add to wishlist", "payload": result } ] </code></pre> <p>My second payload should look like this:</p> <pre><code>payload = { 'Order_item', product['title'] } </code></pre> <p>I got the error <code>[buttons][1][payload] must be a UTF-8 encoded string</code>, so I converted it, and it returns a <strong>STRING</strong> in this form: <code>["Order_item", "Ledverlichting Fiets Blauw"]</code></p> <p>I want that when a Facebook user clicks on a postback (Add to wishlist), the <code>product['title']</code> value is saved in the Django database. And <code>product['title']</code> is the <code>Text 2</code> in the question above.</p>
0
2016-10-14T11:57:26Z
40,042,735
<p>You can try:</p> <pre><code>&gt;&gt;&gt; payload = ["t1", "t2"] &gt;&gt;&gt; t2 = [1,2] &gt;&gt;&gt; eval(payload[1]) [1, 2] </code></pre>
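<p>A hedged aside: <code>eval</code> will execute arbitrary code, so if the string can come from outside your program, <code>ast.literal_eval</code> from the standard library is the safer way to parse a list literal:</p> <pre><code>import ast

s = 'payload = ["Text 1", "Text 2"]'
# safely evaluate just the list literal on the right of '='
payload = ast.literal_eval(s.split('=', 1)[1].strip())
print(payload[1])  # Text 2
</code></pre>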
0
2016-10-14T12:02:58Z
[ "python", "arrays", "django", "string" ]
How to convert an string with array form to an array?
40,042,628
<p>I got a string in this form:</p> <pre><code>payload = ["Text 1", "Text 2"] </code></pre> <p>I want to use <code>Text 2</code> as an object. How can I return it? </p> <p><strong>UPDATE</strong> I'm making a function which returns a <a href="https://developers.facebook.com/docs/messenger-platform/send-api-reference/receipt-template" rel="nofollow">generic template</a> for the Facebook API. The first <code>payload</code> works well, but I want to return a <code>string</code> and an <code>object</code> in the second <code>payload</code> (result):</p> <pre><code>button = [ { "type": "postback", "title": "Buy item", "payload": "Buy this item: " + (product['id']) }, { "type": "postback", "title": "Add to wishlist", "payload": result } ] </code></pre> <p>My second payload should look like this:</p> <pre><code>payload = { 'Order_item', product['title'] } </code></pre> <p>I got the error <code>[buttons][1][payload] must be a UTF-8 encoded string</code>, so I converted it, and it returns a <strong>STRING</strong> in this form: <code>["Order_item", "Ledverlichting Fiets Blauw"]</code></p> <p>I want that when a Facebook user clicks on a postback (Add to wishlist), the <code>product['title']</code> value is saved in the Django database. And <code>product['title']</code> is the <code>Text 2</code> in the question above.</p>
0
2016-10-14T11:57:26Z
40,042,936
<p>Assuming that you want a string object, you can get the specific index in the list (first spot = 0, second spot = 1, etc.).</p> <p>You can save the string object contained in the list like this:</p> <pre><code>payload = ["Text 1", "Text 2"] s = payload[1] </code></pre> <p>and then you can return the <code>s</code> object.</p>
1
2016-10-14T12:15:52Z
[ "python", "arrays", "django", "string" ]
How to convert an string with array form to an array?
40,042,628
<p>I got a string in this form:</p> <pre><code>payload = ["Text 1", "Text 2"] </code></pre> <p>I want to use <code>Text 2</code> as an object. How can I return it? </p> <p><strong>UPDATE</strong> I'm making a function which returns a <a href="https://developers.facebook.com/docs/messenger-platform/send-api-reference/receipt-template" rel="nofollow">generic template</a> for the Facebook API. The first <code>payload</code> works well, but I want to return a <code>string</code> and an <code>object</code> in the second <code>payload</code> (result):</p> <pre><code>button = [ { "type": "postback", "title": "Buy item", "payload": "Buy this item: " + (product['id']) }, { "type": "postback", "title": "Add to wishlist", "payload": result } ] </code></pre> <p>My second payload should look like this:</p> <pre><code>payload = { 'Order_item', product['title'] } </code></pre> <p>I got the error <code>[buttons][1][payload] must be a UTF-8 encoded string</code>, so I converted it, and it returns a <strong>STRING</strong> in this form: <code>["Order_item", "Ledverlichting Fiets Blauw"]</code></p> <p>I want that when a Facebook user clicks on a postback (Add to wishlist), the <code>product['title']</code> value is saved in the Django database. And <code>product['title']</code> is the <code>Text 2</code> in the question above.</p>
0
2016-10-14T11:57:26Z
40,087,443
<p>I found another way that should work as well.</p> <p>I edited my <code>payload</code> to:</p> <pre><code>payload = { 'text': 'Order_item', 'title': product['title'] } </code></pre> <p>Then I use:</p> <p><code>title = json.loads(message['postback']['payload'])['title']</code></p> <p>(Note that <code>json.loads</code> must decode the whole payload string first; only then can the <code>'title'</code> key be indexed.) This way I always get the <code>title</code> value and have no problem with unordered JSON.</p>
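<p>A sketch of the full round trip, assuming the <code>product</code> and <code>message</code> structures from the question (the stand-in values below are invented for illustration): serialize the dict with <code>json.dumps</code> when building the button, since Facebook requires the payload to be a string, then decode it with <code>json.loads</code> when the postback arrives.</p> <pre><code>import json

product = {'title': 'Ledverlichting Fiets Blauw'}  # sample data

button = {
    "type": "postback",
    "title": "Add to wishlist",
    "payload": json.dumps({"text": "Order_item", "title": product["title"]}),
}

# later, when Facebook sends the postback back:
message = {'postback': {'payload': button['payload']}}  # stand-in for webhook data
title = json.loads(message['postback']['payload'])['title']
print(title)  # Ledverlichting Fiets Blauw
</code></pre>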
0
2016-10-17T13:17:08Z
[ "python", "arrays", "django", "string" ]
Trouble while providing webservices in django python to android
40,042,939
<p>I am new to Django 1.10 and Python.</p> <p>I have to provide web services for an app, so I created a web service in Python's Django framework. The web service works properly for <code>iOS</code>, but I get an issue when handling requests from <code>android</code>. For dealing with web services in Android, I am using the <code>Volley</code> library.</p> <p>The following error occurs: <code>Error Code 500 Internal Server Error</code></p> <p>So I am unable to POST data from the Android end...</p> <p>For the web service I am using the following code:</p> <p><code>views.py</code></p> <pre><code>from django.http import HttpResponseRedirect, HttpResponse from django.views.decorators import csrf from django.views.decorators.csrf import csrf_protect, csrf_exempt from django.db import IntegrityError, connection from django.views.decorators.cache import cache_control from django.core.files.images import get_image_dimensions import json from json import loads, dump from models import Gym @csrf_exempt def gym_register_web(request): data = json.loads(request.body) gN = str(data['gym_name']) gPh = str(data['gym_phone']) gE = str(data['gym_email']) gL = str(data['gym_landmark']) gAdd = str(data['gym_address']) exE = Gym.objects.filter(gym_email = gE) if exE: status = 'failed' msg = 'EmailId already exist' responseMsg = '{\n "status" : "'+status+'",\n "msg" : "'+msg+'"\n}' return HttpResponse(responseMsg) else: gymI = Gym(gym_name = gN, gym_phone = gPh, gym_email = gE, gym_area = gL, gym_address = gAdd) gymI.save() data = Gym.objects.get(gym_email = gE) status = 'success' dataType = 'gym_id' val = str(data.gym_id) responseMsg = '{\n "status" : "'+status+'",\n "'+dataType+'" : "'+val+'"\n}' return HttpResponse(responseMsg) </code></pre> <p><code>urls.py</code></p> <pre><code>from django.conf.urls import url, include from . import views from django.views.decorators.csrf import csrf_protect, csrf_exempt admin.autodiscover() urlpatterns=[ url(r'^gymRegister/$', views.gym_register_web), ]+ static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT) </code></pre> <p>EDIT: <code>models.py</code></p> <pre><code>from django.db import models from django.contrib import admin class Gym(models.Model): gym_id = models.AutoField(primary_key = True) gym_name = models.CharField(max_length = 100, null=True,default = None ) gym_email = models.CharField(max_length = 100, null=True,default = None ) gym_phone = models.BigIntegerField(null=True,default = None ) gym_area = models.TextField(max_length = 255, null=True,default = None ) gym_address = models.TextField(max_length = 255, null=True,default = None ) gym_latitude = models.CharField(max_length = 100, null=True,default = None ) gym_longitude = models.CharField(max_length = 100, null=True,default = None ) gym_status = models.IntegerField(null=True,default = None ) gym_website = models.CharField(max_length = 255,null=True,default = None ) gym_ladies_special = models.IntegerField(null=True,default = None ) </code></pre> <p>I tested the web service on <code>Advance REST Client</code> and it provides the desired output, and I would like to remind once again that the web service works properly for <code>iOS</code>.</p> <p>So, where should I improve my code...?</p> <p>Thanks in Advance :)</p> <p>EDIT:</p> <p>My Android developer is trying to send data in a JSON object manner</p> <pre><code>{ "key1"="val1", "key2"="val2"} </code></pre> <p>instead of sending it in a JSON array (key:value) manner</p> <pre><code>{ "key1" : "val1", "key2" : "val2" } </code></pre> <p>How can I fetch data sent in <code>object</code> format...</p> <p>ThankYou</p>
0
2016-10-14T12:16:02Z
40,043,335
<ol> <li>Make sure that in your 'request.body' you are getting a proper dictionary which has all the keys you access below. If your 'data' variable is missing any of those keys, it will raise a KeyError, and you didn't handle KeyError in your code. </li> </ol> <p>If the above does not work, then show me your models.py.</p>
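<p>A hedged sketch of that idea: wrap the parsing and key access, and return a 400 with the reason instead of letting the view crash with an opaque 500 (much easier to debug from the Android side). This is only one way to structure it:</p> <pre><code>import json
from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt

@csrf_exempt
def gym_register_web(request):
    try:
        data = json.loads(request.body)
        name = data['gym_name']
        phone = data['gym_phone']
    except (ValueError, KeyError) as exc:
        # tell the client what went wrong instead of a bare 500
        return JsonResponse({'status': 'failed', 'msg': str(exc)}, status=400)
    # ... continue with the existing Gym lookup/creation ...
    return JsonResponse({'status': 'success', 'gym_name': name, 'gym_phone': phone})
</code></pre>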
0
2016-10-14T12:36:12Z
[ "android", "python", "web-services", "django-1.10" ]
Error at my python code
40,042,984
<p>When I run this code:</p> <pre><code>import math print "welcome !This program will solve your quadratic equation" A = input("inter a =") B = input("inter b =") C = input("inter c =") J=math.pow(B,2)-4*A*C X1=-B+math.sqrt(J)/2*A X2=-B-(math.sqrt(J))/(2*A) print "your roots is :" , X1 , X2 </code></pre> <hr> <p>it gives me an error. Why? Thank you.</p>
-8
2016-10-14T12:18:43Z
40,043,165
<p>Your <code>print</code> calls are missing <code>()</code>, which is a syntax error under Python 3. Also, in Python 3 <code>input()</code> returns a string, so the values must be converted to numbers before doing math with them. Fixed version:</p> <pre><code>import math print("welcome! This program will solve your quadratic equation") A = float(input("enter a =")) B = float(input("enter b =")) C = float(input("enter c =")) J = math.pow(B, 2) - 4*A*C X1 = (-B + math.sqrt(J)) / (2*A) X2 = (-B - math.sqrt(J)) / (2*A) print("your roots are:", X1, X2) </code></pre> <p>I also added parentheses in <code>X1</code> and <code>X2</code> so both roots follow the quadratic formula. Note that <code>math.sqrt(J)</code> still raises a <code>ValueError</code> when the discriminant <code>J</code> is negative; another answer to this question shows how to handle that case with <code>cmath</code>.</p> <p>P.S. A suggestion from me: look at the error message the default Python interpreter prints, and re-read the code, before giving up and asking questions.</p>
1
2016-10-14T12:28:24Z
[ "python", "pycharm" ]
Error at my python code
40,042,984
<p>When I run this code:</p> <pre><code>import math print "welcome !This program will solve your quadratic equation" A = input("inter a =") B = input("inter b =") C = input("inter c =") J=math.pow(B,2)-4*A*C X1=-B+math.sqrt(J)/2*A X2=-B-(math.sqrt(J))/(2*A) print "your roots is :" , X1 , X2 </code></pre> <hr> <p>it gives me an error. Why? Thank you.</p>
-8
2016-10-14T12:18:43Z
40,048,398
<p>Without any information about which kind of error your program raises, I think the problem is linked to the function <strong>math.sqrt</strong>, which raises a <strong>ValueError</strong> when the discriminant is negative because it cannot return complex roots.</p> <p>So I suggest you use <strong>cmath</strong> in this way (note the added parentheses so both roots follow the quadratic formula):</p> <pre><code>import cmath, math print "welcome! This program will solve your quadratic equation" A = input("enter a =") B = input("enter b =") C = input("enter c =") J = math.pow(B, 2) - 4*A*C X1 = (-B + cmath.sqrt(J)) / (2*A) X2 = (-B - cmath.sqrt(J)) / (2*A) print "your roots are:", X1, X2 </code></pre> <p>Furthermore, I suggest you implement some checks on the input, because the user could insert a <strong>string</strong> or a <strong>null</strong> value (as @Ted Klein Bergman suggests).</p> <p>If this doesn't solve your problem, please edit your question and add more details about the error.</p>
0
2016-10-14T16:58:58Z
[ "python", "pycharm" ]
AWS S3 download file from Flask
40,043,018
<p>I have created a small app that should download a file from AWS S3. </p> <p>I can download the data correctly in this way:</p> <pre><code> s3_client = boto3.resource('s3') req = s3_client.meta.client.download_file(bucket, ob_key, dest) </code></pre> <p>but if I add this function to a Flask route it does not work anymore. I obtain this error:</p> <p>ClientError: An error occurred (400) when calling the HeadObject operation: Bad Request</p> <p>I'm not able to figure out why it does not work inside the route. Any idea?</p>
-1
2016-10-14T12:20:09Z
40,043,126
<p>That is related to your AWS region. Mention the region name as an added parameter.</p> <p>Try it on your local machine, using </p> <pre><code>aws s3 cp s3://bucket-name/file.png file.png --region us-east-1 </code></pre> <p>If you are able to download the file using this command, then it should work fine from your API also.</p>
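<p>In boto3 the same fix looks roughly like this; <code>us-east-1</code> below is only an example, use the region your bucket actually lives in:</p> <pre><code>import boto3

# pin the client to the bucket's region explicitly
s3 = boto3.client('s3', region_name='us-east-1')
s3.download_file('bucket-name', 'file.png', '/tmp/file.png')
</code></pre>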
0
2016-10-14T12:26:10Z
[ "python", "amazon-s3", "flask", "boto3" ]
AWS S3 download file from Flask
40,043,018
<p>I have created a small app that should download a file from AWS S3. </p> <p>I can download the data correctly in this way:</p> <pre><code> s3_client = boto3.resource('s3') req = s3_client.meta.client.download_file(bucket, ob_key, dest) </code></pre> <p>but if I add this function to a Flask route it does not work anymore. I obtain this error:</p> <p>ClientError: An error occurred (400) when calling the HeadObject operation: Bad Request</p> <p>I'm not able to figure out why it does not work inside the route. Any idea?</p>
-1
2016-10-14T12:20:09Z
40,044,758
<p>The problem was that with Flask I needed to declare s3_client as a global variable instead of just inside the function. </p> <p>Now it works perfectly!</p>
-1
2016-10-14T13:45:15Z
[ "python", "amazon-s3", "flask", "boto3" ]
Unable to install modules using 'pip' or 'easy_install' on Mac
40,043,052
<p>Default Python installation is 2.7.x on Mac. (currently running El Capitan)</p> <p>I've changed the default to 3.4.5 (mandatory for my course).</p> <p>I was instructed by my professor to use MacPorts, and it requires an SSL certificate to download libraries, so we are using the following code to bypass it:</p> <pre><code>#from https://dnaeon.github.io/disable-python-ssl-verification/ import nltk import ssl try: _create_unverified_https_context = ssl._create_unverified_context except AttributeError: # Legacy Python that doesn't verify HTTPS certificates by default pass else: # Handle target environment that doesn't support HTTPS verification ssl._create_default_https_context = _create_unverified_https_context #download all nltk data nltk.download('all') </code></pre> <p>Everything installed using MacPorts works just fine, but most of the libraries are not available on MacPorts, so when I try to use pip3, it simply downloads the library but it never works. For example:</p> <p>In Terminal, <a href="https://i.stack.imgur.com/DfRKJ.png" rel="nofollow"><img src="https://i.stack.imgur.com/DfRKJ.png" alt="enter image description here"></a></p> <p>In Python Shell, <a href="https://i.stack.imgur.com/3f9ga.png" rel="nofollow"><img src="https://i.stack.imgur.com/3f9ga.png" alt="enter image description here"></a></p> <p>If I use 'pip' instead of 'pip3', it installs libraries for python 2.7.x. I've tried the same process with tweepy and twython, but it doesn't work after installation. How to solve this issue?</p>
0
2016-10-14T12:22:31Z
40,043,892
<p>Try executing the following command:</p> <pre><code>$ sudo python3 -m pip install textblob </code></pre> <p>[EDIT] Alternatively, clone the repo and install it with the same interpreter you run your code with:</p> <pre><code>$ git clone https://github.com/sloria/TextBlob.git $ cd TextBlob/ $ sudo python3 setup.py install </code></pre> <p>For more details, see <a href="http://textblob.readthedocs.io/en/dev/install.html" rel="nofollow">the install docs</a>.</p>
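<p>If the import still fails afterwards, a quick sanity check is to confirm that <code>pip3</code> and the interpreter you launch actually agree. Run this in the Python shell that is failing:</p> <pre><code>import sys
print(sys.executable)     # which python3 binary this shell is running

import textblob           # assumes the install above succeeded
print(textblob.__file__)  # which site-packages it was imported from
</code></pre> <p>If the two paths belong to different Python installations (e.g. the python.org one vs. the MacPorts one), that mismatch is the usual reason an installed module "never works".</p>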
2
2016-10-14T13:03:22Z
[ "python", "osx", "python-3.x", "pip" ]
python - subtracting ranges from bigger ranges
40,043,103
<p>The problem I have is that I basically would like to find if there are any free subnets between a BGP aggregate-address (ex: 10.76.32.0 255.255.240.0) and all the network commands on the same router (ex: 10.76.32.0 255.255.255.0, 10.76.33.0 255.255.255.0)</p> <p>In the above example 10.76.34.0 -> 10.76.47.255 would be free.</p> <p>I'm thinking of tackling this problem by converting the IP addresses and subnet masks to binary and subtracting that way.</p> <p>To keep it simple I will keep this example in decimal but doing this would leave me with the following problem: let's say I have a range from 1 to 250, I subtract from this a smaller range that goes from 20 to 23, I would like to end up with a range from 1 to 19 and 24 to 250.</p> <p>Using the range command doesn't really give me the expected results and while I could possibly create a list with every item in the range and subtract another list with a sub-set of items, it seems to me that it might not be a good idea to have lists with possibly tens of thousands of elements.</p> <p>Hunor</p>
0
2016-10-14T12:25:00Z
40,043,669
<p>If you are trying to create a "range" with a gap in it, i.e., with 1-19 and 24-250, you could try to use <a href="https://docs.python.org/3/library/itertools.html#itertools.filterfalse" rel="nofollow"><code>filterfalse</code></a> (or <a href="https://docs.python.org/2.7/library/itertools.html#itertools.ifilterfalse" rel="nofollow"><code>ifilterfalse</code></a> if you are using Python 2.X) from the <a href="https://docs.python.org/3/library/itertools.html#module-itertools" rel="nofollow"><code>itertools</code></a> module, which takes as its arguments a predicate and a sequence, and returns elements of the sequence where the predicate returns <code>False</code>. As an example, if you do:</p> <pre><code>from itertools import filterfalse new_range = filterfalse(lambda x: 20 &lt;= x &lt;= 23, range(1,251)) </code></pre> <p><code>new_range</code> will be an iterable containing the numbers 1-19 and 24-250, which can be used similarly to <code>range()</code>:</p> <pre><code>for i in new_range: do_things() </code></pre>
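<p>Since the underlying problem is really about IP subnets, the standard-library <code>ipaddress</code> module can do the subtraction directly, without materializing per-address lists. A sketch using the example from the question (it assumes Python 3.7+ for <code>subnet_of</code>, and that each used network sits entirely inside the aggregate):</p> <pre><code>import ipaddress

aggregate = ipaddress.ip_network('10.76.32.0/20')
used = [ipaddress.ip_network('10.76.32.0/24'),
        ipaddress.ip_network('10.76.33.0/24')]

free = [aggregate]
for net in used:
    # carve each used network out of whichever free chunk contains it
    free = [piece
            for chunk in free
            for piece in (chunk.address_exclude(net)
                          if net.subnet_of(chunk) else [chunk])]

for net in sorted(free):
    print(net)
# 10.76.34.0/23
# 10.76.36.0/22
# 10.76.40.0/21   (i.e. 10.76.34.0 - 10.76.47.255 is free)
</code></pre>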
1
2016-10-14T12:52:24Z
[ "python", "networking", "range", "subnet" ]
Parsing old tweets and putting their attributes into a Pandas dataframe
40,043,112
<p>I am trying to pull a selection of old tweets and then put various attributes into a Pandas dataframe. My code is below. However, when I run the final line, I get a dataframe that only contains attribute data for one tweet rather than for all of them in my selection.</p> <p>Can someone tell me where my error is? How do I get each tweet into the dataframe as a separate row?</p> <pre><code>import got3 #Need this to get older tweets tweetCriteria = got3.manager.TweetCriteria() tweetCriteria.setQuerySearch("Kentucky Derby") tweetCriteria.setSince("2016-05-07") tweetCriteria.setUntil("2016-05-08") tweetCriteria.setMaxTweets(1000) #Here I set the parameters for the old tweets I am seeking TweetCriteria = got3.manager.TweetCriteria() KYDerby_tweets = got3.manager.TweetManager.getTweets(tweetCriteria) #Named my list of tweets for x in KYDerby_tweets: Text = x.text Retweets = x.retweets Favorites = x.favorites Date = x.date Id = x.id #created variables for each of the attributes I want in the dataframe df = pd.DataFrame(DataSet) df #result here is a 1x6 dataframe containing just one tweet, rather than the 1000x6 dataframe I want, which would contain all of the tweets from my selection... </code></pre>
0
2016-10-14T12:25:19Z
40,047,140
<p>After the clarification from your comment, this should work (each row list must line up one-to-one with the column names):</p> <pre><code># make a list of lists that contain the data # pandas considers a list of lists to be a list of rows data_set = [[x.id, x.date, x.text, x.favorites, x.retweets] for x in KYDerby_tweets] df = pd.DataFrame(data=data_set, columns=["Id", "Date", "Text", "Favorites", "Retweets"]) </code></pre>
0
2016-10-14T15:42:21Z
[ "python", "pandas", "tweepy" ]
How do I minimize my Tkinter window whilst I run a batch file?
40,043,144
<p>I have a Tkinter window with a button. This button, on click, runs a batch file which in turn runs a test suite. This is working fine so far. But I want the Tkinter window to minimize itself when I click this button and restore itself when the batch file execution is completed. I don't know how to handle this event. Help.</p> <p>Script:</p> <pre><code>import Tkinter import subprocess top=Tkinter.Tk() def callBatchFile(): filepath=r"C:\TestFolder\ChangeManagementBatchFile.bat" p=subprocess.Popen(filepath,shell=True,stdout=subprocess.PIPE) stdout,stderr=p.communicate() print p.returncode button=Tkinter.Button(top,text="Execute",command=callBatchFile) button.pack() top.mainloop() </code></pre>
0
2016-10-14T12:27:11Z
40,043,184
<pre><code>top.wm_state('iconic') </code></pre> <p>should minimize your window.</p>
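<p>A fuller sketch of the minimize-run-restore cycle the question asks for (written with Python 3 naming; on Python 2 use <code>import Tkinter as tk</code>). Note that <code>communicate()</code> blocks the GUI thread, which is acceptable here precisely because the window is iconified while the batch file runs:</p> <pre><code>import subprocess
import tkinter as tk

top = tk.Tk()

def call_batch_file():
    top.wm_state('iconic')      # minimize (equivalent to top.iconify())
    top.update()                # make sure the window is drawn minimized
    filepath = r"C:\TestFolder\ChangeManagementBatchFile.bat"
    p = subprocess.Popen(filepath, shell=True, stdout=subprocess.PIPE)
    p.communicate()             # wait for the batch file to finish
    top.deiconify()             # restore the window when done

tk.Button(top, text="Execute", command=call_batch_file).pack()
top.mainloop()
</code></pre>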
1
2016-10-14T12:29:17Z
[ "python", "button", "tkinter", "onclick", "minimize" ]
Python Pandas - multiple arithmetic operation in a single dataframe
40,043,182
<p>I am analysing a stock market data and I was able to get only the open, high, low, close, and volume. Now I wanted to calculate the percentage increase for each day using Pandas. The following is my dataframe:</p> <pre><code>&gt;&gt;&gt; df.head() date open high low close volume 0.0 Aug 18, 2016 1,250.00 1,294.85 1,250.00 1,293.25 1,312,905 1.0 Aug 17, 2016 1,240.00 1,275.00 1,235.05 1,243.85 1,704,985 2.0 Aug 16, 2016 1,297.00 1,297.95 1,206.65 1,237.10 3,054,180 3.0 Aug 12, 2016 1,406.25 1,406.25 1,176.75 1,276.40 8,882,899 4.0 Aug 11, 2016 1,511.85 1,584.50 1,475.00 1,580.00 1,610,322 </code></pre> <p>Then I needed the previous days close, so I used the <code>shift</code> method and is as follows:</p> <pre><code>&gt;&gt;&gt; df['pre_close'] = df['close'].shift(-1) &gt;&gt;&gt; df.head() date open high low close volume \ 0.0 Aug 18, 2016 1,250.00 1,294.85 1,250.00 1,293.25 1,312,905 1.0 Aug 17, 2016 1,240.00 1,275.00 1,235.05 1,243.85 1,704,985 2.0 Aug 16, 2016 1,297.00 1,297.95 1,206.65 1,237.10 3,054,180 3.0 Aug 12, 2016 1,406.25 1,406.25 1,176.75 1,276.40 8,882,899 4.0 Aug 11, 2016 1,511.85 1,584.50 1,475.00 1,580.00 1,610,322 pre_close 0.0 1,243.85 1.0 1,237.10 2.0 1,276.40 3.0 1,580.00 4.0 1,510.05 </code></pre> <p>Now I wanted to calculate the percentage increase for each day, but all my data was in string so I replaced the <code>commas</code> with <code>''</code> and is as follows:</p> <pre><code>&gt;&gt;&gt; df.dtypes date object open object high object low object close object volume object tomm_close object dtype: object &gt;&gt;&gt; df = df.replace({',': ''}, regex=True) </code></pre> <p>Now my main problem starts, I wanted to do the following arithmetic operation:</p> <pre><code>% increase = (New Number - Original Number) ÷ Original Number × 100. </code></pre> <p>And to do arithmetic operations we need to to have float data type and I wrote a code which converts the data type and calculate the profit, and is as follows:</p> <pre><code>&gt;&gt;&gt; df['per']=((df['close'].astype(float) \ .sub(df['pre_close'].astype(float), axis=0)) \ .div(df['close'].astype(float),axis=0)) \ .mul(float(100)) &gt;&gt;&gt; df.head() date open high low close volume pre_close \ 0.0 Aug 18 2016 1250.00 1294.85 1250.00 1293.25 1312905 1243.85 1.0 Aug 17 2016 1240.00 1275.00 1235.05 1243.85 1704985 1237.10 2.0 Aug 16 2016 1297.00 1297.95 1206.65 1237.10 3054180 1276.40 3.0 Aug 12 2016 1406.25 1406.25 1176.75 1276.40 8882899 1580.00 4.0 Aug 11 2016 1511.85 1584.50 1475.00 1580.00 1610322 1510.05 per 0.0 3.819834 1.0 0.542670 2.0 -3.176784 3.0 -23.785647 4.0 4.427215 </code></pre> <p>My code is working correctly, but my doubt is is there any better way than this? Am I doing the type conversion correctly and is that the correct way of using multiple arithmetic operations for a single operation? Thanks for the help.</p>
0
2016-10-14T12:29:13Z
40,043,345
<p>There is a <code>pct_change()</code> function for calculating the percent change between current day and previous day, which you can use (note the <code>NA</code> here is due to the fact that I have access to only five rows of your data):</p> <pre><code>df['per'] = (df.close.replace({',':''}, regex=True).astype(float) .pct_change().shift(-1) * 100) </code></pre> <p><a href="https://i.stack.imgur.com/yzXzQ.png" rel="nofollow"><img src="https://i.stack.imgur.com/yzXzQ.png" alt="enter image description here"></a></p>
4
2016-10-14T12:36:55Z
[ "python", "pandas" ]
dict to list, and compare lists python
40,043,225
<p>I have made a function where I count how many times each word is used in a file, that is, the word frequency. Right now the function can calculate the sum of all words, and show me the seven most common words and how many times they are used. Now I want to compare my first file, where I have analyzed the word frequency, with another file where I have the most common words used in the English language, and I want to compare those words with the words I have in my first file to see if any of the words match. </p> <p>What I have come up with is to make lists of the two files and then compare them with each other. But the code I wrote for this doesn't give me any output; any idea on how I can solve this?</p> <pre><code>def CountWords(): filename = input('What is the name of the textfile you want to open?: ') if filename == "alice" or "alice-ch1.txt" or " ": file = open("alice-ch1.txt","r") print('You want to open alice-ch1.txt') wordcount = {} for word in file.read().split(): if word not in wordcount: wordcount[word] = 1 else: wordcount[word] += 1 wordcount = {k.lower(): v for k, v in wordcount.items() } print (wordcount) sum = 0 for val in wordcount.values(): sum += val print ('The total amount of words in Alice adventures in wonderland: ' + str(sum)) sortList = sorted(wordcount.values(), reverse = True) most_freq_7 = sortList[0:7] #print (most_freq_7) print ('Totoro says: The 7 most common words in Alice Adventures in Wonderland:') print(list(wordcount.keys())[list(wordcount.values()).index(most_freq_7[0])] + " " + str(most_freq_7[0])) print(list(wordcount.keys())[list(wordcount.values()).index(most_freq_7[1])] + " " + str(most_freq_7[1])) print(list(wordcount.keys())[list(wordcount.values()).index(most_freq_7[2])] + " " + str(most_freq_7[2])) print(list(wordcount.keys())[list(wordcount.values()).index(most_freq_7[3])] + " " + str(most_freq_7[3])) print(list(wordcount.keys())[list(wordcount.values()).index(most_freq_7[4])] + " " + str(most_freq_7[4])) print(list(wordcount.keys())[list(wordcount.values()).index(most_freq_7[5])] + " " + str(most_freq_7[5])) print(list(wordcount.keys())[list(wordcount.values()).index(most_freq_7[6])] + " " + str(most_freq_7[6])) file_common = open("common-words.txt", "r") commonwords = [] contents = file_common.readlines() for i in range(len(contents)): commonwords.append(contents[i].strip('\n')) print(commonwords) #From here's the code where I need to find out how to compare the lists: alice_keys = wordcount.keys() result = set(filter(set(alice_keys).__contains__, commonwords)) newlist = list() for elm in alice_keys: if elm not in result: newlist.append(elm) print('Here are the similar words: ' + str(newlist)) #Why doesn't show? else: print ('I am sorry, that filename does not exist. Please try again.') </code></pre>
0
2016-10-14T12:31:04Z
40,043,503
<p>I'm not in front of an interpreter, so my code might be slightly off, but try something more like this (file names taken from your question):</p> <pre><code>from collections import Counter with open("alice-ch1.txt") as f_file: counter = Counter(f_file.read().lower().split()) top_seven = counter.most_common(7) with open("common-words.txt") as f_common: common_words = f_common.read().split() for word, count in top_seven: if word in common_words: print("your word " + word + " is in the most common words! It appeared " + str(count) + " times!") </code></pre>
0
2016-10-14T12:44:56Z
[ "python", "list", "dictionary", "key" ]
How to count number of variables in a dictionary
40,043,235
<p>This is my current code to append different times to the same name id in a dict.</p> <blockquote> <pre><code>for name, date, time ,price in reader(f): group_dict[name].append([time]) </code></pre> </blockquote> <p>My question is: how do I count the number of 'time' entries I have for each key in the dict? </p> <p>An example of the output is</p> <pre><code> {(name1): [time1, time2, time3]} </code></pre> <p>and I am trying to find a way to include the count '3' inside.</p>
-3
2016-10-14T12:31:28Z
40,043,637
<pre><code>for key, value in yourDict.iteritems(): print len(value) </code></pre> <p>If you want to include the count in your structure, try:</p> <pre><code>for key, value in yourDict.iteritems(): yourDict[key] = (value, len(value)) </code></pre> <p>The code is for Python 2; for Python 3 use <code>items()</code> instead of <code>iteritems()</code> (and <code>print(len(value))</code>).</p>
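<p>If you would rather keep the counts in their own dict instead of mutating the original, a compact variant (the sample data below is invented for illustration):</p> <pre><code>group_dict = {'name1': ['time1', 'time2', 'time3']}

counts = {name: len(times) for name, times in group_dict.items()}
print(counts)  # {'name1': 3}
</code></pre>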
2
2016-10-14T12:51:14Z
[ "python", "csv", "count" ]
How to count number of variables in a dictionary
40,043,235
<p>This is my current code to append different times to the same name id in a dict.</p> <blockquote> <pre><code>for name, date, time ,price in reader(f): group_dict[name].append([time]) </code></pre> </blockquote> <p>My question is: how do I count the number of 'time' entries I have for each key in the dict? </p> <p>An example of the output is</p> <pre><code> {(name1): [time1, time2, time3]} </code></pre> <p>and I am trying to find a way to include the count '3' inside.</p>
-3
2016-10-14T12:31:28Z
40,044,837
<p>You can accomplish what you want by reassigning your key's value to include the length of its value:</p> <pre><code>for key, value in yourDict.items(): # Change to .iteritems() if using Python 2.x yourDict[key] = value + [len(value)] </code></pre> <p>Or, you could do this using dictionary comprehension:</p> <pre><code>yourDict = {key:value+[len(value)] for key, value in yourDict.items()} # Python 3.x yourDict = {key:value+[len(value)] for key, value in yourDict.iteritems()} # Python 2.x </code></pre>
0
2016-10-14T13:49:17Z
[ "python", "csv", "count" ]