Unable to loop thru JSON array inside an object
39,886,460
<p>This is how my JSON return object looks like:</p> <pre><code>{ u'Policy': u'{ "Version": "2012-10-17", "Statement": [{ "Sid": "xxxxx", "Effect": "Allow", "Principal": { "Service": "cloudtrail.amazonaws.com" }, "Action": "s3:GetBucketAcl", "Resource": "arn:aws:s3:::xxxxx" }, { "Sid": "yyyyyyy", "Effect": "Allow", "Principal": { "Service": "cloudtrail.amazonaws.com" }, "Action": "s3:PutObject", "Resource": "arn:aws:s3:::xxxxx/AWSLogs/00000000/*", "Condition": { "StringEquals": { "s3:x-amz-acl": "bucket-owner-full-control" } } }] }', 'ResponseMetadata': { 'HTTPHeaders': { 'content-length': '479', 'content-type': 'application/json', 'date': 'Thu, 06Oct201602: 07: 48GMT', 'server': 'AmazonS3', 'x-amz-id-2': 'xxxxxxx', 'x-amz-request-id': 'xxxxxxx' }, 'HTTPStatusCode': 200, 'HostId': 'xxxxxxx', 'RequestId': 'xxxxxxx', 'RetryAttempts': 0 } } </code></pre> <p>This is how I am trying to loop thru Statements:</p> <pre><code>for stmt in bucket_policy['Policy']['Statement']: print stmt </code></pre> <p>This is the error I am getting:</p> <pre><code>TypeError: string indices must be integers </code></pre> <p>Is it because the return data is Unicode or something wrong with the way I am looping?</p>
0
2016-10-06T02:13:49Z
39,886,498
<p>You should parse the JSON string stored under <code>Policy</code> into a dictionary with</p> <pre><code>import json policy = json.loads(bucket_policy['Policy']) </code></pre> <p>Then loop:</p> <pre><code>for stmt in policy['Statement']: print stmt </code></pre> <p>Note that <code>json.loads</code> parses a string, while <code>json.load</code> reads from a file object.</p>
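A minimal runnable sketch of this approach (Python 3 syntax; the `bucket_policy` dict below is a trimmed stand-in for the real boto3 response):

```python
import json

# Trimmed stand-in for the boto3 response: note the "Policy" value is a
# JSON *string*, not a nested dict.
bucket_policy = {
    "Policy": '{"Version": "2012-10-17", "Statement":'
              ' [{"Sid": "xxxxx", "Effect": "Allow"}]}'
}

policy = json.loads(bucket_policy["Policy"])  # parse the inner string
statements = policy["Statement"]
for stmt in statements:
    print(stmt)
```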
0
2016-10-06T02:20:38Z
[ "python", "json" ]
39,886,506
<p>The way it's shown in your question, <code>bucket_policy['Policy']</code> contains a string, not a dictionary.</p> <p>Either define it as a dictionary (i.e. get rid of <code>u'</code> at the beginning and <code>'</code> at the end) or parse the JSON with <a href="https://docs.python.org/3/library/json.html#json.loads" rel="nofollow"><code>json.loads</code></a>.</p>
0
2016-10-06T02:21:35Z
can't get multi-select field name when post (no choice)
39,886,466
<p>html:</p> <pre><code>&lt;form method="post"&gt;{% module xsrf_form_html() %} &lt;input name="username" type="text"&gt; &lt;select name="ss" multiple&gt; &lt;option value="1"&gt;hello&lt;/option&gt; &lt;option value="2"&gt;word&lt;/option&gt; &lt;/select&gt; &lt;button type="submit"&gt;提交&lt;/button&gt; &lt;/form&gt; </code></pre> <p>tornado:</p> <pre><code>class TestMultiSelectEmptyPost(BaseHandler): def get(self, *args, **kwargs): self.render('multi-select-empty-post.html') def post(self, *args, **kwargs): print self.request.arguments </code></pre> <p>browser: <a href="http://i.stack.imgur.com/i6MLo.png" rel="nofollow">enter image description here</a></p> <p>server return:</p> <pre><code>{'username': ['aaaa'], '_xsrf': ['xxx|xxx|xxx|xxx']} </code></pre> <p>multi-select field name "ss" is missing</p>
0
2016-10-06T02:14:54Z
39,886,874
<p>To determine whether the user selected <strong>none</strong> of the options, do:</p> <pre><code>ss = self.request.arguments.get("ss") </code></pre> <p>The <code>get</code> method returns <code>None</code> if there is no value. So,</p> <pre><code>if ss is None: print("User selected nothing") else: print("ss = %s" % ss) </code></pre>
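Tornado's <code>request.arguments</code> behaves like a dict mapping field names to lists of values, so the check can be sketched with a plain dict (stand-in values, no Tornado required):

```python
# Stand-in for Tornado's request.arguments: a dict mapping field names to
# lists of values; a field the browser never sent is simply absent.
arguments = {"username": ["aaaa"], "_xsrf": ["xxx"]}

ss = arguments.get("ss")  # None when the multi-select had no selection
if ss is None:
    result = "User selected nothing"
else:
    result = "ss = %s" % ss
```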
0
2016-10-06T03:04:46Z
[ "python", "html", "tornado", "multi-select" ]
39,915,109
<p>Instead of parsing <code>request.arguments</code>, I recommend using the <a href="http://www.tornadoweb.org/en/stable/web.html#tornado.web.RequestHandler.get_argument" rel="nofollow"><code>get_argument</code></a> method (or <code>get_arguments</code>, which returns a list, for multi-valued fields such as a multi-select).</p>
0
2016-10-07T10:28:05Z
Using PyCharm, Can't Seem to Import Pyperclip module
39,886,467
<p>When I type <code>pip install pyperclip</code>, I get:</p> <pre><code>Collecting pyperclip Using cached pyperclip-1.5.27.zip Installing collected packages: pyperclip Running setup.py install for pyperclip ... error Complete output from command /usr/bin/python -u -c "import setuptools, tokenize;__file__='/private/var/folders/70/2rxbr83d37xgqtk215y88p_m0000gn/T/pip-build-0mgGiQ/pyperclip/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /var/folders/70/2rxbr83d37xgqtk215y88p_m0000gn/T/pip-7v2hjv-record/install-record.txt --single-version-externally-managed --compile: running install running build running build_py creating build creating build/lib creating build/lib/pyperclip copying pyperclip/__init__.py -&gt; build/lib/pyperclip copying pyperclip/clipboards.py -&gt; build/lib/pyperclip copying pyperclip/exceptions.py -&gt; build/lib/pyperclip copying pyperclip/windows.py -&gt; build/lib/pyperclip running install_lib creating /Library/Python/2.7/site-packages/pyperclip error: could not create '/Library/Python/2.7/site-packages/pyperclip': Permission denied Command "/usr/bin/python -u -c "import setuptools, tokenize;__file__='/private/var/folders/70/2rxbr83d37xgqtk215y88p_m0000gn/T/pip-build-0mgGiQ/pyperclip/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /var/folders/70/2rxbr83d37xgqtk215y88p_m0000gn/T/pip-7v2hjv-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /private/var/folders/70/2rxbr83d37xgqtk215y88p_m0000gn/T/pip-build-0mgGiQ/pyperclip/ </code></pre>
0
2016-10-06T02:14:56Z
39,886,534
<p>You are not able to create the necessary directories. Either do this:</p> <pre><code>sudo pip install pyperclip </code></pre> <p>or, the better and preferred solution, install <code>virtualenv</code>. Plenty of SO examples; here's a link:</p> <p><a href="http://docs.python-guide.org/en/latest/dev/virtualenvs/" rel="nofollow">http://docs.python-guide.org/en/latest/dev/virtualenvs/</a></p>
1
2016-10-06T02:24:47Z
[ "python", "pyperclip" ]
Coordinate transformations from a randomly generated normal vector
39,886,503
<p>I'm trying to randomly generate coordinate transformations for a fitting routine I'm writing in python. I want to rotate my data (a bunch of [x,y,z] coordinates) about the origin, ideally using a bunch of randomly generated normal vectors I've already created to define planes -- I just want to shift each plane I've defined so that it lies in the z=0 plane.</p> <p>Here's a snippet of my code that should take care of things once I have my transformation matrix. I'm just not sure how to get my transformation matrix from my normal vector and if I need something more complicated than numpy for this.</p> <pre><code>import matplotlib as plt import numpy as np import math origin = np.array([35,35,35]) normal = np.array([np.random.uniform(-1,1),np.random.uniform(-1,1),np.random.uniform(0,1)]) mag = np.sum(np.multiply(normal,normal)) normal = normal/mag a = normal[0] b = normal[1] c = normal[2] #I know this is not the right transformation matrix but I'm not sure what is... #Looking for the steps that will take me from the normal vector to this transformation matrix rotation = np.array([[a, 0, 0], [0, b, 0], [0, 0, c]]) #Here v would be a datapoint I'm trying to shift? v=(test_x,test_y,test_z) s = np.subtract(v,origin) #shift points in the plane so that the center of rotation is at the origin so = np.multiply(rotation,s) #apply the rotation about the origin vo = np.add(so,origin) #shift again so the origin goes back to the desired center of rotation x_new = vo[0] y_new = vo[1] z_new = vo[2] fig = plt.figure(figsize=(9,9)) plt3d = fig.gca(projection='3d') plt3d.scatter(x_new, y_new, z_new, s=50, c='g', edgecolor='none') </code></pre>
0
2016-10-06T02:20:56Z
39,920,799
<p>I think you have a wrong concept of rotation matrices. Rotation matrices describe a rotation by a certain angle and in general do not have a diagonal structure. </p> <p>If you imagine every rotation as a composition of a rotation around the X axis, then around the Y axis, then around the Z axis, you can build each matrix and compose the final rotation as a product of matrices</p> <pre><code>R = Rz*Ry*Rx Rotated_item = R*original_item </code></pre> <p>or, with NumPy arrays,</p> <pre><code>Rotated_item = np.dot(R, original_item) </code></pre> <p>(note that <code>np.multiply</code> is element-wise and would not perform a matrix product). In this formula Rx is the first rotation applied. <br>Be aware that</p> <ul> <li>you can obtain a given rotation by composing many different sets of 3 rotations </li> <li>the sequence is not fixed: it could be X-Y-Z or Z-Y-X or Z-X-Z or any other combination, and the angle values change as the sequence changes</li> <li>it is "dangerous" to use these matrices at critical angle values (90-180-270-360 degrees)</li> </ul> <p>How to compose each single rotation matrix around one axis? See <a href="https://wikimedia.org/api/rest_v1/media/math/render/svg/a6821937d5031de282a190f75312353c970aa2df" rel="nofollow">this image from wikipedia</a>. NumPy has everything you need. </p> <p>Now you just have to define 3 angle values. Of course you can derive 3 angle values from a random normalized vector (a,b,c) as you write in your question, but rotation is a process that transforms one vector into another. Maybe you have to specify something like <em>"I want to find the rotation R around the origin that transforms (0,0,1) into (a,b,c)".</em> A completely different rotation R' is the one that transforms (1,0,0) into (a,b,c). </p>
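A minimal pure-Python sketch of this composition (no NumPy, matrices as nested lists) that also checks the defining properties of a rotation; the angle values are arbitrary:

```python
import math

# Single-axis rotation matrices, as on the Wikipedia page linked above.
def rot_x(t):
    c, s = math.cos(t), math.sin(t)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(m, v):
    return [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]

# R = Rz*Ry*Rx: Rx is the first rotation applied.
R = matmul(rot_z(0.3), matmul(rot_y(0.2), rot_x(0.1)))

# A rotation preserves length ...
v = apply(R, [0.0, 0.0, 1.0])
length = math.sqrt(sum(x * x for x in v))

# ... and is orthogonal: R^T * R should be the identity matrix.
Rt = [[R[j][i] for j in range(3)] for i in range(3)]
RtR = matmul(Rt, R)
```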
1
2016-10-07T15:24:22Z
[ "python", "3d", "coordinate-transformation", "rotational-matrices" ]
39,924,357
<p>Thanks to the people over in the math stack exchange, I have an answer that works. But note that it would not work if you also needed to perform a translation, which I didn't because I'm defining my planes by a normal vector and a point, and the normal vector changes but the point does not. Here's what worked for me.</p> <pre><code>import matplotlib as plt import numpy as np import math def unit_vector(vector): """ Returns the unit vector of the vector. """ return vector / np.linalg.norm(vector) cen_x, cen_y, cen_z = 35.112, 35.112, 35.112 origin = np.array([[cen_x,cen_y,cen_z]]) z_plane_norm = np.array([1,1,0]) z_plane_norm = unit_vector(z_plane_norm) normal = np.array([np.random.uniform(-1,1),np.random.uniform(-1,1),np.random.uniform(0,1)]) normal = unit_vector(normal) a1 = normal[0] b1 = normal[1] c1 = normal[2] rot = np.matrix([[b1/math.sqrt(a1**2+b1**2), -1*a1/math.sqrt(a1**2+b1**2), 0], [a1*c1/math.sqrt(a1**2+b1**2), b1*c1/math.sqrt(a1**2+b1**2), -1*math.sqrt(a1**2+b1**2)], [a1, b1, c1]]) init = np.matrix(normal) fin = rot*init.T fin = np.array(fin) # equation for a plane is a*x+b*y+c*z+d=0 where [a,b,c] is the normal # so calculate d from the normal d1 = -origin.dot(normal) # create x,y xx, yy = np.meshgrid(np.arange(cen_x-0.5,cen_x+0.5,0.05),np.arange(cen_y-0.5,cen_y+0.5,0.05)) # calculate corresponding z z1 = (-a1 * xx - b1 * yy - d1) * 1./c1 #------------- a2 = fin[0][0] b2 = fin[1][0] c2 = fin[2][0] d2 = -origin.dot(fin) d2 = np.array(d2) d2 = d2[0][0] z2 = (-a2 * xx - b2 * yy - d2) * 1./c2 #------------- # plot the surface fig = plt.figure(figsize=(9,9)) plt3d = fig.gca(projection='3d') plt3d.plot_surface(xx, yy, z1, color='r', alpha=0.5, label = "original") plt3d.plot_surface(xx, yy, z2, color='b', alpha=0.5, label = "rotated") plt3d.set_xlabel('X (Mpc)') plt3d.set_ylabel('Y (Mpc)') plt3d.set_zlabel('Z (Mpc)') plt.show() </code></pre> <p>If you do need to perform a translation as well, see the full answer I worked off of <a 
href="http://math.stackexchange.com/questions/1956699/getting-a-transformation-matrix-from-a-normal-vector">here</a>.</p>
0
2016-10-07T19:14:32Z
Python 3.x - Cannot fix error: "slice indices must be integers or None or have an __index__ method" + more
39,886,505
<p>I'm working on a class that replicates the Sieve of Eratosthenes and have been getting the error message given in the title. Below is my code, more questions follow.</p> <pre><code> class Sieve: def __init__(self, max): if max &lt; 0: raise RuntimeError else: self.numbers = [([False] * 2) + ([True] * (max - 1))] def findPrimes(self): for i in self.numbers: if self.numbers: for j in self.numbers[i:]: if j % i == 0: self.numbers[j] = False else: None else: None def howMany(self): ##Must use reduce reduce((lambda i: True if numbers else False), self.numbers, self.numbers[2:]) def toList(self): T = [L[i] for i in self.numbers] return print(self.numbers) </code></pre> <p>So in the function findPrimes(self), specifically on line 14, I get the error message described in the title. What exactly is the source of the problem?</p> <p>Additionally I want to ask a few questions about my methods. In findPrimes, I'm trying to visit each element of the list. If it is true, I want to take the index number (i) and visit all other indexes that are multiples of i (i, i+i, i+i+i, etc). I haven't tested this completely yet, but I feel like my syntax is wrong. Any help is appreciated.</p> <p>Finally, I need to use a reduce function in the method howMany in order to determine how many elements of the list are true, and return that number. Again, any knowledge on the topic is greatly appreciated.</p> <p>Thank you everyone :)</p>
0
2016-10-06T02:21:22Z
39,886,752
<p>In Python, <code>for i in self.numbers</code> works like Java's enhanced for loop: the loop iterates over the list, and <code>i</code> is the list entry itself (in your case, <code>True</code> / <code>False</code>), not the index. Using that entry as a slice index is what triggers the error.</p> <p>If you want to loop over indices, use <code>for i in range(len(self.numbers))</code>; <code>i</code> will then run from <code>0</code> to <code>len(self.numbers) - 1</code>.</p>
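A small sketch contrasting the index-based loop with the more idiomatic `enumerate()`, which yields index and entry at once:

```python
lister = ['apple', 'ApPle', 'orange']

# Index-based loop, as suggested above: i runs 0 .. len-1.
by_index = [lister[i] for i in range(len(lister))]

# enumerate() yields (index, entry) pairs directly.
by_enumerate = [(i, item) for i, item in enumerate(lister)]
```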
1
2016-10-06T02:50:44Z
[ "python", "list", "class", "syntax" ]
Variables in jinja2 template imported by flask duplicated
39,886,535
<p>I try to use Flask to scan all the file in a specific folder and build the url links to those files automatically, so first I defined a app route in Flask:</p> <pre><code>homepath = os.getcwd() # the root path of the app @app.route('/&lt;folder&gt;/') def showList(folder): folder_abs_path = homepath + '/static/'+folder files = os.listdir(folder_abs_path) return render_template('blog_list.html', files=files, folderName=folder) </code></pre> <p>And create a jinja2 template like:</p> <pre><code>{% extends "base.html" %} {% block body %} &lt;div id="main-contents"&gt; &lt;ul&gt; {% for item in files %} &lt;li&gt;&lt;a href="{{ folderName +'/'+ item }}"&gt; {{ item }}&lt;/a&gt;&lt;/li&gt; {% endfor %} &lt;/ul&gt; &lt;/div&gt; {% endblock %} </code></pre> <p>When flask is running,I typed:</p> <pre><code>http://localhost:5000/test/ </code></pre> <p>it does work, list all the files in my test folder(which is "file1.md" and "file2.md"), but it doesn't ceate the proper url links for the files, when I clicked the file1.md in the local web page, it directed to a url like:</p> <pre><code>http://localhost:5000/test/test/file1.md </code></pre> <p>What I want is "<a href="http://localhost:5000/test/file1.md" rel="nofollow">http://localhost:5000/test/file1.md</a>", so why there two "test" folder name?</p>
0
2016-10-06T02:24:57Z
39,906,538
<p>As @Klaus said in the comment, you are generating relative paths for the static files. To avoid that, just use the <a href="http://flask.pocoo.org/docs/0.11/api/#flask.url_for" rel="nofollow">url_for()</a> function in the template to generate the path:</p> <pre><code>{% extends "base.html" %} {% block body %} &lt;div id="main-contents"&gt; &lt;ul&gt; {% for item in files %} &lt;li&gt;&lt;a href="{{ url_for('static', filename=folderName + '/' + item) }}"&gt; {{ item }}&lt;/a&gt;&lt;/li&gt; {% endfor %} &lt;/ul&gt; &lt;/div&gt; {% endblock %} </code></pre>
0
2016-10-06T22:24:03Z
[ "python", "templates", "flask", "jinja2" ]
Converting python lists to pd.DataFrame
39,886,556
<p>If I have 3 python lists items:</p> <pre><code>list1 = [1,2,3] list2 = ['a','b','c'] list3 = ['I',"II","III"] </code></pre> <p>And my goal is to build a pd.DataFrame using list1 as index and the rest as columns:</p> <pre><code> list2 list3 1 "a" "I" 2 "b" "II" 3 "c" "III" </code></pre> <p>What would be the most efficient way to do it? Most of the similar questions I found here are joining <em>list of lists</em> or <em>dictionary</em>, but my data only has a simple structure and a natural order, so it should be simpler?</p> <p>I have checked both <code>pd.Series</code> and <code>pd.DataFrame</code> documentation, but examples are also mostly about joining <code>Series</code> or <code>DataFrame</code> together.</p>
1
2016-10-06T02:28:21Z
39,886,612
<p>Use the DataFrame constructor:</p> <pre><code>In [1]: import pandas as pd, numpy as np In [2]: list1 = [1,2,3] In [3]: list2 = ['a','b','c'] In [4]: list3 = ['I',"II","III"] In [5]: pd.DataFrame({'list2':list2,'list3':list3}, index=list1) Out[5]: list2 list3 1 a I 2 b II 3 c III In [6]: </code></pre>
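The constructor simply aligns the lists by position; the same alignment can be sketched with the standard library alone:

```python
list1 = [1, 2, 3]
list2 = ['a', 'b', 'c']
list3 = ['I', 'II', 'III']

# zip() pairs the lists element by element, which is exactly how the
# DataFrame constructor lines up index and column values.
rows = {idx: {'list2': v2, 'list3': v3}
        for idx, v2, v3 in zip(list1, list2, list3)}
```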
0
2016-10-06T02:34:36Z
[ "python", "pandas" ]
Why are the conditions in if block not working correctly?
39,886,604
<p>Yakshemash ! I am a Java programmer learning python to make throwaway scripts. I want to make a parser which is shown in the code below.</p> <pre><code>class Parser(object): def parse_message(self, message): size= len(message) if size != 3 or size != 5: raise ValueError("Message length is not valid.") parser = Parser() message = "12345" parser.parse_message(message) </code></pre> <p>This code throws the error:</p> <pre><code>Traceback (most recent call last): File "/temp/file.py", line 9, in &lt;module&gt; parser.parse_message(message) File "/temp/file.py", line 5, in parse_message raise ValueError("Message length is not valid.") ValueError: Message length is not valid. </code></pre> <p>What is my mistake and how do I correct it ?</p>
0
2016-10-06T02:33:59Z
39,886,656
<p>Your problem is with your conditional statement using <code>or</code>: </p> <pre><code>if size != 3 or size != 5: </code></pre> <p>If the size is not equal to 3 "OR" it is not equal to 5, then raise. </p> <p>So, with your input being passed: <code>12345</code></p> <pre><code>Is it not equal to 3? True Is it not equal to 5? False True or False = True Result: Enter condition and raise </code></pre> <p>Instead, use <code>and</code></p> <pre><code>if size != 3 and size != 5: Is it not equal to 3? True Is it not equal to 5? False True and False = False Result: Do not enter condition </code></pre> <p>Even better, use <code>not in</code></p> <pre><code>if size not in (3, 5): </code></pre>
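The truth table above can be checked directly:

```python
# The always-true condition from the question ...
buggy = [(size != 3 or size != 5) for size in (3, 4, 5)]

# ... versus the corrected checks suggested above.
fixed_and = [(size != 3 and size != 5) for size in (3, 4, 5)]
fixed_not_in = [size not in (3, 5) for size in (3, 4, 5)]
```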
3
2016-10-06T02:40:35Z
[ "python" ]
39,886,887
<p>Syntactically, there is nothing wrong with your code; the output is correct according to what you wrote.</p> <pre><code>size != 3 or size != 5 # This is always *true*, because **message** cannot have two different lengths at the same time, so the condition can never be false. </code></pre> <p>Since the above condition is always <em>true</em>, I assume that you wanted to do something else. </p> <p>An <strong>and</strong> logical operator works like this:</p> <pre><code>size != 3 and size != 5 # True if the length of **message** is neither 3 nor 5 # False if the length of **message** is either 3 or 5 </code></pre>
1
2016-10-06T03:06:40Z
Checking a variable against a list and seeing if a word in that list starts with the variable
39,886,605
<p>I'm trying to write a function that uses a checker that could be any length and checks it against the list. It should be case insensitive when checking and print the word. Example below</p> <p><code>Input= startsWith('a',['apple','ApPle','orange','Apple','kiwi','apricot'])</code></p> <p><strong>Output:</strong></p> <pre><code>apple ApPle Apple apricot </code></pre> <p>But, it prints every string in the list all in lower case.</p> <pre><code>def startsWith(checker,lister): checker.lower() size = len(lister) i=0 checklength = len(checker) lister = [element.lower() for element in lister] while(i&lt;size): checkinlist = lister[i] if(checkinlist[0:checklength-1] in checker): # this is just to test to see if the variables are what i need # once the if statement works just append them to a new list # and print that print(lister[i]) i=i+1 </code></pre>
-1
2016-10-06T02:34:04Z
39,886,701
<pre><code>def startsWith(checker,lister): checker = checker.lower() for i in range(len(lister)): word = lister[i].lower() if(word.startswith(checker)): print(lister[i]) def main(): startsWith('a',['apple','ApPle','orange','Apple','kiwi','apricot']) main() </code></pre> <p>OUTPUT</p> <pre><code>apple ApPle Apple apricot &gt;&gt;&gt; </code></pre>
1
2016-10-06T02:45:17Z
[ "python" ]
39,886,845
<p>Here's the root of the problem</p> <pre><code>lister = [element.lower() for element in lister] </code></pre> <p><code>lister</code> now only contains lowercase strings, which you then print. You need to delay the <code>lower()</code> until you check for <code>checker</code>. </p> <hr> <p>No need to check the length of anything. You can use <a href="http://stackoverflow.com/questions/12319025/filters-in-python#12319034"><code>filter</code></a></p> <pre><code>def startsWith(checker, lister): return list(filter(lambda x: x.lower().startswith(checker.lower()), lister)) for x in startsWith('a',['apple','ApPle','orange','Apple','kiwi','apricot']): print(x) </code></pre> <p>Output</p> <pre><code>apple ApPle Apple apricot </code></pre>
1
2016-10-06T03:01:41Z
39,886,854
<p>You should not mutate the original elements of <code>lister</code>, rather do the comparison on a new copy of those elements that has been converted to lower case.</p> <p>It can be done in a single list comprehension.</p> <pre><code>def startsWith(checker, lister): cl = checker.lower() return [s for s in lister if s.lower().startswith(cl)] Input= startsWith('a',['apple','ApPle','orange','Apple','kiwi','apricot']) for i in Input: print(i) </code></pre> <p>Output:</p> <pre><code>apple ApPle Apple apricot </code></pre>
0
2016-10-06T03:02:17Z
"SyntaxError: Invalid Syntax" but it worked before! (Python 3.5.2 doing a tutorial for beginners)
39,886,673
<p>Soo I swear this code worked before! </p> <h1>Is there a setting on IDLE that I might have accidently turned on?</h1> <p>I'm trying to do something very simple..set variables.</p> <p>As you can see these work:</p> <pre><code>&gt;&gt;&gt; print("hello") hello &gt;&gt;&gt; def main(): print("hey") &gt;&gt;&gt;main() hey </code></pre> <p>So when I try to recreate an example problem, you would expect these variables to work, right?</p> <pre><code>def main(): print("This calculates the cost of coffee.") print() n = eval(input("Enter amount of coffee in pounds: ") m = 10.50 * n SyntaxError: invalid syntax </code></pre> <p>Why??? Why does Python 3.5.2 return "SyntaxError: invalid syntax"?</p> <p>Thanks guys! Sorry I'm such a noob.</p>
0
2016-10-06T02:41:51Z
39,887,299
<p>As pointed out in the comments several times, you are simply missing a closing <code>)</code> on line 4 of your program. Line 4 should look like</p> <p><code>n = eval(input("Enter amount of coffee in pounds: "))# &lt;--extra parenthesis</code></p> <hr> <p>Some unrelated points in your code:</p> <ul> <li>Why are you using <code>eval()</code>? It looks like you're going for float/integer conversion, so use <code>int()</code> or <code>float()</code>.</li> <li>Instead of adding an extra print to achieve a newline, simply say <code>print("This calculates the cost of coffee.\n")</code></li> <li>The last two lines of your program can be condensed into a single statement: <code>n = float(input("Enter amount of coffee in pounds: "))*10.50</code></li> </ul> <hr> <p>After adding my suggestions to your code, it would yield something like this:</p> <pre><code>def main(): print("This calculates the cost of coffee.\n") n = float(input("Enter amount of coffee in pounds: "))*10.50 </code></pre>
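A small sketch of the `float()` conversion suggested above, with a literal string standing in for `input()`:

```python
# float() converts the numeric text that input() returns; unlike eval(),
# it cannot execute arbitrary expressions.
pounds_text = "2.5"                # stand-in for input("Enter amount ...")
cost = float(pounds_text) * 10.50
```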
0
2016-10-06T04:00:52Z
[ "python" ]
Delete the first elements of each row in list in python2.7
39,886,674
<p>My list looks like: ['0 0.690001', '1 0.970671', '2 1.520989', '3 1.946516', '4 2.229378']</p> <p>How can I get [0.690001, 0.970671, 1.520989, 1.946516, 2.229378]?</p>
-3
2016-10-06T02:41:52Z
39,886,850
<p>Use list comprehension as:</p> <pre><code>my_list = ['0 0.690001', '1 0.970671', '2 1.520989', '3 1.946516', '4 2.229378'] [float(item.split()[1]) for item in my_list] </code></pre> <p>OR, you may also use <code>map()</code>:</p> <pre><code>map(lambda x: float(x.split()[1]), my_list) </code></pre>
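Stepping through the comprehension on a two-element sample:

```python
my_list = ['0 0.690001', '1 0.970671']

# split() with no argument splits on whitespace; [1] keeps the second
# field, and float() converts it.
parts = my_list[0].split()
values = [float(item.split()[1]) for item in my_list]
```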
0
2016-10-06T03:02:17Z
[ "python", "python-2.7" ]
39,888,827
<p>Using a regular expression to strip everything up to the first space:</p> <pre><code>import re map(lambda x: float(re.sub(r'[^ ]+ ', '', x)), l) </code></pre>
0
2016-10-06T06:18:36Z
Django: Organize objects in a dictionary by foreign key
39,886,744
<p>I have this problem: I've got two models, products and categories, the categories are defined in a fixture (they won't change), and I need to display all categories in a single template, as a grid. The models are rather simple, the important thing is that Products have a foreign key pointing to Category model, and another one pointing to User model (owner).</p> <p>I have to display all products in each category block (CSS doesn't matter, I just need to be able to bring them up) but the solution I've got so far is this</p> <p><strong>View</strong></p> <pre><code>def index(request): user_products = [] if request.user.is_authenticated(): user_products = Product.objects.filter(owner=request.user) categories = Category.objects.all() return render(request, 'index.html', {'user_products': user_products, 'categories': categories}) </code></pre> <p><strong>Template</strong></p> <pre><code>&lt;!-- This goes for each category, 12 in total --&gt; &lt;div&gt; &lt;h3&gt;Category (name hardcoded)&lt;/h3&gt; {% for category in categories %} {% if category.pk == 3 %} &lt;ul class="product_list"&gt; {% for product in category.product_set.all %} {% if product.owner == request.user %} &lt;li&gt; &lt;div class="product"&gt; &lt;h2 class="title"&gt;{{ product.title }} &lt;/h2&gt; &lt;p class="text"&gt;{{ product.text }}&lt;/p&gt; &lt;/div&gt; &lt;/li&gt; {% endif %} {% empty %} {% endfor %} &lt;/ul&gt; {% endif %} {% endfor %} &lt;/div&gt; </code></pre> <p>I want send in context something like:</p> <pre><code>user_products = {&lt;category.pk&gt;: [list of Product objects in category]} </code></pre> <p>so I can access the product list of each category without defining a <em>for</em> loop each time <strong>and</strong> filtering them by <em>request.user</em>. Can you think of a better way to do this? I had to hardcode each category because they have a given order, but if you know how to display them dynamically while keeping that order it would be great. Thanks!</p>
0
2016-10-06T02:50:06Z
39,887,619
<p>There is a better way to do this. Use a <a href="https://docs.djangoproject.com/en/1.10/ref/models/querysets/#prefetch-objects" rel="nofollow"><code>Prefetch</code></a> object. The prefetch will then contain only the filtered data for each category.</p> <pre><code>from django.db.models.query import Prefetch def index(request): user_products = [] if request.user.is_authenticated(): user_products = Product.objects.filter(owner=request.user) else: # answering comment user_products = Product.objects.none() prefetch = Prefetch('product_set', queryset=user_products) categories = Category.objects.all().prefetch_related(prefetch) return render(request, 'index.html', {'categories': categories}) </code></pre> <p>And then you can do this in the template:</p> <pre><code>&lt;!-- You need to do this only once. No need to copy this for each category --&gt; &lt;div&gt; {% for category in categories %} &lt;h3&gt;Category {{ category.name }}&lt;/h3&gt; &lt;ul class="product_list"&gt; {% for product in category.product_set.all %} &lt;li&gt; &lt;div class="product"&gt; &lt;h2 class="title"&gt;{{ product.title }} &lt;/h2&gt; &lt;p class="text"&gt;{{ product.text }}&lt;/p&gt; &lt;/div&gt; &lt;/li&gt; {% endfor %} &lt;/ul&gt; {% endfor %} &lt;/div&gt; </code></pre>
2
2016-10-06T04:38:03Z
[ "python", "django", "django-templates" ]
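The grouping the question asks for, a `user_products = {category.pk: [products]}` dict, can also be built in plain Python once the user's products are fetched. A minimal sketch, with plain dicts standing in for model instances (the field names here are illustrative assumptions, not the real models):

```python
from collections import defaultdict

def products_by_category(products, user):
    """Group one user's products by category primary key."""
    grouped = defaultdict(list)
    for product in products:
        if product["owner"] == user:
            grouped[product["category_pk"]].append(product)
    return dict(grouped)

# Hypothetical sample data standing in for Product instances.
products = [
    {"title": "chair", "category_pk": 3, "owner": "alice"},
    {"title": "table", "category_pk": 3, "owner": "alice"},
    {"title": "lamp",  "category_pk": 5, "owner": "bob"},
]

user_products = products_by_category(products, "alice")
```

A template could then look up `user_products` per category without re-filtering by `request.user`; with the real models, the same dict can be built from `Product.objects.filter(owner=request.user)`.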
How to customize the list_display option in django using queryset?
39,886,747
<p>I want to make a course guidance app for my college. Here is my model.py</p> <pre><code>from django.db import models from django.contrib.auth.models import User # Create your models here. class Instructor(models.Model): name = models.CharField(max_length=200) owner = models.ForeignKey(User) # other stuff here def __str__(self): return self.name class Course(models.Model): course_code = models.CharField(max_length=10, default='CS') instructor = models.ForeignKey(Instructor) course_name = models.CharField(max_length=200) # other stuff here def __str__(self): return self.course_name class CourseOutline(models.Model): course = models.OneToOneField(Course) objectives = models.TextField(blank=True) # other stuff </code></pre> <p>And this this my admin.py</p> <pre><code>from django.contrib import admin from models import Course, CourseOutline, Instructor # Register your models here. admin.site.register(Instructor) class CourseAdmin(admin.ModelAdmin): # some other stuff def queryset(self, request): qs = super(CourseAdmin, self).queryset(request) if request.user.is_superuser: return qs # get instructor's "owner" return qs.filter(instructor__owner=request.user) def formfield_for_foreignkey(self, db_field, request, **kwargs): if db_field.name == "instructor" and not request.user.is_superuser: kwargs["queryset"] = Instructor.objects.filter(owner=request.user) return db_field.formfield(**kwargs) return super(CourseAdmin, self).formfield_for_foreignkey(db_field, request, **kwargs) list_display = ('course_name', 'instructor') list_filter = ('queryset',) admin.site.register(Course, CourseAdmin) class CourseOutlineAdmin(admin.ModelAdmin): # nothing here of importance # whatever was here def queryset(self, request): qs = super(CourseOutlineAdmin, self).queryset(request) if request.user.is_superuser: return qs # get instructor's "owner" return qs.filter(course__instructor__owner=request.user) def formfield_for_foreignkey(self, db_field, request, **kwargs): if db_field.name == 
"course" and not request.user.is_superuser: kwargs["queryset"] = Course.objects.filter(instructor__owner=request.user) return db_field.formfield(**kwargs) return super(CourseAdmin, self).formfield_for_foreignkey(db_field, request, **kwargs) admin.site.register(CourseOutline, CourseOutlineAdmin) </code></pre> <p>Here I'm using row-level permissions for instructors so that they can only add courses related to them, and can only add course outlines to those courses. That's why I used instructor as a foreign key for courses and wrote a function to restrict the foreign-key drop-down menu so that only their own name appears in it. But when instructors view the courses they can see other instructors' courses, and if they have delete privileges they can also delete other instructors' courses. So I want them to see only the courses they own in the list. In one post I saw the queryset function and tried to implement it, but it didn't solve my problem.</p>
1
2016-10-06T02:50:16Z
39,887,393
<p>Your code is mostly right, but the method is actually <code>get_queryset</code> rather than <code>queryset</code>; you've probably mixed it up with the <code>queryset</code> property.</p> <pre><code>def get_queryset(self, request): qs = super(CourseAdmin, self).get_queryset(request) if request.user.is_superuser: return qs # get instructor's "owner" return qs.filter(instructor__owner=request.user) </code></pre> <p>Note that if you are a superuser you will still be able to see everyone else's courses, so make sure that you log in as a less privileged user. You will need to change the method name in the other class too.</p>
0
2016-10-06T04:13:53Z
[ "python", "django", "django-admin", "django-authentication" ]
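Stripped of the admin machinery, the row-level rule that `get_queryset` implements is just "superusers see everything, everyone else sees only what they own". A hypothetical plain-Python sketch of that rule (the dict shapes are assumptions for illustration, not the Django API):

```python
def visible_courses(courses, user):
    """Mirror ModelAdmin.get_queryset: superusers see all rows, others only their own."""
    if user["is_superuser"]:
        return list(courses)
    return [c for c in courses if c["owner"] == user["name"]]

courses = [
    {"name": "Algorithms", "owner": "smith"},
    {"name": "Databases",  "owner": "jones"},
]

admin_view = visible_courses(courses, {"name": "root",  "is_superuser": True})
smith_view = visible_courses(courses, {"name": "smith", "is_superuser": False})
```

In the real admin, the list comprehension corresponds to `qs.filter(instructor__owner=request.user)`.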
How to add Pandas dataframe column from a element of a list in another column without lambda?
39,886,810
<p>Suppose I have this dataframe</p> <pre><code>A B [ 'a' , 'b' , 'c' ] 3 [ 'e' , 'f' , 'g' , 'h'] 5 </code></pre> <p>How can I create a new column such as below without lambda?</p> <pre><code>A B C [ 'a' , 'b' , 'c' ] 3 'b' [ 'e' , 'f' , 'g' , 'h' ] 5 'g' </code></pre> <p>If using lambda, it will be</p> <pre><code>df['C'] = df['A'].apply( lambda x : x[-2] ) </code></pre> <p>EDIT: Example code:</p> <pre><code>import pandas as pd mydata = [ { 'A' : [ 'a' , 'b' , 'c' ] , 'B' : 3 } , { 'A' : [ 'e' , 'f' , 'g' , 'h' ] , 'B' : 5}] df = pd.DataFrame(mydata) </code></pre>
0
2016-10-06T02:59:10Z
39,887,010
<p>You can define your lambda as a regular function and call it using apply like so:</p> <pre><code>def my_function(x): return x[-2] df['C'] = df['A'].apply(my_function) </code></pre>
0
2016-10-06T03:24:28Z
[ "python", "pandas" ]
How to add Pandas dataframe column from a element of a list in another column without lambda?
39,886,810
<p>Suppose I have this dataframe</p> <pre><code>A B [ 'a' , 'b' , 'c' ] 3 [ 'e' , 'f' , 'g' , 'h'] 5 </code></pre> <p>How can I create a new column such as below without lambda?</p> <pre><code>A B C [ 'a' , 'b' , 'c' ] 3 'b' [ 'e' , 'f' , 'g' , 'h' ] 5 'g' </code></pre> <p>If using lambda, it will be</p> <pre><code>df['C'] = df['A'].apply( lambda x : x[-2] ) </code></pre> <p>EDIT: Example code:</p> <pre><code>import pandas as pd mydata = [ { 'A' : [ 'a' , 'b' , 'c' ] , 'B' : 3 } , { 'A' : [ 'e' , 'f' , 'g' , 'h' ] , 'B' : 5}] df = pd.DataFrame(mydata) </code></pre>
0
2016-10-06T02:59:10Z
39,887,192
<p>A solution is to use the <code>str</code> accessor to index into the list, but it requires the element lengths to be uniform. You could also use the list comprehension proposed by Bob, which is the most robust solution.</p> <pre><code>In [110]: df.A.str.join(',').str[-3] Out[110]: 0 b 1 g Name: A, dtype: object </code></pre> <p>Here's a regex solution, which may be more robust to different string lengths and types in the list. Replace <code>"\w"</code> if the actual list elements are more complex.</p> <pre><code>#create data: df = pd.DataFrame({'A':[['a', 'b', 'c'], ['e', 'f', 'g', 'h']], 'B': [3,5]}) #one step solution: df['C'] = df.A.str.join(sep=',').str.extract("(\w),\w$", expand=False) #result: In [22]: df.C Out[22]: 0 b 1 g Name: C, dtype: object </code></pre>
0
2016-10-06T03:45:49Z
[ "python", "pandas" ]
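Both answers ultimately take the second-to-last element of each list, and that negative-index idiom works on plain Python lists, so the transformation can be checked without pandas:

```python
def second_to_last(values):
    """The same logic as the lambda in the question: pick element -2."""
    return values[-2]

col_a = [["a", "b", "c"], ["e", "f", "g", "h"]]
col_c = [second_to_last(row) for row in col_a]  # what df['A'].apply(...) computes
```

Passing `second_to_last` to `Series.apply` should produce the same result as the list comprehension above.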
Reading data in python pandas by defining width of each column as number of characters
39,886,884
<p>I'm trying to read a file in which columns are separated by variable spaces. I was wondering if there is a way to read the file by defining the width of each column in terms of number of characters reserved for that column.</p> <p>For example:</p> <pre><code>A B C D - ---------- -- --- 1 foo 32 9.5 4 bar 5.4 5 foofoo_bar 44 </code></pre> <p>Let's say we have to read the above data. Notice that some entries do not exist in columns C and D. However, note that the second line in the file (the one with the dashes) indicates the maximum number of characters that particular column can have.</p> <p>So, the question is given the maximum width of each column in the dataset, is there a way to read the dataset in python using pandas or any other package?</p>
0
2016-10-06T03:06:20Z
39,886,927
<p>You should use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_fwf.html" rel="nofollow"><code>pandas.read_fwf()</code></a>. It stands for Read Fixed Width File.</p>
3
2016-10-06T03:12:18Z
[ "python", "csv", "pandas", "numpy", "data-analysis" ]
Reading data in python pandas by defining width of each column as number of characters
39,886,884
<p>I'm trying to read a file in which columns are separated by variable spaces. I was wondering if there is a way to read the file by defining the width of each column in terms of number of characters reserved for that column.</p> <p>For example:</p> <pre><code>A B C D - ---------- -- --- 1 foo 32 9.5 4 bar 5.4 5 foofoo_bar 44 </code></pre> <p>Let's say we have to read the above data. Notice that some entries do not exist in columns C and D. However, note that the second line in the file (the one with the dashes) indicates the maximum number of characters that particular column can have.</p> <p>So, the question is given the maximum width of each column in the dataset, is there a way to read the dataset in python using pandas or any other package?</p>
0
2016-10-06T03:06:20Z
39,887,092
<p>The <code>delimiter</code> for <code>np.genfromtxt</code> can be a list of column widths instead of a delimiter character.</p>
1
2016-10-06T03:34:48Z
[ "python", "csv", "pandas", "numpy", "data-analysis" ]
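`pandas.read_fwf` can infer column widths, but the dashed ruler line in the sample file actually encodes them. A minimal pure-Python sketch that derives the column spans from that ruler and slices each data line accordingly (the sample lines are rebuilt here with explicit padding, since the question shows them with collapsed spaces):

```python
import re

ruler = "- ---------- -- ---"
# Each run of dashes gives one column's (start, end) span.
spans = [(m.start(), m.end()) for m in re.finditer(r"-+", ruler)]

# Rebuild fixed-width lines that match the ruler (ljust pads each field
# to its column width plus the one-space separator).
lines = [
    "1".ljust(2) + "foo".ljust(11) + "32".ljust(3) + "9.5",
    "4".ljust(2) + "bar".ljust(11) + "".ljust(3) + "5.4",
    "5".ljust(2) + "foofoo_bar".ljust(11) + "44",
]

# Slicing past the end of a short line yields "", so missing trailing
# fields come out as empty strings.
rows = [[line[a:b].strip() for a, b in spans] for line in lines]
```

With pandas available, roughly the same spans should work as `pd.read_fwf(docname, colspecs=spans)`, since `colspecs` accepts (start, end) pairs like these.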
Shuffling specific values in a dictionary (Python 3.5)
39,886,922
<p>I am writing a murder mystery--a lot like Clue. I am using dictionaries to store all my info. My question: is there a way to shuffle dictionary values which pull from a set range of integers? I want every game to shuffle certain values within the dictionaries when starting a new game. Right now I'm focused on character placement...I'm trying to figure out the best way to shuffle character locations once each game (locations[current_room]["char"]. Once I understand how to do this, I want to apply this to a bunch of other aspects of the game--with the idea of creating a brand new mystery to solve each game. Any suggestions welcome!</p> <p><strong>EDIT</strong> Thanks for the answers! I am not trying to randomize my entire dictionary, just change the values of certain keys each game. I think I might need to solve this problem another way. I will update this post when I get things working.</p> <p><strong>FINAL EDIT</strong> I'm changing the value of specific keys I want "shuffled" by editing the dictionary. locations[1]["char"] = random.choice([0,1,2]) and then changing the other values based on a series of IF statements from the result in the random.choice. Thanks again for the help.</p> <pre><code>locations = { 1: {"name": "bedroom", "msg": "There is a painting on the wall. \nThe window looks out into the garden.", "char": 2, "item": "matches", "west": 2}, 2: {"name": "living room", "msg" : "The room is freezing cold. 
The fireplace is empty.", "char": 1, "Item": "firewood", "east": 1}, } characters = { 1: {"name": "doctor crichton", "msg": "A tall, handsome archeologist home from a dig in Africa."}, 2: {"name": "the widow", "msg": "A beautiful woman with a deep air of sadness."}, } current_room = 1 current_char = locations[current_room]["char"] def status(): """updates player on info on current room""" print("You are in the " + locations[current_room]["name"]) char_status() def char_status(): """compiles character information in current room""" if current_char &gt; 0: char_room_info() else: print("\nThis room is empty.") def char_room_info(): """NPC behavior in each room""" print(characters[current_char]["name"].title() + " is in the room with you.") status() </code></pre>
0
2016-10-06T03:10:46Z
39,886,963
<p>Python dictionaries are unordered, so I don't see why you would want to "shuffle" them. If you want to pick random entries inside the dict, maybe you could wrap the dictionaries in a list for simplicity's sake.</p> <p>Like so:</p> <p><code>Locations = [ {"name": "bedroom"...}, {"name": "livingroom"...},...]</code></p> <p>I think you get the point. So now to access a random one:</p> <p><code>Locations[random.randrange(len(Locations))]["name"]</code></p> <p>Also you could pick a random key directly (on Python 3, wrap the keys in <code>list()</code> first):</p> <pre><code>random.choice(list(locations.keys())) </code></pre> <p>Which is far easier.</p>
0
2016-10-06T03:18:23Z
[ "python", "dictionary", "text", "adventure" ]
Shuffling specific values in a dictionary (Python 3.5)
39,886,922
<p>I am writing a murder mystery--a lot like Clue. I am using dictionaries to store all my info. My question: is there a way to shuffle dictionary values which pull from a set range of integers? I want every game to shuffle certain values within the dictionaries when starting a new game. Right now I'm focused on character placement...I'm trying to figure out the best way to shuffle character locations once each game (locations[current_room]["char"]. Once I understand how to do this, I want to apply this to a bunch of other aspects of the game--with the idea of creating a brand new mystery to solve each game. Any suggestions welcome!</p> <p><strong>EDIT</strong> Thanks for the answers! I am not trying to randomize my entire dictionary, just change the values of certain keys each game. I think I might need to solve this problem another way. I will update this post when I get things working.</p> <p><strong>FINAL EDIT</strong> I'm changing the value of specific keys I want "shuffled" by editing the dictionary. locations[1]["char"] = random.choice([0,1,2]) and then changing the other values based on a series of IF statements from the result in the random.choice. Thanks again for the help.</p> <pre><code>locations = { 1: {"name": "bedroom", "msg": "There is a painting on the wall. \nThe window looks out into the garden.", "char": 2, "item": "matches", "west": 2}, 2: {"name": "living room", "msg" : "The room is freezing cold. 
The fireplace is empty.", "char": 1, "Item": "firewood", "east": 1}, } characters = { 1: {"name": "doctor crichton", "msg": "A tall, handsome archeologist home from a dig in Africa."}, 2: {"name": "the widow", "msg": "A beautiful woman with a deep air of sadness."}, } current_room = 1 current_char = locations[current_room]["char"] def status(): """updates player on info on current room""" print("You are in the " + locations[current_room]["name"]) char_status() def char_status(): """compiles character information in current room""" if current_char &gt; 0: char_room_info() else: print("\nThis room is empty.") def char_room_info(): """NPC behavior in each room""" print(characters[current_char]["name"].title() + " is in the room with you.") status() </code></pre>
0
2016-10-06T03:10:46Z
39,887,050
<p>If Python3.x then do:</p> <pre><code>import random random.choice(list(someDictionary.keys())) </code></pre> <p>If Python2.x then do:</p> <pre><code>import random random.choice(someDictionary.keys()) </code></pre>
0
2016-10-06T03:29:10Z
[ "python", "dictionary", "text", "adventure" ]
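For the concrete goal in the question, reshuffling which character starts in which room each game, `random.shuffle` on the character ids and zipping them back onto the room keys guarantees each character is placed exactly once. A small sketch (the seeded `Random` is only there to make the example reproducible; a real game would use the default generator):

```python
import random

rooms = [1, 2]       # keys of the locations dict
char_ids = [1, 2]    # keys of the characters dict (0 could mean "empty")

rng = random.Random(42)
placement = char_ids[:]   # copy so the original list is untouched
rng.shuffle(placement)

# Write the shuffled ids back into a locations-style dict at game start.
locations = {room: {"char": char} for room, char in zip(rooms, placement)}
```

The same pattern extends to items or any other per-room attribute you want randomized once per game.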
python - How to deploy Flask+Gunicorn+Nginx+supervisor on a cloud server?
39,886,992
<p>I've read a lot of instructions since yesterday about this issue but all of them have similar steps. However I followed step by step but still can't get everything Ok.</p> <p>Actually I can make Flask+Gunicorn+supervisor working but Nginx is not working well.</p> <p>I connect my remote cloud server with SSH and I'm not deploying the site on my computer.</p> <p>Nginx is installed correctly because when I visit the site via the domain name (aka. <code>example.com</code>) it shows the Nginx welcome page. </p> <p>I use <code>supervisor</code> to start Gunicorn and the configuration is</p> <pre><code>[program:myapp] command=/home/fh/test/venv/bin/gunicorn -w4 -b 0.0.0.0:8000 myapp:app directory=/home/fh/test startsecs=0 stopwaitsecs=0 autostart=false autorestart=false stdout_logfile=/home/fh/test/log/gunicorn.log stderr_logfile=/home/fh/test/log/gunicorn.err </code></pre> <p>here I bind the server to port <strong>8000</strong> and <strong>I don't actually know what does 0.0.0.0 stand for but I think it doesn't mean the localhost because I can visit the site via <em><a href="http://example.com:8000" rel="nofollow">http://example.com:8000</a></em> and it works well.</strong></p> <p>Then I tried to use Nginx as a proxy server.</p> <p>I deleted <code>/etc/nginx/sites-available/default' and '/etc/nginx/sites-enabled/default/</code> and created <code>/etc/nginx/sites-available/test.com</code> and <code>/etc/nginx/sites-enabled/test.com</code> and symlink them.</p> <p><code>test.com</code></p> <pre><code>server { server_name www.penguin-penpen.com; rewrite ^ http://example/ permanent; } # Handle requests to example.com on port 80 server { listen 80; server_name example.com; # Handle all locations location / { # Pass the request to Gunicorn proxy_pass http://127.0.0.1:8000; # Set some HTTP headers so that our app knows where the request really came from proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For 
$proxy_add_x_forwarded_for; } } </code></pre> <p>To my understanding, what Nginx does is: when I visit <code>http://example.com</code>, it passes my request on to <code>http://example.com:8000</code>.</p> <p>I'm not quite sure that I should use <code>proxy_pass http://127.0.0.1:8000</code> here, because I don't know whether Nginx should pass the request to localhost. <strong>I've tried to change it to <code>0.0.0.0:8000</code> but it still doesn't work.</strong></p> <p>Can anyone help?</p>
-1
2016-10-06T03:21:51Z
39,887,658
<p><code>0.0.0.0</code> means the server will accept connections from all IP addresses. See <a href="https://en.wikipedia.org/wiki/0.0.0.0" rel="nofollow">https://en.wikipedia.org/wiki/0.0.0.0</a> for more detail.</p> <p>If the gunicorn server listens on <code>127.0.0.1</code>, only you (or another process on the same machine as the gunicorn server) can access it, through the loopback interface <a href="https://en.wikipedia.org/wiki/Local_loop" rel="nofollow">https://en.wikipedia.org/wiki/Local_loop</a>.</p> <p>But since you use Nginx to accept connections from the internet, you can just <code>proxy_pass http://127.0.0.1:8000;</code> and change the command to <code>command=/home/fh/test/venv/bin/gunicorn -w4 -b 127.0.0.1:8000 myapp:app</code>. In this scenario, gunicorn itself only needs to accept connections from Nginx, which runs on the same machine as gunicorn.</p> <p>The whole process looks like this:</p> <pre><code>Connections from the Internet -&gt; Nginx (reverse proxy, listen on 0.0.0.0:80) -&gt; Gunicorn (which runs your Python code, listen on 127.0.0.1:8000) </code></pre>
0
2016-10-06T04:41:22Z
[ "python", "nginx", "web", "flask", "server" ]
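The difference between `-b 0.0.0.0:8000` and `-b 127.0.0.1:8000` is just the address the listening socket binds to. A small probe with the standard `socket` module makes it concrete; it binds to an ephemeral port and closes immediately, no traffic is sent:

```python
import socket

def bound_address(host):
    """Bind a TCP socket to `host` on an ephemeral port and report the bound address."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind((host, 0))   # port 0 -> let the OS pick a free port
        return s.getsockname()[0]
    finally:
        s.close()

loopback_only = bound_address("127.0.0.1")   # reachable only from this machine
all_interfaces = bound_address("0.0.0.0")    # reachable on every interface
```

This is why the gunicorn-behind-Nginx setup binds gunicorn to `127.0.0.1` while Nginx listens on all interfaces.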
How to search CSV line for string in certain column, print entire line to file if found
39,887,098
<p>Sorry, very much a beginner with Python and could really use some help.</p> <p>I have a large CSV file, items separated by commas, that I'm trying to go through with Python. Here is an example of a line in the CSV.</p> <p>123123,JOHN SMITH,SMITH FARMS,A,N,N,12345 123 AVE,CITY,NE,68355,US,12345 123 AVE,CITY,NE,68355,US,(123) 555-5555,(321) 555-5555,[email protected],15-JUL-16,11111,2013,22-DEC-93,NE,2,1\par</p> <p>I'd like my code to scan each line and look at only the 9th item (the state). For every item that matches my query, I'd like that entire line to be written to an CSV.</p> <p>The problem I have is that my code will find <em>every</em> occurrence of my query throughout the entire line, instead of just the 9th item. For example, if I scan looking for "NE", it will write the above line in my CSV, but also one that contains the string "NEARY ROAD."</p> <p>Sorry if my terminology is off, again, I'm a beginner. Any help would be greatly appreciated.</p> <p>I've listed my coding below:</p> <pre><code>import csv with open('Sample.csv', 'rb') as f, open('NE_Sample.csv', 'wb') as outf: reader = csv.reader(f, delimiter=',') writer = csv.writer(outf) for line in f: if "NE" in line: print ('Found: []'.format(line)) writer.writerow([line]) </code></pre>
0
2016-10-06T03:35:11Z
39,887,170
<p>You're not actually using your <code>reader</code> to read the input CSV, you're just reading the raw lines from the file itself.</p> <p>A fixed version looks like the following (untested):</p> <pre><code>import csv with open('Sample.csv', 'rb') as f, open('NE_Sample.csv', 'wb') as outf: reader = csv.reader(f, delimiter=',') writer = csv.writer(outf) for row in reader: if row[8] == 'NE': print ('Found: {}'.format(row)) writer.writerow(row) </code></pre> <p>The changes are as follows:</p> <ul> <li>Instead of iterating over the input file's lines, we iterate over the rows parsed by the <code>reader</code> (each of which is a <code>list</code> of each of the values in the row).</li> <li>We check to see if the 9th item in the row (i.e. <code>row[8]</code>) is equal to <code>"NE"</code>.</li> <li>If so, we output that row to the output file by passing it in, as-is, to the writer's <code>writerow</code> method.</li> <li>I also fixed a typo in your <code>print</code> statement - the <code>format</code> method uses braces (not square brackets) to mark replacement locations.</li> </ul>
2
2016-10-06T03:43:20Z
[ "python", "string", "csv", "search" ]
How to search CSV line for string in certain column, print entire line to file if found
39,887,098
<p>Sorry, very much a beginner with Python and could really use some help.</p> <p>I have a large CSV file, items separated by commas, that I'm trying to go through with Python. Here is an example of a line in the CSV.</p> <p>123123,JOHN SMITH,SMITH FARMS,A,N,N,12345 123 AVE,CITY,NE,68355,US,12345 123 AVE,CITY,NE,68355,US,(123) 555-5555,(321) 555-5555,[email protected],15-JUL-16,11111,2013,22-DEC-93,NE,2,1\par</p> <p>I'd like my code to scan each line and look at only the 9th item (the state). For every item that matches my query, I'd like that entire line to be written to an CSV.</p> <p>The problem I have is that my code will find <em>every</em> occurrence of my query throughout the entire line, instead of just the 9th item. For example, if I scan looking for "NE", it will write the above line in my CSV, but also one that contains the string "NEARY ROAD."</p> <p>Sorry if my terminology is off, again, I'm a beginner. Any help would be greatly appreciated.</p> <p>I've listed my coding below:</p> <pre><code>import csv with open('Sample.csv', 'rb') as f, open('NE_Sample.csv', 'wb') as outf: reader = csv.reader(f, delimiter=',') writer = csv.writer(outf) for line in f: if "NE" in line: print ('Found: []'.format(line)) writer.writerow([line]) </code></pre>
0
2016-10-06T03:35:11Z
39,887,350
<p>This snippet should solve your problem:</p> <pre><code>import csv with open('Sample.csv', 'rb') as f, open('NE_Sample.csv', 'wb') as outf: reader = csv.reader(f, delimiter=',') writer = csv.writer(outf) for row in reader: if "NE" in row: print ('Found: {}'.format(row)) writer.writerow(row) </code></pre> <p><code>if "NE" in line</code> in your code is trying to find out whether <code>"NE"</code> is a substring of the string <code>line</code>, which does not work as intended. The <code>line</code>s are raw lines of your input file.</p> <p>If you use <code>if "NE" in row:</code>, where <code>row</code> is a parsed line of your input file, you are doing exact element matching. Note, though, that this matches an exact <code>"NE"</code> cell in <em>any</em> column; to restrict the match to the 9th column only, test <code>row[8] == "NE"</code> instead.</p>
0
2016-10-06T04:08:21Z
[ "python", "string", "csv", "search" ]
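The fix in both answers can be demonstrated end-to-end in memory: parse with `csv.reader` and compare only index 8, so a substring like `NEARY ROAD` elsewhere in a row no longer matches. A sketch over two shortened, made-up rows:

```python
import csv
import io

raw = (
    "123123,JOHN SMITH,SMITH FARMS,A,N,N,12345 123 AVE,CITY,NE,68355\n"
    "456456,JANE DOE,DOE FARMS,A,N,N,99 NEARY ROAD,CITY,TX,70000\n"
)

def rows_in_state(raw_csv, state):
    """Keep only rows whose 9th field (index 8) equals `state` exactly."""
    reader = csv.reader(io.StringIO(raw_csv))
    return [row for row in reader if len(row) > 8 and row[8] == state]

matches = rows_in_state(raw, "NE")
```

The second row contains the substring "NE" (in "NEARY ROAD") but its state column is "TX", so it is correctly excluded.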
How to extract a string from a list
39,887,149
<p>I'm new to Python without any programming background except for some shell scripting.</p> <p>I want to extract 2 fields from the below list: Here I want 'title' and 'plays' and assign it to a another list eg) new_list=[title,plays]</p> <pre><code>alist=[(0, 'title', 'TEXT', 0, None, 0), (1, 'plays', 'integer', 0, None, 0)] </code></pre>
0
2016-10-06T03:41:39Z
39,887,252
<p>The simplest way, of course, is to just write the assignment statement:</p> <pre><code>new_list=['title','plays'] </code></pre> <p>But you probably intended to ask a more general question, like "How can I extract the 2nd item from the first two tuples in a list?" Like so:</p> <pre><code>new_list = [alist[0][1], alist[1][1]] </code></pre> <p>Or maybe you meant, "How can I extract the 2nd item from each tuple in a list?" Like so:</p> <pre><code>new_list = [t[1] for t in alist] </code></pre>
3
2016-10-06T03:54:55Z
[ "python", "python-2.7", "python-3.x" ]
How to extract a string from a list
39,887,149
<p>I'm new to Python without any programming background except for some shell scripting.</p> <p>I want to extract 2 fields from the below list: Here I want 'title' and 'plays' and assign it to a another list eg) new_list=[title,plays]</p> <pre><code>alist=[(0, 'title', 'TEXT', 0, None, 0), (1, 'plays', 'integer', 0, None, 0)] </code></pre>
0
2016-10-06T03:41:39Z
39,887,269
<pre><code>alist = [(0, 'title', 'TEXT', 0, None, 0), (1, 'plays', 'integer', 0, None, 0)] new_list = [alist[0][1], alist[1][1]] </code></pre> <p>to check,</p> <pre><code>print(new_list) </code></pre> <p><strong>Explanation</strong></p> <p>This line:</p> <pre><code>alist = [(0, 'title', 'TEXT', 0, None, 0), (1, 'plays', 'integer', 0, None, 0)] </code></pre> <p>Above is actually a list of <code>tuple</code>s. So inside <code>alist</code>, there are two <code>tuples</code>. Inside each <code>tuple</code>, there are <code>6</code> <code>objects</code>.</p> <p>So <code>alist[0]</code> means you are calling the first <code>tuple</code> inside <code>alist</code>, and <code>alist[0][1]</code> means you are calling the second element of the first <code>tuple</code>. You can reason about <code>alist[1][1]</code> in the same way.</p>
2
2016-10-06T03:58:00Z
[ "python", "python-2.7", "python-3.x" ]
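All three variants from the first answer can be checked side by side on the question's data:

```python
alist = [(0, 'title', 'TEXT', 0, None, 0), (1, 'plays', 'integer', 0, None, 0)]

by_hand   = ['title', 'plays']            # plain assignment
first_two = [alist[0][1], alist[1][1]]    # index the first two tuples
every_row = [t[1] for t in alist]         # 2nd item of every tuple
```

The list comprehension is the one that keeps working when `alist` grows beyond two tuples.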
Round Robin Scheduling for a pandas dataframe
39,887,219
<p>I have been working on a bit of code that reads in a tab-delimited CSV file, which represents a series of processes and their start times and durations, and creates a dataframe for it using pandas. I then need to apply the simplified round-robin form of scheduling to find the turnaround time for the process, with the time slice taken from the user input. </p> <p>So far, I am able to read in the CSV file, label it and sort it properly. However, when trying to construct the loop to iterate over the rows to find each process' completion time, I get stuck.</p> <p>The code so far looks like:</p> <pre><code># round robin def rr(): docname = (sys.argv[1]) method = (sys.argv[2]) # creates a variable from the user input to define timeslice timeslice = int(re.search(r'\d+', method).group()) # use pandas to create a 2-d data frame from tab delimited file, set column 0 (process names) to string, set column # 1 &amp; 2 (start time and duration, respectively) to integers d = pd.read_csv(docname, delimiter="\t", header=None, dtype={'0': str, '1': np.int32, '2': np.int32}) # sort d into d1 by values of start times[1], ascending d1 = d.sort_values(by=1) # Create a 4th column, set to 0, for the Completion time d1[3] = 0 # change column names d1.columns = ['Process', 'Start', 'Duration', 'Completion'] # intialize counter counter = 0 # if any values in column 'Duration' are above 0, continue the loop while (d1['Duration']).any() &gt; 0: for index, row in d1.iterrows(): # if value in column 'Duration' &gt; the timeslice, add the value of the timeslice to the current counter, # subtract it from the the current value in column 'Duration' if row.Duration &gt; timeslice: counter += timeslice row.Duration -= timeslice print(index, row.Duration) # if value in column "Duration" &lt;= the timeslice, add the current value of the row:Duration to the counter # subtract the Duration from itself, to make it 0 # set row:Completion to the current counter, which is the completion time for the 
process elif row.Duration &lt;= timeslice and row.Duration != 0: counter += row.Duration row.Duration -= row.Duration row.Completion = counter print(index, row.Duration) # otherwise, if the value in Duration is already 0, print that index, with the "Done" indicator else: print(index, "Done") </code></pre> <p>Given the sample CSV file, <code>d1</code> looks like </p> <pre><code> Process Start Duration Completion 3 p4 0 280 0 0 p1 5 140 0 1 p2 14 75 0 2 p3 36 320 0 5 p6 40 0 0 4 p5 67 125 0 </code></pre> <p>And when I run my code with <code>timeslice = 70</code>, I get an infinite loop of:</p> <pre><code>3 210 0 70 1 5 2 250 5 Done 4 55 3 210 0 70 1 5 2 250 5 Done 4 55 </code></pre> <p>which seems it is iterating the loop correctly once, and then infinitely repeating. However, <code>print(d1['Completion'])</code> gives a value of all 0's, meaning it isn't assigning the correct <code>counter</code> value to <code>d1['Completion']</code> either.</p> <p>Ideally, the <code>Completion</code> values would fill out to their corresponding times, given <code>timeslice=70</code> like:</p> <pre><code> Process Start Duration Completion 3 p4 0 280 830 0 p1 5 140 490 1 p2 14 75 495 2 p3 36 320 940 5 p6 40 0 280 4 p5 67 125 620 </code></pre> <p>Which I could then use to find the average turnaround time. For some reason, however, my loop seems to iterate once and then repeat itself endlessly. When I tried switching the order of the <code>while</code> and <code>for</code> statements, it would iterate each row repeatedly until it reached 0, also giving the incorrect completion time.</p> <p>Thanks in advance.</p>
0
2016-10-06T03:50:05Z
39,887,746
<p>I revised your code a little bit and it works. The rows yielded by <code>iterrows()</code> are copies, so assigning to <code>row</code> never writes the revised value back into the dataframe, and the loop never ends.</p> <pre><code>while (d1['Duration']).any() &gt; 0: for index, row in d1.iterrows(): # if value in column 'Duration' &gt; the timeslice, add the value of the timeslice to the current counter, # subtract it from the current value in column 'Duration' if row.Duration &gt; timeslice: counter += timeslice #row.Duration -= timeslice # !!!LOOK HERE!!! d1.loc[index, 'Duration'] -= timeslice print(index, row.Duration) # if value in column "Duration" &lt;= the timeslice, add the current value of the row:Duration to the counter # subtract the Duration from itself, to make it 0 # set row:Completion to the current counter, which is the completion time for the process elif row.Duration &lt;= timeslice and row.Duration != 0: counter += row.Duration #row.Duration -= row.Duration #row.Completion = counter # !!!LOOK HERE!!! d1.loc[index, 'Duration'] = 0 d1.loc[index, 'Completion'] = counter print(index, row.Duration) # otherwise, if the value in Duration is already 0, print that index, with the "Done" indicator else: print(index, "Done") </code></pre> <p><strong>By the way, I guess you might want to simulate the process scheduling algorithm. In that case, you have to consider the 'Start', because not every process starts at the same time.</strong></p> <p><strong>(Your ideal table is somehow wrong.)</strong></p>
0
2016-10-06T04:50:42Z
[ "python", "python-3.x", "csv", "pandas" ]
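The completion times the question expects can be reproduced without pandas at all: a `deque`-based round-robin simulation over (name, duration) pairs with the same 70-unit slice, ignoring start times just as the dataframe loop does. Note a zero-duration process completes at whatever time the scheduler first reaches it, which is where the 280 for p6 comes from:

```python
from collections import deque

def round_robin(jobs, timeslice):
    """Return {name: completion_time} for simple round-robin scheduling
    (arrival times ignored)."""
    queue = deque(jobs)   # (name, remaining_duration) pairs, FIFO order
    t = 0
    completion = {}
    while queue:
        name, remaining = queue.popleft()
        run = min(timeslice, remaining)   # run at most one slice
        t += run
        remaining -= run
        if remaining > 0:
            queue.append((name, remaining))   # back of the queue
        else:
            completion[name] = t
    return completion

jobs = [("p4", 280), ("p1", 140), ("p2", 75), ("p3", 320), ("p6", 0), ("p5", 125)]
done = round_robin(jobs, 70)
```

The result matches the "ideal" table in the question, so it can serve as a reference when debugging the dataframe version.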
Plotting unique values in pandas
39,887,382
<p>I have a data frame where the first column is time and the second is a letter:</p> <pre><code>Time Letter 2016-10-05 20:46:12 'A' 2016-10-05 20:47:12 'A' 2016-10-05 20:50:12 'B' 2016-10-06 00:46:12 'A' 2016-10-06 01:46:12 'B' 2016-10-06 01:47:12 'C' 2016-10-06 02:46:12 'D' </code></pre> <p>I need to group the data by hour and count the number of unique letters per hour:</p> <pre><code>Time Unique_values 2016-10-05 20 2 2016-10-06 00 1 2016-10-06 01 2 2016-10-06 02 1 df.groupby([df.index.date,df.index.hour]).Letter.nunique().plot(kind = 'bar', rot =0) </code></pre> <p>provides the plot with labels like (2016-10-05,7), (2016-10-05,8)...</p> <p>Is there any way to remove the brackets and instead of 7, 8 etc. use 07:00:00, 08:00:00?</p>
0
2016-10-06T04:12:27Z
39,887,966
<p>You can either use pd.Grouper:</p> <pre><code>df.groupby(pd.Grouper(key='Time', freq='H'))['Letter'].nunique() </code></pre> <p>Or set the time column as index and resample:</p> <pre><code>df.set_index('Time').resample('H')['Letter'].nunique() </code></pre> <p>Both will fill the missing interval with zeros. Since you are plotting, I guess you'd want that. If not, you can assign the resulting Series to a variable and filter:</p> <pre><code>ser = df.groupby(pd.Grouper(key='Time', freq='H'))['Letter'].nunique() ser = ser[ser&gt;0] </code></pre> <hr> <p>Due to a <a href="https://github.com/pydata/pandas/issues/13453" rel="nofollow">bug</a>, nunique may not work correctly in the current version. A workaround supplied there by @jcrist is to use pd.Series.nunique with agg. So you can update the above code to:</p> <pre><code>df.groupby(pd.Grouper(key='Time', freq='H'))['Letter'].agg(pd.Series.nunique) </code></pre> <p>Or, </p> <pre><code>df.set_index('Time').resample('H')['Letter'].agg(pd.Series.nunique) </code></pre>
0
2016-10-06T05:11:43Z
[ "python", "pandas" ]
Plotting unique values in pandas
39,887,382
<p>I have a data frame where the first column is time and the second is a letter:</p> <pre><code>Time Letter 2016-10-05 20:46:12 'A' 2016-10-05 20:47:12 'A' 2016-10-05 20:50:12 'B' 2016-10-06 00:46:12 'A' 2016-10-06 01:46:12 'B' 2016-10-06 01:47:12 'C' 2016-10-06 02:46:12 'D' </code></pre> <p>I need to group the data by hour and count the number of unique letters per hour:</p> <pre><code>Time Unique_values 2016-10-05 20 2 2016-10-06 00 1 2016-10-06 01 2 2016-10-06 02 1 </code></pre> <pre><code>df.groupby([df.index.date,df.index.hour]).Letter.nunique().plot(kind = 'bar', rot =0) </code></pre> <p>provides the plot with labels like (2016-10-05,7), (2016-10-05,8)...</p> <p>Is there any way to remove the brackets and instead of 7, 8 etc. use 07:00:00, 08:00:00?</p>
0
2016-10-06T04:12:27Z
39,888,004
<p>You can remove minutes and seconds by converting column <code>Time</code> to <code>numpy array</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.values.html" rel="nofollow"><code>values</code></a> and then <code>groupby</code>:</p> <pre><code>print (df.Time.values.astype('&lt;M8[h]')) ['2016-10-05T20' '2016-10-05T20' '2016-10-05T20' '2016-10-06T00' '2016-10-06T01' '2016-10-06T01' '2016-10-06T02'] print (df.groupby([df.Time.values.astype('&lt;M8[h]')])['Letter'].nunique()) 2016-10-05 20:00:00 2 2016-10-06 00:00:00 1 2016-10-06 01:00:00 2 2016-10-06 02:00:00 1 Name: Letter, dtype: int64 </code></pre> <p><strong>Timings</strong>:</p> <pre><code>In [72]: %timeit (df.groupby([df.Time.values.astype('&lt;M8[h]')])['Letter'].nunique()) 100 loops, best of 3: 7.94 ms per loop In [73]: %timeit (ayh1(df1)) 1 loop, best of 3: 301 ms per loop In [74]: %timeit (ayh2(df2)) 1 loop, best of 3: 298 ms per loop </code></pre> <p><strong>Code</strong>:</p> <pre><code>start = pd.to_datetime('2015-02-24 20:00:15') rng = pd.date_range(start, periods=10000, freq='40T') df = pd.DataFrame({'Time': rng, 'Letter': np.random.choice(list(ascii_letters.upper()), (10000,))}) print (df) df1 = df.copy() df2 = df.copy() def ayh1(df): ser = df.groupby(pd.Grouper(key='Time', freq='H'))['Letter'].agg(pd.Series.nunique) return ser[ser&gt;0] def ayh2(df): ser = df.set_index('Time').resample('H')['Letter'].agg(pd.Series.nunique) return ser[ser&gt;0] print (df.groupby([df.Time.values.astype('&lt;M8[h]')])['Letter'].nunique()) print (ayh1(df1)) print (ayh2(df2)) </code></pre>
0
2016-10-06T05:15:37Z
[ "python", "pandas" ]
How to use two nets in TFlearn?
39,887,397
<p>I am trying to train a DQN to play Tic-Tac-Toe. I trained it to play X (while O moves are random). After 12h of training it plays ok, but not flawlessly. Now I want to train two nets simultaneously: one for X moves and one for O moves. But when I try to do model.predict(state) on the second network, I get errors like:</p> <pre><code>ValueError: Cannot feed value of shape (9,) for Tensor 'InputData/X:0', which has shape '(?, 9)' </code></pre> <p>But I know for sure that the network definitions and data dimensions are identical. There must be something wrong with how the two DNNs are defined.</p> <p>Here is a generic example:</p> <pre><code>import tflearn import random X = [[random.random(),random.random()] for x in range(1000)] #reverse values order like [1,0] -&gt; [0,1] Y = [[x[1],x[0]] for x in X] n = tflearn.input_data(shape=[None,2]) n = tflearn.fully_connected(n, 2) n = tflearn.regression(n) m = tflearn.DNN(n) m.fit(X, Y, n_epoch = 20) #should print like [0.1,0.9] print(m.predict([[0.9,0.1]])) n2 = tflearn.input_data(shape=[None,2]) n2 = tflearn.fully_connected(n2, 2) n2 = tflearn.regression(n2) m2 = tflearn.DNN(n2) # set second element value to first e.g. [1,0] -&gt; [1,1] Y = [[x[0],x[0]] for x in X] m2.fit(X, Y, n_epoch = 20) #should print like [0.9,0.9] print(m2.predict([[0.9,0.1]])) </code></pre> <p>The error looks like:</p> <pre><code>Traceback (most recent call last): File "2_dnn_test.py", line 25, in &lt;module&gt; m2.fit(X, Y, n_epoch = 20) File "/home/cpro/.pyenv/versions/3.5.1/lib/python3.5/site-packages/tflearn/models/dnn.py", line 157, in fit self.targets) File "/home/cpro/.pyenv/versions/3.5.1/lib/python3.5/site-packages/tflearn/utils.py", line 267, in feed_dict_builder feed_dict[net_inputs[i]] = x IndexError: list index out of range </code></pre> <p>The error is different because in my tic-tac-toe code I call predict on the second DNN before the first fit(). 
If I comment out <code>m2.fit(X, Y, n_epoch = 20)</code> in my example I get the same error:</p> <pre><code>Traceback (most recent call last): File "2_dnn_test.py", line 27, in &lt;module&gt; print(m2.predict([[0.9,0.1]])) File "/home/cpro/.pyenv/versions/3.5.1/lib/python3.5/site-packages/tflearn/models/dnn.py", line 204, in predict return self.predictor.predict(feed_dict) File "/home/cpro/.pyenv/versions/3.5.1/lib/python3.5/site-packages/tflearn/helpers/evaluator.py", line 69, in predict o_pred = self.session.run(output, feed_dict=feed_dict).tolist() File "/home/cpro/.pyenv/versions/3.5.1/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 372, in run run_metadata_ptr) File "/home/cpro/.pyenv/versions/3.5.1/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 625, in _run % (np_val.shape, subfeed_t.name, str(subfeed_t.get_shape()))) ValueError: Cannot feed value of shape (2,) for Tensor 'InputData/X:0', which has shape '(?, 2)' </code></pre> <p>So two identical networks do not work at the same time. How do I make both of them work?</p> <p>BTW, the example does not produce the expected prediction result :)</p>
0
2016-10-06T04:14:20Z
39,891,385
<p>Looks like I should add</p> <pre><code>with tf.Graph().as_default(): #define model here </code></pre> <p>to prevent TFLearn from appending both models to the default graph. With this addition everything works.</p>
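Applied to the generic two-network example from the question, the fix might look like the following. This is an untested sketch; it assumes TFLearn's `DNN` binds to whichever graph is the default at construction time, which matches the behavior described above.

```python
import tensorflow as tf
import tflearn

# Build each model inside its own graph so TFLearn does not
# append both sets of ops and inputs to the shared default graph.
with tf.Graph().as_default():
    n = tflearn.input_data(shape=[None, 2])
    n = tflearn.fully_connected(n, 2)
    n = tflearn.regression(n)
    m = tflearn.DNN(n)

with tf.Graph().as_default():
    n2 = tflearn.input_data(shape=[None, 2])
    n2 = tflearn.fully_connected(n2, 2)
    n2 = tflearn.regression(n2)
    m2 = tflearn.DNN(n2)

# m.fit/m.predict and m2.fit/m2.predict can now be called independently.
```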
0
2016-10-06T08:39:38Z
[ "python", "machine-learning", "tensorflow" ]
More arguments in derived class __init__ than base class __init__
39,887,422
<p>How can I extend the <code>__init__</code> of a base class, <strong>add more arguments to be parsed</strong>, without requiring <code>super().__init__(foo, bar)</code> in every derived class?</p> <pre class="lang-python prettyprint-override"><code>class Ipsum: """ A base ipsum instance. """ def __init__(self, foo, bar): self.foo = flonk(foo) grungole(self, bar) self._baz = make_the_baz() class LoremIpsum(Ipsum): """ A more refined ipsum that also lorems. """ def __init__(self, foo, bar, dolor, sit, amet): super().__init__(foo, bar) farnark(sit, self, amet) self.wibble(dolor) </code></pre> <p>The purpose of the example is to show that there is significant processing happening in the <code>Ipsum.__init__</code>, so it should not be duplicated in every sub-class; and the <code>LoremIpsum.__init__</code> needs the <code>foo</code> and <code>bar</code> parameters processed exactly the same as <code>Ipsum</code>, but also has its own special parameters.</p> <p>It also shows that, if <code>Ipsum</code> needs to be modified to accept a different signature, <em>every derived class</em> also needs to change not only its signature, but how it calls the superclass <code>__init__</code>. That's unacceptably fragile.</p> <p>Instead, I'd like to do something like:</p> <pre class="lang-python prettyprint-override"><code>class Ipsum: """ A base ipsum instance. """ def __init__(self, foo, bar, **kwargs): self.foo = flonk(foo) grungole(self, bar) self._baz = make_the_baz() self.parse_init_kwargs(kwargs) def parse_init_kwargs(self, kwargs): """ Parse the remaining kwargs to `__init__`. """ pass class LoremIpsum(Ipsum): """ A more refined ipsum that also lorems. 
""" def parse_init_kwargs(self, kwargs): (dolor, sit, amet) = (kwargs['dolor'], kwargs['sit'], kwargs['amet']) farnark(sit, self, amet) self.wibble(dolor) </code></pre> <p>That has the big advantage that <code>LoremIpsum</code> need <em>only</em> do the parts that are special to that class; handling <code>Ipsum</code> arguments is handled by that class's <code>__init__</code> without any extra code.</p> <p>The disadvantage is obvious, though: this is effectively re-implementing the handling of named parameters by passing a dictionary around. It avoids a lot of repetition, but isn't very clear.</p> <p>What tools are available to avoid the sub-class definitions <em>always</em> needing to declare the <code>foo</code> and <code>bar</code> parameters, and <em>always</em> needing to call <code>super().__init__(foo, bar)</code>? Those are easy to get wrong, so it would be better if they weren't needed and could <strong>just automatically happen</strong>, while still allowing <code>LoremIpsum</code>'s customisation of the initialisation.</p>
2
2016-10-06T04:16:21Z
39,887,755
<p>The usual way to write these is roughly as follows:</p> <pre><code>class Ipsum: # 3.x-ism; in 2.x always inherit from object. def __init__(self, arg1, arg2, arg3): # etc... class LoremIpsum(Ipsum): def __init__(self, arg4, arg5, *args, **kwargs): super().__init__(*args, **kwargs) # 3.x-ism; in 2.x super() needs arguments. # Do stuff with arg4 and arg5 </code></pre> <p>This does not require modifying derived classes when the base class changes signatures. You will still need to modify everything that directly instantiates your base class, so changing the signature is still not something you want to do frequently.</p> <p>This approach is also superior because it will generally behave more or less correctly in the case of multiple inheritance, even if your derived class gets put in front of another derived class in the method resolution order and you've never heard of that other class (in other words, even if <code>super().__init__()</code> is calling a method you know nothing about). This is discussed in detail in Hettinger's <a href="https://rhettinger.wordpress.com/2011/05/26/super-considered-super/" rel="nofollow">super() considered super</a> blog post.</p>
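A quick usage sketch of the pattern above, reusing the question's class names but with the heavy helper calls (`flonk`, `grungole`, etc.) replaced by plain assignments so it actually runs:

```python
class Ipsum:
    def __init__(self, foo, bar):
        self.foo = foo
        self.bar = bar

class LoremIpsum(Ipsum):
    def __init__(self, dolor, sit, *args, **kwargs):
        super().__init__(*args, **kwargs)  # anything not consumed here flows to Ipsum
        self.dolor = dolor
        self.sit = sit

# dolor/sit are peeled off by LoremIpsum; foo arrives positionally, bar as a keyword
li = LoremIpsum("d", "s", "f", bar="b")
print(li.foo, li.bar, li.dolor, li.sit)  # f b d s
```

If `Ipsum` later grows or reorders parameters, this call site changes, but `LoremIpsum.__init__` itself does not.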
1
2016-10-06T04:51:42Z
[ "python", "inheritance" ]
More arguments in derived class __init__ than base class __init__
39,887,422
<p>How can I extend the <code>__init__</code> of a base class, <strong>add more arguments to be parsed</strong>, without requiring <code>super().__init__(foo, bar)</code> in every derived class?</p> <pre class="lang-python prettyprint-override"><code>class Ipsum: """ A base ipsum instance. """ def __init__(self, foo, bar): self.foo = flonk(foo) grungole(self, bar) self._baz = make_the_baz() class LoremIpsum(Ipsum): """ A more refined ipsum that also lorems. """ def __init__(self, foo, bar, dolor, sit, amet): super().__init__(foo, bar) farnark(sit, self, amet) self.wibble(dolor) </code></pre> <p>The purpose of the example is to show that there is significant processing happening in the <code>Ipsum.__init__</code>, so it should not be duplicated in every sub-class; and the <code>LoremIpsum.__init__</code> needs the <code>foo</code> and <code>bar</code> parameters processed exactly the same as <code>Ipsum</code>, but also has its own special parameters.</p> <p>It also shows that, if <code>Ipsum</code> needs to be modified to accept a different signature, <em>every derived class</em> also needs to change not only its signature, but how it calls the superclass <code>__init__</code>. That's unacceptably fragile.</p> <p>Instead, I'd like to do something like:</p> <pre class="lang-python prettyprint-override"><code>class Ipsum: """ A base ipsum instance. """ def __init__(self, foo, bar, **kwargs): self.foo = flonk(foo) grungole(self, bar) self._baz = make_the_baz() self.parse_init_kwargs(kwargs) def parse_init_kwargs(self, kwargs): """ Parse the remaining kwargs to `__init__`. """ pass class LoremIpsum(Ipsum): """ A more refined ipsum that also lorems. 
""" def parse_init_kwargs(self, kwargs): (dolor, sit, amet) = (kwargs['dolor'], kwargs['sit'], kwargs['amet']) farnark(sit, self, amet) self.wibble(dolor) </code></pre> <p>That has the big advantage that <code>LoremIpsum</code> need <em>only</em> do the parts that are special to that class; handling <code>Ipsum</code> arguments is handled by that class's <code>__init__</code> without any extra code.</p> <p>The disadvantage is obvious, though: this is effectively re-implementing the handling of named parameters by passing a dictionary around. It avoids a lot of repetition, but isn't very clear.</p> <p>What tools are available to avoid the sub-class definitions <em>always</em> needing to declare the <code>foo</code> and <code>bar</code> parameters, and <em>always</em> needing to call <code>super().__init__(foo, bar)</code>? Those are easy to get wrong, so it would be better if they weren't needed and could <strong>just automatically happen</strong>, while still allowing <code>LoremIpsum</code>'s customisation of the initialisation.</p>
2
2016-10-06T04:16:21Z
39,887,759
<p>A flexible approach is to have every method in the ancestor tree cooperatively designed to accept keyword arguments and a keyword-arguments dictionary, to remove any arguments that it needs, and to forward the remaining arguments using <code>**kwds</code>, eventually leaving the dictionary empty for the final call in the chain.</p> <p>Each level strips off the keyword arguments that it needs so that the final empty dict can be sent to a method that expects no arguments at all (for example, object.__init__ expects zero arguments):</p> <pre><code>class Shape: def __init__(self, shapename, **kwds): self.shapename = shapename super().__init__(**kwds) class ColoredShape(Shape): def __init__(self, color, **kwds): self.color = color super().__init__(**kwds) cs = ColoredShape(color='red', shapename='circle') </code></pre> <p>For more on this approach, see the "Practical Advice" section of the <a href="https://rhettinger.wordpress.com/2011/05/26/super-considered-super/" rel="nofollow">Super Considered Super</a> blogpost, or watch the related <a href="https://www.youtube.com/watch?v=EiOglTERPEo" rel="nofollow">PyCon video</a>.</p> <p>In your example, the code would look like this:</p> <pre><code>class Ipsum: """ A base ipsum instance. """ def __init__(self, foo, bar): self.foo = flonk(foo) grungole(self, bar) self._baz = make_the_baz() class LoremIpsum(Ipsum): """ A more refined ipsum that also lorems. """ def __init__(self, dolor, sit, amet, **kwds): super().__init__(**kwds) farnark(sit, self, amet) self.wibble(dolor) </code></pre> <p>An instantiation would look like this:</p> <pre><code>li = LoremIpsum(dolor=1, sit=2, amet=3, foo=4, bar=5) </code></pre> <p>Note: this lets you achieve your original objective of adding new arguments to either __init__ method without affecting the other. </p>
2
2016-10-06T04:52:11Z
[ "python", "inheritance" ]
More arguments in derived class __init__ than base class __init__
39,887,422
<p>How can I extend the <code>__init__</code> of a base class, <strong>add more arguments to be parsed</strong>, without requiring <code>super().__init__(foo, bar)</code> in every derived class?</p> <pre class="lang-python prettyprint-override"><code>class Ipsum: """ A base ipsum instance. """ def __init__(self, foo, bar): self.foo = flonk(foo) grungole(self, bar) self._baz = make_the_baz() class LoremIpsum(Ipsum): """ A more refined ipsum that also lorems. """ def __init__(self, foo, bar, dolor, sit, amet): super().__init__(foo, bar) farnark(sit, self, amet) self.wibble(dolor) </code></pre> <p>The purpose of the example is to show that there is significant processing happening in the <code>Ipsum.__init__</code>, so it should not be duplicated in every sub-class; and the <code>LoremIpsum.__init__</code> needs the <code>foo</code> and <code>bar</code> parameters processed exactly the same as <code>Ipsum</code>, but also has its own special parameters.</p> <p>It also shows that, if <code>Ipsum</code> needs to be modified to accept a different signature, <em>every derived class</em> also needs to change not only its signature, but how it calls the superclass <code>__init__</code>. That's unacceptably fragile.</p> <p>Instead, I'd like to do something like:</p> <pre class="lang-python prettyprint-override"><code>class Ipsum: """ A base ipsum instance. """ def __init__(self, foo, bar, **kwargs): self.foo = flonk(foo) grungole(self, bar) self._baz = make_the_baz() self.parse_init_kwargs(kwargs) def parse_init_kwargs(self, kwargs): """ Parse the remaining kwargs to `__init__`. """ pass class LoremIpsum(Ipsum): """ A more refined ipsum that also lorems. 
""" def parse_init_kwargs(self, kwargs): (dolor, sit, amet) = (kwargs['dolor'], kwargs['sit'], kwargs['amet']) farnark(sit, self, amet) self.wibble(dolor) </code></pre> <p>That has the big advantage that <code>LoremIpsum</code> need <em>only</em> do the parts that are special to that class; handling <code>Ipsum</code> arguments is handled by that class's <code>__init__</code> without any extra code.</p> <p>The disadvantage is obvious, though: this is effectively re-implementing the handling of named parameters by passing a dictionary around. It avoids a lot of repetition, but isn't very clear.</p> <p>What tools are available to avoid the sub-class definitions <em>always</em> needing to declare the <code>foo</code> and <code>bar</code> parameters, and <em>always</em> needing to call <code>super().__init__(foo, bar)</code>? Those are easy to get wrong, so it would be better if they weren't needed and could <strong>just automatically happen</strong>, while still allowing <code>LoremIpsum</code>'s customisation of the initialisation.</p>
2
2016-10-06T04:16:21Z
39,887,762
<p>Derived classes need to call the superclass <code>__init__</code>. That's how object-oriented programming works. Is it brittle? Yes, it can be; that's why deep or broad class hierarchies aren't usually a good idea. If the operations on <code>foo</code> and <code>bar</code> in <code>Ipsum</code> are significant (as you say they are), then correctly invoking <code>Ipsum.__init__()</code> is a cost that must be paid. Default values for <code>foo</code> and <code>bar</code> (and any new parameters) are the only tool I can think of that might reduce the amount of textual change needed, but default values aren't always appropriate, so that mitigation might not apply.</p>
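A sketch of the default-value mitigation mentioned above, using a hypothetical new `qux` parameter: when the base class later grows a parameter with a default, existing `super().__init__()` calls in subclasses keep working unchanged.

```python
class Ipsum:
    # qux was added after subclasses were written; the default keeps old calls valid
    def __init__(self, foo, bar, qux="default-qux"):
        self.foo, self.bar, self.qux = foo, bar, qux

class LoremIpsum(Ipsum):
    def __init__(self, foo, bar, dolor):
        super().__init__(foo, bar)  # written before qux existed, still correct
        self.dolor = dolor

li = LoremIpsum(1, 2, 3)
print(li.qux)  # default-qux
```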
0
2016-10-06T04:52:25Z
[ "python", "inheritance" ]
Unable to use jinja2 template in pip package created from python module
39,887,573
<p>I have a python project with the following directory structure</p> <pre><code>foo |--MANIFEST.in |--requirements.txt |--setup.py |--foo.py |--templates/ |--bar.tmpl </code></pre> <p>where <code>setup.py</code> is a python script. Previously, I was using an install script to symlink the script to my user's private <code>bin</code> (and using it with success), but decided to package it up (first time doing so). I'm able to successfully install the package in a virtualenv using <code>$ pip install .</code> from the project's root directory, and I am able to execute most of the script until I generate a template with <code>jinja2</code>. It seems as though either the <code>templates</code> dir is not getting installed with the rest of the package, or my script is not finding the path to the <code>templates</code> dir correctly.</p> <p>Excerpt from <code>foo.py</code>:</p> <pre><code>from jinja2 import Environment, PackageLoader def generate_readme(template_file): template_env = Environment(loader=PackageLoader('foo','templates')) template = template_env.get_template(template_file) template_vars = {"title": get_title()} output = template.render(template_vars) return output </code></pre> <p><strong>Note</strong>: <code>'bar.tmpl'</code> is passed to this function as <code>template_file</code></p> <p>Excerpt from <code>setup.py</code>:</p> <pre><code>from setuptools import setup setup(name='foo', py_modules=['foo'], entry_points={'console_scripts': ['foo = foo:foo']}, include_package_data=True, zip_safe=False) </code></pre> <p>Contents of <code>MANIFEST.in</code>:</p> <pre><code>include templates/* </code></pre> <p>Relevant Traceback:</p> <pre><code> File "/home/username/.virtualenvs/foo/local/lib/python2.7/site-packages/foo.py", line 30, in generate_readme template = template_env.get_template(template_file) File "/home/username/.virtualenvs/foo/local/lib/python2.7/site-packages/jinja2/environment.py", line 812, in get_template return self._load_template(name, 
self.make_globals(globals)) File "/home/username/.virtualenvs/foo/local/lib/python2.7/site-packages/jinja2/environment.py", line 774, in _load_template cache_key = self.loader.get_source(self, name)[1] File "/home/username/.virtualenvs/foo/local/lib/python2.7/site-packages/jinja2/loaders.py", line 235, in get_source raise TemplateNotFound(template) jinja2.exceptions.TemplateNotFound: bar.tmpl </code></pre> <p>I have spent a few hours googling, reading similar stackoverflow threads, reading the jinja and setuptools docs, and reading similar code. Everything I have tested just results in the same error message, and I am somewhat at a loss. Any help is greatly appreciated.</p>
0
2016-10-06T04:32:02Z
39,887,784
<p><code>MANIFEST.in</code> tells what files to include in the source distribution, i.e. <code>python setup.py sdist</code>, but it does not directly affect what files are installed, because <code>pip install .</code> just calls into setuptools and doesn't do anything special with package_data.</p> <p>You need to include the files in the setup.py file, either as package data or as additional files.</p> <p>See <a href="https://docs.python.org/2/distutils/setupscript.html" rel="nofollow">https://docs.python.org/2/distutils/setupscript.html</a> for details; pay attention to <code>data_files</code> and <code>package_data</code> on the page.</p>
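For illustration, a sketch of how the setup call could carry the template as package data. This assumes `foo` is reorganized as a package (a `foo/` directory with `__init__.py` plus `foo/templates/bar.tmpl`), because `package_data` applies to packages rather than standalone `py_modules`; the `foo.foo:foo` entry point is an assumed relocation of the original console-script function.

```python
from setuptools import setup

setup(
    name='foo',
    packages=['foo'],  # foo/ is now a package instead of a lone foo.py module
    package_data={'foo': ['templates/*.tmpl']},
    include_package_data=True,
    entry_points={'console_scripts': ['foo = foo.foo:foo']},
    zip_safe=False,
)
```

With that layout, `PackageLoader('foo', 'templates')` can locate the installed templates.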
1
2016-10-06T04:55:14Z
[ "python", "python-2.7", "pip", "jinja2", "setuptools" ]
How do I apply a smoothing filter with dask
39,887,682
<p>I have a 2 dimensional array, I would like to do a 2 dimensional convolution with a kernel, for example a simple flat square matrix.</p> <p>See for example: <a href="http://nbviewer.jupyter.org/gist/zonca/f0d819048ef7318eff944396b71af1c4" rel="nofollow">http://nbviewer.jupyter.org/gist/zonca/f0d819048ef7318eff944396b71af1c4</a></p> <p>Is there a way to run this multithreaded with <code>dask</code>?</p>
2
2016-10-06T04:44:08Z
39,895,157
<p>The <a href="http://dask.pydata.org/en/latest/array-api.html#dask.array.Array.map_overlap" rel="nofollow">map_overlap</a> method may do what you want. It allows you to map a function over chunks of your array where those chunks have been pre-buffered with an overlapping region from nearby chunks. </p> <p>Something like the following might be a good start:</p> <pre><code>In [1]: import numpy as np In [2]: x = np.random.normal(10, 1, size=(1000, 1000)) In [3]: from scipy.signal import convolve2d In [4]: import dask.array as da In [5]: d = da.from_array(x, chunks=(500, 500)) In [6]: filt = np.ones((8, 8)) In [7]: d.map_overlap(convolve2d, in2=filt, depth=8) Out[7]: dask.array&lt;trim-de..., shape=(1000, 1000), dtype=None, chunksize=(500, 500)&gt; </code></pre> <p>Although beware that the filter you've provided both smooths and amplifies. You might also need to play with the trimming in convolve2d.</p>
0
2016-10-06T11:46:24Z
[ "python", "dask" ]
Separating HTML file with keyword for scraping
39,887,834
<p>I am programming in Python with Scrapy and have a huge <code>html</code> file with a structure similar to the one demonstrated below:</p> <pre><code>&lt;span&gt;keyword&lt;/span&gt; &lt;title&gt;Title 1&lt;/title&gt; &lt;span&gt;Date 1&lt;/span&gt; &lt;div&gt;Content 1&lt;/div&gt; &lt;span&gt;keyword&lt;/span&gt; &lt;title&gt;Title 2&lt;/title&gt; &lt;span&gt;Date 2&lt;/span&gt; &lt;div&gt;Content 2&lt;/div&gt; ... &lt;span&gt;keyword&lt;/span&gt; &lt;title&gt;Title N&lt;/title&gt; &lt;span&gt;Date N&lt;/span&gt; &lt;div&gt;Content N&lt;/div&gt; </code></pre> <p>My goal is to get the <code>title</code>, <code>date</code>, and contents inside <code>div</code> for each section, but the sections themselves are not located in separate <code>div</code> or other elements, just one after another until the N-th section.</p> <p>While I can try to get all the <code>title[1:N]</code>, <code>date[1:N]</code>, and <code>div[1:N]</code> as lists with <code>len() = N</code>, doing so prevents debugging: if <code>N</code> goes to 10,000 and <code>len(title)==len(date)==len(div) -&gt; False</code>, it will be hard to find where things went wrong (e.g. some titles are put in <code>&lt;strong&gt;</code> instead of <code>&lt;title&gt;</code>).</p> <p>One thing I noticed is the <strong>keyword</strong> located between sections. With the help of that keyword, is it possible to separate the entire <code>html</code> into N parts, and hopefully get <code>item[i] = ["Title_i", "Date_i", "DIV_i"]</code> for each section through iteration?</p> <p>This way missing data will be represented as <code>item[1]=["", Date_i, Div_i ]</code> and will be much easier to locate. </p>
-1
2016-10-06T05:00:16Z
39,888,963
<h2>Update</h2> <p>Carl, the thing is that regex and even plain text processing (split, etc.) are not effective on big text samples. </p> <p><strong>XPath</strong> is the technique that fits your case best. </p> <p>Even though the target HTML might be malformed, modern DOM libraries are good at turning HTML into a tree of DOM nodes, which is well suited for parsing. See the posts:</p> <ul> <li>Scrape ... using <a href="http://scraping.pro/python-lxml-scrape-online-dictionary/" rel="nofollow">Python and lxml</a> library</li> <li><a href="http://scraping.pro/extracting-sequential-html-element/" rel="nofollow">Extracting sequential HTML elements with XPath</a> </li> </ul> <h2>Old answer</h2> <p>Carl, you may try to split the HTML file content into concise parts by keywords. </p> <ol> <li>You should know the full set/dictionary of all possible keywords.</li> <li>Some keywords might be duplicated inside a <code>Content</code> part, so you'd better split not on bare keyword values, nor on the <code>&lt;span&gt;keyword&lt;/span&gt;</code> expression, but on the more unique <code>&lt;span&gt;keyword&lt;/span&gt;\s*&lt;title&gt;</code> and <code>&lt;span&gt;keyword&lt;/span&gt;&lt;strong&gt;</code> expressions. That way you split the parts correctly with high probability.</li> </ol>
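As a minimal, runnable illustration of the splitting idea from the old answer, using plain `re` on the question's sample structure (in real Scrapy code the XPath route above is preferable, and missing fields default to `""` exactly as the question wants):

```python
import re

html = """<span>keyword</span> <title>Title 1</title> <span>Date 1</span> <div>Content 1</div>
<span>keyword</span> <title>Title 2</title> <span>Date 2</span> <div>Content 2</div>"""

# Split on the section delimiter; the leading empty piece is dropped.
sections = [s for s in re.split(r"<span>keyword</span>", html) if s.strip()]

items = []
for sec in sections:
    # Each field may be missing, so search defensively and fall back to "".
    title = re.search(r"<title>(.*?)</title>", sec, re.S)
    date = re.search(r"<span>(.*?)</span>", sec, re.S)
    content = re.search(r"<div>(.*?)</div>", sec, re.S)
    items.append([m.group(1) if m else "" for m in (title, date, content)])

print(items)  # [['Title 1', 'Date 1', 'Content 1'], ['Title 2', 'Date 2', 'Content 2']]
```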
0
2016-10-06T06:27:36Z
[ "python", "web-scraping", "scrapy" ]
Can a python function know when it's being called by a list comprehension?
39,887,880
<p>I want to make a python function that behaves differently when it's being called from a list comprehension:</p> <pre><code>def f(): # this function returns False when called normally, # and True when called from a list comprehension pass &gt;&gt;&gt; f() False &gt;&gt;&gt; [f() for _ in range(3)] [True, True, True] </code></pre> <p>I tried looking at the inspect module, the dis module, and lib2to3's parser for something to make this trick work, but haven't found anything. There also might be a simple reason why this cannot exist, that I haven't thought of.</p>
3
2016-10-06T05:04:10Z
39,888,055
<p>Addon to my comment:</p> <pre><code>def f(lst=False): return True if lst else False f() #False [f(True) for _ in range(3)] # [True True True] </code></pre> <p>This would work, but is this the real problem that you're trying to solve? It seems a really unintuitive use-case which can be solved better by other means.</p>
0
2016-10-06T05:19:17Z
[ "python" ]
Can a python function know when it's being called by a list comprehension?
39,887,880
<p>I want to make a python function that behaves differently when it's being called from a list comprehension:</p> <pre><code>def f(): # this function returns False when called normally, # and True when called from a list comprehension pass &gt;&gt;&gt; f() False &gt;&gt;&gt; [f() for _ in range(3)] [True, True, True] </code></pre> <p>I tried looking at the inspect module, the dis module, and lib2to3's parser for something to make this trick work, but haven't found anything. There also might be a simple reason why this cannot exist, that I haven't thought of.</p>
3
2016-10-06T05:04:10Z
39,888,195
<p>You can determine this by inspecting the stack frame in the following sort of way:</p> <pre><code>def f(): try: raise ValueError except Exception as e: if e.__traceback__.tb_frame.f_back.f_code.co_name == '&lt;listcomp&gt;': return True </code></pre> <p>Then: </p> <pre><code>&gt;&gt;&gt; print(f()) None &gt;&gt;&gt; print([f() for x in range(10)]) [True, True, True, True, True, True, True, True, True, True] </code></pre> <p>It's not to be recommended though. Really, it's not.</p> <h3>NOTE</h3> <p>As it stands this only detects list comprehensions as requested. It will not detect the use of a generator. For example:</p> <pre><code>&gt;&gt;&gt; print(list(f() for x in range(10))) [None, None, None, None, None, None, None, None, None, None] </code></pre>
8
2016-10-06T05:30:50Z
[ "python" ]
Cannot import python naoqi library after upgrading Ubuntu 14.04 to 16.04
39,887,889
<p>I have recently upgraded the system to 16.04 Gnome. The most troubling thing I am facing is that I cannot import the NAOqi library for my work. The Python version of this library was pretty simple to set up: just untar the file and set an environment variable called PYTHONPATH pointing to this library, and it worked like a charm on 14.04. Since the upgrade I am facing: </p> <pre><code>Python 2.7.12 (default, Jul 1 2016, 15:12:24) [GCC 5.4.0 20160609] on linux2 Type "help", "copyright", "credits" or "license" for more information. import naoqi Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; File "/home/dell/nao_sdk/pynaoqi/naoqi.py", line 7, in &lt;module&gt; import qi File "/home/dell/nao_sdk/pynaoqi/qi/__init__.py", line 72, in &lt;module&gt; from _qi import Application as _Application ImportError: libqipython.so: cannot open shared object file: No such file or directory </code></pre> <p>If I add a path variable:</p> <p><code>export LD_LIBRARY_PATH=:/home/dell/nao_sdk/pynaoqi/</code> The error changes to:</p> <pre><code>Python 2.7.12 (default, Jul 1 2016, 15:12:24) [GCC 5.4.0 20160609] on linux2 Type "help", "copyright", "credits" or "license" for more information. &gt;&gt;&gt; import naoqi Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; File "/home/dell/nao_sdk/pynaoqi/naoqi.py", line 7, in &lt;module&gt; import qi File "/home/dell/nao_sdk/pynaoqi/qi/__init__.py", line 72, in &lt;module&gt; from _qi import Application as _Application ImportError: libboost_regex.so.1.55.0: cannot open shared object file: No such file or directory </code></pre> <p>What should I do to get it working? I have also tried Python 2.6.9, but the same kind of error occurs:</p> <p><code>ImportError: libboost_python.so.1.55.0: cannot open shared object file: No such file or directory</code></p>
0
2016-10-06T05:04:39Z
39,899,013
<p>Installing libboost 1.55 did the trick. 16.04 comes with libboost 1.58, but NAOqi is not yet compatible with it. Manually installing libboost 1.55 resolved the import error.</p>
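For reference, one way the manual installation can be sketched (untested; the 1.55 runtime packages are not in the 16.04 repositories, so the `.deb` files here are assumed to have been fetched from a 14.04/trusty source beforehand):

```shell
# The tracebacks name libboost_regex.so.1.55.0 and libboost_python.so.1.55.0,
# so install the matching 1.55 runtime packages:
sudo dpkg -i libboost-regex1.55.0_*.deb libboost-python1.55.0_*.deb
sudo ldconfig   # refresh the dynamic linker cache so the new .so files are found
```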
0
2016-10-06T14:43:12Z
[ "python", "python-2.7", "gnome", "ubuntu-16.04", "nao-robot" ]
Django REST FileUpload serializer returns {'file': None}
39,887,923
<p>I’ve been working on a Django project that requires a file upload. I use API approach in my app designs using django-rest-framework. I created my model, APIView, and serializer but unfortunately every time the request goes through the serializer the upload.data returns {'file': None}. If I just use request.FILES['file'] it returns the file no problem but I want to use the serialized data. I'm using dropzone js on the front end to upload the file. Here is my code below. </p> <p>HTML</p> <pre><code>{% extends 'base_profile.html' %} {% load static %} {% block title %}File Import{% endblock %} {% block pagetitle %}File Import{% endblock %} {% block content %} &lt;div class="widget"&gt; &lt;div class="widget-heading clearfix"&gt; &lt;h3 class="widget-title pull-left list-inline"&gt;CSV &lt;/h3&gt; &lt;button type="button" class="btn btn-primary pull-right"&gt;&lt;i class="ti-upload mr-5"&gt;&lt;/i&gt; Upload&lt;/button&gt; &lt;/div&gt; &lt;div class="widget-body"&gt; &lt;form id="type-dz" class="dropzone"&gt;{% csrf_token %}&lt;/form&gt; &lt;/div&gt; &lt;/div&gt; {% endblock %} {% block js %} &lt;script type="text/javascript"&gt; $("#type-dz").dropzone({ url: "{% url 'api_import:file' %}", paramName: "file", acceptedFiles: ".csv", maxFilesize: 2, maxThumbnailFilesize: .5, dictDefaultMessage: "&lt;i class='icon-dz fa fa-file-text-o'&gt;&lt;/i&gt;Drop files here to upload" }); &lt;/script&gt; {% endblock %} </code></pre> <p>urls.py</p> <pre><code>urlpatterns = [ url(r'^api/import/', include('api.import_api.urls', namespace="api_import")), ] </code></pre> <p>api/urls.py</p> <pre><code>urlpatterns = [ url(r'^file/', FileImport.as_view(), name='file'), ] </code></pre> <p>views.py</p> <pre><code>class FileImport(APIView): parser_classes = (MultiPartParser, FormParser,) serializer = ImportSerializer def post(self, request, format=None): upload = self.serializer(data=request.FILES) if upload.is_valid(): file = FileUpload(file=upload.data['file'], 
uploaded_by=request.user.profile) file.save() return Response({'success': 'Imported successfully'}) else: return Response(upload.errors, status=400) </code></pre> <p>serializers.py</p> <pre><code>class ImportSerializer(serializers.Serializer): file = serializers.FileField() </code></pre> <p>models.py</p> <pre><code>class FileUpload(models.Model): file = models.FileField(upload_to='files/%Y/%m/%d') date_uploaded = models.DateTimeField(auto_now=True) uploaded_by = models.ForeignKey('UserProfile', blank=True, null=True) </code></pre>
0
2016-10-06T05:07:33Z
39,892,375
<p>It would be helpful to see how you are uploading the file. If you are using a multipart/form-data request and providing the json for "file" properly, the most likely thing is that the file is failing validation for some reason.</p> <p>If you can, it might also be helpful to test from the browsable api (as that guarantees nothing is wrong with your request).</p> <p>Edit:<br> The issue was that the <code>validated_data</code> field should be used instead of the <code>data</code> field after calling <code>is_valid()</code>.</p>
1
2016-10-06T09:29:01Z
[ "python", "django", "django-rest-framework", "django-serializer" ]
Split a string in parts and convert this in a datetime
39,887,944
<p>I scrapped a html string with the following content.</p> <blockquote> <p>[u'Mitglied seit M\xe4rz 2016']</p> </blockquote> <p>M\xe4rz should be März (German word for March).</p> <p>I want to convert this scrapped output in a datetime. My first try was to convert the output in a string and split this with the following code.</p> <pre><code>strDate = string.split(str(scrapped)) </code></pre> <p>My new output is now:</p> <blockquote> <p>["[u'Mitglied", 'seit', 'M\xe4rz', "2016']"]</p> </blockquote> <p>The next step will be to add the first day of the month in the string.</p> <pre><code>&gt; strDate = "1. " + strDate[2] + " " + strDate[3] </code></pre> <p>New Output is:</p> <blockquote> <p>"1. M\xe4rz 2016']"</p> </blockquote> <p>How can I delete the \xe4 in a ä and delete the ']. And finally how can I convert this string "1. März 2016" in a datetime with Python.</p> <p>Thanks for your answers.</p>
0
2016-10-06T05:09:47Z
39,888,002
<p>There is a way to encode the \xe4 character (renaming the variable so it does not shadow the built-in <code>str</code>):</p> <pre><code>s = 'Mitglied seit M\xe4rz 2016'
s = s.decode('unicode-escape').encode('utf-8')
s = '1. ' + s.split(' ')[2] + ' ' + s.split(' ')[3]
</code></pre>
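<p>To finish the conversion the question asks for, here is a small self-contained sketch (written for Python 3; the month table and function name are my own additions, not something from the scraped page):</p>

```python
import datetime

# German month names mapped to month numbers
GERMAN_MONTHS = {
    u'Januar': 1, u'Februar': 2, u'M\xe4rz': 3, u'April': 4,
    u'Mai': 5, u'Juni': 6, u'Juli': 7, u'August': 8,
    u'September': 9, u'Oktober': 10, u'November': 11, u'Dezember': 12,
}

def german_date(text):
    # e.g. u'1. M\xe4rz 2016' -> datetime.date(2016, 3, 1)
    day, month_name, year = text.split()
    return datetime.date(int(year), GERMAN_MONTHS[month_name], int(day.rstrip('.')))
```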
0
2016-10-06T05:15:32Z
[ "python", "datetime", "scrapy", "type-conversion" ]
Split a string in parts and convert this in a datetime
39,887,944
<p>I scrapped a html string with the following content.</p> <blockquote> <p>[u'Mitglied seit M\xe4rz 2016']</p> </blockquote> <p>M\xe4rz should be März (German word for March).</p> <p>I want to convert this scrapped output in a datetime. My first try was to convert the output in a string and split this with the following code.</p> <pre><code>strDate = string.split(str(scrapped)) </code></pre> <p>My new output is now:</p> <blockquote> <p>["[u'Mitglied", 'seit', 'M\xe4rz', "2016']"]</p> </blockquote> <p>The next step will be to add the first day of the month in the string.</p> <pre><code>&gt; strDate = "1. " + strDate[2] + " " + strDate[3] </code></pre> <p>New Output is:</p> <blockquote> <p>"1. M\xe4rz 2016']"</p> </blockquote> <p>How can I delete the \xe4 in a ä and delete the ']. And finally how can I convert this string "1. März 2016" in a datetime with Python.</p> <p>Thanks for your answers.</p>
0
2016-10-06T05:09:47Z
39,903,281
<p>There is a lot of code. You can simplify or adapt it, but I think it should help (note: the <code>except</code> branch must <code>continue</code> to the next template rather than re-raise, otherwise the first failed format aborts the whole fallback loop):</p> <pre><code># encoding: utf-8
import datetime

months = {
    u'Januar': '1',
    u'Februar': '2',
    u'März': '3',
    u'April': '4',
    u'Mai': '5',
    u'Juni': '6',
    u'Juli': '7',
    u'August': '8',
    u'September': '9',
    u'Oktober': '10',
    u'November': '11',
    u'Dezember': '12'
}


def str2date(str_date, date_format='%d.%m.%Y', err_value=None, do_raise=False):
    u"""Convert a string with a date to a datetime.date instance."""
    if isinstance(str_date, datetime.date):
        return str_date
    datetime_templates = (
        (date_format, 255),
        ('%d.%m.%Y', 10),
        ('%Y-%m-%dT%H:%M:%S', 19),
        ('%Y-%m-%d %H:%M:%S', 19),
        ('%d.%m.%Y %H:%M:%S', 19),
        ('%Y-%m-%dT%H:%M', 16),
        ('%Y-%m-%d %H:%M', 16),
        ('%d.%m.%Y %H:%M', 16),
        ('%Y-%m-%d', 10),
        ('%H:%M:%S', 8),
        ('%H:%M', 5),
    )
    for tmpl, bound in datetime_templates:
        try:
            result = datetime.datetime.strptime(str_date[:bound], tmpl).date()
        except (ValueError, TypeError):
            continue  # try the next template instead of failing immediately
        else:
            return result
    if do_raise:
        raise ValueError('cannot parse date: %r' % str_date)
    return err_value


scrapped = "[u'Mitglied seit M\xe4rz 2016']"
encoded = unicode(scrapped.replace("[u'", '').replace("']", ''), 'unicode-escape')
splitted = encoded.split()
replaced = [months[i] if i in months else i for i in splitted]
str_date = u'.'.join(['1', replaced[2], replaced[3]])

result_date = str2date(str_date)
print result_date
print isinstance(result_date, datetime.date)
</code></pre>
0
2016-10-06T18:31:59Z
[ "python", "datetime", "scrapy", "type-conversion" ]
How to unit test program interacting with block devices
39,888,013
<p>I have a program that interacts with and changes block devices (/dev/sda and such) on linux. I'm using various external commands (mostly commands from the fdisk and GNU fdisk packages) to control the devices. I have made a class that serves as the interface for most of the basic actions with block devices (for information like: What size is it? Where is it mounted? etc.)</p> <p>Here is one such method querying the size of a partition:</p> <pre><code>def get_drive_size(device): """Returns the maximum size of the drive, in sectors. :device the device identifier (/dev/sda and such)""" query_proc = subprocess.Popen(["blockdev", "--getsz", device], stdout=subprocess.PIPE) #blockdev returns the number of 512B blocks in a drive output, error = query_proc.communicate() exit_code = query_proc.returncode if exit_code != 0: raise Exception("Non-zero exit code", str(error, "utf-8")) #I have custom exceptions, this is slight pseudo-code return int(output) #should always be valid </code></pre> <p>So this method accepts a block device path, and returns an integer. The tests will run as root, since this entire program will end up having to run as root anyway.</p> <p>Should I try and test code such as these methods? If so, how? I could try and create and mount image files for each test, but this seems like a lot of overhead, and is probably error-prone itself. It expects block devices, so I cannot operate directly on image files in the file system.</p> <p>I could try mocking, as some answers suggest, but this feels inadequate. It seems like I start to test the implementation of the method, if I mock the Popen object, rather than the output. Is this a correct assessment of proper unit-testing methodology in this case?</p> <p>I am using python3 for this project, and I have not yet chosen a unit-testing framework. In the absence of other reasons, I will probably just use the default unittest framework included in Python.</p>
3
2016-10-06T05:15:54Z
39,888,724
<p>You should look into the <code>mock</code> module (it's part of the standard library as <code>unittest.mock</code> in Python 3).</p> <p>It enables you to run tests without the need to depend on any external resources, while giving you control over how the mocks interact with your code.</p> <p>I would start from the docs in <a href="http://www.voidspace.org.uk/python/mock/" rel="nofollow">Voidspace</a></p> <p>Here's an example. Note that <code>returncode</code> must be mocked as the integer <code>0</code> (not the string <code>'0'</code>) so the <code>exit_code != 0</code> check passes, and the patch target is the dotted module path of the file under test:</p> <pre><code>import unittest2 as unittest
import mock

class GetDriveSizeTestSuite(unittest.TestCase):

    @mock.patch('path.to.your.module.subprocess.Popen')
    def test_a_scenario_with_mock_subprocess(self, mock_popen):
        mock_popen.return_value.communicate.return_value = ('512', '')
        mock_popen.return_value.returncode = 0

        self.assertEqual(512, get_drive_size('some device'))
</code></pre>
2
2016-10-06T06:12:07Z
[ "python", "unit-testing" ]
pyspark type error on reading a pandas dataframe
39,888,188
<p>I read some CSV file into pandas, nicely preprocessed it and set dtypes to desired values of float, int, category. However, when trying to import it into spark I get the following error:</p> <pre><code>Can not merge type &lt;class 'pyspark.sql.types.DoubleType'&gt; and &lt;class 'pyspark.sql.types.StringType'&gt; </code></pre> <p>After trying to trace it for a while I some source for my troubles -> see the CSV file:</p> <pre><code>"myColumns" "" "A" </code></pre> <p>Red into pandas like: <code>small = pd.read_csv(os.path.expanduser('myCsv.csv'))</code></p> <p>And failing to import it to spark with:</p> <pre><code>sparkDF = spark.createDataFrame(small) </code></pre> <p>Currently I use Spark 2.0.0</p> <h3>Possibly multiple columns are affected. How can I deal with this problem?</h3> <p><a href="http://i.stack.imgur.com/55CXw.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/55CXw.jpg" alt="enter image description here"></a></p>
1
2016-10-06T05:30:27Z
39,889,639
<p>You'll need to define the spark <code>DataFrame</code> schema explicitly and pass it to the <code>createDataFrame</code> function:</p> <pre><code>from pyspark.sql.types import *
import pandas as pd

small = pd.read_csv("data.csv")
small.head()
#   myColumns
# 0       NaN
# 1         A

sch = StructType([StructField("myColumns", StringType(), True)])

df = spark.createDataFrame(small, sch)
df.show()
# +---------+
# |myColumns|
# +---------+
# |      NaN|
# |        A|
# +---------+

df.printSchema()
# root
#  |-- myColumns: string (nullable = true)
</code></pre>
2
2016-10-06T07:07:56Z
[ "python", "csv", "pandas", "apache-spark", "pyspark" ]
Python 3.5 running in IDLE shell, but not macOS Terminal
39,888,268
<p>I'm trying to get Python 3.5 to run in my terminal. I made a script using idle that printed out the version of python in use to try to solve my problem. The script looked like this:</p> <pre><code>import sys print(sys.version_info) </code></pre> <p>When I ran it in IDLE, I got the following output:</p> <pre><code>sys.version_info(major=3, minor=5, micro=2, releaselevel='final', serial=0) </code></pre> <p>When I ran it in terminal, I got the following output instead:</p> <pre><code>sys.version_info(major=2, minor=7, micro=10, releaselevel='final', serial=0) </code></pre> <p>I want to be able to run my scripts in terminal because I want to be able to access files and use pip. If you know how I can do one or both of these things in the IDLE shell, or you know how to update terminal so I can do this.</p>
0
2016-10-06T05:37:02Z
39,888,289
<h1>Running Python from the terminal</h1> <p>If you have Python installed, then:</p> <ul> <li><code>python</code> or <code>python2</code> opens the interactive prompt or runs a script if a file is supplied as an argument</li> <li><code>python3</code> does the same as above, but for Python 3</li> </ul> <p>Note that if Python 3 is set as the default Python version, then running <code>python</code> will use Python 3</p> <h1>Running IDLE from the terminal</h1> <p>If you have IDLE installed, then:</p> <ul> <li><code>idle2</code> opens IDLE for Python version 2</li> <li><code>idle3</code> opens IDLE for Python version 3 (unless you change the default python)</li> </ul> <h1>Changing the default Python version</h1> <p>If you want Python 3 to be the default Python version, then take a look at this SO question: <a href="http://stackoverflow.com/questions/18425379/how-to-set-pythons-default-version-to-3-3-on-os-x">How to set Python&#39;s default version to 3.3 on OS X?</a> Be aware that there are some disadvantages and things to work around, so I advise you to read the entire article if you do so.</p>
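<p>A quick way to confirm which interpreter each command launches (assuming <code>python3</code> is installed):</p>

```shell
# Ask each command which major version it runs
python3 -c "import sys; print(sys.version_info.major)"
# The bare "python" command may be Python 2, Python 3, or absent, depending on the system:
command -v python >/dev/null && python -c "import sys; print(sys.version_info.major)" || echo "no 'python' on PATH"
```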
0
2016-10-06T05:38:47Z
[ "python", "osx", "shell", "pip", "python-idle" ]
Python 3.5 running in IDLE shell, but not macOS Terminal
39,888,268
<p>I'm trying to get Python 3.5 to run in my terminal. I made a script using idle that printed out the version of python in use to try to solve my problem. The script looked like this:</p> <pre><code>import sys print(sys.version_info) </code></pre> <p>When I ran it in IDLE, I got the following output:</p> <pre><code>sys.version_info(major=3, minor=5, micro=2, releaselevel='final', serial=0) </code></pre> <p>When I ran it in terminal, I got the following output instead:</p> <pre><code>sys.version_info(major=2, minor=7, micro=10, releaselevel='final', serial=0) </code></pre> <p>I want to be able to run my scripts in terminal because I want to be able to access files and use pip. If you know how I can do one or both of these things in the IDLE shell, or you know how to update terminal so I can do this.</p>
0
2016-10-06T05:37:02Z
39,888,348
<p>To run IDLE from the command line, type <code>idle3</code> in the terminal for Python 3 IDLE; if you want to run Python 2 IDLE, type <code>idle</code> in the terminal.</p> <p>Similarly, if you need to run a script or an interactive prompt from the terminal, type <code>python3</code> when you want to use Python 3 and <code>python</code> when you want to run Python 2.</p>
0
2016-10-06T05:44:15Z
[ "python", "osx", "shell", "pip", "python-idle" ]
How do I iterate through a dictionary's sets of values and return the key of the sets of values that matches a given set?
39,888,354
<p>I have a function guess which is supposed to return the key of the set of animal values (arg2) that matches the set of observations (arg1). For instance, if the set of observations = {'pet,' 'fluffy'} and the dictionary of animals = {'cat': {'pet,' 'fluffy', 'cute'}, 'dog': {'pet'}} then the function should return the key of cat because cat has all of the values and set elements that the set of observations has. Notice that cat also has extra values/elements that observations does not. Also notice that dog has one of the values that the set observations has - "pet" - however, dog doesn't have all of the values that the observations set has, and therefore, it isn't returned. </p> <p>This is my function so far:</p> <pre><code>def guess(observations, animals): for key, value in animals.items(): if observations in value: return key </code></pre> <p>This is a test:</p> <pre><code>guess({'pet', 'fluffy'}, {'cat': {'pet', 'fluffy', 'cute'}, 'dog': {'pet'}}) </code></pre> <p>So far, my function returns None when I want it to return cat. How do I see if my set of observations matches a set and/or subset of a set of animal values? My mind wants me to incorrectly believe that checking if the set of observations is in the set of values is the best way to compare sets. I've also tried iterating through my observations to see if they're in the set of values. This method works to an extent. However, when I try:</p> <pre><code>for key, value in animals.items(): for obs in observations: if obs in value: return key </code></pre> <p>It returns dog. I'd appreciate any help. Thanks</p>
0
2016-10-06T05:44:39Z
39,888,524
<p>The idea is to iterate over all potential animals and check whether the observed values are a subset of their respective attributes. I'm assuming there could be multiple animals satisfying the observations.</p> <pre><code>observations = {'pet', 'fluffy'}
animals = {'cat': {'pet', 'fluffy', 'cute'}, 'dog': {'pet'}}

def guess(obs, anim):
    ret = []
    for animal, attributes in anim.items():
        if obs &lt;= attributes:  # subset or equality implies this animal qualifies
            ret.append(animal)
    return ret

&gt;&gt;&gt; print(guess(observations, animals))
['cat']
</code></pre>
3
2016-10-06T05:56:27Z
[ "python", "python-3.x" ]
How do I iterate through a dictionary's sets of values and return the key of the sets of values that matches a given set?
39,888,354
<p>I have a function guess which is supposed to return the key of the set of animal values (arg2) that matches the set of observations (arg1). For instance, if the set of observations = {'pet,' 'fluffy'} and the dictionary of animals = {'cat': {'pet,' 'fluffy', 'cute'}, 'dog': {'pet'}} then the function should return the key of cat because cat has all of the values and set elements that the set of observations has. Notice that cat also has extra values/elements that observations does not. Also notice that dog has one of the values that the set observations has - "pet" - however, dog doesn't have all of the values that the observations set has, and therefore, it isn't returned. </p> <p>This is my function so far:</p> <pre><code>def guess(observations, animals): for key, value in animals.items(): if observations in value: return key </code></pre> <p>This is a test:</p> <pre><code>guess({'pet', 'fluffy'}, {'cat': {'pet', 'fluffy', 'cute'}, 'dog': {'pet'}}) </code></pre> <p>So far, my function returns None when I want it to return cat. How do I see if my set of observations matches a set and/or subset of a set of animal values? My mind wants me to incorrectly believe that checking if the set of observations is in the set of values is the best way to compare sets. I've also tried iterating through my observations to see if they're in the set of values. This method works to an extent. However, when I try:</p> <pre><code>for key, value in animals.items(): for obs in observations: if obs in value: return key </code></pre> <p>It returns dog. I'd appreciate any help. Thanks</p>
0
2016-10-06T05:44:39Z
39,888,637
<p>What you want to know is whether the value of each item in <code>animals</code> is a superset of <code>observations</code>. Thankfully, <code>set</code> has a <a href="https://docs.python.org/3/library/stdtypes.html#set.issuperset" rel="nofollow">method</a> to test exactly that, so your function is straightforward:</p> <pre><code>def guess(observations, animals):
    return {k for k, v in animals.items() if v.issuperset(observations)}
</code></pre> <p>Note that <code>guess()</code> returns a set, because there may be more than one item in <code>animals</code> that matches the criteria:</p> <pre><code>&gt;&gt;&gt; animals = {'cat': {'pet', 'fluffy', 'cute'}, 'dog': {'pet'}}
&gt;&gt;&gt; observations = {'pet', 'fluffy'}
&gt;&gt;&gt; guess(observations, animals)
{'cat'}
</code></pre> <p>It's also possible to use the <code>&gt;=</code> and <code>&lt;=</code> operators as synonyms of <code>issuperset</code> and <code>issubset</code> (not <code>&gt;</code> and <code>&lt;</code>, which are synonyms for <em>proper</em> supersets and subsets respectively).</p>
1
2016-10-06T06:04:32Z
[ "python", "python-3.x" ]
How do I iterate through a dictionary's sets of values and return the key of the sets of values that matches a given set?
39,888,354
<p>I have a function guess which is supposed to return the key of the set of animal values (arg2) that matches the set of observations (arg1). For instance, if the set of observations = {'pet,' 'fluffy'} and the dictionary of animals = {'cat': {'pet,' 'fluffy', 'cute'}, 'dog': {'pet'}} then the function should return the key of cat because cat has all of the values and set elements that the set of observations has. Notice that cat also has extra values/elements that observations does not. Also notice that dog has one of the values that the set observations has - "pet" - however, dog doesn't have all of the values that the observations set has, and therefore, it isn't returned. </p> <p>This is my function so far:</p> <pre><code>def guess(observations, animals): for key, value in animals.items(): if observations in value: return key </code></pre> <p>This is a test:</p> <pre><code>guess({'pet', 'fluffy'}, {'cat': {'pet', 'fluffy', 'cute'}, 'dog': {'pet'}}) </code></pre> <p>So far, my function returns None when I want it to return cat. How do I see if my set of observations matches a set and/or subset of a set of animal values? My mind wants me to incorrectly believe that checking if the set of observations is in the set of values is the best way to compare sets. I've also tried iterating through my observations to see if they're in the set of values. This method works to an extent. However, when I try:</p> <pre><code>for key, value in animals.items(): for obs in observations: if obs in value: return key </code></pre> <p>It returns dog. I'd appreciate any help. Thanks</p>
0
2016-10-06T05:44:39Z
39,888,853
<p>It appears to me that you're looking for the <em>best</em> match (most matched attributes), so I'd propose this:</p> <pre><code>def guess(observations, animals):
    return max(list((len(observations &amp; value), key)  # most matches
                    for key, value in animals.items()
                    if observations &amp; value  # if this item had matches
                    ) or  # if _nothing_ matched
               [(0, None)]  # use this
               )[1]  # strip off the count
</code></pre>
0
2016-10-06T06:20:09Z
[ "python", "python-3.x" ]
How do constants work in an if block?
39,888,396
<p>I am new to python. I want to learn how to make and use "constants". Here is my code:</p> <pre><code>class Constantine(object):
    ONE = 1
    TWO = 2

    def test(self, code):
        if code not in (self.ONE, self.TWO):
            print("safe")
        else:
            print("not safe")

keeanu = Constantine()
keeanu.test(1)
</code></pre> <p>I expect the code to print safe. But, it prints not safe. Why ?</p>
-2
2016-10-06T05:48:42Z
39,888,493
<p>You are testing whether <code>code</code> is <strong><em>not</em></strong> in <code>(self.ONE, self.TWO)</code>. If it <strong><em>is</em></strong> found, it will print <code>not safe</code>, which it does.</p> <p>The reason it is found is because the interpreter first looks up <code>self.ONE</code> and <code>self.TWO</code> as instance variables. If they are not found at that level, the interpreter will attempt to resolve them as class variables. This succeeds for your code.</p>
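<p>A minimal sketch of the same membership test outside the class makes the flow concrete (the <code>label</code> helper is mine, just for illustration):</p>

```python
ONE, TWO = 1, 2

def label(code):
    # "code not in (ONE, TWO)" is True only for values that are NOT among the constants
    if code not in (ONE, TWO):
        return "safe"
    return "not safe"

print(label(1))  # 1 is in (1, 2), so "not in" is False -> prints "not safe"
print(label(3))  # 3 is not among the constants -> prints "safe"
```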
3
2016-10-06T05:54:19Z
[ "python" ]
Defining the Linear Programming Model for Traveling Salesman in Python
39,888,448
<p>Using Python and <code>PuLP</code> library, how can we create the linear programming model to solve the Traveling Salesman Problem (TSP)?</p> <p>From Wikipedia, the objective function and constraints are </p> <p><a href="http://i.stack.imgur.com/Z9wuZ.png" rel="nofollow"><img src="http://i.stack.imgur.com/Z9wuZ.png" alt="enter image description here"></a></p> <p><strong>Problem:</strong> Here is my partial attempt where I am stuck. </p> <ol> <li><p>I did not include the final constraint in the code because I dont know how to define it. I believe this constraint with u variables are for preventing sub-cycles in the solution</p></li> <li><p>Also, solving for the current model gives decision variables such as <code>x0_0</code> and <code>x1_1</code> being equal to <code>1.0</code> which is definitely wrong... I can't figure out why this is so even though I had</p> <pre><code> if i == j: upperBound = 0 </code></pre></li> </ol> <p><strong>Python Code</strong></p> <pre><code>import pulp def get_dist(tsp): with open(tsp, 'rb') as tspfile: r = csv.reader(tspfile, delimiter='\t') d = [row for row in r] d = d[1:] # skip header row locs = set([r[0] for r in d]) | set([r[1] for r in d]) loc_map = {l:i for i, l in enumerate(locs)} idx_map = {i:l for i, l in enumerate(locs)} dist = [(loc_map[r[0]], loc_map[r[1]], r[2]) for r in d] return dist, idx_map def dist_from_coords(dist, n): points = [] for i in range(n): points.append([0] * n) for i, j, v in dist: points[i][j] = points[j][i] = float(v) return points def find_tour(): tsp_file = '/Users/test/' + 'my-waypoints-dist-dur.tsv' coords, idx_map = get_dist(tsp_file) n = len(idx_map) dist = dist_from_coords(coords, n) # Define the problem m = pulp.LpProblem('TSP', pulp.LpMinimize) # Create variables # x[i,j] is 1 if edge i-&gt;j is on the optimal tour, and 0 otherwise # Also forbid loops x = {} for i in range(n): for j in range(n): lowerBound = 0 upperBound = 1 # Forbid loops if i == j: upperBound = 0 # print i,i x[i,j] = 
pulp.LpVariable('x' + str(i) + '_' + str(j), lowerBound, upperBound, pulp.LpBinary) # x[j,i] = x[i,j] # Define the objective function to minimize m += pulp.lpSum([dist[i][j] * x[i,j] for i in range(n) for j in range(n)]) # Add degree-2 constraint for i in range(n): m += pulp.lpSum([x[i,j] for j in range(n)]) == 2 # Solve and display results status = m.solve() print pulp.LpStatus[status] for i in range(n): for j in range(n): if pulp.value(x[i,j]) &gt;0: print str(i) + '_' + str(j) + ': ' + str( pulp.value(x[i,j]) ) find_tour() </code></pre> <p><strong>my-waypoints-dist-dur.tsv</strong></p> <p>The data file <a href="https://gist.github.com/anonymous/43ba3f6337556eae4ce8104b8ed5992b" rel="nofollow">can be found here</a>.</p> <p><strong>Result</strong></p> <pre><code>0_0: 1.0 0_5: 1.0 1_1: 1.0 1_15: 1.0 2_2: 1.0 2_39: 1.0 3_3: 1.0 3_26: 1.0 4_4: 1.0 4_42: 1.0 5_5: 1.0 5_33: 1.0 6_6: 1.0 6_31: 1.0 7_7: 1.0 7_38: 1.0 8_8: 1.0 8_24: 1.0 9_9: 1.0 9_26: 1.0 10_4: 1.0 10_10: 1.0 11_11: 1.0 11_12: 1.0 12_11: 1.0 12_12: 1.0 13_13: 1.0 13_17: 1.0 14_14: 1.0 14_18: 1.0 15_1: 1.0 15_15: 1.0 16_3: 1.0 16_16: 1.0 17_13: 1.0 17_17: 1.0 18_14: 1.0 18_18: 1.0 19_19: 1.0 19_20: 1.0 20_4: 1.0 20_20: 1.0 21_21: 1.0 21_25: 1.0 22_22: 1.0 22_27: 1.0 23_21: 1.0 23_23: 1.0 24_8: 1.0 24_24: 1.0 25_21: 1.0 25_25: 1.0 26_26: 1.0 26_43: 1.0 27_27: 1.0 27_38: 1.0 28_28: 1.0 28_47: 1.0 29_29: 1.0 29_31: 1.0 30_30: 1.0 30_34: 1.0 31_29: 1.0 31_31: 1.0 32_25: 1.0 32_32: 1.0 33_28: 1.0 33_33: 1.0 34_30: 1.0 34_34: 1.0 35_35: 1.0 35_42: 1.0 36_36: 1.0 36_47: 1.0 37_36: 1.0 37_37: 1.0 38_27: 1.0 38_38: 1.0 39_39: 1.0 39_44: 1.0 40_40: 1.0 40_43: 1.0 41_41: 1.0 41_45: 1.0 42_4: 1.0 42_42: 1.0 43_26: 1.0 43_43: 1.0 44_39: 1.0 44_44: 1.0 45_15: 1.0 45_45: 1.0 46_40: 1.0 46_46: 1.0 47_28: 1.0 47_47: 1.0 ... 
</code></pre> <hr> <p><strong>Updated Code</strong></p> <pre><code>import csv import pulp def get_dist(tsp): with open(tsp, 'rb') as tspfile: r = csv.reader(tspfile, delimiter='\t') d = [row for row in r] d = d[1:] # skip header row locs = set([r[0] for r in d]) | set([r[1] for r in d]) loc_map = {l:i for i, l in enumerate(locs)} idx_map = {i:l for i, l in enumerate(locs)} dist = [(loc_map[r[0]], loc_map[r[1]], r[2]) for r in d] return dist, idx_map def dist_from_coords(dist, n): points = [] for i in range(n): points.append([0] * n) for i, j, v in dist: points[i][j] = points[j][i] = float(v) return points def find_tour(): tsp_file = '/Users/test/' + 'my-waypoints-dist-dur.tsv' coords, idx_map = get_dist(tsp_file) n = len(idx_map) dist = dist_from_coords(coords, n) # Define the problem m = pulp.LpProblem('TSP', pulp.LpMinimize) # Create variables # x[i,j] is 1 if edge i-&gt;j is on the optimal tour, and 0 otherwise # Also forbid loops x = {} for i in range(n+1): for j in range(n+1): lowerBound = 0 upperBound = 1 # Forbid loops if i == j: upperBound = 0 # print i,i # Create the decision variable and First constraint x[i,j] = pulp.LpVariable('x' + str(i) + '_' + str(j), lowerBound, upperBound, pulp.LpBinary) # x[j,i] = x[i,j] # Define the objective function to minimize m += pulp.lpSum([dist[i][j] * x[i,j] for i in range(1,n+1) for j in range(1,n+1)]) # Add degree-2 constraint (3rd and 4th) for i in range(1,n+1): m += pulp.lpSum([x[i,j] for j in range(1,n+1)]) == 2 # Add the last (5th) constraint (prevents subtours) u = [] for i in range(1, n+1): u.append(pulp.LpVariable('u_' + str(i), cat='Integer')) for i in range(1, n-1): for j in range(i+1, n+1): m += pulp.lpSum([ u[i] - u[j] + n*x[i,j]]) &lt;= n-1 # status = m.solve() # print pulp.LpStatus[status] # for i in range(n): # for j in range(n): # if pulp.value(x[i,j]) &gt;0: # print str(i) + '_' + str(j) + ': ' + str( pulp.value(x[i,j]) ) find_tour() </code></pre>
1
2016-10-06T05:52:00Z
39,888,814
<p>The last constraint is <strong>not</strong> a single constraint. You should add one constraint for each pair of indices <code>i, j</code> that satisfies that condition:</p> <pre><code>for i in range(n - 1):
    for j in range(i + 1, n):
        m += pulp.lpSum([u[i] - u[j] + n * x[i, j]]) &lt;= n - 1
</code></pre> <p>However, you first have to declare the <code>u_i</code> variables as integers, passing the <code>cat='Integer'</code> argument to <a href="https://pythonhosted.org/PuLP/pulp.html" rel="nofollow"><code>LpVariable</code></a>:</p> <pre><code>u = []
for i in range(n):
    u.append(pulp.LpVariable('u_' + str(i), cat='Integer'))
</code></pre>
0
2016-10-06T06:17:42Z
[ "python", "algorithm", "python-2.7", "linear-programming", "pulp" ]
Searching files based on extension using os.path
39,888,494
<p>I was wondering how to search for files based on their extension. If you were to search for py or txt or some other extension, the user would input it and then it would search for it. I'm not allowed to use glob nor os.walk</p> <p>This is the method I have. If necessary, I'll post the whole code.</p> <p>The user first inputs a directory which is in another method after that, they would say</p> <p>E txt or E .txt (both works) and it would return the files based on what extension you used. I'm not sure what's wrong with my code though.</p>
0
2016-10-06T05:54:20Z
39,888,621
<p>In your code,</p> <pre><code>for file in directory
</code></pre> <p>is definitely wrong, as <code>directory</code> is a string holding the path. It should be something like</p> <pre><code>for file in os.listdir(directory):
</code></pre> <p>to get a list of the files in the directory.</p>
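<p>Putting it together, a sketch of extension matching built on <code>os.listdir</code> (the function and variable names are mine, not the asker's):</p>

```python
import os

def find_by_extension(directory, extension):
    """Return names of files in `directory` whose extension matches ('txt' or '.txt')."""
    if not extension.startswith('.'):
        extension = '.' + extension
    return [name for name in os.listdir(directory)
            if os.path.isfile(os.path.join(directory, name))
            and os.path.splitext(name)[1] == extension]
```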
1
2016-10-06T06:03:15Z
[ "python", "os.path" ]
How can I draw a point with Canvas in Tkinter?
39,888,580
<p>I want to draw a point in Tkinter,Now I'm using <code>Canvas</code> to make it,but I didn't find such method to draw a point in <code>Canvas</code> class.<code>Canvas</code> provides a method called <code>crete_line(x1,y1,x2,y2)</code>,so i tried to set <code>x1=x2,y1=y2</code> to draw a point, but it doesn't work.</p> <p>So anyone can tell me how to make it,it will be better if use <code>Canvas</code> can make it,other solution will be also accepted.Thanks!</p>
0
2016-10-06T06:00:44Z
39,890,145
<p>There is no method to directly put a point on <code>Canvas</code>. The method below draws points using the <code>create_oval</code> method.</p> <p>Try this:</p> <pre><code>from Tkinter import *

canvas_width = 500
canvas_height = 150

def paint(event):
    python_green = "#476042"
    x1, y1 = (event.x - 1), (event.y - 1)
    x2, y2 = (event.x + 1), (event.y + 1)
    w.create_oval(x1, y1, x2, y2, fill=python_green)

master = Tk()
master.title("Points")
w = Canvas(master, width=canvas_width, height=canvas_height)
w.pack(expand=YES, fill=BOTH)
w.bind("&lt;B1-Motion&gt;", paint)

message = Label(master, text="Press and Drag the mouse to draw")
message.pack(side=BOTTOM)

mainloop()
</code></pre>
0
2016-10-06T07:34:13Z
[ "python", "canvas", "tkinter" ]
Print predicted column content for Naive Bayes
39,888,607
<pre><code>import warnings from sklearn.naive_bayes import GaussianNB import numpy as np import csv with open('/home/kk/Neha/demo1.csv', 'rb') as csvfile: lines = csv.reader(csvfile) for row in lines: print ', '.join(row) x = dataset[:,0:1] y = dataset[:,1] model = GaussianNB() model.fit(x, y) predicted= model.predict([43]) DeprecationWarning("ignore") print predicted </code></pre> <p>Now I want to print the predicted value as the content that <code>y</code> contains.</p> <p>I am getting index values like 1, 2, 3, 4, 5,</p> <p>but I want the actual value.</p>
-2
2016-10-06T06:02:29Z
39,888,717
<p>You need a map/list from the index to the associated value, and then:</p> <pre><code>print "\n".join([index_to_value[p] for p in predictions]) </code></pre>
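A self-contained sketch of that index-to-value idea (the labels in `index_to_value` are made up for illustration; build the real mapping from your own training data):

```python
# Hypothetical lookup from the numeric class index returned by predict()
# to the text it stands for -- replace with your real labels
index_to_value = {1: 'low', 2: 'medium', 3: 'high'}

predicted = [3, 1, 2]  # stand-in for the output of model.predict(...)
labels = [index_to_value[p] for p in predicted]
result = '\n'.join(labels)
```

`result` then holds the label text, one prediction per line, ready to print.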
0
2016-10-06T06:11:31Z
[ "python", "algorithm", "classification" ]
Not sure why my python output is looping
39,888,626
<p>I wrote a little bit of code to read a number from a file, append it to a variable, then increment the number so that the next time it runs the number in the file will be number + 1. It looks like it's working, except it seems to increment twice. For example, here is my code:</p> <pre><code> 11 def mcIPNumber(): 12 with open('mcIPlatest.txt', 'r+') as file: 13 NameNumber= file.read().replace('\n','') 14 NameNumber=int(NameNumber) 15 NewNumber= NameNumber+1 16 print "newnumber = %s" % NewNumber 17 file.seek(0) 18 file.write(str(NewNumber)) 19 file.truncate() 20 return NameNumber 21 22 def makeNameMCTag(): 23 NameNumber = mcIPNumber() 24 NameTag = "varName" + str(NameNumber) 25 print "Name Tag: %s" % NameTag 26 mcGroup = "varTagmc" 27 #IPNumber = 1 28 mcIP = "172.16.0.%s" % NameNumber 29 print ( "Multicast Tag: %s, %s" % (mcGroup,mcIP)) 30 31 32 mcIPNumber() 33 makeNameMCTag() </code></pre> <p>But here is my output. Notice that "NewNumber" gets printed out twice for some reason:</p> <pre><code>newnumber = 2 newnumber = 3 Name Tag: varName2 Multicast Tag: varTagmc, 172.16.0.2 </code></pre> <p>So it correctly made my varName2 and my IP 172.16.0.2 (incremented my initial number in the file by 1), but this means the 2nd time I run it I get this:</p> <pre><code>newnumber = 4 newnumber = 5 Name Tag: varName Multicast Tag: varTagmc, 172.16.0.4 </code></pre> <p>My expected result is this:</p> <pre><code>newnumber = 3 Name Tag: varName3 Multicast Tag: varTagmc, 172.16.0.3 </code></pre> <p>Any idea why it's looping?</p> <p>Thanks!</p> <p>(By the way, if you're curious, I'm trying to write some code which will eventually write the .tf file for my Terraform lab.)</p>
0
2016-10-06T06:03:39Z
39,888,649
<p>Because of this:</p> <pre><code> def makeNameMCTag(): NameNumber = mcIPNumber() </code></pre> <p>You are calling <code>mcIPNumber</code> from inside <code>makeNameMCTag</code>, so you don't explicitly need to call that method on line 32.</p> <p>Alternatively</p> <pre><code>def make_name_mc_tag(name_number): NameTag = "varName" + str(name_number) print "Name Tag: %s" % NameTag ... make_name_mc_tag(mcIPNumber()) </code></pre> <p>Here you are passing the required data as a parameter.</p>
5
2016-10-06T06:05:10Z
[ "python", "loops" ]
Quick way to find the duplicate cell in a certain column of data frame in python-pandas?
39,888,653
<p>Given a data frame <code>df</code> in the following form:</p> <pre><code>item attr 1 {1, 2, 3, 4} 2 {2, 4, 3, 2, 10} 3 {4, 37} 4 {1, 2, 3, 4} </code></pre> <p>I want to find the item pairs with the same <code>attr</code>, like <code>item 1</code> and <code>item 4</code>. Please notice that the <code>df</code> has <code>200,000</code> items in total, and I want the fastest way to find them. Do you know how to do it? Thanks in advance!</p>
1
2016-10-06T06:05:18Z
39,888,926
<p>You can first convert <code>set</code> to <code>tuple</code> and then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.aggregate.html" rel="nofollow"><code>aggregate</code></a> <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.SeriesGroupBy.nunique.html" rel="nofollow"><code>nunique</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.SeriesGroupBy.unique.html" rel="nofollow"><code>unique</code></a>. Last use <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow"><code>boolean indexing</code></a>:</p> <pre><code>df = pd.DataFrame({'item':[1,2,3,4], 'attr':[set({1, 2, 3, 4}),set({2, 4, 3, 2, 10}), set({4, 37}), set({1, 2, 3, 4})]}) print (df) attr item 0 {1, 2, 3, 4} 1 1 {3, 10, 2, 4} 2 2 {4, 37} 3 3 {1, 2, 3, 4} 4 df.attr = df.attr.apply(tuple) print (df) attr item 0 (1, 2, 3, 4) 1 1 (3, 10, 2, 4) 2 2 (4, 37) 3 3 (1, 2, 3, 4) 4 df1 = df.item.groupby(df['attr']).agg(['nunique', 'unique']) df1 = df1[df1['nunique'] == 2] print (df1) nunique unique attr (1, 2, 3, 4) 2 [1, 4] </code></pre> <p>Another solution if there are only one or pair values in <code>DataFrame</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.duplicated.html" rel="nofollow"><code>duplicated</code></a>:</p> <pre><code>df = pd.DataFrame({'item':[1,2,3,4], 'attr':[set({1, 2, 3, 4}),set({4, 37}), set({4, 37}), set({1, 2, 3, 4})]}) print (df) attr item 0 {1, 2, 3, 4} 1 1 {4, 37} 2 2 {4, 37} 3 3 {1, 2, 3, 4} 4 df.attr = df.attr.apply(tuple) df1 = df[df.duplicated('attr', keep=False)] df1 = df1.groupby('attr')['item'].apply(lambda x: x.tolist()) print (df1) (1, 2, 3, 4) [1, 4] (4, 37) [2, 3] Name: item, dtype: object </code></pre> <p>EDIT by comment:</p> <p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.melt.html" rel="nofollow"><code>melt</code></a> for reshaping:</p> 
<pre><code>df = pd.DataFrame({'item':[1,2,3,4,5], 'attr1':[set({1, 2, 3, 4}),set({4, 37}),set({4, 37}), set({1, 2, 3, 4}), set({4,8})], 'attr2':[set({1, 2 }),set({4, 37}), set({4, 3}), set({1, 2}), set({4,8})]}) print (df) attr1 attr2 item 0 {1, 2, 3, 4} {1, 2} 1 1 {4, 37} {4, 37} 2 2 {4, 37} {3, 4} 3 3 {1, 2, 3, 4} {1, 2} 4 4 {8, 4} {8, 4} 5 df = pd.melt(df, id_vars='item', value_name='attr').drop('variable', axis=1) df.attr = df.attr.apply(tuple) print (df) item attr 0 1 (1, 2, 3, 4) 1 2 (4, 37) 2 3 (4, 37) 3 4 (1, 2, 3, 4) 4 5 (8, 4) 5 1 (1, 2) 6 2 (4, 37) 7 3 (3, 4) 8 4 (1, 2) 9 5 (8, 4) </code></pre>
1
2016-10-06T06:25:05Z
[ "python", "pandas", "python-3.5" ]
Extract specific string part from HTML content with Regex
39,888,770
<p>I'm trying to extract with REGEX from the following string:</p> <p><code>&lt;tr class="data"&gt;&lt;td class="first"&gt;&lt;a href="en/clinics/al-khayal-medical-centre-6d15db1f.aspx"&gt;AL KHAYAL MEDICAL CENTRE&lt;/a&gt;&lt;/td&gt;&lt;td class="second not-responsive"&gt;&lt;a href="/portal/en/healthcare/clinics.aspx?speciality=Urology"&gt;Urology&lt;/a&gt;&lt;/td&gt;&lt;td class="third"&gt;&lt;label style="display:block ;direction:ltr; text-align:left"&gt;042765600&lt;/label&gt;&lt;/td&gt;&lt;td class="fourth"&gt;&lt;a href="http://www.alkhayalmedicalcentre.com" target="_blank"&gt;www.alkhayalmedicalcentre.com&lt;/a&gt;&lt;/td&gt;&lt;td class="fifth not-responsive"&gt;&lt;a class="clinic-details btn noborder basic small" href="en/clinics/al-khayal-medical-centre-6d15db1f.aspx" rel="1dfd7cf2-49c8-43eb-80b8-9393c9f48ca4"&gt;More Info&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;</code></p> <p>I'm trying to extract just the following part: </p> <pre><code>en/clinics/almavita-medical-clinic-fz-llc.aspx </code></pre>
-2
2016-10-06T06:15:29Z
39,888,858
<p>The modern way of doing this is to use something like BeautifulSoup. One does not try to process markup using regular expressions anymore, because the resulting code is unreadable and even the slightest change to the markup can break your beautifully crafted regular expression.</p> <pre><code>from bs4 import BeautifulSoup bs = BeautifulSoup('&lt;tr class="data"&gt;&lt;td class="first"&gt;&lt;a href="en/clinics/al-khayal-medical-centre-6d15db1f.aspx"&gt;AL KHAYAL MEDICAL CENTRE&lt;/a&gt;&lt;/td&gt;&lt;td class="second not-responsive"&gt;&lt;a href="/portal/en/healthcare/clinics.aspx?speciality=Urology"&gt;Urology&lt;/a&gt;&lt;/td&gt;&lt;td class="third"&gt;&lt;label style="display:block ;direction:ltr; text-align:left"&gt;042765600&lt;/label&gt;&lt;/td&gt;&lt;td class="fourth"&gt;&lt;a href="http://www.alkhayalmedicalcentre.com" target="_blank"&gt;www.alkhayalmedicalcentre.com&lt;/a&gt;&lt;/td&gt;&lt;td class="fifth not-responsive"&gt;&lt;a class="clinic-details btn noborder basic small" href="en/clinics/al-khayal-medical-centre-6d15db1f.aspx" rel="1dfd7cf2-49c8-43eb-80b8-9393c9f48ca4"&gt;More Info&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt; ') for a in bs.find_all('a'): print a['href'] </code></pre> <p>gives</p> <pre><code>en/clinics/al-khayal-medical-centre-6d15db1f.aspx en/clinics/al-khayal-medical-centre-6d15db1f.aspx /portal/en/healthcare/clinics.aspx?speciality=Urology http://www.alkhayalmedicalcentre.com en/clinics/al-khayal-medical-centre-6d15db1f.aspx </code></pre> <p>Painless.</p> <p>Update: in the original version of this answer I didn't notice that there were multiple <code>a</code> tags. Modified to process all <code>a</code> tags. If you want just the first one, use <code>bs.a['href']</code>.</p>
2
2016-10-06T06:20:21Z
[ "python", "regex", "python-2.7", "html-parsing" ]
Extract specific string part from HTML content with Regex
39,888,770
<p>I'm trying to extract with REGEX from the following string:</p> <p><code>&lt;tr class="data"&gt;&lt;td class="first"&gt;&lt;a href="en/clinics/al-khayal-medical-centre-6d15db1f.aspx"&gt;AL KHAYAL MEDICAL CENTRE&lt;/a&gt;&lt;/td&gt;&lt;td class="second not-responsive"&gt;&lt;a href="/portal/en/healthcare/clinics.aspx?speciality=Urology"&gt;Urology&lt;/a&gt;&lt;/td&gt;&lt;td class="third"&gt;&lt;label style="display:block ;direction:ltr; text-align:left"&gt;042765600&lt;/label&gt;&lt;/td&gt;&lt;td class="fourth"&gt;&lt;a href="http://www.alkhayalmedicalcentre.com" target="_blank"&gt;www.alkhayalmedicalcentre.com&lt;/a&gt;&lt;/td&gt;&lt;td class="fifth not-responsive"&gt;&lt;a class="clinic-details btn noborder basic small" href="en/clinics/al-khayal-medical-centre-6d15db1f.aspx" rel="1dfd7cf2-49c8-43eb-80b8-9393c9f48ca4"&gt;More Info&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;</code></p> <p>I'm trying to extract just the following part: </p> <pre><code>en/clinics/almavita-medical-clinic-fz-llc.aspx </code></pre>
-2
2016-10-06T06:15:29Z
39,888,890
<pre><code>import re html = ''' &lt;tr class="data"&gt;&lt;td class="first"&gt;&lt;a href="en/clinics/al-khayal- medical-centre-6d15db1f.aspx"&gt;AL KHAYAL MEDICAL CENTRE&lt;/a&gt;&lt;/td&gt; &lt;td class="second not-responsive"&gt;&lt;a href="/portal/en/healthcare/clinics.aspx?speciality=Urology"&gt;Urology&lt;/a&gt;&lt;/td&gt;&lt;td class="third"&gt; &lt;label style="display:block ;direction:ltr; text-align:left"&gt;042765600&lt;/label&gt;&lt;/td&gt;&lt;td class="fourth"&gt; &lt;a href="http://www.alkhayalmedicalcentre.com" target="_blank"&gt;www.alkhayalmedicalcentre.com&lt;/a&gt; &lt;/td&gt;&lt;td class="fifth not-responsive"&gt; &lt;a class="clinic-details btn noborder basic small" href="en/clinics/al-khayal-medical-centre-6d15db1f.aspx" rel="1dfd7cf2-49c8-43eb-80b8-9393c9f48ca4"&gt;More Info&lt;/a&gt; &lt;/td&gt;&lt;/tr&gt; ''' pat = re.compile(r"a href=(\".*?\")") # for being more specific can use this regex: # pat = re.compile(r"&lt;a.*?href=(\".*?\").*?&gt;") # retrieves all links print (re.findall(pat, html)) </code></pre> <p>will give:</p> <pre><code>[ '"en/clinics/al-khayal-medical-centre-6d15db1f.aspx"', '"/portal/en/healthcare/clinics.aspx?speciality=Urology"', '"http://www.alkhayalmedicalcentre.com"'] </code></pre> <p>so, do:</p> <pre><code># return the first occurring link print (re.findall(pat, html)[0]) </code></pre> <p>output:</p> <pre><code>"en/clinics/al-khayal-medical-centre-6d15db1f.aspx" </code></pre>
0
2016-10-06T06:22:19Z
[ "python", "regex", "python-2.7", "html-parsing" ]
Python loop ask
39,888,809
<p>Can someone help me with how to display both food and drink if I answered yes?</p> <p>Example:</p> <p>choose:1</p> <p>you have entered food</p> <p>Do you want to select another?(Y/N):y</p> <p>choose:2</p> <p>Do you want to select another?(Y/N):n</p> <p>selected:</p> <p>food</p> <p>drink</p> <p>Thanks in advance.</p> <pre><code> answer="Y" while(answer=="Y"): print("1-food") print("2-drink") opt=int(input("Choose:")) if(opt==1): print("You have selected food") if(opt==2): print("You have selected drink") answer=input("Do you want to select another?(Y/N)").upper() </code></pre>
0
2016-10-06T06:17:22Z
39,889,151
<pre><code>food = False drink = False answer="Y" while(answer=="Y"): print("1-food") print("2-drink") opt=int(input("Choose:")) if(opt==1): print("You have selected food") food = True if(opt==2): print("You have selected drink") drink = True answer=input("Do you want to select another?(Y/N)").upper() if food: print("food") if drink: print("drink") </code></pre> <p>All you need to do is add two variables, <code>food</code> and <code>drink</code>; if you selected the relevant option, set the corresponding one to <code>True</code>. Then, after your <code>while</code> loop, you simply test to see whether they are set.</p>
0
2016-10-06T06:39:11Z
[ "python", "loops", "while-loop" ]
python writing list to text file result in different lengths
39,888,880
<p>I have 2 lists of strings with the same length, but when I write them to a file where each item appears on a separate line, the length of the list and the number of lines in the file do not match:</p> <pre><code>print len(x) print len(y) 317858 317858 </code></pre> <p>However, when I write each item in the list to a text file, the number of lines in the text file does not match the length of the list.</p> <pre><code>with open('a.txt', 'wb') as f: for i in x[:222500]: print &gt;&gt; f, i </code></pre> <p>In Linux, <code>wc -l a.txt</code> gives 222499, which is right.</p> <pre><code>with open('b.txt', 'wb') as f: for i in y[:222500]: print &gt;&gt; f, i </code></pre> <p>In Linux, <code>wc -l b.txt</code> gives 239610, which is wrong.</p> <p>When I <code>vi b.txt</code> in the terminal, it did have 239610 lines, so I am quite confused as to why this is happening.</p> <p>How can I debug this?</p>
0
2016-10-06T06:21:39Z
39,889,049
<p>The only possible way to get more lines in b.txt than the number of strings written is that some of the strings in <code>y</code> actually contain newlines.</p> <p>Here is a small example:</p> <pre><code>l = [ 'a', 'b\nc'] print len(l) with open('tst.txt', 'wb') as fd: for i in l: print &gt;&gt; fd, i </code></pre> <p>This little code will print 2 because list <code>l</code> contains 2 elements, but the resulting file will contain 3 lines:</p> <pre><code>a b c </code></pre>
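To find which entries of the real list are responsible, a quick check like this works (the short list here is just a stand-in for the 317858-element `y`):

```python
y = ['foo', 'bar\nbaz', 'qux']  # stand-in for the real list of strings
# Indices of every string that would span more than one line in the file
bad = [i for i, s in enumerate(y) if '\n' in s]
```

Each index in `bad` points at a string that will add extra lines to the output file.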
0
2016-10-06T06:34:06Z
[ "python", "linux" ]
python writing list to text file result in different lengths
39,888,880
<p>I have 2 lists of strings with the same length, but when I write them to a file where each item appears on a separate line, the length of the list and the number of lines in the file do not match:</p> <pre><code>print len(x) print len(y) 317858 317858 </code></pre> <p>However, when I write each item in the list to a text file, the number of lines in the text file does not match the length of the list.</p> <pre><code>with open('a.txt', 'wb') as f: for i in x[:222500]: print &gt;&gt; f, i </code></pre> <p>In Linux, <code>wc -l a.txt</code> gives 222499, which is right.</p> <pre><code>with open('b.txt', 'wb') as f: for i in y[:222500]: print &gt;&gt; f, i </code></pre> <p>In Linux, <code>wc -l b.txt</code> gives 239610, which is wrong.</p> <p>When I <code>vi b.txt</code> in the terminal, it did have 239610 lines, so I am quite confused as to why this is happening.</p> <p>How can I debug this?</p>
0
2016-10-06T06:21:39Z
39,889,136
<p>I'm sure others will quickly point out the cause of this difference (it's related to newline characters), but since you asked 'How can I debug this?' I'd like to address that question:</p> <p>Since the only difference between the passing and the failing run are the lists themselves, I'd concentrate on those. There is some difference between the lists (i.e. at least one differing list element) which triggers this. Hence, you could perform a binary search to locate the first differing list element triggering this.</p> <p>To do so, just chop the lists in halves, e.g. take the first 317858/2 lines of each list. Do you still observe the same symptom? If so, repeat the exercise with that first half. Otherwise, repeat that exercise with the second half. That way, you'll need at most 19 tries to identify the line which triggers this. And at that point, the issue is simplified to a single string.</p> <p>Chances are that you can spot the issue by just looking at the strings, but in principle (e.g. if the strings are very long), you could then proceed to do a binary search on those strings to identify the first character triggering this issue.</p>
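A sketch of that bisection, where `extra_lines` stands in for "write this sublist to a file and compare `wc -l` against the element count" (the real check would hit the filesystem; this stand-in just counts embedded newlines directly):

```python
def extra_lines(strings):
    # Stand-in for the write-and-count check: every embedded newline
    # produces one surplus line in the output file
    return sum(s.count('\n') for s in strings)

def first_bad_index(strings):
    # Binary search for the first element triggering the symptom;
    # assumes at least one such element exists in the list
    lo, hi = 0, len(strings)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if extra_lines(strings[lo:mid]) > 0:
            hi = mid   # the symptom is in the first half
        else:
            lo = mid   # otherwise it must be in the second half
    return lo
```

With ~318K elements this needs at most 19 halvings, matching the estimate above.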
0
2016-10-06T06:38:24Z
[ "python", "linux" ]
ImportError: cannot import name _unquotepath
39,888,903
<p>I am trying to use Scrapy for my project &amp; after some initial struggle I started with <a href="https://doc.scrapy.org/en/latest/intro/tutorial.html" rel="nofollow">https://doc.scrapy.org/en/latest/intro/tutorial.html</a>.</p> <p>When I use:</p> <p><code>scrapy startproject tutorial</code></p> <p>it throws me this error:</p> <pre><code>ubuntu@ip-10-241-62-56:~/Selenim$ scrapy startproject tutorial Traceback (most recent call last): File "/usr/local/bin/scrapy", line 7, in &lt;module&gt; from scrapy.cmdline import execute File "/usr/local/lib/python2.7/dist-packages/scrapy/__init__.py", line 34, in &lt;module&gt; from scrapy.spiders import Spider File "/usr/local/lib/python2.7/dist-packages/scrapy/spiders/__init__.py", line 10, in &lt;module&gt; from scrapy.http import Request File "/usr/local/lib/python2.7/dist-packages/scrapy/http/__init__.py", line 10, in &lt;module&gt; from scrapy.http.request import Request File "/usr/local/lib/python2.7/dist-packages/scrapy/http/request/__init__.py", line 13, in &lt;module&gt; from scrapy.utils.url import escape_ajax File "/usr/local/lib/python2.7/dist-packages/scrapy/utils/url.py", line 15, in &lt;module&gt; from w3lib.url import _safe_chars, _unquotepath ImportError: cannot import name _unquotepath </code></pre> <p>How do I resolve this?</p>
0
2016-10-06T06:23:19Z
40,026,134
<p>Upgrading w3lib (to 1.15.0) solved the problem for me.</p>
0
2016-10-13T16:16:32Z
[ "python", "ubuntu", "scrapy" ]
Fastest way to concatenate multiple files column wise - Python
39,888,949
<p><strong>What is the fastest method to concatenate multiple files column wise (within Python)?</strong></p> <p>Assume that I have two files with 1,000,000,000 lines and ~200 UTF8 characters per line.</p> <p><strong>Method 1:</strong> Cheating with <code>paste</code></p> <p>I could concatenate the two files under a linux system by using <code>paste</code> in shell and I could cheat using <code>os.system</code>, i.e.:</p> <pre><code>def concat_files_cheat(file_path, file1, file2, output_path, output): file1 = os.path.join(file_path, file1) file2 = os.path.join(file_path, file2) output = os.path.join(output_path, output) if not os.path.exists(output): os.system('paste ' + file1 + ' ' + file2 + ' &gt; ' + output) </code></pre> <p><strong>Method 2:</strong> Using nested context manager with <code>zip</code>:</p> <pre><code>def concat_files_zip(file_path, file1, file2, output_path, output): with open(output, 'wb') as fout: with open(file1, 'rb') as fin1, open(file2, 'rb') as fin2: for line1, line2 in zip(fin1, fin2): fout.write(line1 + '\t' + line2) </code></pre> <p><strong>Method 3:</strong> Using <code>fileinput</code></p> <p>Does <code>fileinput</code> iterate through the files in parallel? Or will it iterate through each file sequentially, one after the other?</p> <p>If it is the former, I would assume it would look like this:</p> <pre><code>def concat_files_fileinput(file_path, file1, file2, output_path, output): with fileinput.input(files=(file1, file2)) as f: for line in f: line1, line2 = process(line) fout.write(line1 + '\t' + line2) </code></pre> <p><strong>Method 4</strong>: Treat them like <code>csv</code></p> <pre><code>with open(output, 'wb') as fout: with open(file1, 'rb') as fin1, open(file2, 'rb') as fin2: writer = csv.writer(w) reader1, reader2 = csv.reader(fin1), csv.reader(fin2) for line1, line2 in zip(reader1, reader2): writer.writerow(line1 + '\t' + line2) </code></pre> <p>Given the data size, which would be the fastest? 
</p> <p>Why would one choose one over the other? Would I lose or add information? </p> <p>For each method how would I choose a different delimiter other than <code>,</code> or <code>\t</code>?</p> <p>Are there other ways of achieving the same concatenation column wise? Are they as fast?</p>
3
2016-10-06T06:26:08Z
39,892,264
<p>You can test your functions with <code>timeit</code>. This <a href="https://docs.python.org/2/library/timeit.html" rel="nofollow">doc</a> could be helpful.</p> <p>Or use the <code>%%timeit</code> magic function in a Jupyter notebook: just write <code>%%timeit func(data)</code> and you will get a timing report for your function. This <a href="https://blog.dominodatalab.com/lesser-known-ways-of-using-notebooks/" rel="nofollow">article</a> could help you with it.</p>
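For example, a minimal <code>timeit</code> run from a script looks like this (the function here is a trivial placeholder for the concat functions being compared):

```python
import timeit

def concat_stub():
    # Trivial placeholder for concat_files_zip and friends
    return '\t'.join(['line1', 'line2'])

# Time 1000 calls; timeit.timeit returns the total elapsed seconds as a float
elapsed = timeit.timeit(concat_stub, number=1000)
```

Run the same call for each candidate method and compare the `elapsed` values.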
-1
2016-10-06T09:24:27Z
[ "python", "shell", "text-files", "delimiter", "paste" ]
Fastest way to concatenate multiple files column wise - Python
39,888,949
<p><strong>What is the fastest method to concatenate multiple files column wise (within Python)?</strong></p> <p>Assume that I have two files with 1,000,000,000 lines and ~200 UTF8 characters per line.</p> <p><strong>Method 1:</strong> Cheating with <code>paste</code></p> <p>I could concatenate the two files under a linux system by using <code>paste</code> in shell and I could cheat using <code>os.system</code>, i.e.:</p> <pre><code>def concat_files_cheat(file_path, file1, file2, output_path, output): file1 = os.path.join(file_path, file1) file2 = os.path.join(file_path, file2) output = os.path.join(output_path, output) if not os.path.exists(output): os.system('paste ' + file1 + ' ' + file2 + ' &gt; ' + output) </code></pre> <p><strong>Method 2:</strong> Using nested context manager with <code>zip</code>:</p> <pre><code>def concat_files_zip(file_path, file1, file2, output_path, output): with open(output, 'wb') as fout: with open(file1, 'rb') as fin1, open(file2, 'rb') as fin2: for line1, line2 in zip(fin1, fin2): fout.write(line1 + '\t' + line2) </code></pre> <p><strong>Method 3:</strong> Using <code>fileinput</code></p> <p>Does <code>fileinput</code> iterate through the files in parallel? Or will it iterate through each file sequentially, one after the other?</p> <p>If it is the former, I would assume it would look like this:</p> <pre><code>def concat_files_fileinput(file_path, file1, file2, output_path, output): with fileinput.input(files=(file1, file2)) as f: for line in f: line1, line2 = process(line) fout.write(line1 + '\t' + line2) </code></pre> <p><strong>Method 4</strong>: Treat them like <code>csv</code></p> <pre><code>with open(output, 'wb') as fout: with open(file1, 'rb') as fin1, open(file2, 'rb') as fin2: writer = csv.writer(w) reader1, reader2 = csv.reader(fin1), csv.reader(fin2) for line1, line2 in zip(reader1, reader2): writer.writerow(line1 + '\t' + line2) </code></pre> <p>Given the data size, which would be the fastest? 
</p> <p>Why would one choose one over the other? Would I lose or add information? </p> <p>For each method how would I choose a different delimiter other than <code>,</code> or <code>\t</code>?</p> <p>Are there other ways of achieving the same concatenation column wise? Are they as fast?</p>
3
2016-10-06T06:26:08Z
39,962,542
<p>You can replace the <code>for</code> loop with <code>writelines</code> by passing a genexp to it and replace <code>zip</code> with <code>izip</code> from <code>itertools</code> in method 2. This may come close to <code>paste</code> or surpass it.</p> <pre><code>with open(file1, 'rb') as fin1, open(file2, 'rb') as fin2, open(output, 'wb') as fout: fout.writelines(b"{}\t{}".format(*line) for line in izip(fin1, fin2)) </code></pre> <p>If you don't want to embed <code>\t</code> in the format string, you can use <code>repeat</code> from <code>itertools</code>; </p> <pre><code> fout.writelines(b"{}{}{}".format(*line) for line in izip(fin1, repeat(b'\t'), fin2)) </code></pre> <p>If the files are of same length, you can do away with <code>izip</code>. </p> <pre><code>with open(file1, 'rb') as fin1, open(file2, 'rb') as fin2, open(output, 'wb') as fout: fout.writelines(b"{}\t{}".format(line, next(fin2)) for line in fin1) </code></pre>
1
2016-10-10T16:17:55Z
[ "python", "shell", "text-files", "delimiter", "paste" ]
Fastest way to concatenate multiple files column wise - Python
39,888,949
<p><strong>What is the fastest method to concatenate multiple files column wise (within Python)?</strong></p> <p>Assume that I have two files with 1,000,000,000 lines and ~200 UTF8 characters per line.</p> <p><strong>Method 1:</strong> Cheating with <code>paste</code></p> <p>I could concatenate the two files under a linux system by using <code>paste</code> in shell and I could cheat using <code>os.system</code>, i.e.:</p> <pre><code>def concat_files_cheat(file_path, file1, file2, output_path, output): file1 = os.path.join(file_path, file1) file2 = os.path.join(file_path, file2) output = os.path.join(output_path, output) if not os.path.exists(output): os.system('paste ' + file1 + ' ' + file2 + ' &gt; ' + output) </code></pre> <p><strong>Method 2:</strong> Using nested context manager with <code>zip</code>:</p> <pre><code>def concat_files_zip(file_path, file1, file2, output_path, output): with open(output, 'wb') as fout: with open(file1, 'rb') as fin1, open(file2, 'rb') as fin2: for line1, line2 in zip(fin1, fin2): fout.write(line1 + '\t' + line2) </code></pre> <p><strong>Method 3:</strong> Using <code>fileinput</code></p> <p>Does <code>fileinput</code> iterate through the files in parallel? Or will it iterate through each file sequentially, one after the other?</p> <p>If it is the former, I would assume it would look like this:</p> <pre><code>def concat_files_fileinput(file_path, file1, file2, output_path, output): with fileinput.input(files=(file1, file2)) as f: for line in f: line1, line2 = process(line) fout.write(line1 + '\t' + line2) </code></pre> <p><strong>Method 4</strong>: Treat them like <code>csv</code></p> <pre><code>with open(output, 'wb') as fout: with open(file1, 'rb') as fin1, open(file2, 'rb') as fin2: writer = csv.writer(w) reader1, reader2 = csv.reader(fin1), csv.reader(fin2) for line1, line2 in zip(reader1, reader2): writer.writerow(line1 + '\t' + line2) </code></pre> <p>Given the data size, which would be the fastest? 
</p> <p>Why would one choose one over the other? Would I lose or add information? </p> <p>For each method how would I choose a different delimiter other than <code>,</code> or <code>\t</code>?</p> <p>Are there other ways of achieving the same concatenation column wise? Are they as fast?</p>
3
2016-10-06T06:26:08Z
40,003,693
<p>Method #1 is the fastest because it uses native (instead of Python) code to concatenate the files. However, it is definitely cheating.</p> <p>If you want to cheat, you may also consider writing your own C extension for Python - it may be even faster, depending on your coding skills.</p> <p>I am afraid Method #4 would not work, since you are concatenating lists with a string. I would go with <code>writer.writerow(line1 + line2)</code> instead. You may use the <code>delimiter</code> parameter of both <code>csv.reader</code> and <code>csv.writer</code> to customise the delimiter (see <a href="https://docs.python.org/2/library/csv.html" rel="nofollow">https://docs.python.org/2/library/csv.html</a>).</p>
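The <code>delimiter</code> parameter mentioned here accepts any single character. A minimal sketch, writing to an in-memory text buffer (Python 3 style) just to keep it self-contained:

```python
import csv
import io

buf = io.StringIO()
writer = csv.writer(buf, delimiter='|')  # '\t' or ';' work the same way
writer.writerow(['a', 'b'])
writer.writerow(['c', 'd'])
result = buf.getvalue()
```

With a real file you would pass the file object instead of `buf`; `csv.reader` takes the same `delimiter` keyword on the reading side.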
-1
2016-10-12T16:26:55Z
[ "python", "shell", "text-files", "delimiter", "paste" ]
Fastest way to concatenate multiple files column wise - Python
39,888,949
<p><strong>What is the fastest method to concatenate multiple files column wise (within Python)?</strong></p> <p>Assume that I have two files with 1,000,000,000 lines and ~200 UTF8 characters per line.</p> <p><strong>Method 1:</strong> Cheating with <code>paste</code></p> <p>I could concatenate the two files under a linux system by using <code>paste</code> in shell and I could cheat using <code>os.system</code>, i.e.:</p> <pre><code>def concat_files_cheat(file_path, file1, file2, output_path, output): file1 = os.path.join(file_path, file1) file2 = os.path.join(file_path, file2) output = os.path.join(output_path, output) if not os.path.exists(output): os.system('paste ' + file1 + ' ' + file2 + ' &gt; ' + output) </code></pre> <p><strong>Method 2:</strong> Using nested context manager with <code>zip</code>:</p> <pre><code>def concat_files_zip(file_path, file1, file2, output_path, output): with open(output, 'wb') as fout: with open(file1, 'rb') as fin1, open(file2, 'rb') as fin2: for line1, line2 in zip(fin1, fin2): fout.write(line1 + '\t' + line2) </code></pre> <p><strong>Method 3:</strong> Using <code>fileinput</code></p> <p>Does <code>fileinput</code> iterate through the files in parallel? Or will it iterate through each file sequentially, one after the other?</p> <p>If it is the former, I would assume it would look like this:</p> <pre><code>def concat_files_fileinput(file_path, file1, file2, output_path, output): with fileinput.input(files=(file1, file2)) as f: for line in f: line1, line2 = process(line) fout.write(line1 + '\t' + line2) </code></pre> <p><strong>Method 4</strong>: Treat them like <code>csv</code></p> <pre><code>with open(output, 'wb') as fout: with open(file1, 'rb') as fin1, open(file2, 'rb') as fin2: writer = csv.writer(w) reader1, reader2 = csv.reader(fin1), csv.reader(fin2) for line1, line2 in zip(reader1, reader2): writer.writerow(line1 + '\t' + line2) </code></pre> <p>Given the data size, which would be the fastest? 
</p> <p>Why would one choose one over the other? Would I lose or add information? </p> <p>For each method how would I choose a different delimiter other than <code>,</code> or <code>\t</code>?</p> <p>Are there other ways of achieving the same concatenation column wise? Are they as fast?</p>
3
2016-10-06T06:26:08Z
40,053,439
<p>Of the four methods I'd take the second, but you have to take care of small details in the implementation. (With a few improvements it takes <em>0.002 seconds</em>, while the original implementation takes about <em>6 seconds</em>; the file I was working with had 1M rows; and there should not be much difference if the file is 1K times bigger, as we use almost no memory.)</p> <p>Changes from the original implementation:</p> <ul> <li>Use iterators if possible, otherwise memory consumption will be penalized and you have to handle the whole file at once. (Mainly if you are using Python 2: instead of zip, use itertools.izip.)</li> <li>When you are concatenating strings, use <code>"{}{}".format()</code> or similar; otherwise you generate one new string instance each time.</li> <li>There's no need to write line by line inside the for loop. You can use an iterator inside the write.</li> <li>Small buffers are very interesting, but if we are using iterators the difference is very small; if we try to fetch all data at once (so, for example, we put f1.readlines(1024*1000)), it's much slower.</li> </ul> <p>Example:</p> <pre><code>def concat_iter(file1, file2, output):
    with open(output, 'w', 1024) as fo, \
        open(file1, 'r') as f1, \
        open(file2, 'r') as f2:
        fo.write("".join("{}\t{}".format(l1, l2)
           for l1, l2 in izip(f1.readlines(1024), f2.readlines(1024))))
</code></pre> <p>Profiler output for the original solution. We see that the biggest problems are in write and zip (mainly from not using iterators and having to handle/process the whole file in memory):</p> <pre><code>~/personal/python-algorithms/files$ python -m cProfile sol_original.py
         10000006 function calls in 5.208 seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.000    0.000    5.208    5.208 sol_original.py:1(&lt;module&gt;)
        1    2.422    2.422    5.208    5.208 sol_original.py:1(concat_files_zip)
        1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}
  9999999    1.713    0.000    1.713    0.000 {method 'write' of 'file' objects}
        3    0.000    0.000    0.000    0.000 {open}
        1    1.072    1.072    1.072    1.072 {zip}
</code></pre> <p>Profiler output for the improved solution:</p> <pre><code>~/personal/python-algorithms/files$ python -m cProfile sol1.py
         3731 function calls in 0.002 seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.000    0.000    0.002    0.002 sol1.py:1(&lt;module&gt;)
        1    0.000    0.000    0.002    0.002 sol1.py:3(concat_iter6)
     1861    0.001    0.000    0.001    0.000 sol1.py:5(&lt;genexpr&gt;)
        1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}
     1860    0.001    0.000    0.001    0.000 {method 'format' of 'str' objects}
        1    0.000    0.000    0.002    0.002 {method 'join' of 'str' objects}
        2    0.000    0.000    0.000    0.000 {method 'readlines' of 'file' objects}
        1    0.000    0.000    0.000    0.000 {method 'write' of 'file' objects}
        3    0.000    0.000    0.000    0.000 {open}
</code></pre> <p>And in Python 3 it is even faster, because iterators are built in and we don't need to import any library:</p> <pre><code>~/personal/python-algorithms/files$ python3.5 -m cProfile sol2.py
         843 function calls (842 primitive calls) in 0.001 seconds
[...]
</code></pre> <p>It's also very nice to see the memory consumption and file-system accesses, which confirm what we said before:</p> <pre><code>$ /usr/bin/time -v python sol1.py
    Command being timed: "python sol1.py"
    User time (seconds): 0.01
    [...]
    Maximum resident set size (kbytes): 7120
    Average resident set size (kbytes): 0
    Major (requiring I/O) page faults: 0
    Minor (reclaiming a frame) page faults: 914
    [...]
    File system outputs: 40
    Socket messages sent: 0
    Socket messages received: 0

$ /usr/bin/time -v python sol_original.py
    Command being timed: "python sol_original.py"
    User time (seconds): 5.64
    [...]
    Maximum resident set size (kbytes): 1752852
    Average resident set size (kbytes): 0
    Major (requiring I/O) page faults: 0
    Minor (reclaiming a frame) page faults: 427697
    [...]
    File system inputs: 0
    File system outputs: 327696
</code></pre>
1
2016-10-14T23:36:50Z
[ "python", "shell", "text-files", "delimiter", "paste" ]
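The iterator-based approach from the answer above, sketched for Python 3, where zip is already lazy so itertools.izip is unnecessary; the file paths and the tab separator below are illustrative:

```python
def concat_files(path1, path2, out_path):
    # Stream both inputs line by line; memory use stays roughly constant
    # because neither file is ever read fully into memory.
    with open(path1) as f1, open(path2) as f2, open(out_path, "w") as fo:
        fo.writelines("{}\t{}\n".format(l1.rstrip("\n"), l2.rstrip("\n"))
                      for l1, l2 in zip(f1, f2))
```

writelines accepts a generator, so the whole output is produced lazily as well.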
Tkinter .withdraw() strange behaviour
39,889,105
<p>Using the following code, the Tkinter root window will be hidden: </p> <pre><code>def main():
    root = Tkinter.Tk()
    root.iconify()
    a = open(tkFileDialog.askopenfilename(), 'r')

main()
</code></pre> <p>However, using this variation, the root window will not be hidden: </p> <pre><code>class Comparison:
    def __init__(self, file=open(tkFileDialog.askopenfilename(),'r')):
        self.file = file
        self.length = sum(1 for _ in self.file)

def main():
    root = Tkinter.Tk()
    root.iconify()
    a = Comparison()

main()
</code></pre> <p>Why does calling <code>tkFileDialog.askopenfilename</code> with the constructor cause this behaviour? I have tried both <code>root.withdraw()</code> and <code>root.iconify()</code> and experienced the same behaviour. </p> <p>It may be worth noting that I am on OSX 10.11.6.</p> <p>Thanks!</p>
0
2016-10-06T06:37:01Z
39,889,411
<p>When you do this:</p> <pre><code>def __init__(self, file=open(tkFileDialog.askopenfilename(),'r')):
</code></pre> <p>That <em>immediately</em> runs <code>open(tkFileDialog.askopenfilename(),'r')</code>, because default arguments are evaluated when the function is defined. Therefore, when you run the second code block, the interpreter creates a necessary Tkinter root window and opens that file chooser while it's still defining that class. <em>After</em> that, you define a function <code>main</code>. Finally, you call <code>main()</code>, which creates a root object, withdraws it, and instantiates an object of the <code>Comparison</code> class. The root window you explicitly created with <code>root = Tkinter.Tk()</code> <em>is</em> hidden. The older one, that Python was forced to create in order for the file dialog to exist, however, was not.</p> <p>To fix this, put the default behavior into the method body rather than its signature:</p> <pre><code>class Comparison:
    def __init__(self, file=None):
        if file is None:
            self.file = open(tkFileDialog.askopenfilename(),'r')
        else:
            self.file = file
        self.length = sum(1 for _ in self.file)
</code></pre>
2
2016-10-06T06:53:20Z
[ "python", "python-2.7", "tkinter" ]
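The evaluation-order pitfall described in the answer above can be reproduced without Tkinter at all; the counter function below is a stand-in for the file-dialog call:

```python
calls = []

def expensive_default():
    # Stands in for open(tkFileDialog.askopenfilename(), 'r'):
    # records each time it actually runs.
    calls.append("evaluated")
    return "default"

class Comparison:
    # The default expression runs once, while the class body is executed,
    # not on every __init__ call.
    def __init__(self, file=expensive_default()):
        self.file = file

class ComparisonFixed:
    # Sentinel pattern: evaluate the default inside the method body instead.
    def __init__(self, file=None):
        self.file = expensive_default() if file is None else file
```

Instantiating `Comparison` repeatedly never re-runs the default expression; the fixed version evaluates it lazily, once per call that needs it.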
How to pass the java parameter to python , Call Python code from Java
39,889,146
<p>I have created a simple Python script, my_utils.py:</p> <pre><code>def adder(a, b):
    c=a+b
    return c
</code></pre> <p>I would like to pass values to it from Java.</p> <pre><code>public class ParameterPy {
    public static void main(String a[]){
        try{
            int number1 = 100;
            int number2 = 200;
            ProcessBuilder pb = new ProcessBuilder("C:/Python27/python","D://my_utils.py",""+number1,""+number2);
            Process p = pb.start();

            BufferedReader bfr = new BufferedReader(new InputStreamReader(p.getInputStream()));
            System.out.println(".........start process.........");
            String line = "";
            while ((line = bfr.readLine()) != null){
                System.out.println("Python Output: " + line);
            }
            System.out.println("........end process.......");
        }catch(Exception e){System.out.println(e);}
    }
}
</code></pre> <p>However, ProcessBuilder is not able to pass the parameter values to a and b in Python and display the result. <a href="http://i.stack.imgur.com/cR99O.png" rel="nofollow"><img src="http://i.stack.imgur.com/cR99O.png" alt="enter image description here"></a></p> <p>How do I pass parameter values to Python? Does it work for numeric values? And what if I pass a non-numeric value, such as a string, to Python?</p> <pre><code>def str(myWord):
    if myWord=="OK":
        print "the word is OK."
    else:
        print " the word is not OK."
</code></pre>
0
2016-10-06T06:38:52Z
39,889,900
<p>The Python <code>sys</code> module provides access to any command-line arguments via <code>sys.argv</code>. But the arguments are always strings. Here is an example of how I would check for numeric values:</p> <pre><code>import sys

print 'Number of arguments:', len(sys.argv), 'arguments.'
print 'Argument List:', str(sys.argv)

def adder(a, b):
    c=a+b
    return c

def is_number(x):
    try:
        float(x)
        return True
    except ValueError:
        return False

p1 = sys.argv[1]
p2 = sys.argv[2]
if is_number(p1) and is_number(p2):
    print "Sum: {}".format(adder(float(p1), float(p2)))
else:
    print "Concatenation: {}".format(adder(p1, p2))
</code></pre> <p>UPDATED: As @cricket_007 mentioned, you can use <code>isdigit</code> if you want to check integers only. But it does not work for <code>float</code>:</p> <pre><code>&gt;&gt;&gt; "123".isdigit()
True
&gt;&gt;&gt; "12.3".isdigit()
False
</code></pre>
1
2016-10-06T07:21:50Z
[ "java", "python", "parameter-passing", "processbuilder" ]
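The argument-handling logic from the answer above, pulled out into plain functions so it can be exercised without spawning a Java process (remember that argv values always arrive as strings):

```python
def is_number(s):
    # Command-line arguments are always strings; probe with a float conversion.
    try:
        float(s)
        return True
    except ValueError:
        return False

def handle_args(a, b):
    # Mirror the script's behaviour: add numeric arguments, concatenate otherwise.
    if is_number(a) and is_number(b):
        return float(a) + float(b)
    return a + b
```

This is the piece worth unit-testing; the ProcessBuilder plumbing only delivers the strings.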
How to pass the java parameter to python , Call Python code from Java
39,889,146
<p>I have created a simple Python script, my_utils.py:</p> <pre><code>def adder(a, b):
    c=a+b
    return c
</code></pre> <p>I would like to pass values to it from Java.</p> <pre><code>public class ParameterPy {
    public static void main(String a[]){
        try{
            int number1 = 100;
            int number2 = 200;
            ProcessBuilder pb = new ProcessBuilder("C:/Python27/python","D://my_utils.py",""+number1,""+number2);
            Process p = pb.start();

            BufferedReader bfr = new BufferedReader(new InputStreamReader(p.getInputStream()));
            System.out.println(".........start process.........");
            String line = "";
            while ((line = bfr.readLine()) != null){
                System.out.println("Python Output: " + line);
            }
            System.out.println("........end process.......");
        }catch(Exception e){System.out.println(e);}
    }
}
</code></pre> <p>However, ProcessBuilder is not able to pass the parameter values to a and b in Python and display the result. <a href="http://i.stack.imgur.com/cR99O.png" rel="nofollow"><img src="http://i.stack.imgur.com/cR99O.png" alt="enter image description here"></a></p> <p>How do I pass parameter values to Python? Does it work for numeric values? And what if I pass a non-numeric value, such as a string, to Python?</p> <pre><code>def str(myWord):
    if myWord=="OK":
        print "the word is OK."
    else:
        print " the word is not OK."
</code></pre>
0
2016-10-06T06:38:52Z
39,892,057
<pre><code>import sys
# print sys.argv
print "sys.argv is:",sys.argv   # ['D://my_utils.py', '100', '200', 'google']

a= sys.argv[1]
b= sys.argv[2]
print "a is:", a
print "b is:", b

a= int(a)
b= int(b)

def adder(a, b):
    c=a+b
    return c

print adder(a,b)

searchTerm=sys.argv[3]
print searchTerm   ##google

def word(searchTerm):
    if searchTerm=="google":
        print " you get it"
    else:
        print " the word is different."

word(searchTerm)
</code></pre> <p>In Java:</p> <pre><code>int number1 = 100;
int number2 = 200;
String searchTerm="google";
ProcessBuilder pb = new ProcessBuilder("C:/Python27/python","D://searchTestJava//my_utils.py",""+number1,""+number2,""+searchTerm);
Process p = pb.start();

BufferedReader bfr = new BufferedReader(new InputStreamReader(p.getInputStream()));
System.out.println(".........start process.........");
String line = "";
while ((line = bfr.readLine()) != null){
    System.out.println("Python Output: " + line);
}
</code></pre> <p>The output:</p> <pre><code>Python Output: a is: 100
Python Output: b is: 200
Python Output: 300
Python Output: google
Python Output:  you get it
</code></pre>
1
2016-10-06T09:13:25Z
[ "java", "python", "parameter-passing", "processbuilder" ]
Can tensorflow's mod operator match python's modulo implementation?
39,889,349
<p>Python's <code>%</code> operator always returns a number with the same sign as the divisor (second argument), for example:</p> <pre><code>-7.0 % 3.0 -&gt; 2.0
</code></pre> <p>However, Tensorflow's mod operator seems to be implemented slightly differently:</p> <pre><code>tf.mod(-7.0, 3.0).eval() -&gt; -1.0
</code></pre> <p>How can I get Tensorflow to return the same value as the python implementation?</p> <pre><code>import tensorflow as tf

def main():
    v_num = -7.0
    v_div = 3.0

    mod_tf = tf.mod(v_num, v_div)
    mod_py = v_num % v_div

    with tf.Session() as sess:
        sess.run(tf.initialize_all_variables())
        print('TF: {} % {} = {}'.format(v_num, v_div, mod_tf.eval()))
        print('PY: {} % {} = {}'.format(v_num, v_div, mod_py))

if __name__ == "__main__":
    main()
</code></pre>
0
2016-10-06T06:50:24Z
39,907,847
<p>Interesting finding. Maybe worth filing a github issue here: <a href="https://github.com/tensorflow/tensorflow/issues" rel="nofollow">https://github.com/tensorflow/tensorflow/issues</a></p> <p>For a workaround, I think you can use this line:</p> <pre><code>mod_tf = tf.cond(mod_tf &lt; 0, lambda: mod_tf+v_div, lambda: mod_tf) </code></pre>
0
2016-10-07T01:02:57Z
[ "python", "tensorflow" ]
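The sign behaviour in question can be reproduced in plain Python: math.fmod truncates toward zero, matching the tf.mod result shown above, while % follows the divisor's sign, and the double-mod trick recovers Python's convention (a sketch of the arithmetic only; no TensorFlow involved):

```python
import math

def truncated_mod(a, b):
    # C-style remainder: the result takes the sign of the dividend,
    # which is what tf.mod is reported to do above.
    return math.fmod(a, b)

def floored_mod(a, b):
    # ((a mod b) + b) mod b converts a truncated remainder into one
    # that follows the divisor's sign, like Python's % operator.
    return math.fmod(math.fmod(a, b) + b, b)
```

The same `((x % d) + d) % d` rewrite is what the tf.cond and double-tf.mod workarounds in the answers implement on tensors.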
Can tensorflow's mod operator match python's modulo implementation?
39,889,349
<p>Python's <code>%</code> operator always returns a number with the same sign as the divisor (second argument), for example:</p> <pre><code>-7.0 % 3.0 -&gt; 2.0
</code></pre> <p>However, Tensorflow's mod operator seems to be implemented slightly differently:</p> <pre><code>tf.mod(-7.0, 3.0).eval() -&gt; -1.0
</code></pre> <p>How can I get Tensorflow to return the same value as the python implementation?</p> <pre><code>import tensorflow as tf

def main():
    v_num = -7.0
    v_div = 3.0

    mod_tf = tf.mod(v_num, v_div)
    mod_py = v_num % v_div

    with tf.Session() as sess:
        sess.run(tf.initialize_all_variables())
        print('TF: {} % {} = {}'.format(v_num, v_div, mod_tf.eval()))
        print('PY: {} % {} = {}'.format(v_num, v_div, mod_py))

if __name__ == "__main__":
    main()
</code></pre>
0
2016-10-06T06:50:24Z
39,914,691
<p>Here's another solution. It adds the divisor to the result of the first modulo, then does modulo again.</p> <pre><code>def positive_mod(val, div):
    # Return the positive result of the modulo operator.
    # Does x = ((v % div) + div) % div
    return tf.mod(tf.add(tf.mod(val, div), div), div)
</code></pre>
0
2016-10-07T10:07:19Z
[ "python", "tensorflow" ]
How to download the same file twice? (urlretrieve issue)
39,889,389
<p>Python 2.7.</p> <pre><code>from urllib import urlretrieve

urlretrieve("ftp://ftp.wwpdb.org/pub/pdb/data/structures/divided/mmCIF/27/127d.cif.gz", "file1")
urlretrieve("ftp://ftp.wwpdb.org/pub/pdb/data/structures/divided/mmCIF/27/127d.cif.gz", "file2")
</code></pre> <p>The first download goes correctly but the second fails:</p> <pre><code>Traceback (most recent call last):
  File "C:/Jacek/Python/untitled/test2.py", line 3, in &lt;module&gt;
    urlretrieve("ftp://ftp.wwpdb.org/pub/pdb/data/structures/divided/mmCIF/27/127d.cif.gz", "file2")
  File "C:\Python27\lib\urllib.py", line 98, in urlretrieve
    return opener.retrieve(url, filename, reporthook, data)
  File "C:\Python27\lib\urllib.py", line 245, in retrieve
    fp = self.open(url, data)
  File "C:\Python27\lib\urllib.py", line 213, in open
    return getattr(self, name)(url)
  File "C:\Python27\lib\urllib.py", line 558, in open_ftp
    (fp, retrlen) = self.ftpcache[key].retrfile(file, type)
  File "C:\Python27\lib\urllib.py", line 906, in retrfile
    conn, retrlen = self.ftp.ntransfercmd(cmd)
  File "C:\Python27\lib\ftplib.py", line 334, in ntransfercmd
    host, port = self.makepasv()
  File "C:\Python27\lib\ftplib.py", line 312, in makepasv
    host, port = parse227(self.sendcmd('PASV'))
  File "C:\Python27\lib\ftplib.py", line 830, in parse227
    raise error_reply, resp
IOError: [Errno ftp error] 200 TYPE is now 8-bit binary
</code></pre> <p>On python 3 (with the corresponding version of urlretrieve) this works as expected - both downloads succeed.</p> <p>Is there a way to resolve this on Python 2.7?</p> <p>(Of course, you can say that downloading the same file twice doesn't make sense. I agree. I ran into this problem while testing a module that tries to download a file with different parameters (as that has nothing in common with the problem, I simplified the example code) and I was just surprised by this strange behavior.)</p>
0
2016-10-06T06:52:12Z
39,889,508
<p>From the documentation for <a href="https://docs.python.org/2/library/urllib.html#urllib.urlretrieve" rel="nofollow">urlretrieve for Python 2</a>:</p> <p><code>If the URL points to a local file, or a valid cached copy of the object exists, the object is not copied</code></p> <p>So the first impression is that the library will avoid downloading the file twice.</p> <p>Have you tried writing to different files?</p> <pre><code>from urllib import urlretrieve

urlretrieve("ftp://ftp.wwpdb.org/pub/pdb/data/structures/divided/mmCIF/27/127d.cif.gz", "/tmp/file1.gz")
urlretrieve("ftp://ftp.wwpdb.org/pub/pdb/data/structures/divided/mmCIF/27/127d.cif.gz", "/tmp/file2.gz")
</code></pre>
0
2016-10-06T07:00:04Z
[ "python", "python-2.7" ]
How to download the same file twice? (urlretrieve issue)
39,889,389
<p>Python 2.7.</p> <pre><code>from urllib import urlretrieve

urlretrieve("ftp://ftp.wwpdb.org/pub/pdb/data/structures/divided/mmCIF/27/127d.cif.gz", "file1")
urlretrieve("ftp://ftp.wwpdb.org/pub/pdb/data/structures/divided/mmCIF/27/127d.cif.gz", "file2")
</code></pre> <p>The first download goes correctly but the second fails:</p> <pre><code>Traceback (most recent call last):
  File "C:/Jacek/Python/untitled/test2.py", line 3, in &lt;module&gt;
    urlretrieve("ftp://ftp.wwpdb.org/pub/pdb/data/structures/divided/mmCIF/27/127d.cif.gz", "file2")
  File "C:\Python27\lib\urllib.py", line 98, in urlretrieve
    return opener.retrieve(url, filename, reporthook, data)
  File "C:\Python27\lib\urllib.py", line 245, in retrieve
    fp = self.open(url, data)
  File "C:\Python27\lib\urllib.py", line 213, in open
    return getattr(self, name)(url)
  File "C:\Python27\lib\urllib.py", line 558, in open_ftp
    (fp, retrlen) = self.ftpcache[key].retrfile(file, type)
  File "C:\Python27\lib\urllib.py", line 906, in retrfile
    conn, retrlen = self.ftp.ntransfercmd(cmd)
  File "C:\Python27\lib\ftplib.py", line 334, in ntransfercmd
    host, port = self.makepasv()
  File "C:\Python27\lib\ftplib.py", line 312, in makepasv
    host, port = parse227(self.sendcmd('PASV'))
  File "C:\Python27\lib\ftplib.py", line 830, in parse227
    raise error_reply, resp
IOError: [Errno ftp error] 200 TYPE is now 8-bit binary
</code></pre> <p>On python 3 (with the corresponding version of urlretrieve) this works as expected - both downloads succeed.</p> <p>Is there a way to resolve this on Python 2.7?</p> <p>(Of course, you can say that downloading the same file twice doesn't make sense. I agree. I ran into this problem while testing a module that tries to download a file with different parameters (as that has nothing in common with the problem, I simplified the example code) and I was just surprised by this strange behavior.)</p>
0
2016-10-06T06:52:12Z
39,900,735
<p>Unfortunately, urlretrieve is buggy here in Python 2.7: calling urlretrieve repeatedly works over HTTP, but not over FTP, because ftplib keeps sending PASV again and again. Fortunately, we can call urlcleanup before urlretrieve when downloading a file over FTP. The documentation is <a href="https://docs.python.org/2/library/urllib.html" rel="nofollow">here</a>.</p>
0
2016-10-06T16:02:40Z
[ "python", "python-2.7" ]
how to make connection from node js to python?
39,889,395
<p>I am new to Node.js and Python. I want Node.js and Python to communicate with each other, in both directions. I need a simple script, hosted so it runs in a web browser, that executes a Python script from Node.js. I just want to transfer one message from Node.js to Python after running the Node.js script in a hosted environment.</p>
0
2016-10-06T06:52:46Z
39,889,710
<p>You need to execute the Python code from Node.js using a system call:</p> <pre><code>var exec = require('child_process').exec;
var cmd = 'python myscript.py';

exec(cmd, function(error, stdout, stderr) {
    // command output is in stdout (this is the output from the python script that you might use)
});
</code></pre>
1
2016-10-06T07:11:31Z
[ "python", "node.js" ]
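The same parent/child pattern as the answer above, sketched in Python for comparison: a parent process spawns an interpreter and captures whatever the child prints, analogous to what Node's child_process.exec does with stdout; the inline child code is made up for illustration:

```python
import subprocess
import sys

def run_child(code):
    # Spawn a Python child process running `code` and capture its stdout,
    # the way a Node parent would read the python script's output.
    result = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()
```

Passing a message the other way works the same as in the Node example: put it on the child's command line (sys.argv) or write it to the child's stdin.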
Ways to filter files in directory and join directory path - Python
39,889,516
<p>Given a suffix and a directory path, I need to extract the full paths of the files in the directory that end with the given suffix.</p> <p>Currently, I'm doing it as such:</p> <pre><code>import os

dir_path = '/path/to/dir'
suffix = '.xyz'

filenames = filter(lambda x: x.endswith(suffix), os.listdir(dir_path))
filenames = map(lambda x: os.path.join(dir_path, x), filenames)
</code></pre> <p>I could also do it with <code>glob</code>:</p> <pre><code>import glob

dir_path = '/path/to/dir'
suffix = '.xyz'

glob.glob(dir_path+'*.'+suffix)
</code></pre> <p>I understand that there's also <code>pathlib</code> that can check for suffixes using <code>PurePath</code> but I'm not sure what is the syntax for that.</p> <p>Are there other ways of achieving the same filtered list of full paths to the files?</p>
0
2016-10-06T07:00:42Z
39,889,726
<p>You can use a <a href="https://docs.python.org/3/tutorial/datastructures.html#list-comprehensions" rel="nofollow"><code>list comprehension</code></a> to build the result in one go:</p> <pre><code>&gt;&gt;&gt; [os.path.join(dir_path, x) for x in os.listdir(dir_path) if x.endswith(suffix)]
['/home/msvalkon/foo.txt', '/home/msvalkon/output.txt', '/home/msvalkon/remaining_warnings.txt', '/home/msvalkon/test.txt', '/home/msvalkon/hdr_chksum_failure.txt']
</code></pre> <p>If <code>dir_path</code> is always an absolute path, you can use <code>os.path.abspath(x)</code> in place of the <code>os.path.join()</code>.</p> <p>For a large directory, it may be wise to use <a href="https://docs.python.org/3/library/os.html" rel="nofollow"><code>os.scandir</code></a>, which returns an iterator. This will be way faster.</p> <pre><code>&gt;&gt;&gt; [entry.path for entry in os.scandir(dir_path) if entry.name.endswith(suffix)]
['/home/msvalkon/foo.txt', '/home/msvalkon/output.txt', '/home/msvalkon/remaining_warnings.txt', '/home/msvalkon/test.txt', '/home/msvalkon/hdr_chksum_failure.txt']
</code></pre>
1
2016-10-06T07:12:35Z
[ "python", "operating-system", "filepath", "glob", "listdir" ]
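A self-contained variant of the os.scandir approach from the answer above, using entry.path (already joined with the directory) and entry.is_file(); the suffix and file names are illustrative:

```python
import os

def files_with_suffix(dir_path, suffix):
    # os.scandir yields DirEntry objects lazily; entry.path is already
    # joined with dir_path, so no extra os.path.join is needed.
    return sorted(entry.path for entry in os.scandir(dir_path)
                  if entry.is_file() and entry.name.endswith(suffix))
```

Sorting is only for deterministic output; drop it if directory order is fine.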
Django app defaults?
39,889,529
<p>I'm looking for a way to have application defaults and settings that are easy to use, difficult to get wrong, and have little overhead..</p> <p>Currently I have it organized as follows:</p> <pre><code>myapp/defaults.py   # application defaults

import sys
if sys.platform == 'win32':
    MYAPP_HOME_ROOT = os.path.dirname(os.environ['USERPROFILE'])
else:
    MYAPP_HOME_ROOT = '/home'
</code></pre> <p>in my project I have:</p> <pre><code>mysite/settings.py

from myapp.defaults import *      # import all defaults from myapp

MYAPP_HOME_ROOT = '/export/home'  # overriding myapp.defaults
</code></pre> <p>With this setup I could import and use settings in the regular django way (<code>from django.conf import settings</code> and <code>settings.XXX</code>).</p> <p><strong>update-3 (why we need this)</strong></p> <ol> <li>Default settings ("defaults"): <ol> <li>An application is more convenient to use if it can be configured by overriding a set of sensible default settings.</li> <li>the application "has domain knowledge", so it makes sense for it to provide sensible defaults whenever possible.</li> <li>it isn't convenient for a user of the application to need to provide all the settings needed by every app, it should be sufficient to override a small subset and leave the rest with default values.</li> <li>it is <em>very</em> useful if defaults can react to the environment. You'll often want to do something different when <code>DEBUG</code> is True, but any other global setting could be useful: e.g. <code>MYAPP_IDENTICON_DIR = os.path.join(settings.MEDIA_ROOT, 'identicons')</code> (<a href="https://en.wikipedia.org/wiki/Identicon" rel="nofollow">https://en.wikipedia.org/wiki/Identicon</a>)</li> <li>project (site/global) settings must override app-defaults, i.e. a user who defined <code>MYAPP_IDENTICON_DIR = 's3://myappbucket.example.com/identicons'</code> in the (global) <code>settings.py</code> file for their site should get this value, and not the application's default.</li> <li>any solution that keeps close to the normal way of using settings (<code>import .. settings; settings.FOO</code>) is superior to a solution that needs new syntax (since new syntax will diverge and we would get new and unique ways to use settings from app to app).</li> <li>the zen of python probably applies here: <ul> <li>If the implementation is hard to explain, it's a bad idea.</li> <li>If the implementation is easy to explain, it may be a good idea.</li> </ul></li> </ol></li> </ol> <p>(The original post posited the two key problems below, leaving the above assumptions unstated..)</p> <p><em>Problem #1:</em> When running unit tests for the app there is no site however, so <code>settings</code> wouldn't have any of the <code>myapp.defaults</code>.</p> <p><em>Problem #2:</em> There is also a big problem if <code>myapp.defaults</code> needs to use anything from settings (e.g. <code>settings.DEBUG</code>), since you can't import settings from <code>defaults.py</code> (since that would be a circular import).</p> <p>To solve problem #1 I created a layer of indirection:</p> <pre><code>myapp/conf.py

from . import defaults
from django.conf import settings

class Conf(object):
    def __getattr__(self, attr):
        try:
            return getattr(settings, attr)
        except AttributeError:
            return getattr(defaults, attr)

conf = Conf()   # !&lt;-- create Conf instance
</code></pre> <p>and usage:</p> <pre><code>myapp/views.py

from .conf import conf as settings
...
print settings.MYAPP_HOME_ROOT   # will print '/export/home' when used from mysite
</code></pre> <p>This allows the <code>conf.py</code> file to work with an "empty" settings file too, and the myapp code can continue using the familiar <code>settings.XXX</code>.</p> <p>It doesn't solve problem #2, defining application settings based on e.g. <code>settings.DEBUG</code>. My current solution is to add to the <code>Conf</code> class:</p> <pre><code>from . import defaults
from django.conf import settings

class Conf(object):
    def __getattr__(self, attr):
        try:
            return getattr(settings, attr)
        except AttributeError:
            return getattr(defaults, attr)

    if settings.DEBUG:
        MYAPP_HOME_ROOT = '/Users'
    else:
        MYAPP_HOME_ROOT = '/data/media'

conf = Conf()   # !&lt;-- create Conf instance
</code></pre> <p>but this is not satisfying since mysite can no longer override the setting, and myapp's defaults/settings are now spread over two files...</p> <p>Is there an easier way to do this?</p> <p><strong>update-4: "just use the django test runner.."</strong></p> <blockquote> <p>The app you are testing relies on the Django framework - and you cannot get around the fact that you need to bootstrap the framework first before you can test the app. Part of the bootstrapping process is creating a default settings.py and further using the django-supplied test runners to ensure that your apps are being tested in the environment that they are likely to run in.</p> </blockquote> <p>While that sounds like it ought to be true, it doesn't actually make much sense, e.g. there is no such thing as a default <code>settings.py</code> (at least not for a reusable app). When talking about integration testing it makes sense to test an app with the site/settings/apps/database(s)/cache(s)/resource-limits/etc. that it will encounter in production. For unit testing, however, we want to test just the unit of code that we're looking at - with as few external dependencies (and settings) as possible. The Django test runner(s) do, and should, mock out major parts of the framework, so it can't be said to be running in any "real" environment.</p> <p>While the Django test runner(s) are great, there is a long list of issues they don't handle. The two biggies for us are (i) running tests sequentially is so slow that the test suite goes unused (&lt;5 mins when running in parallel, almost an hour when running sequentially), and (ii) sometimes you just need to run tests on <em>big</em> databases (we restore last night's backup to a test database that the tests can run against - much too big for fixtures).</p> <p>The people who made nose, py.test, twill, Selenium, and any of the fuzz testing tools really do know testing well, since that is their only focus. It would be a shame to not be able to draw on their collective experience.</p> <p>I am not the first to have encountered this problem, and it doesn't look like there is an easy or common solution. Here are two projects that have different solutions:</p> <p><strong>Update, python-oidc-provider method</strong>:</p> <p>The python-oidc-provider package (<a href="https://github.com/juanifioren/django-oidc-provider" rel="nofollow">https://github.com/juanifioren/django-oidc-provider</a>) has another creative way to solve the app-settings/defaults problem. It uses properties to define defaults in a <code>myapp/settings.py</code> file:</p> <pre><code>from django.conf import settings

class DefaultSettings(object):
    @property
    def MYAPP_HOME_ROOT(self):
        return ...

default_settings = DefaultSettings()

def get(name):
    value = None
    try:
        value = getattr(default_settings, name)
        value = getattr(settings, name)
    except AttributeError:
        if value is None:
            raise Exception("Missing setting: " + name)
</code></pre> <p>using a setting inside myapp becomes:</p> <pre><code>from myapp import settings

print settings.get('MYAPP_HOME_ROOT')
</code></pre> <p><em>good:</em> solves problem #2 (using settings when defining defaults), solves problem #1 (using default settings from tests).</p> <p><em>bad:</em> different syntax for accessing settings (<code>settings.get('FOO')</code> vs the normal <code>settings.FOO</code>), myapp cannot provide defaults for settings that will be used outside of myapp (the settings you get from <code>from django.conf import settings</code> will not contain any defaults from myapp). External code can do <code>from myapp import settings</code> to get regular settings and myapp defaults, but this breaks down if more than one app wants to do this...</p> <p><strong>Update2, the django-appconf package:</strong> (Note: not related to Django's AppConfig..)</p> <p>With <code>django-appconf</code>, app settings are created in <code>myapp/conf.py</code> (which needs to be loaded early, so you should probably import the conf.py file from models.py - since it is loaded early):</p> <pre><code>from django.conf import settings
from appconf import AppConf

class MyAppConf(AppConf):
    HOME_ROOT = '...'
</code></pre> <p>usage:</p> <pre><code>from myapp.conf import settings

print settings.MYAPP_HOME_ROOT
</code></pre> <p>AppConf will automagically add the <code>MYAPP_</code> prefix, and also automagically detect if <code>MYAPP_HOME_ROOT</code> has been redefined/overridden in the project's settings.</p> <p><em>pro:</em> simple to use, solves problem #1 (accessing app-settings from tests), and problem #2 (using settings when defining defaults). As long as the conf.py file is loaded early, external code should be able to use defaults defined in myapp.</p> <p><em>con:</em> significantly magical. The name of the setting in conf.py is different from its usage (since appconf automatically adds the <code>MYAPP_</code> prefix). External/opaque dependency.</p>
1
2016-10-06T07:01:16Z
39,889,790
<blockquote> <p>Problem #1: When running unit tests for the app there is no site however, so settings wouldn't have any of the myapp.defaults.</p> </blockquote> <p>This problem is solved by using the testing framework that comes with django (see <a href="https://docs.djangoproject.com/en/1.10/topics/testing/overview/" rel="nofollow">the documentation</a>) as it bootstraps the test environment correctly for you.</p> <p>Keep in mind that django tests always run with <code>DEBUG=False</code>.</p> <blockquote> <p>Problem #2: There is also a big problem if myapp.defaults needs to use anything from settings (e.g. settings.DEBUG), since you can't import settings from defaults.py (since that would be a circular import).</p> </blockquote> <p>This is not really a problem; once you import from <code>myapp.defaults</code> in <code>myapp.settings</code>, everything in <code>settings</code> will be in scope. So you don't need to import <code>DEBUG</code>; it is already available to you, as it's in the global scope.</p>
2
2016-10-06T07:15:53Z
[ "python", "django" ]
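The __getattr__ fallback at the heart of the Conf class in the question can be demonstrated without Django; the settings objects and names below are illustrative stand-ins:

```python
class Conf(object):
    # Attribute lookup tries the site-level settings object first and
    # falls through to the app defaults only when the name is missing.
    def __init__(self, settings, defaults):
        self._settings = settings
        self._defaults = defaults

    def __getattr__(self, attr):
        # __getattr__ only fires when normal lookup fails, so instance
        # attributes like _settings never recurse through here.
        try:
            return getattr(self._settings, attr)
        except AttributeError:
            return getattr(self._defaults, attr)
```

This is the same lookup order the question wants: project settings win, app defaults fill the gaps.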
Slicing or aggregating data frame based on certain conditions
39,889,635
<p>I have a dataset which contains latitudes and longitudes. I want to take those rows in the data set where the distance is less than 1 km. For the distance-calculation part I have written a function which returns true or false, and I want to keep the rows for which this condition holds.</p> <p>It may be possible with a lambda function, but I'm not sure how, since I think a lambda always returns a value. Conditions can be applied to column values, but I think that is not enough in my case.</p> <p>Any leads are appreciated. TIA.</p>
0
2016-10-06T07:07:45Z
39,890,896
<p>Can you give more information about your dataset? Does it contain two columns, one for latitude and one for longitude?</p> <p>If you want to apply your function to the dataset you can use <code>map</code>. Something like this:</p> <pre><code>func = lambda x: x &gt; 0
data = [1, -1, 3, -2]
result = list(map(func, data))
</code></pre> <p>After that you will get:</p> <pre><code>result = [True, False, True, False]
</code></pre> <p>Or you can vectorize your processing with <code>numpy</code>:</p> <pre><code>import numpy as np

func = np.vectorize(lambda x: x &gt; 0)
data = np.array([1, -1, 3, -2])
result = func(data)
</code></pre> <p>It will be the fastest solution.</p> <p>If you give more information about the structure of your dataset, I will try to help with your problem.</p>
0
2016-10-06T08:15:19Z
[ "python", "python-3.x", "lambda", "aggregate" ]
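One way to express the "keep rows where the predicate holds" idea from this question without pandas: a pure-Python haversine distance plus a list-comprehension filter (the coordinates and the 1 km threshold below are made up for illustration):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two (lat, lon) points in kilometres.
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def within_km(rows, origin, limit_km):
    # Keep only the rows whose point lies within limit_km of origin.
    return [row for row in rows
            if haversine_km(origin[0], origin[1], row["lat"], row["lon"]) < limit_km]
```

With a pandas DataFrame the same predicate would go into a boolean mask, e.g. `df[df.apply(is_close, axis=1)]`, where `is_close` is the true/false function the question mentions.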
Loop of function for taking multiple audio files from a directory
39,889,680
<p>I am currently taking input from a directory for a single audio file and saving my output in a CSV file, with the file name and the converted speech-to-text output. But I have 100 files in that directory (i.e. 001.wav, 002.wav, 003.wav, ..., 100.wav).</p> <p>I want to write a loop or function which saves the speech-to-text output to the CSV automatically, with the respective file names in different rows.</p> <p>Here's the code:</p> <pre><code>import speech_recognition as sr
import csv
import os
from os import path

AUDIO_FILE = path.join(path.dirname('C:/path/to/directory'), "001.wav")
file_name = os.path.basename(AUDIO_FILE)
name = os.path.basename(AUDIO_FILE)

# use the audio file as the audio source
r = sr.Recognizer()
with sr.AudioFile(AUDIO_FILE) as source:
    audio = r.record(source)  # read the entire audio file

# recognize speech using Google Speech Recognition
try:
    # for testing purposes, we're just using the default API key
    # to use another API key, use `r.recognize_google(audio, key="GOOGLE_SPEECH_RECOGNITION_API_KEY")`
    # instead of `r.recognize_google(audio)`
    a = r.recognize_google(audio)
except sr.UnknownValueError:
    a = "Google Speech Recognition could not understand audio"
except sr.RequestError as e:
    a = "Could not request results from Google Speech Recognition service; {0}".format(e)

try:
    b = r.recognize_sphinx(audio)
except sr.UnknownValueError:
    b = "Sphinx could not understand audio"
except sr.RequestError as e:
    b = "Sphinx error; {0}".format(e)

with open('speech_output.csv', 'a') as f:
    writer = csv.writer(f)
    writer.writerow(['file_name', 'google', 'sphinx'])
    writer.writerow([file_name, a, b])
</code></pre> <p>Reference for the code: <a href="https://github.com/Uberi/speech_recognition/blob/master/examples/audio_transcribe.py" rel="nofollow">https://github.com/Uberi/speech_recognition/blob/master/examples/audio_transcribe.py</a></p>
0
2016-10-06T07:09:46Z
39,891,310
<p>You can get all files of a directory and subdirectory with <code>os.walk</code>, which I have included in the <code>get_file_paths()</code> in the code below, here is an example:</p> <pre><code>import speech_recognition as sr import csv import os DIRNAME = r'c:\path\to\directory' OUTPUTFILE = r'c:\path\to\outputfiledir\outputfile.csv' def get_file_paths(dirname): file_paths = [] for root, directories, files in os.walk(dirname): for filename in files: filepath = os.path.join(root, filename) file_paths.append(filepath) return file_paths def process_file(file): r = sr.Recognizer() a = '' with sr.AudioFile(file) as source: audio = r.record(source) try: a = r.recognize_google(audio) except sr.UnknownValueError: a = "Google Speech Recognition could not understand audio" except sr.RequestError as e: a = "Could not request results from Google Speech Recognition service; {0}".format(e) return a def main(): files = get_file_paths(DIRNAME) # get all file-paths of all files in dirname and subdirectories for file in files: # execute for each file (filepath, ext) = os.path.splitext(file) # get the file extension file_name = os.path.basename(file) # get the basename for writing to output file if ext == '.wav': # only interested if extension is '.wav' a = process_file(file) # result is returned to a with open(OUTPUTFILE, 'a') as f: # write results to file writer = csv.writer(f) writer.writerow(['file_name','google']) writer.writerow([file_name, a]) if __name__ == '__main__': main() </code></pre> <p>If you want to do multiple recognizers, something like this could work. 
Please note this is an untested example:</p> <pre><code>import speech_recognition as sr import csv import os DIRNAME = r'c:\path\to\directory' OUTPUTFILE = r'c:\path\to\outputfiledir\outputfile.csv' def get_file_paths(dirname): file_paths = [] for root, directories, files in os.walk(dirname): for filename in files: filepath = os.path.join(root, filename) file_paths.append(filepath) return file_paths def recog_multiple(file): r = sr.Recognizer() r_types = ['recognize_google', 'recognize_sphinx'] results = [] for r_type in r_types: result = '' with sr.AudioFile(file) as source: audio = r.record(source) try: result = r_type + ': ' + str(getattr(r, r_type)(audio)) except sr.UnknownValueError: result = r_type + ': Speech Recognition could not understand audio' except sr.RequestError as e: result = r_type + ': Could not request results from Speech Recognition service; {0}'.format(e) results.append(result) return results def main(): files = get_file_paths(DIRNAME) # get all file-paths of all files in dirname and subdirectories for file in files: # execute for each file (filepath, ext) = os.path.splitext(file) # get the file extension file_name = os.path.basename(file) # get the basename for writing to output file if ext == '.wav': # only interested if extension is '.wav' a = recog_multiple(file) # result is returned to a with open(OUTPUTFILE, 'a') as f: # write results to file writer = csv.writer(f) writer.writerow(['file_name','results']) writer.writerow([file_name, a]) if __name__ == '__main__': main() </code></pre>
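A hedged aside, not part of the original answer: if walking the tree with `os.walk` feels heavy, the standard-library `glob` module (Python 3.5+ for the recursive `**` pattern) can collect the `.wav` paths in one call:

```python
import glob
import os

def get_wav_paths(dirname):
    # Recursively collect every .wav file under dirname (Python 3.5+).
    return glob.glob(os.path.join(dirname, '**', '*.wav'), recursive=True)
```

The resulting list can then be fed to the same per-file transcription loop shown above.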
1
2016-10-06T08:35:25Z
[ "python", "python-2.7", "python-3.x" ]
Looking for a way to adjust the values of one array based on another array?
39,889,689
<p>I started with a set of bivariate data. My goal is to first find points in that data set for which the y-values are outliers. Then, I wanted to create a new data set that included not only the outlier points, but also any points with an x-value of within 0.01 of any given outlier point.</p> <p>Then (if possible) I want to subtract the original outlier x-values from the new x-set, so that I have a group of points with x-values of between -0.01 and 0.01, with x-value now indicating distance from an original outlier x-value.</p> <p>I have this code:</p> <pre><code>import numpy as np mean = np.mean(y) SD = np.std(y) x_indices = [i for i in range(len(y)) if ((y[i]) &gt; ((2*SD)+mean))] expanded_indices = [i for i in range(len(x)) if np.any((abs(x[i] - x[x_indices])) &lt; 0.01)] </code></pre> <p>This worked great, and now I can call (and plot) x and y using the indices:</p> <pre><code>plt.plot(x[expanded_indices],y[expanded_indices]) </code></pre> <p>However, I have no idea how to subtract the original "x_indices" values to get an x range of -0.01 to 0.01, since everything I tried failed.</p> <p>I want to do something like what I have below, except I know that I can't subtract two arrays of different sizes, and I'm worried I can't use np.any in this context either.</p> <pre><code>x_values = [(x[expanded_indices] - x[indices]) if np.any((abs(x[expanded_indices] - x[indices])) &lt; 0.01)] </code></pre> <p>Any ideas? I'm sorry this is so long -- I'm very new at this and pretty lost. I've been giving it a go for the last few hours and any assistance would be appreciated. Thanks!</p> <p>sample data could be as follows: x =[0,0.994,0.995,0.996,0.997,0.998,1.134,1.245,1.459,1.499,1.500,1.501,2.103,2.104,2.105,2.106]</p> <p>y = [1.5,1.6,1.5,1.6,10,1.5,1.5,1.5,1.6,1.6,1.5,1.6,1.5,11,1.6,1.5]</p>
1
2016-10-06T07:10:03Z
39,892,419
<p>Once you have the set with y-outlier values and the set with the expanded values, you can go over the whole second set and subtract the corresponding first-set value, using two nested <code>for</code> loops:</p> <pre><code>import numpy as np x = np.array([0,0.994,0.995,0.996,0.997,0.998,1.134,1.245,1.459,1.499,1.500,1.501,2.103,2.104,2.105,2.106]) y = np.array([1.5,1.6,1.5,1.6,10,1.5,1.5,1.5,1.6,1.6,1.5,1.6,1.5,11,1.6,1.5]) mean = np.mean(y) SD = np.std(y) # elements with y-element outside defined region indices = [i for i in range(len(y)) if ((y[i]) &gt; ((2*SD)+mean))] my_1st_set = x[indices] # Set with values within 0.01 difference with 1st set points expanded_indices = [i for i in range(len(x)) if np.any((abs(x[i] - x[indices])) &lt; 0.01)] my_2nd_set = x[expanded_indices] # A final set with the subtracted values from the 2nd set my_final_set = my_2nd_set.copy() for i in range(my_final_set.size): for j in range(my_1st_set.size): if abs(my_final_set[i] - my_1st_set[j]) &lt; 0.01: my_final_set[i] = my_final_set[i] - my_1st_set[j] break </code></pre> <p>my_final_set is a numpy array holding the expanded values with their corresponding first-set value subtracted</p>
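An aside of mine (assuming x and y are the numpy arrays from the question's sample data): the same proximity test can be written without explicit loops, using broadcasting; the variable names below are my own, not from the answer above:

```python
import numpy as np

x = np.array([0, 0.994, 0.995, 0.996, 0.997, 0.998, 1.134, 1.245,
              1.459, 1.499, 1.500, 1.501, 2.103, 2.104, 2.105, 2.106])
y = np.array([1.5, 1.6, 1.5, 1.6, 10, 1.5, 1.5, 1.5,
              1.6, 1.6, 1.5, 1.6, 1.5, 11, 1.6, 1.5])

outliers = x[y > 2 * y.std() + y.mean()]            # x-values whose y is an outlier
# |x_i - outlier_j| for every pair at once: shape (len(x), len(outliers))
near_any = (np.abs(x[:, None] - outliers[None, :]) < 0.01).any(axis=1)
expanded = x[near_any]                              # points near at least one outlier
```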
0
2016-10-06T09:31:29Z
[ "python", "arrays", "numpy", "indexing" ]
Looking for a way to adjust the values of one array based on another array?
39,889,689
<p>I started with a set of bivariate data. My goal is to first find points in that data set for which the y-values are outliers. Then, I wanted to create a new data set that included not only the outlier points, but also any points with an x-value of within 0.01 of any given outlier point.</p> <p>Then (if possible) I want to subtract the original outlier x-values from the new x-set, so that I have a group of points with x-values of between -0.01 and 0.01, with x-value now indicating distance from an original outlier x-value.</p> <p>I have this code:</p> <pre><code>import numpy as np mean = np.mean(y) SD = np.std(y) x_indices = [i for i in range(len(y)) if ((y[i]) &gt; ((2*SD)+mean))] expanded_indices = [i for i in range(len(x)) if np.any((abs(x[i] - x[x_indices])) &lt; 0.01)] </code></pre> <p>This worked great, and now I can call (and plot) x and y using the indices:</p> <pre><code>plt.plot(x[expanded_indices],y[expanded_indices]) </code></pre> <p>However, I have no idea how to subtract the original "x_indices" values to get an x range of -0.01 to 0.01, since everything I tried failed.</p> <p>I want to do something like what I have below, except I know that I can't subtract two arrays of different sizes, and I'm worried I can't use np.any in this context either.</p> <pre><code>x_values = [(x[expanded_indices] - x[indices]) if np.any((abs(x[expanded_indices] - x[indices])) &lt; 0.01)] </code></pre> <p>Any ideas? I'm sorry this is so long -- I'm very new at this and pretty lost. I've been giving it a go for the last few hours and any assistance would be appreciated. Thanks!</p> <p>sample data could be as follows: x =[0,0.994,0.995,0.996,0.997,0.998,1.134,1.245,1.459,1.499,1.500,1.501,2.103,2.104,2.105,2.106]</p> <p>y = [1.5,1.6,1.5,1.6,10,1.5,1.5,1.5,1.6,1.6,1.5,1.6,1.5,11,1.6,1.5]</p>
1
2016-10-06T07:10:03Z
39,892,452
<p>Let's see if I understood you correctly. This code should find the outliers, and put an array into res for each outlier.</p> <pre><code>import numpy as np x = np.array([0,0.994,0.995,0.996,0.997,0.998,1.134,1.245,1.459,1.499,1.500,1.501,2.103,2.104,2.105,2.106]) y = np.array([1.5,1.6,1.5,1.6,10,1.5,1.5,1.5,1.6,1.6,1.5,1.6,1.5,11,1.6,1.5]) mean = np.mean(y) SD = np.std(y) outlier_indices = np.abs(y - mean) &gt; 2*SD res = [] for x_at_outlier in x[np.flatnonzero(outlier_indices)]: part_res = x[np.abs(x - x_at_outlier) &lt; 0.01] part_res -= np.mean(part_res) res.append(part_res) </code></pre> <p><code>res</code> is now a list of arrays, with each array containing the values around one outlier. Perhaps it is easier to continue working with the data in this format?</p> <p>If you want all of them in one numpy array:</p> <pre><code>res = np.hstack(res) </code></pre>
0
2016-10-06T09:32:59Z
[ "python", "arrays", "numpy", "indexing" ]
Kivy - Is there any way to bind Python functions to Widgets created in the kv language?
39,889,786
<p>I am trying to create a simple Pokemon battle simulator. A trainer has 6 Pokémon, stored in a list. I have labels in the .kv file displaying the desired information.<br> My problem is that if I have the text property of the labels set to a Python variable:</p> <pre><code>text: '{}/{}'.format(root.pokemon.stats['cHealth'], root.pokemon.stats['Health']) </code></pre> <p>then the labels get constantly updated, but when set to a Python function:</p> <pre><code>text: root.pokemon.getHP() </code></pre> <p>with the getHP() function like:</p> <pre><code>def getHP(self): return '{}/{}'.format(self.stats['cHealth'], self.stats['Health']) </code></pre> <p>they do not update when the health is changed, but only when the list holding the Pokémon is changed. (eg. the order of the Pokémon is changed) Is there any way to get the binding to work when calling a function, or must all of the function calls be replaced by their return value?</p>
2
2016-10-06T07:15:37Z
39,890,598
<p>the <em>kv lang</em> will auto-bind properties that are declared in it, so you can pass the stats to your function and the binding will occur :)</p> <pre><code>text: root.pokemon.getHP(root.pokemon.stats) </code></pre> <p>Each time that <strong>stats</strong> is changed, the function will be called.</p>
0
2016-10-06T07:58:02Z
[ "python", "kivy", "kivy-language" ]
Extract data from login authentication website using scrapy
39,889,986
<p>I am trying to log in first and then extract data from pages which are visible after login. My spider is:</p> <pre><code>import scrapy from scrapy.selector import HtmlXPathSelector from scrapy.http.request import Request from scrapy.spiders import BaseSpider from scrapy.http import FormRequest from loginform import fill_login_form class ElementSpider(scrapy.Spider): name = 'example' start_urls = ['https://github.com/login'] def parse(self, response): return [FormRequest.from_response(response, formdata={'login': 'myid', 'password': 'my password'}, callback=self.after_login)] def after_login(self, response): if "Incorrect username or password" in response.body: print "hey" self.log("Login failed", level=log.ERROR) return else: return Request(url="https://github.com/settings/emails", callback=self.parse_data) def parse_data(self, response): email = response.xpath('//div[@class="boxed-group-inner"]/li[@class="clearfix css-truncate settings-email"]/span[@class="css-truncate-target"]/text()').extract() print email </code></pre> <p>I am getting nothing in the output. Is there an error in my implementation?</p>
0
2016-10-06T07:26:43Z
39,890,275
<p>You haven't created an instance of your class <code>ElementSpider</code>.<br> You first need to create an instance of the class.<br><br> <strong>NOTICE</strong><br> Every class should have a constructor; therefore it is recommended that you implement the <code>__init__</code> method in your class.<br></p> <p>This is how the code should look:</p> <pre><code>import scrapy from scrapy.selector import HtmlXPathSelector from scrapy.http.request import Request from scrapy.spiders import BaseSpider from scrapy.http import FormRequest from loginform import fill_login_form class ElementSpider(scrapy.Spider): name = 'example' start_urls = ['https://github.com/login'] def __init__(self, *args, **kwargs): super(ElementSpider, self).__init__(*args, **kwargs) def parse(self, response): return [FormRequest.from_response(response, formdata={'login': 'myid', 'password': 'my password'}, callback=self.after_login)] def after_login(self, response): if "Incorrect username or password" in response.body: print "hey" self.log("Login failed", level=log.ERROR) return else: return Request(url="https://github.com/settings/emails", callback=self.parse_data) def parse_data(self, response): email = response.xpath('//*[@id="settings-emails"]/li/span[@class="css-truncate-target"]').extract() print email if __name__ == "__main__": spider = ElementSpider() </code></pre>
0
2016-10-06T07:40:50Z
[ "python", "scrapy", "web-crawler" ]
Python: ImportError: No module named 'tutorial.quickstart'
39,890,020
<p>I am getting an import error even though I am following the tutorial <a href="http://www.django-rest-framework.org/tutorial/quickstart/" rel="nofollow">http://www.django-rest-framework.org/tutorial/quickstart/</a> line by line.</p> <pre><code>from tutorial.quickstart import views </code></pre> <blockquote> <p>ImportError: No module named 'tutorial.quickstart'</p> </blockquote> <p><strong>My urls.py file looks like:</strong></p> <pre><code>from django.conf.urls import url, include from rest_framework import routers from tutorial.quickstart import views router = routers.DefaultRouter() router.register(r'users', views.UserViewSet) router.register(r'groups', views.GroupViewSet) urlpatterns = [ url(r'^', include(router.urls)), url(r'^api-auth/', include('rest_framework.urls', namespace='rest_framework')) ] </code></pre> <p>Note: I have the project in the Rest_Tutorial folder, which consists of a virtual environment - <code>env</code> - and the project <code>tutorial</code>. This project consists of <code>quickstart</code> and <code>tutorial</code></p>
0
2016-10-06T07:28:13Z
39,891,590
<p>Make sure your tutorial.quickstart is in the same folder as your project. Also make sure it is unzipped! Otherwise use an absolute path.</p> <p>Hope it helps!</p>
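A diagnostic aside of my own (not from the answer above): Python can be asked directly whether it can resolve a module on the current sys.path, which usually pinpoints this kind of ImportError. The standard-library `json` module stands in here for `tutorial.quickstart`:

```python
import importlib.util
import sys

# Probe module resolution; swap "json" for "tutorial.quickstart" in the failing project.
spec = importlib.util.find_spec("json")
if spec is None:
    print("not importable; sys.path is:", sys.path)
else:
    print("resolved to:", spec.origin)
```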
0
2016-10-06T08:49:51Z
[ "python", "django", "python-3.x" ]
Keras uses way too much GPU memory when calling train_on_batch, fit, etc
39,890,147
<p>I've been messing with Keras, and like it so far. There's one big issue I have been having, when working with fairly deep networks: When calling model.train_on_batch, or model.fit etc., Keras allocates significantly more GPU memory than what the model itself should need. This is not caused by trying to train on some really large images, it's the network model itself that seems to require a lot of GPU memory. I have created this toy example to show what I mean. Here's essentially what's going on:</p> <p>I first create a fairly deep network, and use model.summary() to get the total number of parameters needed for the network (in this case 206538153, which corresponds to about 826 MB). I then use nvidia-smi to see how much GPU memory Keras has allocated, and I can see that it makes perfect sense (849 MB).</p> <p>I then compile the network, and can confirm that this does not increase GPU memory usage. And as we can see in this case, I have almost 1 GB of VRAM available at this point.</p> <p>Then I try to feed a simple 16x16 image and a 1x1 ground truth to the network, and then everything blows up, because Keras starts allocating lots of memory again, for no reason that is obvious to me. Something about training the network seems to require a lot more memory than just having the model, which doesn't make sense to me. 
I have trained significantly deeper networks on this GPU in other frameworks, so that makes me think that I'm using Keras wrong (or there's something wrong in my setup, or in Keras, but of course that's hard to know for sure).</p> <p>Here's the code:</p> <pre><code>from scipy import misc import numpy as np from keras.models import Sequential from keras.layers import Dense, Activation, Convolution2D, MaxPooling2D, Reshape, Flatten, ZeroPadding2D, Dropout import os model = Sequential() model.add(Convolution2D(256, 3, 3, border_mode='same', input_shape=(16,16,1))) model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2))) model.add(Convolution2D(512, 3, 3, border_mode='same')) model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2))) model.add(Convolution2D(1024, 3, 3, border_mode='same')) model.add(Convolution2D(1024, 3, 3, border_mode='same')) model.add(Convolution2D(1024, 3, 3, border_mode='same')) model.add(Convolution2D(1024, 3, 3, border_mode='same')) model.add(Convolution2D(1024, 3, 3, border_mode='same')) model.add(Convolution2D(1024, 3, 3, border_mode='same')) model.add(Convolution2D(1024, 3, 3, border_mode='same')) model.add(Convolution2D(1024, 3, 3, border_mode='same')) model.add(Convolution2D(1024, 3, 3, border_mode='same')) model.add(Convolution2D(1024, 3, 3, border_mode='same')) model.add(Convolution2D(1024, 3, 3, border_mode='same')) model.add(Convolution2D(1024, 3, 3, border_mode='same')) model.add(Convolution2D(1024, 3, 3, border_mode='same')) model.add(Convolution2D(1024, 3, 3, border_mode='same')) model.add(Convolution2D(1024, 3, 3, border_mode='same')) model.add(Convolution2D(1024, 3, 3, border_mode='same')) model.add(Convolution2D(1024, 3, 3, border_mode='same')) model.add(Convolution2D(1024, 3, 3, border_mode='same')) model.add(Convolution2D(1024, 3, 3, border_mode='same')) model.add(Convolution2D(1024, 3, 3, border_mode='same')) model.add(Convolution2D(1024, 3, 3, border_mode='same')) model.add(Convolution2D(1024, 3, 3, border_mode='same')) 
model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2))) model.add(Convolution2D(256, 3, 3, border_mode='same')) model.add(Convolution2D(32, 3, 3, border_mode='same')) model.add(MaxPooling2D(pool_size=(2,2))) model.add(Flatten()) model.add(Dense(4)) model.add(Dense(1)) model.summary() os.system("nvidia-smi") raw_input("Press Enter to continue...") model.compile(optimizer='sgd', loss='mse', metrics=['accuracy']) os.system("nvidia-smi") raw_input("Compiled model. Press Enter to continue...") n_batches = 1 batch_size = 1 for ibatch in range(n_batches): x = np.random.rand(batch_size, 16,16,1) y = np.random.rand(batch_size, 1) os.system("nvidia-smi") raw_input("About to train one iteration. Press Enter to continue...") model.train_on_batch(x, y) print("Trained one iteration") </code></pre> <p>Which gives the following output for me:</p> <pre><code>Using Theano backend. Using gpu device 0: GeForce GTX 960 (CNMeM is disabled, cuDNN 5103) /usr/local/lib/python2.7/dist-packages/theano/sandbox/cuda/__init__.py:600: UserWarning: Your cuDNN version is more recent than the one Theano officially supports. If you see any problems, try updating Theano or downgrading cuDNN to version 5. 
warnings.warn(warn) ____________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ==================================================================================================== convolution2d_1 (Convolution2D) (None, 16, 16, 256) 2560 convolution2d_input_1[0][0] ____________________________________________________________________________________________________ maxpooling2d_1 (MaxPooling2D) (None, 8, 8, 256) 0 convolution2d_1[0][0] ____________________________________________________________________________________________________ convolution2d_2 (Convolution2D) (None, 8, 8, 512) 1180160 maxpooling2d_1[0][0] ____________________________________________________________________________________________________ maxpooling2d_2 (MaxPooling2D) (None, 4, 4, 512) 0 convolution2d_2[0][0] ____________________________________________________________________________________________________ convolution2d_3 (Convolution2D) (None, 4, 4, 1024) 4719616 maxpooling2d_2[0][0] ____________________________________________________________________________________________________ convolution2d_4 (Convolution2D) (None, 4, 4, 1024) 9438208 convolution2d_3[0][0] ____________________________________________________________________________________________________ convolution2d_5 (Convolution2D) (None, 4, 4, 1024) 9438208 convolution2d_4[0][0] ____________________________________________________________________________________________________ convolution2d_6 (Convolution2D) (None, 4, 4, 1024) 9438208 convolution2d_5[0][0] ____________________________________________________________________________________________________ convolution2d_7 (Convolution2D) (None, 4, 4, 1024) 9438208 convolution2d_6[0][0] ____________________________________________________________________________________________________ convolution2d_8 (Convolution2D) (None, 4, 4, 1024) 9438208 convolution2d_7[0][0] 
____________________________________________________________________________________________________ convolution2d_9 (Convolution2D) (None, 4, 4, 1024) 9438208 convolution2d_8[0][0] ____________________________________________________________________________________________________ convolution2d_10 (Convolution2D) (None, 4, 4, 1024) 9438208 convolution2d_9[0][0] ____________________________________________________________________________________________________ convolution2d_11 (Convolution2D) (None, 4, 4, 1024) 9438208 convolution2d_10[0][0] ____________________________________________________________________________________________________ convolution2d_12 (Convolution2D) (None, 4, 4, 1024) 9438208 convolution2d_11[0][0] ____________________________________________________________________________________________________ convolution2d_13 (Convolution2D) (None, 4, 4, 1024) 9438208 convolution2d_12[0][0] ____________________________________________________________________________________________________ convolution2d_14 (Convolution2D) (None, 4, 4, 1024) 9438208 convolution2d_13[0][0] ____________________________________________________________________________________________________ convolution2d_15 (Convolution2D) (None, 4, 4, 1024) 9438208 convolution2d_14[0][0] ____________________________________________________________________________________________________ convolution2d_16 (Convolution2D) (None, 4, 4, 1024) 9438208 convolution2d_15[0][0] ____________________________________________________________________________________________________ convolution2d_17 (Convolution2D) (None, 4, 4, 1024) 9438208 convolution2d_16[0][0] ____________________________________________________________________________________________________ convolution2d_18 (Convolution2D) (None, 4, 4, 1024) 9438208 convolution2d_17[0][0] ____________________________________________________________________________________________________ convolution2d_19 (Convolution2D) (None, 4, 4, 1024) 9438208 
convolution2d_18[0][0] ____________________________________________________________________________________________________ convolution2d_20 (Convolution2D) (None, 4, 4, 1024) 9438208 convolution2d_19[0][0] ____________________________________________________________________________________________________ convolution2d_21 (Convolution2D) (None, 4, 4, 1024) 9438208 convolution2d_20[0][0] ____________________________________________________________________________________________________ convolution2d_22 (Convolution2D) (None, 4, 4, 1024) 9438208 convolution2d_21[0][0] ____________________________________________________________________________________________________ convolution2d_23 (Convolution2D) (None, 4, 4, 1024) 9438208 convolution2d_22[0][0] ____________________________________________________________________________________________________ convolution2d_24 (Convolution2D) (None, 4, 4, 1024) 9438208 convolution2d_23[0][0] ____________________________________________________________________________________________________ maxpooling2d_3 (MaxPooling2D) (None, 2, 2, 1024) 0 convolution2d_24[0][0] ____________________________________________________________________________________________________ convolution2d_25 (Convolution2D) (None, 2, 2, 256) 2359552 maxpooling2d_3[0][0] ____________________________________________________________________________________________________ convolution2d_26 (Convolution2D) (None, 2, 2, 32) 73760 convolution2d_25[0][0] ____________________________________________________________________________________________________ maxpooling2d_4 (MaxPooling2D) (None, 1, 1, 32) 0 convolution2d_26[0][0] ____________________________________________________________________________________________________ flatten_1 (Flatten) (None, 32) 0 maxpooling2d_4[0][0] ____________________________________________________________________________________________________ dense_1 (Dense) (None, 4) 132 flatten_1[0][0] 
____________________________________________________________________________________________________ dense_2 (Dense) (None, 1) 5 dense_1[0][0] ==================================================================================================== Total params: 206538153 ____________________________________________________________________________________________________ None Thu Oct 6 09:05:42 2016 +------------------------------------------------------+ | NVIDIA-SMI 352.63 Driver Version: 352.63 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | |===============================+======================+======================| | 0 GeForce GTX 960 Off | 0000:01:00.0 On | N/A | | 30% 37C P2 28W / 120W | 1082MiB / 2044MiB | 9% Default | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: GPU Memory | | GPU PID Type Process name Usage | |=============================================================================| | 0 1796 G /usr/bin/X 155MiB | | 0 2597 G compiz 65MiB | | 0 5966 C python 849MiB | +-----------------------------------------------------------------------------+ Press Enter to continue... Thu Oct 6 09:05:44 2016 +------------------------------------------------------+ | NVIDIA-SMI 352.63 Driver Version: 352.63 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. 
| |===============================+======================+======================| | 0 GeForce GTX 960 Off | 0000:01:00.0 On | N/A | | 30% 38C P2 28W / 120W | 1082MiB / 2044MiB | 0% Default | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: GPU Memory | | GPU PID Type Process name Usage | |=============================================================================| | 0 1796 G /usr/bin/X 155MiB | | 0 2597 G compiz 65MiB | | 0 5966 C python 849MiB | +-----------------------------------------------------------------------------+ Compiled model. Press Enter to continue... Thu Oct 6 09:05:44 2016 +------------------------------------------------------+ | NVIDIA-SMI 352.63 Driver Version: 352.63 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | |===============================+======================+======================| | 0 GeForce GTX 960 Off | 0000:01:00.0 On | N/A | | 30% 38C P2 28W / 120W | 1082MiB / 2044MiB | 0% Default | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: GPU Memory | | GPU PID Type Process name Usage | |=============================================================================| | 0 1796 G /usr/bin/X 155MiB | | 0 2597 G compiz 65MiB | | 0 5966 C python 849MiB | +-----------------------------------------------------------------------------+ About to train one iteration. Press Enter to continue... Error allocating 37748736 bytes of device memory (out of memory). 
Driver report 34205696 bytes free and 2144010240 bytes total Traceback (most recent call last): File "memtest.py", line 65, in &lt;module&gt; model.train_on_batch(x, y) File "/usr/local/lib/python2.7/dist-packages/keras/models.py", line 712, in train_on_batch class_weight=class_weight) File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 1221, in train_on_batch outputs = self.train_function(ins) File "/usr/local/lib/python2.7/dist-packages/keras/backend/theano_backend.py", line 717, in __call__ return self.function(*inputs) File "/usr/local/lib/python2.7/dist-packages/theano/compile/function_module.py", line 871, in __call__ storage_map=getattr(self.fn, 'storage_map', None)) File "/usr/local/lib/python2.7/dist-packages/theano/gof/link.py", line 314, in raise_with_op reraise(exc_type, exc_value, exc_trace) File "/usr/local/lib/python2.7/dist-packages/theano/compile/function_module.py", line 859, in __call__ outputs = self.fn() MemoryError: Error allocating 37748736 bytes of device memory (out of memory). Apply node that caused the error: GpuContiguous(GpuDimShuffle{3,2,0,1}.0) Toposort index: 338 Inputs types: [CudaNdarrayType(float32, 4D)] Inputs shapes: [(1024, 1024, 3, 3)] Inputs strides: [(1, 1024, 3145728, 1048576)] Inputs values: ['not shown'] Outputs clients: [[GpuDnnConv{algo='small', inplace=True}(GpuContiguous.0, GpuContiguous.0, GpuAllocEmpty.0, GpuDnnConvDesc{border_mode='half', subsample=(1, 1), conv_mode='conv', precision='float32'}.0, Constant{1.0}, Constant{0.0}), GpuDnnConvGradI{algo='none', inplace=True}(GpuContiguous.0, GpuContiguous.0, GpuAllocEmpty.0, GpuDnnConvDesc{border_mode='half', subsample=(1, 1), conv_mode='conv', precision='float32'}.0, Constant{1.0}, Constant{0.0})]] HINT: Re-running with most Theano optimization disabled could give you a back-trace of when this node was created. This can be done with by setting the Theano flag 'optimizer=fast_compile'. 
If that does not work, Theano optimizations can be disabled with 'optimizer=None'. HINT: Use the Theano flag 'exception_verbosity=high' for a debugprint and storage map footprint of this apply node. </code></pre> <p>A few things to note: </p> <ul> <li>I have tried both Theano and TensorFlow backends. Both have the same problems, and run out of memory at the same line. In TensorFlow, it seems that Keras preallocates a lot of memory (about 1.5 GB) so nvidia-smi doesn't help us track what's going on there, but I get the same out-of-memory exceptions. Again, this points towards an error in (my usage of) Keras (although it's hard to be certain about such things, it could be something with my setup).</li> <li>I tried using CNMEM in Theano, which behaves like TensorFlow: It preallocates a large amount of memory (about 1.5 GB) yet crashes in the same place.</li> <li>There are some warnings about the CudNN-version. I tried running the Theano backend with CUDA but not CudNN and I got the same errors, so that is not the source of the problem.</li> <li>If you want to test this on your own GPU, you might want to make the network deeper/shallower depending on how much GPU memory you have to test this.</li> <li>My configuration is as follows: Ubuntu 14.04, GeForce GTX 960, CUDA 7.5.18, CudNN 5.1.3, Python 2.7, Keras 1.1.0 (installed via pip)</li> <li>I've tried changing the compilation of the model to use different optimizers and losses, but that doesn't seem to change anything.</li> <li>I've tried changing the train_on_batch function to use fit instead, but it has the same problem.</li> <li>I saw one similar question here on StackOverflow - <a href="http://stackoverflow.com/questions/35757151/why-does-this-keras-model-require-over-6gb-of-memory">Why does this Keras model require over 6GB of memory?</a> - but as far as I can tell, I don't have those issues in my configuration. 
I've never had multiple versions of CUDA installed, and I've double checked my PATH, LD_LIBRARY_PATH and CUDA_ROOT variables more times than I can count.</li> <li>Julius suggested that the activation parameters themselves take up GPU memory. If this is true, can somebody explain it a bit more clearly? I have tried changing the activation function of my convolution layers to functions that are clearly hard-coded with no learnable parameters as far as I can tell, and that doesn't change anything. Also, it seems unlikely that these parameters would take up almost as much memory as the rest of the network itself.</li> <li>After thorough testing, the largest network I can train is about 453 MB of parameters, out of my ~2 GB of GPU RAM. Is this normal? </li> <li>After testing Keras on some smaller CNNs that do fit in my GPU, I can see that there are very sudden spikes in GPU RAM usage. If I run a network with about 100 MB of parameters, 99% of the time during training it'll be using less than 200 MB of GPU RAM. But every once in a while, memory usage spikes to about 1.3 GB. It seems safe to assume that it's these spikes that are causing my problems. I've never seen these spikes in other frameworks, but they might be there for a good reason? <strong>If anybody knows what causes them, and if there's a way to avoid them, please chime in!</strong></li> </ul>
4
2016-10-06T07:34:16Z
39,890,190
<p>It is a very common mistake to forget that the activations also take vram, not just the parameters. This makes the required vram several times higher than your calculation (at the very least by a <code>minibatch_size</code> factor).</p> <p>So, in the beginning when the network is created, only the parameters are allocated. However, when training starts, the activations (times each minibatch) get allocated, giving the behavior you observe.</p>
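To see the scale of this, here is a rough back-of-the-envelope sketch; the layer shape below is a hypothetical stand-in for one of the question's 1024-filter convolutions, assuming float32 activations (4 bytes per value):

```python
def conv_activation_bytes(batch, channels, height, width, dtype_bytes=4):
    # one stored value per activation, per sample in the minibatch
    return batch * channels * height * width * dtype_bytes

# hypothetical layer: 1024 feature maps on a 32x32 grid, float32
per_sample = conv_activation_bytes(1, 1024, 32, 32)
per_batch = conv_activation_bytes(32, 1024, 32, 32)
print(per_sample)  # 4194304 bytes  (4 MiB per sample)
print(per_batch)   # 134217728 bytes (128 MiB for a batch of 32)
```

Backpropagation typically keeps every layer's activations alive until the backward pass, so a cost like this is paid once per layer, on top of the parameters themselves.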
1
2016-10-06T07:36:41Z
[ "python", "memory", "tensorflow", "theano", "keras" ]
UnboundLocalError: local variable 'restaurantToDelete' referenced before assignment Flask app
39,890,182
<p>When I try to delete an item from the database in a Flask view, the following error is shown</p> <pre><code>UnboundLocalError: local variable 'restaurantToDelete' referenced before assignment </code></pre> <pre><code>@app.route('/restaurant/&lt;int:restaurant_id&gt;/delete',methods=['GET','POST']) def deleteRestaurant(restaurant_id): if request.method=='POST': restaurantToDelete=session.query(Restaurant).filter_by(id=restaurant_id).one() session.delete(restaurantToDelete) session.commit() return redirect(url_for('showRestaurants')) else: return render_template('deleterestaurant.html',restaurant=restaurantToDelete) </code></pre>
0
2016-10-06T07:36:28Z
39,890,245
<p>Look at the <code>else</code>: at that point <code>restaurantToDelete</code> isn't defined. Your code should be something like:</p> <pre><code>@app.route('/restaurant/&lt;int:restaurant_id&gt;/delete',methods=['GET','POST']) def deleteRestaurant(restaurant_id): restaurantToDelete=session.query(Restaurant).filter_by(id=restaurant_id).one() if request.method=='POST': session.delete(restaurantToDelete) session.commit() return redirect(url_for('showRestaurants')) else: return render_template('deleterestaurant.html',restaurant=restaurantToDelete) </code></pre>
1
2016-10-06T07:39:20Z
[ "python", "flask", "sqlalchemy" ]
UnboundLocalError: local variable 'restaurantToDelete' referenced before assignment Flask app
39,890,182
<p>When I try to delete an item from the database in a Flask view, the following error is shown</p> <pre><code>UnboundLocalError: local variable 'restaurantToDelete' referenced before assignment </code></pre> <pre><code>@app.route('/restaurant/&lt;int:restaurant_id&gt;/delete',methods=['GET','POST']) def deleteRestaurant(restaurant_id): if request.method=='POST': restaurantToDelete=session.query(Restaurant).filter_by(id=restaurant_id).one() session.delete(restaurantToDelete) session.commit() return redirect(url_for('showRestaurants')) else: return render_template('deleterestaurant.html',restaurant=restaurantToDelete) </code></pre>
0
2016-10-06T07:36:28Z
39,890,253
<p>You're defining the variable <code>restaurantToDelete</code> inside an <code>if</code>-block, and then you try to use it inside the <code>else</code>-block. If the <code>request.method</code> is not <code>POST</code>, the variable does not exist, because your code does not enter the <code>if</code>-block. You can fix this by fetching the restaurant before checking for the request type:</p> <pre><code>@app.route('/restaurant/&lt;int:restaurant_id&gt;/delete',methods=['GET','POST']) def deleteRestaurant(restaurant_id): restaurantToDelete=session.query(Restaurant).filter_by(id=restaurant_id).one() if request.method=='POST': session.delete(restaurantToDelete) session.commit() return redirect(url_for('showRestaurants')) else: return render_template('deleterestaurant.html',restaurant=restaurantToDelete) </code></pre>
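The failure mode is easy to reproduce outside Flask: any name assigned only inside one branch is still treated as local to the whole function (the names below are made up for illustration):

```python
def delete_restaurant(method):
    if method == 'POST':
        record = 'restaurant #1'   # only assigned on this path
        return 'deleted ' + record
    else:
        # 'record' was never bound on this path -> UnboundLocalError
        return 'render ' + record

print(delete_restaurant('POST'))   # deleted restaurant #1
try:
    delete_restaurant('GET')
except UnboundLocalError as exc:
    print('caught:', exc)
```

Moving the assignment above the `if`, as in the answer, guarantees the name is bound on every path.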
1
2016-10-06T07:39:32Z
[ "python", "flask", "sqlalchemy" ]
Python XML modifying by ElementTree destroys the XML structure
39,890,217
<p>I am using Python V 3.5.1 on windows framework in order to modify a text inside , the modification works great but after saving the tree all the empty tags get destroyed as the following example:</p> <pre><code>&lt;HOSTNAME&gt;&lt;/HOSTNAME&gt; Is being changed to &lt;HOSTNAME /&gt; </code></pre> <p>child with a text between the tags looks good:</p> <pre><code>&lt;HOSTNAME&gt;tnas2&lt;/HOSTNAME&gt; is being changed to &lt;HOSTNAME&gt;tnas2&lt;/HOSTNAME&gt; which is the same as the source. </code></pre> <p>The source XML file is:</p> <pre><code>&lt;ROOT&gt; &lt;DeletedName&gt; &lt;VERIFY_DEST_SIZE&gt;Y&lt;/VERIFY_DEST_SIZE&gt; &lt;VERIFY_BYTES&gt;Y&lt;/VERIFY_BYTES&gt; &lt;TIMESTAMP&gt;XXXXXXXXXDeletedXXXXXXXXXX&lt;/TIMESTAMP&gt; &lt;EM_USERS&gt;XXXXXXXXXDeletedXXXXXXXXXX&lt;/EM_USERS&gt; &lt;EM_GROUPS&gt;&lt;/EM_GROUPS&gt; &lt;LOCAL&gt; &lt;HOSTNAME&gt;&lt;/HOSTNAME&gt; &lt;PORT&gt;&lt;/PORT&gt; &lt;USERNAME&gt;XXXXXXXXXDeletedXXXXXXXXXX&lt;/USERNAME&gt; &lt;PASSWORD&gt;XXXXXXXXXDeletedXXXXXXXXXX&lt;/PASSWORD&gt; &lt;HOME_DIR&gt;&lt;/HOME_DIR&gt; &lt;OS_TYPE&gt;Windows&lt;/OS_TYPE&gt; &lt;/LOCAL&gt; &lt;REMOTE&gt; &lt;HOSTNAME&gt;DeletedHostName&lt;/HOSTNAME&gt; &lt;PORT&gt;22&lt;/PORT&gt; &lt;USERNAME&gt;XXXXXXXXXDeletedXXXXXXXXXX&lt;/USERNAME&gt; &lt;PASSWORD&gt;XXXXXXXXXDeletedXXXXXXXXXX&lt;/PASSWORD&gt; &lt;HOME_DIR&gt;XXXXXXXXXDeletedXXXXXXXXXX&lt;/HOME_DIR&gt; &lt;OS_TYPE&gt;Unix&lt;/OS_TYPE&gt; &lt;CHAR_SET&gt;UTF-8&lt;/CHAR_SET&gt; &lt;SFTP&gt;Y&lt;/SFTP&gt; &lt;ENCRYPTION&gt;Blowfish&lt;/ENCRYPTION&gt; &lt;COMPRESSION&gt;N&lt;/COMPRESSION&gt; &lt;/REMOTE&gt; &lt;/DeletedName&gt; &lt;/ROOT&gt; </code></pre> <p>the code is:</p> <pre><code>import os import xml.etree.ElementTree as ET from shutil import copyfile import datetime def AddAuthUserToAccountsFile(AccountsFile,RemoteMachine,UserToAdd): today = datetime.date.today() today = str(today) print(today) BackUpAccountsFile = AccountsFile + "-" + today try: tree = ET.parse(AccountsFile) except: pass try: 
copyfile(AccountsFile,BackUpAccountsFile) except: pass root = tree.getroot() UsersTags = tree.findall('.//EM_USERS') for UsersList in UsersTags: Users = UsersList.text Users = UsersList.text = Users.replace("||","|") if UserToAdd not in Users: print("The Users were : ",Users, "---&gt;&gt; Adding ",UserToAdd) UsersList.text = Users + UserToAdd +"|" tree.write(AccountsFile) </code></pre> <p>I would appreciate any help getting past this strange scenario.</p> <p>Thanks, Miki</p>
-1
2016-10-06T07:37:54Z
39,891,805
<p>OK, I found the solution - just add <code>method = "html"</code> to the <code>tree.write</code> line and the empty tags are kept as needed.</p> <pre><code>tree.write(AccountsFile,method = 'html') </code></pre> <p>Thanks.</p>
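One caveat: `method = 'html'` switches to the HTML serializer, which has its own quirks (for example, it will not close void elements like `br`). On Python 3.4+ a more targeted option, assuming you want to stay with the XML serializer, is the `short_empty_elements` keyword, which writes empty tags in long form:

```python
import xml.etree.ElementTree as ET

root = ET.fromstring('<ROOT><HOSTNAME></HOSTNAME><PORT>22</PORT></ROOT>')

compact = ET.tostring(root, encoding='unicode')
long_form = ET.tostring(root, encoding='unicode', short_empty_elements=False)

print(compact)    # <ROOT><HOSTNAME /><PORT>22</PORT></ROOT>
print(long_form)  # <ROOT><HOSTNAME></HOSTNAME><PORT>22</PORT></ROOT>
```

The same keyword is accepted by `write`, i.e. `tree.write(AccountsFile, short_empty_elements=False)`.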
0
2016-10-06T09:00:06Z
[ "python", "xml", "elementtree" ]
python Get HTML from JS using selenium
39,890,353
<p>I'm trying to get the div HTML from <a href="https://www.workday.com/en-us/company/careers/open-positions.html#?q=" rel="nofollow">https://www.workday.com/en-us/company/careers/open-positions.html#?q=</a>.</p> <p>But the div listing job posts is loaded from <code>granite.min.js</code> based on network XHR.</p> <pre><code>from selenium import webdriver from bs4 import BeautifulSoup from pprint import pprint path_to_chromedriver = "/Users/RichWin/Documents/chromedriver.exe" browser = webdriver.Chrome(executable_path=path_to_chromedriver) driver = browser.get('https://www.workday.com/en-us/company/careers/open-positions.html#?q=') elem = driver.find_element_by_id('template-content') soup = BeautifulSoup(elem.get_text, "html.parser") for tag in soup.find_all('div'): pprint(tag) </code></pre> <p>Can anyone help me?</p>
-2
2016-10-06T07:44:55Z
39,902,136
<p>Ok, so your code has a couple of problems.</p> <p>a) you need to wait for the <code>template-content</code> div to load its content. In the code below I use <a href="http://selenium-python.readthedocs.io/waits.html" rel="nofollow"><code>implicitly_wait</code></a> to wait 30 seconds. <br> b) <code>find_element_by_id</code> doesn't return <code>HTML</code> but a <code>Selenium</code> object. Therefore you cannot pass it to <code>BeautifulSoup</code> for parsing.</p> <pre><code>from pprint import pprint from bs4 import BeautifulSoup from selenium import webdriver url = 'https://www.workday.com/en-us/company/careers/open-positions.html#?q=' path_to_chromedriver = "/Users/RichWin/Documents/chromedriver.exe" browser = webdriver.Chrome(executable_path=path_to_chromedriver) browser.implicitly_wait(30) browser.get(url) elem = browser.find_element_by_id('template-content') elem_html = elem.get_attribute('innerHTML') soup = BeautifulSoup(elem_html, "html.parser") for tag in soup.find_all('div'): pprint(tag) browser.quit() </code></pre>
1
2016-10-06T17:21:51Z
[ "javascript", "python", "selenium" ]
Pandas fuzzy group summary statistics
39,890,417
<p>I have a data frame defined from a CSV and would like to calculate basic summary statistics e.g. mean, variance, ... for the train part of all the models.</p> <p>Inserting a model number and grouping by that would work fine - but does not seem to be a good solution. <strong>How can I get the summary statistics per model</strong> (only for training), as a group_by modelName does not work because of the counter.</p> <pre><code>df.groupby(['modelName', 'typeOfRun'])['kappa'].mean() </code></pre> <p>or</p> <pre><code>df[df.typeOfRun != 'validation'].describe() </code></pre> <p>do not yield the desired results. <a href="http://i.stack.imgur.com/LeMuY.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/LeMuY.jpg" alt="pct"></a></p> <pre><code>AUC_R,Accuracy,Error rate,False negative rate,False positive rate,Lift value,Precision J,Precision N,Rate of negative predictions,Rate of positive predictions,Sensitivity (true positives rate),Specificity (true negatives rate),f1_R,kappa,modelName,typeOfRun 0.7747622323007851,0.7182416731216111,0.28175832687838887,0.16519823788546256,0.28527729751296715,2.769918376242967,0.08117369886485329,0.9930703132218424,0.029305447973147433,0.3013813581203202,0.8348017621145375,0.7147227024870328,0.8312130234716368,0.09987857210248623,00_testing_1-training,training 0.7688154033277225,0.7295055512522592,0.27049444874774076,0.1894273127753304,0.27294188056922464,2.807689674786938,0.08228060368921185,0.9921956531603068,0.029305447973147433,0.28869739220242707,0.8105726872246696,0.7270581194307754,0.8391825769931881,0.10159217699431862,00_testing_2-training,training 0.7653761718477654,0.7217918925897238,0.2782081074102763,0.1883259911894273,0.2809216651150419,2.737743031677203,0.08023078597866318,0.9921552436003304,0.029305447973147433,0.29647560030983733,0.8116740088105727,0.7190783348849581,0.8338281219878937,0.09791120175612114,00_testing_3-training,training 
0.7666987721022418,0.7202566535628756,0.2797433464371244,0.18396711202466598,0.2826353437708505,2.7358921138891255,0.08018987022168358,0.9923159476282464,0.02931031885891585,0.2982693958700465,0.816032887975334,0.7173646562291496,0.8327314318650539,0.097878484924986,00_testing-validation,validation 0.7776426005660843,0.7300542215336948,0.2699457784663052,0.17180616740088106,0.2729086314669504,2.8639238514789174,0.08392857142857142,0.9929168180167091,0.029305447973147433,0.28918151303898787,0.8281938325991189,0.7270913685330496,0.8394625719769673,0.10476961017159536,01_otherSet_1-training,training 0.7691501646636157,0.737412858249419,0.26258714175058095,0.197136563876652,0.2645631067961165,2.8639098209585327,0.08392816025788626,0.9919723742039644,0.029305447973147433,0.2803382390911438,0.802863436123348,0.7354368932038835,0.8446557452170924,0.1044486077353842,01_otherSet_2-training,training 0.770174515310113,0.7342176607281178,0.2657823392718823,0.19162995594713655,0.26802101343263735,2.847815513920855,0.08345650938032974,0.9921582766235522,0.029305447973147433,0.283856183836819,0.8083700440528634,0.7319789865673627,0.8424375777288816,0.10367514449353035,01_otherSet_3-training,training 0.7676347850606817,0.7317488289428102,0.26825117105718976,0.19424460431654678,0.2704858255620898,2.8156062097690264,0.08252631578947368,0.9920241385858671,0.02931031885891585,0.2861747473378218,0.8057553956834532,0.7295141744379102,0.8407546494992847,0.10196584743637081,01_otherSet-validation,validation </code></pre>
1
2016-10-06T07:48:25Z
39,890,470
<p>IIUC you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.DataFrameGroupBy.describe.html" rel="nofollow"><code>DataFrameGroupBy.describe</code></a>:</p> <pre><code>print (df.groupby(['modelName', 'typeOfRun']).describe()) f1_R kappa modelName typeOfRun 00_testing-validation validation count 1.000000 1.000000 mean 0.832731 0.097878 std NaN NaN min 0.832731 0.097878 25% 0.832731 0.097878 50% 0.832731 0.097878 75% 0.832731 0.097878 max 0.832731 0.097878 00_testing_1-training training count 1.000000 1.000000 mean 0.831213 0.099879 std NaN NaN min 0.831213 0.099879 25% 0.831213 0.099879 50% 0.831213 0.099879 75% 0.831213 0.099879 max 0.831213 0.099879 00_testing_2-training training count 1.000000 1.000000 mean 0.839183 0.101592 std NaN NaN ... ... </code></pre> <p>You can <code>groupby</code> by <code>Series</code> created by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.split.html" rel="nofollow"><code>split</code></a> and selected first item of list by <code>str[0]</code>:</p> <pre><code>print (df.modelName.str.split('_').str[0]) 0 00 1 00 2 00 3 00 4 01 5 01 6 01 7 01 Name: modelName, dtype: object print (df.groupby([df.modelName.str.split('_').str[0]]).describe()) AUC_R Accuracy Error;rate False;negative;rate \ modelName 00 count 4.000000 4.000000 4.000000 4.000000 mean 0.768913 0.722449 0.277551 0.181730 std 0.004149 0.004924 0.004924 0.011270 min 0.765376 0.718242 0.270494 0.165198 25% 0.766368 0.719753 0.276280 0.179275 50% 0.767757 0.721024 0.278976 0.186147 75% 0.770302 0.723720 0.280247 0.188601 max 0.774762 0.729506 0.281758 0.189427 01 count 4.000000 4.000000 4.000000 4.000000 mean 0.771151 0.733358 0.266642 0.188704 std 0.004452 0.003198 0.003198 0.011488 min 0.767635 0.730054 0.262587 0.171806 25% 0.768771 0.731325 0.264984 0.186674 50% 0.769662 0.732983 0.267017 0.192937 75% 0.772042 0.735016 0.268675 0.194968 max 0.777643 0.737413 0.269946 0.197137 ... ... </code></pre>
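For a quick, self-contained illustration of grouping on the model prefix, here is a minimal sketch with made-up numbers standing in for the question's data (assumes pandas is installed):

```python
import pandas as pd

df = pd.DataFrame({
    'modelName': ['00_testing_1-training', '00_testing_2-training',
                  '01_otherSet_1-training', '01_otherSet-validation'],
    'kappa': [0.0999, 0.1016, 0.1048, 0.1020],
})

# derive the grouping key: the text before the first underscore
prefix = df['modelName'].str.split('_').str[0]

# group by the derived Series and summarise
summary = df.groupby(prefix)['kappa'].agg(['mean', 'count'])
print(summary)
```

`groupby` accepts an external Series as the key as long as it shares the DataFrame's index, so no extra "model number" column needs to be inserted.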
1
2016-10-06T07:50:54Z
[ "python", "pandas", "group-by", "summary" ]
numba argument argtypes deprecated keyword
39,890,420
<p>When I run the following code:</p> <pre><code>mandel_numba = numba.jit(restype=uint32, argtypes=[float32, float32, uint32])(mandel) </code></pre> <p>I get this error message:</p> <pre><code> raise DeprecationError(_msg_deprecated_signature_arg.format('argtypes')) numba.errors.DeprecationError: Deprecated keyword argument `argtypes`. Signatures should be passed as the first positional argument. </code></pre> <p>My numba version is 0.28.0. I know that numba 0.18 removed the old deprecated and undocumented argtypes and restype arguments to the @jit decorator. </p> <p>Please help me solve this problem.</p>
1
2016-10-06T07:48:40Z
39,890,760
<p>The error message is telling you what it expects</p> <pre><code>Signatures should be passed as the first positional argument. </code></pre> <p>So instead of</p> <pre><code>numba.jit(restype=uint32, argtypes=[float32, float32, uint32]) </code></pre> <p>They should be positional</p> <pre><code>numba.jit(uint32(float32, float32, uint32)) </code></pre>
2
2016-10-06T08:06:31Z
[ "python", "numba" ]
range filter not working in django
39,890,447
<p>I want to filter my queryset on the basis of two values.</p> <p>I want results between two numbers. I am trying some code but it is not working. It does not return the proper result. My code:</p> <pre><code>def project(request): try: proTitle = request.GET.get('title') ProDescription = request.GET.get('description') funAria = request.GET.get('funAria') femaleReq = request.GET.get('femaleReq') cost_gte = int(request.GET.get('cost_gte')) cost_lte = int(request.GET.get('cost_lte')) except: pass if cost_gte and cost_lte : list4 = [] result = Project.objects.filter(budgeted_cost__gte=cost_gte , budgeted_cost__lte=cost_lte ) print result for res in result: list4.append(res.project_id) data ={'cost result':list4} return HttpResponse(json.dumps(data)) return HttpResponse(json.dumps(data), content_type='application/json') </code></pre> <p>Sometimes it returns all products and sometimes it returns none. I can't understand why it is not working.</p> <p>I also tried using range:</p> <pre><code>result = Product.objects.filter(cost__range=(cost_gte,cost_lte)) </code></pre> <p>but it behaves the same. Please guide me on what I am doing wrong.</p>
1
2016-10-06T07:49:52Z
39,893,326
<p>In Python, when you use <code>range</code> it means <code>range1 &lt;= i &lt; range2</code>, but the filter you are looking for in your query is <code>range1 &lt;= i &lt;= range2</code>, so instead of <code>range</code> you should use</p> <pre><code> result = Project.objects.filter(budgeted_cost__gte=cost_gte , budgeted_cost__lte=cost_lte ) </code></pre> <p>to get the desired output.</p>
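The boundary difference between a half-open interval (like Python's built-in `range`) and a closed one is easy to see in plain Python (illustrative numbers only):

```python
costs = [10, 20, 30, 40, 50]
cost_gte, cost_lte = 20, 40

# half-open, like Python's built-in range(): the upper bound is excluded
half_open = [c for c in costs if c in range(cost_gte, cost_lte)]

# closed on both ends, like the gte/lte filter pair
inclusive = [c for c in costs if cost_gte <= c <= cost_lte]

print(half_open)  # [20, 30]
print(inclusive)  # [20, 30, 40]
```

With the `gte`/`lte` pair, a record whose cost equals either boundary is kept.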
0
2016-10-06T10:14:57Z
[ "python", "django" ]
AppEngine docs recommend command-line flags instead of app.yaml file elements
39,890,593
<p>In the <a href="https://cloud.google.com/appengine/docs/python/config/appref" rel="nofollow">app.yaml</a> documentation, Google makes the following recommendation a number of times:</p> <blockquote> <p>"The recommended approach is to remove the <strong>ELEMENT NAME</strong> [e.g. <code>application</code>] from your app.yaml file and instead, use a command-line flag to specify your <strong>ELEMENT NAME</strong> [e.g. <code>application ID</code>]"</p> </blockquote> <p>Unfortunately, Google doesn't explain why they recommend this.</p> <p>In my opinion, an informative app.yaml file is much more helpful than deploying an app with command-line flags. Can anyone explain why Google makes this recommendation?</p>
4
2016-10-06T07:57:20Z
39,949,231
<p>I think it is mainly because they are slowly moving away from <code>appcfg.py</code> toward the <a href="https://cloud.google.com/sdk/" rel="nofollow">Cloud SDK</a>, where <code>application</code> is not supported. You can set your default application so you won't need to use the command line all the time.</p>
1
2016-10-09T22:36:51Z
[ "python", "google-app-engine", "documentation", "app.yaml" ]
Django export filtered query to csv
39,890,594
<p>So the scenario is this: I have a search form where the user types or selects the criteria. These criteria are being posted and I get a query in return, filtered with these criteria. The code for the search form is similar to the code shown below. </p> <p>What I want to do now is take these results and export them to a <code>csv</code> file. So, as I said I have almost the same code where I get my params/criteria with GET. Basically I GET the criteria the user has typed or selected after the search POST was made. </p> <p>So far, I have successfully got the params, I can see them in print, and the export works, but I can see only the first row. It's like I can't pass the query (as provided in results).</p> <p>Here's my code:</p> <pre><code> def get_csv(request): response = HttpResponse(content_type='text/csv') response['Content-Disposition'] = 'attachment; filename=mycsv.csv' if request.method == 'GET': params = {} your_values = {} context = {} field_1 = request.GET.get('field_1',None) field_2 = request.GET.get('field_2',None) field_3 = request.GET.get('field_3',None) if field_1: your_values['field_1'] = field_1 if field_2: your_values['field_2'] = field_2 if field_3: your_values['field_3'] = field_3 if field_4: your_values['field_4'] = field_4 for key, value in your_values.items(): if value: params[key] = value print('params are:',params) results = Model.objects.filter(**params) writer = csv.writer(response) writer.writerow([h for h in [ 'Header_1', 'Header_2', 'Header_3', 'Header_4', ]]) #I tried that as well:writer.writerows([r for r in[results]]) for field in results: field_list = [ field.filed_1, field.field_2, field.field_3, field.field_4, ] for i, item in enumerate(field_list): if item: field_list[i] = item writer.writerow(field_list) return response </code></pre>
1
2016-10-06T07:57:26Z
39,890,967
<p>You're seeing only one row because you're passing only one row:</p> <pre><code>for field in results: field_list = [ field.filed_1, field.field_2, field.field_3, field.field_4, ] </code></pre> <p>The above puts only the item from the last iteration in <code>field_list</code>, and the previous results are thrown away.</p> <p>You can instead use a <em>list comprehension</em> to create the list of lists, adding the indices at the same time using <code>enumerate</code>, and then write all the rows at the same time using <code>writerows</code>:</p> <pre><code>results = Model.objects.filter(**params) writer = csv.writer(response) writer.writerow(['Header_1', 'Header_2', 'Header_3', 'Header_4']) field_list = [[i, f.field_1, f.field_2, f.field_3, f.field_4] for i, f in enumerate(results) if f] writer.writerows(field_list) # ^ notice the s </code></pre>
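The same write-once pattern, runnable against an in-memory buffer instead of a Django HttpResponse (the `Row` class and field names below are made-up stand-ins for the model):

```python
import csv
import io

class Row:
    # stand-in for a model instance
    def __init__(self, a, b):
        self.field_1, self.field_2 = a, b

results = [Row('x', 'y'), Row('p', 'q')]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(['Header_1', 'Header_2'])
writer.writerows([[r.field_1, r.field_2] for r in results])

print(buf.getvalue())  # header row followed by one line per result
```

Because `HttpResponse` is also file-like, the exact same `csv.writer(response)` calls work in the view.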
0
2016-10-06T08:18:34Z
[ "python", "django", "csv" ]
Accessing a path which case sensitive without writing so
39,890,750
<p>I would like to know whether it is possible to access a Linux path like: <code>/home/dan/CaseSensitivE/test.txt</code></p> <p>in a way where we write it as <code>/home/dan/casesensitive/test.txt</code> and it goes to the right place - meaning Python would consider paths as not case sensitive and allow entering them that way, although they are case sensitive.</p>
0
2016-10-06T08:06:11Z
39,891,399
<p>As Klaus said, the simple answer is no. You could, however, take a more laborious route, and enumerate all folders/files in your top directory (<code>os.path, glob</code>), convert to lower case (<code>string.lower</code>), test equality, step one level down, etc.</p> <p>This works for me:</p> <pre><code>import os def match_lowercase_path(path): # get absolute path path = os.path.abspath(path) # try it first if os.path.exists(path): correct_path = path # no luck else: # works on linux, but there must be a better way components = path.split('/') # initialise answer correct_path = '/' # step through for c in components: if os.path.isdir(correct_path + c): correct_path += c +'/' elif os.path.isfile(correct_path + c): correct_path += c else: match = find_match(correct_path, c) correct_path += match return correct_path def find_match(path, ext): for child in os.listdir(path): if child.lower() == ext: if os.path.isdir(path + child): return child + '/' else: return child else: raise ValueError('Could not find a match for {}.'.format(path + ext)) </code></pre>
1
2016-10-06T08:40:05Z
[ "python", "python-2.7" ]
google cloud machine learning hyperparameter tuning avoid Nans
39,890,785
<p>I am running google cloud machine learning beta - and use the hypertune setup with tensorflow.</p> <p>In some of the sub runs of hyperparameter tuning I have losses becoming NaNs - and that crashes the computations - which in turn stops the hyperparameter tuning job. </p> <pre><code>Error reported to Coordinator: &lt;class 'tensorflow.python.framework.errors.InvalidArgumentError'&gt;, Nan in summary histogram for: softmax_linear/HistogramSummary [[Node: softmax_linear/HistogramSummary = HistogramSummary[T=DT_FLOAT, _device="/job:master/replica:0/task:0/cpu:0"] (softmax_linear/HistogramSummary/tag, softmax_linear/softmax_linear)]] Caused by op u'softmax_linear/HistogramSummary', defined at: File "/usr/lib/python2.7/runpy.py", line 162, in _run_module_as_main </code></pre> <p>What is the canonical way of handling these? Should I protect the loss function?</p> <p>Thanks</p>
2
2016-10-06T08:08:00Z
39,901,926
<p>You should protect the loss function by checking for NaNs. Any crash or exception thrown by the program is treated by Cloud ML as a failure of the trial, and if enough trials fail the entire job will be failed.</p> <p>If the trial exits cleanly without setting any hyperparameter summaries, the trial will be considered Infeasible and hyperparameters similar to those will be less likely to be tried again, but it will not be an error.</p>
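A framework-free sketch of what "protecting the loss" can look like. The loop below is a made-up stand-in for a training loop; with TensorFlow you would apply the same `math.isfinite` check to the fetched loss value each step:

```python
import math

def run_steps(losses, max_bad_steps=3):
    """Skip non-finite losses; end the trial cleanly if they persist."""
    consecutive_bad = 0
    for step, loss in enumerate(losses):
        if not math.isfinite(loss):
            consecutive_bad += 1
            if consecutive_bad >= max_bad_steps:
                # exit cleanly: the trial is treated as infeasible, not as a crash
                return 'stopped at step %d' % step
            continue
        consecutive_bad = 0
        # ...apply the gradient update for this step here...
    return 'completed'

print(run_steps([0.9, float('nan'), 0.7]))  # completed
print(run_steps([float('nan')] * 3))        # stopped at step 2
```

Exiting cleanly (rather than raising) is what keeps the trial from being counted as a failure against the whole tuning job.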
1
2016-10-06T17:09:06Z
[ "python", "machine-learning", "tensorflow", "google-cloud-ml" ]