title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags
---|---|---|---|---|---|---|---|---|---
Updating model in Django based on dictionary values
| 39,675,149 |
<p>I have a dictionary in Django containing a model's keys and some values. I want to update one field (<em>score</em>) of the model (<em>A</em>) like this:
multiply that field by a constant for all records, and then, if the record is found in the dictionary, add the corresponding dict value to the field. There are many records, and getting and then saving each object takes a lot of time. I tried this update query, but it is not working:</p>
<p><code>A.objects.update(score=F('score') * CONST + (dictionary[F('key')] if F('key') in dictionary else 0))</code></p>
<p>The problem is that the condition seems to be false for all of the records. I tried to debug the problem, and it seems <code>F('key')</code> doesn't have the desired value (a string).</p>
<p>For reference, these lines give the correct result:</p>
<p><code>a = A.objects.get(key='key1')
a.score * CONST + (dictionary[a.key] if a.key in dictionary else 0)</code></p>
| 0 |
2016-09-24T10:08:52Z
| 39,675,515 |
<p>It doesn't make sense to use a dictionary lookup here; the update is transformed into a single SQL query, and the value of <code>F()</code> is never resolved in Python, so there is no way it can be used as a key into a Python dictionary.</p>
<p>Instead you will probably have to use a <a href="https://docs.djangoproject.com/en/1.10/ref/models/conditional-expressions" rel="nofollow">conditional expression</a>.</p>
| 2 |
2016-09-24T10:50:54Z
|
[
"python",
"sql",
"django"
] |
Python readline Auto-Complete
| 39,675,223 |
<p>I want to implement auto-complete in a command-line interface for the following: </p>
<pre><code>Food :
Fruits:
Apples
Oranges
Vegetables:
Carrot
Beetroot
Snacks:
Chocolate
</code></pre>
<p>The output for the <code><TAB></code> would be :
food<br>
and so on...</p>
<p>Commands would be like :
<code>Food Fruits Apples</code>
or <code>Food Snacks Chocolate</code></p>
<p>Came across this <a href="https://pymotw.com/2/readline/" rel="nofollow">https://pymotw.com/2/readline/</a> while googling. But I don't understand how begin/end works. And how it will change for further nesting.</p>
<p>Any sort of help is appreciated. (Writing code in python. Prefer to use readline library)</p>
| -1 |
2016-09-24T10:17:45Z
| 39,675,420 |
<p>If you use a library like <a href="http://click.pocoo.org" rel="nofollow">click</a> for making your CLI you will get this mostly for free <a href="http://click.pocoo.org/5/bashcomplete/" rel="nofollow">http://click.pocoo.org/5/bashcomplete/</a></p>
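If you do want to stay with the readline library, a minimal sketch of nested completion could look like this (the command tree comes from the question; the helper names are my own). readline calls the completer repeatedly with state = 0, 1, 2, ... until it returns None, and the begin/end indices simply mark where the word being completed starts and ends in the line buffer:

```python
# Sketch of nested tab-completion with the stdlib readline module.
import readline

# Command tree taken from the question.
COMMANDS = {
    'Food': {
        'Fruits': ['Apples', 'Oranges'],
        'Vegetables': ['Carrot', 'Beetroot'],
        'Snacks': ['Chocolate'],
    },
}

def candidates_for(words):
    """Walk the tree along the words already typed; return next-level names."""
    node = COMMANDS
    for word in words:
        if isinstance(node, dict) and word in node:
            node = node[word]
        else:
            return []
    return list(node) if isinstance(node, (dict, list)) else []

def complete(text, state):
    # readline calls this with state = 0, 1, 2, ... until we return None.
    words = readline.get_line_buffer().split()
    if text:                 # the last word is the partial one being completed
        words = words[:-1]
    matches = [c for c in candidates_for(words) if c.startswith(text)]
    return matches[state] if state < len(matches) else None

readline.set_completer(complete)
readline.parse_and_bind('tab: complete')
```

Deeper nesting needs no extra code: `candidates_for` just walks one level further for each completed word on the line.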
| 0 |
2016-09-24T10:37:54Z
|
[
"python",
"readline"
] |
I am trying to find the nth binary palindrome
| 39,675,412 |
<p><a href="https://oeis.org/A006995" rel="nofollow">Binary palindromes: numbers whose binary expansion is palindromic.</a> <br>
Binary Palindrome -> is a number whose binary representation is a palindrome. <br>
<a href="http://ideone.com/CfQZuH" rel="nofollow">Here is link to the solution with naive approach</a></p>
<p>I have read the above link, and it gives a formula to find the nth binary palindrome, but I am unable to understand it and thus code the solution. </p>
<pre><code>def palgenbase2(): # generator of palindromes in base 2
#yield 0
x, n, n2 = 1, 1, 2
m = 1;
while True:
for y in range(n, n2):
s = format(y, 'b')
yield int(s+s[-2::-1], 2)
for y in range(n, n2):
s = format(y, 'b')
yield int(s+s[::-1], 2)
x += 1
n *= 2
n2 *= 2
if n2 > 1000000000:
break
ans = {}
for i,j in enumerate(palgenbase2()):
print i,j
ans[i]=j
with open("output","a") as f:
f.write(ans)
#use the saved output to give answer to query later
#this will work but it takes too much time.
n = int(raw_input())
for c in range(0,n):
z = int(raw_input())
print ans[z]
</code></pre>
<p>Here is some Python code, but it generates all such palindromes.<br>
I need the program to get the nth binary palindrome directly,<br>
as follows:</p>
<blockquote>
<p>Input -> 1 <= n <= 1000000000<br>
Function -> f(n)<br>
output -> nth binary palindrome.</p>
</blockquote>
<p>Can we do this in better time using the formula mentioned <a href="https://oeis.org/A006995" rel="nofollow">here</a>?</p>
| -2 |
2016-09-24T10:37:04Z
| 39,676,510 |
<p>Here's a fairly straight-forward implementation of the recursive algorithm given at <a href="https://oeis.org/A006995" rel="nofollow">A006995</a>.</p>
<p>To make it more efficient I use bit shifting to perform binary exponentiation: when <code>x</code> is a non-negative integer, <code>1 << x</code> is equivalent to <code>2 ** x</code> but substantially faster (at least, it is in both Python 2 and Python 3 on standard CPython).</p>
<p>Also, to make the recursion more efficient, the function stores previously calculated values in a dictionary. This also lets us easily handle when <code>n <= 2</code>, which the recursive formula itself does not handle.</p>
<pre><code>#!/usr/bin/env python
''' Binary palindromes
Find (non-negative) integers which are palindromes when written in binary
See http://stackoverflow.com/q/39675412/4014959
and https://oeis.org/A006995
Written by PM 2Ring 2016.09.24
Recursion for n>2: a(n)=2^(2k-q)+1+2^p*a(m), where k:=floor(log_2(n-1)), and p, q and m are determined as follows:
Case 1: If n=2^(k+1), then p=0, q=0, m=1;
Case 2: If 2^k<n<2^k+2^(k-1), then set i:=n-2^k, p=k-floor(log_2(i))-1, q=2, m=2^floor(log_2(i))+i;
Case 3: If n=2^k+2^(k-1), then p=0, q=1, m=1;
Case 4: If 2^k+2^(k-1)<n<2^(k+1), then set j:=n-2^k-2^(k-1), p=k-floor(log_2(j))-1, q=1, m=2*2^floor(log_2(j))+j;
'''
#Fast Python 3 version of floor(log2(n))
def flog2(n):
return n.bit_length() - 1
def binpal(n, cache={1:0, 2:1, 3:3}):
if n in cache:
return cache[n]
k = flog2(n - 1)
b = 1 << k
a, c = b >> 1, b << 1
if n == c:
p, q, m = 0, 0, 1
elif b < n < a + b:
i = n - b
logi = flog2(i)
p, q, m = k - logi - 1, 2, (1 << logi) + i
elif n == a + b:
p, q, m = 0, 1, 1
else:
#a + b < n < c
i = n - a - b
logi = flog2(i)
p, q, m = k - logi - 1, 1, (2 << logi) + i
result = (1 << (2*k - q)) + 1 + (1 << p) * binpal(m)
cache[n] = result
return result
def palgenbase2():
''' generator of binary palindromes '''
yield 0
x, n, n2 = 1, 1, 2
while True:
for y in range(n, n2):
s = format(y, 'b')
yield int(s+s[-2::-1], 2)
for y in range(n, n2):
s = format(y, 'b')
yield int(s+s[::-1], 2)
x += 1
n *= 2
n2 *= 2
gen = palgenbase2()
for i in range(1, 30):
b = next(gen)
c = binpal(i)
print('{0:>2}: {1} {1:b} {2}'.format(i, b, c))
</code></pre>
<p><strong>output</strong></p>
<pre><code> 1: 0 0 0
2: 1 1 1
3: 3 11 3
4: 5 101 5
5: 7 111 7
6: 9 1001 9
7: 15 1111 15
8: 17 10001 17
9: 21 10101 21
10: 27 11011 27
11: 31 11111 31
12: 33 100001 33
13: 45 101101 45
14: 51 110011 51
15: 63 111111 63
16: 65 1000001 65
17: 73 1001001 73
18: 85 1010101 85
19: 93 1011101 93
20: 99 1100011 99
21: 107 1101011 107
22: 119 1110111 119
23: 127 1111111 127
24: 129 10000001 129
25: 153 10011001 153
26: 165 10100101 165
27: 189 10111101 189
28: 195 11000011 195
29: 219 11011011 219
</code></pre>
<p>If you need to run this on Python 2 you won't be able to use that <code>flog2</code> function, since Python 2 integers do not have the <code>bit_length</code> method. Here's an alternative version:</p>
<pre><code>from math import floor, log
def flog2(n):
return int(floor(log(n) / log(2)))
</code></pre>
| 3 |
2016-09-24T12:47:05Z
|
[
"python",
"algorithm",
"binary",
"palindrome"
] |
I am trying to find the nth binary palindrome
| 39,675,412 |
<p><a href="https://oeis.org/A006995" rel="nofollow">Binary palindromes: numbers whose binary expansion is palindromic.</a> <br>
Binary Palindrome -> is a number whose binary representation is a palindrome. <br>
<a href="http://ideone.com/CfQZuH" rel="nofollow">Here is link to the solution with naive approach</a></p>
<p>I have read the above link, and it gives a formula to find the nth binary palindrome, but I am unable to understand it and thus code the solution. </p>
<pre><code>def palgenbase2(): # generator of palindromes in base 2
#yield 0
x, n, n2 = 1, 1, 2
m = 1;
while True:
for y in range(n, n2):
s = format(y, 'b')
yield int(s+s[-2::-1], 2)
for y in range(n, n2):
s = format(y, 'b')
yield int(s+s[::-1], 2)
x += 1
n *= 2
n2 *= 2
if n2 > 1000000000:
break
ans = {}
for i,j in enumerate(palgenbase2()):
print i,j
ans[i]=j
with open("output","a") as f:
f.write(ans)
#use the saved output to give answer to query later
#this will work but it takes too much time.
n = int(raw_input())
for c in range(0,n):
z = int(raw_input())
print ans[z]
</code></pre>
<p>Here is some Python code, but it generates all such palindromes.<br>
I need the program to get the nth binary palindrome directly,<br>
as follows:</p>
<blockquote>
<p>Input -> 1 <= n <= 1000000000<br>
Function -> f(n)<br>
output -> nth binary palindrome.</p>
</blockquote>
<p>Can we do this in better time using the formula mentioned <a href="https://oeis.org/A006995" rel="nofollow">here</a>?</p>
| -2 |
2016-09-24T10:37:04Z
| 39,689,086 |
<p>I am not going to write the full code; let us examine the algorithm.</p>
<p>The columns are: bit count, combinations, combination count</p>
<ul>
<li>1 1 | 1 </li>
<li>2 11 | 1</li>
<li>3 101 111 | 2</li>
<li>4 1001 1111 | 2</li>
<li>5 10001 10101 11011 11111 |4</li>
<li>6 100001 101101 110011 111111 | 4</li>
</ul>
<p>If you follow this series, the count doubles every two steps: for a bit count of n there are 1<<((n-1)>>1) combinations. I do not know whether this can be computed in closed form, but doing it iteratively is very fast since the counts grow exponentially. Let n be the count of palindromes with fewer than i bits and m the running total:</p>
<pre><code>int i, n = 0, m = 0;
for (i = 1; m < nth; i++)
{
    n = m;
    m += 1 << ((i - 1) >> 1);   /* count of palindromes with i bits */
}
</code></pre>
<p>Now you know how many bits are required: i - 1 (the loop increments i once more before its exit test fails).</p>
<p>You build a binary string of (bits+1)/2 digits, starting as 100...0.
You add (nth-n)-1 to it (-1 because it is 0-based). Then you mirror the string and you are done.</p>
<p>Example: you need the 12th element.
You sum 1+1+2+2+4+4, so you know your 12th element has 6 bits.
Up to 5 bits there are 10 elements, so 12-10=2 and 2-1=1.
Your string starts as
100 (6 bits / 2);
you add 1 (binary 1):
100+1=101.
Mirror it, and your nth palindrome has the form 101101. This also works with odd bit counts; just check the special cases of 1 and 2 bits.</p>
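A runnable sketch of this counting approach (my own naming; indices start at 1 and the palindrome 0 is omitted, matching the walkthrough above):

```python
def nth_binary_palindrome(nth):
    """Sketch of the counting approach: find the bit width, build the half, mirror it."""
    # There are 1 << ((i - 1) >> 1) binary palindromes with exactly i bits.
    i, total = 1, 1
    while total < nth:
        i += 1
        total += 1 << ((i - 1) >> 1)
    prev = total - (1 << ((i - 1) >> 1))      # palindromes with fewer than i bits
    offset = nth - prev - 1                   # 0-based rank among i-bit palindromes
    half = (1 << ((i + 1) // 2 - 1)) + offset # "100...0" plus the offset
    s = format(half, 'b')
    mirror = s[:-1] if i % 2 else s           # drop the middle bit for odd widths
    return int(s + mirror[::-1], 2)

first_12 = [nth_binary_palindrome(n) for n in range(1, 13)]
print(first_12)
```

Each call is O(log nth) loop iterations plus one string mirror, so even nth around 10^9 is answered instantly, with no precomputed table.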
| 1 |
2016-09-25T16:19:55Z
|
[
"python",
"algorithm",
"binary",
"palindrome"
] |
how to decode a string containing Persian/Arabic characters?
| 39,675,421 |
<p>In web scraping I sometimes need to get data from Persian webpages, but when I try to decode it and inspect the extracted data, the result is not what I expect it to be.</p>
<p>Here is the step-by-step code where the problem occurs:</p>
<p><strong>1.getting data from a Persian website</strong></p>
<pre><code>import urllib2
data = urllib2.urlopen('http://cafebazar.ir').read() # this is a persian website
</code></pre>
<p><strong>2.detecting type of encoding</strong></p>
<pre><code>import chardet
chardet.detect(data)
# in this case result is :
{'confidence': 0.6567038227597763, 'encoding': 'ISO-8859-2'}
</code></pre>
<p><strong>3. decoding and encoding</strong></p>
<pre><code>final = data.decode(chardet.detect(data)['encoding']).encode('ascii', 'ignore')
</code></pre>
<p>but the final result is not in Persian at all !</p>
| -1 |
2016-09-24T10:37:56Z
| 39,676,207 |
<p>Instead of encoding into ascii, you should decode into something else, for example <code>utf-8</code>:</p>
<pre><code>final = data.decode(chardet.detect(data)['encoding']).encode('utf-8')
</code></pre>
<p>In order to view it though, you should write it into a file as most terminals do not display non-ascii chars correctly:</p>
<pre><code>with open("temp_file.txt", "w", encoding="utf-8") as myfile:
myfile.write(data.decode(chardet.detect(data)['encoding']))
</code></pre>
| 0 |
2016-09-24T12:13:37Z
|
[
"python",
"python-2.7",
"decode",
"python-unicode"
] |
how to decode a string containing Persian/Arabic characters?
| 39,675,421 |
<p>In web scraping I sometimes need to get data from Persian webpages, but when I try to decode it and inspect the extracted data, the result is not what I expect it to be.</p>
<p>Here is the step-by-step code where the problem occurs:</p>
<p><strong>1.getting data from a Persian website</strong></p>
<pre><code>import urllib2
data = urllib2.urlopen('http://cafebazar.ir').read() # this is a persian website
</code></pre>
<p><strong>2.detecting type of encoding</strong></p>
<pre><code>import chardet
chardet.detect(data)
# in this case result is :
{'confidence': 0.6567038227597763, 'encoding': 'ISO-8859-2'}
</code></pre>
<p><strong>3. decoding and encoding</strong></p>
<pre><code>final = data.decode(chardet.detect(data)['encoding']).encode('ascii', 'ignore')
</code></pre>
<p>but the final result is not in Persian at all !</p>
| -1 |
2016-09-24T10:37:56Z
| 39,677,019 |
<p>The fundamental problem is that character-set detection is not a completely deterministic problem. <code>chardet</code>, and every program like it, is a <em>heuristic</em> detector. There is no guarantee or expectation that it will guess correctly all the time, and your program needs to cope with that.</p>
<p>If your problem is a single web site, simply inspect it and hard-code the correct character set.</p>
<p>If you are dealing with a constrained set of sites, with a restricted and somewhat predictable set of languages, most heuristic detectors have tweaks and settings you can pass in to improve the accuracy by constraining the possibilities.</p>
<p>In the most general case, there is no single solution which works correctly for all the sites in the world.</p>
<p>Many sites lie, they give you well-defined and helpful <code>Content-Type:</code> headers and <code>lang</code> tags ... which totally betray what's actually there - sometimes because of admin error, sometimes because they use a CMS which forces them to pretend their site is in a single language when in reality it isn't; and often because there is no language support in the back end, and something along the way "helpfully" adds a tag or header when in fact it would be more correct and actually helpful to say you don't know when you don't know.</p>
<p>What you can do is to code defensively. Maybe try <code>chardet</code>, then fall back to whatever the site tells you, then fall back to UTF-8, then maybe Latin-1? The jury is out while the world keeps on changing...</p>
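A sketch of that defensive fallback chain (Python 3 syntax; the candidate order below is just one reasonable choice, not a fixed rule):

```python
def decode_defensively(raw_bytes, declared=None):
    """Try chardet's guess, then the declared charset, then common fallbacks."""
    candidates = []
    try:
        import chardet
        guess = chardet.detect(raw_bytes).get('encoding')
        if guess:
            candidates.append(guess)
    except ImportError:
        pass                      # chardet not installed; use the fallbacks
    if declared:                  # what the Content-Type header claimed
        candidates.append(declared)
    candidates += ['utf-8', 'latin-1']
    for enc in candidates:
        try:
            return raw_bytes.decode(enc)
        except (UnicodeDecodeError, LookupError):
            continue
    return raw_bytes.decode('utf-8', errors='replace')  # last resort
```

Note that "decodes without error" does not guarantee "decoded correctly" (latin-1 accepts any byte sequence), so logging which candidate won is worth the extra line in production code.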
| 1 |
2016-09-24T13:42:31Z
|
[
"python",
"python-2.7",
"decode",
"python-unicode"
] |
python, import module in vs code
| 39,675,568 |
<p>I installed cocos2d for python and the samples worked well</p>
<p>But when I move the python file into the folder that I selected in the visual studio code, it's only saying that it cannot find the module named cocos. </p>
<p>I guess I need to change the setting in the launch.json but I don't know how.</p>
<p>I'll just upload the part of my launch.json file.</p>
<pre><code>{
"version": "0.2.0",
"configurations": [
{
"name": "Python",
"type": "python",
"request": "launch",
"stopOnEntry": true,
"pythonPath": "C:/Users/Sanghun/AppData/Local/Programs/Python/Python35-32/python.exe",
"program": "${file}",
"debugOptions": [
"WaitOnAbnormalExit",
"WaitOnNormalExit",
"RedirectOutput"
]
},
</code></pre>
| 0 |
2016-09-24T10:57:09Z
| 39,713,749 |
<p>You will need to modify the python.pythonPath setting in settings.json to point to this interpreter as follows:
<code>
"python.pythonPath":"C:/Users/Sanghun/AppData/Local/Programs/Python/Python35-32/python.exe"
</code></p>
<p>Or you could just launch the command 'Select Workspace Interpreter' (<a href="https://github.com/DonJayamanne/pythonVSCode/wiki/Python-Path-and-Version#selecting-an-interpreter" rel="nofollow">https://github.com/DonJayamanne/pythonVSCode/wiki/Python-Path-and-Version#selecting-an-interpreter</a>)</p>
<p>One last note, you might want to revert the change in launch.json to the following (prevents you from having to specify the path in two places):
<code>"pythonPath": "${config.python.pythonPath}",</code></p>
| 0 |
2016-09-26T23:11:47Z
|
[
"python",
"module",
"cocos2d-x",
"vscode"
] |
How do I use the model Inception.tgz in TensorFlow?
| 39,675,604 |
<p>I just downloaded the inception.tgz file from tensorflow.org at <a href="http://download.tensorflow.org/models/image/imagenet/inception-2015-12-05.tgz" rel="nofollow">http://download.tensorflow.org/models/image/imagenet/inception-2015-12-05.tgz</a>. But, I do not know where I should extract this. </p>
<p>Also, when I used the models/image/imagenet/classify_image.py script to get the model, the model was not saved after a reboot, so I had to download it again via the same script. I need to use it at times when I am not connected to the Internet, so downloading the model every time I need to classify is not ideal for me. How can I persist the model once and for all? </p>
<p>Also, how can I use the .tgz inception model?</p>
| 0 |
2016-09-24T11:02:10Z
| 39,678,707 |
<p>By default the image model gets downloaded to /tmp/imagenet, but you can set your own folder by passing in the <code>--model_dir</code> command line parameter to classify_image.py:
<a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/models/image/imagenet/classify_image.py#L56" rel="nofollow">https://github.com/tensorflow/tensorflow/blob/master/tensorflow/models/image/imagenet/classify_image.py#L56</a></p>
| 0 |
2016-09-24T16:48:15Z
|
[
"python",
"tensorflow"
] |
How do I use the model Inception.tgz in TensorFlow?
| 39,675,604 |
<p>I just downloaded the inception.tgz file from tensorflow.org at <a href="http://download.tensorflow.org/models/image/imagenet/inception-2015-12-05.tgz" rel="nofollow">http://download.tensorflow.org/models/image/imagenet/inception-2015-12-05.tgz</a>. But, I do not know where I should extract this. </p>
<p>Also, when I used the models/image/imagenet/classify_image.py script to get the model, the model was not saved after a reboot, so I had to download it again via the same script. I need to use it at times when I am not connected to the Internet, so downloading the model every time I need to classify is not ideal for me. How can I persist the model once and for all? </p>
<p>Also, how can I use the .tgz inception model?</p>
| 0 |
2016-09-24T11:02:10Z
| 39,678,736 |
<p>I cannot comment on your question since I do not have enough reputation yet, so let me give you a general answer.</p>
<ol>
<li><p>The <code>inception-2015-12-05.tgz</code> file you mentioned contains two files which you require:</p>
<p>a) imagenet_comp_graph_label_strings.txt</p>
<p>b) tensorflow_inception_graph.pb</p></li>
</ol>
<p>There is a license file that you won't require as well. These two files will let you make predictions on images.</p>
<ol start="2">
<li><p>The part where you mentioned <code>the model was not saved after a reboot, so I had to download it again via the same script</code> intrigues me. I have never come across such an issue. Try this now:</p>
<ul>
<li>Create a folder in a location of your choice. Say <code>~/Documents</code>.</li>
<li>When you run the python script <code>classify_image.py</code> use the <code>--model_dir</code> flag to redirect the model file directory to <code>~/Documents</code>. This will essentially download and extract the necessary files to the specified location and you can use the same location in <code>--model_dir</code> flag ever since.</li>
</ul></li>
</ol>
<p>Take a look at this:</p>
<pre><code>Aruns-MacBook-Pro:imagenet arundas$ python classify_image.py --model_dir ~/Documents/
>> Downloading inception-2015-12-05.tgz 100.0%
Succesfully downloaded inception-2015-12-05.tgz 88931400 bytes.
W tensorflow/core/framework/op_def_util.cc:332] Op BatchNormWithGlobalNormalization is deprecated. It will cease to work in GraphDef version 9. Use tf.nn.batch_normalization().
giant panda, panda, panda bear, coon bear, Ailuropoda melanoleuca (score = 0.89233)
indri, indris, Indri indri, Indri brevicaudatus (score = 0.00859)
lesser panda, red panda, panda, bear cat, cat bear, Ailurus fulgens (score = 0.00264)
custard apple (score = 0.00141)
earthstar (score = 0.00107)
Aruns-MacBook-Pro:imagenet arundas$ python classify_image.py --model_dir ~/Documents/
W tensorflow/core/framework/op_def_util.cc:332] Op BatchNormWithGlobalNormalization is deprecated. It will cease to work in GraphDef version 9. Use tf.nn.batch_normalization().
giant panda, panda, panda bear, coon bear, Ailuropoda melanoleuca (score = 0.89233)
indri, indris, Indri indri, Indri brevicaudatus (score = 0.00859)
lesser panda, red panda, panda, bear cat, cat bear, Ailurus fulgens (score = 0.00264)
custard apple (score = 0.00141)
earthstar (score = 0.00107)
</code></pre>
<p>The model was not downloaded the second time.
Hope this helps.</p>
| 0 |
2016-09-24T16:51:00Z
|
[
"python",
"tensorflow"
] |
How can I advance the index of a pandas.dataframe by one quarter?
| 39,675,716 |
<p>I would like to shift the index of a pandas.dataframe by one quarter. The dataframe looks like:</p>
<pre><code> ID Nowcast Forecast
1991-01-01 35 4144.70 4137.40
1991-01-01 40 4114.00 4105.00
1991-01-01 60 4135.00 4130.00
....
</code></pre>
<p>So far, I calculate the number of occurrences of the first timestamp <code>1991-01-01</code> and shifted the dataframe accordingly. The code is: </p>
<pre><code>Stamps = df.index.unique()
zero = 0
for val in df.index:
if val == Stamps[0]:
zero = zero + 1
df = df.shift(zero)
</code></pre>
<p>The operation results in the following dataframe: </p>
<pre><code> ID Nowcast Forecast
1991-04-01 35.0 4144.70 4137.40
1991-04-01 40.0 4114.00 4105.00
1991-04-01 60.0 4135.00 4130.00
</code></pre>
<p>The way I'm doing this strikes me as inefficient and error-prone. Is there a better way? </p>
| 1 |
2016-09-24T11:17:16Z
| 39,680,495 |
<p>you can use <a href="http://pandas.pydata.org/pandas-docs/stable/timeseries.html#dateoffset-objects" rel="nofollow">pd.DateOffset()</a>:</p>
<pre><code>In [110]: df
Out[110]:
ID Nowcast Forecast
1991-01-01 35 4144.7 4137.4
1991-01-01 40 4114.0 4105.0
1991-01-01 60 4135.0 4130.0
In [111]: df.index += pd.DateOffset(months=3)
In [112]: df
Out[112]:
ID Nowcast Forecast
1991-04-01 35 4144.7 4137.4
1991-04-01 40 4114.0 4105.0
1991-04-01 60 4135.0 4130.0
</code></pre>
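An equivalent sketch using `DataFrame.shift` with a `freq` argument, which moves the index rather than the data (the frame below recreates the question's example):

```python
import pandas as pd

df = pd.DataFrame(
    {'ID': [35, 40, 60],
     'Nowcast': [4144.70, 4114.00, 4135.00],
     'Forecast': [4137.40, 4105.00, 4130.00]},
    index=pd.to_datetime(['1991-01-01'] * 3),
)
# When freq is given, shift() moves the index by the offset
# and leaves the values untouched (no NaNs introduced).
shifted = df.shift(freq=pd.DateOffset(months=3))
```

Unlike the counting loop in the question, this works regardless of how many rows share the first timestamp.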
| 1 |
2016-09-24T20:08:42Z
|
[
"python",
"pandas"
] |
We changed order for a list, now we need a mechanism for accessing elements by index with matching behaviour
| 39,675,758 |
<p>There is a class BaseFormSet. </p>
<pre><code>class BaseFormSet(object):
def __iter__(self):
return iter(self.forms)
def __getitem__(self, index):
return self.forms[index]
</code></pre>
<p>Where self.forms is a list. The documentation tells us that this class represents a set of forms, in the order in which they were created. We can change that order by overriding the __iter__() method of BaseFormSet, but if we override __iter__(), we will also need to override __getitem__() to have matching behavior.</p>
<p>Well, this __getitem__() astonishes me a bit.</p>
<p>Well, we are talking about lists, as a matter of fact. So, in the example I'll use a list.</p>
<p>This is what I can think of to the best of my ability:</p>
<pre><code>l = [0, 1, 2, 3, 4, 5]  # Imitating a formset.
def get_reversed_index(length, index):
return -(index+1) % length
def print_result(i):
reversed_index = get_reversed_index(len(l), i)
return "Index {}, reversed {}, reversed value {}".format(i, reversed_index, l[reversed_index])
</code></pre>
<p>This seems to reverse the index. But is it a matching behavior which documentation implies?</p>
<p>What troubles me: my code seems unable to raise an <code>IndexError</code> exception, the way an ordinary out-of-range access such as <code>print(l[10])</code> does:</p>
<pre><code>IndexError: list index out of range
</code></pre>
<p>Could you help me: what matching behaviour is necessary? And is my example garbage?</p>
<p>P.S. The class is from Django.</p>
| 0 |
2016-09-24T11:21:20Z
| 39,675,818 |
<p>I'm having trouble understanding your question, but if you mean that you <em>want</em> to raise an exception from your "reversed" indices, then you need to change your <code>get_reversed_index</code> function.</p>
<p>Currently, the output of this function is guaranteed to be in the range <code>[0, length)</code> due to the <code>%</code> or mod operator. So if you want to generate indices that can be out of bounds, try removing the <code>% length</code> ...</p>
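As a sketch of what "matching behaviour" means here (a toy class, not Django's actual code): if `__iter__` walks the forms in reverse, `__getitem__` should index into that same reversed order, and out-of-range indices then raise IndexError with no extra work:

```python
class ReversedFormSet:
    """Toy formset whose iteration order and indexing both run in reverse."""
    def __init__(self, forms):
        self.forms = forms

    def __iter__(self):
        return reversed(self.forms)

    def __getitem__(self, index):
        # Indexing the reversed copy still raises IndexError when the
        # index is out of range, so no manual bounds check is needed.
        return self.forms[::-1][index]

fs = ReversedFormSet(['a', 'b', 'c'])
```

The point is that `list(fs)[i]` and `fs[i]` now always agree, which is the consistency the Django documentation is asking for.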
| 0 |
2016-09-24T11:29:38Z
|
[
"python"
] |
How do coroutines in Python compare to those in Lua?
| 39,675,844 |
<p>Support for coroutines in Lua is provided by <a href="https://www.lua.org/manual/5.3/manual.html#2.6">functions in the <code>coroutine</code> table</a>, primarily <code>create</code>, <code>resume</code> and <code>yield</code>. The developers describe these coroutines as <a href="http://laser.inf.ethz.ch/2012/slides/Ierusalimschy/coroutines-4.pdf">stackful, first-class and asymmetric</a>.</p>
<p>Coroutines are also available in Python, either using <a href="https://www.python.org/dev/peps/pep-0342/">enhanced generators</a> (and <a href="https://www.python.org/dev/peps/pep-0380/"><code>yield from</code></a>) or, added in version 3.5, <a href="https://www.python.org/dev/peps/pep-0492/"><code>async</code> and <code>await</code></a>.</p>
<p>How do coroutines in Python compare to those in Lua? Are they also stackful, first-class and asymmetric?</p>
<p>Why does Python require so many constructs (<code>async def</code>, <code>async with</code>, <code>async for</code>, <a href="https://www.python.org/dev/peps/pep-0530/">asynchronous comprehensions</a>, ...) for coroutines, while Lua can provide them with just three built-in functions?</p>
| 15 |
2016-09-24T11:32:53Z
| 39,788,063 |
<p>I just had my first look at <code>lua</code>, which included the <a href="https://www.lua.org/cgi-bin/demo?sieve" rel="nofollow"><code>sieve.lua</code> live demo</a>. It is an implementation of the sieve of Erathostenes using coroutines. My immediate thought was: This would look much cleaner in python:</p>
<pre><code>#!/usr/bin/env python3
# sieve.py
# the sieve of Eratosthenes programmed with a generator functions
# typical usage: ./sieve.py 500 | column
import sys
# generate all the numbers from 2 to n
def gen(n):
for i in range(2,n):
yield i
# filter the numbers generated by `g', removing multiples of `p'
def filter(p, g):
for n in g:
if n%p !=0:
yield n
N=int(sys.argv[1]) if len(sys.argv)>1 else 500 # from command line
x=gen(N) # generate primes up to N
while True:
try:
n = next(x) # pick a number until done
except StopIteration:
break
print(n) # must be a prime number
x = filter(n, x) # now remove its multiples
</code></pre>
<p>This does not have much to do with the question, but on my machine using <code>Python 3.4.3</code> a stack overflow happens somewhere for <code>N>7500</code>. Using <code>sieve.lua</code> with <code>Lua 5.2.3</code> the stack overflow happens already at <code>N>530</code>.</p>
<p>Generator objects (which represent a suspended coroutine) can be passed around like any other object, and the next() built-in can be applied to it in any place, so coroutines in python are first-class. Please correct me if I am wrong.</p>
| 0 |
2016-09-30T09:22:58Z
|
[
"python",
"asynchronous",
"lua",
"async-await",
"coroutine"
] |
How do coroutines in Python compare to those in Lua?
| 39,675,844 |
<p>Support for coroutines in Lua is provided by <a href="https://www.lua.org/manual/5.3/manual.html#2.6">functions in the <code>coroutine</code> table</a>, primarily <code>create</code>, <code>resume</code> and <code>yield</code>. The developers describe these coroutines as <a href="http://laser.inf.ethz.ch/2012/slides/Ierusalimschy/coroutines-4.pdf">stackful, first-class and asymmetric</a>.</p>
<p>Coroutines are also available in Python, either using <a href="https://www.python.org/dev/peps/pep-0342/">enhanced generators</a> (and <a href="https://www.python.org/dev/peps/pep-0380/"><code>yield from</code></a>) or, added in version 3.5, <a href="https://www.python.org/dev/peps/pep-0492/"><code>async</code> and <code>await</code></a>.</p>
<p>How do coroutines in Python compare to those in Lua? Are they also stackful, first-class and asymmetric?</p>
<p>Why does Python require so many constructs (<code>async def</code>, <code>async with</code>, <code>async for</code>, <a href="https://www.python.org/dev/peps/pep-0530/">asynchronous comprehensions</a>, ...) for coroutines, while Lua can provide them with just three built-in functions?</p>
| 15 |
2016-09-24T11:32:53Z
| 39,879,949 |
<p>The simple answer is that they are different languages. Yes, Python coroutines are stackful, first-class and asymmetric. See this answer: <a href="http://stackoverflow.com/q/715758/584846">Coroutine vs Continuation vs Generator</a></p>
<p>From the Lua <a href="https://www.lua.org/pil/9.1.html" rel="nofollow">documentation</a>:</p>
<blockquote>
<p>Some people call asymmetric coroutine semi-coroutines (because they
are not symmetrical, they are not really co). However, other people
use the same term semi-coroutine to denote a restricted implementation
of coroutines, where a coroutine can only suspend its execution when
it is not inside any auxiliary function, that is, when it has no
pending calls in its control stack. In other words, only the main body
of such semi-coroutines can yield. A generator in Python is an example
of this meaning of semi-coroutines.</p>
<p>Unlike the difference between symmetric and asymmetric coroutines, the
difference between coroutines and generators (as presented in Python)
is a deep one; generators are simply not powerful enough to implement
several interesting constructions that we can write with true
coroutines. Lua offers true, asymmetric coroutines. Those that prefer
symmetric coroutines can implement them on top of the asymmetric
facilities of Lua. It is an easy task. (Basically, each transfer does
a yield followed by a resume.)</p>
</blockquote>
<p>Also, see this discussion on Python's developer mail list: <a href="https://mail.python.org/pipermail/python-dev/2015-April/139695.html" rel="nofollow">PEP 492: What is the real goal?</a></p>
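The "only the main body can yield" restriction quoted above is easy to demonstrate: PEP 380's `yield from` makes delegation explicit, rather than letting a nested call suspend the outer coroutine the way Lua's `coroutine.yield` can from arbitrary call depth.

```python
def helper():
    # In Lua, a coroutine.yield() here would suspend the whole coroutine.
    # In Python, these yields only make helper() itself a generator.
    yield 1
    yield 2

def gen():
    yield 0
    helper()             # a plain call does nothing visible: no delegation
    yield from helper()  # delegation must be spelled out explicitly
```

So Python generators yield only from their own frame, and each level of nesting needs its own `yield from` to pass suspension upward.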
| 1 |
2016-10-05T17:07:32Z
|
[
"python",
"asynchronous",
"lua",
"async-await",
"coroutine"
] |
Is python += string concatenation bad practice?
| 39,675,898 |
<p>I am reading <a href="http://docs.python-guide.org/en/latest/writing/structure/#mutable-and-immutable-types" rel="nofollow">The Hitchhiker's Guide to Python</a> and there is a short code snippet </p>
<pre><code>foo = 'foo'
bar = 'bar'
foobar = foo + bar # This is good
foo += 'ooo' # This is bad, instead you should do:
foo = ''.join([foo, 'ooo'])
</code></pre>
<p>The author pointed out that <code>''.join()</code> is not always faster than <code>+</code>, so he is not against using <code>+</code> for string concatenation. </p>
<p>But why is <code>foo += 'ooo'</code> bad practice whereas <code>foobar=foo+bar</code> is considered good? </p>
<ul>
<li>is <code>foo += bar</code> good?</li>
<li>is <code>foo = foo + 'ooo'</code> good?</li>
</ul>
<p>Before this code snippet, the author wrote:</p>
<blockquote>
<p>One final thing to mention about strings is that using join() is not always best. In the instances where you are creating a new string from a pre-determined number of strings, using the addition operator is actually faster, but in cases like above or in cases where you are adding to an existing string, using join() should be your preferred method.</p>
</blockquote>
| 2 |
2016-09-24T11:37:39Z
| 39,680,089 |
<h2>Is it bad practice?</h2>
<p>It's reasonable to assume that it isn't bad practice for this example because:</p>
<ul>
<li>The author doesn't give any reason. Maybe it's just disliked by him/her.</li>
<li>Python documentation doesn't mention it's bad practice (from what I've seen).</li>
<li><code>foo += 'ooo'</code> is just as readable (according to me) and is approximately 100 times faster than <code>foo = ''.join([foo, 'ooo'])</code>.</li>
</ul>
<h2>When should one be used over the other?</h2>
<p>Concatenation of strings has the disadvantage of needing to create a new string and allocate new memory <em>for every concatenation</em>! This is time consuming, but isn't that big of a deal with few and small strings. When you know the number of strings to concatenate and don't need more than maybe 2-4 concatenations, I'd go for it.</p>
<hr>
<p>When joining strings Python only has to allocate new memory for the final string, which is much more efficient, but could take longer to compute. Also, because strings are immutable it's often more practical to use a list of strings to dynamically mutate, and only convert it to a string when needed.</p>
<p>It's often convenient to create strings with str.join() since it takes an iterable. For example:</p>
<pre><code>letters = ", ".join("abcdefghij")
</code></pre>
<h2>To conclude</h2>
<p>In most cases it makes more sense to use <code>str.join()</code> but there are times when concatenation is just as viable. Using any form of string concatenation for huge or many strings would be bad practice just as using <code>str.join()</code> would be bad practice for short and few strings, in my own opinion.</p>
<p>I believe that the author was just trying to create a rule of thumb to more easily identify when to use what, without going into too much detail or making it complicated.</p>
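<p>As an illustration (my own sketch, not part of the original answer): both approaches build the same string, but <code>str.join()</code> performs a single allocation for the final result, whereas <code>+=</code> creates a new string object on every step:</p>

```python
# Build one string from many small pieces; the names here are illustrative.
pieces = [str(i) for i in range(1000)]

# Repeated concatenation: a new string object is created on every +=
result = ""
for piece in pieces:
    result += piece

# join: the final string is allocated once
joined = "".join(pieces)

assert result == joined
```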
| 3 |
2016-09-24T19:21:58Z
|
[
"python",
"string",
"string-concatenation"
] |
Regex Python not working correctly
| 39,675,984 |
<p>Hi, I have an issue with regex in Python 3.</p>
<p><strong>Here is my code :</strong></p>
<pre><code># Here is regex 1, have to find tk_common on file tmp/out-file.txt
regex1 = re.compile(r'tk_common')
with open("tmp/out-file.txt") as f:
for line in f:
result = regex1.search(line)
# If found :
if regex1.search is not None:
print("I find tk_common in out-file")
# else :
else:
print("I didn't find tk_common in out-file")
# And here is the second one
regex2 = re.compile(r'tk_text')
with open("tmp/out-file.txt") as f:
for line in f:
result = regex2.search(line)
if regex2.search is not None:
print("I find tk_text in out-file")
else:
print("I didn't fint tk_text in out-file")
</code></pre>
<p><strong>My issue :</strong></p>
<p>I get two success print messages when I start my program:</p>
<pre><code>I find tk_common in out-file
I find tk_text in out-file
</code></pre>
<p>But in fact, <strong>it should not :</strong></p>
<pre><code>$ cat tmp/out-file.txt | grep "tk_common\|tk_text"
<div class="tk_common">
</code></pre>
<p>Any idea ?
Thanks,</p>
| 0 |
2016-09-24T11:47:08Z
| 39,676,080 |
<pre><code>if regex1.search is not None:
</code></pre>
<p>Should be <code>result</code></p>
<pre><code>if result is not None
</code></pre>
<p>Because <code>re.compile().search</code> is a function, and most definitely not <code>None</code>. You want to look at the return value.</p>
<p>Also, your loop</p>
<pre><code>regex1 = re.compile(r'tk_common')
with open("tmp/out-file.txt") as f:
for line in f:
result = regex1.search(line)
</code></pre>
<p>If you find it on line 1, but then not on line 2, your result is going to be None and give a false negative. You should do <code>if result: break</code></p>
<p>Using the regex engine is a little overkill to simply check "is this string a substring". You can just do something like this:</p>
<pre><code>found_tk = False
with open('filename', 'r') as file_handle:
for line in file_handle:
if 'tk_text' in line:
found_tk = True
break
</code></pre>
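<p>As a side note (my own variant, not part of the original answer), the same substring scan can be written more compactly with <code>any()</code>; here <code>io.StringIO</code> stands in for the real opened file:</p>

```python
import io

# io.StringIO stands in for the opened file from the answer above
file_handle = io.StringIO('<div class="tk_common">\nsome other line\n')
found_tk = any('tk_text' in line for line in file_handle)
print(found_tk)  # False: 'tk_text' does not occur in the sample data
```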
| 0 |
2016-09-24T11:58:24Z
|
[
"python",
"python-3.x"
] |
Regex Python not working correctly
| 39,675,984 |
<p>Hi, I have an issue with regex in Python 3.</p>
<p><strong>Here is my code :</strong></p>
<pre><code># Here is regex 1, have to find tk_common on file tmp/out-file.txt
regex1 = re.compile(r'tk_common')
with open("tmp/out-file.txt") as f:
for line in f:
result = regex1.search(line)
# If found :
if regex1.search is not None:
print("I find tk_common in out-file")
# else :
else:
print("I didn't find tk_common in out-file")
# And here is the second one
regex2 = re.compile(r'tk_text')
with open("tmp/out-file.txt") as f:
for line in f:
result = regex2.search(line)
if regex2.search is not None:
print("I find tk_text in out-file")
else:
print("I didn't fint tk_text in out-file")
</code></pre>
<p><strong>My issue :</strong></p>
<p>I get two success print messages when I start my program:</p>
<pre><code>I find tk_common in out-file
I find tk_text in out-file
</code></pre>
<p>But in fact, <strong>it should not :</strong></p>
<pre><code>$ cat tmp/out-file.txt | grep "tk_common\|tk_text"
<div class="tk_common">
</code></pre>
<p>Any idea ?
Thanks,</p>
| 0 |
2016-09-24T11:47:08Z
| 39,676,090 |
<p>This line:</p>
<pre><code>if regex1.search is not None:
</code></pre>
<p>will never be <code>None</code> because <code>regex1.search</code> refers to the <code>search</code> method, not to the return value of that method. Therefore your code always thinks that there is a match.</p>
<p>I think that you meant to check the <code>result</code> variable, not <code>regex1.search</code>.</p>
<pre><code>regex1 = re.compile(r'tk_common')
with open("tmp/out-file.txt") as f:
for line in f:
result = regex1.search(line)
if result is not None:
print("I find tk_common in out-file")
break
else:
print("I didn't find tk_common in out-file")
</code></pre>
<p>It is unnecessary to compile the re pattern because it will be <a href="https://docs.python.org/3/library/re.html#re.compile" rel="nofollow">cached by the re module</a> anyway. Also, since you don't use the match object saved in <code>result</code>, you could just test the result of <code>re.search()</code> directly:</p>
<pre><code>with open("tmp/out-file.txt") as f:
for line in f:
if re.search(r'tk_common', line) is not None:
print("I find tk_common in out-file")
break
else:
print("I didn't find tk_common in out-file")
</code></pre>
| 1 |
2016-09-24T11:59:07Z
|
[
"python",
"python-3.x"
] |
Different functions return the same item from a Python dictionary
| 39,676,159 |
<pre><code>import json, os
def load_data(filepath):
if not os.path.exists(filepath):
return None
with open(filepath, 'r') as file:
return json.load(file)
def get_biggest_bar(data):
bars = []
for bar in data:
bars.append((bar['Cells']['SeatsCount'] , bar['Number']))
max_number = max(bars)[1]
(item for item in data if item['Number'] == max_number).__next__()
return item, max_number
def get_smallest_bar(data):
bars = []
for bar in data:
bars.append((bar['Cells']['SeatsCount'] , bar['Number']))
min_number = min(bars)[1]
(item for item in data if item['Number'] == min_number).__next__()
return item, min_number
def get_closest_bar(data, longitude, latitude):
coordinates = []
def get_distance(point, input_point):
return ((longitude-input_point[0])**2 + (latitude - input_point[1])**2)**1/2
for cell in data:
coordinates.append([cell['Cells']['geoData']['coordinates'],cell['Number']])
for coor in coordinates:
coor[0] = get_distance(point, coor[0])
closest_bar = min(coordinates)[1]
(item for item in data if item['Number'] == closest_bar).__next__()
return item, closest_bar
if __name__ == '__main__':
data = load_data("Bars.json")
print(get_smallest_bar(data))
print(get_biggest_bar(data))
print(get_closest_bar(data, 50.0, 50.0))
</code></pre>
<p>And it's output is:</p>
<pre><code>(dict_values(['СемÑновÑкий пеÑеÑлок, дом 21', 'неÑ', 'Ñайон Ð¡Ð¾ÐºÐ¾Ð»Ð¸Ð½Ð°Ñ ÐоÑа', 'ÐоÑÑоÑнÑй админиÑÑÑаÑивнÑй окÑÑг', 'да', 177, {'type': 'Point', 'coordinates': [37.717115000077776, 55.78262800012168]}, 'СÐÐ', 272459722, [{'PublicPhone': '(916) 223-32-98'}], 'SÐÐ']), 37)
(dict_values(['СемÑновÑкий пеÑеÑлок, дом 21', 'неÑ', 'Ñайон Ð¡Ð¾ÐºÐ¾Ð»Ð¸Ð½Ð°Ñ ÐоÑа', 'ÐоÑÑоÑнÑй админиÑÑÑаÑивнÑй окÑÑг', 'да', 177, {'type': 'Point', 'coordinates': [37.717115000077776, 55.78262800012168]}, 'СÐÐ', 272459722, [{'PublicPhone': '(916) 223-32-98'}], 'SÐÐ']), 434)
(dict_values(['СемÑновÑкий пеÑеÑлок, дом 21', 'неÑ', 'Ñайон Ð¡Ð¾ÐºÐ¾Ð»Ð¸Ð½Ð°Ñ ÐоÑа', 'ÐоÑÑоÑнÑй админиÑÑÑаÑивнÑй окÑÑг', 'да', 177, {'type': 'Point', 'coordinates': [37.717115000077776, 55.78262800012168]}, 'СÐÐ', 272459722, [{'PublicPhone': '(916) 223-32-98'}], 'SÐÐ']), 170)
</code></pre>
<p>As you can see, the items are COMPLETELY identical, but they should be different (I tried to divide the functions and run them separately, and they output different items)! Also, you can see the second number in the functions' returns: they are different! What's the matter?!</p>
| 0 |
2016-09-24T12:07:56Z
| 39,676,831 |
<p>You are using a generator expression to get the item, but on the next line that variable is not what you think it is: the loop variable <code>item</code> inside a generator expression has its own scope and never leaks out. I would prefer to return the actual generated value. Also, you are computing the closest bar to some point, but not the one you passed into the function.</p>
<p>Thus I think <code>item</code> and <code>point</code> are both global variables that you are using by mistake inside your functions.</p>
<p>I have python2.7, so the syntax to get the next value from the generator may be slightly different.</p>
<pre><code>def load_data(filepath):
data = [
{'Number': 10, 'Cells': {'SeatsCount': 10, 'geoData': {'coordinates': (10, 10)}}},
{'Number': 50, 'Cells': {'SeatsCount': 50, 'geoData': {'coordinates': (50, 50)}}},
{'Number': 90, 'Cells': {'SeatsCount': 90, 'geoData': {'coordinates': (90, 90)}}}
]
return data
def get_biggest_bar(data):
bars = []
for bar in data:
bars.append((bar['Cells']['SeatsCount'] , bar['Number']))
max_number = max(bars)[1]
g = (item for item in data if item['Number'] == max_number)
return next(g), max_number
def get_smallest_bar(data):
bars = []
for bar in data:
bars.append((bar['Cells']['SeatsCount'] , bar['Number']))
min_number = min(bars)[1]
g = (item for item in data if item['Number'] == min_number)
return next(g), min_number
def get_closest_bar(data, longitude, latitude):
point = (longitude, latitude)
coordinates = []
def get_distance(point, input_point):
return ((longitude-input_point[0])**2 + (latitude - input_point[1])**2)**1/2
for cell in data:
coordinates.append([cell['Cells']['geoData']['coordinates'],cell['Number']])
for coor in coordinates:
coor[0] = get_distance(point, coor[0])
closest_bar = min(coordinates)[1]
g = (item for item in data if item['Number'] == closest_bar)
return next(g), closest_bar
if __name__ == '__main__':
data = load_data("Bars.json")
print("smallest", get_smallest_bar(data))
print("biggest", get_biggest_bar(data))
print("closest", get_closest_bar(data, 50.0, 50.0))
</code></pre>
<p>Output:</p>
<pre><code>('smallest', ({'Cells': {'geoData': {'coordinates': (10, 10)}, 'SeatsCount': 10}, 'Number': 10}, 10))
('biggest', ({'Cells': {'geoData': {'coordinates': (90, 90)}, 'SeatsCount': 90}, 'Number': 90}, 90))
('closest', ({'Cells': {'geoData': {'coordinates': (50, 50)}, 'SeatsCount': 50}, 'Number': 50}, 50))
</code></pre>
| 2 |
2016-09-24T13:26:14Z
|
[
"python",
"dictionary"
] |
Different functions return the same item from a Python dictionary
| 39,676,159 |
<pre><code>import json, os
def load_data(filepath):
if not os.path.exists(filepath):
return None
with open(filepath, 'r') as file:
return json.load(file)
def get_biggest_bar(data):
bars = []
for bar in data:
bars.append((bar['Cells']['SeatsCount'] , bar['Number']))
max_number = max(bars)[1]
(item for item in data if item['Number'] == max_number).__next__()
return item, max_number
def get_smallest_bar(data):
bars = []
for bar in data:
bars.append((bar['Cells']['SeatsCount'] , bar['Number']))
min_number = min(bars)[1]
(item for item in data if item['Number'] == min_number).__next__()
return item, min_number
def get_closest_bar(data, longitude, latitude):
coordinates = []
def get_distance(point, input_point):
return ((longitude-input_point[0])**2 + (latitude - input_point[1])**2)**1/2
for cell in data:
coordinates.append([cell['Cells']['geoData']['coordinates'],cell['Number']])
for coor in coordinates:
coor[0] = get_distance(point, coor[0])
closest_bar = min(coordinates)[1]
(item for item in data if item['Number'] == closest_bar).__next__()
return item, closest_bar
if __name__ == '__main__':
data = load_data("Bars.json")
print(get_smallest_bar(data))
print(get_biggest_bar(data))
print(get_closest_bar(data, 50.0, 50.0))
</code></pre>
<p>And it's output is:</p>
<pre><code>(dict_values(['СемÑновÑкий пеÑеÑлок, дом 21', 'неÑ', 'Ñайон Ð¡Ð¾ÐºÐ¾Ð»Ð¸Ð½Ð°Ñ ÐоÑа', 'ÐоÑÑоÑнÑй админиÑÑÑаÑивнÑй окÑÑг', 'да', 177, {'type': 'Point', 'coordinates': [37.717115000077776, 55.78262800012168]}, 'СÐÐ', 272459722, [{'PublicPhone': '(916) 223-32-98'}], 'SÐÐ']), 37)
(dict_values(['СемÑновÑкий пеÑеÑлок, дом 21', 'неÑ', 'Ñайон Ð¡Ð¾ÐºÐ¾Ð»Ð¸Ð½Ð°Ñ ÐоÑа', 'ÐоÑÑоÑнÑй админиÑÑÑаÑивнÑй окÑÑг', 'да', 177, {'type': 'Point', 'coordinates': [37.717115000077776, 55.78262800012168]}, 'СÐÐ', 272459722, [{'PublicPhone': '(916) 223-32-98'}], 'SÐÐ']), 434)
(dict_values(['СемÑновÑкий пеÑеÑлок, дом 21', 'неÑ', 'Ñайон Ð¡Ð¾ÐºÐ¾Ð»Ð¸Ð½Ð°Ñ ÐоÑа', 'ÐоÑÑоÑнÑй админиÑÑÑаÑивнÑй окÑÑг', 'да', 177, {'type': 'Point', 'coordinates': [37.717115000077776, 55.78262800012168]}, 'СÐÐ', 272459722, [{'PublicPhone': '(916) 223-32-98'}], 'SÐÐ']), 170)
</code></pre>
<p>As you can see, the items are COMPLETELY identical, but they should be different (I tried to divide the functions and run them separately, and they output different items)! Also, you can see the second number in the functions' returns: they are different! What's the matter?!</p>
| 0 |
2016-09-24T12:07:56Z
| 39,676,867 |
<p>Assign the result of the call to <code>__next__()</code> before returning it, like this:</p>
<pre><code>result = (item for item in data if item['Number'] == closest_bar).__next__()
return result, closest_bar
</code></pre>
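<p>A small addition of my own: the built-in <code>next()</code> also accepts a default value, which avoids a <code>StopIteration</code> when nothing matches (minimal hypothetical data, just to show the pattern):</p>

```python
# Minimal hypothetical data, just to show the pattern
data = [{'Number': 10}, {'Number': 50}]

result = next((item for item in data if item['Number'] == 50), None)
print(result)   # {'Number': 50}

missing = next((item for item in data if item['Number'] == 99), None)
print(missing)  # None instead of raising StopIteration
```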
| 2 |
2016-09-24T13:29:14Z
|
[
"python",
"dictionary"
] |
Different functions return the same item from a Python dictionary
| 39,676,159 |
<pre><code>import json, os
def load_data(filepath):
if not os.path.exists(filepath):
return None
with open(filepath, 'r') as file:
return json.load(file)
def get_biggest_bar(data):
bars = []
for bar in data:
bars.append((bar['Cells']['SeatsCount'] , bar['Number']))
max_number = max(bars)[1]
(item for item in data if item['Number'] == max_number).__next__()
return item, max_number
def get_smallest_bar(data):
bars = []
for bar in data:
bars.append((bar['Cells']['SeatsCount'] , bar['Number']))
min_number = min(bars)[1]
(item for item in data if item['Number'] == min_number).__next__()
return item, min_number
def get_closest_bar(data, longitude, latitude):
coordinates = []
def get_distance(point, input_point):
return ((longitude-input_point[0])**2 + (latitude - input_point[1])**2)**1/2
for cell in data:
coordinates.append([cell['Cells']['geoData']['coordinates'],cell['Number']])
for coor in coordinates:
coor[0] = get_distance(point, coor[0])
closest_bar = min(coordinates)[1]
(item for item in data if item['Number'] == closest_bar).__next__()
return item, closest_bar
if __name__ == '__main__':
data = load_data("Bars.json")
print(get_smallest_bar(data))
print(get_biggest_bar(data))
print(get_closest_bar(data, 50.0, 50.0))
</code></pre>
<p>And it's output is:</p>
<pre><code>(dict_values(['СемÑновÑкий пеÑеÑлок, дом 21', 'неÑ', 'Ñайон Ð¡Ð¾ÐºÐ¾Ð»Ð¸Ð½Ð°Ñ ÐоÑа', 'ÐоÑÑоÑнÑй админиÑÑÑаÑивнÑй окÑÑг', 'да', 177, {'type': 'Point', 'coordinates': [37.717115000077776, 55.78262800012168]}, 'СÐÐ', 272459722, [{'PublicPhone': '(916) 223-32-98'}], 'SÐÐ']), 37)
(dict_values(['СемÑновÑкий пеÑеÑлок, дом 21', 'неÑ', 'Ñайон Ð¡Ð¾ÐºÐ¾Ð»Ð¸Ð½Ð°Ñ ÐоÑа', 'ÐоÑÑоÑнÑй админиÑÑÑаÑивнÑй окÑÑг', 'да', 177, {'type': 'Point', 'coordinates': [37.717115000077776, 55.78262800012168]}, 'СÐÐ', 272459722, [{'PublicPhone': '(916) 223-32-98'}], 'SÐÐ']), 434)
(dict_values(['СемÑновÑкий пеÑеÑлок, дом 21', 'неÑ', 'Ñайон Ð¡Ð¾ÐºÐ¾Ð»Ð¸Ð½Ð°Ñ ÐоÑа', 'ÐоÑÑоÑнÑй админиÑÑÑаÑивнÑй окÑÑг', 'да', 177, {'type': 'Point', 'coordinates': [37.717115000077776, 55.78262800012168]}, 'СÐÐ', 272459722, [{'PublicPhone': '(916) 223-32-98'}], 'SÐÐ']), 170)
</code></pre>
<p>As you can see, the items are COMPLETELY identical, but they should be different (I tried to divide the functions and run them separately, and they output different items)! Also, you can see the second number in the functions' returns: they are different! What's the matter?!</p>
| 0 |
2016-09-24T12:07:56Z
| 39,677,868 |
<p>Answers explaining what went wrong have already been given. In my opinion, the code searching for min and max was not "pythonic", so I'd like to suggest another approach:</p>
<p>(Using sample data from Kenny Ostrom's answer)</p>
<pre><code>data = [ {'Number': 10, 'Cells': {'SeatsCount': 10, 'geoData': {'coordinates': (10, 10)}}},
{'Number': 50, 'Cells': {'SeatsCount': 50, 'geoData': {'coordinates': (50, 50)}}},
{'Number': 90, 'Cells': {'SeatsCount': 90, 'geoData': {'coordinates': (90, 90)}}} ]
biggest = max(data, key=lambda bar: bar['Cells']['SeatsCount'])
smallest = min(data, key=lambda bar: bar['Cells']['SeatsCount'])
</code></pre>
<p>For the closest bar, a simple custom key function based on the <code>get_distance</code> from the original code is required, but you get the idea.</p>
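<p>For example, a sketch of that idea (my own addition; <code>get_distance</code> mirrors the helper from the question and the data is shaped like the sample above):</p>

```python
# Sketch only; get_distance mirrors the helper from the question
def get_distance(point, input_point):
    return ((point[0] - input_point[0]) ** 2 + (point[1] - input_point[1]) ** 2) ** 0.5

data = [
    {'Number': 10, 'Cells': {'geoData': {'coordinates': (10, 10)}}},
    {'Number': 50, 'Cells': {'geoData': {'coordinates': (50, 50)}}},
    {'Number': 90, 'Cells': {'geoData': {'coordinates': (90, 90)}}},
]
point = (50.0, 50.0)

closest = min(data, key=lambda bar: get_distance(point, bar['Cells']['geoData']['coordinates']))
print(closest['Number'])  # 50
```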
| 2 |
2016-09-24T15:12:24Z
|
[
"python",
"dictionary"
] |
Looping over files and plotting (Python)
| 39,676,294 |
<p>My data looks like the attached picture. All of my data files are in .txt format and my aim is to loop over the files and plot them. The first row contains my variable names
(WL, ABS, T%), so first I need to remove it before proceeding. </p>
<pre><code>with open('Desktop/100-3.txt', 'r') as f:
data = f.read().splitlines(True)
with open('Desktop/100-3.txt', 'w') as f:
f.writelines(data[1:])
</code></pre>
<p>Probably this is not necessary, but I am very new to NumPy. Basically the algorithm will be as follows:</p>
<ol>
<li>Read all the .txt files</li>
<li>Plot T% versus WL, plot ABS versus WL, save. (WL -> x variable)</li>
<li>Continue for the next file, .. (two graphs for every .txt file)</li>
<li>Then finish the loop, exit.</li>
</ol>
<p><a href="http://i.stack.imgur.com/GOvbi.jpg" rel="nofollow">data looks like this</a></p>
<p><strong>What I've tried</strong></p>
<pre><code>from numpy import loadtxt
import os
dizin = os.listdir(os.getcwd())
for i in dizin:
if i.endswith('.txt'):
data = loadtxt("??",float)
</code></pre>
| 0 |
2016-09-24T12:23:01Z
| 39,676,484 |
<p>For data files like this I would prefer <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.genfromtxt.html" rel="nofollow">np.genfromtxt</a> over np.loadtxt, it has many useful options you can look up in the docs. The <a href="https://docs.python.org/3/library/glob.html" rel="nofollow">glob</a> module is also nice to iterate over directories with wildcards as filters:</p>
<pre><code>from glob import glob
import numpy as np
import matplotlib.pyplot as plt
# loop over all files in the current directory ending with .txt
for fname in glob("./*.txt"):
# read file, skip header (1 line) and unpack into 3 variables
WL, ABS, T = np.genfromtxt(fname, skip_header=1, unpack=True)
# first plot
plt.plot(WL, T)
plt.xlabel('WL')
plt.ylabel('T%')
plt.show()
plt.clf()
    # second plot: ABS versus WL
    plt.plot(WL, ABS)
plt.xlabel('WL')
plt.ylabel('ABS')
plt.show()
plt.clf()
</code></pre>
<p>The next step would be to do some research on matplotlib to make the plots look better.</p>
<p>Please let me know if the code does not work, I'll try to fix it then. </p>
<p>EDIT: Added plt.clf() to clear the figure before creating a new one.</p>
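<p>If the plots should be written to files instead of shown interactively (step 2 of the question mentions saving), <code>plt.savefig()</code> can replace <code>plt.show()</code>; the output file names below are just one possible choice, derived from the input name:</p>

```python
import os
import matplotlib.pyplot as plt

fname = './100-3.txt'  # example path, as in the question
base = os.path.splitext(os.path.basename(fname))[0]

plt.plot([400, 500, 600], [80, 90, 85])  # placeholder data
plt.xlabel('WL')
plt.ylabel('T%')
plt.savefig(base + '_T.png')  # writes 100-3_T.png next to the script
plt.clf()
```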
| 1 |
2016-09-24T12:44:27Z
|
[
"python",
"numpy",
"plot"
] |
Indexing dataframe by datetime python ignoring hour, minutes, seconds
| 39,676,328 |
<p>I have a pandas dataframe <code>df1</code> like the following, where the left hand column is in datetime index:</p>
<pre><code>2016-08-25 19:00:00 144.784598 171.696834 187.392857
2016-08-25 20:30:00 144.837891 171.800840 187.531250
2016-08-25 22:00:00 144.930882 171.982199 187.806134
2016-08-25 23:30:00 144.921652 171.939453 187.757102
2016-08-26 01:00:00 144.954799 172.014280 187.845094
2016-08-26 02:30:00 144.900528 171.906090 187.754032
2016-08-26 04:00:00 144.881981 171.828125 187.702679
2016-08-26 05:30:00 144.870937 171.794847 187.655016
2016-08-26 07:00:00 144.840892 171.728800 187.600116
2016-08-26 08:30:00 144.910801 172.001769 188.052317
2016-08-26 10:00:00 145.191640 172.668826 188.868579
2016-08-26 11:30:00 144.477707 171.294408 187.202932
2016-08-26 13:00:00 144.235066 170.835810 186.617500
2016-08-26 14:30:00 144.091562 170.449642 186.164453
2016-08-26 16:00:00 144.017857 170.404412 186.194444
2016-08-28 19:00:00 144.089375 170.459677 186.256250
2016-08-28 20:30:00 144.154567 170.632161 186.528646
2016-08-28 22:00:00 144.177083 170.701823 186.600694
2016-08-28 23:30:00 144.139063 170.636058 186.502604
2016-08-29 01:00:00 144.188802 170.714167 186.653846
2016-08-29 02:30:00 144.266544 170.760066 186.746094
2016-08-29 04:00:00 144.254464 170.792105 186.744420
2016-08-29 05:30:00 144.194643 170.707666 186.626008
2016-08-29 07:00:00 144.168080 170.633899 186.525962
2016-08-29 08:30:00 144.444046 171.226805 187.512533
2016-08-29 10:00:00 144.529018 171.356548 187.731343
2016-08-29 11:30:00 144.578200 171.421900 187.792991
2016-08-29 13:00:00 144.816134 171.924337 188.470633
2016-08-29 14:30:00 144.791319 171.947195 188.438232
2016-08-29 16:00:00 144.884115 172.066621 188.685855
2016-08-29 19:00:00 144.749023 171.873288 188.473404
2016-08-29 20:30:00 144.638091 171.656599 188.183036
2016-08-29 22:00:00 144.663889 171.687962 188.205729
2016-08-29 23:30:00 144.656414 171.689635 188.230183
2016-08-30 01:00:00 144.613005 171.620593 188.083008
2016-08-30 02:30:00 144.532600 171.503879 187.901940
2016-08-30 04:00:00 144.600160 171.569375 187.965000
2016-08-30 05:30:00 144.568487 171.646406 188.067871
2016-08-30 07:00:00 144.785362 171.930504 188.460526
2016-08-30 08:30:00 144.807596 171.831662 188.422468
2016-08-30 10:00:00 144.803997 171.709052 188.194496
2016-08-30 11:30:00 144.709896 171.518804 187.849864
2016-08-30 13:00:00 144.709727 171.573187 187.875962
2016-08-30 14:30:00 144.789761 171.729604 187.790865
2016-08-30 16:00:00 144.821875 171.800000 187.943484
2016-08-30 19:00:00 144.800781 171.762097 187.895833
2016-08-30 20:30:00 144.647727 171.568841 187.679688
2016-08-30 22:00:00 144.654974 171.559630 187.628125
2016-08-30 23:30:00 144.705163 171.652344 187.763672
2016-08-31 01:00:00 144.701202 171.608456 187.714286
2016-08-31 02:30:00 144.677083 171.620052 187.716250
2016-08-31 04:00:00 144.705056 171.551630 187.596755
2016-08-31 05:30:00 144.674479 171.470170 187.554688
2016-08-31 07:00:00 144.667969 171.509430 187.604167
2016-08-31 08:30:00 144.773438 171.754527 187.749107
2016-08-31 10:00:00 144.864793 171.762162 187.853659
2016-08-31 11:30:00 144.820976 171.686443 187.735577
2016-08-31 13:00:00 144.889785 172.005833 188.272672
2016-08-31 14:30:00 144.715252 171.757528 188.100291
2016-08-31 16:00:00 144.637500 171.680804 188.173611
</code></pre>
<p>I want to create a <code>df2</code> which just contains data from <code>2016-08-31</code> and <code>2016-08-30</code>. What's a way to utilize the datetime index to select these 2 days without having to loop through all the hours and minutes? The desired output for <code>df2</code> is:</p>
<pre><code>2016-08-30 01:00:00 144.613005 171.620593 188.083008
2016-08-30 02:30:00 144.532600 171.503879 187.901940
2016-08-30 04:00:00 144.600160 171.569375 187.965000
2016-08-30 05:30:00 144.568487 171.646406 188.067871
2016-08-30 07:00:00 144.785362 171.930504 188.460526
2016-08-30 08:30:00 144.807596 171.831662 188.422468
2016-08-30 10:00:00 144.803997 171.709052 188.194496
2016-08-30 11:30:00 144.709896 171.518804 187.849864
2016-08-30 13:00:00 144.709727 171.573187 187.875962
2016-08-30 14:30:00 144.789761 171.729604 187.790865
2016-08-30 16:00:00 144.821875 171.800000 187.943484
2016-08-30 19:00:00 144.800781 171.762097 187.895833
2016-08-30 20:30:00 144.647727 171.568841 187.679688
2016-08-30 22:00:00 144.654974 171.559630 187.628125
2016-08-30 23:30:00 144.705163 171.652344 187.763672
2016-08-31 01:00:00 144.701202 171.608456 187.714286
2016-08-31 02:30:00 144.677083 171.620052 187.716250
2016-08-31 04:00:00 144.705056 171.551630 187.596755
2016-08-31 05:30:00 144.674479 171.470170 187.554688
2016-08-31 07:00:00 144.667969 171.509430 187.604167
2016-08-31 08:30:00 144.773438 171.754527 187.749107
2016-08-31 10:00:00 144.864793 171.762162 187.853659
2016-08-31 11:30:00 144.820976 171.686443 187.735577
2016-08-31 13:00:00 144.889785 172.005833 188.272672
2016-08-31 14:30:00 144.715252 171.757528 188.100291
2016-08-31 16:00:00 144.637500 171.680804 188.173611
</code></pre>
| 0 |
2016-09-24T12:26:50Z
| 39,676,415 |
<p>You can use strings to slice a datetime index:</p>
<pre><code>df.loc['2016-08-30':'2016-08-31']
Out:
1 2 3
2016-08-30 01:00:00 144.613005 171.620593 188.083008
2016-08-30 02:30:00 144.532600 171.503879 187.901940
2016-08-30 04:00:00 144.600160 171.569375 187.965000
2016-08-30 05:30:00 144.568487 171.646406 188.067871
2016-08-30 07:00:00 144.785362 171.930504 188.460526
2016-08-30 08:30:00 144.807596 171.831662 188.422468
2016-08-30 10:00:00 144.803997 171.709052 188.194496
2016-08-30 11:30:00 144.709896 171.518804 187.849864
2016-08-30 13:00:00 144.709727 171.573187 187.875962
2016-08-30 14:30:00 144.789761 171.729604 187.790865
2016-08-30 16:00:00 144.821875 171.800000 187.943484
2016-08-30 19:00:00 144.800781 171.762097 187.895833
2016-08-30 20:30:00 144.647727 171.568841 187.679688
2016-08-30 22:00:00 144.654974 171.559630 187.628125
2016-08-30 23:30:00 144.705163 171.652344 187.763672
2016-08-31 01:00:00 144.701202 171.608456 187.714286
2016-08-31 02:30:00 144.677083 171.620052 187.716250
2016-08-31 04:00:00 144.705056 171.551630 187.596755
2016-08-31 05:30:00 144.674479 171.470170 187.554688
2016-08-31 07:00:00 144.667969 171.509430 187.604167
2016-08-31 08:30:00 144.773438 171.754527 187.749107
2016-08-31 10:00:00 144.864793 171.762162 187.853659
2016-08-31 11:30:00 144.820976 171.686443 187.735577
2016-08-31 13:00:00 144.889785 172.005833 188.272672
2016-08-31 14:30:00 144.715252 171.757528 188.100291
2016-08-31 16:00:00 144.637500 171.680804 188.173611
</code></pre>
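<p>If the days of interest were not contiguous, one option (my own addition, not from the original answer) is to drop the time component with <code>normalize()</code> and match whole days:</p>

```python
import pandas as pd

idx = pd.to_datetime(['2016-08-25 19:00', '2016-08-30 01:00', '2016-08-31 16:00'])
df = pd.DataFrame({'val': [1, 2, 3]}, index=idx)

# normalize() sets each timestamp's time to midnight, so stamps compare as dates
mask = df.index.normalize().isin(pd.to_datetime(['2016-08-30', '2016-08-31']))
df2 = df[mask]
print(df2)  # keeps only the 2016-08-30 and 2016-08-31 rows
```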
| 1 |
2016-09-24T12:37:00Z
|
[
"python",
"datetime",
"pandas",
"dataframe"
] |
Group data into classes with numpy
| 39,676,339 |
<p>I have a data frame <code>df</code> from which I extract a column <code>mpg</code>.</p>
<p>I want to add class label/names to each row based on the value of <code>mpg</code>.</p>
<p>I have done it with</p>
<pre><code>mpg = df.iloc[:,0]
median = np.percentile(mpg, q=50)
upper_quartile = np.percentile(mpg, q=75)
lower_quartile = np.percentile(mpg, q=25)
mpg_class = np.ones((num_observations, 1))
for i, element in enumerate(X):
mpg = element[0]
if mpg >= upper_quartile:
mpg_class[i] = 3
elif mpg >= median:
mpg_class[i] = 2
elif mpg >= lower_quartile:
mpg_class[i] = 1
else:
mpg_class[i] = 0
</code></pre>
<p>but I wonder if it's possible to do this in a smarter way with <code>numpy</code>? I guess it might be possible with <code>np.where</code> or something similar.</p>
| 0 |
2016-09-24T12:28:29Z
| 39,676,984 |
<p>Seems like you are looking for <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.qcut.html" rel="nofollow">pd.qcut</a>:</p>
<pre><code>pd.qcut(df.iloc[:, 0], [0, 0.25, 0.5, 0.75, 1], [0, 1, 2, 3])
Out:
0 1
1 0
2 1
3 0
4 0
5 0
6 0
...
</code></pre>
<p>The first parameter is the series you want to discretize. The second is the quantiles/percentiles. The last one is the labels (from 0 to 25% - 0, 25% to 50% - 1, etc.)</p>
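<p>If you want to stay in pure NumPy (as the question asked), the same labelling can be done with <code>np.searchsorted</code> on the three percentile edges; this is my own sketch, not part of the original answer:</p>

```python
import numpy as np

mpg = np.array([10., 15., 20., 25., 30., 35., 40., 45.])
bins = np.percentile(mpg, [25, 50, 75])

# side='right' puts values equal to an edge into the upper class,
# matching the >= comparisons in the question's loop
mpg_class = np.searchsorted(bins, mpg, side='right')
print(mpg_class)  # [0 0 1 1 2 2 3 3]
```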
| 3 |
2016-09-24T13:40:28Z
|
[
"python",
"pandas",
"numpy",
"dataframe"
] |
Setting a cookie in a Django view
| 39,676,353 |
<p>I am trying to set a cookie in a view using the following code.</p>
<pre><code>def index(request):
resp = HttpResponse("Setting a cookie")
resp.set_cookie('name', 'value')
if 'name' in request.COOKIES:
print "SET"
else:
print "NOT SET"
return render(request, 'myapp/base.html', {})
</code></pre>
<p>When the view is loaded, the console prints out <code>NOT SET</code>, which means the cookie was not set. In every tutorial/doc, this seems to be the method used. However, it simply does not work for me :/</p>
<p>Any help? I am using Django 1.9.8, and I am running the app in my local server, or <code>127.0.0.1:8000</code>.</p>
| 0 |
2016-09-24T12:30:28Z
| 39,676,378 |
<p>You're creating a response and setting a cookie on it, but then you don't actually do anything with that response. The <code>render</code> shortcut creates its own response which is the one actually sent back to the browser.</p>
<p>You should capture the return value from render, and set the cookie on that:</p>
<pre><code>if 'name' in request.COOKIES:
print "SET"
else:
print "NOT SET"
resp = render(request, 'myapp/base.html', {})
resp.set_cookie('name', 'value')
return resp
</code></pre>
<p>Note that even with this fix, <code>'name' in request.COOKIES</code> will still be <code>False</code> during the request that sets the cookie: <code>request.COOKIES</code> only contains cookies the browser sent along with the request, so the new cookie will only show up there on the <em>next</em> request.</p>
| 2 |
2016-09-24T12:32:42Z
|
[
"python",
"django",
"cookies"
] |
Retrieving comments (disqus) embedded in another web page with python
| 39,676,407 |
<p>I'm scraping a web site using Python 3.5 (BeautifulSoup). I can read everything in the source code, but I've been trying to retrieve the embedded comments from Disqus with no success (it is a reference to a script).</p>
<p>The piece of the html code source looks like this:</p>
<pre><code>var disqus_identifier = "node/XXXXX";
script type='text/javascript' src='https://disqus.com/forums/siteweb/embed.js';
</code></pre>
<p>the src sends to a script function.</p>
<p>I've read the suggestions in stackoverflow, using selenium but I had a really hard time to make it work with no success. I understand that selenium emulates a browser (which I believe is too heavy for what I want). However, I have a problem with the webdrivers, it is not working correctly. So, I dropped this option.</p>
<p>I would like to be able to execute the script and retrieve the .js with the comments.
I found that a possible solution is PyV8. But I can't import in python.
I read the posts in internet, I googled it, but it's not working.</p>
<p>I installed Sublime Text 3 and I downloaded pyv8-win64-p3 manually in: </p>
<p>C:\Users\myusername\AppData\Roaming\Sublime Text 3\Installed Packages\PyV8\pyv8-win64-p3</p>
<p>But I keep getting: </p>
<blockquote>
<p>ImportError: No module named 'PyV8'.</p>
</blockquote>
<p>If somebody can help me, I'll be very very thankful.</p>
| 0 |
2016-09-24T12:35:46Z
| 39,705,046 |
<p>You can construct the Disqus API request by studying the page's network traffic; all the required data are present in the page source. The Disqus API expects a particular query string. I recently extracted comments from the Disqus API; here is some sample code.</p>
<p>Example:
Here <code>soup</code> is the page source and <code>params_dict = json.loads(str(soup).split("embedVars = ")[1].split(";")[0])</code></p>
<pre><code>def disqus(params_dict, soup):
    # getLink() and getJson() are the answer author's own helpers that fetch
    # a URL and return parsed HTML and parsed JSON respectively
headers = {
'User-Agent':'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:44.0) Gecko/20100101 Firefox/44.0'
}
comments_list = []
base = 'default'
s_o = 'default'
version = '25916d2dd3d996eaedf6cdb92f03e7dd'
f = params_dict['disqusShortname']
t_i = params_dict['disqusIdentifier']
t_u = params_dict['disqusUrl']
t_e = params_dict['disqusTitle']
t_d = soup.head.title.text
t_t = params_dict['disqusTitle']
url = 'http://disqus.com/embed/comments/?base=%s&version=%s&f=%s&t_i=%s&t_u=%s&t_e=%s&t_d=%s&t_t=%s&s_o=%s&l='%(base,version,f,t_i,t_u,t_e,t_d,t_t,s_o)
comment_soup = getLink(url)
temp_dict = json.loads(str(comment_soup).split("threadData\" type=\"text/json\">")[1].split("</script")[0])
thread_id = temp_dict['response']['thread']['id']
forumname = temp_dict['response']['thread']['forum']
i = 1
count = 0
flag = True
while flag is True:
disqus_url = 'http://disqus.com/api/3.0/threads/listPostsThreaded?limit=100&thread='+thread_id+'&forum='+forumname+'&order=popular&cursor='+str(i)+':0:0'+'&api_key=E8Uh5l5fHZ6gD8U3KycjAIAk46f68Zw7C6eW8WSjZvCLXebZ7p0r1yrYDrLilk2F'
comment_soup = getJson(disqus_url)
</code></pre>
<p>It will return JSON from which you can extract the comments. Hope this helps.</p>
| 0 |
2016-09-26T14:04:44Z
|
[
"javascript",
"python",
"web-scraping",
"pyv8"
] |
Retrieving comments (disqus) embedded in another web page with python
| 39,676,407 |
<p>I'm scrapping a web site using python 3.5 (Beautifulsoup). I can read everything in the source code but I've been trying to retrieve the embedded comments from disqus with no success (which is a reference to a script).</p>
<p>The piece of the html code source looks like this:</p>
<pre><code>var disqus_identifier = "node/XXXXX";
script type='text/javascript' src='https://disqus.com/forums/siteweb/embed.js';
</code></pre>
<p>the src sends to a script function.</p>
<p>I've read the suggestions in stackoverflow, using selenium but I had a really hard time to make it work with no success. I understand that selenium emulates a browser (which I believe is too heavy for what I want). However, I have a problem with the webdrivers, it is not working correctly. So, I dropped this option.</p>
<p>I would like to be able to execute the script and retrieve the .js with the comments.
I found that a possible solution is PyV8. But I can't import in python.
I read the posts in internet, I googled it, but it's not working.</p>
<p>I installed Sublime Text 3 and I downloaded pyv8-win64-p3 manually in: </p>
<p>C:\Users\myusername\AppData\Roaming\Sublime Text 3\Installed Packages\PyV8\pyv8-win64-p3</p>
<p>But I keep getting: </p>
<blockquote>
<p>ImportError: No module named 'PyV8'.</p>
</blockquote>
<p>If somebody can help me, I'll be very very thankful.</p>
| 0 |
2016-09-24T12:35:46Z
| 39,720,555 |
<p>For Facebook embedded comments,you may use Facebook's graph api to extract the comments in json format.</p>
<p>Example-</p>
<p>Facebook comments - <code>https://graph.facebook.com/comments/?ids= "link of page"</code></p>
| 0 |
2016-09-27T09:07:03Z
|
[
"javascript",
"python",
"web-scraping",
"pyv8"
] |
How to have maximum characters in a text widget
| 39,676,423 |
<p>I do not understand how to allow a maximum of 4 characters in the text widget. At the moment, when the buttons are pressed, an unlimited number of digits is shown in the text widget. Example: 123456, but in this case I only want 1234 to be shown.</p>
<p>Also, if possible, how do you change the size of the window that contains all the widgets? At the moment the window is much larger than the widgets, and I just want it to be the same length. Images of the sizing are shown below:</p>
<p><a href="http://i.stack.imgur.com/RsBrz.png" rel="nofollow">Original</a></p>
<p><a href="http://i.stack.imgur.com/U2mg8.png" rel="nofollow">What i want the window size to be</a></p>
| -2 |
2016-09-24T12:37:57Z
| 39,676,561 |
<p>Based on "how to have a maximum of 4 characters allowed": you need a validation in your program.<br/>
Assuming that you only want integers of at most 4 digits:</p>
<pre><code>from tkinter import *
root = Tk()
def valFunc(txt):
    if txt == "":          # allow the field to be cleared again
        return True
    if len(txt) <= 4:      # at most 4 characters
        try:
            int(txt)       # and digits only
            return True
        except ValueError:
            return False
    return False
vcmd = root.register(valFunc)
e = Entry(root, validate="key", validatecommand=(vcmd, "%P"))
e.pack()
root.mainloop()
</code></pre>
<p>And for the window size you can use the <code>geometry</code> method on your window, e.g. <code>root.geometry("250x60")</code> (width x height in pixels).</p>
| 1 |
2016-09-24T12:54:46Z
|
[
"python",
"tkinter"
] |
Determinant of a (large) 1554 x 1554 matrix in Python
| 39,676,437 |
<p>I need to calculate the determinant of a large 1554 x 1554 matrix of values with single precision in python. In doing so I encounter a runtime warning:</p>
<pre><code>import numpy as np
from numpy import linalg as LA
a = np.random.random((1554, 1554))
b = np.random.random((1554, 1554))
c = np.dot(a,b)
det = LA.det(c)
</code></pre>
<blockquote>
<p>RuntimeWarning: overflow encountered in det
r = _umath_linalg.det(a, signature=signature)</p>
</blockquote>
<p>Any ideas on how I can work around this problem? Many thanks!</p>
<p>Edit: this question is unique in that it specifically refers to computing the determinant of large matrix in double precision, though a possible answer is included here: <a href="http://stackoverflow.com/questions/462500/can-i-get-the-matrix-determinant-using-numpy">Can I get the matrix determinant using Numpy?</a></p>
| 2 |
2016-09-24T12:38:51Z
| 39,676,766 |
<p>You can use this relation: <a href="https://wikimedia.org/api/rest_v1/media/math/render/svg/f6404a766d86e9d78a5c4f82e05de37469a5f8e9" rel="nofollow">https://wikimedia.org/api/rest_v1/media/math/render/svg/f6404a766d86e9d78a5c4f82e05de37469a5f8e9</a></p>
<p>from <a href="https://en.wikipedia.org/wiki/Determinant#Properties_of_the_determinant" rel="nofollow">https://en.wikipedia.org/wiki/Determinant#Properties_of_the_determinant</a></p>
<p>So divide your matrix by its mean and then compute the determinant of the scaled matrix to avoid overflow. Later you can multiply by the mean raised to the power of n (the length of one axis).</p>
<p>Edit: I'm not sure the mean is the ideal choice, though. That is more of a math question.</p>
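<p>A quick sketch of that scaling identity, det(c*A) = c**n * det(A), checked on a tiny matrix; for a 1554 x 1554 matrix you would keep the result in log form, since <code>s**n</code> itself overflows a double:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.random((4, 4))        # tiny stand-in for the 1554 x 1554 matrix

s = a.mean()                  # scaling constant (the mean, as suggested above)
n = a.shape[0]

lhs = np.linalg.det(a)
rhs = s ** n * np.linalg.det(a / s)   # det(A) == s**n * det(A/s)
assert np.isclose(lhs, rhs)

# For large n, s**n itself overflows, so report log|det| instead:
log_abs_det = n * np.log(s) + np.log(abs(np.linalg.det(a / s)))
```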
| 2 |
2016-09-24T13:19:13Z
|
[
"python",
"numpy",
"matrix"
] |
Trigonometry: sin(x) getting negative value
| 39,676,531 |
<p>I am trying to solve a homework: I am required to write a program which will calculate the length of a ladder based on two inputs, that is the desired height to be reached and the angle created by leaning the ladder toward the wall.</p>
<p>I used the following formula to convert degrees to radians :</p>
<pre><code>radians = (math.pi / 180) * x # x is the given angle by the user.
</code></pre>
<p>I imported the math library as well to use its functions.</p>
<pre><code>def main():
import math
print("this program calculates the length of a ladder after you give the height and the angle")
h = eval(input("enter the height you want to reach using the ladder"))
x = eval(input("enter the angle which will be created by leaning the ladder to the wall"))
radians = ( math.pi / 180 ) * x
length = h / math.sin(x)
print("the length is:", length)
main()
</code></pre>
<p>What exactly am I doing wrong?<br>
I know the code is missing something and would appreciate it if someone could help me fill the gap.</p>
| -1 |
2016-09-24T12:50:37Z
| 39,676,565 |
<p>You never used <code>radians</code> after you calculate it. </p>
<p>i.e. <code>length = h / math.sin(radians)</code></p>
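<p>For example, with a height of 5 and an angle of 5 degrees:</p>

```python
import math

h = 5.0                           # height to reach
x = 5.0                           # angle in degrees
radians = (math.pi / 180) * x     # convert degrees to radians
length = h / math.sin(radians)    # use the converted value, not x
print(length)                     # 57.36856622834928
```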
| 1 |
2016-09-24T12:54:55Z
|
[
"python"
] |
Trigonometry: sin(x) getting negative value
| 39,676,531 |
<p>I am trying to solve a homework: I am required to write a program which will calculate the length of a ladder based on two inputs, that is the desired height to be reached and the angle created by leaning the ladder toward the wall.</p>
<p>I used the following formula to convert degrees to radians :</p>
<pre><code>radians = (math.pi / 180) * x # x is the given angle by the user.
</code></pre>
<p>I imported the math library as well to use its functions.</p>
<pre><code>def main():
import math
print("this program calculates the length of a ladder after you give the height and the angle")
h = eval(input("enter the height you want to reach using the ladder"))
x = eval(input("enter the angle which will be created by leaning the ladder to the wall"))
radians = ( math.pi / 180 ) * x
length = h / math.sin(x)
print("the length is:", length)
main()
</code></pre>
<p>What exactly am I doing wrong?<br>
I know the code is missing something and would appreciate it if someone could help me fill the gap.</p>
| -1 |
2016-09-24T12:50:37Z
| 39,676,618 |
<p>To make crickt_007's correct answer absolutely clear: <code>radians</code>, which you did not use after calculating it, should be the argument of the sine:</p>
<pre><code>length = h / math.sin(radians)
</code></pre>
| 0 |
2016-09-24T13:02:22Z
|
[
"python"
] |
Trigonometry: sin(x) getting negative value
| 39,676,531 |
<p>I am trying to solve a homework: I am required to write a program which will calculate the length of a ladder based on two inputs, that is the desired height to be reached and the angle created by leaning the ladder toward the wall.</p>
<p>I used the following formula to convert degrees to radians :</p>
<pre><code>radians = (math.pi / 180) * x # x is the given angle by the user.
</code></pre>
<p>I imported the math library as well to use its functions.</p>
<pre><code>def main():
import math
print("this program calculates the length of a ladder after you give the height and the angle")
h = eval(input("enter the height you want to reach using the ladder"))
x = eval(input("enter the angle which will be created by leaning the ladder to the wall"))
radians = ( math.pi / 180 ) * x
length = h / math.sin(x)
print("the length is:", length)
main()
</code></pre>
<p>What exactly am I doing wrong?<br>
I know the code is missing something and would appreciate it if someone could help me fill the gap.</p>
| -1 |
2016-09-24T12:50:37Z
| 39,676,734 |
<p>You calculate <code>radians</code>, which is fine, but the problem is that you never use that <code>radians</code> value. Your code should be changed as follows:</p>
<pre><code>def main():
import math
print("this program calculates the length of a ladder after you give the height and the angle")
h = eval(input("enter the height you want to reach using the ladder"))
x = eval(input("enter the angle which will be created by leaning the ladder to the wall"))
radians = ( math.pi / 180 ) * x
length = h / math.sin(radians)
print("the length is:", length)
main()
</code></pre>
<p>If both inputs are 5, the output is <code>the length is: 57.36856622834928</code></p>
| 0 |
2016-09-24T13:16:14Z
|
[
"python"
] |
Adding API to Usage Plan using Serverless Framework
| 39,676,532 |
<p>My <code>serverless.yaml</code> file is as follows:</p>
<pre><code>service: aws-python
provider:
name: aws
runtime: python2.7
stage: beta
region: us-east-1
package:
include:
- deps
- functions
- lib
functions:
hello:
handler: functions/handler.function_handler
events:
- http:
path: ta
method: GET
- http:
path: ta
method: POST
</code></pre>
<p>I want to add this API to a Usage Plan. How is this done?</p>
| -1 |
2016-09-24T12:50:40Z
| 39,706,966 |
<p>As mentioned in comments, Serverless doesn't support this by default. You should look to add the appropriate resources to your CloudFormation template as a custom resource or create it using the AWS CLI or another SDK.</p>
| 1 |
2016-09-26T15:38:38Z
|
[
"python",
"amazon-web-services",
"aws-api-gateway",
"serverless-framework"
] |
Adding API to Usage Plan using Serverless Framework
| 39,676,532 |
<p>My <code>serverless.yaml</code> file is as follows:</p>
<pre><code>service: aws-python
provider:
name: aws
runtime: python2.7
stage: beta
region: us-east-1
package:
include:
- deps
- functions
- lib
functions:
hello:
handler: functions/handler.function_handler
events:
- http:
path: ta
method: GET
- http:
path: ta
method: POST
</code></pre>
<p>I want to add this API to a Usage Plan. How is this done?</p>
| -1 |
2016-09-24T12:50:40Z
| 39,721,244 |
<p>I used the AWS CLI with the following command:</p>
<pre><code>aws apigateway update-usage-plan --usage-plan-id <PLAN_ID> --patch-operations op=add,path=/apiStages,value=<API_ID>:<API_STAGE>
</code></pre>
| 0 |
2016-09-27T09:39:18Z
|
[
"python",
"amazon-web-services",
"aws-api-gateway",
"serverless-framework"
] |
RSA Python & Extended Euclidean algorithm to generate the private key. Variable is None
| 39,676,595 |
<p>Problem with simple <a href="https://en.wikipedia.org/wiki/RSA_(cryptosystem)" rel="nofollow"><strong>RSA</strong></a> encryption algorithm. <a href="https://en.wikipedia.org/wiki/Extended_Euclidean_algorithm" rel="nofollow"><strong>Extended Euclidean algorithm</strong></a> is used to generate the <strong>private key</strong>. The problem with <code>multiplicative_inverse(e, phi)</code> method. It is used for finding the multiplicative inverse of two numbers. The function does not return private key correctly. It returns <code>None</code> value.</p>
<hr>
<p>I have the following code:</p>
<pre><code>import random
def gcd(a, b):
while b != 0:
a, b = b, a % b
return a
#Euclidean extended algorithm for finding the multiplicative inverse of two numbers
def multiplicative_inverse(e, phi):
d = 0
x1 = 0
x2 = 1
y1 = 1
temp_phi = phi
while e > 0:
temp1 = temp_phi/e
temp2 = temp_phi - temp1 * e
temp_phi = e
e = temp2
x = x2- temp1* x1
y = d - temp1 * y1
x2 = x1
x1 = x
d = y1
y1 = y
if temp_phi == 1:
return d + phi
def generate_keypair(p, q):
n = p * q
#Phi is the totient of n
phi = (p-1) * (q-1)
#An integer e such that e and phi(n) are coprime
e = random.randrange(1, phi)
#Euclid's Algorithm to verify that e and phi(n) are coprime
g = gcd(e, phi)
while g != 1:
e = random.randrange(1, phi)
g = gcd(e, phi)
#Extended Euclid's Algorithm to generate the private key
d = multiplicative_inverse(e, phi)
#Public key is (e, n) and private key is (d, n)
return ((e, n), (d, n))
if __name__ == '__main__':
p = 17
q = 23
public, private = generate_keypair(p, q)
print("Public key is:", public ," and private key is:", private)
</code></pre>
<p>Since the variable <code>d</code> in the following line <code>d = multiplicative_inverse(e, phi)</code> contains <code>None</code> value, then during encryption I receive the following error:</p>
<blockquote>
<p><strong>TypeError:</strong> unsupported operand type(s) for pow(): 'int', 'NoneType',
'int'</p>
</blockquote>
<p>Output for the code that I provided in the question:</p>
<blockquote>
<p>Public key is: (269, 391) and private key is: (None, 391)</p>
</blockquote>
<hr>
<p><strong>Question:</strong> Why the variable contains <code>None</code> value. How to fix that?</p>
| 0 |
2016-09-24T12:59:05Z
| 39,676,638 |
<p>Well, I'm not sure about the algorithm itself and can't tell you whether it's right or wrong, but you only return a value from the <code>multiplicative_inverse</code> function when <code>temp_phi == 1</code>; otherwise the result is <code>None</code>. So I bet <code>temp_phi != 1</code> when you run the function. There is probably a mistake in the function's logic; note in particular that on Python 3, <code>temp_phi/e</code> is float division, so you most likely want integer division (<code>temp_phi // e</code>).</p>
| 1 |
2016-09-24T13:04:22Z
|
[
"python",
"python-3.x",
"encryption",
"rsa",
"private-key"
] |
RSA Python & Extended Euclidean algorithm to generate the private key. Variable is None
| 39,676,595 |
<p>Problem with simple <a href="https://en.wikipedia.org/wiki/RSA_(cryptosystem)" rel="nofollow"><strong>RSA</strong></a> encryption algorithm. <a href="https://en.wikipedia.org/wiki/Extended_Euclidean_algorithm" rel="nofollow"><strong>Extended Euclidean algorithm</strong></a> is used to generate the <strong>private key</strong>. The problem with <code>multiplicative_inverse(e, phi)</code> method. It is used for finding the multiplicative inverse of two numbers. The function does not return private key correctly. It returns <code>None</code> value.</p>
<hr>
<p>I have the following code:</p>
<pre><code>import random
def gcd(a, b):
while b != 0:
a, b = b, a % b
return a
#Euclidean extended algorithm for finding the multiplicative inverse of two numbers
def multiplicative_inverse(e, phi):
d = 0
x1 = 0
x2 = 1
y1 = 1
temp_phi = phi
while e > 0:
temp1 = temp_phi/e
temp2 = temp_phi - temp1 * e
temp_phi = e
e = temp2
x = x2- temp1* x1
y = d - temp1 * y1
x2 = x1
x1 = x
d = y1
y1 = y
if temp_phi == 1:
return d + phi
def generate_keypair(p, q):
n = p * q
#Phi is the totient of n
phi = (p-1) * (q-1)
#An integer e such that e and phi(n) are coprime
e = random.randrange(1, phi)
#Euclid's Algorithm to verify that e and phi(n) are coprime
g = gcd(e, phi)
while g != 1:
e = random.randrange(1, phi)
g = gcd(e, phi)
#Extended Euclid's Algorithm to generate the private key
d = multiplicative_inverse(e, phi)
#Public key is (e, n) and private key is (d, n)
return ((e, n), (d, n))
if __name__ == '__main__':
p = 17
q = 23
public, private = generate_keypair(p, q)
print("Public key is:", public ," and private key is:", private)
</code></pre>
<p>Since the variable <code>d</code> in the following line <code>d = multiplicative_inverse(e, phi)</code> contains <code>None</code> value, then during encryption I receive the following error:</p>
<blockquote>
<p><strong>TypeError:</strong> unsupported operand type(s) for pow(): 'int', 'NoneType',
'int'</p>
</blockquote>
<p>Output for the code that I provided in the question:</p>
<blockquote>
<p>Public key is: (269, 391) and private key is: (None, 391)</p>
</blockquote>
<hr>
<p><strong>Question:</strong> Why the variable contains <code>None</code> value. How to fix that?</p>
| 0 |
2016-09-24T12:59:05Z
| 39,676,651 |
<p>I think this is the problem:</p>
<pre><code>if temp_phi == 1:
return d + phi
</code></pre>
<p>The function returns a value only when <code>temp_phi</code> equals 1; otherwise it implicitly returns <code>None</code>.</p>
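<p>For reference, here is a minimal working sketch of a modular inverse (an iterative extended Euclid, assuming <code>gcd(e, phi) == 1</code>) that always returns a value; note the integer division <code>//</code>:</p>

```python
def multiplicative_inverse(e, phi):
    """Return d such that (d * e) % phi == 1, assuming gcd(e, phi) == 1."""
    old_r, r = e, phi
    old_s, s = 1, 0
    while r != 0:
        q = old_r // r                   # integer division, not /
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
    return old_s % phi

# With p = 17, q = 23 from the question, phi = 16 * 22 = 352:
d = multiplicative_inverse(269, 352)
print((d * 269) % 352)                   # 1
```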
| 1 |
2016-09-24T13:06:21Z
|
[
"python",
"python-3.x",
"encryption",
"rsa",
"private-key"
] |
generating pi in python
| 39,676,676 |
<p>I would like to know how to adapt this code to give me X digits of pi when asked, because right now it just prints an arbitrary amount.</p>
<pre><code>while True:
print("how many digits of pi would you like")
def make_pi():
q, r, t, k, m, x = 1, 0, 1, 1, 3, 3
number = int(input())
for j in range(int(number)):
if 4 * q + r - t < m * t:
yield m
q, r, t, k, m, x = 10 * q, 10 * (r - m * t), t, k, (10 * (3 * q + r)) // t - 10 * m, x
else:
q, r, t, k, m, x = q * k, (2 * q + r) * x, t * x, k + 1, (q * (7 * k + 2) + r * x) // (t * x), x + 2
digits = make_pi()
pi_list = []
my_array = []
for i in make_pi():
my_array.append(str(i))
my_array = my_array[:1] + ['.'] + my_array[1:]
big_string = "".join(my_array)
print("here is the string:\n %s" % big_string)
</code></pre>
| 1 |
2016-09-24T13:09:08Z
| 39,676,694 |
<pre><code>print('{:.xf}'.format(math.pi))
</code></pre>
<p>Where <code>x</code> is the number of digits of pi you want to print after the decimal.</p>
<p>EDIT:</p>
<p>If you want to do it your way, then:</p>
<pre><code>big_string = ''.join(my_array[:x])
</code></pre>
<p>Where x is the number of <em>characters</em> including leading 3 and decimal point.</p>
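<p>For example, printing ten digits after the decimal point:</p>

```python
import math

print('{:.10f}'.format(math.pi))   # 3.1415926536
```

<p>Note that <code>math.pi</code> is a double, so this approach is only accurate to about 15 decimal places; for more digits you need a spigot generator like the one in the question.</p>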
| 1 |
2016-09-24T13:11:49Z
|
[
"python",
"python-3.x",
"pi"
] |
generating pi in python
| 39,676,676 |
<p>I would like to know how to adapt this code to give me X digits of pi when asked, because right now it just prints an arbitrary amount.</p>
<pre><code>while True:
print("how many digits of pi would you like")
def make_pi():
q, r, t, k, m, x = 1, 0, 1, 1, 3, 3
number = int(input())
for j in range(int(number)):
if 4 * q + r - t < m * t:
yield m
q, r, t, k, m, x = 10 * q, 10 * (r - m * t), t, k, (10 * (3 * q + r)) // t - 10 * m, x
else:
q, r, t, k, m, x = q * k, (2 * q + r) * x, t * x, k + 1, (q * (7 * k + 2) + r * x) // (t * x), x + 2
digits = make_pi()
pi_list = []
my_array = []
for i in make_pi():
my_array.append(str(i))
my_array = my_array[:1] + ['.'] + my_array[1:]
big_string = "".join(my_array)
print("here is the string:\n %s" % big_string)
</code></pre>
| 1 |
2016-09-24T13:09:08Z
| 39,677,340 |
<p>Full code, if interested:</p>
<pre><code>while True:
    print("how many digits of pi would you like")

    def make_pi():
        q, r, t, k, m, x = 1, 0, 1, 1, 3, 3
        for j in range(1000):
            if 4 * q + r - t < m * t:
                yield m
                q, r, t, k, m, x = 10 * q, 10 * (r - m * t), t, k, (10 * (3 * q + r)) // t - 10 * m, x
            else:
                q, r, t, k, m, x = q * k, (2 * q + r) * x, t * x, k + 1, (q * (7 * k + 2) + r * x) // (t * x), x + 2

    digits = make_pi()
    pi_list = []
    my_array = []
    for i in make_pi():
        my_array.append(str(i))
    number = int(input()) + 2
    my_array = my_array[:1] + ['.'] + my_array[1:]
    big_string = "".join(my_array[:number])
    print("here is the string:\n %s" % big_string)
</code></pre>
| 0 |
2016-09-24T14:18:22Z
|
[
"python",
"python-3.x",
"pi"
] |
Django Apps aren't loaded yet Celery Tasks
| 39,676,684 |
<p>What is giving the error below? I'm unsure whether this is a problem with an app I have installed or one of mine. The exception below is generated only when running Celery, i.e. <code>celery -A demo.apps.wall.tasks worker</code>; runserver does not generate any errors. Which app is the issue?</p>
<pre><code>Traceback (most recent call last):
File "/Users/user/Documents/workspace/demo-api/env/bin/celery", line 11, in <module>
sys.exit(main())
File "/Users/user/Documents/workspace/demo-api/env/lib/python3.5/site-packages/celery/__main__.py", line 30, in main
main()
File "/Users/user/Documents/workspace/demo-api/env/lib/python3.5/site-packages/celery/bin/celery.py", line 81, in main
cmd.execute_from_commandline(argv)
File "/Users/user/Documents/workspace/demo-api/env/lib/python3.5/site-packages/celery/bin/celery.py", line 770, in execute_from_commandline
super(CeleryCommand, self).execute_from_commandline(argv)))
File "/Users/user/Documents/workspace/demo-api/env/lib/python3.5/site-packages/celery/bin/base.py", line 309, in execute_from_commandline
argv = self.setup_app_from_commandline(argv)
File "/Users/user/Documents/workspace/demo-api/env/lib/python3.5/site-packages/celery/bin/base.py", line 469, in setup_app_from_commandline
self.app = self.find_app(app)
File "/Users/user/Documents/workspace/demo-api/env/lib/python3.5/site-packages/celery/bin/base.py", line 489, in find_app
return find_app(app, symbol_by_name=self.symbol_by_name)
File "/Users/user/Documents/workspace/demo-api/env/lib/python3.5/site-packages/celery/app/utils.py", line 238, in find_app
sym = imp(app)
File "/Users/user/Documents/workspace/demo-api/env/lib/python3.5/site-packages/celery/utils/imports.py", line 101, in import_from_cwd
return imp(module, package=package)
File "/Users/user/Documents/workspace/demo-api/env/lib/python3.5/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 986, in _gcd_import
File "<frozen importlib._bootstrap>", line 969, in _find_and_load
File "<frozen importlib._bootstrap>", line 958, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 673, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 662, in exec_module
File "<frozen importlib._bootstrap>", line 222, in _call_with_frames_removed
File "/Users/user/Documents/workspace/demo-api/demo/apps/Walls/tasks.py", line 14, in <module>
from demo.apps.Walls.redis_models import WallSchedule, WallBroadcast, UserWalls
File "/Users/user/Documents/workspace/demo-api/demo/apps/Walls/redis_models.py", line 12, in <module>
from demo.apps.memberships.models import Membership
File "/Users/user/Documents/workspace/demo-api/demo/apps/memberships/models.py", line 4, in <module>
from django.contrib.contenttypes.models import ContentType
File "/Users/user/Documents/workspace/demo-api/env/lib/python3.5/site-packages/django/contrib/contenttypes/models.py", line 161, in <module>
class ContentType(models.Model):
File "/Users/user/Documents/workspace/demo-api/env/lib/python3.5/site-packages/django/db/models/base.py", line 94, in __new__
app_config = apps.get_containing_app_config(module)
File "/Users/user/Documents/workspace/demo-api/env/lib/python3.5/site-packages/django/apps/registry.py", line 239, in get_containing_app_config
self.check_apps_ready()
File "/Users/user/Documents/workspace/demo-api/env/lib/python3.5/site-packages/django/apps/registry.py", line 124, in check_apps_ready
raise AppRegistryNotReady("Apps aren't loaded yet.")
django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet.
</code></pre>
<p>Running <code>manage.py check</code> is ok.</p>
| 0 |
2016-09-24T13:10:28Z
| 39,676,957 |
<p>Ok - after posting your celery app files I compared to what I have and tried running. <em>Think</em> I found your issue - it looks like you're calling tasks.py in <code>celery -A demo.apps.wall.tasks</code>.</p>
<p>If your wall module contains celery.py, and tasks.py - you should call <code>celery -A demo.apps.wall</code>. </p>
<p>Here's the directory structure I have, and the celery command I run:</p>
<pre><code>django_project
- an_app
- celery_tasks
    - __init__.py
- celery_app.py (celery.py in your case)
- tasks.py
</code></pre>
<p>The command I run: <code>celery worker -A celery_tasks</code> from the django_project directory.</p>
| 1 |
2016-09-24T13:37:50Z
|
[
"python",
"django",
"celery"
] |
Django Apps aren't loaded yet Celery Tasks
| 39,676,684 |
<p>What is giving the error below? I'm unsure whether this is a problem with an app I have installed or one of mine. The exception below is generated only when running Celery, i.e. <code>celery -A demo.apps.wall.tasks worker</code>; runserver does not generate any errors. Which app is the issue?</p>
<pre><code>Traceback (most recent call last):
File "/Users/user/Documents/workspace/demo-api/env/bin/celery", line 11, in <module>
sys.exit(main())
File "/Users/user/Documents/workspace/demo-api/env/lib/python3.5/site-packages/celery/__main__.py", line 30, in main
main()
File "/Users/user/Documents/workspace/demo-api/env/lib/python3.5/site-packages/celery/bin/celery.py", line 81, in main
cmd.execute_from_commandline(argv)
File "/Users/user/Documents/workspace/demo-api/env/lib/python3.5/site-packages/celery/bin/celery.py", line 770, in execute_from_commandline
super(CeleryCommand, self).execute_from_commandline(argv)))
File "/Users/user/Documents/workspace/demo-api/env/lib/python3.5/site-packages/celery/bin/base.py", line 309, in execute_from_commandline
argv = self.setup_app_from_commandline(argv)
File "/Users/user/Documents/workspace/demo-api/env/lib/python3.5/site-packages/celery/bin/base.py", line 469, in setup_app_from_commandline
self.app = self.find_app(app)
File "/Users/user/Documents/workspace/demo-api/env/lib/python3.5/site-packages/celery/bin/base.py", line 489, in find_app
return find_app(app, symbol_by_name=self.symbol_by_name)
File "/Users/user/Documents/workspace/demo-api/env/lib/python3.5/site-packages/celery/app/utils.py", line 238, in find_app
sym = imp(app)
File "/Users/user/Documents/workspace/demo-api/env/lib/python3.5/site-packages/celery/utils/imports.py", line 101, in import_from_cwd
return imp(module, package=package)
File "/Users/user/Documents/workspace/demo-api/env/lib/python3.5/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 986, in _gcd_import
File "<frozen importlib._bootstrap>", line 969, in _find_and_load
File "<frozen importlib._bootstrap>", line 958, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 673, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 662, in exec_module
File "<frozen importlib._bootstrap>", line 222, in _call_with_frames_removed
File "/Users/user/Documents/workspace/demo-api/demo/apps/Walls/tasks.py", line 14, in <module>
from demo.apps.Walls.redis_models import WallSchedule, WallBroadcast, UserWalls
File "/Users/user/Documents/workspace/demo-api/demo/apps/Walls/redis_models.py", line 12, in <module>
from demo.apps.memberships.models import Membership
File "/Users/user/Documents/workspace/demo-api/demo/apps/memberships/models.py", line 4, in <module>
from django.contrib.contenttypes.models import ContentType
File "/Users/user/Documents/workspace/demo-api/env/lib/python3.5/site-packages/django/contrib/contenttypes/models.py", line 161, in <module>
class ContentType(models.Model):
File "/Users/user/Documents/workspace/demo-api/env/lib/python3.5/site-packages/django/db/models/base.py", line 94, in __new__
app_config = apps.get_containing_app_config(module)
File "/Users/user/Documents/workspace/demo-api/env/lib/python3.5/site-packages/django/apps/registry.py", line 239, in get_containing_app_config
self.check_apps_ready()
File "/Users/user/Documents/workspace/demo-api/env/lib/python3.5/site-packages/django/apps/registry.py", line 124, in check_apps_ready
raise AppRegistryNotReady("Apps aren't loaded yet.")
django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet.
</code></pre>
<p>Running <code>manage.py check</code> is ok.</p>
| 0 |
2016-09-24T13:10:28Z
| 39,676,979 |
<p>Try adding this to the very beginning of <code>tasks.py</code>, before any model imports:</p>
<pre><code>import django
django.setup()
</code></pre>
| 0 |
2016-09-24T13:39:49Z
|
[
"python",
"django",
"celery"
] |
Can't get Aptana Studio 3 to run Python code as PyDev
| 39,676,686 |
<p>I just installed Aptana Studio to start learning to code in Python. However, even my Hello World doesn't run, because I run it in PyDev mode, as suggested.</p>
<p>I use Python 3.5.2, and I have configured the interpreter.
If I click onto "run", all it gives me is this:
<a href="http://i.stack.imgur.com/sxRcr.jpg" rel="nofollow">run function</a></p>
<p>Any help would be appreciated.</p>
| 0 |
2016-09-24T13:10:40Z
| 39,689,547 |
<p>The PyDev version included with Aptana Studio 3 is pretty ancient. Please consider getting either LiClipse (<a href="http://liclipse.com" rel="nofollow">http://liclipse.com</a>) which has PyDev builtin or get the latest Eclipse and install PyDev in it (following instructions from: <a href="http://www.pydev.org/download.html" rel="nofollow">http://www.pydev.org/download.html</a>).</p>
<p>Also, make sure you read the getting started manual: <a href="http://www.pydev.org/manual_101_root.html" rel="nofollow">http://www.pydev.org/manual_101_root.html</a></p>
| 0 |
2016-09-25T17:06:42Z
|
[
"python",
"pydev",
"aptana"
] |
Python getting xml same childnodes in same line
| 39,676,815 |
<h2>I have a xml file as:</h2>
<pre><code><?xml version="1.0" encoding="utf-8"?>
<SetupConf>
<LocSetup>
<Src>
<Dir>C:\User1\test1</Dir>
<Dir>C:\User2\log</Dir>
<Dir>D:\Users\Checkup</Dir>
<Dir>D:\Work1</Dir>
<Dir>E:\job1</Dir>
</Src>
</LocSetup>
</SetupConf>
</code></pre>
<hr>
<p>where <code><Dir></code> node depends on user input. In "Dir" node it may be 1,2,5,10 dir structure defined.
I am able to extract data from the xml using below Python code:</p>
<pre><code>from xml.dom import minidom
dom = minidom.parse('Test0001.xml')
Src=dom.getElementsByTagName('Src')
for node in Src:
alist =node.getElementsByTagName('Dir')
for a in alist:
dirtext = a.childNodes[0].nodeValue + ','
print dirtext
</code></pre>
<p>...............
I am getting Output in multi line as:</p>
<pre><code>C:\User1\test1,
C:\User2\log,
D:\Users\Checkup,
D:\Work1,
E:\job1,
</code></pre>
<hr>
<p>But I need the output in single line without space and remove last comma, like:</p>
<pre><code> C:\User1\test1,C:\User2\log,D:\Users\Checkup,D:\Work1,E:\job1
</code></pre>
<p>Please help me in this regard, I have tried a lot... It may be by using itertools grouping or defaultdict. Any help is greatly appreciated.</p>
| 0 |
2016-09-24T13:23:55Z
| 39,676,862 |
<p>I presume you are trying to store the string so use <em>str.join</em>:</p>
<pre><code>output = ",".join([a.childNodes[0].nodeValue for node in Src for a in node.getElementsByTagName('Dir')])
</code></pre>
<p>You could add a trailing comma to the print statement (<code>print dirtext,</code>) so you don't get a newline printed after each value, but that won't remove the final comma and does not help if you actually want to store the string.</p>
<p>Output:</p>
<pre><code>In [1]: from xml.dom import minidom
In [2]: x = r"""<?xml version="1.0" encoding="utf-8"?>
...: <SetupConf>
...: <LocSetup>
...: <Src>
...: <Dir>C:\User1\test1</Dir>
...: <Dir>C:\User2\log</Dir>
...: <Dir>D:\Users\Checkup</Dir>
...: <Dir>D:\Work1</Dir>
...: <Dir>E:\job1</Dir>
...: </Src>
...: </LocSetup>
...: </SetupConf>"""
In [3]: from StringIO import StringIO
In [4]:
In [4]: dom = minidom.parse(StringIO(x))
In [5]: Src = dom.getElementsByTagName('Src')
In [6]: output = ",".join([a.childNodes[0].nodeValue for node in Src for a in node.getElementsByTagName('Dir')])
In [7]: print(output)
C:\User1\test1,C:\User2\log,D:\Users\Checkup,D:\Work1,E:\job1
</code></pre>
| 0 |
2016-09-24T13:28:45Z
|
[
"python",
"xml"
] |
How can I create a new url in a Django app?
| 39,676,866 |
<p>I have implemented a model in a Django app and I would like to add a URL to handle the view for this model.</p>
<pre><code>class Post(models.Model):
author = models.ForeignKey('auth.User')
title = models.CharField(max_length=200)
text = models.TextField()
created_date = models.DateTimeField(default=timezone.now)
published_date = models.DateTimeField(blank=True, null=True)
</code></pre>
<p>Can you help me?</p>
| -2 |
2016-09-24T13:29:05Z
| 39,677,139 |
<p>I would recommend the Django tutorial as well: <a href="https://docs.djangoproject.com/en/1.10/intro/tutorial01/" rel="nofollow">https://docs.djangoproject.com/en/1.10/intro/tutorial01/</a></p>
<p>However, to answer your question, you could do a setup like this:</p>
<p><strong>views.py:</strong></p>
<pre><code>from django.shortcuts import render
from .models import Post

def my_view(request):
    posts = Post.objects.all()
    return render(request, 'template.html', {'posts': posts})
</code></pre>
<p><strong>urls.py:</strong></p>
<pre><code>from django.conf.urls import url
from . import views

urlpatterns = [
    url(r'^$', views.my_view, name='my_view')
]
</code></pre>
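<p>If you later want a page per <code>Post</code>, the usual approach is a pattern that captures the primary key. The <code>post_detail</code> view name below is hypothetical; the capture itself can be sanity-checked with plain <code>re</code>:</p>

```python
import re

# Hypothetical detail pattern, as it would appear in urlpatterns:
#   url(r'^post/(?P<pk>[0-9]+)/$', views.post_detail, name='post_detail')
pattern = r'^post/(?P<pk>[0-9]+)/$'

match = re.match(pattern, 'post/42/')
print(match.group('pk'))  # -> 42
```

<p>Django passes the captured <code>pk</code> as a keyword argument to the view.</p>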
| 0 |
2016-09-24T13:56:41Z
|
[
"python",
"django",
"url",
"view"
] |
Django 1.9 Crispy forms 1.6: multiple forms issue
| 39,676,869 |
<p>I'm writing an app in Django 1.9 and installed Crispy forms 1.6 (using the bootstrap3 template pack).
In that app, I have a profile model, next to the standard User model.
Now, I want to allow the users to modify their profile, and I want to display both User fields and Profile fields on one page. The Profile model contains an avatar, which I want to display on top of the page, followed by some fields from the User model, and after that the rest of the Profile form's fields.
Something like:</p>
<pre><code>> Avatar (Profile model)
> First name (User model)
> Last name (User model)
> E-mail address (User model)
> Street (Profile model)
> ZIP (Profile model)
> Country (Profile model)
> Website (Profile model)
</code></pre>
<p>I've written 2 Forms:</p>
<pre><code>class UserForm(forms.ModelForm)
class ProfileForm(forms.ModelForm)
</code></pre>
<p>Both extended with the Crispy forms "FormHelper"</p>
<p>In my template, I render the forms as:
<code>{% crispy pform %}</code> (profile form)
<code>{% crispy uform %}</code> (user form)</p>
<p>But, of course, that does not display the fields as I described earlier.</p>
<p>Does anyone know how I can tackle this problem?</p>
<p>Kind regards!</p>
<p>Wim</p>
| 0 |
2016-09-24T13:29:33Z
| 39,937,688 |
<p>Why don't you create a form layout as you desire and render the fields one by one?</p>
<pre><code><form>
{{ pform.avatar|as_crispy_field }}
{{ uform.first_name|as_crispy_field }}
...
</form>
</code></pre>
<p>You may want to initialize forms with different prefixes.</p>
| 1 |
2016-10-08T21:45:43Z
|
[
"python",
"django",
"forms",
"django-crispy-forms"
] |
Create a popup widget in IPython
| 39,676,933 |
<p>I'm trying to create a Popup widget in Jupyter as described in <a href="https://youtu.be/o7Tb7YhJZR0?t=26m21s" rel="nofollow">this vid</a>: </p>
<pre><code>from IPython.html import widgets
</code></pre>
<p>which gives the following warning: </p>
<blockquote>
<p>ShimWarning: The <code>IPython.html</code> package has been deprecated. You should import from <code>notebook</code> instead. <code>IPython.html.widgets</code> has moved to <code>ipywidgets</code>. "<code>IPython.html.widgets</code> has moved to <code>ipywidgets</code>."</p>
</blockquote>
<p>However, <code>ipywidgets</code> doesn't have a popup widget and there is nothing in <a href="https://ipywidgets.readthedocs.io/en/latest/" rel="nofollow">its docs</a>. (By the way, <code>IPython.html.widgets</code> doesn't have a popup widget either)</p>
<p>How can I create a popup widget in jupyter?</p>
<hr>
<p><sub><strong>Versions</strong>:<br>
Jupyter 1.0.0<br>
IPython 5.1.0</sub></p>
| 1 |
2016-09-24T13:35:56Z
| 39,679,140 |
<p>Searching in <a href="https://github.com/ipython/ipython/blob/aa586fd81940e557a1df54ecd0478f9d67dfb6b4/docs/source/whatsnew/version3_widget_migration.rst" rel="nofollow">IPython's repository</a> I found that.. : </p>
<blockquote>
<p><strong>PopupWidget was removed</strong> from IPython. If you use the PopupWidget, try using a Box widget instead. If your notebook can't live without the popup functionality, subclass the Box widget (both in Python and JS) and use JQuery UI's draggable() and resizable() methods to mimic the behavior.</p>
</blockquote>
<p><sub>(emphasis mine)</sub></p>
<hr>
<h2>Why it was removed and future plans</h2>
<p><a href="https://github.com/ipython/ipython/pull/7281#issuecomment-68276486" rel="nofollow">@jdfreder's comment</a> in related discussion:</p>
<blockquote>
<p>The popup widget is an oddball in the built-in widget collection. After talking to the others during the last dev meeting (or meeting before last, I forget which), we came to the conclusion that it is best to remove the popup widget altogether. However, the underlying API that allows one to implement the popup widget will not be removed, see PR #7341 . If we do include something like the popup widget again, it will be implemented from scratch, probably as a phosphor component (or something similar).</p>
</blockquote>
<p><a href="https://gitter.im/jupyter/jupyter?at=57e7075c7270539a6d843a4e" rel="nofollow">@Sylvain Corlay</a> in github chat: </p>
<blockquote>
<p>We are moving away from bootstrapjs widgets. For the good cause
bootstrap is a bad citizen of the browser. creates global variable etc..
With jupyterlab richer layouts should be enabled.</p>
</blockquote>
<p><a href="http://i.stack.imgur.com/e1QbQ.gif" rel="nofollow"><img src="http://i.stack.imgur.com/e1QbQ.gif" alt="enter image description here"></a></p>
| 2 |
2016-09-24T17:35:32Z
|
[
"python",
"popup",
"jupyter",
"jupyter-notebook",
"ipywidgets"
] |
Any way to immediately draw multiple rects/circles in pygame?
| 39,676,950 |
<p>Is there any way to draw a lot of circles/rectangles in pygame? I want to draw some objects every frame and I basically have all positions/sizes here ready in a numpy array.</p>
<p>Do I really have to run through a slow python <code>for</code>-loop? Doesn't pygame have any optimizations? </p>
| -1 |
2016-09-24T13:37:32Z
| 39,719,404 |
<p>Yeah, you do. It shouldn't be too much of a speed issue though: in a previous project I did, drawing up to 400 circles on the screen, every frame, at 60 frames per second, had virtually no lag.</p>
| 0 |
2016-09-27T08:08:54Z
|
[
"python",
"pygame"
] |
Render series of responses in selenium webdriver
| 39,676,990 |
<p>I want to collect a series of responses when navigating a website, and afterwards "recreate" the process <strong>using the responses</strong>.</p>
<p>From <a href="http://stackoverflow.com/questions/36785588/render-http-responsehtml-content-in-selenium-webdriverbrowser">another thread</a> I found this solution to render HTML:</p>
<pre><code>content = requests.get("http://stackoverflow.com/").content
driver = webdriver.Chrome()
driver.get("data:text/html;charset=utf-8," + content)
</code></pre>
<p>Unfortunately, when I try this (using Firefox instead of Chrome), the content is simply put into the browser address bar.</p>
<p>How can I render a series of responses, including for example XHR-responses with selenium webdriver?</p>
| 1 |
2016-09-24T13:40:49Z
| 39,677,133 |
<p>You have to account for certain browser-specific things, like the fact that <a href="http://stackoverflow.com/a/9239272/771848"><code>#</code> and <code>%</code> have to be escaped if you use Firefox</a> - from what I understand, you can simply pass the content through <code>quote()</code>:</p>
<pre><code>try:
    from urllib import quote
except ImportError:
    from urllib.parse import quote  # if Python 3

driver.get("data:text/html;charset=utf-8," + quote(content))
</code></pre>
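<p>A quick check of what <code>quote()</code> does to the two characters Firefox trips over: <code>#</code> becomes <code>%23</code> and <code>%</code> becomes <code>%25</code>:</p>

```python
from urllib.parse import quote  # urllib.quote in Python 2

html = '<div id="x">50% done #1</div>'
escaped = quote(html)
print(escaped)
```

<p>The escaped string contains no raw <code>#</code>, so the browser no longer treats the rest of the data URL as a fragment.</p>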
<p>No need to do that if you use Chrome.</p>
| 1 |
2016-09-24T13:56:14Z
|
[
"javascript",
"python",
"selenium",
"selenium-webdriver"
] |
Render series of responses in selenium webdriver
| 39,676,990 |
<p>I want to collect a series of responses when navigating a website, and afterwards "recreate" the process <strong>using the responses</strong>.</p>
<p>From <a href="http://stackoverflow.com/questions/36785588/render-http-responsehtml-content-in-selenium-webdriverbrowser">another thread</a> I found this solution to render HTML:</p>
<pre><code>content = requests.get("http://stackoverflow.com/").content
driver = webdriver.Chrome()
driver.get("data:text/html;charset=utf-8," + content)
</code></pre>
<p>Unfortunately, when I try this (using Firefox instead of Chrome), the content is simply put into the browser address bar.</p>
<p>How can I render a series of responses, including for example XHR-responses with selenium webdriver?</p>
| 1 |
2016-09-24T13:40:49Z
| 39,716,670 |
<p>I found a possible solution, or rather a workaround. If you save the requests (URLs) and responses in a dictionary, you can set up a server that answers each request with its respective pre-recorded response.</p>
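<p>A minimal sketch of such a replay server using only the standard library; the recorded responses here are made-up placeholders:</p>

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading
from urllib.request import urlopen

# Hypothetical pre-recorded responses, keyed by request path.
recorded = {
    '/': b'<html><body>front page</body></html>',
    '/api/data': b'{"items": [1, 2, 3]}',
}

class ReplayHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = recorded.get(self.path)
        if body is None:
            self.send_error(404)
            return
        self.send_response(200)
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the console quiet during replay

server = HTTPServer(('127.0.0.1', 0), ReplayHandler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = 'http://127.0.0.1:%d/api/data' % server.server_address[1]
replayed = urlopen(url).read()
print(replayed)  # b'{"items": [1, 2, 3]}'
server.shutdown()
```

<p>The webdriver could then be pointed at this local server instead of the live site to "recreate" the recorded session.</p>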
| 0 |
2016-09-27T05:32:35Z
|
[
"javascript",
"python",
"selenium",
"selenium-webdriver"
] |
Why is my very simple neural network not doing at all well?
| 39,677,062 |
<p>I have created an extremely simple neural network to help my understanding. It has one neuron, one input, and one weight. The idea is simple: given many random numbers between 0 and 200, learn that anything over 100 is correct, and anything under 100 is incorrect (instead of just being told). </p>
<pre><code>import random

weight = random.uniform(-1,1)

def train(g,c,i):
    global weight
    weight = weight + (i*(c-g)) #change weight by error change
    if(g==c):
        return True
    else:
        return False

def trial(i):
    global weight
    sum = i*weight
    if(sum>0):
        return 1
    else:
        return -1

def feedData():
    suc = 0
    for x in range(0,10000):
        d = random.randint(0,200)
        if(d>100): #tell what is correct and not (this is like the dataset)
            correct = 1
        else:
            correct = -1
        g = trial(d)
        if(train(g,correct, d)==True):
            suc += 1
    print(suc)

feedData()
</code></pre>
<p>Out of 10000, I would expect at least 8000 to be correct. However, it always ranges between 4990 and 5100 success.</p>
<p>I obviously have a slight flaw in my understanding. Cheers for any advice.</p>
| 0 |
2016-09-24T13:47:42Z
| 39,677,251 |
<p>This is mostly because of this line:</p>
<pre><code>d = random.randint(0,200)
</code></pre>
<p>By the problem itself, you have a 50% chance of getting the right number (>100). If you increase the maximum value from 200 to 500, for example, you'll get closer to what you want.</p>
<p>You need to find a better way to generate random numbers, or to create your own algorithm for this.</p>
| -1 |
2016-09-24T14:08:57Z
|
[
"python",
"machine-learning",
"neural-network"
] |
Why is my very simple neural network not doing at all well?
| 39,677,062 |
<p>I have created an extremely simple neural network to help my understanding. It has one neuron, one input, and one weight. The idea is simple: given many random numbers between 0 and 200, learn that anything over 100 is correct, and anything under 100 is incorrect (instead of just being told). </p>
<pre><code>import random

weight = random.uniform(-1,1)

def train(g,c,i):
    global weight
    weight = weight + (i*(c-g)) #change weight by error change
    if(g==c):
        return True
    else:
        return False

def trial(i):
    global weight
    sum = i*weight
    if(sum>0):
        return 1
    else:
        return -1

def feedData():
    suc = 0
    for x in range(0,10000):
        d = random.randint(0,200)
        if(d>100): #tell what is correct and not (this is like the dataset)
            correct = 1
        else:
            correct = -1
        g = trial(d)
        if(train(g,correct, d)==True):
            suc += 1
    print(suc)

feedData()
</code></pre>
<p>Out of 10000, I would expect at least 8000 to be correct. However, it always ranges between 4990 and 5100 success.</p>
<p>I obviously have a slight flaw in my understanding. Cheers for any advice.</p>
| 0 |
2016-09-24T13:47:42Z
| 39,677,273 |
<p>I think your problem here is that you're lacking a bias term. The network you've built is multiplying a positive integer (<code>d</code>) by a weight value, and then comparing the result to see if it's positive or negative. In an ideal universe, what should the value of <code>weight</code> be? If <code>weight</code> is positive, the network will get about 50% of the inputs right; if it's negative, it will also be right about 50% of the time.</p>
<p>You'll see that the network can't solve this problem, until you introduce a second "weight" as a bias term. If you have <code>sum = i * weight + bias</code>, and you also update <code>bias</code> in <code>train</code>, then you should be able to correctly classify all inputs. I would initialise <code>bias</code> the same way as <code>weight</code>, and then do the update as:</p>
<pre><code>bias = bias + (c-g)
</code></pre>
<p>Bias terms are often used in machine learning systems to account for a "bias" or "skew" in the input data (e.g., in a spam email classifier, maybe 80-95% of emails that we get are not spam, so the system should be biased against marking something as spam). In this case, the bias will allow the network to learn that it should produce some negative outputs, but all of your inputs are positive values.</p>
<p>To put it another way, let's think of linear algebra. Your input classes (that is, {x|x<100} and {x|x>100}) are linearly separable. The function that separates them is something like y = x - 100. This is a straight line on a 2D plot, which has positive slope, and intersects the y axis at y = -100, and the x axis at x = 100. Using this line, you can say that all values for x under 100 map to negative values of y (i.e., are incorrect), and all those above 100 map to positive values of y (i.e., are correct).</p>
<p>The difficulty with your code is that you can only express lines which go through the origin (because you're lacking a bias term).</p>
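<p>To make that last point concrete: with inputs in 0..200, no line through the origin separates the two classes, but one hand-picked weight/bias pair does. The values below are chosen for illustration, not learned:</p>

```python
def trial(x, weight, bias):
    """Classify x as 1 (over 100) or -1 (100 or under) with a biased linear unit."""
    return 1 if x * weight + bias > 0 else -1

def errors(weight, bias):
    # count misclassified inputs over the whole range 0..200
    return sum(trial(x, weight, bias) != (1 if x > 100 else -1)
               for x in range(0, 201))

# Without a bias the line passes through the origin: whichever sign the
# weight has, about half of the inputs are misclassified.
print(errors(1.0, 0.0), errors(-1.0, 0.0))   # 100 100

# With a bias, the rule "x > 100" is exactly representable.
print(errors(1.0, -100.5))                   # 0
```

<p>That is the line y = x - 100 (up to scaling) expressed as weight and bias.</p>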
| 1 |
2016-09-24T14:11:42Z
|
[
"python",
"machine-learning",
"neural-network"
] |
Procedure to map all relationships between elements in a list of lists
| 39,677,070 |
<p>I'm looking for an algorithm that can map all the relationships between all of the elements in sublists belonging to a list of length <code>n</code>. </p>
<p>More concretely, suppose <code>a</code>, <code>b</code>, <code>c</code>, <code>d</code>, <code>e</code> and <code>f</code> are the names of workers and that each sublist represents a 'shift' that occurred yesterday. I'd like to know, for each worker, who worked with who yesterday.</p>
<pre><code>shifts_yesterday = [[a, b, c, d], [b, c, e, f]]
</code></pre>
<p>Goal:</p>
<pre><code>a: b, c, d
b: a, c, d, e, f
c: a, b, d, e, f
d: a, b, c
e: b, c, f
f: b, c, e
</code></pre>
<p>Above, I can see that <code>a</code> worked with <code>b, c, d</code> yesterday; <code>b</code> worked with <code>a, c, d, e, f</code> yesterday, etc.</p>
<p>Time complexity is a concern here as I have a large list to process.
Though, intuitively, I suspect there is a pretty high floor on this one...</p>
<p>Note: I can obviously write a <strike>linear search</strike> straightforward approach with only <code>for</code> loops, but that is (a) not very clever (b) very slow.</p>
<h3>Edit:</h3>
<p>Here's (a messy) attempt:</p>
<pre><code>shifts = [['a', 'b', 'c', 'd'], ['b', 'c', 'e', 'f']]
workers = [i for s in shifts for i in s]

import collections
d = collections.defaultdict(list)

for w in workers:
    for s in shifts:
        for i in s:
            if i != w and w in s:
                if w in d.keys():
                    if i not in d[w]:
                        d[w].append(i)
                else:
                    d[w].append(i)
</code></pre>
<p>Test:</p>
<pre><code>for k, v in collections.OrderedDict(sorted(d.items())).items():
    print(k, v)
</code></pre>
<h3>Edit 2:</h3>
<p>Times:</p>
<ol>
<li><p>mine: <code>%%timeit -r 10</code> --> <code>10000 loops, best of 10: 19 µs per loop</code></p></li>
<li><p>Padraic Cunningham: <code>%%timeit -r 10</code> --> <code>100000 loops, best of 10:
4.89 µs per loop</code></p></li>
<li><p>zvone: <code>%%timeit -r 10</code> --> <code>100000 loops, best of 10: 3.88 µs per
loop</code></p></li>
<li><p>pneumatics: <code>%%timeit -r 10</code> --> <code>10000 loops, best of 10: 33.5 µs per loop</code></p></li>
</ol>
| 2 |
2016-09-24T13:48:16Z
| 39,677,393 |
<p>Pseudo-code algorithm:</p>
<pre><code>declare two-dimensional array workers
for each shift in shifts_yesterday
    for each element x in shift
        add x to workers[x]
        for each element y != x in shift
            add y to workers[x]

for each list xs in workers
    print xs[0] + ": "
    for each element w in xs except the first
        print w + ", "
</code></pre>
<p>The time complexity is <code>O(n*m^2 + w*m)</code> where <code>n</code> is the number of shifts, <code>m</code> is the maximum number of workers in any shift and <code>w</code> is the total number of workers. If you could settle for seeing each worker once (don't display both <code>a: b</code> and <code>b: a</code>) you could shave off one <code>m</code>. That's a quadratic algorithm, I believe that's the best you can do.</p>
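<p>A direct Python rendering of the pseudo-code above, keeping the duplicate check explicit:</p>

```python
from collections import defaultdict

shifts_yesterday = [['a', 'b', 'c', 'd'], ['b', 'c', 'e', 'f']]

workers = defaultdict(list)
for shift in shifts_yesterday:
    for x in shift:
        for y in shift:
            # record each coworker once per worker
            if y != x and y not in workers[x]:
                workers[x].append(y)

for name in sorted(workers):
    print('{}: {}'.format(name, ', '.join(workers[name])))
```

<p>This reproduces exactly the mapping listed in the question's goal.</p>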
| 1 |
2016-09-24T14:23:05Z
|
[
"python",
"algorithm",
"time-complexity"
] |
Procedure to map all relationships between elements in a list of lists
| 39,677,070 |
<p>I'm looking for an algorithm that can map all the relationships between all of the elements in sublists belonging to a list of length <code>n</code>. </p>
<p>More concretely, suppose <code>a</code>, <code>b</code>, <code>c</code>, <code>d</code>, <code>e</code> and <code>f</code> are the names of workers and that each sublist represents a 'shift' that occurred yesterday. I'd like to know, for each worker, who worked with who yesterday.</p>
<pre><code>shifts_yesterday = [[a, b, c, d], [b, c, e, f]]
</code></pre>
<p>Goal:</p>
<pre><code>a: b, c, d
b: a, c, d, e, f
c: a, b, d, e, f
d: a, b, c
e: b, c, f
f: b, c, e
</code></pre>
<p>Above, I can see that <code>a</code> worked with <code>b, c, d</code> yesterday; <code>b</code> worked with <code>a, c, d, e, f</code> yesterday, etc.</p>
<p>Time complexity is a concern here as I have a large list to process.
Though, intuitively, I suspect there is a pretty high floor on this one...</p>
<p>Note: I can obviously write a <strike>linear search</strike> straightforward approach with only <code>for</code> loops, but that is (a) not very clever (b) very slow.</p>
<h3>Edit:</h3>
<p>Here's (a messy) attempt:</p>
<pre><code>shifts = [['a', 'b', 'c', 'd'], ['b', 'c', 'e', 'f']]
workers = [i for s in shifts for i in s]

import collections
d = collections.defaultdict(list)

for w in workers:
    for s in shifts:
        for i in s:
            if i != w and w in s:
                if w in d.keys():
                    if i not in d[w]:
                        d[w].append(i)
                else:
                    d[w].append(i)
</code></pre>
<p>Test:</p>
<pre><code>for k, v in collections.OrderedDict(sorted(d.items())).items():
    print(k, v)
</code></pre>
<h3>Edit 2:</h3>
<p>Times:</p>
<ol>
<li><p>mine: <code>%%timeit -r 10</code> --> <code>10000 loops, best of 10: 19 µs per loop</code></p></li>
<li><p>Padraic Cunningham: <code>%%timeit -r 10</code> --> <code>100000 loops, best of 10:
4.89 µs per loop</code></p></li>
<li><p>zvone: <code>%%timeit -r 10</code> --> <code>100000 loops, best of 10: 3.88 µs per
loop</code></p></li>
<li><p>pneumatics: <code>%%timeit -r 10</code> --> <code>10000 loops, best of 10: 33.5 µs per loop</code></p></li>
</ol>
| 2 |
2016-09-24T13:48:16Z
| 39,677,443 |
<p>A simplified and more efficient version of your own code using sets to store values and <em>itertools.combinations</em> to pair up the workers:</p>
<pre><code>shifts = [['a', 'b', 'c', 'd'], ['b', 'c', 'e', 'f']]

from itertools import combinations
import collections

d = collections.defaultdict(set)
for sub in shifts:
    for a, b in combinations(sub, 2):
        d[a].add(b)
        d[b].add(a)

for k, v in sorted(d.items()):
    print(k, v)
</code></pre>
<p>Which would give you:</p>
<pre><code>('a', set(['c', 'b', 'd']))
('b', set(['a', 'c', 'e', 'd', 'f']))
('c', set(['a', 'b', 'e', 'd', 'f']))
('d', set(['a', 'c', 'b']))
('e', set(['c', 'b', 'f']))
('f', set(['c', 'b', 'e']))
</code></pre>
<p>On your small sample input:</p>
<pre><code>In [1]: import collections

In [2]: %%timeit
   ...: shifts = [['a', 'b', 'c', 'd'], ['b', 'c', 'e', 'f']]
   ...: workers = [i for s in shifts for i in s]
   ...: d = collections.defaultdict(list)
   ...: for w in workers:
   ...:     for s in shifts:
   ...:         for i in s:
   ...:             if i != w and w in s:
   ...:                 if w in d.keys():
   ...:                     if i not in d[w]:
   ...:                         d[w].append(i)
   ...:                 else:
   ...:                     d[w].append(i)
   ...:
10000 loops, best of 3: 21.6 µs per loop

In [3]: from itertools import combinations

In [4]: %%timeit
   ...: shifts = [['a', 'b', 'c', 'd'], ['b', 'c', 'e', 'f']]
   ...: d = collections.defaultdict(set)
   ...: for sub in shifts:
   ...:     for a, b in combinations(sub, 2):
   ...:         d[a].add(b)
   ...:         d[b].add(a)
   ...:
100000 loops, best of 3: 4.55 µs per loop
</code></pre>
| 2 |
2016-09-24T14:28:36Z
|
[
"python",
"algorithm",
"time-complexity"
] |
Procedure to map all relationships between elements in a list of lists
| 39,677,070 |
<p>I'm looking for an algorithm that can map all the relationships between all of the elements in sublists belonging to a list of length <code>n</code>. </p>
<p>More concretely, suppose <code>a</code>, <code>b</code>, <code>c</code>, <code>d</code>, <code>e</code> and <code>f</code> are the names of workers and that each sublist represents a 'shift' that occurred yesterday. I'd like to know, for each worker, who worked with who yesterday.</p>
<pre><code>shifts_yesterday = [[a, b, c, d], [b, c, e, f]]
</code></pre>
<p>Goal:</p>
<pre><code>a: b, c, d
b: a, c, d, e, f
c: a, b, d, e, f
d: a, b, c
e: b, c, f
f: b, c, e
</code></pre>
<p>Above, I can see that <code>a</code> worked with <code>b, c, d</code> yesterday; <code>b</code> worked with <code>a, c, d, e, f</code> yesterday, etc.</p>
<p>Time complexity is a concern here as I have a large list to process.
Though, intuitively, I suspect there is a pretty high floor on this one...</p>
<p>Note: I can obviously write a <strike>linear search</strike> straightforward approach with only <code>for</code> loops, but that is (a) not very clever (b) very slow.</p>
<h3>Edit:</h3>
<p>Here's (a messy) attempt:</p>
<pre><code>shifts = [['a', 'b', 'c', 'd'], ['b', 'c', 'e', 'f']]
workers = [i for s in shifts for i in s]

import collections
d = collections.defaultdict(list)

for w in workers:
    for s in shifts:
        for i in s:
            if i != w and w in s:
                if w in d.keys():
                    if i not in d[w]:
                        d[w].append(i)
                else:
                    d[w].append(i)
</code></pre>
<p>Test:</p>
<pre><code>for k, v in collections.OrderedDict(sorted(d.items())).items():
    print(k, v)
</code></pre>
<h3>Edit 2:</h3>
<p>Times:</p>
<ol>
<li><p>mine: <code>%%timeit -r 10</code> --> <code>10000 loops, best of 10: 19 µs per loop</code></p></li>
<li><p>Padraic Cunningham: <code>%%timeit -r 10</code> --> <code>100000 loops, best of 10:
4.89 µs per loop</code></p></li>
<li><p>zvone: <code>%%timeit -r 10</code> --> <code>100000 loops, best of 10: 3.88 µs per
loop</code></p></li>
<li><p>pneumatics: <code>%%timeit -r 10</code> --> <code>10000 loops, best of 10: 33.5 µs per loop</code></p></li>
</ol>
| 2 |
2016-09-24T13:48:16Z
| 39,677,461 |
<p>There should be more conditions specified. For instance, if the total "shifts_yesterday" array size is limited to 64, then you can use a long type to store a shift bitmask per worker. Then you can answer the question via a single operation:</p>
<pre><code>a = 00000001
b = 00000011
d = 00000001
f = 00000010
</code></pre>
<p>Does b work with d ? </p>
<pre><code>((b & d) != 0) : true
</code></pre>
<p>Does a work with f ? </p>
<pre><code>((a & f) != 0) : false
</code></pre>
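<p>A sketch of how those masks could be built in Python, one bit per shift (worker names and shifts taken from the question):</p>

```python
shifts = [['a', 'b', 'c', 'd'], ['b', 'c', 'e', 'f']]

# Bit i of a worker's mask is set if they worked shift i.
mask = {}
for bit, shift in enumerate(shifts):
    for worker in shift:
        mask[worker] = mask.get(worker, 0) | (1 << bit)

def worked_together(x, y):
    # non-zero AND means the two masks share at least one shift bit
    return (mask[x] & mask[y]) != 0

print(worked_together('b', 'd'))  # True:  both in shift 0
print(worked_together('a', 'f'))  # False: no shift in common
```

<p>Each pairwise query is then a single AND, at the cost of precomputing the masks.</p>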
| 1 |
2016-09-24T14:30:02Z
|
[
"python",
"algorithm",
"time-complexity"
] |
Procedure to map all relationships between elements in a list of lists
| 39,677,070 |
<p>I'm looking for an algorithm that can map all the relationships between all of the elements in sublists belonging to a list of length <code>n</code>. </p>
<p>More concretely, suppose <code>a</code>, <code>b</code>, <code>c</code>, <code>d</code>, <code>e</code> and <code>f</code> are the names of workers and that each sublist represents a 'shift' that occurred yesterday. I'd like to know, for each worker, who worked with who yesterday.</p>
<pre><code>shifts_yesterday = [[a, b, c, d], [b, c, e, f]]
</code></pre>
<p>Goal:</p>
<pre><code>a: b, c, d
b: a, c, d, e, f
c: a, b, d, e, f
d: a, b, c
e: b, c, f
f: b, c, e
</code></pre>
<p>Above, I can see that <code>a</code> worked with <code>b, c, d</code> yesterday; <code>b</code> worked with <code>a, c, d, e, f</code> yesterday, etc.</p>
<p>Time complexity is a concern here as I have a large list to process.
Though, intuitively, I suspect there is a pretty high floor on this one...</p>
<p>Note: I can obviously write a <strike>linear search</strike> straightforward approach with only <code>for</code> loops, but that is (a) not very clever (b) very slow.</p>
<h3>Edit:</h3>
<p>Here's (a messy) attempt:</p>
<pre><code>shifts = [['a', 'b', 'c', 'd'], ['b', 'c', 'e', 'f']]
workers = [i for s in shifts for i in s]

import collections
d = collections.defaultdict(list)

for w in workers:
    for s in shifts:
        for i in s:
            if i != w and w in s:
                if w in d.keys():
                    if i not in d[w]:
                        d[w].append(i)
                else:
                    d[w].append(i)
</code></pre>
<p>Test:</p>
<pre><code>for k, v in collections.OrderedDict(sorted(d.items())).items():
    print(k, v)
</code></pre>
<h3>Edit 2:</h3>
<p>Times:</p>
<ol>
<li><p>mine: <code>%%timeit -r 10</code> --> <code>10000 loops, best of 10: 19 µs per loop</code></p></li>
<li><p>Padraic Cunningham: <code>%%timeit -r 10</code> --> <code>100000 loops, best of 10:
4.89 µs per loop</code></p></li>
<li><p>zvone: <code>%%timeit -r 10</code> --> <code>100000 loops, best of 10: 3.88 µs per
loop</code></p></li>
<li><p>pneumatics: <code>%%timeit -r 10</code> --> <code>10000 loops, best of 10: 33.5 µs per loop</code></p></li>
</ol>
| 2 |
2016-09-24T13:48:16Z
| 39,677,544 |
<pre><code>from collections import defaultdict

result = defaultdict(set)
for shift in shifts:
    for worker in shift:
        result[worker].update(shift)

# now, result[a] contains: a, b, c, d - so remove the a
for k, v in result.iteritems():
    v.remove(k)
</code></pre>
| 3 |
2016-09-24T14:37:57Z
|
[
"python",
"algorithm",
"time-complexity"
] |
Procedure to map all relationships between elements in a list of lists
| 39,677,070 |
<p>I'm looking for an algorithm that can map all the relationships between all of the elements in sublists belonging to a list of length <code>n</code>. </p>
<p>More concretely, suppose <code>a</code>, <code>b</code>, <code>c</code>, <code>d</code>, <code>e</code> and <code>f</code> are the names of workers and that each sublist represents a 'shift' that occurred yesterday. I'd like to know, for each worker, who worked with who yesterday.</p>
<pre><code>shifts_yesterday = [[a, b, c, d], [b, c, e, f]]
</code></pre>
<p>Goal:</p>
<pre><code>a: b, c, d
b: a, c, d, e, f
c: a, b, d, e, f
d: a, b, c
e: b, c, f
f: b, c, e
</code></pre>
<p>Above, I can see that <code>a</code> worked with <code>b, c, d</code> yesterday; <code>b</code> worked with <code>a, c, d, e, f</code> yesterday, etc.</p>
<p>Time complexity is a concern here as I have a large list to process.
Though, intuitively, I suspect there is a pretty high floor on this one...</p>
<p>Note: I can obviously write a <strike>linear search</strike> straightforward approach with only <code>for</code> loops, but that is (a) not very clever (b) very slow.</p>
<h3>Edit:</h3>
<p>Here's (a messy) attempt:</p>
<pre><code>shifts = [['a', 'b', 'c', 'd'], ['b', 'c', 'e', 'f']]
workers = [i for s in shifts for i in s]

import collections
d = collections.defaultdict(list)

for w in workers:
    for s in shifts:
        for i in s:
            if i != w and w in s:
                if w in d.keys():
                    if i not in d[w]:
                        d[w].append(i)
                else:
                    d[w].append(i)
</code></pre>
<p>Test:</p>
<pre><code>for k, v in collections.OrderedDict(sorted(d.items())).items():
    print(k, v)
</code></pre>
<h3>Edit 2:</h3>
<p>Times:</p>
<ol>
<li><p>mine: <code>%%timeit -r 10</code> --> <code>10000 loops, best of 10: 19 µs per loop</code></p></li>
<li><p>Padraic Cunningham: <code>%%timeit -r 10</code> --> <code>100000 loops, best of 10:
4.89 µs per loop</code></p></li>
<li><p>zvone: <code>%%timeit -r 10</code> --> <code>100000 loops, best of 10: 3.88 µs per
loop</code></p></li>
<li><p>pneumatics: <code>%%timeit -r 10</code> --> <code>10000 loops, best of 10: 33.5 µs per loop</code></p></li>
</ol>
| 2 |
2016-09-24T13:48:16Z
| 39,677,680 |
<p>I think you're looking for a set membership relationship. Let's call it <code>coworkers</code>:</p>
<pre><code>shifts_yesterday = [['a', 'b', 'c', 'd'], ['b', 'c', 'e', 'f']]

def coworkers(worker, shifts):
    coworkers = set()
    coworkers.update( *[shift for shift in shifts if worker in shift] )
    return coworkers
</code></pre>
<p>For each worker, you create a set of all the shifts that include the worker.</p>
<pre><code>everybody = set()
everybody.update( *shifts_yesterday )

for worker in everybody:
    print("{}: {}".format(worker, coworkers(worker, shifts_yesterday)))
</code></pre>
<p>The output is</p>
<pre><code>a: set(['a', 'c', 'b', 'd'])
c: set(['a', 'c', 'b', 'e', 'd', 'f'])
b: set(['a', 'c', 'b', 'e', 'd', 'f'])
e: set(['c', 'b', 'e', 'f'])
d: set(['a', 'c', 'b', 'd'])
f: set(['c', 'b', 'e', 'f'])
</code></pre>
| 1 |
2016-09-24T14:52:06Z
|
[
"python",
"algorithm",
"time-complexity"
] |
Python: Print one or multiple files(copies) on user input
| 39,677,132 |
<p>I'm new to Python.
I'm trying to create a program that prints a set of docs I usually print by hand every week; however, I'm running into several problems:</p>
<p>Here is the code:</p>
<pre><code>import os
file_list = os.listdir("C:/Python27/Programs/PrintNgo/Files2print")
print ("List of available documents to print" '\n')
enum_list = ('\n'.join('{}: {}'.format(*k) for k in enumerate(file_list)))
print(enum_list)
user_choice = input('\n' "Documents # you want to print: ")
copies = input("How many copies would you like from each: ")
#not implemented
current_choice = file_list[user_choice]
current_file = os.startfile("C:/Python27/Programs/PrintNgo/Files2print/"+current_choice, "print")
</code></pre>
<p>Here is the output:</p>
<pre><code>List of available documents to print
0: doc0.docx
1: doc1.docx
2: doc2.docx
3: doc3.docx
4: doc4.docx
5: doc5.docx
Documents # you want to print:
</code></pre>
<p>I managed to input numbers from 0-5 and print the desired document; however, entering two values like 2,3 does not work and throws an error. How can I print more than one at a time?</p>
<p>If I want to make copies of each doc, let's say I chose 2,3, should I loop to repeat each action as many times as the number of copies I want?</p>
<p><a href="http://stackoverflow.com/questions/16132293/using-integers-with-dictionary-to-create-text-menu-switch-case-alternative">I wonder if my style is fine, however that kind of menu looks nice as well and I can try it eventually</a></p>
| 1 |
2016-09-24T13:55:54Z
| 39,677,350 |
<p>You should avoid the <code>input</code> function in Python 2. It can be convenient, but it is a security risk. Instead you should use the <code>raw_input</code> function. In Python 3, the function named <code>input</code> is equivalent to Python 2's <code>raw_input</code> and the functionality of the old Python 2 <code>input</code> function has been dropped.</p>
<p>The code below shows how to handle multiple document numbers given in a comma-separated list. The code also handles a single document number. If the user supplies any non-integer values the program will crash with a <code>ValueError</code>. However, blanks spaces are permitted in the input. </p>
<pre><code>from __future__ import print_function

user_choice = raw_input("\nDocuments # you want to print: ")
user_choice = [int(u) for u in user_choice.split(',')]
copies = int(raw_input("How many copies would you like from each: "))

for i in range(copies):
    print('Copy', i + 1)
    for j in user_choice:
        print('Printing document #', j)
</code></pre>
<p><strong>Demo</strong></p>
<pre><code>Documents # you want to print: 3, 2,7
How many copies would you like from each: 2
Copy 1
Printing document # 3
Printing document # 2
Printing document # 7
Copy 2
Printing document # 3
Printing document # 2
Printing document # 7
</code></pre>
<p>The heart of this code is the <code>str.split</code> method. </p>
<pre><code>user_choice.split(',')
</code></pre>
<p>gets the string in <code>user_choice</code> and splits it into a list of strings, splitting wherever it finds a comma, discarding the commas.</p>
<pre><code>[int(u) for u in user_choice.split(',')]
</code></pre>
<p>gets each of those resulting strings and converts them to integers, storing the results in a list.</p>
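<p>As a side note, if you'd rather report a friendlier error than crash on bad input, a small sketch (the helper name <code>parse_choices</code> is just for illustration, not part of the code above) can wrap the conversion in <code>try</code>/<code>except</code>:</p>

```python
def parse_choices(raw):
    """Parse a comma-separated string of document numbers.

    Returns a list of ints; raises ValueError with a clearer
    message if any entry is not an integer.
    """
    choices = []
    for part in raw.split(','):
        part = part.strip()
        try:
            choices.append(int(part))
        except ValueError:
            raise ValueError("not a document number: %r" % part)
    return choices

print(parse_choices("3, 2,7"))  # [3, 2, 7]
```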
| 1 |
2016-09-24T14:19:01Z
|
[
"python",
"python-2.7"
] |
Quickly sampling large number of rows from large dataframes in python
| 39,677,183 |
<p>I have a very large dataframe (about 1.1M rows) and I am trying to sample it.</p>
<p>I have a list of indexes (about 70,000 indexes) that I want to select from the entire dataframe.</p>
<p>This is what I've tried so far, but all these methods are taking way too much time:</p>
<p>Method 1 - Using pandas :</p>
<pre><code>sample = pandas.read_csv("data.csv", index_col = 0).reset_index()
sample = sample[sample['Id'].isin(sample_index_array)]
</code></pre>
<p>Method 2 :</p>
<p>I tried to write all the sampled lines to another csv.</p>
<pre><code>f = open("data.csv",'r')
out = open("sampled_date.csv", 'w')
out.write(f.readline())
while 1:
total += 1
line = f.readline().strip()
if line =='':
break
arr = line.split(",")
if (int(arr[0]) in sample_index_array):
out.write(",".join(e for e in (line)))
</code></pre>
<p>Can anyone please suggest a better method, or how I can modify this to make it faster?</p>
<p>Thanks</p>
| 0 |
2016-09-24T14:00:44Z
| 39,677,807 |
<p>It seems you may benefit from simple <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html" rel="nofollow">selection methods</a>. We don't have your data, so here is an example of selecting a subset using a pandas <code>Index</code> object and the <code>.iloc</code> selection method.</p>
<pre><code>import pandas as pd
import numpy as np
# Large Sample DataFrame
df = pd.DataFrame(np.random.randint(0,100,size=(1000000, 4)), columns=list('ABCD'))
df.info()
# Results
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1000000 entries, 0 to 999999
Data columns (total 4 columns):
A 1000000 non-null int32
B 1000000 non-null int32
C 1000000 non-null int32
D 1000000 non-null int32
dtypes: int32(4)
memory usage: 15.3 MB
# Convert a sample list of indices to an `Index` object
indices = [1, 2, 3, 10, 20, 30, 67, 78, 900, 2176, 78776]
idxs = pd.Index(indices)
subset = df.iloc[idxs, :]
subset
# Output
A B C D
1 9 33 62 17
2 44 73 85 11
3 56 83 85 79
10 5 72 3 82
20 72 22 61 2
30 75 15 51 11
67 82 12 18 5
78 95 9 86 81
900 23 51 3 5
2176 30 89 67 26
78776 54 88 56 17
</code></pre>
<p>In your case, try this:</p>
<pre><code>df = pd.read_csv("data.csv", index_col = 0).reset_index()
idx = pd.Index(sample_index_array) # assuming a list
sample = df.iloc[idx, :]
</code></pre>
<p>The <a href="https://stackoverflow.com/questions/28757389/loc-vs-iloc-vs-ix-vs-at-vs-iat"><code>.iat</code> and <code>.at</code> methods</a> are even faster, but require scalar indices. </p>
| 1 |
2016-09-24T15:06:56Z
|
[
"python",
"pandas",
"dataframe",
"bigdata",
"sampling"
] |
Multivariate linear regression in pymc3
| 39,677,240 |
<p>I've recently started learning <code>pymc3</code> after exclusively using <code>emcee</code> for ages and I'm running into some conceptual problems. </p>
<p>I'm practising with Chapter 7 of <a href="https://arxiv.org/abs/1008.4686" rel="nofollow">Hogg's Fitting a model to data</a>. This involves an MCMC fit to a straight line with arbitrary 2d uncertainties. I've accomplished this quite easily in <code>emcee</code>, but <code>pymc</code> is giving me some problems. </p>
<p>It essentially boils down to using a multivariate gaussian likelihood.</p>
<p>Here is what I have so far. </p>
<pre><code>from pymc3 import *
import numpy as np
import matplotlib.pyplot as plt
size = 200
true_intercept = 1
true_slope = 2
true_x = np.linspace(0, 1, size)
# y = a + b*x
true_regression_line = true_intercept + true_slope * true_x
# add noise
# here the errors are all the same but the real world they are usually not!
std_y, std_x = 0.1, 0.1
y = true_regression_line + np.random.normal(scale=std_y, size=size)
x = true_x + np.random.normal(scale=std_x, size=size)
y_err = np.ones_like(y) * std_y
x_err = np.ones_like(x) * std_x
data = dict(x=x, y=y)
with Model() as model: # model specifications in PyMC3 are wrapped in a with-statement
# Define priors
intercept = Normal('Intercept', 0, sd=20)
gradient = Normal('gradient', 0, sd=20)
# Define likelihood
likelihood = MvNormal('y', mu=intercept + gradient * x,
tau=1./(np.stack((y_err, x_err))**2.), observed=y)
# start the mcmc!
start = find_MAP() # Find starting value by optimization
step = NUTS(scaling=start) # Instantiate MCMC sampling algorithm
trace = sample(2000, step, start=start, progressbar=False) # draw 2000 posterior samples using NUTS sampling
</code></pre>
<p>This raises the error: <code>LinAlgError: Last 2 dimensions of the array must be square</code></p>
<p>So I'm trying to pass <code>MvNormal</code> the measured values for x and y (<code>mu</code>s) and their associated measurement uncertainties (<code>y_err</code> and <code>x_err</code>). But it appears that it does not like the 2d <code>tau</code> argument.</p>
<p>Any ideas? This must be possible</p>
<p>Thanks</p>
| 0 |
2016-09-24T14:07:56Z
| 39,685,419 |
<p>You may try adapting the following model. It is a "regular" linear regression, but <code>x</code> and <code>y</code> have been replaced by Gaussian distributions. Here I am assuming not only the measured values of the input and output variables but also a reliable estimation of their errors (for example as provided by a measurement device). If you do not trust those error values you may instead try to estimate them from the data.</p>
<pre><code>with pm.Model() as model:
intercept = pm.Normal('intercept', 0, sd=20)
gradient = pm.Normal('gradient', 0, sd=20)
epsilon = pm.HalfCauchy('epsilon', 5)
obs_x = pm.Normal('obs_x', mu=x, sd=x_err, shape=len(x))
obs_y = pm.Normal('obs_y', mu=y, sd=y_err, shape=len(y))
likelihood = pm.Normal('y', mu=intercept + gradient * obs_x,
sd=epsilon, observed=obs_y)
trace = pm.sample(2000)
</code></pre>
<p>If you are estimating the errors from the data, it could be reasonable to assume they are correlated, and hence, instead of using two separate Gaussians, you can use a multivariate Gaussian. In such a case you will end up with a model like the following:</p>
<pre><code>df_data = pd.DataFrame(data)
cov = df_data.cov()
with pm.Model() as model:
intercept = pm.Normal('intercept', 0, sd=20)
gradient = pm.Normal('gradient', 0, sd=20)
epsilon = pm.HalfCauchy('epsilon', 5)
obs_xy = pm.MvNormal('obs_xy', mu=df_data, tau=pm.matrix_inverse(cov), shape=df_data.shape)
yl = pm.Normal('yl', mu=intercept + gradient * obs_xy[:,0],
sd=epsilon, observed=obs_xy[:,1])
mu, sds, elbo = pm.variational.advi(n=20000)
step = pm.NUTS(scaling=model.dict_to_array(sds), is_cov=True)
trace = pm.sample(1000, step=step, start=mu)
</code></pre>
<p>Notice that in the previous model the covariance matrix was computed from the data. If you are going to do that then I think it is better to go with the first model, but if instead you are going to estimate the covariance matrix then the second model could be a sensible approach.</p>
<p>For the second model I use ADVI to initialize it. ADVI can be a good way to initialize models; it often works much better than <code>find_MAP()</code>.</p>
<p>You may also want to check this <a href="https://github.com/davidwhogg/DataAnalysisRecipes" rel="nofollow">repository</a> by David Hogg. And the book <a href="http://xcelab.net/rm/statistical-rethinking/" rel="nofollow">Statistical Rethinking</a> where McElreath discuss the problem of doing linear regression including the errors in the input and output variables.</p>
| 1 |
2016-09-25T09:38:22Z
|
[
"python",
"statistics",
"mcmc",
"pymc3"
] |
python Popen shell=True behavior
| 39,677,382 |
<p>Why doesn't this work: <code>subprocess.Popen(["ls -l | grep myfile"], shell=False)</code>
But this line works: <code>subprocess.Popen(["ls -l | grep myfile"], shell=True)</code>
I understand that shell=True creates a subshell internally and executes the command there, but I don't understand how this affects Popen's behavior.</p>
| 0 |
2016-09-24T14:22:18Z
| 39,677,446 |
<p><code>shell=True</code> is used to pass a single <em>string</em> as a command line to a shell. While it's technically possible to pass a list, it's probably never what you want.</p>
<pre><code>subprocess.Popen("ls -l | grep myfile", shell=True)
</code></pre>
| 0 |
2016-09-24T14:28:48Z
|
[
"python",
"python-3.x",
"subprocess"
] |
python Popen shell=True behavior
| 39,677,382 |
<p>Why doesn't this work: <code>subprocess.Popen(["ls -l | grep myfile"], shell=False)</code>
But this line works: <code>subprocess.Popen(["ls -l | grep myfile"], shell=True)</code>
I understand that shell=True creates a subshell internally and executes the command there, but I don't understand how this affects Popen's behavior.</p>
| 0 |
2016-09-24T14:22:18Z
| 39,677,485 |
<p>If you use <code>shell=False</code>, you must pass the command and its arguments as a list, like this:</p>
<pre><code>subprocess.Popen(["ls","-l"],shell=False)
</code></pre>
| -1 |
2016-09-24T14:32:08Z
|
[
"python",
"python-3.x",
"subprocess"
] |
python Popen shell=True behavior
| 39,677,382 |
<p>Why doesn't this work: <code>subprocess.Popen(["ls -l | grep myfile"], shell=False)</code>
But this line works: <code>subprocess.Popen(["ls -l | grep myfile"], shell=True)</code>
I understand that shell=True creates a subshell internally and executes the command there, but I don't understand how this affects Popen's behavior.</p>
| 0 |
2016-09-24T14:22:18Z
| 39,677,565 |
<p>Popen() does the subprocess management with the code inside the subprocess module. It doesn't know about piping with <code>|</code>, and it doesn't know whether the string you pass is a program or an argument; in your example with <code>ls</code>, it will assume that everything except <code>ls</code> itself is an argument for the program. It will try to execute the first item in the list and pass all other items as arguments.</p>
<p>When you use <code>shell=True</code>, you may think of it (e.g. on UNIX) as running <code>/bin/sh -c</code> with arglist you provided as a string. So</p>
<pre><code>Popen('ls -l | grep myfile', shell=True)
</code></pre>
<p>is something close to</p>
<pre><code>Popen(['/bin/sh', '-c', 'ls -l | grep myfile'])
</code></pre>
<p>In both cases, argument handling is actually done by your shell.</p>
<p>For piping and <code>shell=False</code>, you should use <code>subprocess.PIPE</code> and stdout/stdin/stderr redirections with the tools provided in subprocess module. </p>
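<p>For reference, here is a minimal sketch of that <code>shell=False</code> piping approach, assuming a Unix-like system where <code>ls</code> and <code>grep</code> are available:</p>

```python
import subprocess

# Equivalent of `ls -l | grep myfile` without a shell:
# connect p1's stdout to p2's stdin explicitly.
p1 = subprocess.Popen(["ls", "-l"], stdout=subprocess.PIPE)
p2 = subprocess.Popen(["grep", "myfile"], stdin=p1.stdout,
                      stdout=subprocess.PIPE)
p1.stdout.close()  # let p1 receive SIGPIPE if p2 exits first
output = p2.communicate()[0]  # bytes: the matching lines, if any
```

<p>Closing <code>p1.stdout</code> in the parent lets the first process receive SIGPIPE if the second one exits early, mirroring normal shell pipeline behaviour.</p>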
| 2 |
2016-09-24T14:40:58Z
|
[
"python",
"python-3.x",
"subprocess"
] |
Does not openpyxl.styles have a Style anymore in v2.4.0?
| 39,677,426 |
<p>I just upgraded <code>openpyxl</code> to version <code>2.4.0</code> and this code that worked fine in previous versions, doesn't work anymore:</p>
<pre><code>style = openpyxl.styles.Style(**style_kwargs)
</code></pre>
<p>Openpyxl says that it doesn't have a <code>Style</code> attribute:</p>
<pre><code>AttributeError: 'module' object has no attribute 'Style'
</code></pre>
<p>How does it work now?</p>
| -1 |
2016-09-24T14:26:12Z
| 39,680,121 |
<p>The <code>Style</code> class was deprecated several versions ago and has been removed. In 2.4 you set the individual style attributes (e.g. <code>font</code>, <code>fill</code>, <code>border</code>) directly on cells, or use <code>NamedStyle</code> for reusable styles.</p>
| 1 |
2016-09-24T19:24:47Z
|
[
"python",
"openpyxl"
] |
building datetime from 3 integer/float columns in pandas
| 39,677,432 |
<p>After loading a DataFrame with Panda (from a csv) with this structure:</p>
<pre><code> startmonth startday startyear endmonth endday endyear
caseid
1945121601 12.0 16.0 1945 5.0 27.0 1947.0
1946031101 3.0 11.0 1946 10.0 9.0 1993.0
1946110101 11.0 1.0 1946 2.0 4.0 1947.0
</code></pre>
<p>I am thinking about how to efficiently use the first 3 & last 3 columns to generate 2 datetime columns, say 'startdate' and 'enddate'. Since there are missing values that need to be dealt with, the parse_dates & date_parser arguments in read_csv seem a bit unwieldy, so I wrote the function below.</p>
<p>First, I fill the NaN values so as to cast month and day from float to integer, and then string them together for parsing.</p>
<pre><code>def dateparser(y=df.startyear,m=df.startmonth,d=df.startday):
m = m.fillna(1).astype(int)
d = d.fillna(1).astype(int)
x = str(y) + " " + str(m) + " " + str(d)
return pd.datetime.strptime(x, '%Y %m %d')
</code></pre>
<p>The resulting error message is a bit confusing, as the string format should be exactly what strptime expects.</p>
<pre><code>In [338]: dateparser()
Traceback (most recent call last):
File "<ipython-input-338-917257f547ca>", line 1, in <module>
dateparser()
File "<ipython-input-337-41aa89124ae6>", line 5, in dateparser
return pd.datetime.strptime(x, '%Y %m %d')
File "/Users/Username/anaconda/lib/python3.5/_strptime.py", line 510, in _strptime_datetime
tt, fraction = _strptime(data_string, format)
File "/Users/Username/anaconda/lib/python3.5/_strptime.py", line 343, in _strptime
(data_string, format))
ValueError: time data 'caseid\n1945121601 1945\n1946031101
1946\n1946110101 1946\n1947022401 1947\n1947053101
1947\n1947111001 1947\n1947120501 1947\n1947120502
1947\n1947120503 1947\n1947120504 1947\n1947120505
1947\n1947120506 1947\n1947120507 1947\n1947122001
1947\n1948032501 1948\n1948032502 1948\n1948070101
6\n2005100601 10\n
Name: startmonth, dtype: int64 caseid\n1945121601 16\n1946031101
6\nName: startday, dtype: int64' does not match format '%Y %m %d'`
</code></pre>
<p>I also tried another parsing package that turns most English-language datetime strings into datetime objects without issue:</p>
<pre><code>from dateutil.parser import parse
def dateparser():
(same function as above)
return parse(x)
</code></pre>
<p>And it also results in an error (ValueError: Unknown string format)...</p>
<p>Any thoughts on how to improve the function are much appreciated. It also seems a bit strange to me that most package functions only convert strings to datetime, and one needs to turn integers/floats into strings, even though it shouldn't be that hard to directly convert numerical data into datetime formats... did I miss some obvious solution?</p>
| 1 |
2016-09-24T14:26:54Z
| 39,678,015 |
<p>Although I am not absolutely sure, the issue seems to be that I am trying to feed the parser a pandas Series while it expects a plain string.</p>
<p>In this case, pandas' own to_datetime function can do the work.</p>
<pre><code>def dateparser(y=t4.startyear,m=t4.startmonth,d=t4.startday):
y = y.astype(str)
m = m.fillna(1).astype(int).astype(str)
d = d.fillna(1).astype(int).astype(str)
x = y +' '+ m +' '+ d
return pd.to_datetime(x)
</code></pre>
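<p>As a side note, assuming pandas 0.18 or later, <code>pd.to_datetime</code> can also assemble datetimes directly from a DataFrame whose columns are named <code>year</code>, <code>month</code> and <code>day</code>, avoiding the string round-trip entirely. A sketch with made-up values mirroring the question's columns:</p>

```python
import pandas as pd

# Hypothetical frame mirroring the question's columns (values made up)
df = pd.DataFrame({"startyear": [1945, 1946],
                   "startmonth": [12.0, None],
                   "startday": [16.0, None]})

# Rename to the column names to_datetime expects, and apply the same
# NaN handling as in the answer above.
parts = (df[["startyear", "startmonth", "startday"]]
         .rename(columns={"startyear": "year",
                          "startmonth": "month",
                          "startday": "day"})
         .fillna(1)
         .astype(int))

dates = pd.to_datetime(parts)
print(dates.tolist())
```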
| 0 |
2016-09-24T15:30:09Z
|
[
"python",
"string",
"datetime",
"pandas"
] |
How to keep track of numbered variables made from function call in while loop. How many are there?
| 39,677,445 |
<p>So I'm writing a small project using Python,
but now I'm in trouble.</p>
<p>I made some code like this:</p>
<pre><code>START_BUTTONS = ("button1", "button2")
markup = types.ReplyKeyboardMarkup()
lengthof = len(START_BUTTONS)
countn = 0
while (countn < lengthof):
exec("itembtn" + str(countn) + " = types.KeyboardButton(START_BUTTONS[" + str(countn) + "])")
countn = countn + 1
</code></pre>
<p>So, this will produce something like this (until the tuple ends):</p>
<pre><code>itembtn0 = types.KeyboardButton(START_BUTTONS[0])
itembtn1 = types.KeyboardButton(START_BUTTONS[1])
</code></pre>
<p>and...</p>
<p>So those variables are usable later.</p>
<p>But, my problem is here. I want to check how many of those variables are there (itembtn0 itembtn1 itembtn2 itembtn3...) and put them like this:</p>
<pre><code>markup.row(itembtn0, itembtn1, itembtn2)
</code></pre>
<p>so, if there were 5 of those, it would be something like this:
markup.row(itembtn0, itembtn1, itembtn2, itembtn3, itembtn4)</p>
<p>Actually, I have no idea what I should write.</p>
<p>Thanks for the help, and sorry for my bad English.</p>
| 0 |
2016-09-24T14:28:45Z
| 39,677,511 |
<p>You are trying to create numbered variables, which can in all cases be replaced by an array. Try something simple instead:</p>
<pre><code>START_BUTTONS = ("button1", "button2")
markup = types.ReplyKeyboardMarkup()
itembtn = []
for btn in START_BUTTONS:
itembtn.append(types.KeyboardButton(btn))
</code></pre>
<p>Access it with </p>
<pre><code>itembtn[0]
itembtn[1]
etc.
</code></pre>
<p>And you can know how many there are: </p>
<pre><code>len(itembtn)
</code></pre>
<p>I am not sure about your markup function, but you can pass the whole array as parameters like this:</p>
<pre><code>markup.row(*itembtn)
</code></pre>
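<p>The <code>*</code> unpacks the list into separate positional arguments, so <code>markup.row(*itembtn)</code> is equivalent to listing every button by hand. A stand-alone sketch (with a dummy <code>row</code> function standing in for the real markup method):</p>

```python
def row(*buttons):
    # *buttons collects each positional argument into a tuple
    return len(buttons)

itembtn = ["itembtn0", "itembtn1", "itembtn2"]
n = row(*itembtn)  # same as row("itembtn0", "itembtn1", "itembtn2")
print(n)  # 3
```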
| 1 |
2016-09-24T14:34:21Z
|
[
"python",
"python-2.7"
] |
Filling holes in image with OpenCV or Skimage
| 39,677,462 |
<p>I'm trying to fill holes in a chessboard image for a stereo application. The chessboard is at micro scale, thus it is complicated to avoid dust... as you can see:</p>
<p><a href="http://i.stack.imgur.com/mCOFl.png" rel="nofollow"><img src="http://i.stack.imgur.com/mCOFl.png" alt="enter image description here"></a></p>
<p>Thus, corner detection is impossible. I tried SciPy's binary_fill_holes and similar approaches, but I get a completely black image, which I don't understand.</p>
| 3 |
2016-09-24T14:30:13Z
| 39,677,637 |
<p>You can use the following function in order to remove the holes, by replacing each pixel's color with the most common color among its neighbouring pixels: </p>
<pre><code>import numpy as np
import cv2
def remove_noise(gray, num):
Y, X = gray.shape
nearest_neigbours = [[
np.argmax(
np.bincount(
gray[max(i - num, 0):min(i + num, Y), max(j - num, 0):min(j + num, X)].ravel()))
for j in range(X)] for i in range(Y)]
result = np.array(nearest_neigbours, dtype=np.uint8)
cv2.imwrite('result2.jpg', result)
return result
</code></pre>
<p>Demo:</p>
<pre><code>img = cv2.imread('mCOFl.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
remove_noise(gray, 10)
</code></pre>
<p>Input image:</p>
<p><a href="http://i.stack.imgur.com/AAypM.png" rel="nofollow"><img src="http://i.stack.imgur.com/AAypM.png" alt="enter image description here"></a></p>
<p>Output:</p>
<p><a href="http://i.stack.imgur.com/PYttM.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/PYttM.jpg" alt="enter image description here"></a></p>
<p>Note: Since this function replaces the color of corner pixels too, you can use the <code>cv2.goodFeaturesToTrack</code> function to find the corners and exclude those pixels from the denoising </p>
<pre><code>corners = cv2.goodFeaturesToTrack(gray, 100, 0.01, 30)
corners = np.squeeze(np.int0(corners))
</code></pre>
| 2 |
2016-09-24T14:46:54Z
|
[
"python",
"opencv",
"numpy",
"scipy",
"skimage"
] |
Filling holes in image with OpenCV or Skimage
| 39,677,462 |
<p>I'm trying to fill holes in a chessboard image for a stereo application. The chessboard is at micro scale, thus it is complicated to avoid dust... as you can see:</p>
<p><a href="http://i.stack.imgur.com/mCOFl.png" rel="nofollow"><img src="http://i.stack.imgur.com/mCOFl.png" alt="enter image description here"></a></p>
<p>Thus, corner detection is impossible. I tried SciPy's binary_fill_holes and similar approaches, but I get a completely black image, which I don't understand.</p>
| 3 |
2016-09-24T14:30:13Z
| 39,687,059 |
<p>You can use morphology: dilate, and then erode with the same kernel size (dilation followed by erosion is known as morphological closing). </p>
| 1 |
2016-09-25T12:53:07Z
|
[
"python",
"opencv",
"numpy",
"scipy",
"skimage"
] |
Python + Sqlite: Add a new column and fill its value at interval of 2
| 39,677,528 |
<p>I want to add a new column to a <code>db</code> file and fill its values in steps of 2. Here is the code I wrote...</p>
<pre><code>import sqlite3
WorkingFile = "C:\\test.db"
con = sqlite3.connect(WorkingFile)
cur = con.cursor()
cur.execute("ALTER table MyTable add column 'WorkingID' 'long'") # Add a column "WorkingID"
rows = cur.fetchall()
iCount = 0
for row in rows:
iCount = iCount + 2
print iCount
cur.execute("UPDATE MyTable SET WorkingID = ?" , (iCount,)) # Here I have question: How to write the WHERE command?
con.commit()
cur.close()
</code></pre>
<p>The code above gives me a new Column with same values. Something like this:</p>
<pre><code>WorkingID
10
10
10
...
</code></pre>
<p>But I want a result like this:</p>
<pre><code>WorkingID
2
4
6
8
10
...
</code></pre>
<p>My question about the code is that I don't know how to write the <code>WHERE</code> clause for the <code>UPDATE</code>. Would you please help me out? Thanks.</p>
| 0 |
2016-09-24T14:36:50Z
| 39,677,628 |
<p>The SQL engine cannot tell the difference between the rows.</p>
<p>You should add an <a href="http://sqlite.org/autoinc.html" rel="nofollow">autoincrement</a> or ROWID column to serve as an identifier.</p>
<p>I do not know the structure of your table as you don't create it here but rather alter it.</p>
<p>If you do happen to create it, create a new <code>INTEGER PRIMARY KEY</code> column which you can then use <code>WHERE</code> on like so:</p>
<pre><code>cur.execute(
"CREATE TABLE MyTable"
"(RowID INTEGER PRIMARY KEY,"
"WorkingID INTEGER)")
</code></pre>
| 1 |
2016-09-24T14:46:16Z
|
[
"python",
"sqlite"
] |
Python + Sqlite: Add a new column and fill its value at interval of 2
| 39,677,528 |
<p>I want to add a new column to a <code>db</code> file and fill its values in steps of 2. Here is the code I wrote...</p>
<pre><code>import sqlite3
WorkingFile = "C:\\test.db"
con = sqlite3.connect(WorkingFile)
cur = con.cursor()
cur.execute("ALTER table MyTable add column 'WorkingID' 'long'") # Add a column "WorkingID"
rows = cur.fetchall()
iCount = 0
for row in rows:
iCount = iCount + 2
print iCount
cur.execute("UPDATE MyTable SET WorkingID = ?" , (iCount,)) # Here I have question: How to write the WHERE command?
con.commit()
cur.close()
</code></pre>
<p>The code above gives me a new Column with same values. Something like this:</p>
<pre><code>WorkingID
10
10
10
...
</code></pre>
<p>But I want a result like this:</p>
<pre><code>WorkingID
2
4
6
8
10
...
</code></pre>
<p>My question about the code is that I don't know how to write the <code>WHERE</code> clause for the <code>UPDATE</code>. Would you please help me out? Thanks.</p>
| 0 |
2016-09-24T14:36:50Z
| 39,677,912 |
<p>If your table has some unique ID, get a sorted list of them (their actual values do not matter, only their order):</p>
<pre><code>ids = [row[0] for row in cur.execute("SELECT id FROM MyTable ORDER BY id")]
iCount = 0
for id in ids:
iCount += 2
cur.execute("UPDATE MyTable SET WorkingID = ? WHERE id = ?",
[iCount, id])
</code></pre>
<p>If you don't have such a column, use the <code>rowid</code> instead.</p>
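<p>To make the <code>rowid</code> variant concrete, here is a self-contained sketch against a throwaway in-memory table (the table and column names just mirror the question):</p>

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE MyTable (name TEXT)")
cur.executemany("INSERT INTO MyTable (name) VALUES (?)",
                [("a",), ("b",), ("c",)])
cur.execute("ALTER TABLE MyTable ADD COLUMN WorkingID INTEGER")

# rowid gives a stable per-row handle for the WHERE clause
rowids = [r[0] for r in cur.execute("SELECT rowid FROM MyTable ORDER BY rowid")]
for count, rid in enumerate(rowids, start=1):
    cur.execute("UPDATE MyTable SET WorkingID = ? WHERE rowid = ?",
                (count * 2, rid))
con.commit()

values = [r[0] for r in cur.execute("SELECT WorkingID FROM MyTable ORDER BY rowid")]
print(values)  # [2, 4, 6]
```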
| 0 |
2016-09-24T15:17:01Z
|
[
"python",
"sqlite"
] |
Python + Sqlite: Add a new column and fill its value at interval of 2
| 39,677,528 |
<p>I want to add a new column to a <code>db</code> file and fill its values in steps of 2. Here is the code I wrote...</p>
<pre><code>import sqlite3
WorkingFile = "C:\\test.db"
con = sqlite3.connect(WorkingFile)
cur = con.cursor()
cur.execute("ALTER table MyTable add column 'WorkingID' 'long'") # Add a column "WorkingID"
rows = cur.fetchall()
iCount = 0
for row in rows:
iCount = iCount + 2
print iCount
cur.execute("UPDATE MyTable SET WorkingID = ?" , (iCount,)) # Here I have question: How to write the WHERE command?
con.commit()
cur.close()
</code></pre>
<p>The code above gives me a new Column with same values. Something like this:</p>
<pre><code>WorkingID
10
10
10
...
</code></pre>
<p>But I want a result like this:</p>
<pre><code>WorkingID
2
4
6
8
10
...
</code></pre>
<p>My question about the code is that I don't know how to write the <code>WHERE</code> clause for the <code>UPDATE</code>. Would you please help me out? Thanks.</p>
| 0 |
2016-09-24T14:36:50Z
| 39,678,389 |
<p>If you can't add an ID to your table, there is a rather hacky workaround.
Before your loop, run this query:</p>
<pre><code>UPDATE MyTable SET WorkingID = 2 LIMIT 1
</code></pre>
<p>And after</p>
<pre><code>iCount = 2
for row in rows:
iCount = iCount + 2
print iCount
cur.execute("UPDATE MyTable SET WorkingID = ? WHERE WorkibgID is NULL LIMIT 1" , (iCount,))
</code></pre>
<p>It's not a good way (note that <code>UPDATE ... LIMIT</code> is only available if SQLite was compiled with the SQLITE_ENABLE_UPDATE_DELETE_LIMIT option), but it should work.</p>
| 0 |
2016-09-24T16:13:06Z
|
[
"python",
"sqlite"
] |
Matplotlib - how to rescale pixel intensities for RGB image
| 39,677,581 |
<p>I am confused about how matplotlib handles fp32 pixel intensities. To my understanding, it rescales the values between the max and min values of the image. However, when I try to view images originally in [0,1] by rescaling their pixel intensities to [-1,1] (by im*2-1) using imshow(), the image appears differently colored. How do I rescale so that the images don't differ? </p>
<p>EDIT : Please look at the image - <a href="http://i.stack.imgur.com/tATMc.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/tATMc.jpg" alt=""></a></p>
<p>PS: I need to do this as part of a program that outputs those values in [-1,1]</p>
<p>Following is the code used for this:</p>
<pre><code>img = np.float32(misc.face(gray=False))
fig,ax = plt.subplots(1,2)
img = img/255 # Convert to 0,1 range
print (np.max(img), np.min(img))
img0 = ax[0].imshow(img)
plt.colorbar(img0,ax=ax[0])
print (np.max(2*img-1), np.min(2*img-1))
img1 = ax[1].imshow(2*img-1) # Convert to -1,1 range
plt.colorbar(img1,ax=ax[1])
plt.show()
</code></pre>
<p>The max,min output is :</p>
<pre><code>(1.0, 0.0)
(1.0, -1.0)
</code></pre>
| 0 |
2016-09-24T14:41:58Z
| 39,678,117 |
<p>You are probably using matplotlib wrong here.</p>
<p>The normalization step should work correctly, if it's active. <strong>The <a href="http://matplotlib.org/api/colors_api.html#matplotlib.colors.Normalize" rel="nofollow">docs</a> tell us that it is only active by default if the input image is of type float!</strong></p>
<h3>Code</h3>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from scipy import misc
fig, ax = plt.subplots(2,2)
# This usage shows different colors because there is no normalization
# FIRST ROW
f = misc.face(gray=True)
print(f.dtype)
g = f*2 # just some operation to show the difference between usages
ax[0,0].imshow(f)
ax[0,1].imshow(g)
# This usage makes sure that the input-image is of type float
# -> automatic normalization is used!
# SECOND ROW
f = np.asarray(misc.face(gray=True), dtype=float) # TYPE!
print(f.dtype)
g = f*2 # just some operation to show the difference between usages
ax[1,0].imshow(f)
ax[1,1].imshow(g)
plt.show()
</code></pre>
<h3>Output</h3>
<pre><code>uint8
float64
</code></pre>
<p><a href="http://i.imgur.com/Wea9PAm.png" rel="nofollow"><img src="http://i.imgur.com/Wea9PAm.png" alt="enter image description here"></a></p>
<h3>Analysis</h3>
<p>The first row shows the wrong usage, because the input is of type int and therefore no normalization will be used.</p>
<p>The second row shows the correct usage!</p>
<p>EDIT: </p>
<p>sascha has correctly pointed out in the comments that rescaling is not applied to RGB images, so inputs must be ensured to be in the [0,1] range.</p>
| 0 |
2016-09-24T15:41:59Z
|
[
"python",
"image-processing",
"matplotlib"
] |
Define one SQLAlchemy model to be related to one of multiple other models
| 39,677,668 |
<p>I'm designing the database for my own Q&A website, which is somewhat similar to Stack Overflow: </p>
<ol>
<li>A "question" can have several "answers", but an "answer" has only one "question".</li>
<li>Both "questions" and "answers" can have several "comments", but a "comment" has only one "question" or "answer".</li>
</ol>
<p>I have no idea how to design such a database. Here is what I've tried:</p>
<pre><code>class Question(db.Model):
__tablename__ = 'questions'
id = db.Column(db.Integer, primary_key=True)
title = db.Column(db.Unicode(64), unique=True)
body = db.Column(db.UnicodeText)
author_id = db.Column(db.Integer, db.ForeignKey('users.id'))
answers = db.relationship('Answer', backref='question', lazy='dynamic')
class Answer(db.Model):
__tablename__ = 'answers'
id = db.Column(db.Integer, primary_key=True)
body = db.Column(db.UnicodeText)
author_id = db.Column(db.Integer, db.ForeignKey('users.id'))
question_id = db.Column(db.Integer, db.ForeignKey('questions.id'))
class Comment(db.Model):
__tablename__ = 'comments'
id = db.Column(db.Integer, primary_key=True)
# post_id = db.Column(db.Integer, db.ForeignKey('')) ???
</code></pre>
| 1 |
2016-09-24T14:50:48Z
| 39,677,838 |
<p>So you've already managed the first point.</p>
<p>What you're looking for is a generic relationship. You can find it in <a href="https://sqlalchemy-utils.readthedocs.io/en/latest/index.html" rel="nofollow"><code>sqlalchemy_utils</code></a> package.</p>
<pre><code>from sqlalchemy_utils import generic_relationship
class Comment(db.Model):
__tablename__ = 'comments'
id = db.Column(db.Integer, primary_key=True)
object_type = db.Column(db.Unicode(255))
object_id = db.Column(db.Integer)
object = generic_relationship(object_type, object_id)
</code></pre>
<p><a href="http://sqlalchemy-utils.readthedocs.io/en/latest/generic_relationship.html" rel="nofollow">Docs</a> for generic relationship</p>
<p>So basically, it stores <code>object_type</code> as "answer" or "question" and <code>object_id</code> as the object's primary key.</p>
| 3 |
2016-09-24T15:09:39Z
|
[
"python",
"sqlalchemy"
] |
Define one SQLAlchemy model to be related to one of multiple other models
| 39,677,668 |
<p>I'm designing the database for my own Q&A website, which is somewhat similar to Stack Overflow: </p>
<ol>
<li>A "question" can have several "answers", but an "answer" has only one "question".</li>
<li>Both "questions" and "answers" can have several "comments", but a "comment" has only one "question" or "answer".</li>
</ol>
<p>I have no idea how to design such a database. Here is what I've tried:</p>
<pre><code>class Question(db.Model):
__tablename__ = 'questions'
id = db.Column(db.Integer, primary_key=True)
title = db.Column(db.Unicode(64), unique=True)
body = db.Column(db.UnicodeText)
author_id = db.Column(db.Integer, db.ForeignKey('users.id'))
answers = db.relationship('Answer', backref='question', lazy='dynamic')
class Answer(db.Model):
__tablename__ = 'answers'
id = db.Column(db.Integer, primary_key=True)
body = db.Column(db.UnicodeText)
author_id = db.Column(db.Integer, db.ForeignKey('users.id'))
question_id = db.Column(db.Integer, db.ForeignKey('questions.id'))
class Comment(db.Model):
__tablename__ = 'comments'
id = db.Column(db.Integer, primary_key=True)
# post_id = db.Column(db.Integer, db.ForeignKey('')) ???
</code></pre>
| 1 |
2016-09-24T14:50:48Z
| 39,684,299 |
<p>I suggest you extract a base class for Question and Answer, e.g. Post, and make Comment relate to Post, such that a Post can have multiple Comments.</p>
<p>SQLAlchemy ORM supports a few strategies to <a href="http://docs.sqlalchemy.org/en/latest/orm/inheritance.html" rel="nofollow">implement inheritance in the database</a>, and the right strategy to choose depends on the way you plan to query your entities. Here's the <a href="http://docs.sqlalchemy.org/en/latest/orm/extensions/declarative/inheritance.html" rel="nofollow">detailed documentation on how to properly configure it</a>.</p>
<p>So you'd get something like this:</p>
<p><em>(disclaimer: I have not directly run this code, but composed it from your example and my own code. If it isn't working for you, let me know and I'll fix it)</em></p>
<pre><code>class Post(db.Model):
__tablename__ = 'posts'
id = db.Column(db.Integer, primary_key=True)
    kind = db.Column(db.Unicode(64), nullable=False)
body = db.Column(db.UnicodeText)
author_id = db.Column(db.Integer, db.ForeignKey('users.id'))
comments = db.relationship('Comment', backref='post', lazy='dynamic')
__mapper_args__ = {
'polymorphic_identity': 'posts',
'polymorphic_on': kind
}
class Question(Post):
__tablename__ = 'questions'
title = db.Column(db.Unicode(64), unique=True)
answers = db.relationship('Answer', backref='question', lazy='dynamic')
__mapper_args__ = {
'polymorphic_identity': 'questions',
}
class Answer(Post):
__tablename__ = 'answers'
question_id = db.Column(db.Integer, db.ForeignKey('questions.id'))
__mapper_args__ = {
'polymorphic_identity': 'answers',
}
class Comment(db.Model):
__tablename__ = 'comments'
id = db.Column(db.Integer, primary_key=True)
post_id = db.Column(db.Integer, db.ForeignKey('posts.id'))
</code></pre>
| 2 |
2016-09-25T07:03:56Z
|
[
"python",
"sqlalchemy"
] |
generating pi to more than 2315 places
| 39,677,809 |
<p>I have managed to get my code working so it generates pi:</p>
<pre><code> while True:
print("how many digits of pi would you like?")
def make_pi():
q, r, t, k, m, x = 1, 0, 1, 1, 3, 3
for j in range(1000000):
if 4 * q + r - t < m * t:
yield m
q, r, t, k, m, x = 10 * q, 10 * (r - m * t), t, k, (10 * (3 * q + r)) // t - 10 * m, x
else:
q, r, t, k, m, x = q * k, (2 * q + r) * x, t * x, k + 1, (q * (7 * k + 2) + r * x) // (t * x), x + 2
digits = make_pi()
pi_list = []
my_array = []
for i in make_pi():
my_array.append(str(i))
number = int(input())+2
my_array = my_array[:1] + ['.'] + my_array[1:]
big_string = "".join(my_array[: number ])
print("here is the string:\n %s" % big_string)
</code></pre>
<p>however no matter how much I increase the <code>range</code> the code only outputs a maximum of 2315 digits of pi after the decimal point </p>
<pre><code>how can I fix this?
</code></pre>
| -1 |
2016-09-24T15:07:01Z
| 39,678,014 |
<p>What about parametrizing that make_pi generator to accept the number of digits?</p>
<p>Something like this:</p>
<pre><code>def make_pi(num_digits):
q, r, t, k, m, x = 1, 0, 1, 1, 3, 3
for j in range(num_digits):
if 4 * q + r - t < m * t:
yield m
            q, r, t, k, m, x = (10 * q, 10 * (r - m * t), t, k,
                                (10 * (3 * q + r)) // t - 10 * m, x)
else:
            q, r, t, k, m, x = (q * k, (2 * q + r) * x, t * x, k + 1,
                                (q * (7 * k + 2) + r * x) // (t * x), x + 2)
num_digits = 10000
pi = "".join([str(d) for d in make_pi(num_digits)])
print("{0}.{1}".format(pi[:1], pi[1:]))
</code></pre>
| 0 |
2016-09-24T15:30:06Z
|
[
"python",
"python-3.x",
"pi"
] |
Are booleans overwritten in python?
| 39,677,814 |
<pre><code>def space_check(board, position):
return board[position] == ' '
def full_board_check(board):
for i in range(1,10):
if space_check(board, i):
return False
return True
</code></pre>
<p>The last line is <code>return True</code>.
Why not <code>else: return True</code>?
If the if statement returned false, won't the last <code>return True</code> overwrite it?</p>
| -7 |
2016-09-24T15:07:37Z
| 39,677,884 |
<p>If it was </p>
<pre><code>for i in range(1,10):
if space_check(board, i):
return False
else:
return True
</code></pre>
<p>then the function would return after the first iteration of the for loop. This would not lead to the expected behaviour. Currently, you check every space, and not just the first one.</p>
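<p>To make the early-return difference concrete, here is a minimal standalone sketch (the board list and function names below are made up for illustration):</p>

```python
def full_board_check(board):
    # checks every cell; returns True only after the whole loop finds no ' '
    for cell in board:
        if cell == ' ':
            return False
    return True

def full_board_check_broken(board):
    # with an else branch, the function returns on the very first iteration
    for cell in board:
        if cell == ' ':
            return False
        else:
            return True

board = ['X', ' ', 'O']
print(full_board_check(board))         # False: the blank at index 1 is found
print(full_board_check_broken(board))  # True: only index 0 was ever inspected
```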
| 0 |
2016-09-24T15:13:29Z
|
[
"python",
"python-3.x"
] |
Dataframe filtering in pandas
| 39,677,851 |
<p>How can I filter or subset a particular group within a dataframe (e.g., admitted female from the dataframe below)?
I am trying to sum up admission/rejection rates based on gender. This dataframe is small, but what if it were much larger, say tens of thousands of lines, where indexing individual values is impossible?</p>
<pre><code> Admit Gender Dept Freq
0 Admitted Male A 512
1 Rejected Male A 313
2 Admitted Female A 89
3 Rejected Female A 19
4 Admitted Male B 353
5 Rejected Male B 207
6 Admitted Female B 17
7 Rejected Female B 8
8 Admitted Male C 120
9 Rejected Male C 205
10 Admitted Female C 202
11 Rejected Female C 391
12 Admitted Male D 138
13 Rejected Male D 279
14 Admitted Female D 131
15 Rejected Female D 244
16 Admitted Male E 53
17 Rejected Male E 138
18 Admitted Female E 94
19 Rejected Female E 299
20 Admitted Male F 22
21 Rejected Male F 351
22 Admitted Female F 24
23 Rejected Female F 317
</code></pre>
| 0 |
2016-09-24T15:10:45Z
| 39,680,904 |
<p>To filter the data you can use the very comprehensive <code>query</code> function.</p>
<pre><code># Test data
df = DataFrame({'Admit': ['Admitted', 'Rejected', 'Admitted', 'Rejected', 'Admitted', 'Rejected', 'Admitted'],
'Gender': ['Male', 'Male', 'Female', 'Female', 'Male', 'Male', 'Female'],
'Freq': [512, 313, 89, 19, 353, 207, 17],
'Gender Dept': ['A', 'A', 'A', 'A', 'B', 'B', 'B']})
df.query('Admit == "Admitted" and Gender == "Female"')
Admit Freq Gender Gender Dept
2 Admitted 89 Female A
6 Admitted 17 Female B
</code></pre>
<p>To summarize data use <code>groupby</code>.</p>
<pre><code>group = df.groupby(['Admit', 'Gender']).sum()
print(group)
Freq
Admit Gender
Admitted Female 106
Male 865
Rejected Female 19
Male 520
</code></pre>
<p>You can the filter the result simply by subsetting on the created <code>MultiIndex</code>.</p>
<pre><code>group.loc[('Admitted', 'Female')]
Freq 106
Name: (Admitted, Female), dtype: int64
</code></pre>
| 1 |
2016-09-24T20:57:31Z
|
[
"python",
"pandas",
"dataframe",
"filtering",
"subset"
] |
idiomatic repeating group file format parsing
| 39,677,914 |
<pre><code>repeating_group_field = ['a', 'b', 'c']
ddata = ['l', 'm', 'n', 2, 'a', 'b', 'a', 'c', 'b', 'g', 'h', 2, 'c', 'c', 'l', 2, 'b', 'b']
# output = ['l', 'm', 'n', [['a', 'b'], ['a', 'c', 'b']], 'g', 'h', [['c'], ['c']], 'l', [['b'], ['b']]]
def outer(it):
last = {'value': None}
def rgroup(i, it):
def group(delim, it):
yield delim
for i in it:
if i in repeating_group_field and i is not delim:
yield i
else:
last['value'] = i
return
last['value'] = None
delim = it.next()
for x in range(i):
yield [y for y in group(delim, it)]
for i in it:
if type(i) is int:
yield [x for x in rgroup(i, it)]
if last['value']:
yield last['value']
else:
yield i
it = iter(ddata)
result = [z for z in outer(it)]
print result
def flatter(dataset):
iterator = iter(dataset)
for datum in iterator:
if type(datum) is int:
acc = []
delim = iterator.next()
for n in range(datum):
subacc = []
subacc.append(delim)
while True:
try:
repeating_group_element = iterator.next()
if repeating_group_element not in repeating_group_field:
acc.append(subacc)
yield acc
yield repeating_group_element
break
if repeating_group_element != delim:
subacc.append(repeating_group_element)
else:
acc.append(subacc)
break
except StopIteration:
acc.append(subacc)
yield acc
return
else:
yield datum
g = [y for y in flatter(ddata)]
print g
</code></pre>
<p>I've presented two working algorithms above to convert the input data to the desired internal data structure. The rules of the input data are as follows: an int in the input indicates a repeating group, the first element following the repeating group indicator is the repeating group delimiter. The appearance of the delimiter in the input data indicates the start of the next instance of the repeating group. The number of elements in the repeating group is not specified in the repeating group header. The presence of a non-repeating group field in the input stream indicates the end of the repeating group. Validation of the input is not required. </p>
<p>The first implementation uses nested functions and no accumulators. The second implementation is more flat and uses accumulators. I'm just wondering if there is a more idiomatic way to implement this algorithm in Python. </p>
| 0 |
2016-09-24T15:17:06Z
| 39,678,251 |
<p>I doubt there is something clear and simple for such a sophisticated data re-packing procedure.</p>
<p>I did a version of this from scratch, but it in no way looks beautiful (though it looks simpler to me).</p>
<pre><code>def chew_data(data):
def check_type(char):
nonlocal inside_group
if type(char) is int:
inside_group = True
else:
inside_group = False
return char
inside_group = False
group_delimiter = ''
group_result = []
subgroup_result = []
for c in data:
if not inside_group:
if check_type(c):
yield c
else:
if c == group_delimiter:
group_result.append(subgroup_result)
subgroup_result = []
if not group_delimiter:
group_delimiter = c
if c in repeating_group_field:
subgroup_result.append(c)
else:
group_result.append(subgroup_result)
subgroup_result = []
yield group_result
group_result = []
group_delimiter = ''
if check_type(c):
yield c
group_result.append(subgroup_result)
yield group_result
</code></pre>
| 1 |
2016-09-24T15:56:44Z
|
[
"python"
] |
Difference between python lambda functions inside class and def
| 39,677,922 |
<p>I have the following code.</p>
<pre><code>class SomeClass:
a = lambda self: self.b()
def __init__(self):
self.b = lambda self: None
s = SomeClass()
s.a()
</code></pre>
<p>It gives me "TypeError: () takes exactly 1 argument (0 given)", and I want to understand why.</p>
<p>My explanation:</p>
<ul>
<li><p>a - class method, so <code>s.a()</code> equals <code>SomeClass.a(s)</code></p></li>
<li><p>b - object's attribute (not a method, just a function), that is why <code>self.b()</code> doesn't equal <code>SomeClass.b(self)</code>
<p>So in <code>a = lambda self: self.b()</code> the argument for <code>b</code> is missing.</p>
<p>Am I right?</p>
<p>P.S. Is it closure effect?</p>
<pre><code>class SomeClass:
a = lambda self: self.b()
def __init__(self):
self.data = 12
self.b = lambda: self.data
s = SomeClass()
print s.a() #12
s.data = 24
print s.a() #24
</code></pre></li>
</ul>
| 0 |
2016-09-24T15:17:41Z
| 39,678,120 |
<p>Your problem here is the difference between bound methods and functions</p>
<p>Have a simpler example:</p>
<pre><code>class Someclass(object):
bound = lambda *args: "bound method got {}".format(args)
def __init__(self):
self.unbound = lambda *args: "function got {}".format(args)
</code></pre>
<pre><code>>>> c = Someclass()
</code></pre>
<p>If we look closely, these two functions are not of the same type:</p>
<pre><code>>>> c.bound
<bound method Someclass.<lambda> of <__main__.Someclass object at 0x...>>
>>> c.unbound
<function Someclass.__init__.<locals>.<lambda> at 0x...>
</code></pre>
<p>And as a result, when we call them, they receive different arguments:</p>
<pre><code>>>> c.bound(1, 2, 3)
'bound method got (<__main__.Someclass object at 0x...>, 1, 2, 3)'
>>> c.unbound(1, 2, 3)
'function got (1, 2, 3)'
</code></pre>
<p>Notice that only the "bound" function got a <code>self</code> argument passed in. </p>
<p><sup>Tested in 3.5 - 2.7 might have slightly different names for things</sup></p>
| 3 |
2016-09-24T15:42:11Z
|
[
"python",
"oop",
"lambda"
] |
Python Turtle game, Check not working?
| 39,677,940 |
<pre><code>import turtle
# Make the play screen
wn = turtle.Screen()
wn.bgcolor("red")
# Make the play field
mypen = turtle.Turtle()
mypen.penup()
mypen.setposition(-300,-300)
mypen.pendown()
mypen.pensize(5)
for side in range(4):
mypen.forward(600)
mypen.left(90)
mypen.hideturtle()
# Make the object
player = turtle.Turtle()
player.color("black")
player.shape("circle")
player.penup()
# define directions( East, West , South , Nord )
def west():
player.setheading(180)
def east():
player.setheading(0)
def north():
player.setheading(90)
def south():
player.setheading(270)
# define forward
def forward():
player.forward(20)
# Wait for input
turtle.listen()
turtle.onkey(west, "a")
turtle.onkey(east, "d")
turtle.onkey(forward,"w")
turtle.onkey(north,"q")
turtle.onkey(south,"s")
if player.xcor() > 300 or player.xcor() < -300:
print("Game over")
if player.ycor() > 300 or player.ycor() < -300:
print("Game over")
</code></pre>
<p>So everything is working fine until the if statements. When I go through the play field, it should print "Game over". The coordinates are right, but it doesn't check the coordinates! What am I doing wrong?</p>
| 0 |
2016-09-24T15:20:04Z
| 39,678,576 |
<p>The problem is that your logic to test if the player has gone out of bounds is at the top level of your code -- it doesn't belong there. You should turn control over to the turtle listener, via <code>mainloop()</code> and handle the bounds detection in one of your callback methods, namely <code>forward()</code>.</p>
<p>A demonstration of the above in a rework of your code:</p>
<pre><code>import turtle
QUADRANT = 250
# Make the play screen
screen = turtle.Screen()
screen.bgcolor("red")
# Make the play field
play_pen = turtle.Turtle()
play_pen.pensize(5)
play_pen.speed("fastest")
play_pen.penup()
play_pen.setposition(-QUADRANT, -QUADRANT)
play_pen.pendown()
for _ in range(4):
play_pen.forward(QUADRANT * 2)
play_pen.left(90)
play_pen.hideturtle()
# Make the object
player = turtle.Turtle()
player.color("black")
player.shape("circle")
player.penup()
# define forward
def forward():
player.forward(20)
if player.xcor() > QUADRANT or player.xcor() < -QUADRANT or player.ycor() > QUADRANT or player.ycor() < -QUADRANT:
player.hideturtle()
player.setposition((0, 0))
player.write("Game over", False, align="center", font=("Arial", 24, "normal"))
# define directions(East, West, North, South)
turtle.onkey(lambda: player.setheading(180), "a") # west
turtle.onkey(lambda: player.setheading(0), "d") # east
turtle.onkey(lambda: player.setheading(90), "q") # north
turtle.onkey(lambda: player.setheading(270), "s") # south
turtle.onkey(forward, "w")
# Wait for input
turtle.listen()
turtle.mainloop()
</code></pre>
| 0 |
2016-09-24T16:34:31Z
|
[
"python",
"visual-studio",
"if-statement",
"turtle-graphics"
] |
Python MP3 Player
| 39,677,957 |
<p>I am trying to play an MP3 file in Python, but I can't find the right module!
I've tried this:</p>
<pre><code>import os
os.startfile('hello.mp3')
</code></pre>
<p>But I just got this <strong>error</strong>:</p>
<pre><code>Traceback (most recent call last):
File "/Applications/Youtube/text 2 speech/test.py", line 2, in <module>
os.startfile('hello.mp3')
AttributeError: 'module' object has no attribute 'startfile'
</code></pre>
<p>I have also tried this:</p>
<pre><code>import vlc
p = vlc.MediaPlayer("file:hello.mp3")
p.play()
</code></pre>
<p>But I get this <strong>error</strong>:</p>
<pre><code>Traceback (most recent call last):
File "/Applications/Youtube/text 2 speech/test.py", line 1, in <module>
import vlc
ImportError: No module named 'vlc'
</code></pre>
<p>But I still can't find the right module. Could someone please help?</p>
| 0 |
2016-09-24T15:21:59Z
| 39,678,501 |
<p>You will need to install the <code>vlc.py</code> module from <a href="https://wiki.videolan.org/Python_bindings" rel="nofollow">https://wiki.videolan.org/Python_bindings</a></p>
<p>The absolute bare bones for this would be something like:</p>
<pre><code>import vlc
Inst = vlc.Instance()
player = Inst.media_player_new()
Media = Inst.media_new_path('/home/rolf/vp1.mp3')
player.set_media(Media)
player.play()
</code></pre>
<p>Although you would need to check <code>player.get_state()</code> to see if it is still running, paused, stopped etc and <code>player.stop()</code> to stop the audio once started</p>
<p>As far as I am aware <code>os.startfile()</code> is only available for the <code>windows</code> operating system.<br>
Using the <code>play</code> command line instruction (Linux and probably OS X)</p>
<pre><code>import os
os.system('play /home/rolf/vp1.mp3')
</code></pre>
<p>You could also look at Gstreamer, although the documentation could do with some improvement.</p>
<pre><code>import gst
player = gst.element_factory_make("playbin")
player.set_property("uri", "file:///home/james/vp1.mp3")
player.set_state(gst.STATE_PLAYING)
</code></pre>
<p>play is stopped with <code>player.set_state(gst.STATE_NULL)</code>,<br>
pause with <code>player.set_state(gst.STATE_PAUSED)</code></p>
<p>Note: Avoid tutorials for Gstreamer version 0.10, search instead for version 1.0<br>
Both <code>vlc</code> and <code>Gstreamer</code> allow you to play audio and video, although, in my opinion, <code>vlc</code> in simpler to use but <code>Gstreamer</code> is more flexible.</p>
| 0 |
2016-09-24T16:25:34Z
|
[
"python",
"mp3",
"playback"
] |
Extract various information
| 39,677,964 |
<p><strong>Overview</strong></p>
<p>I would like to extract various information, like name, date and address, from a 2-column CSV file before writing it to another CSV file.</p>
<p><strong>Conditions</strong></p>
<ol>
<li>Extract <strong>Name</strong> by first row as it will always be the first
row. </li>
<li>Extract <strong>Date</strong> by regex (is there regex in python?) ##/##/####
format</li>
<li>Extract <strong>Address</strong> by the constant keyword 'road'</li>
</ol>
<hr>
<p><strong>Example CSV dummy Source data reference file format viewed from EXCEL</strong> </p>
<hr>
<pre><code> ID,DATA
88888,DADDY
88888,2/06/2016
88888,new issac road
99999,MUMMY
99999,samsung road
99999,12/02/2016
</code></pre>
<p><strong>Desired CSV outcome</strong></p>
<pre><code>ID,Name,Address,DATE
8888,DADDY,new issac road,2/06/2016
9999,MUMMY,samsung road,12/02/2016
</code></pre>
<p><strong>What i have so far:</strong> </p>
<pre><code>import csv
from collections import defaultdict
columns = defaultdict(list) # each value in each column is appended to a list
with open('dummy_data.csv') as f:
reader = csv.DictReader(f) # read rows into a dictionary format
for row in reader: # read a row as {column1: value1, column2: value2,...}
for (k,v) in row.items(): # go over each column name and value
columns[k].append(v) # append the value into the appropriate list
# based on column name k
uniqueidstatement = columns['receipt_id']
print uniqueidstatement
resultFile = open("wtf.csv",'wb')
wr = csv.writer(resultFile, dialect='excel')
wr.writerow(uniqueidstatement)
</code></pre>
| 0 |
2016-09-24T15:22:54Z
| 39,678,472 |
<p>You can group the sections by <code>ID</code>, and from each group you can determine which row is the date and which is the address with some simple logic.</p>
<pre><code>import csv
from itertools import groupby
from operator import itemgetter
with open("test.csv") as f, open("out.csv", "w") as out:
reader = csv.reader(f)
next(reader)
writer = csv.writer(out)
writer.writerow(["ID","NAME","ADDRESS", "DATE"])
groups = groupby(csv.reader(f), key=itemgetter(0))
for k, v in groups:
id_, name = next(v)
add_date_1, add_date_2 = next(v)[1], next(v)[1]
date, add = (add_date_1, add_date_2) if "road" in add_date_2 else (add_date_2, add_date_1)
writer.writerow([id_, name, add, date])
</code></pre>
| 0 |
2016-09-24T16:21:55Z
|
[
"python",
"csv"
] |
python sampling from different distributions with different probability
| 39,677,967 |
<p>I am trying to implement a function which returns 100 samples from three different multivariate gaussian distributions.</p>
<p>numpy provides a way to sample from a single multivariate gaussian, but I could not find a way to sample from three different multivariate gaussians with different sampling probabilities.</p>
<p>My requirement is to sample with probability $[0.7, 0.2, 0.1]$ from three multivariate gaussians with mean and covariances as given below</p>
<pre><code>G_1 mean = [1,1] cov =[ [ 5, 1] [1,5]]
G_2 mean = [0,0] cov =[ [ 5, 1] [1,5]]
G_3 mean = [-1,-1] cov =[ [ 5, 1] [1,5]]
</code></pre>
<p>Any idea ?</p>
| 2 |
2016-09-24T15:23:00Z
| 39,678,093 |
<p>Say you create a list of sampling functions, one per component. (Note: calling <code>np.random.multivariate_normal</code> directly here would store three fixed samples and simply repeat them; wrapping each call in a lambda draws a fresh sample every time.)</p>
<pre><code>import numpy as np

generators = [
    lambda: np.random.multivariate_normal([1, 1], [[5, 1], [1, 5]]),
    lambda: np.random.multivariate_normal([0, 0], [[5, 1], [1, 5]]),
    lambda: np.random.multivariate_normal([-1, -1], [[5, 1], [1, 5]])]
</code></pre>
<p>Now you can create a weighted random array of component indices, since <a href="http://docs.scipy.org/doc/numpy-dev/reference/generated/numpy.random.choice.html" rel="nofollow"><code>np.random.choice</code></a> supports weighted sampling:</p>
<pre><code>draw = np.random.choice([0, 1, 2], 100, p=[0.7, 0.2, 0.1])
</code></pre>
<p>(<code>draw</code> is a length-100 array of entries, each from <em>{0, 1, 2}</em> with probability <em>0.7, 0.2, 0.1</em>, respectively.)</p>
<p>Now just generate the samples:</p>
<pre><code>[generators[i]() for i in draw]
</code></pre>
| 3 |
2016-09-24T15:39:35Z
|
[
"python",
"numpy",
"probability",
"sampling"
] |
!ls in Jupyter notebook (Python 3)
| 39,678,309 |
<p>If I use:</p>
<pre><code>!ls '/Users/martyn/Documents/rawData'
</code></pre>
<p>it gives me a list of the files in the required directory.</p>
<p>But I want to paramterize this. I tried:</p>
<pre><code>pathData = '/Users/martyn/Documents/rawData'
!ls pathData
</code></pre>
<p>But this gives the error:</p>
<pre><code>ls: pathData: No such file or directory
</code></pre>
<p>I can see the problem ... but can't see how to fix it. Any help would be greatly appreciated.</p>
| 0 |
2016-09-24T16:04:40Z
| 39,678,425 |
<p>You probably need</p>
<pre><code>!ls {pathData}
</code></pre>
<p>or</p>
<pre><code>!ls $pathData
</code></pre>
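<p>If you only need the listing as a Python object (no shell involved), the standard library works too:</p>

```python
import os

pathData = os.getcwd()        # substitute '/Users/martyn/Documents/rawData'
files = os.listdir(pathData)  # a plain Python list of entry names
print(files)
```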
| 1 |
2016-09-24T16:17:32Z
|
[
"python",
"ipython",
"jupyter-notebook"
] |
python: queue.queue() best practice to set size?
| 39,678,374 |
<p>So I have a Queue.Queue().</p>
<p>I have a bunch of producers who put jobs into that Queue.Queue() and a bunch of consumers who pop from the queue.</p>
<p>1) Is there a benefit to capping the queue size vs. not doing so?<br>
2) By not capping the size, does it really not have any size limit? Can it grow forever?</p>
<p>I've noticed that deadlock seems to occur more often when the queue has a fixed size.</p>
| 0 |
2016-09-24T16:11:31Z
| 39,678,524 |
<p>If you don't cap the size, the <code>Queue</code> can grow until you run out of memory. So no size limit is imposed by Python, but your machine still has finite resources.</p>
<p>In some applications (probably most), the programmer knows memory consumption can't become a problem, due to the specific character of their application.</p>
<p>But if, e.g., you have producers that "run forever", and consumers that run much slower than producers, capping the size is essential to avoid unbounded memory demands.</p>
<p>As to deadlocks, it's highly unlikely that the implementation of <code>Queue</code> is responsible for a deadlock regardless of whether the <code>Queue</code>'s size is bounded; far more likely that deadlocks are due to flaws in application code. For example, picture a producer that fetches things over a network, and mistakenly suppresses errors when the network connection is broken. Then the producer can fail to produce any new items, and so a consumer will eventually block forever waiting for something new to show up on the <code>Queue</code>. That "looks like" deadlock, but the cause really has nothing to do with <code>Queue</code>.</p>
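<p>A minimal sketch of the capped case (Python 3 module names; the sizes here are arbitrary): the producer is forced to wait whenever three items are already queued, so memory stays bounded no matter how fast it runs.</p>

```python
import queue
import threading

q = queue.Queue(maxsize=3)  # put() blocks once 3 items are waiting
consumed = []

def producer():
    for i in range(10):
        q.put(i)     # blocks while the queue is full, bounding memory use
    q.put(None)      # sentinel telling the consumer to stop

def consumer():
    while True:
        item = q.get()
        if item is None:
            break
        consumed.append(item)

t_prod = threading.Thread(target=producer)
t_cons = threading.Thread(target=consumer)
t_prod.start()
t_cons.start()
t_prod.join()
t_cons.join()
print(consumed)  # all ten items arrive; never more than 3 were in flight at once
```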
| 0 |
2016-09-24T16:28:09Z
|
[
"python",
"multithreading",
"queue"
] |
Having trouble installing pyWin32 on my windows 10 operating system based computer
| 39,678,559 |
<p><a href="http://i.stack.imgur.com/pkXyV.png" rel="nofollow"><img src="http://i.stack.imgur.com/pkXyV.png" alt=" I tried installing the package in windows command prompt"></a></p>
<p>Why will it not install? I thought I followed the correct procedures</p>
| -1 |
2016-09-24T16:32:11Z
| 39,706,136 |
<p>Just use bounding quotes for the full path of .whl file.</p>
<pre><code>pip install "C:\...full path with spaces\pywin32-...-win_amd64 (1).whl"
</code></pre>
<p>Of course, make sure <code>pip install wheel</code> was run first.</p>
<p>Alternative way is using easy_install (not necessary to download the installer manually):</p>
<pre><code>easy_install.exe https://github.com/jmdaweb/TWBlue_deps_windows/raw/master/x64/pywin32-220.win-amd64-py2.7.exe
</code></pre>
<p>But the second way may cause problems with py2exe if you have plans to use it. Maybe <code>pip install pypiwin32</code> is OK for you (it will install pyWin32 build 219, should work just fine for most cases).</p>
| 0 |
2016-09-26T14:58:59Z
|
[
"python",
"module",
"pywin32",
"pyhook"
] |
Having a compiling error with Python using PyCharm 4.0.5
| 39,678,570 |
<p>The reason I'm asking the question here is that I did not find a solution elsewhere. I'm having the following error with my PyCharm 4.0.5 program while trying to run a Python script. It was working fine one day, and when I tried using it this afternoon I got the following error after trying to run a program which I am 100% sure has no errors in it.
In the message box I got the following error:</p>
<pre><code>Failed to import the site module
Traceback (most recent call last):
File "C:\Python34\lib\site.py", line 562, in <module>
main()
File "C:\Python34\lib\site.py", line 544, in main
known_paths = removeduppaths()
File "C:\Python34\lib\site.py", line 125, in removeduppaths
dir, dircase = makepath(dir)
File "C:\Python34\lib\site.py", line 90, in makepath
dir = os.path.join(*paths)
AttributeError: 'module' object has no attribute 'path'
Process finished with exit code 1
</code></pre>
<p>I have never seen an error of this kind and don't know where to start tackling this problem.</p>
<p>Any feedback will be greatly appreciated!</p>
<p>The code looks like the following, and I seem to have forgotten to mention that it gives me the exact same error for every single .py script on my computer.</p>
<pre><code>import turtle
wn = turtle.Screen()
alex = turtle.Turtle()
def hexagon(var):
for i in range(6):
alex.right(60)
alex.forward(var)
def square(var):
for i in range(4):
alex.forward(var)
alex.left(90)
def triangle(var):
for i in range(3):
alex.forward(var)
alex.left(120)
def reset():
alex.clear()
alex.reset()
x = True
while x:
alex.hideturtle()
choice = input("""
Enter the shape of choice:
a. Triangle
b. Square
c. Hexagon
""")
if choice.lower() == "a":
length = input("Enter the desired length of the sides: ")
triangle(int(length))
restart = input("Do you wish to try again? Y/N ")
if restart.lower() == "n":
x = False
else:
reset()
if choice.lower() == "b":
length = input("Enter the desired length of the sides: ")
square(int(length))
restart = input("Do you wish to try again? Y/N ")
if restart.lower() == "n":
x = False
else:
reset()
if choice.lower() == "c":
length = input("Enter the desired length of the sides: ")
hexagon(int(length))
restart = input("Do you wish to try again? Y/N ")
if restart.lower() == "n":
x = False
else:
reset()
print("Thank you for using your local turtle services!")
</code></pre>
| 0 |
2016-09-24T16:33:44Z
| 39,678,668 |
<p>You must have a python file named <code>os.py</code> which is being imported instead of the "real" os module.</p>
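<p>One quick way to confirm the shadowing (a sketch; run it from the same directory as your script):</p>

```python
import os

# On a healthy install this points into the standard library
# (e.g. .../lib/python3.x/os.py). If it points at a file inside your own
# project, a local os.py is shadowing the real module: rename that file
# and delete any leftover os.pyc next to it.
print(os.__file__)
```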
| 0 |
2016-09-24T16:43:24Z
|
[
"python",
"pycharm"
] |
Uploading empty data to Bigquery using gcloud python
| 39,678,654 |
<p>I try to upload a few rows of data using the <a href="https://googlecloudplatform.github.io/google-cloud-python/stable/bigquery-usage.html" rel="nofollow">gcloud python library</a> and don't succeed. Here is the sample code taken from the <a href="https://googlecloudplatform.github.io/google-cloud-python/stable/bigquery-table.html" rel="nofollow">latest documentation</a> </p>
<pre><code>client = bigquery.Client()
dataset = client.dataset('test')
table = dataset.table("test_table")
rows = [("foo", "bar"), ("foo2", "bar2")]
result = table.insert_data(rows)
</code></pre>
<p>If I query the latest upload I get:</p>
<pre><code>[(None, None), (None, None)]
</code></pre>
<p>So it inserts empty fields. In the documentation it says the uploaded rows should be a "list of tuples", but that does not seem to work. My schema has two string fields. Unicode fields do not work either, and I do not get any error result back, which makes it difficult to debug. Any hint what I'm doing wrong?</p>
| 1 |
2016-09-24T16:42:08Z
| 39,799,850 |
<p>Explicitly declaring the schema in your table will help solve this problem. I.e., instead of using <code>table = dataset.table('test_table')</code>, you should use the following:</p>
<pre><code>left = SchemaField('left', 'STRING', 'REQUIRED')
right = SchemaField('right', 'STRING', 'REQUIRED')
table = dataset.table('test_table', schema=[left, right])
</code></pre>
<p>I had opened an issue on Github regarding this. You can <a href="https://github.com/GoogleCloudPlatform/google-cloud-python/issues/2472" rel="nofollow">read more here</a> if interested.</p>
| 1 |
2016-09-30T20:47:34Z
|
[
"python",
"google-bigquery",
"gcloud-python",
"google-cloud-python"
] |
How to reinitialize text widget in Tkinter?
| 39,678,665 |
<p>I am trying to return the user's search result in the GUI itself.
This is the code I am using:</p>
<pre><code>def printResult(searchResult):
global resultsFrame
text = Text(resultsFrame)
for i in searchResult:
text.insert(END,i+'\n')
text.pack(side=TOP)
</code></pre>
<p>where searchResult is a list which has all the statements to be printed on the screen. Below is how I declared resultsFrame in the code (before root.mainloop()).
I am using global so that it makes changes to the existing frame, but it rather adds another frame below it.
How do I make changes in the already existing text widget?</p>
<pre><code>resultsFrame = Frame(root)
resultsFrame.pack(side=TOP)
</code></pre>
<p>For reference here is an image of the GUI:</p>
<p><a href="http://i.stack.imgur.com/anajN.png" rel="nofollow"><img src="http://i.stack.imgur.com/anajN.png" alt="Creation of new Text widget"></a></p>
<p>EDIT :
<strong><em>Issue Solved</em></strong></p>
<p>I declared the frame outside the function</p>
<pre><code>resultsFrame = Frame(root)
resultsFrame.pack(side=TOP)
text = Text(resultsFrame)
def printResult(searchResult):
global resultsFrame
global text
text.delete("1.0",END)
for i in searchResult:
text.insert(END,str(i)+'\n')
text.pack(side=TOP)
</code></pre>
| 0 |
2016-09-24T16:43:16Z
| 39,678,805 |
<p>Try this:</p>
<pre><code>global resultsFrame
text = Text(resultsFrame)
def printResult(searchResult):
for i in searchResult:
text.insert(END,i+'\n')
...
text.pack(side=TOP)
</code></pre>
<p>This should solve your issue.</p>
| 2 |
2016-09-24T16:58:05Z
|
[
"python",
"python-2.7",
"user-interface",
"tkinter"
] |
Is a python dict comprehension always "last wins" if there are duplicate keys
| 39,678,672 |
<p>If I create a python dictionary with a dict comprehension, but there are duplicate keys, am I guaranteed that the last item will the the one that ends up in the final dictionary? It's not clear to me from looking at <a href="https://www.python.org/dev/peps/pep-0274/" rel="nofollow">https://www.python.org/dev/peps/pep-0274/</a>?</p>
<pre><code>new_dict = {k:v for k,v in [(1,100),(2,200),(3,300),(1,111)]}
new_dict[1] #is this guaranteed to be 111, rather than 100?
</code></pre>
| 1 |
2016-09-24T16:43:35Z
| 39,678,738 |
<p>If you mean something like </p>
<pre><code>{key: val for (key, val) in pairs}
</code></pre>
<p>where <code>pairs</code> is an ordered collection (eg, list or tuple) of 2-element lists or tuples then yes, the comprehension will take the collection in order and the last value will "win". </p>
<p>Note that if pairs is a set of pairs, then there is no "last item", so the outcome is not predictable.
EXAMPLE:</p>
<pre><code>>>> n = 10
>>> pairs = [("a", i) for i in range(n)]
>>> {key:val for (key, val) in pairs}
{'a': 9}
>>> {key:val for (key, val) in set(pairs)}
{'a': 2}
</code></pre>
| 1 |
2016-09-24T16:51:20Z
|
[
"python",
"dictionary",
"duplicates",
"dictionary-comprehension"
] |
Is a python dict comprehension always "last wins" if there are duplicate keys
| 39,678,672 |
<p>If I create a python dictionary with a dict comprehension, but there are duplicate keys, am I guaranteed that the last item will the the one that ends up in the final dictionary? It's not clear to me from looking at <a href="https://www.python.org/dev/peps/pep-0274/" rel="nofollow">https://www.python.org/dev/peps/pep-0274/</a>?</p>
<pre><code>new_dict = {k:v for k,v in [(1,100),(2,200),(3,300),(1,111)]}
new_dict[1] #is this guaranteed to be 111, rather than 100?
</code></pre>
| 1 |
2016-09-24T16:43:35Z
| 39,678,945 |
<p>Last item wins. The best documentation I can find for this is in the <a href="https://docs.python.org/3/reference/expressions.html#dictionary-displays" rel="nofollow">Python 3 language reference, section 6.2.7</a>:</p>
<blockquote>
<p>A dict comprehension, in contrast to list and set comprehensions, needs two expressions separated with a colon followed by the usual "for" and "if" clauses. When the comprehension is run, the resulting key and value elements are inserted in the new dictionary <strong>in the order they are produced</strong>.</p>
</blockquote>
<p>That documentation also explicitly states that the last item wins for comma-separated key-value pairs (<code>{1: 1, 1: 2}</code>) and for dictionary unpacking (<code>{**{1: 1}, **{1: 2}}</code>):</p>
<blockquote>
<p>If a comma-separated sequence of key/datum pairs is given, ... you can specify the same key multiple times in the key/datum list, and the final dictionary's value for that key will be the last one given.</p>
<p>A double asterisk <code>**</code> denotes <em>dictionary unpacking</em>. Its operand must be a mapping. Each mapping item is added to the new dictionary. Later values replace values already set by earlier key/datum pairs and earlier dictionary unpackings.</p>
</blockquote>
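<p>All three behaviors are easy to confirm directly (a quick check of my own, not taken from the docs):</p>

```python
# Last item wins in a dict comprehension:
new_dict = {k: v for k, v in [(1, 100), (2, 200), (3, 300), (1, 111)]}
assert new_dict[1] == 111

# ...in a comma-separated key/datum list:
assert {1: 1, 1: 2} == {1: 2}

# ...and in dictionary unpacking (later unpackings replace earlier ones):
assert {**{1: 1}, **{1: 2}} == {1: 2}
```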
| 2 |
2016-09-24T17:13:24Z
|
[
"python",
"dictionary",
"duplicates",
"dictionary-comprehension"
] |
How to get distinct count of keys along with other aggregations in pandas
| 39,678,806 |
<p>My data frame (DF) looks like this </p>
<pre><code>Customer_number Store_number year month last_buying_date1 amount
1 20 2014 10 2015-10-07 100
1 20 2014 10 2015-10-09 200
2 20 2014 10 2015-10-20 100
2 10 2014 10 2015-10-13 500
</code></pre>
<p>and I want to get an output like this </p>
<pre><code> year month sum_purchase count_purchases distinct customers
2014 10 900 4 3
</code></pre>
<p>How do I get an output like this using Agg and group by . I am using a 2 step group by currently but struggling to get the distinct customers . Here's my approach </p>
<pre><code>#### Step 1 - Aggregating everything at customer_number, store_number level
aggregations = {
'amount': 'sum',
'last_buying_date1': 'count',
}
grouped_at_Cust = DF.groupby(['customer_number','store_number','month','year']).agg(aggregations).reset_index()
grouped_at_Cust.columns = ['customer_number','store_number','month','year','total_purchase','num_purchase']
#### Step2 - Aggregating at year month level
aggregations = {
'total_purchase': 'sum',
'num_purchase': 'sum',
size
}
Monthly_customers = grouped_at_Cust.groupby(['year','month']).agg(aggregations).reset_index()
Monthly_customers.colums = ['year','month','sum_purchase','count_purchase','distinct_customers']
</code></pre>
<p>My struggle is in the 2nd step. How do i include size in the 2nd aggregation step ? </p>
| 0 |
2016-09-24T16:58:09Z
| 39,679,044 |
<p>You could use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.DataFrameGroupBy.agg.html" rel="nofollow"><code>groupby.agg</code></a> and supplying the function <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.SeriesGroupBy.nunique.html" rel="nofollow"><code>nunique</code></a> to return number of unique Customer Ids in the group.</p>
<pre><code>df_grp = df.groupby(['year', 'month'], as_index=False) \
.agg({'purchase_amt':['sum','count'], 'Customer_number':['nunique']})
df_grp.columns = map('_'.join, df_grp.columns.values)
df_grp
</code></pre>
<p><a href="http://i.stack.imgur.com/TlazM.png" rel="nofollow"><img src="http://i.stack.imgur.com/TlazM.png" alt="Image"></a></p>
<hr>
<p>Incase, you are trying to group them differently (omitting certain column) when performing <code>groupby</code> operation:</p>
<pre><code>df_grp_1 = df.groupby(['year', 'month']).agg({'purchase_amt':['sum','count']})
df_grp_2 = df.groupby(['Store_number', 'month', 'year'])['Customer_number'].agg('nunique')
</code></pre>
<p>Take the first level of the multiindex columns which contains the <code>agg</code> operation performed:</p>
<pre><code>df_grp_1.columns = df_grp_1.columns.get_level_values(1)
</code></pre>
<p>Merge them back on the intersection of the columns used to group them:</p>
<pre><code>df_grp = df_grp_1.reset_index().merge(df_grp_2.reset_index().drop(['Store_number'],
axis=1), on=['year', 'month'], how='outer')
</code></pre>
<p>Rename the columns to new ones:</p>
<pre><code>d = {'sum': 'sum_purchase', 'count': 'count_purchase', 'nunique': 'distinct_customers'}
df_grp.columns = [d.get(x, x) for x in df_grp.columns]
df_grp
</code></pre>
<p><a href="http://i.stack.imgur.com/jjNSR.png" rel="nofollow"><img src="http://i.stack.imgur.com/jjNSR.png" alt="Image"></a></p>
| 1 |
2016-09-24T17:25:43Z
|
[
"python",
"pandas",
"group-by",
"aggregate"
] |
How to add keypress events in event queue for pygame
| 39,678,811 |
<p>I have a game made with pygame which runs perfectly okay. I want to create a system which reads keys to press from a file (which will contain codes of keys to press in separate lines) and adds them to the pygame event queue so that the player agent moves on its own without the keys actually being pressed. </p>
<p>After reading the pygame docs I have tried to create a new event object and add it to the queue but the event constructor requires attributes which I could not find anywhere.</p>
<p>Does anyone know which attributes are needed to create an instance of an event or if there is another better approach for what I am trying to do? </p>
<p><strong>UPDATE:</strong> I have successfully added my events to the queue however they do not seem to work even though they are completely identical to the ones created automatically. I am printing the event queue below. Highlighted events are added when I actually press the 'a' key. As you can see my events (the ones above) do not trigger the rest of the events as they should.
<a href="http://i.stack.imgur.com/zpmvq.png" rel="nofollow"><img src="http://i.stack.imgur.com/zpmvq.png" alt=""></a></p>
<p><strong>UPDATE 2:</strong> I added some print statements to the event handling code for the keydown events. These are only executed if I actually press the key on the keyboard and just seem to ignore the events I raise from the code.<br>
<code>input_list = [pg.K_RETURN, pg.K_a, pg.K_s]
if self.cursor.state == c.PLAYER1:
self.cursor.rect.y = 358
if keys[pg.K_DOWN]:
print("down")
self.cursor.state = c.PLAYER2
for input in input_list:
if keys[input]:
print("button")
self.reset_game_info()
self.done = True
elif self.cursor.state == c.PLAYER2:
self.cursor.rect.y = 403
if keys[pg.K_UP]:
print("up")
self.cursor.state = c.PLAYER1</code> </p>
<p>I am creating my events like so:<br>
<code>pg.event.post(pg.event.Event(pg.KEYDOWN, {'mod': 0, 'scancode': 30, 'key': pg.K_a, 'unicode': 'a'}))</code><br>
I found these values by printing the event which happens when I actually press the key 'a'. </p>
| 0 |
2016-09-24T16:58:41Z
| 39,681,219 |
<p>The <a href="http://www.pygame.org/docs/ref/event.html#pygame.event.Event" rel="nofollow">Event object's</a> first argument is its type, which is an integer between <code>pygame.USEREVENT</code> and <code>pygame.NUMEVENTS</code> (24 to 32 exclusive). This is used to identify your event from other events, such as <code>pygame.KEYDOWN</code>, <code>pygame.MOUSEBUTTONDOWN</code> etc.</p>
<p>The second argument is either a dictionary or keyword arguments. These will be your event's attributes. The dict should contain strings as keys.</p>
<p>Here's a short example demonstrating how it's used:</p>
<pre><code>import pygame
pygame.init()
event1 = pygame.event.Event(pygame.USEREVENT, {"greeted": False, "jumped": 10, "ID": 1})
event2 = pygame.event.Event(pygame.USEREVENT, greeted=True, jumped=200, ID=2)
pygame.event.post(event1)
pygame.event.post(event2)
for event in pygame.event.get():
if event.type == pygame.QUIT:
raise SystemExit
elif event.type == pygame.USEREVENT:
print("Player:", event.ID, "| Greeted:", event.greeted, "| Jumped:", event.jumped)
elif event.type == pygame.USEREVENT + 1:
print("Player:", event.ID, "| Greeted:", event.greeted, "| Jumped:", event.jumped)
</code></pre>
<p>This will output:</p>
<pre><code>Player: 1 | Greeted: False | Jumped: 10
Player: 2 | Greeted: True | Jumped: 200
</code></pre>
| 0 |
2016-09-24T21:40:55Z
|
[
"python",
"pygame",
"keyboard-events"
] |
Pandas nested dict to dataframe
| 39,678,854 |
<p>I have a simple nested list like this:</p>
<pre><code>allFrame= [{'statValues': {'kpi2': 2, 'kpi1': 1}, 'modelName': 'first'},{'statValues': {'kpi2': 4, 'kpi1': 2}, 'modelName': 'second'}, {'statValues': {'kpi2': 3, 'kpi1': 3}, 'modelName': 'third'}]
</code></pre>
<p>a <code>pd.DataFrame(allFrame)</code> or <code>pd.DataFrame.from_dict(allFrame)</code>both do not really work and only return <a href="http://i.stack.imgur.com/UPV9o.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/UPV9o.jpg" alt="json"></a></p>
<h2>How can I instead receive the kpi_X as column-names?</h2>
<p>I found <a href="http://stackoverflow.com/questions/32770359/python-dict-to-dataframe-pandas">Python dict to DataFrame Pandas</a> doing something similar. However, I believe this operation should be simpler</p>
| 1 |
2016-09-24T17:03:21Z
| 39,678,953 |
<p>Looks like you need to flatten those dicts first.</p>
<p>Apply a flattening function on the list first:</p>
<pre><code>def flatten_dict(d, prefix='__'):
def items():
        # A closure for recursively extracting dict-like values
for key, value in d.items():
if isinstance(value, dict):
for sub_key, sub_value in flatten_dict(value).items():
# Key name should imply nested origin of the dict,
# so we use a default prefix of __ instead of _ or .
yield key + prefix + sub_key, sub_value
else:
yield key, value
return dict(items())
</code></pre>
<p>Also note the use of <code>from_records</code>: each dict in the list becomes one row in the dataframe.</p>
<p>So:</p>
<pre><code>l = list(map(flatten_dict, allFrame))
df = pd.DataFrame.from_records(l)
</code></pre>
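<p>For example, on one record of the question's <code>allFrame</code>, each nested dict flattens to keys like <code>statValues__kpi1</code> (the helper is repeated here so the snippet runs on its own):</p>

```python
def flatten_dict(d, prefix='__'):
    # Same helper as above, repeated to keep this snippet self-contained.
    def items():
        for key, value in d.items():
            if isinstance(value, dict):
                for sub_key, sub_value in flatten_dict(value).items():
                    yield key + prefix + sub_key, sub_value
            else:
                yield key, value
    return dict(items())

row = {'statValues': {'kpi2': 2, 'kpi1': 1}, 'modelName': 'first'}
flat = flatten_dict(row)
assert flat == {'modelName': 'first', 'statValues__kpi2': 2, 'statValues__kpi1': 1}
```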
| 1 |
2016-09-24T17:14:11Z
|
[
"python",
"pandas",
"dictionary",
"nested"
] |
Efficient check if two numbers are co-primes (relatively primes)?
| 39,678,984 |
<p>What is the most efficient ("pythonic") way to test/check if two numbers are <strong>co-primes</strong> (relatively prime) in <strong>Python</strong>.</p>
<p>For the moment I have this code:</p>
<pre><code>def gcd(a, b):
while b != 0:
a, b = b, a % b
return a
def coprime(a, b):
return gcd(a, b) == 1
print(coprime(14,15)) #Should be true
print(coprime(14,28)) #Should be false
</code></pre>
<p>Can the code for checking/testing if two numbers are relatively prime be considered "Pythonic" or there is some better way?</p>
| 1 |
2016-09-24T17:18:07Z
| 39,679,114 |
<p>The only suggestion for improvement might be with your function <code>gcd</code>. Namely, you could use <a href="https://docs.python.org/3/library/math.html#math.gcd" rel="nofollow"><code>gcd</code></a> that's defined in <code>math</code> (for Python <code>3.5</code>) for a speed boost.</p>
<p>Defining <code>coprime2</code> that uses the built-in version of <code>gcd</code>:</p>
<pre><code>from math import gcd as bltin_gcd
def coprime2(a, b):
return bltin_gcd(a, b) == 1
</code></pre>
<p>You almost cut down execution speed by half due to the fact that <code>math.gcd</code> is implemented in <code>C</code> (<a href="https://github.com/python/cpython/blob/master/Modules/mathmodule.c#L688" rel="nofollow">see <code>math_gcd</code> in <code>mathmodule.c</code></a>):</p>
<pre><code>%timeit coprime(14, 15)
1000000 loops, best of 3: 907 ns per loop
%timeit coprime2(14, 15)
1000000 loops, best of 3: 486 ns per loop
</code></pre>
<p>For Python <code><= 3.4</code> you could use <code>fractions.gcd</code> but, as noted in a comment by @user2357112, it is not implemented in <code>C</code>. Actually, <em>there's really no incentive to actually use it, <a href="https://hg.python.org/cpython/file/3.4/Lib/fractions.py#l17" rel="nofollow">its implementation is exactly the same as yours.</em></a></p>
| 3 |
2016-09-24T17:32:28Z
|
[
"python",
"algorithm",
"python-3.x",
"primes",
"relative"
] |
Python Django Rest Post API without storage
| 39,679,030 |
<p>I would like to create a web api with Python and the Django Rest framework. The tutorials that I have read so far incorporate models and serializers to process and store data. I was wondering if there's a simpler way to process data that is post-ed to my api and then return a JSON response without storing any data.</p>
<p>Currently, this is my urls.py</p>
<pre><code>from django.conf.urls import url
from rest_framework import routers
from core.views import StudentViewSet, UniversityViewSet, TestViewSet
router = routers.DefaultRouter()
router.register(r'students', StudentViewSet)
router.register(r'universities', UniversityViewSet)
router.register(r'other', TestViewSet,"other")
urlpatterns = router.urls
</code></pre>
<p>and this is my views.py</p>
<pre><code>from rest_framework import viewsets
from rest_framework.decorators import api_view
from rest_framework.response import Response
from .models import University, Student
from .serializers import UniversitySerializer, StudentSerializer
import json
from django.http import HttpResponse
class StudentViewSet(viewsets.ModelViewSet):
queryset = Student.objects.all()
serializer_class = StudentSerializer
class UniversityViewSet(viewsets.ModelViewSet):
queryset = University.objects.all()
serializer_class = UniversitySerializer
class TestViewSet(viewsets.ModelViewSet):
def retrieve(self, request, *args, **kwargs):
return Response({'something': 'my custom JSON'})
</code></pre>
<p>The first two parts regarding Students and Universities were created after following a tutorial on Django setup. I don't need the functionality that it provides for creating, editing and removing objects. I tried playing around with the TestViewSet which I created.</p>
<p>I am currently stuck trying to receive JSON data that gets posted to the url ending with "other" and processing that JSON before responding with some custom JSON.</p>
<p><strong>Edit</strong></p>
<p>These two links were helpful in addition to the solution provided:</p>
<p><a href="http://stackoverflow.com/questions/13603027/django-rest-framework-non-model-serializer">Django REST framework: non-model serializer</a></p>
<p><a href="http://jsatt.com/blog/abusing-django-rest-framework-part-1-non-model-endpoints/" rel="nofollow">http://jsatt.com/blog/abusing-django-rest-framework-part-1-non-model-endpoints/</a></p>
| 0 |
2016-09-24T17:23:11Z
| 39,679,125 |
<p>You can use their generic <a href="http://www.django-rest-framework.org/api-guide/views/" rel="nofollow">APIView</a> class (which doesn't have any attachment to Models or Serializers) and then handle the request yourself based on the HTTP request type. For example:</p>
<pre><code>class RetrieveMessages(APIView):
def post(self, request, *args, **kwargs):
posted_data = self.request.data
city = posted_data['city']
return_data = [
{"echo": city}
]
return Response(status=200, data=return_data)
def get....
</code></pre>
| 1 |
2016-09-24T17:33:20Z
|
[
"python",
"json",
"django",
"rest",
"web"
] |
Read k matrices size nxm from stdin in Python
| 39,679,049 |
<p>I need to read k matrices of size nxm from stdin in Python.
In the first line there must be the number of matrices (k), and then k descriptions of matrices: in the first line, 2 integers for the size (n and m) divided by a space, then the matrix itself.</p>
<p>Here's an example:</p>
<pre><code>2
2 3
4 5 6
3 1 7
4 4
5 3 4 5
6 5 1 4
3 9 1 4
8 5 4 3
</code></pre>
<p>Could you please tell me how I can do this?
I could have done this only without considering m (for 1 matrix):</p>
<pre><code>n = int(input())
a = []
for i in range(n):
a.append([int(j) for j in input().split()])
</code></pre>
<p>I have found some similar questions but stdin is not used (e.g. reading from file is used) or the size of string in matrix is not set.</p>
| 0 |
2016-09-24T17:26:09Z
| 39,679,306 |
<p>You are on the right way; try to break it into simple steps. Basically, an n x m matrix is n rows, and each row has m elements (pretty obvious). If n=1, then we have a single line containing m elements. To take such input we would just: </p>
<pre><code>matrix = input().split() #read the input
matrix = [ int(j) for j in matrix] #matrix is now 1 x m list
</code></pre>
<p>or simplify this</p>
<pre><code>matrix = [ int(j) for j in input().split() ]
</code></pre>
<p>Now suppose we have n rows; that means we have to do this n times, which is simply looping n times:</p>
<pre><code>matrix = [ [ int(j) for j in input().split() ] for i in range(n) ]
</code></pre>
<p>A more pythonic way is using <code>map</code>:</p>
<pre><code>matrix= [ list( map(int, input().split() ) ) for i in range(n) ]
</code></pre>
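<p>Tying the pieces together for the full input format in the question, one possible sketch reads through an explicit line iterator, so the same function works on <code>sys.stdin</code> or on a plain list of test lines:</p>

```python
def read_matrices(lines):
    # lines: any iterator of input lines (sys.stdin, a file, or a list)
    it = iter(lines)
    k = int(next(it))                      # number of matrices
    matrices = []
    for _ in range(k):
        n, m = map(int, next(it).split())  # size line: "n m"
        matrices.append([[int(x) for x in next(it).split()] for _ in range(n)])
    return matrices

# With the sample input from the question:
sample = ["2", "2 3", "4 5 6", "3 1 7",
          "4 4", "5 3 4 5", "6 5 1 4", "3 9 1 4", "8 5 4 3"]
assert read_matrices(sample) == [
    [[4, 5, 6], [3, 1, 7]],
    [[5, 3, 4, 5], [6, 5, 1, 4], [3, 9, 1, 4], [8, 5, 4, 3]],
]
```

<p>On the real input, call it as <code>read_matrices(sys.stdin)</code> inside your script.</p>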
| 0 |
2016-09-24T17:53:10Z
|
[
"python",
"matrix",
"stdin"
] |
Modifying Depth First Search with dictionaries
| 39,679,149 |
<p>There are a lot of really good DFS python implementations out there, such as <a href="http://eddmann.com/posts/depth-first-search-and-breadth-first-search-in-python/" rel="nofollow">this one</a>, but none of them include cost. I'd like to be able to record the total cost of a DFS path, but this implementation represents the graph as a dictionary of sets.</p>
<pre><code>graph = {'A': set(['B', 'C']),
'B': set(['A', 'D', 'E']),
'C': set(['A', 'F']),
'D': set(['B']),
'E': set(['B', 'F']),
'F': set(['C', 'E'])}
def dfs(graph, start):
visited, stack = set(), [start]
while stack:
vertex = stack.pop()
if vertex not in visited:
visited.add(vertex)
stack.extend(graph[vertex] - visited)
return visited
dfs(graph, 'A') # {'E', 'D', 'F', 'A', 'C', 'B'}
</code></pre>
<p>I don't think sets could adequately record cost. So I feel that instead representing the graph as a dictionary of dictionaries would be a good way to implement cost. i.e:</p>
<pre><code>graph = {'A' : {'C' : 10,
'D' : 7}
etc.....
</code></pre>
<p>My question is, how could this algorithm be modified to use a this new type of graph instead? I'm still very unfamiliar with python syntax, so seeing an example like this really helps.</p>
<p>Alternatively, if there's an even easier way to represent cost, I'm open to suggestions</p>
<p>EDIT: Okay, here's a different way of thinking about it. How can I modify the code above so that it treats the nested dictionaries similarly to sets?</p>
<p>EDIT2: I believe I've solved it myself, now that I understand the keys() function of dictionaries returns a list.</p>
| 0 |
2016-09-24T17:36:31Z
| 39,679,444 |
<p>Looks like <a href="https://networkx.github.io" rel="nofollow">NetworkX</a> is what you are looking for. DFS is included in the package and its graph implementation is what you want. Hope it helps. </p>
| 0 |
2016-09-24T18:05:34Z
|
[
"python",
"dictionary",
"depth-first-search"
] |
How can a class hold an array of classes in django
| 39,679,167 |
<p>I have been having trouble using django. Right now, I have a messagebox class that is suppose to hold messages, and a message class that extends it. How do I make it so messagebox will hold messages?</p>
<p>Something else that I cannot figure out is how classes are to interact. Like, I have a user that can send messages. Should I call its method to call a method in messagebox to send a msg or can I have a method in user to make a msg directly. </p>
<p>My teacher tries to accentuate cohesion and coupling, but he never even talks about how to implement this in django or implement django period. Any help would be appreciated.</p>
| 0 |
2016-09-24T17:38:10Z
| 39,679,214 |
<p>You're confusing two different things here. A class can easily have an attribute that is a list which contains instances of another class, there is nothing difficult about that. </p>
<p>(But note that there is no way in which a Message should extend MessageBox; this should be composition, not inheritance.)</p>
<p>However then you go on to talk about Django models. But Django models, although they are Python classes, also represent tables in the database. And the way you represent one table containing a list of entries in another table is via a foreign key field. So in this case your Message model would have a ForeignKey to MessageBox.</p>
<p>Where you put the send method depends entirely on your logic. A message should probably know how to send itself, so it sounds like the method would go there.</p>
| 3 |
2016-09-24T17:43:05Z
|
[
"python",
"django"
] |
multiple conditions for np.where python pandas
| 39,679,382 |
<p>I have the following dataframe:</p>
<pre><code>region pop_1 pop_1_source pop_2 pop_2_source pop_3 pop_3_source
a 99 x1 84 x2 61 x3
b 64 x1 65 x2 16 x3
c 92 x1 26 x2 6 x3
d 82 x1 60 x2 38 x3
e 45 x1 77 x2 42 x3
</code></pre>
<p>I can calculate the highest value found in each region through:</p>
<pre><code>df['upper_limit'] = df[['pop_1','pop_2','pop_3']].max(axis=1)
</code></pre>
<p>If I only compare two populations I can then calculate the source of the highest population i.e:</p>
<pre><code>df['upper_limit_source'] = np.where(df.upper_limit == df['upper_limit'],df.pop_1,df.pop_2)
</code></pre>
<p>However, if I try to expand this to search across all three columns, it fails to work.
I have searched for a solution, but cannot make anything work with <code>np.where</code>, <code>np.logical_or</code>, or similar.</p>
<p>Am I missing something obvious? </p>
| 1 |
2016-09-24T17:59:05Z
| 39,679,560 |
<p>I found your question a bit confusing (among other things, <code>df.upper_limit == df['upper_limit']</code> is always true, and your "source" columns are all filled with <code>x1</code> (except for one <code>1x</code> which looks like a typo)). </p>
<p>However, it seems like you'd like to find <em>which</em> of the three columns was responsible for the maximum, then calculate a value based on this. So, to calculate the column responsible, you could use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.argmax.html" rel="nofollow"><code>np.argmax</code></a>:</p>
<pre><code>import numpy as np
idmax = np.argmax(df[['pop_1','pop_2','pop_3']].as_matrix(), axis=1)
</code></pre>
<p>This will give you, for each row, 0, 1, or 2, depending on which of the three columns was responsible for the maximum.</p>
<p>Now if, for example, you'd like to choose <code>pop_1_source</code>, <code>pop_2_source</code>, or <code>pop_3_source</code>, according to the index, you could use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.choose.html" rel="nofollow"><code>np.choose</code></a>:</p>
<pre><code>np.choose(idmax, df[[`pop_1_source', 'pop_2_source', pop_3_source']].as_matrix().T)
</code></pre>
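<p>As a small self-contained check of the <code>argmax</code>/<code>choose</code> combination (a made-up two-row version of the question's data, plain arrays instead of a dataframe):</p>

```python
import numpy as np

pops = np.array([[99, 84, 61],
                 [64, 65, 16]])
sources = np.array([['x1', 'x2', 'x3'],
                    ['x1', 'x2', 'x3']])

idmax = np.argmax(pops, axis=1)   # column index of the max in each row
assert idmax.tolist() == [0, 1]

# Pick, per row, the source entry matching that column index.
chosen = np.choose(idmax, sources.T)
assert chosen.tolist() == ['x1', 'x2']
```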
| 1 |
2016-09-24T18:18:22Z
|
[
"python",
"pandas"
] |
Extra newlines in python
| 39,679,398 |
<p>I have this python code which is used to give direction to <strong>m</strong> to reach <strong>p</strong>. </p>
<p>Here is the code:</p>
<pre><code>#!/bin/python
def displayPathtoPrincess(n,grid):
m = "m"
p = "p"
for i in range(n):
if(m in grid[i]):
m_column = grid[i].find(m)
m_row = int(i + 1)
#print "{0}x{1} \n".format(int(i + 1), m_position + 1)
if(p in grid[i]):
p_column = grid[i].find(p)
p_row = int(i + 1)
#print "{0}x{1} \n".format(int(i + 1), p_position + 1)
down_up = p_row - m_row
if(down_up > 0):
print "DOWN\n"*down_up
else:
print "UP\n"
right_left = p_column - m_column
if(right_left > 0):
print "RIGHT\n"*right_left
else:
print "LEFT\n"
m = input()
grid = []
for i in xrange(0, m):
grid.append(raw_input().strip())
displayPathtoPrincess(m,grid)
</code></pre>
<p>Input:</p>
<pre><code>> 6
> ---
> ---
> -m-
> ---
> ---
> p--
</code></pre>
<p>Expected output:</p>
<pre><code>DOWN
DOWN
DOWN
LEFT
</code></pre>
<p>My output:</p>
<pre><code>DOWN
DOWN
DOWN
LEFT
</code></pre>
<p>As you can see in my output, the program adds a new line whenever it changes the direction. Any ideas on how to stop this new line from appearing?</p>
| 0 |
2016-09-24T18:00:55Z
| 39,679,450 |
<p>You are hard-coding a newline after each <code>'DOWN'</code> or <code>'RIGHT'</code> every time you do this:</p>
<pre><code> print "DOWN\n"*down_up
print "RIGHT\n"*right_left
</code></pre>
<p>The resulting strings will be <code>'DOWN'</code> or <code>'RIGHT'</code> followed by a newline, the specified number of times. That means that such strings will end with an unwanted newline. The smallest fix is to multiply by one fewer than the necessary number, and then add the last bit:</p>
<pre><code> print "DOWN\n"*(down_up-1) + 'DOWN'
print "RIGHT\n"*(right_left-1) + 'RIGHT'
</code></pre>
<p>Or use <code>str.join</code>:</p>
<pre><code> print '\n'.join("DOWN" for i in range(down_up))
print '\n'.join("RIGHT" for i in range(right_left))
</code></pre>
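<p>An equivalent, slightly shorter spelling joins a repeated list; written as a function call it behaves the same under Python 2 and 3, and there is no trailing newline to worry about:</p>

```python
down_up, right_left = 3, 1  # example counts from the sample grid

print('\n'.join(['DOWN'] * down_up))
print('\n'.join(['RIGHT'] * right_left))
```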
| 0 |
2016-09-24T18:05:58Z
|
[
"python",
"python-2.7"
] |
Python scan for WiFi
| 39,679,421 |
<p>I was searching for a program that can scan for WiFi networks and print all of the SSIDs. I tried with scapy but I failed. I am using the pyCharm editor. </p>
<p>I tried this code:</p>
<pre><code>from scapy.all import *
from scapy.layers.dot11 import Dot11
def packet_handler(pkt):
if pkt.haslayer(Dot11) and pkt.type == 2:
print(pkt.show())
scapy.sniff(iface="mon0", prn=packet_handler)
</code></pre>
| 0 |
2016-09-24T18:03:53Z
| 39,679,678 |
<p>try <code>pip install wifi</code> then for scanning use</p>
<pre><code>from wifi import Cell, Scheme
Cell.all('wlan0')
</code></pre>
<p>This returns a list of Cell objects. Under the hood, this calls <code>iwlist scan</code> and parses the unfriendly output. Each cell object should have the following attributes: ssid, signal, quality, and more.</p>
<p>For connecting, use:</p>
<pre><code>cell = Cell.all('wlan0')[0]
scheme = Scheme.for_cell('wlan0', 'home', cell, passkey)
scheme.save()
scheme.activate()
scheme = Scheme.find('wlan0', 'home')
scheme.activate()
</code></pre>
<p>for more info goto <a href="https://wifi.readthedocs.io/en/latest/" rel="nofollow">https://wifi.readthedocs.io/en/latest/</a></p>
| 0 |
2016-09-24T18:33:08Z
|
[
"python",
"wifi",
"python-3.5",
"scapy",
"sniffer"
] |
How to load virtual environment for Python on Windows 10 using PowerShell
| 39,679,423 |
<p>Following Tyler Butler's <a href="http://www.tylerbutler.com/2012/05/how-to-install-python-pip-and-virtualenv-on-windows-with-powershell/" rel="nofollow">post</a>, I was able to install <code>pip</code>, <code>python</code> and <code>virtualenv</code> to my PowerShell. However, I can only enter a virtual environment where I have created it. If I open a new session of Power Shell, <code>workon</code> can only show me the the first virtual environment I that have ever created.</p>
<hr>
<p>The initial loading of the first virtual environment
<a href="http://i.stack.imgur.com/ZXbUp.png" rel="nofollow"><img src="http://i.stack.imgur.com/ZXbUp.png" alt="The initial loading of the first virtual environment"></a></p>
<hr>
<p>Unable to load the existing virtual environment. After specifying <code>workon venv</code>, nothing gets loaded
<a href="http://i.stack.imgur.com/fZqbc.png" rel="nofollow"><img src="http://i.stack.imgur.com/fZqbc.png" alt="Unable to load the existing virtual environment"></a></p>
<hr>
<p>Unable to create the virtual environment under the same name, thus some thing has been in place.
<a href="http://i.stack.imgur.com/V6DwZ.png" rel="nofollow"><img src="http://i.stack.imgur.com/V6DwZ.png" alt="No more venv as new virtual environment"></a></p>
<hr>
<p>After creating several more virtual environment, the <code>workon</code> command can only get me back <code>venv</code> as existing virtual environment.
<a href="http://i.stack.imgur.com/Ei1jq.png" rel="nofollow"><img src="http://i.stack.imgur.com/Ei1jq.png" alt="Nothing else are found as virtual environment"></a></p>
<hr>
<p>Seeking help with getting the virtual environment function working in PowerSheel.</p>
| 1 |
2016-09-24T18:03:55Z
| 39,682,292 |
<p>Thanks to the suggestions from: <a href="http://www.voidynullness.net/blog/2014/06/19/install-python-setuptools-pip-virtualenvwrapper-for-powershell-pyside-on-windows/" rel="nofollow">http://www.voidynullness.net/blog/2014/06/19/install-python-setuptools-pip-virtualenvwrapper-for-powershell-pyside-on-windows/</a></p>
<p>Including the following specification to the File <code>~\Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1</code>, where <code>~</code> stands for the root folder of the user.</p>
<pre><code>Import-Module virtualenvwrapper
</code></pre>
<p>As of now, in the new sessions of the PowerShell, <code>workon</code> functions nicely.</p>
| 0 |
2016-09-25T00:34:25Z
|
[
"python",
"powershell",
"virtualenv"
] |
Seaborn visualize groups
| 39,679,431 |
<p>How can I plot this data frame using seaborn to show the KPI per model?</p>
<pre><code>allFrame = pd.DataFrame({'modelName':['first','second', 'third'],
'kpi_1':[1,2,3],
'kpi_2':[2,4,3]})
</code></pre>
<p>Not like <code>sns.barplot(x="kpi2", y="kpi1", hue="modelName", data=allFrame)</code>
<a href="http://i.stack.imgur.com/5vH67.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/5vH67.jpg" alt="enter image description here"></a>
But rather like this per KPI
<a href="http://i.stack.imgur.com/er6kQ.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/er6kQ.jpg" alt="enter image description here"></a></p>
| 0 |
2016-09-24T18:04:36Z
| 39,680,526 |
<p>Try <code>melt</code>ing the dataframe first, and then you can plot using <code>seaborn</code>:</p>
<pre><code>import pandas as pd
import seaborn as sns
allFrame = pd.DataFrame({'modelName':['first','second', 'third'],
'kpi_1':[1,2,3],
'kpi_2':[2,4,3]})
allFrame2 = pd.melt(frame=allFrame,
id_vars=['modelName'],
value_vars=["kpi_1","kpi_2"],
value_name="Values", var_name="kpis")
sns.barplot(x="kpis", y="Values", hue="modelName", data=allFrame2)
</code></pre>
<p>Thanks!</p>
| 1 |
2016-09-24T20:11:46Z
|
[
"python",
"plot",
"group",
"seaborn"
] |