title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags
---|---|---|---|---|---|---|---|---|---|
python: utilization of a list in a two level dict | 40,043,288 | <p>How can I initialize a list in a two level dict in a pythonic way?</p>
<pre><code>pos = defaultdict(dict)
pait = "2:N"
cars = ["bus","taxi"]
for x in cars:
    pos[x][pait] = []
</code></pre>
| 0 | 2016-10-14T12:33:56Z | 40,043,400 | <p>Python's list and dictionary comprehensions come in handy for one-line initialization.</p>
<pre><code>pos = {x: {"2:N": []} for x in ["bus", "taxi"]}
</code></pre>
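<p>For readers comparing the two approaches, here is a runnable check (names taken from the question) showing that the one-line comprehension builds the same structure as the loop over <code>defaultdict</code>:</p>

```python
from collections import defaultdict

# loop version from the question
pos = defaultdict(dict)
pait = "2:N"
cars = ["bus", "taxi"]
for x in cars:
    pos[x][pait] = []

# one-line comprehension from the answer
pos2 = {x: {"2:N": []} for x in ["bus", "taxi"]}

print(dict(pos) == pos2)  # True
```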
| 1 | 2016-10-14T12:40:19Z | [
"python"
] |
Start two instances of spyder with python2.7 and python3.4 in ubuntu | 40,043,363 | <p>I would like to install two spyder: one is with python2.7 and the other is with python3.4</p>
<p>And I run the commands to install both:</p>
<pre><code>pip install spyder
pip3 install spyder
</code></pre>
<p>But how to start it differently?</p>
<p>Because when I type </p>
<pre><code>spyder
</code></pre>
<p>It just comes out spyder with python2.7</p>
<p>How to start spyder with python3.4?</p>
<p>Thank you.</p>
| 0 | 2016-10-14T12:38:12Z | 40,044,211 | <p>I wasn't able to download spyder through pip3, so I used <code>apt install spyder3</code> instead, but hopefully this will still work:</p>
<p><strong>For python2.7</strong></p>
<pre><code>spyder
</code></pre>
<p><strong>For python3.4</strong></p>
<pre><code>spyder3
</code></pre>
<p>I could get both running at the same time but if you have a problem try it with <code>--new-instance</code></p>
<p>Hope that helps!</p>
| 0 | 2016-10-14T13:19:16Z | [
"python",
"python-3.x",
"ide",
"spyder"
] |
Python matrix comparison | 40,043,444 | <p>I have big data like:</p>
<pre><code>{'a_1':0b110000,
'a_2':0b001100,
'a_3':0b000011,
'b_1':0b100100,
'b_2':0b000001,
'c_1':0b100000,}
</code></pre>
<p>and so on... The structure of the data can be reorganized; it is shown mainly to illustrate what I want to achieve. Rows of 'a' will never overlap with their sub-rows.
What would be a performant way to get the best combinations of two (ab, ac) or three (abc) or more rows in terms of most matching values?
Hope the question is clear somehow, hard to describe :/
Maybe some matrix operations of numpy?</p>
<p>more info:
possible combinations of two would be ab,ac,bc. ab would check rows of a (a_1,a_2,a_3) against rows of b (b_1,b_2) each other. a_1 & b_1 means 0b110000 & 0b100100 and would give one result. a_1 & b_2 means 0b110000 & 0b000001 and would give no result. That would be the description of a solution by loops, but it is very slow, especially with combinations of 8 or so (not covered by example data).</p>
<p>maybe a more clear structure of the data:</p>
<pre><code>{'a': [0b110000,
0b001100,
0b000011],
'b': [0b100100,
0b000001],
'c': [0b100000]}
</code></pre>
<p>Let me show, how I'm doing those calculations until now. The data-structure is kind of different, as I tried to start this question with an 'I thought' better structure...</p>
<pre><code>from itertools import combinations

data = {'a': [1,1,2,2,3,3],
        'b': [4,5,5,5,4,5],
        'c': [6,7,7,7,6,7]}
combine_count = 3
for config in combinations(['a','b','c'], combine_count):
    ret = {}
    for index, combined in enumerate(zip(*tuple(data.get(k) for k in config))):
        ret.setdefault(combined, []).append(index)
    for k, v in ret.items():
        score = len(v)
        if score >= 2:
            print(k, score)
</code></pre>
<p>My problem with this is that the process of constructing <code>combined</code> takes a lot of time, especially with larger <code>combine_count</code>.
The data of course is a lot larger: it has about 231 keys, each with a list of length ~60000. Also, the RAM consumption is too high. </p>
| 0 | 2016-10-14T12:42:13Z | 40,068,611 | <p>Not sure about your triple evaluation* but you might be able to modify this to do what you want. I am assuming you will iterate through the combinations of a,b,c etc.</p>
<pre><code>#!/usr/bin/python
import numpy as np
import random
import time

A = [np.random.randint(0, 2**15, random.randint(1, 5)) + 2**16 for i in range(231)]
best_score = 0
tm = time.time()
for i, a in enumerate(A):
    for j, b in enumerate(A[1:]):
        for k, c in enumerate(A[2:]):
            an, bn, cn = len(a), len(b), len(c)  # some shortcuts
            a_block = np.broadcast_to(a.reshape(an, 1, 1), (an, bn, cn))
            b_block = np.broadcast_to(b.reshape(1, bn, 1), (an, bn, cn))
            c_block = np.broadcast_to(c.reshape(1, 1, cn), (an, bn, cn))
            all_and = c_block & b_block & a_block
            all_score = ((all_and & 1) +
                         ((all_and >> 1) & 1) +
                         ((all_and >> 2) & 1) +
                         ((all_and >> 3) & 1) +
                         ((all_and >> 4) & 1) +
                         ((all_and >> 5) & 1))
            ix = np.unravel_index(np.argmax(all_score), (an, bn, cn))
            if all_score[ix] > best_score:
                print(i, j, k, ix, all_score[ix], a_block[ix], b_block[ix], c_block[ix])
                best_score = all_score[ix]
                best_abc = (i, j, k)
                best_ix = ix[:]
print(time.time() - tm)
print(best_score)
print(best_abc)
print(best_ix)
''' gives
0 0 0 (0, 2, 0) 2 95038 76894 78667
0 0 1 (0, 3, 1) 3 95038 70262 96242
0 0 2 (0, 2, 0) 4 95038 76894 96255
0 3 2 (0, 0, 0) 5 95038 96255 96255
4 3 2 (0, 0, 0) 6 96255 96255 96255
871.6093053817749
6
(4, 3, 2)
(0, 0, 0)
'''
</code></pre>
<p>EDIT * I think this code does: find the location (and value of) the maximum between a1&b1&c1, a2&b1&c1, a3&b1&c1, a1&b2&c1 etc which is possibly different from a1&b1&c1 | a2&b1&c1 | a3&b1&c1 | a1&b2&c1</p>
<p>EDIT2 More explicitly showing the process of iterating over a pseudo dataset. a, b, c are arrays 1 to 5 elements long, but numpy's randint can't generate random numbers 60000 bits long; also, I've not attempted to ensure all the numbers are unique (which would be quite easy to do). It takes about 15m on this not very powerful laptop, so that gives you a starting point for comparison.</p>
<p>A way to speed up the process might be to confine comparison to just two i.e. a,b to start with and keep a list of the high scorers then go through each of those combinations &ing against all the other entries in the list to select the highest scoring three way and.</p>
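<p>The scoring idea above (AND the rows together, then count the surviving set bits) can be prototyped without NumPy to check correctness on small inputs. This is a plain-Python sketch over the question's pairwise case, using <code>bin().count('1')</code> as the popcount:</p>

```python
from itertools import product

a = [0b110000, 0b001100, 0b000011]
b = [0b100100, 0b000001]

# score every (a_i, b_j) pair by the number of bits both rows share
scores = {(i, j): bin(x & y).count("1")
          for (i, x), (j, y) in product(enumerate(a), enumerate(b))}

best = max(scores, key=scores.get)
print(best, scores[best])  # (0, 0) 1 -- a_1 & b_1 share one bit (other pairs tie)
```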
| 1 | 2016-10-16T09:00:10Z | [
"python",
"performance",
"numpy",
"matrix",
"comparison"
] |
How to configure nginx for django with gunicorn? | 40,043,660 | <p>I have successfully run gunicorn and confirmed that my web runs on localhost:8000. But I can't get nginx right. My config file goes like this:</p>
<pre><code>server {
    listen 80;
    server_name 104.224.149.42;

    location / {
        proxy_pass http://127.0.0.1:8000;
    }
}
</code></pre>
<p>104.224.149.42 is the ip for outside world.</p>
| -3 | 2016-10-14T12:52:14Z | 40,043,979 | <p>Do this</p>
<ul>
<li>Remove <code>default</code> from <code>/etc/nginx/sites-enabled/default</code></li>
</ul>
<p>Create <code>/etc/nginx/sites-available/my.conf</code> with following</p>
<pre><code>server {
    listen 80;
    server_name 104.224.149.42;

    location / {
        proxy_pass http://127.0.0.1:8000;
    }
}
</code></pre>
<p>Then <code>sudo ln -s /etc/nginx/sites-available/my.conf /etc/nginx/sites-enabled/my.conf</code></p>
<ul>
<li>restart <code>nginx</code></li>
</ul>
<p>You also have to configure serving of <code>static</code> and <code>media</code> files.</p>
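<p>A sketch of what the static-files part might look like; the <code>alias</code> paths below are placeholders and should point at your project's actual <code>STATIC_ROOT</code>/<code>MEDIA_ROOT</code>:</p>

```nginx
location /static/ {
    alias /path/to/your/project/static/;
}

location /media/ {
    alias /path/to/your/project/media/;
}
```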
| 0 | 2016-10-14T13:07:15Z | [
"python",
"django",
"nginx"
] |
How to customise QGroupBox title in PyQt5? | 40,043,709 | <p>Here's a piece of code that creates a simple QGroupBox:</p>
<pre><code>from PyQt5.QtWidgets import (QApplication, QWidget,
                             QGroupBox, QGridLayout)


class QGroupBoxTest(QWidget):
    def __init__(self):
        super().__init__()
        self.initUI()

    def initUI(self):
        gb = QGroupBox()
        gb.setTitle('QGroupBox title')
        appLayout = QGridLayout()
        appLayout.addWidget(gb, 0, 0)
        self.setLayout(appLayout)
        self.setWindowTitle('QGroupBox test window')
        self.setGeometry(300, 300, 300, 200)


if __name__ == "__main__":
    import sys
    app = QApplication(sys.argv)
    test = QGroupBoxTest()
    test.show()
    sys.exit(app.exec_())
</code></pre>
<p>and here's what the output looks like to me:</p>
<p><a href="https://postimg.org/image/6kga2ljjd/" rel="nofollow"><img src="https://s12.postimg.org/q2axijgh9/qgb_1.jpg" alt="qgb-1.jpg"></a></p>
<p>Now let's say I want to add some style to it and I do this by adding this line to the initUI method:</p>
<pre><code>gb.setStyleSheet("border: 1px solid gray; border-radius: 5px")
</code></pre>
<p>here's the output:</p>
<p><a href="https://postimg.org/image/ign5gvbzv/" rel="nofollow"><img src="https://s9.postimg.org/lnhp0hwfz/qgb_1.jpg" alt="qgb-1.jpg"></a></p>
<p>As can be clearly seen from the pic, the title has ended up inside the frame and now is overlapping the frame border.</p>
<p>So I have three questions actually:</p>
<ol>
<li>Why did the title move?</li>
<li>How do I move it about and place it wherever I want (if possible)?</li>
<li>What if I simply want to round off border corners without specifying the border-style property. Let's say I want the border-style to stay the same as in the first pic but with rounded corners. How do I do that?</li>
</ol>
| 1 | 2016-10-14T12:54:35Z | 40,057,845 | <p><strong>1)</strong> Probably that's the default Qt placement: in the first image the platform style is used, and it takes care of borders and title placement. When you change the stylesheet you override something, and you get the ugly placement.</p>
<p><strong>2)</strong> You can control the "title" position using the <code>QGroupBox:title</code> controls, for example:</p>
<pre><code>gb.setStyleSheet('QGroupBox:title {'
                 'subcontrol-origin: margin;'
                 'subcontrol-position: top center;'
                 'padding-left: 10px;'
                 'padding-right: 10px; }')
</code></pre>
<p>will result in something like this:</p>
<p><a href="https://i.stack.imgur.com/vAdF4.png" rel="nofollow"><img src="https://i.stack.imgur.com/vAdF4.png" alt="title"></a></p>
<p><del><strong>3)</strong> My suggestion is to create different strings for the stylesheet attributes you want to change, then compose them to create the style you want.</del></p>
| 1 | 2016-10-15T10:23:23Z | [
"python",
"css",
"pyqt5"
] |
How to customise QGroupBox title in PyQt5? | 40,043,709 | <p>Here's a piece of code that creates a simple QGroupBox:</p>
<pre><code>from PyQt5.QtWidgets import (QApplication, QWidget,
                             QGroupBox, QGridLayout)


class QGroupBoxTest(QWidget):
    def __init__(self):
        super().__init__()
        self.initUI()

    def initUI(self):
        gb = QGroupBox()
        gb.setTitle('QGroupBox title')
        appLayout = QGridLayout()
        appLayout.addWidget(gb, 0, 0)
        self.setLayout(appLayout)
        self.setWindowTitle('QGroupBox test window')
        self.setGeometry(300, 300, 300, 200)


if __name__ == "__main__":
    import sys
    app = QApplication(sys.argv)
    test = QGroupBoxTest()
    test.show()
    sys.exit(app.exec_())
</code></pre>
<p>and here's what the output looks like to me:</p>
<p><a href="https://postimg.org/image/6kga2ljjd/" rel="nofollow"><img src="https://s12.postimg.org/q2axijgh9/qgb_1.jpg" alt="qgb-1.jpg"></a></p>
<p>Now let's say I want to add some style to it and I do this by adding this line to the initUI method:</p>
<pre><code>gb.setStyleSheet("border: 1px solid gray; border-radius: 5px")
</code></pre>
<p>here's the output:</p>
<p><a href="https://postimg.org/image/ign5gvbzv/" rel="nofollow"><img src="https://s9.postimg.org/lnhp0hwfz/qgb_1.jpg" alt="qgb-1.jpg"></a></p>
<p>As can be clearly seen from the pic, the title has ended up inside the frame and now is overlapping the frame border.</p>
<p>So I have three questions actually:</p>
<ol>
<li>Why did the title move?</li>
<li>How do I move it about and place it wherever I want (if possible)?</li>
<li>What if I simply want to round off border corners without specifying the border-style property. Let's say I want the border-style to stay the same as in the first pic but with rounded corners. How do I do that?</li>
</ol>
| 1 | 2016-10-14T12:54:35Z | 40,072,415 | <p>Even though this question has already been answered, I will post what I've figured out regarding techniques for applying style sheets to widgets in PyQt, which partly answers my original question. I hope someone will find it useful.</p>
<p>I think it's nice to keep the styles in separate css(qss) files:</p>
<pre><code>/* css stylesheet file that contains all the style information */

QGroupBox {
    border: 1px solid black;
    border-radius: 5px;
}

QGroupBox:title {
    subcontrol-origin: margin;
    subcontrol-position: top center;
    padding: 0 3px 0 3px;
}
</code></pre>
<p>and the python code looks like this:</p>
<pre><code>from PyQt5.QtWidgets import (QApplication, QWidget,
                             QGroupBox, QGridLayout)
from PyQt5.QtCore import QFile, QTextStream


class QGroupBoxTest(QWidget):
    def __init__(self):
        super().__init__()
        self.initUI()

    def initUI(self):
        gb = QGroupBox()
        gb.setTitle('QGroupBox title:')
        gb.setStyleSheet(self.getStyleSheet("./styles.qss"))
        appLayout = QGridLayout()
        appLayout.addWidget(gb, 0, 0)
        self.setLayout(appLayout)
        self.setWindowTitle('QGroupBox test window')
        self.setGeometry(300, 300, 300, 300)

    def getStyleSheet(self, path):
        f = QFile(path)
        f.open(QFile.ReadOnly | QFile.Text)
        stylesheet = QTextStream(f).readAll()
        f.close()
        return stylesheet


if __name__ == "__main__":
    import sys
    app = QApplication(sys.argv)
    test = QGroupBoxTest()
    test.show()
    sys.exit(app.exec_())
</code></pre>
<p>which yields the following output:</p>
<p><a href="https://postimg.org/image/5lp4cwjmb/" rel="nofollow"><img src="https://s21.postimg.org/wwafktmjb/qgb_1.jpg" alt="qgb-1.jpg"></a></p>
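<p>If you'd rather not involve <code>QFile</code>/<code>QTextStream</code> just to read a text file, the helper can also be written with plain Python I/O; this equivalent sketch is not PyQt-specific and can be tested without a GUI:</p>

```python
def get_stylesheet(path):
    # plain-Python equivalent of the QFile/QTextStream helper above
    with open(path, encoding="utf-8") as f:
        return f.read()

# quick self-check with a throwaway file
import os
import tempfile

qss = "QGroupBox { border: 1px solid black; }"
with tempfile.NamedTemporaryFile("w", suffix=".qss", delete=False) as tmp:
    tmp.write(qss)
print(get_stylesheet(tmp.name) == qss)  # True
os.remove(tmp.name)
```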
| 0 | 2016-10-16T16:13:08Z | [
"python",
"css",
"pyqt5"
] |
Remove html after some point in Beautilful Soup | 40,043,715 | <p>I have a trouble. My aim is to parse the data until some moment. Then, I want to stop parsing.</p>
<pre><code> <span itemprop="address">
Some address
</span>
<i class="fa fa-signal">
</i>
...
</p>
</div>
</div>
<div class="search_pagination" id="pagination">
<ul class="pagination">
</ul>
</div>
</div>
</div>
</div>
<div class="col-sm-3">
<div class="panel" itemscope="" itemtype="http://schema.org/WPSideBar">
<h2 class="heading_a" itemprop="name">
Top-10 today
</h2> #a lot of tags after that moment
</code></pre>
<p>I want to get all the values from <code><span itemprop="address"></code> (there are a lot of them before) until the moment <code>Top-10 today</code>.</p>
| 2 | 2016-10-14T12:54:49Z | 40,043,941 | <p>You can actually let <code>BeautifulSoup</code> <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/#parsing-only-part-of-a-document" rel="nofollow">parse only the tags you are interested in via <code>SoupStrainer</code></a>:</p>
<pre><code>from bs4 import BeautifulSoup, SoupStrainer
only_addresses = SoupStrainer("span", itemprop="address")
soup = BeautifulSoup(html_doc, "html.parser", parse_only=only_addresses)
</code></pre>
<p>If you though have some "addresses" before the "Top-10 today" and some after but you are interested in those coming before it, you can make a custom <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/#a-function" rel="nofollow">searching function</a>:</p>
<pre><code>def search_addresses(tag):
    return tag.name == "span" and tag.get("itemprop") == "address" and \
        tag.find_next("h2", text=lambda text: text and "Top-10 today" in text)

addresses = soup.find_all(search_addresses)
</code></pre>
<p>It does not look trivial, but the idea is simple - we are using <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/#find-all-next-and-find-next" rel="nofollow"><code>find_next()</code></a> for every "address" to check if "Top-10 today" heading exists after it.</p>
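<p>If BeautifulSoup is not available, the "collect until the heading" idea can also be sketched with the standard library's <code>html.parser</code>. This is a simplified alternative: for brevity it stops at the first <code>h2</code> rather than matching the "Top-10 today" text specifically:</p>

```python
from html.parser import HTMLParser

class AddressCollector(HTMLParser):
    """Collect text of <span itemprop="address"> tags until the first <h2>."""
    def __init__(self):
        super().__init__()
        self.in_address = False
        self.stopped = False
        self.addresses = []

    def handle_starttag(self, tag, attrs):
        if self.stopped:
            return
        if tag == "h2":
            self.stopped = True
        elif tag == "span" and dict(attrs).get("itemprop") == "address":
            self.in_address = True
            self.addresses.append("")

    def handle_endtag(self, tag):
        if tag == "span":
            self.in_address = False

    def handle_data(self, data):
        if self.in_address and not self.stopped:
            self.addresses[-1] += data

html = ('<span itemprop="address">First address</span>'
        '<span itemprop="address">Second address</span>'
        '<h2>Top-10 today</h2>'
        '<span itemprop="address">After heading</span>')
parser = AddressCollector()
parser.feed(html)
print(parser.addresses)  # ['First address', 'Second address']
```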
| 1 | 2016-10-14T13:05:50Z | [
"python",
"html",
"beautifulsoup"
] |
print random data for each millisecond python | 40,043,749 | <p>I want to print random data ranging from -1 to 1 in csv file for each millisecond using Python. I started with to print for each second and it worked. But, I am facing difficulty with printing random data for each millisecond. I want the timestamp to be in UNIX epoch format like "1476449030.55676" (for milliseconds, decimal point is not required)</p>
<pre><code>import csv
import datetime
import random
import sys
import time

tstep = datetime.timedelta(milliseconds=1)
tnext = datetime.datetime.now() + tstep
NumberOfReadings = 10  # 10 values (1 value for 1 millisecond)
i = 0
f = open(sys.argv[1], 'w+')
try:
    writer = csv.writer(f)
    while i < NumberOfReadings:
        writer.writerow((random.uniform(-1, 1), time.time()))
        tdiff = tnext - datetime.datetime.now()
        time.sleep(float(tdiff.total_seconds()/1000))
        tnext = tnext + tstep
        i = i + 1
finally:
    f.close()
</code></pre>
| 1 | 2016-10-14T12:56:18Z | 40,044,008 | <p>UPD: <code>time.sleep()</code> accepts its argument in seconds, so you don't need to divide it by 1000. After fixing this, my output looks like this:</p>
<pre><code>0.18153176446804853,1476466290.720721
-0.9331178681567136,1476466290.721784
-0.37142653326337327,1476466290.722779
0.1397040393287503,1476466290.723766
0.7126280853504974,1476466290.724768
-0.5367844384018245,1476466290.725762
0.44284645253432786,1476466290.726747
-0.2914685960956531,1476466290.727744
-0.40353712249981943,1476466290.728778
0.035369003158632895,1476466290.729771
</code></pre>
<p>Which is as good as it gets, given the precision of time.sleep and other time functions.</p>
<p>Here's a stripped-down version, which prints timestamps to stdout every millisecond:</p>
<pre><code>import time

tstep = 0.001
tnext = time.time() + tstep
NumberOfReadings = 10  # 10 values (1 value for 1 millisecond)
for i in range(NumberOfReadings):
    now = time.time()
    print(now)
    time.sleep(tnext - now)
    tnext += tstep
</code></pre>
<p>================================================</p>
<p>This is the problem:</p>
<pre><code>float(tdiff.total_seconds()/1000)
</code></pre>
<p>Note that <code>total_seconds()</code> already returns a float, so this is float division either way. The real issue is the units: <code>time.sleep()</code> expects seconds, and <code>tdiff.total_seconds()</code> is already in seconds, so dividing by 1000 makes every sleep 1000 times shorter than intended.
Instead, pass the value through unchanged: </p>
<pre><code>time.sleep(tdiff.total_seconds())
</code></pre>
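<p>A quick units sanity check makes this concrete: <code>timedelta.total_seconds()</code> is already expressed in seconds, which is exactly the unit <code>time.sleep()</code> expects:</p>

```python
import datetime

tstep = datetime.timedelta(milliseconds=1)
print(tstep.total_seconds())         # 0.001 -- one millisecond, already in seconds
print(tstep.total_seconds() / 1000)  # 1e-06 -- a microsecond: 1000x too short a sleep
```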
| 0 | 2016-10-14T13:08:36Z | [
"python",
"csv",
"printing",
"milliseconds"
] |
How to get into the python shell within Odoo 8 environment? | 40,043,813 | <p>I would like to use the Odoo Framework from the shell.</p>
<p>I installed the module <a href="https://www.odoo.com/apps/modules/8.0/shell/" rel="nofollow">"Shell command backport"</a> (technical name: <code>shell</code>), but I couldn't make it work.</p>
<pre><code>$ ./odoo.py shell --addons-path=/opt/odoo_8/src/linked-addons -d database_name
Traceback (most recent call last):
  File "./odoo.py", line 160, in &lt;module&gt;
    main()
  File "./odoo.py", line 157, in main
    openerp.cli.main()
  File "/opt/odoo_8/src/OCA/OCB/openerp/cli/__init__.py", line 58, in main
    for m in module.get_modules():
  File "/opt/odoo_8/src/OCA/OCB/openerp/modules/module.py", line 351, in get_modules
    plist.extend(listdir(ad))
  File "/opt/odoo_8/src/OCA/OCB/openerp/modules/module.py", line 346, in listdir
    return map(clean, filter(is_really_module, os.listdir(dir)))
OSError: [Errno 2] No such file or directory: '/opt/odoo8/openerp/addons'
</code></pre>
<p>Where is defined the path <code>/opt/odoo8/openerp/addons</code>? I checked <a href="http://stackoverflow.com/questions/28054026/where-openerp-odoo-finds-the-modules-path">this similar question</a> as well. </p>
<p>If I don't write the addons path in the command the error appears again.</p>
<p>I read the answer of <a href="http://stackoverflow.com/questions/34293139/how-to-parse-odoo-in-python-console/34293569">this other question</a>, I tried the module and the script option but they didn't work. What should I do to make it work? Any hint would help.</p>
| 0 | 2016-10-14T12:59:27Z | 40,051,325 | <p>Check the <code>.openerp_serverrc</code> file for the user you are executing the command as. You will find this file in that user's home directory. It may contain a reference to the <code>addons_path</code>. The path it appears to be looking for, <code>/opt/odoo8/openerp/addons</code>, differs from what you have specified in your command. I would check your config files.</p>
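<p>For reference, the relevant line in the server config (either <code>~/.openerp_serverrc</code> or a file passed with <code>-c</code>) looks something like this; the paths below are illustrative and should match your actual checkout:</p>

```ini
[options]
addons_path = /opt/odoo_8/src/OCA/OCB/addons,/opt/odoo_8/src/linked-addons
```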
| 2 | 2016-10-14T20:15:28Z | [
"python",
"shell",
"path",
"odoo-8"
] |
Find the last window created in Maya? | 40,043,917 | <p>I was wondering if there is any way to find the name of the last window created in Maya, knowing that I can't add any information to the window itself before that... I checked in both the <code>cmds</code> and API but couldn't find anything. Maybe in PyQt but I don't know much about it.</p>
<p>I'm looking for any solution. Thanks</p>
| 0 | 2016-10-14T13:04:34Z | 40,044,189 | <p>you can work with something like a close callback, save the needed information and restore it again </p>
<pre><code>def restoreLayout(self):
    """
    Restore the layout of each widget
    """
    settings = self.settings
    try:
        self.restoreGeometry(settings.value("geometry").toByteArray())
        self.restoreState(settings.value("windowState").toByteArray())
        size = settings.value('fontSize').toFloat()[0]
        self.setFontSize(size)
    except:
        pass

def saveLayout(self):
    """
    Save the layout of each widget
    Save the main window id to your data base
    """
    settings = self.settings
    settings.setValue("geometry", self.saveGeometry())
    settings.setValue("windowState", self.saveState())
    settings.setValue("fontSize", app.font().pointSize())

def closeEvent(self, event):
    QtGui.QMainWindow.closeEvent(self, event)
    self.saveLayout()
<p>a simple case/idea to save tha main win_id and a child button_id:</p>
<pre><code>from functools import partial
import json

import maya.cmds as cmds


def close_ui(*args):
    win_id = args[0]
    if cmds.window(win_id, exists=True):
        cmds.deleteUI(win_id, window=True)
    with open('dataBase/ui/uidata.json', 'w') as outfile:
        json.dump(args, outfile)


win = {}
win["main_win"] = cmds.window()
cmds.columnLayout()
cmds.text(label='closing it')
win["btn"] = cmds.button(label='Close')
cmds.button(win["btn"], e=True, command=partial(close_ui, win["main_win"], win["btn"]))
cmds.showWindow(win["main_win"])
</code></pre>
| 1 | 2016-10-14T13:18:12Z | [
"python",
"maya"
] |
Find the last window created in Maya? | 40,043,917 | <p>I was wondering if there is any way to find the name of the last window created in Maya, knowing that I can't add any information to the window itself before that... I checked in both the <code>cmds</code> and API but couldn't find anything. Maybe in PyQt but I don't know much about it.</p>
<p>I'm looking for any solution. Thanks</p>
| 0 | 2016-10-14T13:04:34Z | 40,046,747 | <p>Here is what I came up with, it's surely not the "cleanest" solution but it works!</p>
<pre><code># List all the currently opened windows
uisBefore = cmds.lsUI (wnd = True)
# Execute the function which may or may not create a window
func(*args, **kwargs)
# List all the opened windows again
uisAfter = cmds.lsUI (wnd = True)
# Find all the windows that were opened after executing func()
newUIs = [ui for ui in uisAfter if ui not in uisBefore]
</code></pre>
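<p>The pattern in this answer is just a before/after set difference. Stripped of the Maya calls, it can be tested anywhere (the window names below are made up for illustration):</p>

```python
def new_items(before, after):
    """Items present in `after` but not in `before`, original order preserved."""
    seen = set(before)
    return [item for item in after if item not in seen]

uis_before = ["MayaWindow", "scriptEditorPanel1Window"]
uis_after = ["MayaWindow", "scriptEditorPanel1Window", "myToolWindow"]
print(new_items(uis_before, uis_after))  # ['myToolWindow']
```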
| 0 | 2016-10-14T15:23:34Z | [
"python",
"maya"
] |
Find the last window created in Maya? | 40,043,917 | <p>I was wondering if there is any way to find the name of the last window created in Maya, knowing that I can't add any information to the window itself before that... I checked in both the <code>cmds</code> and API but couldn't find anything. Maybe in PyQt but I don't know much about it.</p>
<p>I'm looking for any solution. Thanks</p>
| 0 | 2016-10-14T13:04:34Z | 40,051,380 | <p>If you create a window with the <code>window</code> command, you'll get back the name of the window you just created:</p>
<pre><code>import maya.cmds as cmds

w = cmds.window()
c = cmds.columnLayout()


def who_am_i(*_):
    print "window is", w


b = cmds.button('push', c=who_am_i)
cmds.showWindow(w)
</code></pre>
<p>If for some reason you don't own the code that creates the window:</p>
<pre><code>existing_windows = set(cmds.lsUI(type='window'))
# make your window here
new_windows = list(set(cmds.lsUI(type='window')) - existing_windows)
</code></pre>
| 1 | 2016-10-14T20:19:32Z | [
"python",
"maya"
] |
Launch python from specific exe in PHP | 40,043,954 | <p>I've a php file for launch my python file with default version (2.7):</p>
<pre><code><?php
echo '<pre>';
system("cmd /c D:\web\folder\easypy.py");
echo '</pre>';
?>
</code></pre>
<p>But I developped a python file more strong who work with other python (3.5). So I need to use the exe of the version 3.5 in my php.</p>
<p>I tried this :</p>
<pre><code><?php
echo '<pre>';
system("cmd /c D:\web\folder\strongpy.py D:\folder1\folder2\python35\python.exe");
echo '</pre>';
?>
</code></pre>
<p>But it not works.</p>
| -2 | 2016-10-14T13:06:13Z | 40,044,186 | <p>system in php looks like this:</p>
<pre><code>system("command", $retval); //retval is optional
</code></pre>
<p>command in your case is:</p>
<pre><code> "cmd /c D:/folder1/folder2/python35/python.exe D:/web/folder/strongpy.py"
</code></pre>
<p>the final php code looks like this:</p>
<pre><code>system("cmd /c D:/folder1/folder2/python35/python.exe D:/web/folder/strongpy.py");
</code></pre>
| 1 | 2016-10-14T13:17:53Z | [
"php",
"python",
"cmd"
] |
How do I call an Excel VBA script using xlwings v0.10 | 40,044,090 | <p>I used to use the info in this question to run a VBA script that does some basic formatting after I run my python code.</p>
<p><a href="http://stackoverflow.com/questions/30308455/how-do-i-call-an-excel-macro-from-python-using-xlwings">How do I call an Excel macro from Python using xlwings?</a></p>
<p>Specifically I used the first update.</p>
<pre><code>from xlwings import Workbook, Application
wb = Workbook(...)
Application(wb).xl_app.Run("your_macro")
</code></pre>
<p>Now I'm using v0.10.0 of xlwings and this code no longer works.</p>
<p>When I try the suggested new code for v0.10.0:</p>
<pre><code>wb.app.macro('your_macro')
</code></pre>
<p>Python returns an object: </p>
<pre><code><xlwings.main.Macro at 0x92d3198>
</code></pre>
<p>and my macro isn't run in Excel.</p>
<p>The documentation (<a href="http://docs.xlwings.org/en/stable/api.html#xlwings.App.macro" rel="nofollow">http://docs.xlwings.org/en/stable/api.html#xlwings.App.macro</a>) has an example that is a custom function but I have a script that does several things in Excel (formats the data I output from python, adds some formulas in the sheet, etc.) that I want to run.</p>
<p>I'm sure I'm missing something basic here.</p>
<p><strong>Update</strong>
Based on Felix Zumstein's suggestion, I tried:</p>
<pre><code>import xlwings as xw
xlfile = 'model.xlsm'
wb = xw.Book(xlfile)
wb.macro('your_macro')
</code></pre>
<p>This returns the same thing as wb.app.macro('your_macro'):</p>
<pre><code><xlwings.main.Macro at 0x92d05888>
</code></pre>
<p>and no VBA script run inside Excel.</p>
| 1 | 2016-10-14T13:12:19Z | 40,058,238 | <p>You need to use <code>Book.macro</code>. As your link to the docs says, <code>App.macro</code> is only for macros that are not part of a workbook (i.e. addins). So use:</p>
<pre><code>wb.macro('your_macro')
</code></pre>
| 0 | 2016-10-15T11:04:39Z | [
"python",
"xlwings"
] |
Kivy object doesn't render when defined in .kv file | 40,044,117 | <p>I'm developing an app in kivy and currently trying to figure out why I can get an object to render when I add it in Python but not in the .kv file.</p>
<p>This works fine, rendering the background image with a switch on top of it.</p>
<pre><code>class LoginScreen(Screen):
    def __init__(self, **kwargs):
        super(LoginScreen, self).__init__(**kwargs)
        self.layout = BoxLayout(orientation='vertical')
        self.add_widget(self.layout)
        self.layout.add_widget(defSwitch())

    def on_touch_up(self, touch):
        if self.collide_point(*touch.pos):
            self.parent.current = 'data'


class defSwitch(Switch):
    active = False
</code></pre>
<p>Where the .kv file is:</p>
<pre><code><LoginScreen>:
    imgname: './images/' + 'login.jpg'
    canvas.before:
        Rectangle:
            pos: self.pos
            size: self.size
            source: self.imgname
</code></pre>
<p>This doesn't render anything but the background image:</p>
<pre><code>class LoginScreen(Screen):
    def on_touch_up(self, touch):
        if self.collide_point(*touch.pos):
            self.parent.current = 'data'


class defSwitch(Switch):
    active = False
</code></pre>
<p>where in this case the .kv file has:</p>
<pre><code><LoginScreen>:
    imgname: './images/' + 'login.jpg'
    canvas.before:
        Rectangle:
            pos: self.pos
            size: self.size
            source: self.imgname
    BoxLayout:
        orientation: 'vertical'
        defSwitch:
<p>If I replace defSwitch: in the .kv file by a default Switch object, then it renders fine. Why can't I use a custom object?</p>
| 0 | 2016-10-14T13:14:07Z | 40,053,001 | <p>If you want to use a custom class created in <code>.py</code> file with kv language, you need to set it in <code>.kv</code> file too:</p>
<pre><code><defSwitch>:  # class rule

<LoginScreen>:
    ...
    defSwitch:  # object
</code></pre>
<p>so that it knows what to look for.</p>
<p>Also I'd like to recommend you using camel case in kv language as there are <a href="http://stackoverflow.com/a/40030291/5994041">situations</a> in which the parser won't recognize the phrase as a widget.</p>
| 1 | 2016-10-14T22:43:20Z | [
"python",
"python-2.7",
"kivy",
"python-2.x",
"kivy-language"
] |
Substituting variables into strings | 40,044,171 | <p>Currently I know that we can do something like</p>
<pre><code>"This is a string with var %(sub_var)s that will be substituted" % ({'sub_var': 'a1234'})
</code></pre>
<p>That will sub in the <code>sub_var</code> variable into the string. However is there a way to define a string like this:</p>
<pre><code>a = "This is a string with var %(sub_var)s that will be substituted"
</code></pre>
<p>Without defining that <code>sub_var</code> until the string <code>a</code> is actually used? I have some variables that can change depending on the condition, and I don't want to have to keep retyping <code>a</code>.</p>
<p>Thank you!</p>
| -1 | 2016-10-14T13:17:12Z | 40,044,247 | <p>Did you try exactly what you said?</p>
<pre><code>>>> a = "This is a string with var %(sub_var)s that will be substituted"
>>> a
'This is a string with var %(sub_var)s that will be substituted'
>>> a % {'sub_var': 'c'}
'This is a string with var c that will be substituted'
</code></pre>
| 3 | 2016-10-14T13:20:48Z | [
"python",
"string",
"python-2.7"
] |
Substituting variables into strings | 40,044,171 | <p>Currently I know that we can do something like</p>
<pre><code>"This is a string with var %(sub_var)s that will be substituted" % ({'sub_var': 'a1234'})
</code></pre>
<p>That will sub in the <code>sub_var</code> variable into the string. However is there a way to define a string like this:</p>
<pre><code>a = "This is a string with var %(sub_var)s that will be substituted"
</code></pre>
<p>Without defining that <code>sub_var</code> until the string <code>a</code> is actually used? I have some variables that can change depending on the condition, and I don't want to have to keep retyping <code>a</code>.</p>
<p>Thank you!</p>
| -1 | 2016-10-14T13:17:12Z | 40,044,262 | <p>Store a function in the variable that can be used to make the strings:</p>
<pre><code>>>> a = 'Variable is {}'.format
>>> print(a('one'))
Variable is one
>>> print(a('two'))
Variable is two
</code></pre>
| 0 | 2016-10-14T13:21:20Z | [
"python",
"string",
"python-2.7"
] |
Python and flask, looping and printing data based on a variable | 40,044,172 | <p>I have a problem with looping in python and JSON based on a variable. What Im trying to do is to print just one row of JSON data based on one variable that is passed through.</p>
<p>Heres the code for the python file:</p>
<pre><code>@app.route('/<artist_name>/')
def artist(artist_name):
    list = [
        {'artist_name': 'Nirvana', 'album_name': 'Nevermind', 'date_of_release': '1993', 'img': 'https://upload.wikimedia.org/wikipedia/en/b/b7/NirvanaNevermindalbumcover.jpg'},
        {'artist_name': 'Eminem', 'album_name': 'Marshal Mathers LP', 'date_of_release': '2000', 'img': 'http://e.snmc.io/lk/f/l/6b09725acea3aefbafbf503a76885d0c/1612455.jpg'},
        {'artist_name': 'System of a Down', 'album_name': 'Toxicity', 'date_of_release': '2001', 'img': 'http://loudwire.com/files/2015/09/System-of-a-Down-Toxicity.png'},
        {'artist_name': 'Korn', 'album_name': 'Life is Peachy', 'date_of_release': '1996', 'img': 'http://loudwire.com/files/2014/01/Life-is-Peachy.jpg'}
    ]
    return render_template("artist.html", results=list, artist_name=artist_name)
</code></pre>
<p>And this is my artist.html template:</p>
<pre><code>{% if results %}
<ul>
{% for item in results if item.artist_name == item.artist_name %}
<li>{{ item.artist_name }}</li>
<li>{{ item.date_of_release}}</li>
{% endfor %}
</ul>
{% endif %}
</code></pre>
<p>What I'm trying to achieve is that when an "artist_name" variable is passed, only that artist's "artist_name" and "date_of_release" are printed. Instead it prints all four records rather than the one matching "artist_name". Can anybody help me with this? Thank you.</p>
| -1 | 2016-10-14T13:17:12Z | 40,044,425 | <p>my solution is less complicated, and working :)</p>
<pre><code>from flask import Flask,render_template,url_for
import json
app = Flask(__name__)
app.debug = False
@app.route('/<artist_name>/')
def artist(artist_name):
list = [
{'artist_name': 'Nirvana', 'album_name': 'Nevermind', 'date_of_release': '1993', 'img': 'https://upload.wikimedia.org/wikipedia/en/b/b7/NirvanaNevermindalbumcover.jpg'},
{'artist_name': 'Eminem', 'album_name': 'Marshal Mathers LP', 'date_of_release': '2000', 'img': 'http://e.snmc.io/lk/f/l/6b09725acea3aefbafbf503a76885d0c/1612455.jpg'},
{'artist_name': 'System of a Down', 'album_name': 'Toxicity', 'date_of_release': '2001', 'img': 'http://loudwire.com/files/2015/09/System-of-a-Down-Toxicity.png'},
{'artist_name': 'Korn', 'album_name': 'Life is Peachy', 'date_of_release': '1996', 'img': 'http://loudwire.com/files/2014/01/Life-is-Peachy.jpg'}
]
res = ""
for i in list:
if i.get('artist_name') == artist_name:
res = i
return render_template("artist.html", results=list, artist_name=res)
if __name__ == '__main__':
app.run(host='0.0.0.0')
</code></pre>
<p>template:</p>
<pre><code>{% if results %}
<ul>
<li>{{ artist_name["artist_name"] }}</li>
<li>{{ artist_name["date_of_release"]}}</li>
</ul>
{% endif %}
</code></pre>
<p>(artist_name in url is case sensitive)</p>
| 0 | 2016-10-14T13:30:04Z | [
"python",
"json",
"loops",
"variables",
"flask"
] |
Python and flask, looping and printing data based on a variable | 40,044,172 | <p>I have a problem with looping in Python over JSON data based on a variable. What I'm trying to do is print just one row of the JSON data based on one variable that is passed through.</p>
<p>Here's the code for the Python file:</p>
<pre><code>@app.route('/<artist_name>/')
def artist(artist_name):
list = [
{'artist_name': 'Nirvana', 'album_name': 'Nevermind', 'date_of_release': '1993', 'img': 'https://upload.wikimedia.org/wikipedia/en/b/b7/NirvanaNevermindalbumcover.jpg'},
{'artist_name': 'Eminem', 'album_name': 'Marshal Mathers LP', 'date_of_release': '2000', 'img': 'http://e.snmc.io/lk/f/l/6b09725acea3aefbafbf503a76885d0c/1612455.jpg'},
{'artist_name': 'System of a Down', 'album_name': 'Toxicity', 'date_of_release': '2001', 'img': 'http://loudwire.com/files/2015/09/System-of-a-Down-Toxicity.png'},
{'artist_name': 'Korn', 'album_name': 'Life is Peachy', 'date_of_release': '1996', 'img': 'http://loudwire.com/files/2014/01/Life-is-Peachy.jpg'}
]
return render_template("artist.html", results=list, artist_name=artist_name)
</code></pre>
<p>And this is my artist.html template:</p>
<pre><code>{% if results %}
<ul>
{% for item in results if item.artist_name == item.artist_name %}
<li>{{ item.artist_name }}</li>
<li>{{ item.date_of_release}}</li>
{% endfor %}
</ul>
{% endif %}
</code></pre>
<p>What I'm trying to achieve is that when an "artist_name" variable is passed, only that artist's "artist_name" and "date_of_release" are printed. Instead it prints all four records rather than the one matching "artist_name". Can anybody help me with this? Thank you.</p>
| -1 | 2016-10-14T13:17:12Z | 40,044,456 | <p>In the comments, H J potter and Markus have it right: <em>however</em> I'd be very careful about the URL encoding of your URL and about enforcing capitalisation of the artist name: is "System Of A Down" the same as "System of a Down", and does Flask always decode correctly?</p>
<pre><code>{% if results %}
<ul>
{% for item in results if item.artist_name == artist_name %}
<li>{{ item.artist_name }}</li>
<li>{{ item.date_of_release}}</li>
{% endfor %}
</ul>
{% endif %}
</code></pre>
<p>Have a look at <a href="http://stackoverflow.com/questions/497908/is-a-url-allowed-to-contain-a-space">Is a URL allowed to contain a space?</a> TLDR? "Shorter answer: no, you must encode a space; it is correct to encode a space as +, but only in the query string; in the path you must use %20."</p>
<p>I would also go further though and suggest a restructure of the code - try to keep business logic (i.e. deciding not to display or load all the data) in one file, the 'python', and the display logic (deciding to put them on bullet points or on divs) in the other (the 'template')</p>
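<p>To illustrate that restructuring, the lookup could live in the view function, with the template left to render whatever single record it receives. A sketch (<code>find_artist</code> is a made-up helper name, not part of the original code):</p>

```python
def find_artist(records, artist_name):
    """Return the first record matching artist_name, or None."""
    for rec in records:
        if rec["artist_name"] == artist_name:
            return rec
    return None

records = [
    {"artist_name": "Nirvana", "date_of_release": "1993"},
    {"artist_name": "Korn", "date_of_release": "1996"},
]

match = find_artist(records, "Korn")
print(match)  # {'artist_name': 'Korn', 'date_of_release': '1996'}
```

The template then only needs <code>{{ artist.artist_name }}</code> and <code>{{ artist.date_of_release }}</code>, and the view can return a 404 when <code>find_artist</code> gives <code>None</code>.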
| 0 | 2016-10-14T13:31:39Z | [
"python",
"json",
"loops",
"variables",
"flask"
] |
select specific column from pandas MultiIndex dataframe | 40,044,260 | <p>I have a MultiIndex dataframe with 200 columns, and I would like to select a specific column from it. Suppose df is some part of my dataframe:</p>
<pre><code>df=
a b
l h l h l h l
cold hot hot cold cold hot hot
2009-01-01 01:00:00 0.1 0.9 0.4 0.29 0.15 0.6 0.3
2009-01-01 02:00:00 0.1 0.8 0.35 0.2 0.15 0.6 0.4
2009-01-01 03:00:00 0.12 0.7 0.3 0.23 0.23 0.8 0.3
2009-01-01 04:00:00 0.1 0.9 0.33 0.24 0.15 0.6 0.4
2009-01-01 05:00:00 0.17 0.9 0.41 0.23 0.18 0.75 0.4
</code></pre>
<p>I would like to select the values for the column [h, hot].</p>
<p>My output should be:</p>
<pre><code>df['h','hot']=
a b
2009-01-01 01:00:00 0.9 0.6
2009-01-01 02:00:00 0.8 0.6
2009-01-01 03:00:00 0.7 0.8
2009-01-01 04:00:00 0.9 0.6
2009-01-01 05:00:00 0.9 0.75
</code></pre>
<p>I would appreciate it if someone could guide me on how to select that.</p>
<p>Thank you in advance.</p>
| 1 | 2016-10-14T13:21:15Z | 40,044,656 | <p>Try this:</p>
<pre><code>dataframe= pd.DataFrame()
dataframe["temp"] = df["b"]["h"]["hot"]
</code></pre>
<p><code>df</code> is your dataframe.</p>
| 0 | 2016-10-14T13:40:36Z | [
"python",
"pandas"
] |
select specific column from pandas MultiIndex dataframe | 40,044,260 | <p>I have a MultiIndex dataframe with 200 columns, and I would like to select a specific column from it. Suppose df is some part of my dataframe:</p>
<pre><code>df=
a b
l h l h l h l
cold hot hot cold cold hot hot
2009-01-01 01:00:00 0.1 0.9 0.4 0.29 0.15 0.6 0.3
2009-01-01 02:00:00 0.1 0.8 0.35 0.2 0.15 0.6 0.4
2009-01-01 03:00:00 0.12 0.7 0.3 0.23 0.23 0.8 0.3
2009-01-01 04:00:00 0.1 0.9 0.33 0.24 0.15 0.6 0.4
2009-01-01 05:00:00 0.17 0.9 0.41 0.23 0.18 0.75 0.4
</code></pre>
<p>I would like to select the values for the column [h, hot].</p>
<p>My output should be:</p>
<pre><code>df['h','hot']=
a b
2009-01-01 01:00:00 0.9 0.6
2009-01-01 02:00:00 0.8 0.6
2009-01-01 03:00:00 0.7 0.8
2009-01-01 04:00:00 0.9 0.6
2009-01-01 05:00:00 0.9 0.75
</code></pre>
<p>I would appreciate it if someone could guide me on how to select that.</p>
<p>Thank you in advance.</p>
| 1 | 2016-10-14T13:21:15Z | 40,044,994 | <p>For multi-index slicing as you desire, the columns need to be sorted first using <code>sort_index(axis=1)</code>; you can then select the cols of interest without error:</p>
<pre><code>In [12]:
df = df.sort_index(axis=1)
df['a','h','hot']
Out[12]:
0
2009-01-01 01:00:00 0.9
2009-01-01 02:00:00 0.8
2009-01-01 03:00:00 0.7
2009-01-01 04:00:00 0.9
2009-01-01 05:00:00 0.9
Name: (a, h, hot), dtype: float64
</code></pre>
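<p>If the goal is the <code>(h, hot)</code> slice under <em>every</em> top-level key at once (both <code>a</code> and <code>b</code>, as in the desired output), a cross-section on the inner column levels is one option. A sketch with a small made-up frame standing in for the real data:</p>

```python
import numpy as np
import pandas as pd

# hypothetical stand-in for the question's frame: two top-level keys,
# each with (l, cold) and (h, hot) sub-columns
cols = pd.MultiIndex.from_tuples([
    ("a", "l", "cold"), ("a", "h", "hot"),
    ("b", "l", "cold"), ("b", "h", "hot"),
])
df = pd.DataFrame(np.arange(8).reshape(2, 4), columns=cols)

# cross-section: match ('h', 'hot') on column levels 1 and 2, keep level 0
out = df.xs(("h", "hot"), axis=1, level=[1, 2])
print(out.columns.tolist())  # ['a', 'b']
```

An alternative spelling, <code>df.loc[:, pd.IndexSlice[:, 'h', 'hot']]</code> (after <code>sort_index(axis=1)</code>), keeps the full column MultiIndex instead of dropping the matched levels.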
| 1 | 2016-10-14T13:56:22Z | [
"python",
"pandas"
] |
How to calculate the Kolmogorov-Smirnov statistic between two weighted samples | 40,044,375 | <p>Let's say that we have two samples <code>data1</code> and <code>data2</code> with their respective weights <code>weight1</code> and <code>weight2</code> and that we want to calculate the Kolmogorov-Smirnov statistic between the two weighted samples.</p>
<p>The way we do that in python follows:</p>
<pre><code>def ks_w(data1,data2,wei1,wei2):
ix1=np.argsort(data1)
ix2=np.argsort(data2)
wei1=wei1[ix1]
wei2=wei2[ix2]
data1=data1[ix1]
data2=data2[ix2]
d=0.
fn1=0.
fn2=0.
j1=0
j2=0
j1w=0.
j2w=0.
while(j1<len(data1))&(j2<len(data2)):
d1=data1[j1]
d2=data2[j2]
w1=wei1[j1]
w2=wei2[j2]
if d1<=d2:
j1+=1
j1w+=w1
fn1=(j1w)/sum(wei1)
if d2<=d1:
j2+=1
j2w+=w2
fn2=(j2w)/sum(wei2)
if abs(fn2-fn1)>d:
d=abs(fn2-fn1)
return d
</code></pre>
<p>where we just modify to our purpose the classical two-sample KS statistic as implemented in <em>Press, Flannery, Teukolsky, Vetterling - Numerical Recipes in C - Cambridge University Press - 1992 - pag.626</em>.</p>
<p>Our questions are:</p>
<ul>
<li>is anybody aware of any other way to do it?</li>
<li>is there any library in python/R/* that performs it?</li>
<li>what about the test? Does it exist or should we use a reshuffling procedure in order to evaluate the statistic?</li>
</ul>
| 1 | 2016-10-14T13:27:28Z | 40,059,727 | <p>Studying the <code>scipy.stats.ks_2samp</code> code, we were able to find a more efficient Python solution. We share it below in case anyone is interested:</p>
<pre><code>import numpy as np

def ks_w2(data1, data2, wei1, wei2):
    ix1 = np.argsort(data1)
    ix2 = np.argsort(data2)
    data1 = data1[ix1]
    data2 = data2[ix2]
    wei1 = wei1[ix1]
    wei2 = wei2[ix2]
    data = np.concatenate([data1, data2])
    cwei1 = np.hstack([0., np.cumsum(wei1) * 1. / np.sum(wei1)])
    cwei2 = np.hstack([0., np.cumsum(wei2) * 1. / np.sum(wei2)])
    cdf1we = cwei1[np.searchsorted(data1, data, side='right')]
    cdf2we = cwei2[np.searchsorted(data2, data, side='right')]
    return np.max(np.abs(cdf1we - cdf2we))
</code></pre>
<p>To evaluate the performance we performed the following test:</p>
<pre><code>import numpy as np

ds1 = np.random.rand(10000)
ds2 = np.random.randn(40000) + .2
we1 = np.random.rand(10000) + 1.
we2 = np.random.rand(40000) + 1.
</code></pre>
<p><code>ks_w2(ds1,ds2,we1,we2)</code> took 11.7ms on our machine, while <code>ks_w(ds1,ds2,we1,we2)</code> took 1.43s</p>
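<p>A quick sanity check of the approach (the function is restated here so the snippet runs on its own): with all weights equal, the weighted statistic must reduce to the ordinary two-sample KS statistic, and rescaling all the weights must leave it unchanged.</p>

```python
import numpy as np

def ks_w2(data1, data2, wei1, wei2):
    # weighted two-sample KS statistic, restated from the answer above
    ix1, ix2 = np.argsort(data1), np.argsort(data2)
    data1, wei1 = data1[ix1], wei1[ix1]
    data2, wei2 = data2[ix2], wei2[ix2]
    data = np.concatenate([data1, data2])
    cwei1 = np.hstack([0., np.cumsum(wei1) / np.sum(wei1)])
    cwei2 = np.hstack([0., np.cumsum(wei2) / np.sum(wei2)])
    cdf1we = cwei1[np.searchsorted(data1, data, side='right')]
    cdf2we = cwei2[np.searchsorted(data2, data, side='right')]
    return np.max(np.abs(cdf1we - cdf2we))

d1 = np.array([1., 2., 3.])
d2 = np.array([2., 3., 4.])
ones = np.ones(3)

print(ks_w2(d1, d2, ones, ones))          # 1/3, the unweighted KS statistic
print(ks_w2(d1, d2, 5 * ones, 5 * ones))  # still 1/3: only relative weights matter
```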
| 0 | 2016-10-15T13:40:57Z | [
"python",
"scipy",
"kolmogorov-smirnov"
] |
oct2py in Anaconda/Spyder not recognizing octave | 40,044,448 | <p>Windows7</p>
<p>Anaconda/python ver 3.4</p>
<p>Octave ver 4.0.3</p>
<p>OCTAVE_EXECUTABLE = C:\Users\Heather\Octave-4.0.3\bin</p>
<p>Hi all, </p>
<p>I've been working a few days on trying to get oct2py working in Anaconda using Spyder. I was wondering if anyone could tell me the correct way to get it to work in Spyder on a windows machine? Basic setup maybe or maybe I'm using the wrong packages?</p>
<p>So far I've installed the oct2py package per the Anaconda Cloud using: </p>
<p>conda install -c conda-forge oct2py=3.5.9</p>
<p>In all the documentation for oct2py it mentioned needing to have Octave downloaded in order for oct2py to work. So from this page pypi.python.org/pypi/oct2py, it mentioned getting Octave from sourceforge at </p>
<p><a href="https://sourceforge.net/projects/octave/files/Octave%20Windows%20binaries/" rel="nofollow">https://sourceforge.net/projects/octave/files/Octave%20Windows%20binaries/</a> . </p>
<p>I downloaded the Octave 3.6.4 from there and a friend helped me to get the OCTAVE_EXECUTABLE in my environment variables pointing towards it. At this point I was able to type 'octave' in a command line and it would bring up an octave instance, but Spyder would never recognize I had octave installed. </p>
<p>ergo:</p>
<p>from oct2py import octave</p>
<p>Error: cannot import name octave</p>
<p>At this point I realized the sourceforge Octave said it was a supplemental package, so I uninstalled the Octave 3.6.4 and installed Octave 4.0.3 from <a href="http://www.gnu.org/software/octave/" rel="nofollow">http://www.gnu.org/software/octave/</a> for windows. Now Octave opens nicely when I click on the application but the command line does not recognize the term 'octave' which I feel is a step back. I looked at my env variables again and the new path for Octave was present in the system variables and I updated the OCTAVE_EXECUTABLE to point to the new version of Octave (with no whitespaces in the directory). But my computer even after full shutdown and restarts does not recognize 'octave' in the command line and Spyder still does not see that I have octave when I try running oct2py.Oct2Py().</p>
<p>So after all this I was wondering if anyone has gotten oct2py to work in Anaconda but especially using Spyder? How so? I am trying to get my python script to open and use a .m file to perform a function and output a matrix that will be used further in the script for computation. However if I can't even get it to recognize octave then I don't know how I'll get this finished.</p>
<p>Sample of code:</p>
<pre><code>from oct2py import Oct2Py
filename = 'filename'
oc = Oct2Py()
eph_matrix = oc.read_eph(filename)
print(eph_matrix) #nx25 matrix
</code></pre>
<p>I am hopeful if I can just get python to recognize Octave that I can get past the import line.</p>
<p>Any help would be very appreciated.</p>
| -1 | 2016-10-14T13:31:19Z | 40,045,238 | <p>The <code>OCTAVE_EXECUTABLE</code> or <code>OCTAVE</code> environment variables should point directly to the <em>executable</em> and not the folder that contains the executable. So you'll likely want to set it to </p>
<pre><code>OCTAVE_EXECUTABLE = C:\Users\Heather\Octave-4.0.3\bin\octave-cli.exe
</code></pre>
<p>Another option is to provide the executable as the first input to <code>Oct2Py</code>.</p>
<pre><code>from oct2py import Oct2Py
octave = Oct2Py('C:\Users\Heather\Octave-4.0.3\bin\octave-cli.exe')
</code></pre>
<p>Also, if you want to be able to run it from the Windows command prompt, you will want to add the folder containing the executables (<code>'C:\Users\Heather\Octave-4.0.3\bin'</code>) to the <code>PATH</code> environment variable.</p>
| 0 | 2016-10-14T14:08:44Z | [
"python",
"octave",
"anaconda",
"spyder"
] |
I can not run a server on Django | 40,044,452 | <p>I have a problem: after the <code>python manage.py runserver</code> command I receive the following error, which I cannot solve:</p>
<pre><code>Unhandled exception in thread started by <function wrapper at 0xb6712e64>
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/django/utils/autoreload.py", line 229, in wrapper
fn(*args, **kwargs)
File "/usr/lib/python2.7/dist-packages/django/core/management/commands/runserver.py", line 107, in inner_run
autoreload.raise_last_exception()
File "/usr/lib/python2.7/dist-packages/django/utils/autoreload.py", line 252, in raise_last_exception
six.reraise(*_exception)
File "/usr/lib/python2.7/dist-packages/django/utils/autoreload.py", line 229, in wrapper
fn(*args, **kwargs)
File "/usr/lib/python2.7/dist-packages/django/__init__.py", line 18, in setup
apps.populate(settings.INSTALLED_APPS)
File "/usr/lib/python2.7/dist-packages/django/apps/registry.py", line 115, in populate
app_config.ready()
File "/usr/lib/python2.7/dist-packages/django/contrib/admin/apps.py", line 22, in ready
self.module.autodiscover()
File "/usr/lib/python2.7/dist-packages/django/contrib/admin/__init__.py", line 24, in autodiscover
autodiscover_modules('admin', register_to=site)
File "/usr/lib/python2.7/dist-packages/django/utils/module_loading.py", line 74, in autodiscover_modules
import_module('%s.%s' % (app_config.name, module_to_search))
File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
File "/usr/local/lib/python2.7/dist-packages/redactor/admin.py", line 3, in <module>
from redactor.widgets import JQueryEditor
ImportError: cannot import name JQueryEditor
</code></pre>
<p>How can I solve this problem? In many places the advice is to run the <code>sudo easy_install pip</code> command, but it did not help.</p>
| 0 | 2016-10-14T13:31:30Z | 40,044,511 | <p>Are you sure that the name of the widget class is correct?
I was checking django-redactor and django-redactoreditor and they don't have a class named "JQueryEditor".</p>
| 0 | 2016-10-14T13:34:28Z | [
"python",
"django"
] |
Python How to sort list in a list | 40,044,490 | <p>I'm trying to sort a list within a list. This is what I have. I would like to sort each inner name list into alphabetical order. I have tried using loops but to no avail. The smaller 'lists' inside happen to be strings, so I can't sort() them as lists. Error: 'str' object has no attribute 'sort'</p>
<pre><code>Names = [
    [['John'], ['Andrea', 'Regina', 'Candice'], ['Charlotte', 'Melanie'],
    ['Claudia', 'Lace', 'Karen'], ['Ronald', 'Freddy'],
['James', 'Luke', 'Ben', 'Nick']]
]
</code></pre>
<p>I would like it to be sorted into: </p>
<pre><code>[
    [['John'], ['Andrea', 'Candice', 'Regina'], ['Charlotte', 'Melanie'],
    ['Claudia', 'Karen', 'Lace'], ['Freddy', 'Ronald'],
['Ben', 'James', 'Luke', 'Nick']]
]
</code></pre>
<p>Thank you.
Edit: Sorry, I made a mistake in the previous code. I've since edited it.</p>
| -1 | 2016-10-14T13:33:23Z | 40,044,563 | <pre><code>Names = [sorted(sublist) for sublist in Names]
</code></pre>
<p>This is a list comprehension that takes each sublist of <code>Names</code>, sorts it, and then builds a new list out of those sorted lists. It then rebinds <code>Names</code> to that new list.</p>
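<p>For example, with a small nested list the comprehension leaves the outer order alone and alphabetises each inner list:</p>

```python
names = [["John"], ["Andrea", "Regina", "Candice"], ["Claudia", "Lace", "Karen"]]
names = [sorted(sublist) for sublist in names]
print(names)
# [['John'], ['Andrea', 'Candice', 'Regina'], ['Claudia', 'Karen', 'Lace']]
```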
| 4 | 2016-10-14T13:36:56Z | [
"python",
"string",
"list",
"sorting"
] |
Python How to sort list in a list | 40,044,490 | <p>I'm trying to sort a list within a list. This is what I have. I would like to sort each inner name list into alphabetical order. I have tried using loops but to no avail. The smaller 'lists' inside happen to be strings, so I can't sort() them as lists. Error: 'str' object has no attribute 'sort'</p>
<pre><code>Names = [
    [['John'], ['Andrea', 'Regina', 'Candice'], ['Charlotte', 'Melanie'],
    ['Claudia', 'Lace', 'Karen'], ['Ronald', 'Freddy'],
['James', 'Luke', 'Ben', 'Nick']]
]
</code></pre>
<p>I would like it to be sorted into: </p>
<pre><code>[
    [['John'], ['Andrea', 'Candice', 'Regina'], ['Charlotte', 'Melanie'],
    ['Claudia', 'Karen', 'Lace'], ['Freddy', 'Ronald'],
['Ben', 'James', 'Luke', 'Nick']]
]
</code></pre>
<p>Thank you.
Edit: Sorry, I made a mistake in the previous code. I've since edited it.</p>
| -1 | 2016-10-14T13:33:23Z | 40,044,666 | <p>The built-in list method <code>sort()</code> works in place on the sublists if you iterate over them:</p>
<pre><code>Names = [['John', 'Andrea', 'Regina', 'Candice', 'Charlotte', 'Melanie'], ['Claudia', 'Lace', 'Karen', 'Ronald', 'Freddy'], ['James', 'Luke', 'Ben', 'Nick']]
for sublist in Names:
sublist.sort()
</code></pre>
| 2 | 2016-10-14T13:40:55Z | [
"python",
"string",
"list",
"sorting"
] |
Iterating through json object in python | 40,044,504 | <p>I have a response from Foursquare that reads as follows</p>
<pre><code>response: {
suggestedFilters: {},
geocode: {},
headerLocation: "Harlem",
headerFullLocation: "Harlem",
headerLocationGranularity: "city",
query: "coffee",
totalResults: 56,
suggestedBounds: {},
groups: [{
type: "Recommended Places",
name: "recommended",
items: [{
reasons: {
count: 1,
items: [{
summary: "You've been here 6 times",
type: "social",
reasonName: "friendAndSelfCheckinReason",
count: 0
}]
},
venue: {
id: "4fdf5edce4b08aca4a462878",
name: "The Chipped Cup",
contact: {},
location: {},
categories: [],
verified: true,
stats: {},
url: "http://www.chippedcupcoffee.com",
price: {},
hasMenu: true,
rating: 8.9,
ratingColor: "73CF42",
ratingSignals: 274,
menu: {},
allowMenuUrlEdit: true,
beenHere: {},
hours: {},
photos: {},
venuePage: {},
storeId: "",
hereNow: {}
},
tips: [],
referralId: "e-0-4fdf5edce4b08aca4a462878-0"
},
]
</code></pre>
<p>If I type the following: </p>
<pre><code>for value in json_data['response']['groups'][0]:
print(value['name'])
</code></pre>
<p>I get a TypeError: string indices must be integers</p>
<p>I'm just wondering how to iterate through this response to get the names of businesses. </p>
<p>Thanks</p>
| 0 | 2016-10-14T13:34:13Z | 40,044,552 | <p>You went too far. The [0] is the first element of the groups</p>
<pre><code>for value in json_data['response']['groups']:
</code></pre>
<p>Or you need to actually parse the JSON data from a string with the <code>json.loads</code> function </p>
<p>Also, I think you want </p>
<pre><code>value['venue']['name']
</code></pre>
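<p>Putting both corrections together: iterate over the groups, then over each group's <code>items</code>, and read the nested <code>venue</code>. A minimal self-contained sketch, with a pared-down stand-in for the parsed Foursquare response:</p>

```python
# a pared-down stand-in for the parsed response
json_data = {
    "response": {
        "groups": [
            {"name": "recommended",
             "items": [{"venue": {"name": "The Chipped Cup"}}]}
        ]
    }
}

names = [item["venue"]["name"]
         for group in json_data["response"]["groups"]
         for item in group["items"]]
print(names)  # ['The Chipped Cup']
```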
| 1 | 2016-10-14T13:36:25Z | [
"python",
"json"
] |
Iterating through json object in python | 40,044,504 | <p>I have a response from Foursquare that reads as follows</p>
<pre><code>response: {
suggestedFilters: {},
geocode: {},
headerLocation: "Harlem",
headerFullLocation: "Harlem",
headerLocationGranularity: "city",
query: "coffee",
totalResults: 56,
suggestedBounds: {},
groups: [{
type: "Recommended Places",
name: "recommended",
items: [{
reasons: {
count: 1,
items: [{
summary: "You've been here 6 times",
type: "social",
reasonName: "friendAndSelfCheckinReason",
count: 0
}]
},
venue: {
id: "4fdf5edce4b08aca4a462878",
name: "The Chipped Cup",
contact: {},
location: {},
categories: [],
verified: true,
stats: {},
url: "http://www.chippedcupcoffee.com",
price: {},
hasMenu: true,
rating: 8.9,
ratingColor: "73CF42",
ratingSignals: 274,
menu: {},
allowMenuUrlEdit: true,
beenHere: {},
hours: {},
photos: {},
venuePage: {},
storeId: "",
hereNow: {}
},
tips: [],
referralId: "e-0-4fdf5edce4b08aca4a462878-0"
},
]
</code></pre>
<p>If I type the following: </p>
<pre><code>for value in json_data['response']['groups'][0]:
print(value['name'])
</code></pre>
<p>I get a TypeError: string indices must be integers</p>
<p>I'm just wondering how to iterate through this response to get the names of businesses. </p>
<p>Thanks</p>
| 0 | 2016-10-14T13:34:13Z | 40,044,610 | <p><code>json_data['response']['groups'][0]</code> is a dictionary. When you iterate over a dictionary, you are iterating over a list of keys, all of which are strings...so within the loop, <code>value</code> is a string.</p>
<p>So when you ask for <code>value['name']</code>, you are trying to index the string with <code>['name']</code>, which doesn't make any sense, hence the error.</p>
<p>I think you meant:</p>
<pre><code>for value in json_data['response']['groups']:
</code></pre>
| 0 | 2016-10-14T13:38:40Z | [
"python",
"json"
] |
python decode partial utf-8 byte array | 40,044,517 | <p>I'm getting data from a channel which is not aware of UTF-8 rules. So sometimes, when UTF-8 uses multiple bytes to encode one character and I try to convert part of the received data into text, I get an error during conversion. By the nature of the interface (a stream without any end) I'm not able to find out when the data are complete. Thus I need to handle partial UTF-8 decoding. Basically, I need to decode what I can and store the partial data. The stored partial data will be added as a prefix to the next data. My question is whether there is some neat function in Python to allow this. </p>
<p>[EDIT]
Just to assure you: I know about the function in <a href="https://docs.python.org/3/library/stdtypes.html#bytes.decode" rel="nofollow">docs.python</a></p>
<pre><code> bytes.decode(encoding="utf-8", errors="ignore")
</code></pre>
<p>but the issue is that it would not tell me where the error is, and so I cannot know how many bytes from the end I should keep.</p>
| 0 | 2016-10-14T13:34:39Z | 40,044,518 | <p>So far I have come up with this not-so-nice function:</p>
<pre><code>def decodeBytesUtf8Safe(toDec):
"""
decodes byte array in utf8 to string. It can handle case when end of byte array is
not complete thus making utf8 error. in such case text is translated only up to error.
Rest of byte array (from error to end) is returned as second parameter and can be
combined with next byte array and decoded next time.
:param toDec: bytes array to be decoded a(eg bytes("abc","utf8"))
:return:
1. decoded string
2. rest of byte array which could not be encoded due to error
"""
okLen = len(toDec)
outStr = ""
while(okLen>0):
try:
outStr = toDec[:okLen].decode("utf-8")
except UnicodeDecodeError as ex:
okLen -= 1
else:
break
return outStr,toDec[okLen:]
</code></pre>
<p>you can test it using script:</p>
<pre><code>import sys

def test(arr):
expStr = arr.decode("utf-8")
errorCnt = 0
for i in range(len(arr)+1):
decodedTxt, rest = decodeBytesUtf8Safe(arr[0:i])
decodedTxt2, rest2 = decodeBytesUtf8Safe(rest+arr[i:])
recvString = decodedTxt+decodedTxt2
sys.stdout.write("%02d ; %s (%s - %s )\n"%(i,recvString,decodedTxt, decodedTxt2))
if(expStr != recvString):
print("Error when divided at %i"%(i))
errorCnt += 1
return errorCnt
testUtf8 = bytes([0x61, 0xc5, 0xbd, 0x6c, 0x75, 0xc5, 0xa5, 0x6f, 0x75, 0xc4, 0x8d, 0x6b, 0xc3, 0xbd, 0x20, 0x6b, 0xc5, 0xaf, 0xc5, 0x88])
err = test(testUtf8)
print("total errors %i"%(err))
</code></pre>
<p>it shall give you the output:</p>
<pre><code>00 ; aŽluťoučký kůň ( - aŽluťoučký kůň )
01 ; aŽluťoučký kůň (a - Žluťoučký kůň )
02 ; aŽluťoučký kůň (a - Žluťoučký kůň )
03 ; aŽluťoučký kůň (aŽ - luťoučký kůň )
04 ; aŽluťoučký kůň (aŽl - uťoučký kůň )
05 ; aŽluťoučký kůň (aŽlu - ťoučký kůň )
06 ; aŽluťoučký kůň (aŽlu - ťoučký kůň )
07 ; aŽluťoučký kůň (aŽluť - oučký kůň )
08 ; aŽluťoučký kůň (aŽluťo - učký kůň )
09 ; aŽluťoučký kůň (aŽluťou - čký kůň )
10 ; aŽluťoučký kůň (aŽluťou - čký kůň )
11 ; aŽluťoučký kůň (aŽluťouč - ký kůň )
12 ; aŽluťoučký kůň (aŽluťoučk - ý kůň )
13 ; aŽluťoučký kůň (aŽluťoučk - ý kůň )
14 ; aŽluťoučký kůň (aŽluťoučký -  kůň )
15 ; aŽluťoučký kůň (aŽluťoučký  - kůň )
16 ; aŽluťoučký kůň (aŽluťoučký k - ůň )
17 ; aŽluťoučký kůň (aŽluťoučký k - ůň )
18 ; aŽluťoučký kůň (aŽluťoučký ků - ň )
19 ; aŽluťoučký kůň (aŽluťoučký ků - ň )
20 ; aŽluťoučký kůň (aŽluťoučký kůň -  )
total errors 0
</code></pre>
| 0 | 2016-10-14T13:34:39Z | [
"python",
"utf-8"
] |
python decode partial utf-8 byte array | 40,044,517 | <p>I'm getting data from a channel which is not aware of UTF-8 rules. So sometimes, when UTF-8 uses multiple bytes to encode one character and I try to convert part of the received data into text, I get an error during conversion. By the nature of the interface (a stream without any end) I'm not able to find out when the data are complete. Thus I need to handle partial UTF-8 decoding. Basically, I need to decode what I can and store the partial data. The stored partial data will be added as a prefix to the next data. My question is whether there is some neat function in Python to allow this. </p>
<p>[EDIT]
Just to assure you: I know about the function in <a href="https://docs.python.org/3/library/stdtypes.html#bytes.decode" rel="nofollow">docs.python</a></p>
<pre><code> bytes.decode(encoding="utf-8", errors="ignore")
</code></pre>
<p>but the issue is that it would not tell me where the error is, and so I cannot know how many bytes from the end I should keep.</p>
| 0 | 2016-10-14T13:34:39Z | 40,046,018 | <p>You can call the codecs module to the rescue. It gives you an incremental decoder that does exactly what you need:</p>
<pre><code>import codecs
dec = codecs.getincrementaldecoder('utf8')()
</code></pre>
<p>You can feed it with <code>dec.decode(input)</code> and, when it is over, optionally add a <code>dec.decode(bytes(), True)</code> to force it to clean up any stored state.</p>
<p>The test becomes:</p>
<pre><code>>>> def test(arr):
dec = codecs.getincrementaldecoder('utf8')()
recvString = ""
for i in range(len(arr)):
recvString += dec.decode(arr[i:i+1])
sys.stdout.write("%02d : %s\n" % (i, recvString))
recvString += dec.decode(bytes(), True) # will choke on incomplete input...
return recvString == arr.decode('utf8')
>>> testUtf8 = bytes([0x61, 0xc5, 0xbd, 0x6c, 0x75, 0xc5, 0xa5, 0x6f, 0x75, 0xc4, 0x8d, 0x6b, 0xc3, 0xbd, 0x20, 0x6b, 0xc5, 0xaf, 0xc5, 0x88])
>>> test(testUtf8)
00 : a
01 : a
02 : aŽ
03 : aŽl
04 : aŽlu
05 : aŽlu
06 : aŽluť
07 : aŽluťo
08 : aŽluťou
09 : aŽluťou
10 : aŽluťouč
11 : aŽluťoučk
12 : aŽluťoučk
13 : aŽluťoučký
14 : aŽluťoučký
15 : aŽluťoučký k
16 : aŽluťoučký k
17 : aŽluťoučký ků
18 : aŽluťoučký ků
19 : aŽluťoučký kůň
True
</code></pre>
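<p>In a real read loop the same decoder object simply persists across chunks, so no manual byte buffering is needed. A self-contained sketch, with an in-memory chunk list standing in for the socket or file:</p>

```python
import codecs

# 'aŽluťoučký' as UTF-8, deliberately split in the middle of multi-byte characters
chunks = [b"a\xc5", b"\xbdlu\xc5\xa5ou\xc4", b"\x8dk\xc3\xbd"]

dec = codecs.getincrementaldecoder("utf-8")()
text = ""
for chunk in chunks:           # e.g. for chunk in iter(lambda: sock.recv(4096), b"")
    text += dec.decode(chunk)  # trailing partial bytes are kept inside `dec`
text += dec.decode(b"", True)  # flush; raises if the stream ended mid-character

print(text)  # aŽluťoučký
```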
| 3 | 2016-10-14T14:46:14Z | [
"python",
"utf-8"
] |
Python tensor product | 40,044,714 | <p>I have the following problem. For performance reasons I use <code>numpy.tensordot</code> and have thus my values stored in tensors and vectors.
One of my calculations looks like this:</p>
<p><a href="https://i.stack.imgur.com/PLDSC.png" rel="nofollow"><img src="https://i.stack.imgur.com/PLDSC.png" alt="enter image description here"></a></p>
<p><code><w_j></code> is the expectancy value of <code>w_j</code> and <code><sigma_i></code> the expectancy value of <code>sigma_i</code>. (Perhaps I should not have called it sigma, because it has nothing to do with standard deviation.) Now for further calculations I also need the variance. To get the variance I need to calculate:
<a href="https://i.stack.imgur.com/KLmxY.png" rel="nofollow"><img src="https://i.stack.imgur.com/KLmxY.png" alt="enter image description here"></a></p>
<p>Now when I implemented the first formula into python with <code>numpy.tensordot</code> I was really happy when it worked because this is quite abstract and I am not used to tensors. The code does look like this:</p>
<pre><code>erc = numpy.tensordot(numpy.tensordot(re, ewp, axes=1), ewp, axes=1)
</code></pre>
<p>Now this works and my problem is to write down the correct form for the second formula. One of my attempts was:</p>
<pre><code>serc = numpy.tensordot(numpy.tensordot(numpy.tensordot(numpy.tensordot
(numpy.tensordot(re, re, axes=1), ewp, axes=1), ewp, axes=1)
, ewp, axes=1), ewp, axes=1)
</code></pre>
<p>But this does give me a scalar instead of a vector. Another try was:</p>
<pre><code>serc = numpy.einsum('m, m', numpy.einsum('lm, l -> m',
numpy.einsum('klm, k -> lm', numpy.einsum('jklm, j -> klm',
numpy.einsum('ijk, ilm -> jklm', re, re), ewp), ewp), ewp), ewp)
</code></pre>
<p>The vectors have length <code>l</code> and the dimension of the tensor is <code>l * l * l</code>. I hope my problem is understandable, and thank you in advance!</p>
<p>Edit: The first formula can in python also written down like: <code>erc2 = numpy.einsum('ik, k -> i', numpy.einsum('ijk, k -> ij', re, ewp), ewp)</code></p>
| 2 | 2016-10-14T13:42:55Z | 40,045,478 | <p>You could do that with a series of reductions, like so -</p>
<pre><code>p1 = np.tensordot(re,ewp,axes=(1,0))
p2 = np.tensordot(p1,ewp,axes=(1,0))
out = p2**2
</code></pre>
<p><strong>Explanation</strong></p>
<p>First off, we could separate it out into two groups of operations :</p>
<pre><code>Group 1: R(i,j,k) , < wj > , < wk >
Group 2: R(i,l,m) , < wl > , < wl >
</code></pre>
<p>The operations performed within these two groups are identical. So, one could compute for one group and derive the final output based off it.</p>
<p>Now, to compute <code>R(i,j,k)</code>, < <code>wj</code> >, < <code>wk</code> > and end up with <code>(i)</code>, we need to perform element-wise multiplication along the second and third axes of <code>R</code> with <code>w</code> and then perform <code>sum-reduction</code> along those axes. Here, we are doing it in two steps with two <code>tensordots</code> -</p>
<pre><code>[1] R(i,j,k) , < wj > to get p1(i,k)
[2] p1(i,k) , < wk > to get p2(i)
</code></pre>
<p>Thus, we end up with a vector <code>p2</code>. Similarly with the second group, the result would be an identical vector. So, to get to the final output, we just need to square that vector, i.e. <code>p2**2</code>.</p>
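<p>A small numeric check of the two-step reduction against a direct <code>einsum</code> of the full expression, on made-up data:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
l = 4
re = rng.random((l, l, l))   # R(i, j, k)
ewp = rng.random(l)          # <w>

p1 = np.tensordot(re, ewp, axes=(1, 0))   # contract j -> p1(i, k)
p2 = np.tensordot(p1, ewp, axes=(1, 0))   # contract k -> p2(i)
out = p2**2

ref = np.einsum('ijk,j,k->i', re, ewp, ewp)**2  # one-shot contraction, then square
print(np.allclose(out, ref))  # True
```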
| 2 | 2016-10-14T14:19:34Z | [
"python",
"performance",
"numpy",
"numpy-einsum"
] |
find mean and corr of 10,000 columns in pyspark Dataframe | 40,044,779 | <p>I have DF with 10K columns and 70Million rows. I want to calculate the mean and corr of 10K columns. I did below code but it wont work due to code size 64K issue (<a href="https://issues.apache.org/jira/browse/SPARK-16845" rel="nofollow">https://issues.apache.org/jira/browse/SPARK-16845</a>)</p>
<p>Data:</p>
<pre><code>region dept week sal val1 val2 val3 ... val10000
US CS 1 1 2 1 1 ... 2
US CS 2 1.5 2 3 1 ... 2
US CS 3 1 2 2 2.1 2
US ELE 1 1.1 2 2 2.1 2
US ELE 2 2.1 2 2 2.1 2
US ELE 3 1 2 1 2 .... 2
UE CS 1 2 2 1 2 .... 2
</code></pre>
<p>Code:</p>
<pre><code>aggList = [func.mean(col) for col in df.columns] #exclude keys
df2= df.groupBy('region', 'dept').agg(*aggList)
</code></pre>
<p>code 2</p>
<pre><code>aggList = [func.corr('sal', col).alias(col) for col in df.columns] #exclude keys
df2 = df.groupBy('region', 'dept', 'week').agg(*aggList)
</code></pre>
<p>This fails. Is there any alternative way to overcome this bug? Has anyone tried a DF with 10K columns? Are there any suggestions for performance improvement?</p>
 | 0 | 2016-10-14T13:45:59Z | 40,046,119 | <p>We also ran into the 64KB issue, but in a where clause, which is filed under another bug report. What we used as a workaround is simply to do the operations/transformations in several steps.</p>
<p>In your case, this would mean that you don't do all the aggregations in one step. Instead, loop over the relevant columns in an outer operation:</p>
<ul>
<li>Use <a href="https://spark.apache.org/docs/1.5.1/api/java/org/apache/spark/sql/DataFrame.html#select(org.apache.spark.sql.Column...)" rel="nofollow"><code>select</code></a> to create a temporary dataframe, which just contains columns you need for the operation.</li>
<li>Use the <code>groupBy</code> and <code>agg</code> like you did, except not for a list of aggregations, but just for one (or two; you could combine the <code>mean</code> and <code>corr</code>).</li>
<li>After you received references to all temporary dataframes, use <a href="https://spark.apache.org/docs/1.5.1/api/java/org/apache/spark/sql/DataFrame.html#withColumn(java.lang.String,%20org.apache.spark.sql.Column)" rel="nofollow"><code>withColumn</code></a> to append the aggregated columns from the temporary dataframes to a result df. </li>
</ul>
<p>Due to the lazy evaluation of a Spark DAG, this is of course slower than doing it in one operation. But it should evaluate the whole analysis in one run.</p>
| 1 | 2016-10-14T14:51:41Z | [
"python",
"apache-spark",
"pyspark",
"spark-dataframe"
] |
Python:Update list of tuples | 40,044,814 | <p>I have a list of tuples like this:</p>
<p>list = [(1, 'q'), (2, 'w'), (3, 'e'), (4, 'r')]</p>
<p>and I am trying to create an update function update(item, num) which searches for the item in the list and then changes the num.</p>
<p>For example, if I use update('w', 6) the result would be:</p>
<pre><code>list = [(1, 'q'), (6, 'w'), (3, 'e'), (4, 'r')]
</code></pre>
<p>I tried this code but I got an error:</p>
<pre><code>if item in heap:
heap.remove(item)
Pushheap(item,num)
else:
Pushheap(item,num)
</code></pre>
<p>Pushheap is a function that pushes tuples into the heap. Any ideas?</p>
| 4 | 2016-10-14T13:48:03Z | 40,044,975 | <p>As noted in the comments, you are using an immutable data structure for data items that you are attempting to change. Without further context, it looks like you want a dictionary, not a list of tuples, and it also looks like you want the second item in the tuple (the letter) to be the key, since you are planning on modifying the number. </p>
<p>Using these assumptions, I recommend converting the list of tuples to a dictionary and then using normal dictionary assignment. This also assumes that <em>order</em> is not important (if it is, you can use an <code>OrderedDict</code>) and that the same letter does not appear twice (if it does, only the last number will be in the dict).</p>
<pre><code>>>> lst = [(1, 'q'), (2, 'w'), (3, 'e'), (4, 'r')]
>>> item_dict = dict(i[::-1] for i in lst)
>>> item_dict
{'q': 1, 'r': 4, 'e': 3, 'w': 2}
>>> item_dict['w'] = 6
>>> item_dict
{'q': 1, 'r': 4, 'e': 3, 'w': 6}
</code></pre>
| 3 | 2016-10-14T13:55:37Z | [
"python",
"python-2.7",
"python-3.x"
] |
Python:Update list of tuples | 40,044,814 | <p>I have a list of tuples like this:</p>
<p>list = [(1, 'q'), (2, 'w'), (3, 'e'), (4, 'r')]</p>
<p>and I am trying to create an update function update(item, num) which searches for the item in the list and then changes the num.</p>
<p>For example, if I use update('w', 6) the result would be:</p>
<pre><code>list = [(1, 'q'), (6, 'w'), (3, 'e'), (4, 'r')]
</code></pre>
<p>I tried this code but I got an error:</p>
<pre><code>if item in heap:
heap.remove(item)
Pushheap(item,num)
else:
Pushheap(item,num)
</code></pre>
<p>Pushheap is a function that pushes tuples into the heap. Any ideas?</p>
| 4 | 2016-10-14T13:48:03Z | 40,045,156 | <p>You can simply scan through the list looking for a tuple with the desired letter and replace the whole tuple (you can't modify tuples), breaking out of the loop when you've found the required item. Eg,</p>
<pre><code>lst = [(1, 'q'), (2, 'w'), (3, 'e'), (4, 'r')]
def update(item, num):
for i, t in enumerate(lst):
if t[1] == item:
lst[i] = num, item
break
update('w', 6)
print(lst)
</code></pre>
<p><strong>output</strong></p>
<pre><code>[(1, 'q'), (6, 'w'), (3, 'e'), (4, 'r')]
</code></pre>
<p>However, you should seriously consider using a dictionary instead of a list of tuples. Searching a dictionary is much more efficient than doing a linear scan over a list.</p>
| 5 | 2016-10-14T14:04:45Z | [
"python",
"python-2.7",
"python-3.x"
] |
Python:Update list of tuples | 40,044,814 | <p>I have a list of tuples like this:</p>
<p>list = [(1, 'q'), (2, 'w'), (3, 'e'), (4, 'r')]</p>
<p>and I am trying to create an update function update(item, num) which searches for the item in the list and then changes the num.</p>
<p>For example, if I use update('w', 6) the result would be:</p>
<pre><code>list = [(1, 'q'), (6, 'w'), (3, 'e'), (4, 'r')]
</code></pre>
<p>I tried this code but I got an error:</p>
<pre><code>if item in heap:
heap.remove(item)
Pushheap(item,num)
else:
Pushheap(item,num)
</code></pre>
<p>Pushheap is a function that pushes tuples into the heap. Any ideas?</p>
 | 4 | 2016-10-14T13:48:03Z | 40,045,176 | <p>Tuples are an immutable object. Which means once they're created, you can't go changing their contents.</p>
<p>You can work around this, however, by replacing the tuple you want to change. Possibly something such as this:</p>
<pre><code>def change_item_in_list(lst, item, num):
for pos, tup in enumerate(lst):
if tup[1] == item:
lst[pos] = (num, item)
return
l = [(1, 'q'), (2, 'w'), (3, 'e'), (4, 'r')]
print(l)
change_item_in_list(l, 'w', 6)
print(l)
</code></pre>
<p>But as @brianpck has already said, you probably want a (ordered)-dictionary instead of a list of tuples.</p>
| 1 | 2016-10-14T14:05:28Z | [
"python",
"python-2.7",
"python-3.x"
] |
pandas read_sql is unusually slow | 40,045,093 | <p>I'm trying to read several columns from three different MySQL tables into three different dataframes.</p>
<p>It doesn't take long to read from the database, but actually putting them into a dataframe is fairly slow.</p>
<pre><code>start_time = time.time()
print('Reading data from database...')
from sqlalchemy import create_engine
q_crash = 'SELECT <query string> FROM table1'
q_vehicle = 'SELECT <query string> FROM table2'
q_person = 'SELECT <query string> FROM table3'
engine = create_engine('mysql+pymysql://user:password@host:port/dbasename')
print('Database time: {:.1f}'.format(time.time() - start_time))
crash = pd.read_sql_query(q_crash, engine)
print('Read_sql time for table 1: {:.1f}'.format(time.time() - start_time))
vehicle = pd.read_sql_query(q_vehicle, engine)
print('Read_sql time for table 2: {:.1f}'.format(time.time() - start_time))
person = pd.read_sql_query(q_person, engine)
print('Read_sql time for table 3: {:.1f}'.format(time.time() - start_time))
</code></pre>
<p>Output:</p>
<pre><code>Reading data from database...
Database time: 0.0
Read_sql time for table 1: 13.4
Read_sql time for table 2: 30.9
Read_sql time for table 3: 49.4
</code></pre>
<p>Is this normal? The tables are quite large-- table 3 is over 601,000 rows. But pandas has handled larger datasets without a hitch whenever I use read_csv.</p>
 | 0 | 2016-10-14T14:01:09Z | 40,047,381 | <p>IMO it doesn't make much sense to read complete tables into Pandas DFs if you have them in a MySQL DB - why don't you use SQL for filtering and joining your data? Do you really need <strong>all</strong> rows from those three tables as Pandas DFs? </p>
<p>If you want to join them you could do it first on the MySQL side and load the result set into a single DF...</p>
<p>something similar to:</p>
<pre><code>qry = 'select p.*, v.*, c.* from vehicle v join person p on v.id = p.vehicle_id join crash c on c.id = p.crash_id where <additional where clause>'
df = pd.read_sql(qry, engine)
</code></pre>
| 1 | 2016-10-14T15:54:40Z | [
"python",
"mysql",
"pandas"
] |
Accuracy not high enough for dogs_cats classification dataset using CNN with Keras-Tf python | 40,045,159 | <p>Guys, I'm trying to classify the Dogs vs Cats dataset using a CNN. I'm a deep learning beginner, btw.</p>
<p>The dataset link can be obtained from <a href="https://www.kaggle.com/c/dogs-vs-cats/data" rel="nofollow">here</a>. I've also classified the above dataset using MLP with a training accuracy of 70% and testing accuracy of 62%. So I decided to use CNN to improve the score.</p>
<p>But unfortunately, I'm still getting very similar results. Here is my code:</p>
<pre><code>from sklearn.cross_validation import train_test_split
from sklearn.preprocessing import LabelEncoder
from keras.layers import Dense, Activation, Flatten, Dropout
from keras.layers.convolutional import Convolution2D
from keras.layers.convolutional import MaxPooling2D
from keras.models import Sequential
from keras.utils import np_utils
from keras.optimizers import SGD
from keras.datasets import mnist
from keras import backend as K
from imutils import paths
import numpy as np
import argparse
import cPickle
import h5py
import sys
import cv2
import os
K.set_image_dim_ordering('th')
def image_to_feature_vector(image, size=(28, 28)):
return cv2.resize(image, size)
print("[INFO] pre-processing images...")
imagePaths = list(paths.list_images(raw_input('path to dataset: ')))
data = []
labels = []
for (i, imagePath) in enumerate(imagePaths):
image = cv2.imread(imagePath)
label = imagePath.split(os.path.sep)[-1].split(".")[0]
features = image_to_feature_vector(image)
data.append(features)
labels.append(label)
if i > 0 and i % 1000 == 0:
print("[INFO] processed {}/{}".format(i, len(imagePaths)))
le = LabelEncoder()
labels = le.fit_transform(labels)
labels = np_utils.to_categorical(labels, 2)
data = np.array(data) / 255.0
print("[INFO] constructing training/testing split...")
(X_train, X_test, y_train, y_test) = train_test_split(data, labels, test_size=0.25, random_state=42)
X_train = X_train.reshape(X_train.shape[0], 3, 28, 28).astype('float32')
X_test = X_test.reshape(X_test.shape[0], 3, 28, 28).astype('float32')
num_classes = y_test.shape[1]
def basic_model():
model = Sequential()
model.add(Convolution2D(32, 3, 3, border_mode='valid', init='uniform', bias=True, input_shape=(3, 28, 28), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))
sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
return model
model = basic_model()
model.fit(X_train, y_train, validation_data=(X_test, y_test), nb_epoch=25, batch_size=50, shuffle=True, verbose=1)
print('[INFO] Evaluating the model on test data...')
scores = model.evaluate(X_test, y_test, batch_size=100, verbose=1)
print("\nAccuracy: %.4f%%\n\n"%(scores[1]*100))
</code></pre>
<p>The CNN model I've used is very basic but decent enough, I think. I followed various tutorials to get to it. I even used this architecture but got similar results (65% testing accuracy):</p>
<pre><code>def baseline_model():
model = Sequential()
model.add(Convolution2D(30, 5, 5, border_mode='valid', input_shape=(3, 28, 28), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Convolution2D(15, 3, 3, activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.2))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(50, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))
sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='binary_crossentropy', optimizer=sgd, metrics=['accuracy'])
return model
</code></pre>
<p>For the optimiser I also tried <code>adam</code> with default parameters, and for the <code>model.compile</code> loss function I also tried <code>categorical_crossentropy</code>, but there was no (or very slight) improvement.</p>
<p>Can you suggest where I'm going wrong or what I can do to improve efficiency? (In a few epochs if possible.)</p>
<p>(I'm a beginner in deep learning and keras programming...)</p>
<p>EDIT: so I managed to touch 70.224% testing accuracy and 74.27% training accuracy. CNN architecture was
<code>CONV => CONV => POOL => DROPOUT => FLATTEN => DENSE*3</code></p>
<p>(There is almost no overfitting, as training acc is 74% and testing is 70%.)</p>
<p>But still open to suggestions to increase it further, 70% is definitely on lower side...</p>
| 2 | 2016-10-14T14:04:53Z | 40,098,342 | <p>Basically, your network is not deep enough. That is why both your training and validation accuracy are low. You can try to deepen your network from two aspects.</p>
<ol>
<li><p>Use a larger number of filters for each convolutional layer. (30, 5, 5) or (15, 3, 3) is just not enough. Change the first convolutional layer to (64, 3, 3). After max pooling, which reduces your 2D size, the network should provide "deeper" features. Thus, the second should not be 15, but something like (64, 3, 3) or even (128, 3, 3).</p></li>
<li><p>Add more convolutional layers. 5 or 6 layers for this problem may be good.</p></li>
</ol>
<p>Overall, your question is beyond programming. It is more about CNN network architecture. You may read more research papers on this topic to get a better understanding. For this specific problem, Keras has a very good tutorial on how to improve the performance with very small set of cats and dogs images:
<a href="https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html" rel="nofollow">Building powerful image classification models using very little data</a></p>
| 0 | 2016-10-18T01:57:56Z | [
"python",
"keras",
"conv-neural-network"
] |
Filtering pandas DataFrame | 40,045,175 | <p>I'm reading in a .csv file using pandas, and then I want to filter out the rows where a specified column's value is not in a dictionary for example. So something like this:</p>
<pre><code>df = pd.read_csv('mycsv.csv', sep='\t', encoding='utf-8', index_col=0,
names=['col1', 'col2','col3','col4'])
c = df.col4.value_counts(normalize=True).head(20)
values = dict(zip(c.index.tolist()[1::2], c.tolist()[1::2])) # Get odd and create dict
df_filtered = filter out all rows where col4 not in values
</code></pre>
<p>After searching around a bit I tried using the following to filter it:</p>
<pre><code>df_filtered = df[df.col4 in values]
</code></pre>
<p>but that unfortunately didn't work.</p>
<p>I've done the following to make it work for what I want to do, but it's incredibly slow for a large .csv file, so I thought there must be a way to do it that's built into pandas:</p>
<pre><code>t = [(list(df.col1) + list(df.col2) + list(df.col3)) for i in range(len(df.col4)) if list(df.col4)[i] in values]
</code></pre>
| 0 | 2016-10-14T14:05:21Z | 40,045,266 | <p>If you want to check against the dictionary values:</p>
<pre><code>df_filtered = df[df.col4.isin(values.values())]
</code></pre>
<p>If you want to check against the dictionary keys:</p>
<pre><code>df_filtered = df[df.col4.isin(values.keys())]
</code></pre>
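<p>A quick illustration with a toy frame (the column contents and the dict here are invented for the example):</p>

```python
import pandas as pd

df = pd.DataFrame({'col4': ['a', 'b', 'c', 'a']})
values = {'k1': 'a', 'k2': 'c'}  # stands in for the real dict built from value_counts

# keep only the rows whose col4 appears among the dict's values
df_filtered = df[df.col4.isin(values.values())]
print(df_filtered.col4.tolist())  # ['a', 'c', 'a']
```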
| 1 | 2016-10-14T14:09:56Z | [
"python",
"csv",
"pandas"
] |
Filtering pandas DataFrame | 40,045,175 | <p>I'm reading in a .csv file using pandas, and then I want to filter out the rows where a specified column's value is not in a dictionary for example. So something like this:</p>
<pre><code>df = pd.read_csv('mycsv.csv', sep='\t', encoding='utf-8', index_col=0,
names=['col1', 'col2','col3','col4'])
c = df.col4.value_counts(normalize=True).head(20)
values = dict(zip(c.index.tolist()[1::2], c.tolist()[1::2])) # Get odd and create dict
df_filtered = filter out all rows where col4 not in values
</code></pre>
<p>After searching around a bit I tried using the following to filter it:</p>
<pre><code>df_filtered = df[df.col4 in values]
</code></pre>
<p>but that unfortunately didn't work.</p>
<p>I've done the following to make it work for what I want to do, but it's incredibly slow for a large .csv file, so I thought there must be a way to do it that's built into pandas:</p>
<pre><code>t = [(list(df.col1) + list(df.col2) + list(df.col3)) for i in range(len(df.col4)) if list(df.col4)[i] in values]
</code></pre>
 | 0 | 2016-10-14T14:05:21Z | 40,048,354 | <p>As A.Kot mentioned, you could use the values method of the <code>dict</code> to search. But the <code>values</code> method returns either a <code>list</code> (Python 2) or a view object (Python 3), depending on your version of Python.</p>
<p>If your only reason for creating that <code>dict</code> is membership testing, and you only ever look at the values of the <code>dict</code> then you are using the wrong data structure.</p>
<p>A <a href="https://docs.python.org/2/library/stdtypes.html#set" rel="nofollow"><code>set</code></a> will improve your lookup performance, and keeps the check simple:</p>
<pre><code>df_filtered = df[df.col4.isin(values)]
</code></pre>
<p>If you use values elsewhere, and you want to check against the keys, then you're ok because membership testing against keys is efficient.</p>
| 0 | 2016-10-14T16:56:23Z | [
"python",
"csv",
"pandas"
] |
Python Log Level with an additional verbosity depth | 40,045,288 | <p>I would like to extend the existing <code>logging.LEVEL</code> mechanics so that I have the option of switching between different logging levels such as <code>DEBUG</code>, <code>INFO</code>, <code>ERROR</code> etc. but also define a <code>depth</code> for each of the levels.</p>
<p>For example, let's assume that the logging level is set to <code>logging.DEBUG</code>. All of the <code>log.debug()</code> calls will be visible.</p>
<pre><code>log.debug('Limit event has occurred.')
</code></pre>
<p>So I get:</p>
<pre><code>[2016-10-08 10:07:29,807] <__main__> {myApp:test_condition_info:93} (DEBUG) Limit event has occurred.
</code></pre>
<p>What I am after, is passing an extra depth level to the <code>log.debug()</code> call so that I can control how much of detail is printed in the <code>DEBUG</code> message, not entirely <em>enabling</em> or <em>disabling</em> the DEBUG level but controlling how much information the <strong>debug</strong> message will carry. So in all cases we see the debug message but in some instances, it is less detailed and on some occasions more information is included.</p>
<p>For example:</p>
<pre><code>log.debug('Limit event has occurred.', verbosity=1)
log.debug('The following user has caused the limit event: %s' % (user), verbosity=3)
log.debug('The following files have been affected: %s' % [list_of_files], verbosity=7)
</code></pre>
<p>So when the logging level is set to <code>DEBUG</code> and the global verbosity is set to <code>GLOBAL_VERBOSITY=1</code> we will get this:</p>
<pre><code>[2016-10-08 10:07:29,807] <__main__> {myApp:test_condition_info:93} (DEBUG) Limit event has occurred.
</code></pre>
<p>And if the global verbosity is set to <code>GLOBAL_VERBOSITY=4</code> we will get this:</p>
<pre><code>[2016-10-08 10:07:29,807] <__main__> {myApp:test_condition_info:93} (DEBUG) Limit event has occurred.
[2016-10-08 10:07:29,807] <__main__> {myApp:test_condition_info:93} (DEBUG) The following user has caused the limit event: xpr
</code></pre>
<p>And if the global verbosity is set to <code>GLOBAL_VERBOSITY=9</code> we will get all of the details:</p>
<pre><code>[2016-10-08 10:07:29,807] <__main__> {myApp:test_condition_info:93} (DEBUG) Limit event has occurred.
[2016-10-08 10:07:29,807] <__main__> {myApp:test_condition_info:93} (DEBUG) The following user has caused the limit event: xpr
[2016-10-08 10:07:29,807] <__main__> {myApp:test_condition_info:93} (DEBUG) The following files have been affected: ['inside.ini', 'render.so']
</code></pre>
<p>How should I approach this problem?</p>
 | 1 | 2016-10-14T14:10:43Z | 40,052,090 | <p>Can't you just use the more fine-grained logging levels? <code>DEBUG</code> is just a wrapper for level 10. You can use</p>
<pre><code>Logger.log(10, "message")
</code></pre>
<p>to log at debug level and then</p>
<pre><code>Logger.log(9, "message")
</code></pre>
<p>which won't show up at debug level, but will if you do</p>
<pre><code>Logger.setLevel(9)
</code></pre>
<p>If you're dead set on doing it the other way, you should look at 'filters'.</p>
<p><a href="http://stackoverflow.com/questions/879732/logging-with-filters">logging with filters</a></p>
<pre><code>#!/usr/bin/env python
import logging
GLOBAL_VERBOSITY = 1
class LoggingErrorFilter(logging.Filter):
    def filter(self, record):
        if record.__dict__.get("verbosity", 0) <= GLOBAL_VERBOSITY:
            print "Log message verbosity is within threshold, logging line:{0}".format(record)
            return True
        print "Log message verbosity exceeds threshold, not logging line:{0}".format(record)
        return False
logging.basicConfig(level=logging.DEBUG, filename="test.log")
logger = logging.getLogger()
filter = LoggingErrorFilter()
logger.addFilter(filter)
def main():
logger.info("Message 1", extra={"verbosity":3})
logger.info("Message 2", extra={"verbosity":1})
if __name__ == "__main__":
main()
</code></pre>
| 1 | 2016-10-14T21:14:11Z | [
"python",
"logging"
] |
Kivy scrollbar scroll direction with mouse | 40,045,326 | <p>Is there a way to change the scroll behavior of Kivy scrollbars when using a mouse? With a mousewheel, the contents of a DropDown or Spinner scroll up or down as expected. However, if you use a mouse to grab the scrollbar and slide it up, the direction is reversed - you have to drag the mouse pointer down to move the scrollbar and list up.</p>
| 1 | 2016-10-14T14:12:32Z | 40,061,155 | <p>This can be fixed by modifying the DropDown from which Spinner inherits to change scroll_type to include 'bars' (just 'content' by default). I fixed this behaviour as follows:</p>
<pre><code>from functools import partial
dropdownMod = partial(DropDown, bar_width = 10, scroll_type = ['bars','content'])
class SpinnerName(Spinner):
dropdown_cls = dropdownMod
values = ('1','2','3')
</code></pre>
| 0 | 2016-10-15T15:54:48Z | [
"python",
"windows",
"python-2.7",
"kivy",
"python-2.x"
] |
How in python's module CONFIGPARSER read text from variable, not from a file? | 40,045,338 | <p>The problem is that <code>config.read("filename.ini")</code> - requires a local file. I download the content of this file straight into the variable from my FTP server with the help of StringIO. </p>
<pre><code>content = StringIO()
f.retrbinary('RETR /folder1/inifile.ini', content.write)
request = content.getvalue()
config.read(request)
</code></pre>
| 0 | 2016-10-14T14:13:17Z | 40,046,881 | <p>I found this in the python docs.</p>
<p>Using your StringIO object it looks like you can use <code>config.readfp(content)</code>, after rewinding the buffer with <code>content.seek(0)</code>.</p>
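<p>A sketch of the idea (the section name and key are invented; on Python 3 the method is spelled <code>read_file</code>, and <code>read_string</code> works on a plain string such as <code>request</code>):</p>

```python
import io
from configparser import ConfigParser  # the module is named ConfigParser on Python 2

content = io.StringIO()
content.write(u"[section1]\nkey = value\n")  # stands in for the FTP download
content.seek(0)  # rewind before parsing

config = ConfigParser()
config.read_file(content)  # on Python 2: config.readfp(content)
print(config.get('section1', 'key'))  # value
```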
| 0 | 2016-10-14T15:29:34Z | [
"python",
"ftp",
"readfile",
"ini",
"configparser"
] |
celery not all tasks in a group are executed | 40,045,405 | <p>I have simple celery case:</p>
<pre><code>@task()
def my_task1(index):
log.info(index)
@task()
def my_task2():
tasks=[my_task1.si(1), my_task1.si(2), my_task1.si(3), my_task1.si(4)]
group(*tasks)()
</code></pre>
<p>When I run <code>my_task2</code>, I only see tasks with integers 2 and 4 in celery console. I want to run them all. What am I doing wrong?</p>
| 0 | 2016-10-14T14:16:27Z | 40,046,932 | <p>Problem solved - I had another celery running...</p>
| 0 | 2016-10-14T15:32:00Z | [
"python",
"celery",
"django-celery"
] |
Python - Group by over dicts | 40,045,458 | <p>I am using Python 3.5 and I have the following array of dicts:</p>
<pre><code>d1 = [{'id': 1, 'col': 'name', 'oldvalue': 'foo', 'newvalue': 'bar'}, {'id': 1, 'col': 'age', 'oldvalue': '25', 'newvalue': '26'}, {'id': 2, 'col': 'name', 'oldvalue': 'foo', 'newvalue': 'foobar'}, {'id': 3, 'col': 'age', 'oldvalue': '25', 'newvalue': '26'}]
d2 = [{'id': 1, 'col': 'name', 'oldvalue': 'foo', 'newvalue': 'bar'}, {'id': 1, 'col': 'age', 'oldvalue': '25'}]
d3 = [{'id': 3, 'col': 'age', 'oldvalue': '25', 'newvalue': '26'}]
</code></pre>
<p>As you can see it is showing some changes made to particular "rows".</p>
<p>I want to know if there is a way to return the total of rows updated, so:</p>
<pre><code>def counting_updates(d):
# ...
return total
print(counting_updates(d1)) # prints 3 because 3 rows were updated
print(counting_updates(d2)) # prints 1 because 1 row was updated
print(counting_updates(d3)) # prints 1 because 1 row was updated
</code></pre>
| 0 | 2016-10-14T14:18:44Z | 40,045,522 | <p>If you're looking for the number of unique <code>id</code>s, then</p>
<pre><code>len({row['id'] for row in d})
</code></pre>
<p>The <code>{}</code> is a set comprehension.</p>
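<p>Put together (writing the sample data as valid dict literals, trimmed to the keys that matter here):</p>

```python
d1 = [{'id': 1, 'col': 'name'}, {'id': 1, 'col': 'age'},
      {'id': 2, 'col': 'name'}, {'id': 3, 'col': 'age'}]

def counting_updates(d):
    # one entry per distinct id, regardless of how many columns changed
    return len({row['id'] for row in d})

print(counting_updates(d1))  # 3
```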
| 1 | 2016-10-14T14:22:20Z | [
"python",
"dictionary",
"python-3.5"
] |
MySQL data to a python dict structure | 40,045,496 | <p>The structure of my mysql table looks like this:</p>
<pre><code>id | mid | liters | timestamp
1 | 20 | 50 | 2016-10-11 10:53:25
2 | 30 | 60 | 2016-10-11 10:40:20
3 | 20 | 100 | 2016-10-11 10:09:27
4 | 30 | 110 | 2016-10-11 09:55:07
5 | 40 | 80 | 2016-10-11 09:44:46
6 | 40 | 90 | 2016-10-11 07:56:14
7 | 20 | 120 | 2016-04-08 13:27:41
8 | 20 | 130 | 2016-04-08 15:35:28
</code></pre>
<p>My desired output is like this :</p>
<pre><code>dict = {
20:{50:[2016-10-11 10:53:25,2016-10-11 10:53:25],100:[2016-10-11 10:53:25,2016-10-11 10:09:27],120:[2016-10-11 10:09:27,2016-04-08 13:27:41],130:[2016-04-08 13:27:41,2016-04-08 15:35:28]},
30:{60:[2016-10-11 10:40:20,2016-10-11 10:40:20],110:[2016-10-11 10:40:20,2016-10-11 09:55:07]}
40:{80:[2016-10-11 09:44:46,2016-10-11 09:44:46],90:[2016-10-11 09:44:46,2016-10-11 07:56:14]}
}
</code></pre>
<p>If <code>50:[2016-10-11 10:53:25,2016-10-11 10:53:25]</code> (when you don't have a previous timestamp and need to duplicate it) is hard to build, just <code>50:[2016-10-11 10:53:25]</code> is OK too.
How can I extract this into a Python dict from my DB table?</p>
<p>I tried something like:</p>
<pre><code>query_all = "SELECT GROUP_CONCAT(mid,liters,timestamp) FROM `my_table` GROUP BY mid ORDER BY mid "
</code></pre>
<p>But I don't know how to order this. Thanks for your time. </p>
 | 0 | 2016-10-14T14:20:38Z | 40,045,619 | <p><code>GROUP_CONCAT</code> may have its own <code>ORDER BY</code></p>
<p>e.g.:</p>
<pre><code>GROUP_CONCAT(mid,liters,timestamp ORDER BY liters)
</code></pre>
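<p>Once the rows come back ordered, folding them into the nested dict is a plain loop on the Python side. A sketch with hard-coded rows standing in for the cursor results:</p>

```python
from collections import defaultdict

# (mid, liters, timestamp) tuples, as returned ordered by mid and then id
rows = [
    (20, 50, '2016-10-11 10:53:25'),
    (20, 100, '2016-10-11 10:09:27'),
    (30, 60, '2016-10-11 10:40:20'),
    (30, 110, '2016-10-11 09:55:07'),
]

result = defaultdict(dict)
prev_ts = {}  # last timestamp seen per mid
for mid, liters, ts in rows:
    # the first row of each mid pairs its timestamp with itself
    result[mid][liters] = [prev_ts.get(mid, ts), ts]
    prev_ts[mid] = ts

print(dict(result))
```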
| 0 | 2016-10-14T14:27:43Z | [
"python",
"mysql",
"sql",
"dictionary"
] |
Pandas: query string where column name contains special characters | 40,045,545 | <p>I am working with a data frame that has a structure something like the following:</p>
<pre><code>In[75]: df.head(2)
Out[75]:
statusdata participant_id association latency response \
0 complete CLIENT-TEST-1476362617727 seeya 715 dislike
1 complete CLIENT-TEST-1476362617727 welome 800 like
stimuli elementdata statusmetadata demo$gender demo$question2 \
0 Sample B semi_imp complete male 23
1 Sample C semi_imp complete female 23
</code></pre>
<p>I want to be able to run a query string against the column <code>demo$gender</code>.</p>
<p>I.e,</p>
<pre><code>df.query("demo$gender=='male'")
</code></pre>
<p>But this has a problem with the <code>$</code> sign. If I replace the <code>$</code> sign with another delimiter (like <code>-</code>), the problem persists. Can I fix up my query string to avoid this problem? I would prefer not to rename the columns, as these correspond tightly with other parts of my application.</p>
<p>I really want to stick with a query string as it is supplied by another component of our tech stack and creating a parser would be a heavy lift for what seems like a simple problem.</p>
<p>Thanks in advance.</p>
| 0 | 2016-10-14T14:24:07Z | 40,045,659 | <p>The current implementation of <code>query</code> requires the string to be a valid python expression, so column names must be valid python identifiers. Your two options are renaming the column, or using a plain boolean filter, like this:</p>
<pre><code>df[df['demo$gender'] =='male']
</code></pre>
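<p>With a toy frame (values invented), the boolean filter sidesteps the identifier restriction entirely:</p>

```python
import pandas as pd

df = pd.DataFrame({'demo$gender': ['male', 'female', 'male'],
                   'demo$question2': [23, 23, 24]})

# bracket access works with any column name, including ones containing '$'
df_filtered = df[df['demo$gender'] == 'male']
print(len(df_filtered))  # 2
```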
| 1 | 2016-10-14T14:29:23Z | [
"python",
"pandas",
"dataframe"
] |
Pandas: query string where column name contains special characters | 40,045,545 | <p>I am working with a data frame that has a structure something like the following:</p>
<pre><code>In[75]: df.head(2)
Out[75]:
statusdata participant_id association latency response \
0 complete CLIENT-TEST-1476362617727 seeya 715 dislike
1 complete CLIENT-TEST-1476362617727 welome 800 like
stimuli elementdata statusmetadata demo$gender demo$question2 \
0 Sample B semi_imp complete male 23
1 Sample C semi_imp complete female 23
</code></pre>
<p>I want to be able to run a query string against the column <code>demo$gender</code>.</p>
<p>I.e,</p>
<pre><code>df.query("demo$gender=='male'")
</code></pre>
<p>But this has a problem with the <code>$</code> sign. If I replace the <code>$</code> sign with another delimiter (like <code>-</code>), the problem persists. Can I fix up my query string to avoid this problem? I would prefer not to rename the columns, as these correspond tightly with other parts of my application.</p>
<p>I really want to stick with a query string as it is supplied by another component of our tech stack and creating a parser would be a heavy lift for what seems like a simple problem.</p>
<p>Thanks in advance.</p>
| 0 | 2016-10-14T14:24:07Z | 40,083,013 | <p>For the interested here is a simple proceedure I used to accomplish the task:</p>
<pre><code># Identify invalid column names
invalid_column_names = [x for x in list(df.columns.values) if not x.isidentifier() ]
# Make replacements in the query and keep track
# NOTE: This method fails if the frame has columns called REPL_0 etc.
replacements = dict()
for cn in invalid_column_names:
r = 'REPL_'+ str(invalid_column_names.index(cn))
query = query.replace(cn, r)
replacements[cn] = r
inv_replacements = {replacements[k] : k for k in replacements.keys()}
df = df.rename(columns=replacements) # Rename the columns
df = df.query(query) # Carry out query
df = df.rename(columns=inv_replacements)
</code></pre>
<p>Which amounts to identifying the invalid column names, transforming the query and renaming the columns. Finally we perform the query and then translate the column names back.</p>
| 0 | 2016-10-17T09:39:30Z | [
"python",
"pandas",
"dataframe"
] |
ImportError: No module named skimage | 40,045,579 | <p>I'm trying to use scikit-image on Mac OS X El Capitan. </p>
<p>I installed scikit-image and the relevant dependencies using <code>pip install scikit-image</code>, but when I run python and try <code>import skimage</code> I get the error: <code>ImportError: No module named skimage</code>.</p>
<p>Running <code>pip list</code> gives the following output: </p>
<pre><code>cycler (0.10.0)
Cython (0.24.1)
dask (0.11.1)
decorator (4.0.10)
dlib (19.1.0)
matplotlib (1.5.3)
networkx (1.11)
numpy (1.11.2)
Pillow (3.4.1)
pip (8.1.2)
pyparsing (2.1.10)
python-dateutil (2.5.3)
pytz (2016.7)
scikit-image (0.12.3)
setuptools (23.1.0)
six (1.10.0)
toolz (0.8.0)
vboxapi (1.0)
wheel (0.29.0)
</code></pre>
<p>Does anyone have any idea why this might be happening? </p>
| 0 | 2016-10-14T14:25:31Z | 40,057,881 | <p>Resetting my <code>$PYTHONPATH</code> seems to have fixed the problem. </p>
| 0 | 2016-10-15T10:26:42Z | [
"python",
"scikit-image",
"dlib"
] |
Adding a column in pandas df using a function | 40,045,632 | <p>I have a pandas df (see below). I want to add a column called "price" whose values I want to derive using a function. How do I go about doing this?</p>
<pre><code>function:
from yahoo_finance import Share  # Share comes from the yahoo-finance package

def getquotetoday(symbol):
    yahoo = Share(symbol)
    return yahoo.get_prev_close()
df:
Symbol Bid Ask
MSFT 10.25 11.15
AAPL 100.01 102.54
(...)
</code></pre>
| -1 | 2016-10-14T14:28:07Z | 40,045,819 | <p>In general, you can use the apply function. If your function is of only one column, you can use:</p>
<pre><code>df['price'] = df['Symbol'].apply(getquotetoday)
</code></pre>
<p>as @EdChum suggested. If your function happens to require multiple columns, you can use, for example, something like:</p>
<pre><code>df['new_column_name'] = df.apply(lambda x: my_function(x['value_1'], x['value_2']), axis=1)
</code></pre>
| 1 | 2016-10-14T14:37:03Z | [
"python",
"function",
"pandas",
"dataframe",
"yahoo"
] |
Python Regex: Using a lookahead | 40,045,740 | <p>I'm trying to detect text of the following type, in order to remove it from the text:</p>
<pre><code>BOLD:Parshat NoachBOLD:
BOLD:Parshat Lech LechaBOLD:
BOLD:Parshat VayeraBOLD
BOLD:Parshat ShâminiBOLD:
</code></pre>
<p>But only to capture this part:</p>
<pre><code>BOLD:Parshat Noach
BOLD:Parshat Lech Lecha
BOLD:Parshat Vayera
BOLD:Parshat Shâmini
</code></pre>
<p>I thought to use this regex using lookahead:</p>
<pre><code>re.sub(r"BOLD:Parshat .*?(?=(:BOLD))","",comment) #tried lookahead with and without parens
</code></pre>
<p>But it doesn't seem to be detecting them. What might be the problem? The text is followed by a snippet in Hebrew, not sure if that is what is causing the problem.</p>
<p>Please note that these segments are embedded in the middle of different lines, as mentioned followed by a Hebrew snippet.</p>
| 0 | 2016-10-14T14:33:23Z | 40,045,826 | <p>You don't need look-around; just use a capture group and the <code>re.MULTILINE</code> flag so the pattern matches line by line in multi-line text.</p>
<pre><code>In [8]: s = """BOLD:Parshat NoachBOLD:
...: BOLD:Parshat Lech LechaBOLD:
...: BOLD:Parshat VayeraBOLD
...: BOLD:Parshat ShâminiBOLD:"""
In [9]: re.findall(r'^(BOLD.*)BOLD:?$', s, re.MULTILINE)
Out[9]:
['BOLD:Parshat Noach',
'BOLD:Parshat Lech Lecha',
'BOLD:Parshat Vayera',
'BOLD:Parshat Shâmini']
</code></pre>
| 0 | 2016-10-14T14:37:27Z | [
"python",
"regex"
] |
Python Regex: Using a lookahead | 40,045,740 | <p>I'm trying to detect text of the following type, in order to remove it from the text:</p>
<pre><code>BOLD:Parshat NoachBOLD:
BOLD:Parshat Lech LechaBOLD:
BOLD:Parshat VayeraBOLD
BOLD:Parshat ShâminiBOLD:
</code></pre>
<p>But only to capture this part:</p>
<pre><code>BOLD:Parshat Noach
BOLD:Parshat Lech Lecha
BOLD:Parshat Vayera
BOLD:Parshat Shâmini
</code></pre>
<p>I thought to use this regex using lookahead:</p>
<pre><code>re.sub(r"BOLD:Parshat .*?(?=(:BOLD))","",comment) #tried lookahead with and without parens
</code></pre>
<p>But it doesn't seem to be detecting them. What might be the problem? The text is followed by a snippet in Hebrew, not sure if that is what is causing the problem.</p>
<p>Please note that these segments are embedded in the middle of different lines, as mentioned followed by a Hebrew snippet.</p>
| 0 | 2016-10-14T14:33:23Z | 40,045,872 | <p>In python you can just do:</p>
<pre><code>str = re.sub(r'BOLD:?$', '', str, 0, re.MULTILINE)
</code></pre>
<p><a href="https://regex101.com/r/qiNd4L/3" rel="nofollow">RegEx Demo</a></p>
<p>That will remove <code>BOLD</code> followed by optional <code>:</code> from the end of each line.</p>
<hr>
<p><strong>EDIT:</strong> If this <code>BOLD:</code> term is not always at end of line, one can use:</p>
<pre><code>>>> print re.sub(r'\b(BOLD:.*)BOLD:?', r'\1', str)
BOLD:Parshat Noach
BOLD:Parshat Lech Lecha
BOLD:Parshat Vayera
BOLD:Parshat Shâmini
</code></pre>
| 1 | 2016-10-14T14:39:35Z | [
"python",
"regex"
] |
multiple domains - one django project | 40,045,801 | <p>I want to run ONE django project on multiple domains/websites. The websites each need to access a unique "urls.py"/"views.py". I tried it already with <a href="http://michal.karzynski.pl/blog/2010/10/19/run-multiple-websites-one-django-project/" rel="nofollow">this tutorial</a>, but it doesn't work for me.
Is there a way to do this with middleware in an easy way (without the Sites framework)?
A little bit of help would be really great. Thanks.</p>
<p>Edit: As I tried it like in the tutorial from above, my httpd.conf looked like this:</p>
<pre><code>ServerRoot "/home/webfactionusername/webapps/erdbeer/apache2"
LoadModule authz_core_module modules/mod_authz_core.so
LoadModule dir_module modules/mod_dir.so
LoadModule env_module modules/mod_env.so
LoadModule log_config_module modules/mod_log_config.so
LoadModule mime_module modules/mod_mime.so
LoadModule rewrite_module modules/mod_rewrite.so
LoadModule setenvif_module modules/mod_setenvif.so
LoadModule wsgi_module modules/mod_wsgi.so
LoadModule unixd_module modules/mod_unixd.so
LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
CustomLog /home/webfactionusername/logs/user/access_erdbeer.log combined
ErrorLog /home/webfactionusername/logs/user/error_erdbeer.log
Listen 10414
KeepAlive Off
SetEnvIf X-Forwarded-SSL on HTTPS=1
ServerLimit 1
StartServers 1
MaxRequestWorkers 5
MinSpareThreads 1
MaxSpareThreads 3
ThreadsPerChild 5
WSGIDaemonProcess erdbeer processes=2 threads=12 python-path=/home/webfactionusername/webapps/erdbeer:/home/webfactionusername/webapps/erdbeer/myproject:/home/webfactionusername/webapps/erdbeer/lib/python2.7
WSGIProcessGroup erdbeer
WSGIRestrictEmbedded On
WSGILazyInitialization On
WSGIScriptAlias / /home/webfactionusername/webapps/erdbeer/myproject/myproject/wsgi.py
# Virtual hosts setup
NameVirtualHost *
<VirtualHost *>
ServerName mydomain123abc.de
WSGIDaemonProcess erdbeer processes=5 python-path=/home/webfactionusername/webapps/erdbeer:/home/webfactionusername/webapps/erdbeer/lib/python2.7 threads=1
WSGIScriptAlias / /home/webfactionusername/webapps/erdbeer/subdomain1.wsgi
</VirtualHost>
<VirtualHost *>
ServerName seconddomain123.de
WSGIDaemonProcess erdbeer processes=5 python-path=/home/webfactionusername/webapps/erdbeer:/home/webfactionusername/webapps/erdbeer/lib/python2.7 threads=1
WSGIScriptAlias / /home/webfactionusername/webapps/erdbeer/subdomain2.wsgi
</VirtualHost>
</code></pre>
<p>Edit2: I'm still not able to grasp this middleware concept. I was only able to work out that I probably need to use "process_request", but I have no clue what the middleware file would look like. Let's say I have "domain1.com" and "domain2.com" which should use these urls:</p>
<p>domain1_urls.py</p>
<pre><code>from django.conf.urls import include, url
from django.contrib import admin
from django.http import HttpResponse
urlpatterns = [
url(r'^$', 'myapp1.views.home'),
url(r'^admin/', include(admin.site.urls)),
url(r'^robots\.txt$', lambda r: HttpResponse("User-agent: *\nDisallow:", content_type="text/plain")),
]
</code></pre>
<p>domains2_urls.py</p>
<pre><code>from django.conf.urls import include, url
from django.contrib import admin
from django.http import HttpResponse
urlpatterns = [
url(r'^$', 'myapp2.views.home'),
url(r'^admin/', include(admin.site.urls)),
url(r'^robots\.txt$', lambda r: HttpResponse("User-agent: *\nDisallow:", content_type="text/plain")),
]
</code></pre>
<p>How would I use that in my middleware? (I'm a beginner...)</p>
| 0 | 2016-10-14T14:36:20Z | 40,064,648 | <p>I found the solution, described perfectly <a href="https://code.djangoproject.com/wiki/MultiHostMiddleware" rel="nofollow">here</a>.</p>
<pre><code># File: settings.py
HOST_MIDDLEWARE_URLCONF_MAP = {
# Control Panel
"www.example.com": "webapp.sites.example.urls",
}
</code></pre>
<p>and</p>
<pre><code># File: multihost.py
import time
from django.conf import settings
from django.utils.cache import patch_vary_headers
class MultiHostMiddleware:
def process_request(self, request):
try:
request.META["LoadingStart"] = time.time()
host = request.META["HTTP_HOST"]
#if host[-3:] == ":80":
# host = host[:-3] # ignore default port number, if present
# best way to do this.
host_port = host.split(':')
if len(host_port)==2:
host = host_port[0]
if host in settings.HOST_MIDDLEWARE_URLCONF_MAP:
request.urlconf = settings.HOST_MIDDLEWARE_URLCONF_MAP[host]
request.META["MultiHost"] = str(request.urlconf)
else:
request.META["MultiHost"] = str(settings.ROOT_URLCONF)
except KeyError:
pass # use default urlconf (settings.ROOT_URLCONF)
def process_response(self, request, response):
if 'MultiHost' in request.META:
response['MultiHost'] = request.META.get("MultiHost")
if 'LoadingStart' in request.META:
_loading_time = time.time() - int(request.META["LoadingStart"])
response['LoadingTime'] = "%.2fs" % ( _loading_time, )
if getattr(request, "urlconf", None):
patch_vary_headers(response, ('Host',))
return response
</code></pre>
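<p>To see what <code>process_request</code> is doing with the host header, here is the lookup logic on its own, outside Django (the module paths are made up for illustration):</p>

```python
# plain-Python sketch of the host -> urlconf lookup the middleware performs
HOST_MIDDLEWARE_URLCONF_MAP = {
    "domain1.com": "myproject.domain1_urls",
    "domain2.com": "myproject.domain2_urls",
}

def pick_urlconf(http_host, default="myproject.urls"):
    # strip an optional :port, exactly like the middleware does
    host = http_host.split(':')[0]
    return HOST_MIDDLEWARE_URLCONF_MAP.get(host, default)

print(pick_urlconf("domain1.com:8000"))  # myproject.domain1_urls
print(pick_urlconf("unknown.org"))       # myproject.urls
```

<p>Each request whose <code>HTTP_HOST</code> matches a key gets that urlconf assigned to <code>request.urlconf</code>; everything else falls back to <code>settings.ROOT_URLCONF</code>.</p>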
| 0 | 2016-10-15T21:59:21Z | [
"python",
"django",
"django-views"
] |
Function Calling Python basic | 40,045,851 | <p>I'm trying to enter values from the tuple <code>('a',1), ('b',2),('c',3)</code> into the function <code>dostuff</code>, but I always get a return of <code>None</code> or <code>False</code>. I'm new to this, so I'm sorry if this question is basic. I would appreciate any help.</p>
<p>I expect the result of this to be:</p>
<pre><code>a1---8
b2---8
c3---8
</code></pre>
<p>Code:</p>
<pre><code>def dostuff(stri,numb,char):
cal = stri+str(numb)+'---'+str(char)
return cal
def callit (tups,char):
for x in range(len(tups)):
dostuff(tups[x][0],tups[x][1],char)
print(callit([('a',1), ('b',2),('c',3)],8))
</code></pre>
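<p>A self-contained sketch of the single-column case — the Yahoo lookup is replaced by a hypothetical local dict here, since <code>Share</code> needs network access:</p>

```python
import pandas as pd

df = pd.DataFrame({'Symbol': ['MSFT', 'AAPL'],
                   'Bid': [10.25, 100.01],
                   'Ask': [11.15, 102.54]})

# stand-in for getquotetoday(); the real version would query Yahoo
prev_close = {'MSFT': 10.80, 'AAPL': 101.20}

def getquotetoday(symbol):
    return prev_close[symbol]

# apply calls the function once per value in the 'Symbol' column
df['price'] = df['Symbol'].apply(getquotetoday)
print(df[['Symbol', 'price']])
```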
| 0 | 2016-10-14T14:38:40Z | 40,045,962 | <p>You can use a list comprehension to do it in one line:</p>
<pre><code>char = 8
point_list = [('a', 1), ('b', 2),('c', 3)]
print("\n".join(["{}{}---{}".format(s, n, char) for s, n in point_list]))
</code></pre>
<p><code>"{}{}---{}".format(s, n, char)</code> creates a string by replacing <code>{}</code> by each one of the input in <code>format</code>, therefore <code>"{}{}---{}".format("a", 1, 8)</code> will return <code>"a1---8"</code></p>
<p><code>for s, n in point_list</code> will create an implicit loop over the <code>point_list</code> list and for each element of the list (each tuple) will store the first element in <code>s</code> and the second in <code>n</code></p>
<p><code>["{}{}---{}".format(s, n, char) for s, n in point_list]</code> is therefore a list created by applying the format we want to each of the tuples: it will return <code>["a1---8","b2---8","c3---8"]</code></p>
<p>Finally, <code>"\n".join(["a1---8","b2---8","c3---8"])</code> creates a single string from a list of strings by joining the elements with <code>"\n"</code> between them: <code>"a1---8\nb2---8\nc3---8"</code> (<code>"\n"</code> is a special character representing the end of a line).</p>
| -1 | 2016-10-14T14:43:41Z | [
"python"
] |
Function Calling Python basic | 40,045,851 | <p>I'm trying to enter values from the tuple <code>('a',1), ('b',2),('c',3)</code> into the function <code>dostuff</code>, but I always get a return of <code>None</code> or <code>False</code>. I'm new to this, so I'm sorry if this question is basic. I would appreciate any help.</p>
<p>I expect the result of this to be:</p>
<pre><code>a1---8
b2---8
c3---8
</code></pre>
<p>Code:</p>
<pre><code>def dostuff(stri,numb,char):
cal = stri+str(numb)+'---'+str(char)
return cal
def callit (tups,char):
for x in range(len(tups)):
dostuff(tups[x][0],tups[x][1],char)
print(callit([('a',1), ('b',2),('c',3)],8))
</code></pre>
| 0 | 2016-10-14T14:38:40Z | 40,046,009 | <p>I think you're misunderstanding the <code>return</code> value of the functions: unless otherwise specified, all functions will return <code>None</code> at completion. Your code:</p>
<pre><code>print(callit([('a',1), ('b',2),('c',3)],8))
</code></pre>
<p>is telling the Python interpreter "print the return value of this function call." This isn't printing what you expect it to because the <code>callit</code> function doesn't have a return value specified. You could either change the <code>return</code> in your <code>dostuff</code> function like so:</p>
<pre><code> def dostuff(stri,numb,char):
cal = stri+str(numb)+'---'+str(char)
        print(cal)
def callit (tups,char):
for x in range(len(tups)):
dostuff(tups[x][0],tups[x][1],char)
callit([('a',1), ('b',2),('c',3)],8)
</code></pre>
<p>This changes the return on the third line into a print command, and removes the print command from the <code>callit</code> call.
Another option would be:</p>
<pre><code> def dostuff(stri,numb,char):
cal = stri+str(numb)+'---'+str(char)
return cal
def callit (tups,char):
for x in range(len(tups)):
cal = dostuff(tups[x][0],tups[x][1],char)
print(cal)
callit([('a',1), ('b',2),('c',3)],8)
</code></pre>
<p>This takes the return value from the <code>dostuff</code> function and stores it in a variable named cal, which could then be printed or written to a file on disk.</p>
| 4 | 2016-10-14T14:45:50Z | [
"python"
] |
Function Calling Python basic | 40,045,851 | <p>I'm trying to enter values from the tuple <code>('a',1), ('b',2),('c',3)</code> into the function <code>dostuff</code>, but I always get a return of <code>None</code> or <code>False</code>. I'm new to this, so I'm sorry if this question is basic. I would appreciate any help.</p>
<p>I expect the result of this to be:</p>
<pre><code>a1---8
b2---8
c3---8
</code></pre>
<p>Code:</p>
<pre><code>def dostuff(stri,numb,char):
cal = stri+str(numb)+'---'+str(char)
return cal
def callit (tups,char):
for x in range(len(tups)):
dostuff(tups[x][0],tups[x][1],char)
print(callit([('a',1), ('b',2),('c',3)],8))
</code></pre>
| 0 | 2016-10-14T14:38:40Z | 40,046,080 | <p>For a function to return a value in python it must have a <code>return</code> statement. In your <code>callit</code> function you lack a value to <code>return</code>. A more Pythonic approach would have both a value and iterate through the tuples using something like this:</p>
<pre><code>def callit(tups, char):
x = [dostuff(a, b, char) for a, b in tups]
return x
</code></pre>
<p>Since <code>tups</code> is a list of tuples, we can iterate through it using <code>for a, b in tups</code> - this grabs both elements in the pairs. Next <code>dostuff(a, b, char)</code> is calling your <code>dostuff</code> function on each pair of elements and the <code>char</code> specified. Enclosing that in brackets makes the result a list, which we then return using the <code>return</code> statement. </p>
<p>Note you don't need to do:</p>
<pre><code>x = ...
return x
</code></pre>
<p>You can just use <code>return [dostuff(a, b, char) for a, b in tups]</code> but I used the former for clarity.</p>
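<p>Put together with the question's data, that version looks like this:</p>

```python
def dostuff(stri, numb, char):
    return stri + str(numb) + '---' + str(char)

def callit(tups, char):
    # one formatted string per (letter, number) pair
    return [dostuff(a, b, char) for a, b in tups]

for line in callit([('a', 1), ('b', 2), ('c', 3)], 8):
    print(line)
# a1---8
# b2---8
# c3---8
```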
| 0 | 2016-10-14T14:49:55Z | [
"python"
] |
Function Calling Python basic | 40,045,851 | <p>I'm trying to enter values from the tuple <code>('a',1), ('b',2),('c',3)</code> into the function <code>dostuff</code>, but I always get a return of <code>None</code> or <code>False</code>. I'm new to this, so I'm sorry if this question is basic. I would appreciate any help.</p>
<p>I expect the result of this to be:</p>
<pre><code>a1---8
b2---8
c3---8
</code></pre>
<p>Code:</p>
<pre><code>def dostuff(stri,numb,char):
cal = stri+str(numb)+'---'+str(char)
return cal
def callit (tups,char):
for x in range(len(tups)):
dostuff(tups[x][0],tups[x][1],char)
print(callit([('a',1), ('b',2),('c',3)],8))
</code></pre>
| 0 | 2016-10-14T14:38:40Z | 40,047,070 | <p>As @n1c9 said, every Python function must return some object, and if there's no <code>return</code> statement written in the function definition the function will <strong>implicitly</strong> return the <code>None</code> object. (Implicitly meaning that under the hood, Python will see that there's no return statement and return <code>None</code>.)</p>
<p>However, while there's nothing wrong in this case with printing a value in a function rather than returning it, it's generally considered bad practice. This is because if you ever want to test the function to aid in debugging, you have to write the test within the function definition. While if you returned the value you could just test the return value of calling the function.</p>
<p>So when you're debugging this code you might write something like this:</p>
<pre><code>def test_callit():
tups = [('a', 1), ('b', 2), ('c', 3)]
expected = 'a1---8\nb2---8\nc3---8'
result = callit(tups, 8)
assert result == expected, (str(result) + " != " + expected)
</code></pre>
<p>if you're unfamiliar with the assert statement, you can read up about it <a href="https://docs.python.org/2/reference/simple_stmts.html#grammar-token-assert_stmt" rel="nofollow">here</a></p>
<p>Now that you have a test function, you can go back and modify your code. Callit needs a return value, which in this case should probably be a string. so for the function<code>callit</code> you might write</p>
<pre><code>def callit(tups, char):
result = ''
for x in range(len(tups)):
result += dostuff(tups[x][0], tups[x][1], char) + '\n'
result = result[:result.rfind('\n')] # trim off the last \n
return result
</code></pre>
<p>when you run test_callit, if you get any assertion errors you can see how it differs from what you expect in the traceback.</p>
<p>What I'm about to talk about isn't really relevant to your question, but I would say improves the readability of your code. </p>
<p>Python's for statement is very different from most other programming languages, because it actually acts like a foreach loop. Currently, the code ignores that feature and forces regular for-loop functionality. It's actually simpler and faster to write something like this:</p>
<pre><code>def callit(tups, char):
result = ''
for tup in tups:
result += dostuff(tup[0], tup[1], char) + '\n'
result = result[:result.rfind('\n')] # trim off the last \n
return result
</code></pre>
| 1 | 2016-10-14T15:38:20Z | [
"python"
] |
How to test printed value in Python 3.5 unittest? | 40,045,963 | <pre><code>from checker.checker import check_board_state, check_row, check_winner,\
check_column, check_diagonal
import sys
import unittest
class TestChecker(unittest.TestCase):
def test_winner_row(self):
check_board_state([['o', 'x', '.'],
['o', 'o', 'o'],
['.', 'x', 'o']])
output = sys.stdout.getvalue().strip()
assert output == 'o'
def test_draw(self):
check_board_state([['.', 'x', '.', 'o', 'o'],
['o', 'o', 'x', '.', '.'],
['.', 'o', 'x', '.', '.'],
['.', 'o', 'x', '.', '.'],
['.', 'o', 'x', '.', '.']])
output = sys.stdout.getvalue().strip()
assert output == '.'
if __name__ == '__main__':
unittest.main()
</code></pre>
<p>I want to test the printed value from the <code>check_board_state</code> function, but I have a problem with these tests. When I try to run them using </p>
<blockquote>
<p>python -m unittest tests.py </p>
</blockquote>
<p>I get this error:</p>
<blockquote>
<p>AttributeError: '_io.TextIOWrapper' object has no attribute 'getvalue'</p>
</blockquote>
<p>The tests work fine in PyDev in Eclipse when I use the Python unittest run configuration instead of a plain Python run. How can I resolve this problem?</p>
| 0 | 2016-10-14T14:43:44Z | 40,046,524 | <pre><code>from checker.checker import check_board_state
from io import StringIO
import sys
import unittest


class TestChecker(unittest.TestCase):

    def setUp(self):
        # redirect stdout into a StringIO buffer before every test
        self.old_stdout = sys.stdout
        sys.stdout = StringIO()
        super(TestChecker, self).setUp()

    def test_winner_row(self):
        check_board_state([['o', 'x', '.'],
                           ['o', 'o', 'o'],
                           ['.', 'x', 'o']])
        result = sys.stdout.getvalue().strip()
        expected = "o"
        # use unittest's assertion methods
        self.assertEqual(expected, result)

    def test_draw(self):
        check_board_state([['.', 'x', '.', 'o', 'o'],
                           ['o', 'o', 'x', '.', '.'],
                           ['.', 'o', 'x', '.', '.'],
                           ['.', 'o', 'x', '.', '.'],
                           ['.', 'o', 'x', '.', '.']])
        result = sys.stdout.getvalue().strip()
        expected = "."
        self.assertEqual(expected, result)

    def tearDown(self):
        # restore the real stdout after every test
        sys.stdout = self.old_stdout
        super(TestChecker, self).tearDown()


if __name__ == '__main__':
    unittest.main()
</code></pre>
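<p>On Python 3.4+ there is also <code>contextlib.redirect_stdout</code>, which scopes the redirection to a <code>with</code> block so no <code>setUp</code>/<code>tearDown</code> is needed at all (the <code>check_board_state</code> stub below stands in for the real function):</p>

```python
import io
import unittest
from contextlib import redirect_stdout

def check_board_state(board):
    # stub standing in for the real function: prints the winner
    print(board[1][1])

class TestChecker(unittest.TestCase):
    def test_winner_row(self):
        buf = io.StringIO()
        with redirect_stdout(buf):
            check_board_state([['o', 'x', '.'],
                               ['o', 'o', 'o'],
                               ['.', 'x', 'o']])
        # everything printed inside the with-block landed in buf
        self.assertEqual(buf.getvalue().strip(), 'o')
```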
| -1 | 2016-10-14T15:12:31Z | [
"python",
"pydev",
"python-unittest"
] |
Popen shell commands and problems with spaces | 40,045,986 | <p>I have this code; it runs whatever command the user enters for adb, e.g. the user enters the word 'devices' and 'adb.exe devices' will run and print out the device list.
This works fine with 'devices' but whenever a more complex command is issued, such as one with spaces, 'shell pm path com.myapp.app' it fails.</p>
<pre><code> c_arg = self.cmdTxt.GetValue() ##gets the user input, a string
params = [toolsDir + '\\adb.exe', c_arg] ##builds the command
p = Popen(params, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE) ## executes the command
stdout, stderr = p.communicate() ## get output
self.progressBox.AppendText(stdout) # print output of command
</code></pre>
<p>Is there some formatting or processing I need to do on the string from .GetValue() before I can put it into params and run it in Popen?</p>
| 0 | 2016-10-14T14:44:51Z | 40,046,567 | <p><code>subprocess.Popen</code> with <em>shell=True</em> expects the full command as a single string, not a list:</p>
<pre><code>params = toolsDir + '\\adb.exe ' + c_arg  # c_arg is the argument string; note the space before it
</code></pre>
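<p>Alternatively, if you drop <em>shell=True</em> you can keep passing a list — just split the user's input into separate arguments first. <code>shlex.split</code> does this the way a shell would, including quoted arguments. A runnable sketch (the adb path is illustrative, and the final <code>Popen</code> call is demonstrated with the Python interpreter instead of adb):</p>

```python
import shlex
import subprocess
import sys

# split a user-supplied argument string the way a shell would
c_arg = 'shell pm path "com.myapp.app"'
args = shlex.split(c_arg)
print(args)  # ['shell', 'pm', 'path', 'com.myapp.app']

# with adb this would be: params = [toolsDir + '\\adb.exe'] + args
# shown here with the Python interpreter so the example is runnable
p = subprocess.Popen([sys.executable, '-c', 'print("ok")'],
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = p.communicate()
print(stdout.decode().strip())  # ok
```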
| 0 | 2016-10-14T15:14:37Z | [
"android",
"python",
"shell",
"wx"
] |
How do I write a doctest in python 3.5 for a function using random to replace characters in a string? | 40,046,030 | <pre><code>def intoxication(text):
"""This function causes each character to have a 1/5 chance of being replaced by a random letter from the string of letters
INSERT DOCTEST HERE
"""
import random
string_letters = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"
text = "".join(i if random.randint(0,4) else random.choice(string_letters) for i in text)
return text
</code></pre>
| 0 | 2016-10-14T14:47:06Z | 40,046,592 | <p>You need to mock the random functions to give something that is predetermined.</p>
<pre><code>def intoxication(text):
"""This function causes each character to have a 1/5 chance of being replaced by a random letter from the string of letters
If there is 0% chance of a random character chosen, result will be the same as input
>>> import random
>>> myrand = lambda x, y: 1
>>> mychoice = lambda x: "T"
>>> random.randint = myrand
>>> random.choice = mychoice
>>> intoxication("Hello World")
'Hello World'
If there is 100% chance of a random character chosen, result will be the same as 'TTTTTTTTTTT'
>>> import random
>>> myrand = lambda x, y: 0
>>> mychoice = lambda x: "T"
>>> random.randint = myrand
>>> random.choice = mychoice
>>> intoxication("Hello World")
'TTTTTTTTTTT'
If every second character is replaced
>>> import random
>>> thisone = 0
>>> def myrand(x, y): global thisone; thisone+=1; return thisone % 2
>>> mychoice = lambda x: "T"
>>> random.randint = myrand
>>> random.choice = mychoice
>>> intoxication("Hello World")
'HTlToTWTrTd'
If every third character is replaced
>>> import random
>>> thisone = 0
>>> def myrand(x, y): global thisone; thisone+=1; return thisone % 3
>>> mychoice = lambda x: "T"
>>> random.randint = myrand
>>> random.choice = mychoice
>>> intoxication("Hello World")
'HeTloTWoTld'
"""
import random
string_letters = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"
text = "".join(i if random.randint(0,4) else random.choice(string_letters) for i in text)
return text
if __name__ == "__main__":
import doctest
doctest.testmod()
</code></pre>
| 0 | 2016-10-14T15:15:54Z | [
"python",
"python-3.x"
] |
How do I run pycharm within my docker container? | 40,046,167 | <p>I'm very new to docker. I want to build my python application within a docker container. As I build the application I want to be testing / running it in Pycharm and in the container I build. </p>
<p>How do I connect Pycharm pro to a specific container or image (either python or Anaconda)?</p>
<p>When I create a project, click pure python and then add remote, then clicking docker I get the following result</p>
<p><a href="https://i.stack.imgur.com/SVwEE.jpg" rel="nofollow"><img src="https://i.stack.imgur.com/SVwEE.jpg" alt="enter image description here"></a></p>
<p>I'm running on Mac OS X El Capitan (10.11.6) with Docker version 1.12.1 and Pycharm Pro 2016.2.3 </p>
| 3 | 2016-10-14T14:53:59Z | 40,050,811 | <p>Docker-for-mac only supports connections over the /var/run/docker.sock socket that is listening on your OSX host.</p>
<p>If you try to add this to pycharm, you'll get the following message:</p>
<p><a href="https://i.stack.imgur.com/4x5i6.png" rel="nofollow"><img src="https://i.stack.imgur.com/4x5i6.png" alt="Only supported on Linux"></a></p>
<p>"Cannot connect: java.lang.ExceptionInInitializerError, caused by: java.lang.IllegalStateException: Only supported on Linux"</p>
<p>So PyCharm <em>really</em> only wants to connect to a docker daemon over a TCP socket, and has support for the recommended TLS protection of that socket. The Certificates folder defaults to the certificate folder for the default docker-machine machine, "default".</p>
<p>It is possible to implement a workaround to expose Docker for Mac via a TCP server if you have socat installed on your OSX machine.</p>
<p>On my system, I have it installed via homebrew:</p>
<pre><code>brew install socat
</code></pre>
<p>Now that's installed, I can run socat with the following parameters:</p>
<pre><code>socat TCP-LISTEN:2376,reuseaddr,fork,bind=127.0.0.1 UNIX-CLIENT:/var/run/docker.sock
</code></pre>
<p>WARNING: this will make it possible for any process running as any user on your whole mac to access your docker-for-mac. The unix socket is protected by user permissions, while 127.0.0.1 is not.</p>
<p>This socat command tells it to listen on 127.0.0.1:2376 and pass connections on to /var/run/docker.sock. The reuseaddr and fork options allow this one command to service multiple connections instead of just the very first one.</p>
<p>I can test that socat is working by running the following command:</p>
<pre><code>docker -H tcp://127.0.0.1:2376 ps
</code></pre>
<p>If you get a successful <code>docker ps</code> response back, then you know that the socat process is doing its job.</p>
<p>Now, in the PyCharm window, I can put the same <code>tcp://127.0.0.1:2376</code> in place. I should get a "Connection successful" message back:</p>
<p><a href="https://i.stack.imgur.com/TqH0E.png" rel="nofollow"><img src="https://i.stack.imgur.com/TqH0E.png" alt="connection successful"></a></p>
<p>This workaround will require that socat command to be running any time you want to use docker from PyCharm.</p>
<p>If you wanted to do the same thing, but with TLS, you could set up certificates and make them available for both pycharm and socat, and use socat's <code>OPENSSL-LISTEN</code> instead of the <code>TCP-LISTEN</code> feature. I won't go into the details on that for this answer though.</p>
| 2 | 2016-10-14T19:37:05Z | [
"python",
"docker",
"anaconda"
] |
Python VBA-like left() | 40,046,168 | <p>I've been looking around but don't see anything similar to what I'm looking for. Within a django site I want to add some code that looks at a values furthest left (or the first) character in a variable (which is populated by a DB query), and if it's a particular letter or number, do something with said variable. How can I do this?</p>
| 0 | 2016-10-14T14:54:01Z | 40,046,294 | <p>This checks if the first letter in <code>mystring</code> is <code>'a'</code> or <code>'b'</code>.</p>
<pre><code>if mystring[0] in ('a', 'b'):
# whatever
</code></pre>
<p>Use slicing to test for longer substrings:</p>
<pre><code>if mystring[:3] in ('abc', 'def', '987'):
# whatever
</code></pre>
<p>Alternatively you can use <code>str.startswith()</code>:</p>
<pre><code>if mystring.startswith(('SUBTOTAL', 'TOTAL')):
# whatever
</code></pre>
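<p>And if you want a literal analogue of VBA's <code>Left()</code>, slicing makes it a one-liner:</p>

```python
def left(s, n):
    # VBA-like Left(): the first n characters of s
    return s[:n]

print(left("Django", 1))   # D
print(left("Django", 3))   # Dja
```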
| 1 | 2016-10-14T15:00:32Z | [
"python",
"django"
] |
Reorganizer Project in django-CMS | 40,046,185 | <p>Hi everyone, I have a project with these characteristics:</p>
<pre><code>Django==1.8.14
django-cms==3.2.5
Python 2.7.12
</code></pre>
<p>My project works fine, but now I am trying to reorganize my apps.</p>
<p>Right now I have something like this</p>
<p><a href="https://i.stack.imgur.com/dieS7.png" rel="nofollow"><img src="https://i.stack.imgur.com/dieS7.png" alt="enter image description here"></a></p>
<p>In my case <code>api_cpujobs, APIchart, drawChart, cms_extensions and readRSS</code> are apps, so to reorganize them I created a folder inside portal called <code>apps</code> and moved my apps there.</p>

<p>I modified my <code>settings.py</code>,</p>
<p>so now I have this</p>
<pre><code>'portal',
'portal.apps.APIchart',
'portal.apps.drawChart',
'portal.apps.readRSS',
'portal.apps.cms_extensions',
</code></pre>
<p>But when I start the server I obtain this error</p>
<pre><code>ImportError: No module named apps
</code></pre>
<p>I searched the internet, and even other tutorials use the same organization, but I can't find what I am missing.</p>
<p>Thanks in advance!</p>
| 0 | 2016-10-14T14:55:11Z | 40,048,548 | <p>I'm not quite sure, but try it that way:</p>
<pre><code>'portal',
'APIchart',
'drawChart',
...
</code></pre>
<p>That should work; if it doesn't fix the issue, let me know :)</p>
| 0 | 2016-10-14T17:09:11Z | [
"python",
"django"
] |
How to find the indices of a subset in a list? | 40,046,268 | <p>I have a list of values that I know are increasing, such as</p>
<pre><code>x = [1, 2, 3, 4, 5, 6]
</code></pre>
<p>I'm looking for the indices of the subset that are within some range <code>[min, max]</code>. E.g. I want</p>
<pre><code>>> subset_indices(x, 2, 4)
[1, 3]
>> subset_indices(x, 1.1, 7)
[1, 5]
</code></pre>
<p>Is there a nice pythonic way of doing this?</p>
| 0 | 2016-10-14T14:59:11Z | 40,046,879 | <p>Following the recommendations from Kenny Ostrom and volcano, I implemented it simply as</p>
<pre><code>import bisect
def subset_indices(sequence, minv, maxv):
low = bisect.bisect_left(sequence, minv)
    high = bisect.bisect_right(sequence, maxv, lo=low) - 1  # inclusive upper bound
return [low, high]
</code></pre>
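<p>As a self-contained check against the examples from the question (note that an <em>inclusive</em> upper bound needs <code>bisect_right(...) - 1</code> rather than <code>bisect_left</code>):</p>

```python
import bisect

def subset_indices(sequence, minv, maxv):
    # first index whose value is >= minv
    low = bisect.bisect_left(sequence, minv)
    # last index whose value is <= maxv
    high = bisect.bisect_right(sequence, maxv, lo=low) - 1
    return [low, high]

x = [1, 2, 3, 4, 5, 6]
print(subset_indices(x, 2, 4))    # [1, 3]
print(subset_indices(x, 1.1, 7))  # [1, 5]
```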
| 1 | 2016-10-14T15:29:32Z | [
"python",
"list",
"subset"
] |
Python : 2d contour plot with fixed x and y for 6 series of fractional data (z) | 40,046,295 | <p>I'm trying to use a contour plot to show an array of fractional data (between 0 and 1) at 6 heights (5, 10, 15, 20, 25, and 30) with a fixed x-axis (the "WN" series, 1 to 2300). y (height) is different for each series and discontinuous so I need to interpolate between heights. </p>
<pre><code>WN,5,10,15,20,25,30
1,0.9984898,0.99698234,0.99547797,0.99397725,0.99247956,0.99098486
2,0.99814528,0.99629492,0.9944489,0.99260795,0.99077147,0.98893934
3,0.99765164,0.99530965,0.99297464,0.99064702,0.98832631,0.98601222
4,0.99705136,0.99411237,0.99118394,0.98826683,0.98535997,0.9824633
5,0.99606526,0.99214685,0.98824716,0.98436642,0.98050326,0.97665751
6,0.98111153,0.96281821,0.94508928,0.92790776,0.91125059,0.89509743
7,0.99266499,0.98539108,0.97816986,0.97100824,0.96390355,0.95685524
...
</code></pre>
<p>Any ideas? Thank you!</p>
| 0 | 2016-10-14T15:00:34Z | 40,048,484 | <p>Using matplotlib, you need your X (row), Y (column), and Z values. The matplotlib function expects data in a certain format. Below, you'll see the meshgrid helps us get that format.</p>
<p>Here, I use pandas to import your data that I saved to a csv file. You can load your data any way you'd like though. The key is prepping your data for the plotting function.</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
#import the data from a csv file
data = pd.read_csv('C:/book1.csv')
#here, I let the x values be the column headers (could switch if you'd like)
#[1:] don't want the 'WN' as a value
X = data.columns.values[1:]
#Here I get the index values (a pandas dataframe thing) as the Y values
Y = data['WN']
#don't want this column in your data though
del data['WN']
#need to shape your data in preparation for plotting
X, Y = np.meshgrid(X, Y)
#see http://matplotlib.org/examples/pylab_examples/contour_demo.html
plt.contourf(X,Y,data)
</code></pre>
<p><a href="https://i.stack.imgur.com/T2uTZ.png" rel="nofollow"><img src="https://i.stack.imgur.com/T2uTZ.png" alt="enter image description here"></a></p>
| 0 | 2016-10-14T17:05:36Z | [
"python",
"python-2.7",
"pandas",
"matplotlib"
] |
Python C Extension: PyEval_GetLocals() returns NULL | 40,046,330 | <p>I need to read local variables from Python in C/C++. When I try to <code>PyEval_GetLocals</code>, I get a NULL. This happens although Python is initialized. The following is a minimal example.</p>
<pre><code>#include <iostream>
#include <Python.h>
int main() {
    Py_Initialize();
    PyRun_SimpleString("a=5");
    PyObject *locals = PyEval_GetLocals();
    std::cout<<locals<<std::endl; //prints NULL (prints 0)
    Py_Finalize();
    return 0;
}
</code></pre>
<p>In <a href="https://docs.python.org/3/c-api/reflection.html#c.PyEval_GetLocals" rel="nofollow">the manual</a>, it says that it returns NULL if no frame is running, but... there's a frame running!</p>
<p><strong>What am I doing wrong?</strong></p>
<p>I'm running this in Debian Jessie.</p>
| 1 | 2016-10-14T15:02:25Z | 40,048,193 | <p>Turns out the right way to access variables in the scope is:</p>
<pre><code>Py_Initialize();
PyObject *main = PyImport_AddModule("__main__");
PyObject *globals = PyModule_GetDict(main);
PyObject *a = PyDict_GetItemString(globals, "a");
std::cout<<globals<<std::endl; //Not NULL
Py_Finalize();
</code></pre>
| 0 | 2016-10-14T16:45:39Z | [
"python",
"c++",
"c",
"python-c-extension",
"python-extensions"
] |
Keras + tensorflow gives the error "no attribute 'control_flow_ops'" | 40,046,619 | <p>I am trying to run keras for the first time. I installed the modules with:</p>
<pre><code>pip install keras --user
pip install tensorflow --user
</code></pre>
<p>and then tried to run <a href="https://github.com/fchollet/keras/blob/master/examples/mnist_cnn.py" rel="nofollow">https://github.com/fchollet/keras/blob/master/examples/mnist_cnn.py</a>.</p>
<p>However it gives me:</p>
<pre><code>AttributeError: 'module' object has no attribute 'control_flow_ops'
</code></pre>
<p>These are the versions I am using.</p>
<pre><code>print tensorflow.__version__
0.11.0rc0
print keras.__version__
1.1.0
</code></pre>
<blockquote>
<p>What can I do to get keras to run with tensorflow?</p>
</blockquote>
| 1 | 2016-10-14T15:17:03Z | 40,066,895 | <p>There is an issue between Keras and TF: probably <code>tf.python.control_flow_ops</code> no longer exists or is no longer visible.
Using the import statements below you can resolve this issue:</p>
<pre><code>import tensorflow as tf
tf.python.control_flow_ops = tf
</code></pre>
<p>For Details check:
<a href="https://github.com/fchollet/keras/issues/3857" rel="nofollow">https://github.com/fchollet/keras/issues/3857</a></p>
| 2 | 2016-10-16T04:38:55Z | [
"python",
"ubuntu",
"machine-learning",
"tensorflow",
"keras"
] |
Updating Python version that's compiled from source | 40,046,656 | <p>I run a script on several CentOS machines that compiles Python 2.7.6 from source and installs it. I would now like to update the script so that it updates Python to 2.7.12, and don't really know how to tackle this.</p>
<p>Should I do this exactly the same way, just with source code of higher version, and it will overwrite the old Python version?</p>
<p>Should I first uninstall the old Python version? If so, then how?</p>
<p>Sorry if this is trivial - I tried Googleing and searching through Stack, but did not found anyone with a similar problem.</p>
| 0 | 2016-10-14T15:18:52Z | 40,047,015 | <p>Replacing 2.7.6 with 2.7.12 would be fine using the procedure you linked.
There should be no real problems with libraries installed with pip or easy_install, as the version update is minor.</p>
<p>If worst comes to worst and there is a library conflict, it would be because the Python library used for compiling may be different; you can always reinstall the affected package, which would recompile it against the correct Python library if required. This is only problematic if the package being installed is actually compiled against the Python library. Pure Python packages would not be affected.</p>
<p>Even a major version change would be okay, as on CentOS you have to call this Python with <code>python2.7</code> and not <code>python</code>, so a new version would be called with, e.g., <code>python2.8</code>.</p>
| 1 | 2016-10-14T15:36:02Z | [
"python",
"python-2.7",
"centos"
] |
Why doesn't this work? Is this a apscheduler bug? | 40,046,700 | <p>When I run this it waits a minute then it prints 'Lights on' then waits two minutes and prints 'Lights off'. After that apscheduler seems to go nuts and quickly alternates between the two very fast. </p>
<p>Did i just stumble into a apscheduler bug or why does this happen?</p>
<pre><code>from datetime import datetime, timedelta
import time
import os, signal, logging
logging.basicConfig(level=logging.DEBUG)
from apscheduler.schedulers.background import BackgroundScheduler
scheduler = BackgroundScheduler()
def turn_on():
#Turn ON
print('##############################Lights on')
def turn_off():
#Turn off
print('#############################Lights off')
def schedule():
print('Lights will turn on at'.format(lights_on_time))
if __name__ == '__main__':
while True:
lights_on_time = (str(datetime.now() + timedelta(minutes=1)))
lights_off_time = (str(datetime.now() + timedelta(minutes=2)))
scheduler.add_job(turn_on, 'date', run_date=lights_on_time)
scheduler.add_job(turn_off, 'date', run_date=lights_off_time)
try:
scheduler.start()
signal.pause()
except:
pass
print('Press Ctrl+{0} to exit'.format('Break' if os.name == 'nt' else 'C'))
try:
# This is here to simulate application activity (which keeps the main thread alive).
while True:
time.sleep(2)
except (KeyboardInterrupt, SystemExit):
# Not strictly necessary if daemonic mode is enabled but should be done if possible
scheduler.shutdown()
</code></pre>
| 1 | 2016-10-14T15:21:31Z | 40,046,852 | <p>You are flooding the scheduler with events. You are using the BackgroundScheduler, meaning that scheduler.start() is exiting and not waiting for the event to happen. The simplest fix may be to not use the BackgroundScheduler (use the BlockingScheduler), or put a sleep(180) on your loop.</p>
| 2 | 2016-10-14T15:28:28Z | [
"python"
] |
Why doesn't this work? Is this a apscheduler bug? | 40,046,700 | <p>When I run this it waits a minute then it prints 'Lights on' then waits two minutes and prints 'Lights off'. After that apscheduler seems to go nuts and quickly alternates between the two very fast. </p>
<p>Did i just stumble into a apscheduler bug or why does this happen?</p>
<pre><code>from datetime import datetime, timedelta
import time
import os, signal, logging
logging.basicConfig(level=logging.DEBUG)
from apscheduler.schedulers.background import BackgroundScheduler
scheduler = BackgroundScheduler()
def turn_on():
#Turn ON
print('##############################Lights on')
def turn_off():
#Turn off
print('#############################Lights off')
def schedule():
print('Lights will turn on at'.format(lights_on_time))
if __name__ == '__main__':
while True:
lights_on_time = (str(datetime.now() + timedelta(minutes=1)))
lights_off_time = (str(datetime.now() + timedelta(minutes=2)))
scheduler.add_job(turn_on, 'date', run_date=lights_on_time)
scheduler.add_job(turn_off, 'date', run_date=lights_off_time)
try:
scheduler.start()
signal.pause()
except:
pass
print('Press Ctrl+{0} to exit'.format('Break' if os.name == 'nt' else 'C'))
try:
# This is here to simulate application activity (which keeps the main thread alive).
while True:
time.sleep(2)
except (KeyboardInterrupt, SystemExit):
# Not strictly necessary if daemonic mode is enabled but should be done if possible
scheduler.shutdown()
</code></pre>
| 1 | 2016-10-14T15:21:31Z | 40,049,123 | <p>Try this:</p>
<pre><code>from datetime import datetime, timedelta
from apscheduler.schedulers.background import BackgroundScheduler
import time
scheduler = BackgroundScheduler()
def turn_on():
print('Turn on', datetime.now())
def turn_off():
print('Turn off', datetime.now())
scheduler.start()
while True:
scheduler.add_job(func=turn_on, trigger='date', next_run_time=datetime.now() + timedelta(minutes=1))
scheduler.add_job(func=turn_off, trigger='date', next_run_time=datetime.now() + timedelta(minutes=2))
time.sleep(180)
</code></pre>
<p>You should only start the scheduler once.</p>
| 1 | 2016-10-14T17:45:27Z | [
"python"
] |
In OpenCV I've got a mask of an area of a frame. How would I then insert another image into that location on the original frame? | 40,046,741 | <p>I'm brand new to OpenCV and I can't seem to find a way to do this (It probably has to do with me not knowing any of the specific lingo).</p>
<p>I'm looping through the frames of a video and pulling out a mask from the video where it is green-screened using inRange. I'm looking for a way to then insert an image into that location on the original frame. The code i'm using to pull the frames/mask is below.</p>
<pre><code>import numpy as np
import cv2
cap = cv2.VideoCapture('vid.mov')
image = cv2.imread('photo.jpg')
# green digitally added not much variance
lower = np.array([0, 250, 0])
upper = np.array([10, 260, 10])
while(True):
# Capture frame-by-frame
ret, frame = cap.read()
cv2.imshow('frame', frame)
# get mask of green area
mask = cv2.inRange(frame, lower, upper)
cv2.imshow('mask', mask)
if cv2.waitKey(1) & 0xFF == ord('q'):
break
# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()
</code></pre>
| 0 | 2016-10-14T15:23:26Z | 40,047,718 | <p>Use Bitwise operations for masking and related binary operations. Please check below code to see how Bitwise operations are done.</p>
<p><strong>Code</strong></p>
<pre><code>import numpy as np
import cv2
cap = cv2.VideoCapture('../video.mp4')
image = cv2.imread('photo.jpg')
# green digitally added not much variance
lower = np.array([0, 250, 0])
upper = np.array([10, 260, 10])
while(True):
# Capture frame-by-frame
ret, frame = cap.read()
cv2.imshow('frame', frame)
# get mask of green area
mask = cv2.inRange(frame, lower, upper)
notMask = cv2.bitwise_not(mask)
imagePart=cv2.bitwise_and(image, image, mask = mask)
videoPart=cv2.bitwise_and(frame, frame, mask = notMask)
output = cv2.bitwise_or(imagePart, videoPart)
cv2.imshow('output', output)
if cv2.waitKey(1) & 0xFF == ord('q'):
break
# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()
</code></pre>
<p><strong>RGB bad color space</strong></p>
<p>Since, you are doing color processing, I would suggest you to use appropriate color space. HSV would be a good start for you. For finding good range of HSV values, try this <a href="https://github.com/saurabheights/ImageProcessingExperimentScripts/blob/master/AnalyzeHSV/hsvThresholder.py" rel="nofollow">script</a>.</p>
<p><strong>Generating Video Output</strong>
You need to create a video writer and, once all image processing is done, write the processed frame to a new video. I am pretty sure you cannot read and write to the same video file.</p>
<p>Further see official docs <a href="http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_gui/py_video_display/py_video_display.html" rel="nofollow">here</a>.</p>
<p>It has both examples of how to read and write video.</p>
| 0 | 2016-10-14T16:15:01Z | [
"python",
"opencv",
"video"
] |
How to replace a dataframe column with a numpy array column in Python? | 40,046,793 | <p>Here's what i have so far:</p>
<pre><code>data.shape: # data == my dataframe
(768, 9)
data2 = pd.DataFrame(data) # copy of data
array = data.values # convert data to arrays
X = array[:,0:8]
Y = array[:,8]
# perform a transformation on X
Xrescaled = scaler.transform(X)
</code></pre>
<p>How can i replace each column of the dataframe, <code>data2</code> with the <strong>corresponding</strong> column in the array, <code>Xrescaled</code>? Thanks.</p>
| 0 | 2016-10-14T15:25:43Z | 40,047,175 | <p>You can just do: <code>data2.iloc[:,:8] = Xrescaled</code>, here is a demo:</p>
<pre><code>import numpy as np
import pandas as pd

data = pd.DataFrame({'x': [1,2], 'y': [3,4], 'z': [5,6]})
data
# x y z
#0 1 3 5
#1 2 4 6
data2 = pd.DataFrame(data)
data2
# x y z
#0 1 3 5
#1 2 4 6
X = data.values[:,:2]
Xrescaled = X * 2
Xrescaled
# array([[2, 6],
# [4, 8]])
data2.iloc[:,:2] = Xrescaled
data2
# x y z
#0 2 6 5
#1 4 8 6
</code></pre>
| 1 | 2016-10-14T15:44:09Z | [
"python",
"arrays",
"pandas",
"numpy",
"dataframe"
] |
AttributeError: 'filter' object has no attribute 'replace' in Python 3 | 40,046,864 | <p>I have some problems with python 3.x. In python 2.x. I could use <code>replace</code> attr in <code>filter</code> obj, but now I cannot use this. Here is a section of my code:</p>
<pre><code>def uniq(seq):
seen = {}
return [seen.setdefault(x, x) for x in seq if x not in seen]
def partition(seq, n):
return [seq[i : i + n] for i in xrange(0, len(seq), n)]
def PlayFairCipher(key, from_ = 'J', to = None):
if to is None:
to = 'I' if from_ == 'J' else ''
def canonicalize(s):
return list(filter(str.isupper, s.upper()).replace(from_, to))
m = partition(uniq(canonicalize(key + ascii_uppercase)), 5)
enc = {}
for row in m:
for i, j in product(xrange(5), repeat=2):
if i != j:
enc[row[i] + row[j]] = row[(i + 1) % 5] + row[(j + 1) % 5]
for c in zip(*m):
for i, j in product(xrange(5), repeat=2):
if i != j:
enc[c[i] + c[j]] = c[(i + 1) % 5] + c[(j + 1) % 5]
for i1, j1, i2, j2 in product(xrange(5), repeat=4):
if i1 != i2 and j1 != j2:
enc[m[i1][j1] + m[i2][j2]] = m[i1][j2] + m[i2][j1]
def sub_enc(txt):
lst = findall(r"(.)(?:(?!\1)(.))?", canonicalize(txt))
return ''.join(enc[a + (b if b else 'X')] for a, b in lst)
return sub_enc
</code></pre>
<p>But when this compiled, I receive this:</p>
<pre><code>AttributeError: 'filter' object has no attribute 'replace'
</code></pre>
<p>How can I fix it?</p>
| -1 | 2016-10-14T15:28:57Z | 40,047,101 | <p>I think you can use a list comprehension:</p>
<pre><code>[c.replace(from_, to) for c in s.upper() if c.isupper()]
</code></pre>
<p>Is this what you want? There's a lot of code there so I might be missing something</p>
| 1 | 2016-10-14T15:40:42Z | [
"python",
"python-2.7",
"python-3.x",
"replace",
"filter"
] |
AttributeError: 'filter' object has no attribute 'replace' in Python 3 | 40,046,864 | <p>I have some problems with python 3.x. In python 2.x. I could use <code>replace</code> attr in <code>filter</code> obj, but now I cannot use this. Here is a section of my code:</p>
<pre><code>def uniq(seq):
seen = {}
return [seen.setdefault(x, x) for x in seq if x not in seen]
def partition(seq, n):
return [seq[i : i + n] for i in xrange(0, len(seq), n)]
def PlayFairCipher(key, from_ = 'J', to = None):
if to is None:
to = 'I' if from_ == 'J' else ''
def canonicalize(s):
return list(filter(str.isupper, s.upper()).replace(from_, to))
m = partition(uniq(canonicalize(key + ascii_uppercase)), 5)
enc = {}
for row in m:
for i, j in product(xrange(5), repeat=2):
if i != j:
enc[row[i] + row[j]] = row[(i + 1) % 5] + row[(j + 1) % 5]
for c in zip(*m):
for i, j in product(xrange(5), repeat=2):
if i != j:
enc[c[i] + c[j]] = c[(i + 1) % 5] + c[(j + 1) % 5]
for i1, j1, i2, j2 in product(xrange(5), repeat=4):
if i1 != i2 and j1 != j2:
enc[m[i1][j1] + m[i2][j2]] = m[i1][j2] + m[i2][j1]
def sub_enc(txt):
lst = findall(r"(.)(?:(?!\1)(.))?", canonicalize(txt))
return ''.join(enc[a + (b if b else 'X')] for a, b in lst)
return sub_enc
</code></pre>
<p>But when this compiled, I receive this:</p>
<pre><code>AttributeError: 'filter' object has no attribute 'replace'
</code></pre>
<p>How can I fix it?</p>
| -1 | 2016-10-14T15:28:57Z | 40,047,148 | <p>in python 2, <code>filter</code> returns a string if a string is passed as input.</p>
<blockquote>
<p>filter(...)
filter(function or None, sequence) -> list, tuple, or string</p>
<p>Return those items of sequence for which function(item) is true. If
function is None, return the items that are true. If sequence is a >tuple
or string, return the same type, else return a list.</p>
</blockquote>
<p>To simulate this behaviour in python 3 just do</p>
<pre><code>"".join(filter(str.isupper, s.upper()))
</code></pre>
<p>to convert the iterable to string, then you can perform the replace</p>
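<p>Putting it together, a small sketch reusing the question's <code>from_</code>/<code>to</code> parameters (the helper name is illustrative):</p>

```python
def canonicalize(s, from_="J", to="I"):
    # join the filter object back into a real str, then .replace works again
    return "".join(filter(str.isupper, s.upper())).replace(from_, to)

print(canonicalize("Playfair Cipher Key"))
print(canonicalize("Jazz"))
```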
| 0 | 2016-10-14T15:42:41Z | [
"python",
"python-2.7",
"python-3.x",
"replace",
"filter"
] |
Change date into fiscal month pandas python | 40,046,935 | <p>I am looking to change the date in my df into a fiscal month in a new column using python pandas.</p>
<p>This is an example</p>
<pre><code>17/01/2016 201601
18/01/2016 201602
</code></pre>
<p>Could you help me to get the required python code?</p>
| -1 | 2016-10-14T15:32:13Z | 40,048,038 | <p>You can do something like this:</p>
<pre><code>In [29]: df['fiscal_month'] = np.where(df.Date.dt.day < 18, df.Date, df.Date + pd.DateOffset(months=1))
In [30]: df
Out[30]:
Date new fiscal_month
0 2016-01-17 2016-01-17 2016-01-17
1 2016-01-18 2016-02-01 2016-02-18
In [31]: df.fiscal_month = df.fiscal_month.dt.strftime('%Y%m')
In [32]: df
Out[32]:
Date new fiscal_month
0 2016-01-17 2016-01-17 201601
1 2016-01-18 2016-02-01 201602
</code></pre>
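<p>For reference, here is that approach as a self-contained sketch on the two dates from the question. Note the day-18 cutoff is only inferred from your example (17/01 maps to 201601 and 18/01 to 201602), so adjust it to your actual fiscal calendar:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"Date": pd.to_datetime(["17/01/2016", "18/01/2016"], dayfirst=True)})

# day < 18 stays in the current fiscal month; day >= 18 rolls into the next month
fiscal = np.where(df.Date.dt.day < 18, df.Date, df.Date + pd.DateOffset(months=1))
df["fiscal_month"] = pd.to_datetime(fiscal).strftime("%Y%m")
print(df)
```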
| 1 | 2016-10-14T16:35:56Z | [
"python",
"pandas"
] |
What is the difference between installing python from the website and using brew? | 40,046,952 | <p>I have a Mac with OSX 10.11.6. I used brew to install python3. It installed python 3.5.2, but I need python 3.5.1. I've been googling, but can't figure out how I would install 3.5.1 via brew. So I went to python.org and downloaded the python-3.5.1-macosx10.6.pkg. I searched for how installing python this way would differ from installing it via brew, but couldn't find any answers. </p>
<p>So, it is possible to brew install python 3.5.1? If not, what will it mean to install 3.5.1 via .pkg file?</p>
| 2 | 2016-10-14T15:33:10Z | 40,137,410 | <blockquote>
<p>it is possible to brew install python 3.5.1?</p>
</blockquote>
<p>Yes it is. See <a href="http://stackoverflow.com/a/4158763/735926">this StackOverflow answer</a>.</p>
<blockquote>
<p>If not, what will it mean to install 3.5.1 via .pkg file?</p>
</blockquote>
<p>The most noticeable change will be that you won't be able to upgrade your Python installation without downloading the new version and installing it by hand (compared to <code>brew upgrade python3</code>). It'll also be <a href="http://stackoverflow.com/a/3819829/735926">slightly more complicated to remove</a> compared to <code>brew rm python3</code>.</p>
<p>Other than these minor differences you should have the same experience with both installations. Be sure that the <code>python</code> installed from <code>python-3.5.1-macosx10.6.pkg</code> is before Homebrew's in your <code>PATH</code> or use its full path.</p>
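<p>As a quick sanity check after adjusting your <code>PATH</code>, you can ask Python itself which binary and version you are actually getting (standard library only, works for either install):</p>

```python
import shutil
import sys

# full path of the interpreter currently running this script
print(sys.executable)
# its version string, e.g. "3.5.1"
print(sys.version.split()[0])
# the first "python3" found on PATH, i.e. what your shell would launch
print(shutil.which("python3"))
```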
| 0 | 2016-10-19T16:57:13Z | [
"python",
"osx",
"homebrew"
] |
Non-linear Least Squares Fitting (2-dimensional) in Python | 40,046,961 | <p>I was wondering what the correct approach to fitting datapoints to a non-linear function should be in python.</p>
<p>I am trying to fit a series of data-points</p>
<pre><code>t = [0., 0.5, 1., 1.5, ...., 4.]
y = [6.3, 4.5,.................]
</code></pre>
<p>using the following model function</p>
<pre><code>f(t, x) = x1*e^(x2*t)
</code></pre>
<p>I was mainly wondering which library routine is appropriate for this problem and how it should be setup. I tried using the following with unsuccessful results:</p>
<pre><code>t_data = np.array([0.5, 1.0, 1.5, 2.0,........])
y_data = np.array([6.8, 3., 1.5, 0.75........])
def func_nl_lsq(x, t, y):
return [x[0]*np.exp(x[1]*t)] - y
popt, pcov = scipy.optimize.curve_fit(func_nl_lsq, t_data, y_data)
</code></pre>
<p>I know it's unsuccessful because I am able to solve the "equivalent" linear least squares problem (simply obtained by taking the log of the model function) and its answer doesn't even come close to the one I am getting by doing the above.</p>
<p>Thank you</p>
| 1 | 2016-10-14T15:33:29Z | 40,047,217 | <p><code>scipy.otimize.curve_fit</code> can be used to fit the data. I think you just don't use it properly. I assume you have a given <code>t</code> and <code>y</code> and try to fit a function of the form <code>x1*exp(x2*t) = y</code>.</p>
<p>You need</p>
<pre><code>ydata = f(xdata, *params) + eps
</code></pre>
<p>This means your function is not defined properly. Your function should probably look like</p>
<pre><code>def func_nl_lsq(t, x1, x2):
return x1*np.exp(x2*t)
</code></pre>
<p>depending what you really want to fit. Here x1 and x2 are your fitting parameter. It is also possible to do</p>
<pre><code>def func_nl_lsq(t, x):
return x[0]*np.exp(x[1]*t)
</code></pre>
<p>but you likely need to provide an initial guess p0.</p>
| 0 | 2016-10-14T15:46:32Z | [
"python",
"numpy",
"optimization",
"scipy",
"linear-algebra"
] |
Non-linear Least Squares Fitting (2-dimensional) in Python | 40,046,961 | <p>I was wondering what the correct approach to fitting datapoints to a non-linear function should be in python.</p>
<p>I am trying to fit a series of data-points</p>
<pre><code>t = [0., 0.5, 1., 1.5, ...., 4.]
y = [6.3, 4.5,.................]
</code></pre>
<p>using the following model function</p>
<pre><code>f(t, x) = x1*e^(x2*t)
</code></pre>
<p>I was mainly wondering which library routine is appropriate for this problem and how it should be setup. I tried using the following with unsuccessful results:</p>
<pre><code>t_data = np.array([0.5, 1.0, 1.5, 2.0,........])
y_data = np.array([6.8, 3., 1.5, 0.75........])
def func_nl_lsq(x, t, y):
return [x[0]*np.exp(x[1]*t)] - y
popt, pcov = scipy.optimize.curve_fit(func_nl_lsq, t_data, y_data)
</code></pre>
<p>I know it's unsuccessful because I am able to solve the "equivalent" linear least squares problem (simply obtained by taking the log of the model function) and its answer doesn't even come close to the one I am getting by doing the above.</p>
<p>Thank you</p>
| 1 | 2016-10-14T15:33:29Z | 40,047,454 | <p>If you are using <code>curve_fit</code> you can simplify it quite a bit, with no need to compute the error inside your function:</p>
<pre><code>from scipy.optimize import curve_fit
import numpy as np
import matplotlib.pyplot as plt
t_data = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.])
y_data = np.array([6.8, 3., 1.5, 0.75, 0.25, 0.1])
def func_nl_lsq(t, *args):
a, b = args
return a*np.exp(b*t)
popt, pcov = curve_fit(func_nl_lsq, t_data, y_data, p0=[1, 1])
plt.plot(t_data, y_data, 'o')
plt.plot(t_data, func_nl_lsq(t_data, *popt), '-')
plt.show()
</code></pre>
<h2>EDIT</h2>
<p>Note I'm using a general signature that accepts <code>*args</code>. In order for this to work you must pass <code>p0</code> to <code>curve_fit</code>.</p>
<p>The conventional approach is shown below:</p>
<pre><code>def func_nl_lsq(t, a, b):
return a*np.exp(b*t)
popt, pcov = curve_fit(func_nl_lsq, t_data, y_data)
a, b = popt
plt.plot(t_data, func_nl_lsq(t_data, a, b), '-')
</code></pre>
<p><a href="https://i.stack.imgur.com/idyoY.png" rel="nofollow"><img src="https://i.stack.imgur.com/idyoY.png" alt="Example"></a></p>
| 1 | 2016-10-14T15:59:07Z | [
"python",
"numpy",
"optimization",
"scipy",
"linear-algebra"
] |
Non-linear Least Squares Fitting (2-dimensional) in Python | 40,046,961 | <p>I was wondering what the correct approach to fitting datapoints to a non-linear function should be in python.</p>
<p>I am trying to fit a series of data-points</p>
<pre><code>t = [0., 0.5, 1., 1.5, ...., 4.]
y = [6.3, 4.5,.................]
</code></pre>
<p>using the following model function</p>
<pre><code>f(t, x) = x1*e^(x2*t)
</code></pre>
<p>I was mainly wondering which library routine is appropriate for this problem and how it should be setup. I tried using the following with unsuccessful results:</p>
<pre><code>t_data = np.array([0.5, 1.0, 1.5, 2.0,........])
y_data = np.array([6.8, 3., 1.5, 0.75........])
def func_nl_lsq(x, t, y):
return [x[0]*np.exp(x[1]*t)] - y
popt, pcov = scipy.optimize.curve_fit(func_nl_lsq, t_data, y_data)
</code></pre>
<p>I know it's unsuccessful because I am able to solve the "equivalent" linear least squares problem (simply obtained by taking the log of the model function) and its answer doesn't even come close to the one I am getting by doing the above.</p>
<p>Thank you</p>
| 1 | 2016-10-14T15:33:29Z | 40,047,570 | <p>First, you are using the wrong function. Your function <code>func_nl_lsq</code> calculates the residual, it is not the model function. To use <code>scipy.otimize.curve_fit</code>, you have to define model function, as answers by @DerWeh and @saullo_castro suggest. You still can use custom residual function as you like with <code>scipy.optimize.least_squares</code> instead of <code>scipy.optimize.curve_fit</code>.</p>
<pre><code>t_data = np.array([0.5, 1.0, 1.5, 2.0])
y_data = np.array([6.8, 3., 1.5, 0.75])
def func_nl_lsq(x, t=t_data, y=y_data):
return x[0]*np.exp(x[1]*t) - y
# removed one level of []'s
scipy.optimize.least_squares(func_nl_lsq, [0, 0])
</code></pre>
<p>Also, please note, that the remark by @MadPhysicist is correct: the two problems you are considering (the initial problem and the problem where model function is under logarithm) are not equivalent to each other. Note that if you apply logarithm to your model function, you apply it also to the residuals, and <em>residual sum of squares</em> now means something different. This lead to different optimization problem and different results.</p>
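<p>To make that non-equivalence concrete, here is the linearized (log-space) fit sketched with numpy only; its parameters generally differ from the direct nonlinear fit because it minimizes the squared error of <code>log(y)</code>, not of <code>y</code>:</p>

```python
import numpy as np

t = np.array([0.5, 1.0, 1.5, 2.0])
y = np.array([6.8, 3.0, 1.5, 0.75])

# log(y) = log(x1) + x2*t is linear in t, so ordinary least squares applies
x2, log_x1 = np.polyfit(t, np.log(y), 1)
print("x1 =", np.exp(log_x1), "x2 =", x2)
```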
| 0 | 2016-10-14T16:05:58Z | [
"python",
"numpy",
"optimization",
"scipy",
"linear-algebra"
] |
What type of data that is separated by \ and is in hex? | 40,047,029 | <p>I have a data set that is pulled from a pixhawk. I am trying to parse this data and plot some of them vs time. The issue is when I use this code to open one of the bin files:</p>
<pre><code>with open("px4log.bin", "rb") as binary_file:
# Read the whole file at once
data = binary_file.read()
print(data)
</code></pre>
<p>I get data that looks like this:</p>
<pre><code>b'\xa3\x95\x80\x80YFMT\x00BBnNZ\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00Type,Length,Name,Format,Columns\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xa3\x95\x80\x81\x17PARMNf\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00Name,Value\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xa3\x95\x80\x82-GPS\x00BIHBcLLeeEefI\x00\x00\x00Status,TimeMS,Week,NSats,HDop,Lat,Lng,RelAlt,Alt,Spd,GCrs,VZ,T\x00\x00\xa3\x95\x80\x83\x1fIMU\x00Iffffff\x00\x00\x00\x00\x00\x00\x00\x00\x00TimeMS,GyrX,GyrY,GyrZ,AccX,AccY,AccZ\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0
</code></pre>
<p>I was told it is supposed to be in binary, but it's not. Unless python is doing something to the data set when it is opening it?</p>
<p>You can download this sample data set if you would from:</p>
<pre><code>https://pixhawk.org/_media/downloads/px4log_sample_1.px4log.zip
</code></pre>
| 2 | 2016-10-14T15:36:44Z | 40,047,273 | <p>Python is showing you the binary data represented in <a href="https://en.wikipedia.org/wiki/Hexadecimal" rel="nofollow">hexadecimal</a> when the characters do not correspond with a regular <a href="https://en.wikipedia.org/wiki/ASCII" rel="nofollow">ascii</a> character. For example <code>\xa3</code> is a byte of hexidecimal value <code>A3</code> which is <code>10100011</code> in binary. <code>T</code> on the other hand could be printed as <code>\x54</code> which is a byte of binary value <code>01010100</code>. Since you used the <code>print</code> function, python assumes you are trying to convert the binary data to a human readable string, so instead of <code>\x54</code> it showed the corresponding character <code>T</code>.</p>
<p>You can use the following code to get an array of binary strings that represent your data:</p>
<pre><code>data = '\xa3\x95\x80\x80YFMT\x00BBnNZ\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00Type,Length,Name,Format,Columns'
decimalArray = map(ord,data)
byteArray = map(lambda x: "{0:b}".format(x), decimalArray)
print byteArray
</code></pre>
<p>Here is the output:</p>
<pre><code>['10100011', '10010101', '10000000', '10000000', '1011001', '1000110', '1001101', '1010100', '0', '1000010', '1000010', '1101110', '1001110', '1011010', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '1010100', '1111001', '1110000', '1100101', '101100', '1001100', '1100101', '1101110', '1100111', '1110100', '1101000', '101100', '1001110', '1100001', '1101101', '1100101', '101100', '1000110', '1101111', '1110010', '1101101', '1100001', '1110100', '101100', '1000011', '1101111', '1101100', '1110101', '1101101', '1101110', '1110011']
</code></pre>
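<p>If you are on Python 3, iterating over a <code>bytes</code> object yields integers directly, so the same conversion is a bit shorter (the sample below is just the start of the data shown above, zero-padded to 8 digits):</p>

```python
data = b'\xa3\x95\x80\x80YFMT\x00'

# in Python 3 each element of a bytes object is already an int, so no ord();
# "08b" pads every byte to a full 8 binary digits
byte_strings = [format(b, "08b") for b in data]
print(byte_strings)
```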
| 1 | 2016-10-14T15:49:23Z | [
"python"
] |
while loop works without condition | 40,047,062 | <p>As far as I know, while statement requires condition to work, but here it works without any; how's that possible? How does <code>while q:</code> work?</p>
<p>The code is below:
...</p>
<pre><code>q = set([])
for i in range(N):
q.add((i, 0))
q.add((i, M - 1))
w[i][0] = h[i][0]
w[i][M - 1] = h[i][M - 1]
for i in range(M):
q.add((0, i))
q.add((N - 1, i))
w[0][i] = h[0][i]
w[N - 1][i] = h[N - 1][i]
while q:
ci, cj = q.pop()
for ii, jj in ((0, 1), (0, -1), (1, 0), (-1, 0)):
ni, nj = ci + ii, cj + jj
if 0 <= ni < N and 0 <= nj < M:
if w[ni][nj] != h[ni][nj] and (w[ni][nj] is None or w[ni][nj] > w[ci][cj]):
w[ni][nj] = max(h[ni][nj], w[ci][cj])
q.add((ni, nj))
</code></pre>
| 1 | 2016-10-14T15:38:09Z | 40,047,099 | <p>There is a condition.
The loop will iterate as long as there is an element in set q.</p>
<p>You would get a similar effect if you wrote:</p>
<pre><code>while len(q):
# do something
</code></pre>
<p>or even</p>
<pre><code>while len(q) > 0:
#do something
</code></pre>
<p>However, these expressions could be viewed as perhaps a little redundant.</p>
| 1 | 2016-10-14T15:40:26Z | [
"python",
"while-loop",
"do-while"
] |
while loop works without condition | 40,047,062 | <p>As far as I know, while statement requires condition to work, but here it works without any; how's that possible? How does <code>while q:</code> work?</p>
<p>The code is below:
...</p>
<pre><code>q = set([])
for i in range(N):
q.add((i, 0))
q.add((i, M - 1))
w[i][0] = h[i][0]
w[i][M - 1] = h[i][M - 1]
for i in range(M):
q.add((0, i))
q.add((N - 1, i))
w[0][i] = h[0][i]
w[N - 1][i] = h[N - 1][i]
while q:
ci, cj = q.pop()
for ii, jj in ((0, 1), (0, -1), (1, 0), (-1, 0)):
ni, nj = ci + ii, cj + jj
if 0 <= ni < N and 0 <= nj < M:
if w[ni][nj] != h[ni][nj] and (w[ni][nj] is None or w[ni][nj] > w[ci][cj]):
w[ni][nj] = max(h[ni][nj], w[ci][cj])
q.add((ni, nj))
</code></pre>
| 1 | 2016-10-14T15:38:09Z | 40,047,147 | <p>A <code>while</code> loop evaluates any expression it is given as a boolean. Pretty much everything in Python has a boolean value. Empty containers, such as <code>set()</code> generally evaluate to <code>False</code>, while non-empty containers, such as a set with at least one element, evaluate to <code>True</code>.</p>
<p><code>while q:</code> can therefore be read as "loop while <code>q</code> is not empty", i.e., "loop as long as <code>q</code> does not evaluate to boolean <code>False</code>".</p>
<p>As an aside, instances of any class you write will usually evaluate to <code>True</code>. You can modify this by implementing a <code>__bool__</code> method in your class that returns something else.</p>
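<p>For instance, a minimal sketch (the class name is illustrative) of customizing truthiness with <code>__bool__</code>:</p>

```python
class Worklist:
    def __init__(self):
        self.items = set()

    def __bool__(self):
        # an empty worklist is falsy, so `while wl:` stops once it is drained
        return bool(self.items)

wl = Worklist()
print(bool(wl))
wl.items.add((0, 0))
print(bool(wl))
```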
<p>Also, <code>q = set()</code> works fine. No need for <code>q = set([])</code>.</p>
| 6 | 2016-10-14T15:42:41Z | [
"python",
"while-loop",
"do-while"
] |
Starting 2nd for loop based on first | 40,047,075 | <p>I'm trying to write a program that will calculate the future value of a tuition. Tuition today is $10000, with an increase of 5% each year. I am trying to write a program that tells me the cost of tuition in year 10, as well as total tuition for years 10-13 combined. </p>
<p>I am almost certain that I am on the right track with writing 2 for loops, but my program will not run. </p>
<pre><code>def tuition():
tuition_cost=10000
increase=1.05
running_total=0
#first loop includes tuition for years 1-10
#update tuition for year 10
for year in range (1,11,1):
tuition_cost=((tuition_cost*(increase**year))
print(tuition_cost)
for year in range (10,14,1):
tuition_cost=(tuition_cost*(increase**year))
running_total=running_total+tuition_cost
print(running_total)
tuition()
</code></pre>
<p>Does anyone have any suggestions?</p>
| 0 | 2016-10-14T15:38:49Z | 40,047,164 | <p>Try this one:</p>
<pre><code>def tuition():
tuition_cost=10000
increase=1.05
print('the cost for the year 10:', tuition_cost*(increase**10))
running_total = 0
for year in range(10):
running_total += tuition_cost*(increase**year)
print('the cost for 10 years:', running_total)
for year in range(10,14,1):
running_total += tuition_cost*(increase**year)
print('the cost for 14 years:', running_total)
tuition()
</code></pre>
| 0 | 2016-10-14T15:43:38Z | [
"python",
"for-loop",
"nested",
"nested-loops"
] |
Starting 2nd for loop based on first | 40,047,075 | <p>I'm trying to write a program that will calculate the future value of a tuition. Tuition today is $10000, with an increase of 5% each year. I am trying to write a program that tells me the cost of tuition in year 10, as well as total tuition for years 10-13 combined. </p>
<p>I am almost certain that I am on the right track with writing 2 for loops, but my program will not run. </p>
<pre><code>def tuition():
tuition_cost=10000
increase=1.05
running_total=0
#first loop includes tuition for years 1-10
#update tuition for year 10
for year in range (1,11,1):
tuition_cost=((tuition_cost*(increase**year))
print(tuition_cost)
for year in range (10,14,1):
tuition_cost=(tuition_cost*(increase**year))
running_total=running_total+tuition_cost
print(running_total)
tuition()
</code></pre>
<p>Does anyone have any suggestions?</p>
| 0 | 2016-10-14T15:38:49Z | 40,047,573 | <p>I think, your program should look like this:</p>
<pre><code>tuition_cost = 10000
increase = 1.05
running_total = 0
for year in range(0, 11):
price_for_year = tuition_cost*(increase**year)
print(price_for_year)
for year in range(10, 14):
        price_for_year = tuition_cost*(increase**year)
        running_total += price_for_year
print(running_total)
</code></pre>
| 0 | 2016-10-14T16:06:15Z | [
"python",
"for-loop",
"nested",
"nested-loops"
] |
Pandas: Get corresponding column value in row based on unique value | 40,047,224 | <p>I've figured out how to get the information I want, but I would be surprised if there is not a better, more readable way to do so.</p>
<p>I want to get the value in a different column in the row that holds the data element I specify. For example, what is the value in 'b' that corresponds to the value of 10 in 'a'.</p>
<pre><code>>>> df
a b c
0 10 20 30
1 11 21 31
2 12 22 32
>>> df['b'][df[df['a'] == 11].index.tolist()].tolist()
[21]
</code></pre>
<p>This is how I currently solved it, but in practice my dataframes are not named so concisely and I have long strings as column names so the line gets hard to read.</p>
<p>EDIT: If the value in 'a' is not unique is there also a way to get all corresponding values in 'b'?</p>
| 1 | 2016-10-14T15:46:58Z | 40,047,354 | <p>You can use a boolean mask with <code>loc</code> to return all rows where the boolean condition is met, here we mask the df with the condition where 'a' == 11, and where this is met return all values for 'b':</p>
<pre><code>In [120]:
df = pd.DataFrame({'a':[10,11,11],'b':np.arange(3), 'c':np.random.randn(3)})
df
Out[120]:
a b c
0 10 0 -1.572926
1 11 1 -0.639703
2 11 2 -1.282575
In [121]:
df.loc[df['a'] == 11,'b']
Out[121]:
1 1
2 2
Name: b, dtype: int32
</code></pre>
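<p>And if you need the result back as a plain list (or as a single scalar when the value in 'a' is unique), something along these lines should work:</p>

```python
import pandas as pd

df = pd.DataFrame({'a': [10, 11, 11], 'b': [0, 1, 2]})
matches = df.loc[df['a'] == 11, 'b'].tolist()  # all matching values as a list
single = df.loc[df['a'] == 10, 'b'].iloc[0]    # first (here: only) match as a scalar
```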
| 2 | 2016-10-14T15:53:18Z | [
"python",
"pandas"
] |
Classification of continious data | 40,047,228 | <p>I've got a Pandas df that I use for Machine Learning in Scikit for Python.
One of the columns is a target value which is continuous data (varying from -10 to +10).</p>
<p>From the target-column, I want to calculate a new column with 5 classes where the number of rows per class is the same, i.e. if I have 1000 rows I want to distribute into 5 classes with roughly 200 in each class.</p>
<p>So far, I have done this in Excel, separate from my Python code, but as the data has grown it's getting unpractical.</p>
<p>In Excel I have calculated the percentiles and then used some logic to build the classes.</p>
<p>How to do this in Python?</p>
| 0 | 2016-10-14T15:47:10Z | 40,048,581 | <pre><code>#create data
import numpy as np
import pandas as pd
df = pd.DataFrame(20*np.random.rand(50, 1)-10, columns=['target'])
#find quantiles
quantiles = df['target'].quantile([.2, .4, .6, .8])
#labeling of groups
df['group'] = 5
df['group'][df['target'] < quantiles[.8]] = 4
#use .loc to avoid chained-indexing assignment warnings
df.loc[df['target'] < quantiles[.8], 'group'] = 4
df.loc[df['target'] < quantiles[.6], 'group'] = 3
df.loc[df['target'] < quantiles[.4], 'group'] = 2
df.loc[df['target'] < quantiles[.2], 'group'] = 1
| 0 | 2016-10-14T17:11:36Z | [
"python",
"scikit-learn",
"percentile"
] |
Classification of continious data | 40,047,228 | <p>I've got a Pandas df that I use for Machine Learning in Scikit for Python.
One of the columns is a target value which is continuous data (varying from -10 to +10).</p>
<p>From the target-column, I want to calculate a new column with 5 classes where the number of rows per class is the same, i.e. if I have 1000 rows I want to distribute into 5 classes with roughly 200 in each class.</p>
<p>So far, I have done this in Excel, separate from my Python code, but as the data has grown it's getting unpractical.</p>
<p>In Excel I have calculated the percentiles and then used some logic to build the classes.</p>
<p>How to do this in Python?</p>
| 0 | 2016-10-14T15:47:10Z | 40,060,171 | <p>Cannot get this to work. </p>
<p>Anybody know how to do this in qcut instead? </p>
<p>I have tried the following:
df['5D'] = pd.qcut(df['target'], 5, labels=None, retbins=True, precision=3)
(5D is a new column with class 1-5 and target is a column in a dataframe.</p>
<p>I get the following error: ValueError: Length of values does not match length of index</p>
| 0 | 2016-10-15T14:21:42Z | [
"python",
"scikit-learn",
"percentile"
] |
Reading csv from url and pushing it in DB through pandas | 40,047,291 | <p>The URL gives a csv formatted data. I am trying to get the data and push it in database. However, I am unable to read data as it only prints header of the file and not complete csv data. Could there be better option?</p>
<pre><code>#!/usr/bin/python3
import pandas as pd
data = pd.read_csv("some-url") //URL not provided due to security restrictions.
for row in data:
print(row)
</code></pre>
| 2 | 2016-10-14T15:50:20Z | 40,047,450 | <p>You can iterate through the results of <code>df.to_dict(orient="records")</code>:</p>
<pre><code>data = pd.read_csv("some-url")
for row in data.to_dict(orient="records"):
# For each loop, `row` will be filled with a key:value dict where each
# key takes the value of the column name.
# Use this dict to create a record for your db insert, eg as raw SQL or
# to create an instance for an ORM like SQLAlchemy.
</code></pre>
<p>I do a similar thing to pre-format data for SQLAlchemy inserts, although I'm using Pandas to merge data from multiple sources rather than just reading the file.</p>
<p><em>Side note: There will be plenty of other ways to do this without Pandas and just iterate through the lines of the file. However Pandas's intuituve handling of CSVs makes it an attractive shortcut to do what you need.</em></p>
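<p>A minimal illustration of what <code>to_dict(orient="records")</code> hands you per loop iteration (toy data, not your CSV):</p>

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 2], 'b': ['x', 'y']})
records = df.to_dict(orient='records')
# each row becomes one {column_name: value} dict
```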
| 3 | 2016-10-14T15:58:58Z | [
"python",
"python-3.x",
"pandas",
"python-requests"
] |
Creating .csv based on values in dictionary | 40,047,315 | <p>I'm trying to parse an XML file, return the values and put it into a .csv file. I have the following code so far:</p>
<pre><code>for shift_i in shift_list :
# Iterates through all values in 'shift_list' for later comparison to ensure all tags are only counted once
for node in tree.xpath("//Data/Status[@Name and @Reason]"):
# Iterates through all nodes containing a 'Name' and 'Reason' attribute
state = node.attrib["Name"]
reason = node.attrib["Reason"]
end = node.attrib["End"]
start = node.attrib[u'Start']
# Sets each of the attribute values to the name of the attribute all lowercase
try:
shift = node.attrib[u'Shift']
except:
continue
# Tries to set shift attribute value to 'shift' variable, sometimes fails if no Shift attribute is present
if shift == shift_i :
# If the Shift attribute is equal to the current iteration from the 'shift_list', takes the difference of start and end and appends that value to the list with the given Name, Reason, and Shift
tdelta = datetime.strptime(end, FMT) - datetime.strptime(start, FMT)
d[state, reason, shift].append((tdelta.total_seconds()) / 60)
for node in tree.xpath("//Data/Status[not(@Reason)]"):
# Iterates through Status nodes with no Reason attribute
state = node.attrib["Name"]
end = node.attrib["End"]
start = node.attrib[u'Start']
# Sets each of the attribute values to the name of the attribute all lowercase
try:
shift = node.attrib[u'Shift']
except:
continue
# Tries to set shift attribute value to 'shift' variable, sometimes fails if no Shift
# attribute is present
if shift == shift_i:
# If the Shift attribute is equal to the current iteration from the 'shift_list',
# takes the difference of start and end and appends that value to the list with
# the given Name, "No Reason" string, and Shift
tdelta = datetime.strptime(end, FMT) - datetime.strptime(start, FMT)
d[state, 'No Reason', shift].append((tdelta.total_seconds()) / 60)
for item in d :
# Iterates through all items of d
d[item] = sum(d[item])
# Sums all values related to 'item' and replaces value in dictionary
a.update(d)
# Current keys and values in temporary dictionary 'd' to permanent
# dictionary 'a' for further analysis
d.clear()
# Clears dictionary d of current iterations keys and values to start fresh for next
# iteration. If this is not done, d[item] = sum(d[item]) returns
# "TypeError: 'float' object is not iterable"
</code></pre>
<p>This creates a dictionary with values that look like this:</p>
<pre><code>{('Name1','Reason','Shift'):Value,('Name2','Reason','Shift'):Value....}
</code></pre>
<p>print(a) returns this</p>
<pre><code>defaultdict(<class 'list'>, {('Test Run', 'No Reason', 'Night'): 5.03825, ('Slow Running', 'No Reason', 'Day'): 10.72996666666667, ('Idle', 'Shift Start Up', 'Day'): 5.425433333333333, ('Idle', 'Unscheduled', 'Afternoon'): 470.0, ('Idle', 'Early Departure', 'Day'): 0.32965, ('Idle', 'Break Creep', 'Day'): 24.754250000000003, ('Idle', 'Break', 'Day'): 40.0, ('Micro Stoppage', 'No Reason', 'Day'): 39.71673333333333, ('Idle', 'Unscheduled', 'Night'): 474.96175, ('Running', 'No Reason', 'Day'): 329.4991500000004, ('Idle', 'No Reason', 'Day'): 19.544816666666666})
</code></pre>
<p>I want to create a .csv that has columns of 'Names'+'Reasons' with the totals, and the rows are described by the 'Shift'. Like this:</p>
<pre><code> Name1-Reason Name2-Reason Name3-Reason Name4-Reason
Shift1 value value value value
Shift2 value value value value
Shift3 value value value value
</code></pre>
<p>I'm not sure how to go about doing this. I tried using nested Dicts to better describe my data but I got a TypeError when using </p>
<pre><code>d[state][reason][shift].append((tdelta.total_seconds()) / 60)
</code></pre>
<p>If there is a better way to do this please let me know, I'm a very new to this and would love to hear all advice.</p>
| 1 | 2016-10-14T15:51:38Z | 40,047,551 | <p>I would use the DictWriter method of the csv package to write your csv file. For that you need to have a list of dictionaries. Each list item is a <code>shift</code> and is represented by a dictionary with keys <code>name</code> & <code>reason</code>. It should look like the following:</p>
<p><code>[{'Name1':value1, 'Name2':value2}, {'Name1':value3, 'Name2':value4}]</code></p>
| 1 | 2016-10-14T16:04:29Z | [
"python",
"csv",
"dictionary"
] |
Creating .csv based on values in dictionary | 40,047,315 | <p>I'm trying to parse an XML file, return the values and put it into a .csv file. I have the following code so far:</p>
<pre><code>for shift_i in shift_list :
# Iterates through all values in 'shift_list' for later comparison to ensure all tags are only counted once
for node in tree.xpath("//Data/Status[@Name and @Reason]"):
# Iterates through all nodes containing a 'Name' and 'Reason' attribute
state = node.attrib["Name"]
reason = node.attrib["Reason"]
end = node.attrib["End"]
start = node.attrib[u'Start']
# Sets each of the attribute values to the name of the attribute all lowercase
try:
shift = node.attrib[u'Shift']
except:
continue
# Tries to set shift attribute value to 'shift' variable, sometimes fails if no Shift attribute is present
if shift == shift_i :
# If the Shift attribute is equal to the current iteration from the 'shift_list', takes the difference of start and end and appends that value to the list with the given Name, Reason, and Shift
tdelta = datetime.strptime(end, FMT) - datetime.strptime(start, FMT)
d[state, reason, shift].append((tdelta.total_seconds()) / 60)
for node in tree.xpath("//Data/Status[not(@Reason)]"):
# Iterates through Status nodes with no Reason attribute
state = node.attrib["Name"]
end = node.attrib["End"]
start = node.attrib[u'Start']
# Sets each of the attribute values to the name of the attribute all lowercase
try:
shift = node.attrib[u'Shift']
except:
continue
# Tries to set shift attribute value to 'shift' variable, sometimes fails if no Shift
# attribute is present
if shift == shift_i:
# If the Shift attribute is equal to the current iteration from the 'shift_list',
# takes the difference of start and end and appends that value to the list with
# the given Name, "No Reason" string, and Shift
tdelta = datetime.strptime(end, FMT) - datetime.strptime(start, FMT)
d[state, 'No Reason', shift].append((tdelta.total_seconds()) / 60)
for item in d :
# Iterates through all items of d
d[item] = sum(d[item])
# Sums all values related to 'item' and replaces value in dictionary
a.update(d)
# Current keys and values in temporary dictionary 'd' to permanent
# dictionary 'a' for further analysis
d.clear()
# Clears dictionary d of current iterations keys and values to start fresh for next
# iteration. If this is not done, d[item] = sum(d[item]) returns
# "TypeError: 'float' object is not iterable"
</code></pre>
<p>This creates a dictionary with values that look like this:</p>
<pre><code>{('Name1','Reason','Shift'):Value,('Name2','Reason','Shift'):Value....}
</code></pre>
<p>print(a) returns this</p>
<pre><code>defaultdict(<class 'list'>, {('Test Run', 'No Reason', 'Night'): 5.03825, ('Slow Running', 'No Reason', 'Day'): 10.72996666666667, ('Idle', 'Shift Start Up', 'Day'): 5.425433333333333, ('Idle', 'Unscheduled', 'Afternoon'): 470.0, ('Idle', 'Early Departure', 'Day'): 0.32965, ('Idle', 'Break Creep', 'Day'): 24.754250000000003, ('Idle', 'Break', 'Day'): 40.0, ('Micro Stoppage', 'No Reason', 'Day'): 39.71673333333333, ('Idle', 'Unscheduled', 'Night'): 474.96175, ('Running', 'No Reason', 'Day'): 329.4991500000004, ('Idle', 'No Reason', 'Day'): 19.544816666666666})
</code></pre>
<p>I want to create a .csv that has columns of 'Names'+'Reasons' with the totals, and the rows are described by the 'Shift'. Like this:</p>
<pre><code> Name1-Reason Name2-Reason Name3-Reason Name4-Reason
Shift1 value value value value
Shift2 value value value value
Shift3 value value value value
</code></pre>
<p>I'm not sure how to go about doing this. I tried using nested Dicts to better describe my data but I got a TypeError when using </p>
<pre><code>d[state][reason][shift].append((tdelta.total_seconds()) / 60)
</code></pre>
<p>If there is a better way to do this please let me know, I'm a very new to this and would love to hear all advice.</p>
| 1 | 2016-10-14T15:51:38Z | 40,053,849 | <p>I think the following may do what you want or at least be close. One important consideration that was ignored by the way you say the CSV file should be formatted, is the fact that each row must have a <code>Name-Reason</code> column for <em>every possible</em> combination of the two, even if weren't any of that particular mixture in one or more of the shift rows â because, well, that just that's how the CSV file format works.</p>
<pre><code>from collections import defaultdict
import csv
# Dictionary keys are (Name, Reason, Shift)
d = {('Test Run', 'No Reason', 'Night'): 5.03825,
('Slow Running', 'No Reason', 'Day'): 10.72996666666667,
('Idle', 'Shift Start Up', 'Day'): 5.425433333333333,
('Idle', 'Unscheduled', 'Afternoon'): 470.0,
('Idle', 'Early Departure', 'Day'): 0.32965,
('Idle', 'Break Creep', 'Day'): 24.754250000000003,
('Idle', 'Break', 'Day'): 40.0,
('Micro Stoppage', 'No Reason', 'Day'): 39.71673333333333,
('Idle', 'Unscheduled', 'Night'): 474.96175,
('Running', 'No Reason', 'Day'): 329.4991500000004,
('Idle', 'No Reason', 'Day'): 19.544816666666666}
# Transfer data to a defaultdict of dicts.
dd = defaultdict(dict)
for (name, reason, shift), value in d.items():
name_reason = name + '-' + reason # Merge together to form lower level keys
dd[shift][name_reason] = value
# Create a csv file from the data in the defaultdict.
ABSENT = '---' # Placeholder for empty fields
name_reasons = sorted({name_reason for shift in dd.keys()
                       for name_reason in dd[shift]})  # set avoids duplicate columns
with open('dict.csv', 'wb') as csv_file:
    writer = csv.writer(csv_file, delimiter=',')
    writer.writerow(['Shift'] + name_reasons)  # column headers row
    for shift in sorted(dd):
        row = [shift] + [dd[shift].get(name_reason, ABSENT)
                         for name_reason in name_reasons]
        writer.writerow(row)
</code></pre>
<p>Here's the contents of the <code>dict.csv</code> file the code above creates:</p>
<pre class="lang-none prettyprint-override"><code>Shift,Idle-Break,Idle-Break Creep,Idle-Early Departure,Idle-No Reason,Idle-Shift Start Up,Idle-Unscheduled,Micro Stoppage-No Reason,Running-No Reason,Slow Running-No Reason,Test Run-No Reason
Afternoon,---,---,---,---,---,470.0,---,---,---,---
Day,40.0,24.754250000000003,0.32965,19.544816666666666,5.425433333333333,---,39.71673333333333,329.4991500000004,10.72996666666667,---
Night,---,---,---,---,---,474.96175,---,---,---,5.03825
</code></pre>
| 1 | 2016-10-15T00:44:03Z | [
"python",
"csv",
"dictionary"
] |
SOAP/WSDL web services and Python nowadays | 40,047,359 | <p>First of all: Yes, I know that there are plenty of SOAP/WSDL/Python Questions. And no, none of the answers I found was really helpful (anymore).</p>
<p>Secondly: Yes, I wouldn't use SOAP/WSDL anymore if I wouldn't need to. Unfortunately there are still huge software companies only offering web service through this interface. And I have to communicate with such a system. The specific company suggests the usage of PHP but I'm not really a PHP fan when it comes to serious things. I know that there seem to be good SOAP solutions for Java but Java is no option in this context.</p>
<p>The problem: There exists a multitude of SOAP packages for Python and quite some of them support WSDL. Foremost SOAPpy and ZSI. Unfortunately they usually depend on PyXML, which isn't compatible to recent Python versions anymore. I'm fine with Python 3 or Python 2.7, but nothing previous to that.</p>
<p>Since I don't want to ride a dead horse: Are there still any solutions to use SOAP / WSDL within current Python versions?</p>
| 0 | 2016-10-14T15:53:38Z | 40,085,258 | <p>Check out: <a href="http://stackoverflow.com/questions/7817303/what-soap-libraries-exist-for-python-3-x">What SOAP libraries exist for Python 3.x?</a></p>
<p>I've used suds before and it feels good. It does seem pysimplesoap is more maintained though - I've only used pysimplesoap server side, not for consuming (i.e. as a client).</p>
| 0 | 2016-10-17T11:31:01Z | [
"python",
"soap",
"wsdl",
"soap-client"
] |
How to compile missing extensions modules when cross compiling Python 3.5.2 for ARM on Linux | 40,047,363 | <p>When cross compiling Python for ARM many of the extension modules are not build. How do I build the missing extension modules, mainly math, select, sockets, while cross compiling Python 3.5.2 for ARM on Linux? However, when compiling for the native platform the extension modules are properly build.</p>
<p>These were my cross-compilation steps:</p>
<pre><code>CONFIG_SITE=config.site CC=arm-linux-gnueabihf-gcc CXX=arm-linux-gnueabihf-g++ AR=arm-linux-gnueabihf-ar RANLIB=arm-linux-gnueabihf-ranlib READELF=arm-linux-gnueabihf-readelf ./configure --enable-shared --host=arm-linux --build=x86_64-linux-gnu --disable-ipv6 --prefix=/opt/python3
make
sudo PATH=/home/benny/workspace/projects/webshield/src/dntl_ws/sw/toolchain/gcc-linaro-4.9-2016.02-x86_64_arm-linux-gnueabihf/bin:$PATH make install
</code></pre>
<p>These are the modules built when cross-compiled:</p>
<pre><code>_ctypes_test
cmath
_json
_testcapi
_testbuffer
_testimportmultiple
_testmultiphase
_lsprof
_opcode
parser
mmap
audioop
_crypt
_csv
termios
resource
nis
_multibytecodec
_codecs_kr
_codecs_jp
_codecs_cn
_codecs_tw
_codecs_hk
_codecs_iso2022
_decimal
_multiprocessing
ossaudiodev
xxlimited
_ctypes
</code></pre>
<p>These are the compilation steps on an x86 machine:</p>
<pre><code>CONFIG_SITE=config.site ./configure --enable-shared --disable-ipv6 --prefix=/opt/python3
make
sudo make install
</code></pre>
<p>These are the modules built when natively compiled:</p>
<pre><code>_struct
_ctypes_test
array
cmath
math
_datetime
_random
_bisect
_heapq
_pickle
_json
_testcapi
_testbuffer
_testimportmultiple
_testmultiphase
_lsprof
unicodedata
_opcode
fcntl
grp
spwd
select
parser
mmap
syslog
audioop
readline
_crypt
_csv
_posixsubprocess
_socket
_sha256
_sha512
_md5
_sha1
termios
resource
nis
binascii
pyexpat
_elementtree
_multibytecodec
_codecs_kr
_codecs_jp
_codecs_cn
_codecs_tw
_codecs_hk
_codecs_iso2022
_decimal
_multiprocessing
ossaudiodev
xxlimited
_ctypes
</code></pre>
<p>I also tried building for ARM natively on an ARM machine and the extensions were build successfully.</p>
<p>The tool chain used for cross compilation is:</p>
<pre><code>gcc version 4.9.4 20151028 (prerelease) (Linaro GCC 4.9-2016.02)
</code></pre>
<p>My host machine is:</p>
<pre><code>Ubuntu 16.04.1 LTS
Linux whachamacallit 4.4.0-42-generic #62-Ubuntu SMP Fri Oct 7 23:11:45 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
</code></pre>
| 1 | 2016-10-14T15:53:58Z | 40,066,854 | <h2>Root Cause ( Courtesy: Xavier de Gaye)</h2>
<blockquote>
<p>There was already a native python3.5 (from the Ubuntu repository) in the PATH. So the problem is that setup.py in build_extensions() does not build the extensions that have been already built statically into this native Ubuntu interpreter.
<a href="http://bugs.python.org/issue28444#msg278717" rel="nofollow">http://bugs.python.org/issue28444#msg278717</a></p>
</blockquote>
<h2>Temporary Solution</h2>
<blockquote>
<p>A patch has been submitted at <a href="http://bugs.python.org/issue28444" rel="nofollow">http://bugs.python.org/issue28444</a> and is working successfully.</p>
</blockquote>
| 1 | 2016-10-16T04:32:40Z | [
"python",
"python-3.x",
"arm",
"cross-compiling",
"embedded-linux"
] |
Trying to make a function that copies the content of one file and writes it to another | 40,047,414 | <p>I have a question that states
<strong>Write a function fcopy() that takes as input two file names (as strings) and copies the content of the first file into the second.</strong>
and I want to know how to go about solving this.</p>
<p>My first file is named example, and the second file is named output, both text files are in .txt format, and the path to them are
<em>"C:\Users\HOME\Desktop\Introduction to Computing\Lab\assignments\example.txt"</em>
and <em>"C:\Users\HOME\Desktop\Introduction to Computing\Lab\assignments\output.txt"</em></p>
| -4 | 2016-10-14T15:56:44Z | 40,047,638 | <p>You are not to ask StackOverflow to do your homework for you. Feeling generous though...</p>
<p>First of all, read this: <a href="https://docs.python.org/3.3/library/shutil.html" rel="nofollow">https://docs.python.org/3.3/library/shutil.html</a> It's the Python 3 docs for the shutil module. It will give high-level functions for reading/writing files (I/O).</p>
<pre><code>from shutil import copyfile
copyfile(locationOfSource, locationOfDestination)
</code></pre>
<p>An important thing to note is that "\" (back-slash) introduces an escape sequence in string literals, so "\n" means a new line, NOT a literal backslash followed by "n". This is rarely mentioned and had me stumped when I first learnt escape characters. To get a literal back-slash within a string, you MUST use "\\" instead of "\".</p>
<p>The commenters below your answer are correct, please read the information given to you by StackOverflow about asking questions. Also, welcome to the site.</p>
| 3 | 2016-10-14T16:10:03Z | [
"python",
"python-3.x"
] |
Trying to make a function that copies the content of one file and writes it to another | 40,047,414 | <p>I have a question that states
<strong>Write a function fcopy() that takes as input two file names (as strings) and copies the content of the first file into the second.</strong>
and I want to know how to go about solving this.</p>
<p>My first file is named example, and the second file is named output, both text files are in .txt format, and the path to them are
<em>"C:\Users\HOME\Desktop\Introduction to Computing\Lab\assignments\example.txt"</em>
and <em>"C:\Users\HOME\Desktop\Introduction to Computing\Lab\assignments\output.txt"</em></p>
| -4 | 2016-10-14T15:56:44Z | 40,047,773 | <p>If you really need to, you could write a simple wrapper function to accomplish this:</p>
<pre><code>def copy_file(orig_file_name, copy_file_name):
with open(orig_file_name, 'r') as orig_file, open(copy_file_name, 'w+') as cpy_file:
        cpy_file.write(orig_file.read())
</code></pre>
<p>But as @Frogboxe has already said, the correct way to copy a file is to used the shutil library:</p>
<pre><code>import shutil
shutil.copy(target_file, copy_file)
</code></pre>
| 1 | 2016-10-14T16:18:39Z | [
"python",
"python-3.x"
] |
SQLAlchemy: What is the difference beetween CheckConstraint and @validates | 40,047,417 | <p>In a SQLAlchemy model what is the difference between adding a CheckConstraint and adding a @validates decorated validation method? Is the one acting on the database level and the other one not? And are there any guidelines when to use which?</p>
<p>Specifically, what is the difference between using</p>
<pre><code>__table_args__ = (CheckConstraint('to_node_id != from_node_id'), )
</code></pre>
<p>and</p>
<pre><code>@validates('from_node', 'to_node')
def validate_nodes_are_different(self, key, field):
if key == 'to_node' and field and field is self.from_node:
raise ValueError
elif key == 'from_node' and field and field is self.to_node:
raise ValueError
return field
</code></pre>
| 0 | 2016-10-14T15:56:57Z | 40,049,763 | <p><code>CheckConstraint</code> is a database-level check; <code>@validates</code> is a Python-level check. A database-level check is more universal; the constraint is satisfied even if you access the database through other means. A Python-level check is more expressive; you can usually check for complicated constraints more easily.</p>
<p>The other thing you should consider is what happens when you want to change the constraint. A <code>CHECK</code> constraint will force you to change existing data to conform when you change the constraint. A Python-level constraint allows you to force only new data to conform to the constraint.</p>
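<p>A small sketch contrasting the two (my own toy model, using an in-memory SQLite engine; import paths assume SQLAlchemy 1.4+):</p>

```python
from sqlalchemy import CheckConstraint, Column, Integer, create_engine
from sqlalchemy.orm import declarative_base, validates

Base = declarative_base()

class Edge(Base):
    __tablename__ = 'edge'
    # database-level rule: emitted as CHECK (...) in the DDL, enforced by the engine
    __table_args__ = (CheckConstraint('to_node_id != from_node_id'),)
    id = Column(Integer, primary_key=True)
    from_node_id = Column(Integer)
    to_node_id = Column(Integer)

    # Python-level rule: runs whenever either attribute is assigned
    @validates('from_node_id', 'to_node_id')
    def _nodes_differ(self, key, value):
        other = self.to_node_id if key == 'from_node_id' else self.from_node_id
        if value is not None and value == other:
            raise ValueError('from_node and to_node must differ')
        return value

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)  # the CHECK constraint goes into the table DDL
```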
| 1 | 2016-10-14T18:23:26Z | [
"python",
"sqlalchemy"
] |
Determine if an entity is created 'today' | 40,047,495 | <p>I am creating an app which, on any given day, only one entity can be created per day. Here is the model:</p>
<pre><code>class MyModel(ndb.Model):
created = ndb.DateTimeProperty(auto_now_add=True)
</code></pre>
<p>Since only one entity is allowed to be created per day, we will need to compare the MyModel.created property to today's date:</p>
<pre><code>import datetime
class CreateEntity(webapp2.RequestHandler):
def get(self):
today = datetime.datetime.today()
my_model = MyModel.query(MyModel.created == today).get()
if my_model:
# print("Today's entity already exists")
else:
# create today's new entity
</code></pre>
<p>The problem is that I cannot compare the two dates like this. How can I check if an entity was already created 'today'?</p>
| 0 | 2016-10-14T16:01:18Z | 40,047,657 | <p>You are comparing a <code>DateTime</code> object with a <code>Date</code> object.</p>
<p>Instead of </p>
<pre><code>my_model = MyModel.query(MyModel.created == today).get()
</code></pre>
<p>use</p>
<pre><code>my_model = MyModel.query(MyModel.created.date() == today).get()
</code></pre>
| 0 | 2016-10-14T16:11:06Z | [
"python",
"google-app-engine",
"datetime",
"google-cloud-datastore",
"app-engine-ndb"
] |
Determine if an entity is created 'today' | 40,047,495 | <p>I am creating an app which, on any given day, only one entity can be created per day. Here is the model:</p>
<pre><code>class MyModel(ndb.Model):
created = ndb.DateTimeProperty(auto_now_add=True)
</code></pre>
<p>Since only one entity is allowed to be created per day, we will need to compare the MyModel.created property to today's date:</p>
<pre><code>import datetime
class CreateEntity(webapp2.RequestHandler):
def get(self):
today = datetime.datetime.today()
my_model = MyModel.query(MyModel.created == today).get()
if my_model:
# print("Today's entity already exists")
else:
# create today's new entity
</code></pre>
<p>The problem is that I cannot compare the two dates like this. How can I check if an entity was already created 'today'?</p>
| 0 | 2016-10-14T16:01:18Z | 40,047,817 | <p>Seems like the only one solution is to use a "range" query, here's a relevant answer <a href="http://stackoverflow.com/a/14963648/762270">http://stackoverflow.com/a/14963648/762270</a></p>
| 0 | 2016-10-14T16:21:26Z | [
"python",
"google-app-engine",
"datetime",
"google-cloud-datastore",
"app-engine-ndb"
] |
Determine if an entity is created 'today' | 40,047,495 | <p>I am creating an app which, on any given day, only one entity can be created per day. Here is the model:</p>
<pre><code>class MyModel(ndb.Model):
created = ndb.DateTimeProperty(auto_now_add=True)
</code></pre>
<p>Since only one entity is allowed to be created per day, we will need to compare the MyModel.created property to today's date:</p>
<pre><code>import datetime
class CreateEntity(webapp2.RequestHandler):
def get(self):
today = datetime.datetime.today()
my_model = MyModel.query(MyModel.created == today).get()
if my_model:
# print("Today's entity already exists")
else:
# create today's new entity
</code></pre>
<p>The problem is that I cannot compare the two dates like this. How can I check if an entity was already created 'today'?</p>
| 0 | 2016-10-14T16:01:18Z | 40,048,207 | <p>You can't query by <code>created</code> property using <code>==</code> since you don't actually know the <strong>exact</strong> creation datetime (which is what you'll find in <code>created</code> due to the <code>auto_now_add=True</code> option)</p>
<p>But you could query for the most recently created entity and check if its <code>creation</code> datetime is today. Something along these lines:</p>
<pre><code>class CreateEntity(webapp2.RequestHandler):
def get(self):
now = datetime.datetime.utcnow()
# get most recently created one:
entity_list = MyModel.query().order(-MyModel.created).fetch(limit=1)
entity = entity_list[0] if entity_list else None
if entity and entity.created.year == now.year and \
entity.created.month == now.month and \
entity.created.day == now.day:
# print("Today's entity already exists")
else:
# create today's new entity
</code></pre>
<p>Or you could compute a datetime for today's 0:00:00 am and query for <code>created</code> bigger than that.</p>
<p>Or you could drop the <code>auto_now_add=True</code> option and explicitly set <code>created</code> to a specific time of the day (say midnight exactly) and then you can query for the datetime matching that time of day today.</p>
| 0 | 2016-10-14T16:46:27Z | [
"python",
"google-app-engine",
"datetime",
"google-cloud-datastore",
"app-engine-ndb"
] |
Determine if an entity is created 'today' | 40,047,495 | <p>I am creating an app which, on any given day, only one entity can be created per day. Here is the model:</p>
<pre><code>class MyModel(ndb.Model):
created = ndb.DateTimeProperty(auto_now_add=True)
</code></pre>
<p>Since only one entity is allowed to be created per day, we will need to compare the MyModel.created property to today's date:</p>
<pre><code>import datetime
class CreateEntity(webapp2.RequestHandler):
def get(self):
today = datetime.datetime.today()
my_model = MyModel.query(MyModel.created == today).get()
if my_model:
# print("Today's entity already exists")
else:
# create today's new entity
</code></pre>
<p>The problem is that I cannot compare the two dates like this. How can I check if an entity was already created 'today'?</p>
| 0 | 2016-10-14T16:01:18Z | 40,054,913 | <p>Using a range query to look up a single, specific known value is overkill and expensive; I would use one of these two solutions:</p>
<p><strong>1 - Extra Property</strong></p>
<p>Sacrifice a little space with an extra property, though since it's one per day, it shouldn't be a big deal.</p>
<pre class="lang-python prettyprint-override"><code>from datetime import datetime
class MyModel(ndb.Model):
def _pre_put_hook(self):
self.date = datetime.today().strftime("%Y%m%d")
created = ndb.DateTimeProperty(auto_now_add=True)
date = ndb.StringProperty()
class CreateEntity(webapp2.RequestHandler):
def get(self):
today = datetime.today().strftime("%Y%m%d")
my_model = MyModel.query(MyModel.date == today).get()
if my_model:
logging.info("Today's entity already exists")
else:
# MyModel.date gets set automaticaly by _pre_put_hook
my_model = MyModel()
my_model.put()
logging.info("create today's new entity")
</code></pre>
<p><strong>2 - Use [today] as Entity ID (preferred)</strong></p>
<p>I would rather use <code>today</code> as the <strong>ID</strong> for my entity; that's the fastest/cheapest way to retrieve it later. The ID could also be a combination with something else, e.g. <code>ID=<userid+today></code> if the entity is per user, or you could add the user ID as a parent (ancestor). It would look something like this:</p>
<pre class="lang-python prettyprint-override"><code>from datetime import datetime
class MyModel(ndb.Model):
created = ndb.DateTimeProperty(auto_now_add=True)
class CreateEntity(webapp2.RequestHandler):
def get(self):
today = datetime.today().strftime("%Y%m%d")
my_model = MyModel.get_by_id(today)
if my_model:
logging.info("Today's entity already exists")
else:
my_model = MyModel(id=today)
my_model.put()
logging.info("create today's new entity")
</code></pre>
| 0 | 2016-10-15T04:20:45Z | [
"python",
"google-app-engine",
"datetime",
"google-cloud-datastore",
"app-engine-ndb"
] |
Determine if an entity is created 'today' | 40,047,495 | <p>I am creating an app which, on any given day, only one entity can be created per day. Here is the model:</p>
<pre><code>class MyModel(ndb.Model):
    created = ndb.DateTimeProperty(auto_now_add=True)
</code></pre>
<p>Since only one entity is allowed to be created per day, we will need to compare the MyModel.created property to today's date:</p>
<pre><code>import datetime

class CreateEntity(webapp2.RequestHandler):
    def get(self):
        today = datetime.datetime.today()
        my_model = MyModel.query(MyModel.created == today).get()
        if my_model:
            pass  # print("Today's entity already exists")
        else:
            pass  # create today's new entity
</code></pre>
<p>The problem is that I cannot compare the two dates like this. How can I check if an entity was already created 'today'?</p>
| 0 | 2016-10-14T16:01:18Z | 40,058,428 | <p>I ended up changing the property from <code>DateTimeProperty</code> to <code>DateProperty</code>. Now I am able to do this:</p>
<pre><code>today_date = datetime.datetime.today().date()
today_entity = MyModel.query(MyModel.created == today_date).get()
</code></pre>
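For what it's worth, this works because `DateProperty` stores only the calendar date, so an equality filter against `datetime.date.today()` matches regardless of the time of day the entity was written. The date-vs-datetime distinction in plain Python:

```python
import datetime

created_at = datetime.datetime(2016, 10, 15, 11, 26, 7)  # full timestamp
created_on = created_at.date()                           # calendar date only

# A timestamp with a time component is not equal to midnight of the same day,
# but the extracted date compares equal to the plain date:
same_day = created_on == datetime.date(2016, 10, 15)
different_instant = created_at != datetime.datetime(2016, 10, 15)
```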
| 1 | 2016-10-15T11:26:07Z | [
"python",
"google-app-engine",
"datetime",
"google-cloud-datastore",
"app-engine-ndb"
] |
Sqlite3 INSERT query ERROR? | 40,047,742 | <p>I want to insert two string variables into my DB:</p>
<p>one entered by the user ==> H, and</p>
<p>one generated by the chatbot ==> B.</p>
<p>Here is the code:</p>
<pre><code># initialize the connection to the database
sqlite_file = '/Users/emansaad/Desktop/chatbot1/brain.sqlite'
connection = sqlite3.connect(sqlite_file)
connection.create_function("REGEXP", 2, regexp)
cursor = connection.cursor()
connection.text_factory = str
connection = open

def new_data(Input, Output):
    row = cursor.execute('SELECT * FROM chatting_log WHERE user=?', (Input,)).fetchone()
    if row:
        return
    else:
        cursor.execute('INSERT INTO chatting_log VALUES (?, ?)', (Input, Output))

while True:
    print(("B:%s" % B))
    H = input('H:')
    New_H = ' '.join(PreProcess_text(H))
    reply = cursor.execute('SELECT respoce FROM Conversation_Engine WHERE request REGEXP?', [New_H]).fetchone()
    if reply:
        B = reply[0]
        new_data(H, B)
</code></pre>
<p>The code works perfectly for generating and selecting the reply from the DB, but when I go back to the chatting_log table in the DB, there is no data. Why?</p>
<p>PS: I am using Python 3.</p>
<p>Thank you,</p>
| 2 | 2016-10-14T16:16:27Z | 40,047,765 | <p>Always remember to commit the changes that you make. In this case: <code>connection.commit()</code>. Except it looks like you overrode your <code>connection</code> variable.</p>
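A minimal, self-contained sketch of the fix (an in-memory database and the two-column table layout are assumed here for illustration): keep the real connection object around instead of rebinding it, and call `commit()` after the `INSERT`:

```python
import sqlite3

connection = sqlite3.connect(":memory:")  # illustrative; use your brain.sqlite path
cursor = connection.cursor()
cursor.execute("CREATE TABLE chatting_log (user TEXT, bot TEXT)")  # assumed schema

def new_data(user_input, bot_output):
    row = cursor.execute(
        "SELECT * FROM chatting_log WHERE user=?", (user_input,)
    ).fetchone()
    if row is None:
        cursor.execute(
            "INSERT INTO chatting_log VALUES (?, ?)", (user_input, bot_output)
        )
        connection.commit()  # without this, the INSERT is lost when the program exits

new_data("hello", "hi there")
```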
| 1 | 2016-10-14T16:18:01Z | [
"python",
"mysql",
"python-3.x"
] |