Columns: Unnamed: 0 (int64, 0 to 1.91M), id (int64, 337 to 73.8M), title (string, 10 to 150 chars), question (string, 21 to 64.2k chars), answer (string, 19 to 59.4k chars), tags (string, 5 to 112 chars), score (int64, -10 to 17.3k)
1,909,800
69,180,786
AttributeError: 'Stud' object has no attribute 'sno' at line no.11
<p>program is:</p> <pre><code>class Stud: def __init__(self): self.displval() print(&quot;I am Constructor&quot;) self.sno = int(input(&quot;Enter the roll number&quot;)) self.sname = (input(&quot;Enter the Name&quot;)) def displval(self): print(&quot;=&quot;*50) print(self.sno) print(self.sname) so = Stud() </code></pre>
<p>Your main problem is that you are calling <code>self.displval</code> <em>before</em> you set the attributes it tries to display.</p> <pre><code>class Stud: def __init__(self): print(&quot;I am Constructor&quot;) self.sno = int(input(&quot;Enter the roll number&quot;)) self.sname = (input(&quot;Enter the Name&quot;)) self.displval() def displval(self): print(&quot;=&quot;*50) print(self.sno) print(self.sname) </code></pre> <p>However, <code>__init__</code> is doing too much work. It should receive values as arguments and simply set the attributes. If you want <code>Stud</code> to provide a way to collect those arguments from the user, define an additional class method. (It's also debatable whether <code>__init__</code> should be printing anything to standard output, but I'll leave that for now.)</p> <pre><code>class Stud: def __init__(self, sno, sname): self.sno = sno self.sname = sname self.displval() @classmethod def create_with_user_input(cls): sno = int(input(&quot;Enter the roll number&quot;)) sname = (input(&quot;Enter the Name&quot;)) return cls(sno, sname) def displval(self): print(&quot;=&quot;*50) print(self.sno) print(self.sname) so = Stud.create_with_user_input() </code></pre>
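<p>To make the failure mode concrete, here is a minimal standalone sketch showing that instance attributes exist only after they are assigned, which is why calling a method that reads them first raises <code>AttributeError</code>:</p> <pre><code>class Demo:
    def __init__(self):
        self.show()          # fails: self.value does not exist yet
        self.value = 42      # the attribute is only created here

    def show(self):
        print(self.value)

try:
    Demo()
except AttributeError as e:
    print(e)  # 'Demo' object has no attribute 'value'
</code></pre>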
python|class|object|constructor
0
1,909,801
72,714,260
How to feed in a time-series into pyunicorn.timeseries.surrogates?
<p>I struggle to find out how to feed a time-series that consists of a one-column .txt file into pyunicorn’s <code>timeseries.surrogates</code>. My one-column .txt file contains many numerical datapoints that constitute the time-series.</p> <p>Pyunicorn offers several examples how to apply its surrogate methods in this link: <a href="http://www.pik-potsdam.de/%7Edonges/pyunicorn/api/timeseries/surrogates.html" rel="nofollow noreferrer">http://www.pik-potsdam.de/~donges/pyunicorn/api/timeseries/surrogates.html</a></p> <p>Paradigmatically, the last surrogate option in the link above, namely for <code>white_noise_surrogates(original_data)</code>, Pyunicorn offers the following explanatory code.</p> <pre><code>ts = Surrogates.SmallTestData().original_data surrogates = Surrogates.SmallTestData().white_noise_surrogates(ts) </code></pre> <p>Clearly, the example data <code>SmallTestData()</code> is part of pyunicorn. But how would I have to enter my data, that is, <code>Data_2</code>, into the code above? The code</p> <p><code>surrogates = Surrogates.white_noise_surrogates(Data_2)</code> returns the message</p> <pre><code>TypeError: Surrogates.correlated_noise_surrogates() missing 1 required positional argument: 'original_data' </code></pre> <p>Trying the code in another try</p> <pre><code>TS = Surrogates.Data_2().original_data Surrogate = Surrogates.correlated_noise_surrogates(TS) </code></pre> <p>returns into the message</p> <pre><code>AttributeError: type object 'Surrogates' has no attribute 'Data_2' </code></pre> <p>I assume that there is a simple solution, but I cannot figure it out. Here is an overview of my code:</p> <pre><code>from pyunicorn.timeseries import Surrogates import pyunicorn as pn Data_2 = np.loadtxt(&quot;/path-to-data.txt&quot;) # Surrogate time-series TS = Surrogates.Data_2().original_data Surrogate = Surrogates.correlated_noise_surrogates(TS) </code></pre> <p>Does anyone understand how to properly feed or insert a time-series into pyunicorn’s <code>timeseries.surrogates</code> options?</p>
<p>You need to instantiate the <code>Surrogates</code> class with your data:</p> <pre class="lang-py prettyprint-override"><code>TS = Surrogates(original_data=Data_2) my_surr = TS.correlated_noise_surrogates(Data_2) </code></pre>
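<p>To make the whole pipeline explicit, a short sketch, under the assumption (suggested by the pyunicorn docs) that <code>Surrogates</code> expects a 2-D array of shape <code>(n_variables, n_time)</code>, so a one-column file has to be reshaped first; the file path is a placeholder:</p> <pre><code>import numpy as np
from pyunicorn.timeseries import Surrogates

# load the one-column .txt file and make it a 2-D (1, n_time) array
data = np.loadtxt('/path-to-data.txt').reshape(1, -1)

surr_gen = Surrogates(original_data=data)
surrogate = surr_gen.correlated_noise_surrogates(data)
print(surrogate.shape)
</code></pre>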
python
2
1,909,802
68,435,352
is there an API to force facebook to scrape a website automatically
<p>I'm aware you can force update a page's cache by entering the URL on Facebook's debugger tool while been logged in as admin for that app/page: <a href="https://developers.facebook.com/tools/debug" rel="nofollow noreferrer">https://developers.facebook.com/tools/debug</a></p> <p>But what I need is a way to automatically call an API endpoint or something from our internal app whenever somebody from our Sales department updates the main image of one of our pages. It is not an option to ask thousands of sales people to login as an admin and manually update a page's cache whenever they update one of our item's description or image.</p> <p>We can't afford to wait 24 hours for Facebook to update its cache because we're getting daily complaints from our clients whenever they don't see a change showing up as soon as we change it on our side.</p>
<p>This worked a while ago:</p> <pre><code>$.post('https://graph.facebook.com', { id: 'https://www.yourdomain.com/someurl', scrape: true }, (response) =&gt; { console.log(response); }); </code></pre> <p>In this case, with jQuery - but of course you can also use the fetch API or axios.</p>
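<p>Since the question is tagged <code>python</code>, here is a rough equivalent using the <code>requests</code> library. Note that current Graph API versions require an <code>access_token</code> for this call, so the token below is a placeholder you would replace with a valid app or page token:</p> <pre><code>import requests

resp = requests.post(
    'https://graph.facebook.com',
    data={
        'id': 'https://www.yourdomain.com/someurl',
        'scrape': 'true',
        'access_token': 'YOUR_APP_OR_PAGE_TOKEN',  # placeholder
    },
)
print(resp.status_code, resp.json())
</code></pre>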
python|facebook|facebook-opengraph
0
1,909,803
59,339,435
Yearly Interest on house and deposit
<p>Suppose you currently have $50,000 deposited into a bank account and the account pays you a constant interest rate of 3.5% per year on your deposit. You are planning to buy a house with the current price of $300,000. The price will increase by 1.5% per year. It still requires a minimum down payment of 20% of the house price.</p> <p>Write a while loop to calculate how many (integer) years you need to wait until you can afford the down payment to buy the house.</p> <pre><code>m = 50000 #money you have i = 0.035 #interest rate h = 300000 #house price f = 0.015 #amount house will increase by per year d= 0.2 #percent of down payment on house y = 0 #number of years x = 0 #money for the down payment mn = h*d #amount of down payment while m &lt;= mn: m = (m+(m*i)) #money you have plus money you have times interest y = y + 1 #year plus one mn = mn +(h*f*y) print(int(y)) </code></pre> <p>The answer you should get is 10.</p> <p>I keep getting the wrong answer, but I am not sure what is incorrect.</p>
<p>You can simplify the code by using the compound interest formula.</p> <pre class="lang-py prettyprint-override"><code>def compound_interest(amount, rate, years): return amount * (rate + 1) ** years while compound_interest(m, i, y) &lt; d * compound_interest(h, f, y): y += 1 </code></pre> <hr /> <p>If you are allowed to do without the while loop, you can solve the inequality for the number of years <code>y</code>.</p> <blockquote class="spoiler"> <p> <a href="https://i.stack.imgur.com/FHDw3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FHDw3.png" alt="enter image description here" /></a></p> </blockquote> <p>So you get this code snippet:</p> <pre class="lang-py prettyprint-override"><code>import math base = (i + 1) / (f + 1) arg = (d * h) / m y = math.ceil(math.log(arg, base)) </code></pre>
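<p>For completeness, a runnable version of the loop approach with the constants from the question filled in; it prints the expected answer of 10:</p> <pre><code>def compound_interest(amount, rate, years):
    return amount * (rate + 1) ** years

m = 50000   # money you have
i = 0.035   # interest rate
h = 300000  # house price
f = 0.015   # yearly house price increase
d = 0.2     # down payment fraction

y = 0
while compound_interest(m, i, y) &lt; d * compound_interest(h, f, y):
    y += 1

print(y)  # 10
</code></pre>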
python|python-3.x|while-loop
2
1,909,804
73,131,223
Problem calling variables in a function. UnboundLocalError: local variable 'prev_avg_gain' referenced before assignment
<p>I am trying to write a function to calculate RSI data using input from a list or by pulling live price data from an API.</p> <p>The script worked fine when feeding it data directly from a list, but while I am trying to convert it to a function, I am experiencing issues.</p> <p>The function needs to remember the output from a previous run in order to calculate a smoothed average, but I keep getting an error that the local variable is being called before assignment. I've eliminated a lot of the code to keep the post short, but it errors on the 15th run. On the 14th run I think I am defining the variable and printing the value, so I don't understand why it errors.</p> <pre><code>def calc_rsi(price, i): while price != complete: window.append(price) if i == 14: avg_gain = sum(gains) / len(gains) avg_loss = sum(losses) / len(losses) if i &gt; 14: avg_gain = (prev_avg_gain * (window_length - 1) + gain) / window_length avg_loss = (prev_avg_loss * (window_length - 1) + loss) / window_length if i &gt;= 14: rs = avg_gain / avg_loss rsi = round(100 - (100 / (1 + rs)), 2) prev_avg_gain = avg_gain prev_avg_loss = avg_loss print (&quot;rsi&quot;, rsi) print (&quot;gain&quot;, prev_avg_gain) print () </code></pre> <p>The thing that is throwing me for a real loop (pun intended) is that on run 14, my print statement 'print (&quot;gain=&quot;, prev_avg_gain)' returns the proper value, so I know that it is assigning a value to the variable.... I've tried adding the 'prev_avg_gain = avg_gain ' to the block of code for 'if i == 14:' and 'if i &gt; 14:' rather than doing it once in the &gt;= block and it throws the same error.</p> <p>I am new to python and scripting so please go easy :)</p>
<p>The code as you have pasted will work, if you make <strong>one</strong> call to <code>calc_rsi()</code> where <code>i</code> is incremented from <code>&lt;=14</code> to <code>&gt;14</code>. In this case <code>prev_avg_gain</code> will be <em>remembered</em>. But if you make 2 calls to <code>calc_rsi()</code> then <code>prev_avg_gain</code> will <strong>not be remembered</strong> between calls.</p> <p>To demonstrate, I use a trimmed down example:</p> <pre class="lang-py prettyprint-override"><code>def calc_rsi(price, i): while i &lt;= price: if i == 14: avg_gain = 10 if i &gt; 14: avg_gain = (prev_avg_gain * (10 - 1) + 1) / 5 if i &gt;= 14: prev_avg_gain = avg_gain print (&quot;gain&quot;, prev_avg_gain) i += 1 # i loops from 13 to 16 in one call, this works! calc_rsi(price=16, i=14) # i loops from 13 to 14, prev_avg_gain is set calc_rsi(price=14, i=13) # i loops from 15 to 16, throws error! because prev_avg_gain is not remembered calc_rsi(price=16, i=15) </code></pre>
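<p>One way to make the state survive between calls, sketched on the same trimmed-down example: return <code>prev_avg_gain</code> to the caller and pass it back in on the next call (a small class holding the state would work equally well):</p> <pre><code>def calc_rsi(price, i, prev_avg_gain=None):
    while i &lt;= price:
        if i == 14:
            avg_gain = 10
        if i &gt; 14:
            avg_gain = (prev_avg_gain * (10 - 1) + 1) / 5
        if i &gt;= 14:
            prev_avg_gain = avg_gain
            print('gain', prev_avg_gain)
        i += 1
    return prev_avg_gain  # hand the state back to the caller

state = calc_rsi(price=14, i=13)                       # sets prev_avg_gain
state = calc_rsi(price=16, i=15, prev_avg_gain=state)  # now works across calls
</code></pre>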
python|python-3.x|function|local|unbound
0
1,909,805
35,565,768
Getting socket.gaierror: [Errno 8] nodename nor servname provided,or not known
<p>I'm trying to set up a proxy and I'm using the socket and httlib modules. I have my web browser pointed to the localhost and port on which the server is running,and I'm handling HTTP requests through the server. I extract the url from the HTTP header when the browser is requesting a web page and then trying to make a request through the proxy in the following manner:-</p> <pre><code>conn = httplib.HTTPSConnection(url,serverPort) conn.request("GET",url) r1 = conn.getresponse() print r1.status,r1.reason </code></pre> <p>Note that the serverPort parameter in the first line is the port the proxy is on and url is the url extracted from the HTTP header received from the browser when it makes a request for a webpage.</p> <p>So I seem to be getting an error when I run my proxy and have the browser type in an address such as <a href="http://www.google.com" rel="nofollow">http://www.google.com</a> or <a href="http://www.getmetal.org" rel="nofollow">http://www.getmetal.org</a>.</p> <p>The error is:-</p> <p>socket.gaierror: [Errno 8] nodename nor servname provided, or not known</p> <p>There is also a trace:-</p> <p><a href="http://i.stack.imgur.com/UgZwD.png" rel="nofollow">http://i.stack.imgur.com/UgZwD.png</a></p> <p>If anyone has any suggestions as to what the problem may be I'd be delighted. Here is code for the proxy server: NOTE: IF you are testing this,there may be some indentation issues due to having to put everything 4 spaces to the right to have it display as code segment</p> <pre><code>from socket import * import httplib import webbrowser import string serverPort = 2000 serverSocket = socket(AF_INET,SOCK_STREAM) serverSocket.bind(('', serverPort)) serverSocket.listen(2) urlList=['www.facebook.com','www.youtube.com','www.twitter.com'] print 'The server is ready to receive' while 1: connectionSocket, addr = serverSocket.accept() print addr req= connectionSocket.recv(1024) #parse Get request here abnd extract url reqHeaderData= req.split('\n') newList=[] x=0 while x&lt;len(reqHeaderData): st=reqHeaderData[x] element= st.split(' ') print element newList.append(element) x=x+1 print newList[0][1] url = newList[0][1] url= url[:-1] for i in urlList: if url ==i: raise Exception("The website you are trying to access is blocked") connectionSocket.send('Valid') print(url) conn = httplib.HTTPSConnection(url,serverPort) print conn conn.request("GET",url) print "request printed" r1 = conn.getresponse() print r1.status,r1.reason print r1 #200 OK data = r1.read() x= r1.getheaders() for i in x: print i connectionSocket.close() </code></pre>
<p>Here's a common mistake I see...</p> <pre><code>url="https://foo.tld" port=443 conn=httplib.HTTPSConnection(url,port) </code></pre> <p>This won't work because of the "https://"...</p> <p>You should do this instead:</p> <pre><code>url="foo.tld" port=443 conn=httplib.HTTPSConnection(url,port) </code></pre> <p>This would work. I'm not sure if this is your specific problem, but it is certainly something to verify.</p>
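<p>A minimal sketch (Python 2, matching the question's <code>httplib</code> usage) of stripping the scheme programmatically, so whatever URL the browser sends can be reduced to a bare hostname before opening the connection:</p> <pre><code>import httplib
from urlparse import urlparse

raw_url = 'http://www.google.com/some/page'
parsed = urlparse(raw_url)

# connect to the bare hostname, request the path separately
conn = httplib.HTTPConnection(parsed.netloc, 80)
conn.request('GET', parsed.path or '/')
r1 = conn.getresponse()
print r1.status, r1.reason
</code></pre>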
python|sockets|proxy
9
1,909,806
15,553,596
copy by value object with no __deepcopy__ attr
<p>I'm trying to deepcopy an instance of a class, but I get a:</p> <pre><code>object has no __deepcopy__ attribute </code></pre> <p>error.</p> <p>The class is locked away in a <code>.pyd</code>, so it cannot be modified.</p> <p>Is there a way to copy these objects by value without using deepcopy?</p>
<p>You'll have to copy the object state. The easiest way would be to use the <code>pickle</code> module:</p> <pre><code>import pickle copy = pickle.loads(pickle.dumps(someobject)) </code></pre> <p>This is <em>not guaranteed to work</em>. All the pickle module does for you in the general case is to pickle the instance attributes, recreate the instance anew from the class reference, and restore the attribute contents on it.</p> <p>Since this is a C extension object, if the instance state is not exposed to you, and pickling is not explicitly supported by the type, this won't work either. In that case, you have no other options, I'm afraid.</p>
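<p>If pickling is refused outright, one hedged fallback is sketched below; it assumes the extension type has a no-argument constructor and exposes its state via <code>__dict__</code>, which not every <code>.pyd</code> class does:</p> <pre><code>import pickle

def copy_by_value(obj):
    try:
        return pickle.loads(pickle.dumps(obj))
    except (TypeError, pickle.PicklingError):
        # fallback: only works if the type has a no-arg constructor
        # and keeps its state in __dict__
        clone = type(obj)()
        clone.__dict__.update(obj.__dict__)
        return clone
</code></pre>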
python|copy
1
1,909,807
15,783,191
Slew rate measuring
<p>I have to measure slew rates in signals like the one in the image below. I need the slew rate of the part marked by the grey arrow. <img src="https://i.stack.imgur.com/5QGC8.png" alt="signal to process"></p> <p>At the moment I smooth the signal with a Hann window to get rid of any noise and to flatten the peaks. Then I search (starting from the right) for the 30% and 70% points and calculate the slew rate between these two points. But my problem is that the signal gets flattened after smoothing. Therefore the calculated slew rate is not as high as it should be. And if I reduce smoothing, then the peaks (visible on the right side of the image) get higher and the 30% point is eventually found at the wrong position.</p> <p>Is there a better/safer way to find the required slew rate?</p>
<p>If you know between what values your signal is transitioning, and your noise is not too large, you can simply compute the time differences between all crossings of 30% and all crossings of 70% and keep the smallest one:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt s100, s0 = 5, 0 signal = np.concatenate((np.ones((25,)) * s100, s100 + (np.random.rand(25) - 0.5) * (s100-s0), np.linspace(s100, s0, 25), s0 + (np.random.rand(25) - 0.5) * (s100-s0), np.ones((25,)) * s0)) # Interpolate to find crossings with 30% and 70% of signal # The general linear interpolation formula between (x0, y0) and (x1, y1) is: # y = y0 + (x-x0) * (y1-y0) / (x1-x0) # to find the x at which the crossing with y happens: # x = x0 + (y-y0) * (x1-x0) / (y1-y0) # Because we are using indices as time, x1-x0 == 1, and if the crossing # happens within the interval, then 0 &lt;= x &lt;= 1. # The following code is just a vectorized version of the above delta_s = np.diff(signal) t30 = (s0 + (s100-s0)*.3 - signal[:-1]) / delta_s idx30 = np.where((t30 &gt; 0) &amp; (t30 &lt; 1))[0] t30 = idx30 + t30[idx30] t70 = (s0 + (s100-s0)*.7 - signal[:-1]) / delta_s idx70 = np.where((t70 &gt; 0) &amp; (t70 &lt; 1))[0] t70 = idx70 + t70[idx70] # compute all possible transition times, keep the smallest idx = np.unravel_index(np.argmin(t30[:, None] - t70), (len(t30), len(t70),)) print t30[idx[0]] - t70[idx[1]] # 9.6 plt. plot(signal) plt.plot(t30, [s0 + (s100-s0)*.3]*len(t30), 'go') plt.plot(t30[idx[0]], [s0 + (s100-s0)*.3], 'o', mec='g', mfc='None', ms=10) plt.plot(t70, [s0 + (s100-s0)*.7]*len(t70), 'ro') plt.plot(t70[idx[1]], [s0 + (s100-s0)*.7], 'o', mec='r', mfc='None', ms=10 ) plt.show() </code></pre> <p><img src="https://i.stack.imgur.com/5vBwe.png" alt="enter image description here"></p>
numpy|scipy|signal-processing|python-2.5
3
1,909,808
59,879,601
unable to iterate over images in the video using moviepy
<p>I need to create a video by selecting a series of images in folder and add music to the video. With the below approach, I'm able to generate the video but unable to iterate the images while the video is running.</p> <pre><code>for filename in os.listdir("E://images"): if filename.endswith(".png"): clips.append(ImageClip("E://images//"+filename).set_duration(8)) finalVideo = CompositeVideoClip( clips ).set_duration(8) slides=[finalVideo] final = CompositeVideoClip(slides, size=(100,200)).set_duration(8) audioclip = AudioFileClip("E://songs//new.mp3") videoclip2 = final.set_audio(audioclip) videoclip2.write_videofile("test.mp4",fps=24) </code></pre> <p>I tried with this link as well <a href="https://stackoverflow.com/questions/44732602/convert-image-sequence-to-video-using-moviepy">Convert image sequence to video using Moviepy</a> instead of using <strong>CompositeVideoClip</strong> i tried with</p> <pre><code> concat_clip = concatenate_videoclips(clips, method="compose") </code></pre> <p>but it's not working for me. </p> <p>Pls suggest</p> <p>Thanks</p>
<p>I finally got it!</p> <p>The issue was the images I used: earlier the images were 3 MB, 1 MB, etc., but later I understood that all images should be above 14 KB and below 100 KB, and preferably .jpg files.</p> <pre><code>clips = [] for file in os.listdir("E:\\images\\"): if file.endswith(".jpg"): clips.append(VideoFileClip("E:\\images\\"+ file).set_duration(10)) video = concatenate_videoclips( clips,method='compose') audioclip = AudioFileClip("back.mp3",fps=44100) videoclip = video.set_audio(audioclip) videoclip.write_videofile('ab23.mp4',codec='mpeg4', fps=24,audio=True) </code></pre> <p>Thanks everybody......</p> <h2>my technology stack</h2> <p>python: 3.8.1</p> <p>moviepy: 1.0.1</p>
python-3.x|video-processing|moviepy
1
1,909,809
59,828,657
Dataframe horizontal stacked bar plot
<p>I am reading the movielens user data. I want to plot the age and occupation grouped by gender (in two separate plots). But I get this error:</p> <p><code>user_df.groupby(['gender'])['age'].unstack().plot.bar()</code></p> <p>AttributeError: Cannot access callable attribute 'unstack' of 'SeriesGroupBy' objects, try using the 'apply' method</p> <p>I would like the plot to be similar to the example in <a href="http://benalexkeen.com/bar-charts-in-matplotlib/" rel="nofollow noreferrer">http://benalexkeen.com/bar-charts-in-matplotlib/</a> The data format is like :</p> <pre><code>user_id age gender occupation zipcode 0 1 24 M technician 85711 1 2 53 F other 94043 2 3 23 M writer 32067 3 4 24 M technician 43537 4 5 33 F other 15213 </code></pre>
<p>You can try something like this:</p> <pre><code>df.groupby(['occupation'])['user_id'].nunique().plot.bar() </code></pre> <p><a href="https://i.stack.imgur.com/R0SrU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/R0SrU.png" alt=" "></a></p> <p>For both <strong>gender</strong> and <strong>occupation</strong>, you can do:</p> <pre><code>df.groupby(['occupation','gender'])['user_id'].size().unstack().plot.bar() </code></pre> <p><a href="https://i.stack.imgur.com/fU0zz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fU0zz.png" alt="enter image description here"></a></p>
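<p>Applying the same pattern to the other column the question asks about, a sketch (assuming <code>df</code> is the users dataframe from the question) for plotting <strong>age</strong> grouped by <strong>gender</strong>, binning the ages first so the bars stay readable:</p> <pre><code>import pandas as pd
import matplotlib.pyplot as plt

# bin ages into ranges, then count users per (age bin, gender)
age_bins = pd.cut(df['age'], bins=[0, 20, 30, 40, 50, 60, 100])
df.groupby([age_bins, 'gender'])['user_id'].size().unstack().plot.bar()
plt.show()
</code></pre>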
python|matplotlib
1
1,909,810
70,873,477
Parse div element from html with style attributes
<p>I'm trying to get the text <code>Something here I want to get</code> inside the div element from a html file using Python and BeautifulSoup.</p> <p>This is how part of the code looks like in html:</p> <pre><code>&lt;div xmlns=&quot;&quot; id=&quot;idp46819314579224&quot; style=&quot;box-sizing: border-box; width: 100%; margin: 0 0 10px 0; padding: 5px 10px; background: #d43f3a; font-weight: bold; font-size: 14px; line-height: 20px; color: #fff;&quot; class=&quot;&quot; onclick=&quot;toggleSection('idp46819314579224-container');&quot; onmouseover=&quot;this.style.cursor='pointer'&quot;&gt;Something here I want to get&lt;div id=&quot;idp46819314579224-toggletext&quot; style=&quot;float: right; text-align: center; width: 8px;&quot;&gt; - &lt;/div&gt; &lt;/div&gt; </code></pre> <p>And this is how I tried to do:</p> <pre><code>vu = soup.find_all(&quot;div&quot;, {&quot;style&quot; : &quot;background: #d43f3a&quot;}) for div in vu: print(div.text) </code></pre> <p>I use loop because there are several div with different id but all of them has the same background colour. It has no errors, but I got no output.</p> <p>How can I get the text using the background colour as the condition?</p>
<p>The <code>style</code> attribute has other content inside it:</p> <pre><code>style=&quot;box-sizing: ....; ....;&quot; </code></pre> <p>Your current code is asking <code>if style == &quot;background: #d43f3a&quot;</code>, which it is not.</p> <p>What you can do is ask <code>if &quot;background: #d43f3a&quot; in style</code> -- a sub-string check.</p> <p>One approach is passing a <a href="https://beautiful-soup-4.readthedocs.io/en/latest/#the-keyword-arguments" rel="nofollow noreferrer">regular expression</a>.</p> <pre><code>&gt;&gt;&gt; import re &gt;&gt;&gt; vu = soup.find_all(&quot;div&quot;, style=re.compile(&quot;background: #d43f3a&quot;)) ... ... for div in vu: ... print(div.text.strip()) Something here I want to get </code></pre> <p>You can also say the same thing using <a href="https://beautiful-soup-4.readthedocs.io/en/latest/#css-selectors" rel="nofollow noreferrer">CSS Selectors</a></p> <pre><code>soup.select('div[style*=&quot;background: #d43f3a&quot;]') </code></pre> <p>Or by passing a function/lambda (the <code>style and</code> guard skips tags that have no <code>style</code> attribute at all, which would otherwise raise a <code>TypeError</code>):</p> <pre><code>&gt;&gt;&gt; vu = soup.find_all(&quot;div&quot;, style=lambda style: style and &quot;background: #d43f3a&quot; in style) ... ... for div in vu: ... print(div.text.strip()) Something here I want to get </code></pre>
python|parsing|beautifulsoup
1
1,909,811
2,436,714
Name some non-trivial sites written using IronPython & Silverlight
<p>Just what the title says. It'd be nice to know a few non-trivial sites out there using <a href="http://ironpython.codeplex.com/wikipage?title=SilverlightInteractiveSession&amp;referringTitle=Home" rel="nofollow noreferrer">Silverlight in Python</a>.</p>
<p>My current job is writing business apps for a German / Swiss media media consortium using IronPython and Silverlight. We're gradually moving all our web apps over to IronPython / Silverlight as they are faster to build, look nicer and perform better than the Javascript equivalents.</p> <p>Definitely not trivial, but not public either I'm afraid (although there our main app may be used by customers - advertisers - when we port that over).</p>
python|silverlight|ironpython|web
2
1,909,812
3,135,015
How can I break this multithreaded python script into "chunks"?
<p>I'm processing 100k domain names into a CSV based on results taken from Siteadvisor using urllib (not the best method, I know). However, my current script creates too many threads and Python runs into errors. Is there a way I can "chunk" this script to do X number of domains at a time (say, 10-20) to prevent these errors? Thanks in advance.</p> <pre><code>import threading import urllib class Resolver(threading.Thread): def __init__(self, address, result_dict): threading.Thread.__init__(self) self.address = address self.result_dict = result_dict def run(self): try: content = urllib.urlopen("http://www.siteadvisor.com/sites/" + self.address).read(12000) search1 = content.find("didn't find any significant problems.") search2 = content.find('yellow') search3 = content.find('web reputation analysis found potential security') search4 = content.find("don't have the results yet.") if search1 != -1: result = "safe" elif search2 != -1: result = "caution" elif search3 != -1: result = "warning" elif search4 != -1: result = "unknown" else: result = "" self.result_dict[self.address] = result except: pass def main(): infile = open("domainslist", "r") intext = infile.readlines() threads = [] results = {} for address in [address.strip() for address in intext if address.strip()]: resolver_thread = Resolver(address, results) threads.append(resolver_thread) resolver_thread.start() for thread in threads: thread.join() outfile = open('final.csv', 'w') outfile.write("\n".join("%s,%s" % (address, ip) for address, ip in results.iteritems())) outfile.close() if __name__ == '__main__': main() </code></pre> <p><strong>Edit</strong>: new version, based on andyortlieb's suggestions.</p> <pre><code>import threading import urllib import time class Resolver(threading.Thread): def __init__(self, address, result_dict, threads): threading.Thread.__init__(self) self.address = address self.result_dict = result_dict self.threads = threads def run(self): try: content = urllib.urlopen("http://www.siteadvisor.com/sites/" + self.address).read(12000) search1 = content.find("didn't find any significant problems.") search2 = content.find('yellow') search3 = content.find('web reputation analysis found potential security') search4 = content.find("don't have the results yet.") if search1 != -1: result = "safe" elif search2 != -1: result = "caution" elif search3 != -1: result = "warning" elif search4 != -1: result = "unknown" else: result = "" self.result_dict[self.address] = result outfile = open('final.csv', 'a') outfile.write(self.address + "," + result + "\n") outfile.close() print self.address + result threads.remove(self) except: pass def main(): infile = open("domainslist", "r") intext = infile.readlines() threads = [] results = {} for address in [address.strip() for address in intext if address.strip()]: loop=True while loop: if len(threads) &lt; 20: resolver_thread = Resolver(address, results, threads) threads.append(resolver_thread) resolver_thread.start() loop=False else: time.sleep(.25) for thread in threads: thread.join() # removed so I can track the progress of the script # outfile = open('final.csv', 'w') # outfile.write("\n".join("%s,%s" % (address, ip) for address, ip in results.iteritems())) # outfile.close() if __name__ == '__main__': main() </code></pre>
<p>Your existing code will work beautifully - just modify your <code>__init__</code> method inside <code>Resolver</code> to take in an additional list of addresses instead of one at a time, so instead of having one thread for each address, you have one thread for every 10 (for example). That way you won't overload the threading.</p> <p>You'll obviously have to slightly modify <code>run</code> as well so it loops through the array of addresses instead of the one <code>self.address</code>.</p> <p>I can work up a quick example if you'd like, but from the quality of your code I feel as though you'll be able to handle it quite easily.</p> <p>Hope this helps!</p> <p><strong>EDIT</strong> Example below as requested. Note that you'll have to modify main to send your <code>Resolver</code> instance lists of addresses instead of a single address - I couldn't handle this for you without knowing more about the format of your file and how the addresses are stored. Note - you could do the <code>run</code> method with a helper function, but i thought this might be more understandable as an example</p> <pre><code>class Resolver(threading.Thread): def __init__(self, addresses, result_dict): threading.Thread.__init__(self) self.addresses = addresses # Now takes in a list of multiple addresses self.result_dict = result_dict def run(self): for address in self.addresses: # do your existing code for every address in the list try: content = urllib.urlopen("http://www.siteadvisor.com/sites/" + address).read(12000) search1 = content.find("didn't find any significant problems.") search2 = content.find('yellow') search3 = content.find('web reputation analysis found potential security') search4 = content.find("don't have the results yet.") if search1 != -1: result = "safe" elif search2 != -1: result = "caution" elif search3 != -1: result = "warning" elif search4 != -1: result = "unknown" else: result = "" self.result_dict[address] = result except: pass </code></pre>
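<p>A sketch of the matching change to <code>main()</code>, assuming the modified <code>Resolver</code> above: split the address list into roughly 20 equal chunks so the thread count stays bounded no matter how many domains are in the file:</p> <pre><code>def chunks(lst, size):
    # yield successive size-sized slices of lst
    for i in range(0, len(lst), size):
        yield lst[i:i + size]

def main():
    with open('domainslist') as infile:
        addresses = [line.strip() for line in infile if line.strip()]

    num_threads = 20
    size = max(1, (len(addresses) + num_threads - 1) // num_threads)  # ceil division

    threads, results = [], {}
    for chunk in chunks(addresses, size):
        t = Resolver(chunk, results)   # the modified Resolver shown above
        threads.append(t)
        t.start()

    for t in threads:
        t.join()
</code></pre>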
python|python-multithreading
2
1,909,813
67,103,306
CODE: Python, Error Type: 'int' object is not subscriptable
<p>So, the problem is write a void function which takes a 4 digit number and add the square of first two digit and last two digit.</p> <p>and my solution is</p> <pre><code>def add(): print(&quot;Enter a 4 Digit number&quot;) num = int(input()) if 999 &lt; num &lt; 10000: c = int(num[0:2]) d = int(num[2:4]) e = (c ** 2) + (d ** 2) print(e) else: print(&quot;Enter a valid number&quot;) </code></pre> <p>add()</p> <p>#it shows error: 'int' object is not subscriptable</p>
<p>This should work</p> <pre class="lang-py prettyprint-override"><code>def add(): print(&quot;Enter a 4 Digit number&quot;) num = int(input()) if 999 &lt; num &lt; 10000: c = int(str(num)[0:2]) #You first need to convert it into str d = int(str(num)[2:4]) #Same here e = (c ** 2) + (d ** 2) print(e) else: print(&quot;Enter a valid number&quot;) add() </code></pre>
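<p>An arithmetic alternative that avoids the string round-trip entirely: for a 4-digit number, <code>divmod(num, 100)</code> yields the first two and last two digits in one step:</p> <pre><code>def add():
    num = int(input('Enter a 4 Digit number\n'))
    if 999 &lt; num &lt; 10000:
        c, d = divmod(num, 100)   # e.g. 1234 -&gt; (12, 34)
        print(c ** 2 + d ** 2)
    else:
        print('Enter a valid number')

add()
</code></pre>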
python|python-3.x|pycharm
0
1,909,814
67,051,417
How to predict a single sample with Keras
<p>I'm trying to implement a Fully Convolutional Neural Network and can successfully test the accuracy of the model on the test set after training. However, I'd like to use the model to make a prediction on a single sample only. Training was in batches. I believe what I'm missing is related to batch size and input shape. Here is the configuration for the network:</p> <pre><code>def read(file_name): data = np.loadtxt(file_name, delimiter=&quot;\t&quot;) y = data[:, 0] x = data[:, 1:] return x, y.astype(int) train_data, train_labels = read(&quot;FordA_TRAIN.tsv&quot;) test_data, test_labels = read(&quot;FordA_TEST.tsv&quot;) train_data = train_data.reshape((train_data.shape[0], train_data.shape[1], 1)) test_data = test_data.reshape((test_data.shape[0], test_data.shape[1], 1)) num_classes = len(np.unique(train_labels)) #print(train_data[0]) # Shuffle the data to prepare for validation_split (and prevent overfitting for class order) idx = np.random.permutation(len(train_data)) train_data = train_data[idx] train_labels = train_labels[idx] #Standardize labels to have a value between 0 and 1 rather than -1 and 1. train_labels[train_labels == -1] = 0 test_labels[test_labels == -1] = 0 def make_model(input_shape): input_layer = keras.layers.Input(input_shape) conv1 = keras.layers.Conv1D(filters=64, kernel_size=3, padding=&quot;same&quot;)(input_layer) conv1 = keras.layers.BatchNormalization()(conv1) conv1 = keras.layers.ReLU()(conv1) conv2 = keras.layers.Conv1D(filters=64, kernel_size=3, padding=&quot;same&quot;)(conv1) conv2 = keras.layers.BatchNormalization()(conv2) conv2 = keras.layers.ReLU()(conv2) conv3 = keras.layers.Conv1D(filters=64, kernel_size=3, padding=&quot;same&quot;)(conv2) conv3 = keras.layers.BatchNormalization()(conv3) conv3 = keras.layers.ReLU()(conv3) gap = keras.layers.GlobalAveragePooling1D()(conv3) output_layer = keras.layers.Dense(num_classes, activation=&quot;softmax&quot;)(gap) return keras.models.Model(inputs=input_layer, outputs=output_layer) model = make_model(input_shape=train_data.shape[1:]) keras.utils.plot_model(model, show_shapes=True) epochs = 500 batch_size = 32 callbacks = [ keras.callbacks.ModelCheckpoint( &quot;best_model.h5&quot;, save_best_only=True, monitor=&quot;val_loss&quot; ), keras.callbacks.ReduceLROnPlateau( monitor=&quot;val_loss&quot;, factor=0.5, patience=20, min_lr=0.0001 ), keras.callbacks.EarlyStopping(monitor=&quot;val_loss&quot;, mode = 'min', patience=50, verbose=1), ] model.compile( optimizer=&quot;adam&quot;, loss=&quot;sparse_categorical_crossentropy&quot;, metrics=[&quot;sparse_categorical_accuracy&quot;], ) history = model.fit( train_data, train_labels, batch_size=batch_size, epochs=epochs, callbacks=callbacks, validation_split=0.2, verbose=1, ) model = keras.models.load_model(&quot;best_model.h5&quot;) test_loss, test_acc = model.evaluate(test_data, test_labels) print(&quot;Test accuracy&quot;, test_acc) print(&quot;Test loss&quot;, test_loss) </code></pre> <p>The above code can successfully display where the accuracy converged. Now, I'd like to make predictions on single samples. 
So far I have:</p> <pre><code>def read(file_name): data = np.loadtxt(file_name, delimiter=&quot;\t&quot;) y = data[:, 0] x = data[:, 1:] return x, y.astype(int) test_data, test_labels = read(&quot;FordA_TEST_B.tsv&quot;) test_data = test_data.reshape((test_data.shape[0], test_data.shape[1], 1)) test_labels[test_labels == -1] = 0 print(test_data) model = keras.models.load_model(&quot;forda_original_model.h5&quot;) q = model.predict(test_data[0]) </code></pre> <p>This raises the error: ValueError: Error when checking input: expected input_1 to have 3 dimensions, but got array with shape (500, 1)</p> <p>How does the input have to be reshaped and what is the rule to go by? Any help is much appreciated!</p>
<p>Copied from a comment:</p> <p>The model expects a batch dimension. Thus, to predict for a single model, just expand the dimensions to create a single-sized batch by running:</p> <pre><code>q = model.predict(test_data[0][None,...]) </code></pre> <p>or</p> <pre><code>q = model.predict(test_data[0][np.newaxis,...]) </code></pre>
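<p>Two more equivalent ways of writing the same thing, reusing <code>model</code> and <code>test_data</code> from the question: slicing with <code>0:1</code> keeps the batch axis, and <code>np.expand_dims</code> makes the intent explicit:</p> <pre><code>import numpy as np

q = model.predict(test_data[0:1])                        # shape (1, 500, 1)
q = model.predict(np.expand_dims(test_data[0], axis=0))  # same result
</code></pre>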
python|numpy|tensorflow|machine-learning|keras
1
1,909,815
63,792,229
variable equal to an updated dict does not work, why?
<p>Imagine I have</p> <pre><code>dict1 = {'uno':[1,2,3],'dos':[4,5,6]} </code></pre> <p>and</p> <pre><code>dictA = {'AA':'ZZZZZ'} </code></pre> <p>This works:</p> <pre><code>dict1.update(dictA) </code></pre> <p>Result: <code>{'uno': [1, 2, 3], 'dos': [4, 5, 6], 'AA':'ZZZZZ'}</code></p> <p>But this does not work:</p> <pre><code>B = dict1.update(dictA) </code></pre> <p>The result is not an error, but <code>B</code> is None, which makes this behaviour (IMO) strange and dangerous, since the code does not crash.</p> <p>So why is it returning None and not raising an error?</p> <p>Note: It looks like the way to go is:</p> <pre><code>C = dict1.update(dictA) B = {} B.update(dict1) B.update(dictA) B </code></pre> <p><code>C</code> is None; <code>B</code> is OK here.</p>
<p>When you call <code>update</code>, it updates <code>dict1</code> in place with the dictionary given as a parameter and returns <code>None</code>:</p> <p><a href="https://python-reference.readthedocs.io/en/latest/docs/dict/update.html" rel="nofollow noreferrer">Docs</a>:</p> <blockquote> <p>dict.update([mapping]). mapping: Required. Either another dictionary object or an iterable of key:value pairs (iterables of length two). If keyword arguments are specified, the dictionary is then updated with those key:value pairs. Return value: None</p> </blockquote> <p>Code:</p> <pre><code>dict1 = {'uno':[1,2,3],'dos':[4,5,6]} dict1.update({'tres':[7,8,9]}) # {'uno': [1, 2, 3], 'dos': [4, 5, 6], 'tres': [7, 8, 9]} print(dict1) </code></pre>
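<p>If the goal is a merged copy that leaves <code>dict1</code> untouched, there are shorter idioms, sketched below (the <code>|</code> operator needs Python 3.9+):</p> <pre><code>dict1 = {'uno': [1, 2, 3], 'dos': [4, 5, 6]}
dictA = {'AA': 'ZZZZZ'}

B = {**dict1, **dictA}     # Python 3.5+
C = dict1 | dictA          # Python 3.9+
D = dict(dict1, **dictA)   # older versions (string keys only)

print(B)  # {'uno': [1, 2, 3], 'dos': [4, 5, 6], 'AA': 'ZZZZZ'}
</code></pre>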
python|dictionary|updates
1
1,909,816
64,076,594
Object of type User is not JSON serializable in DRF
<p>I am customizing the API that I give when I send the get request. The following error occurred when the get request was sent after customizing the response value using GenericAPIView.</p> <p>traceback</p> <pre><code>Traceback (most recent call last): File &quot;C:\Users\kurak\AppData\Local\Programs\Python\Python38-32\lib\site-packages\django\core\handlers\exception.py&quot;, line 34, in inner response = get_response(request) File &quot;C:\Users\kurak\AppData\Local\Programs\Python\Python38-32\lib\site-packages\django\core\handlers\base.py&quot;, line 145, in _get_response response = self.process_exception_by_middleware(e, request) File &quot;C:\Users\kurak\AppData\Local\Programs\Python\Python38-32\lib\site-packages\django\core\handlers\base.py&quot;, line 143, in _get_response response = response.render() File &quot;C:\Users\kurak\AppData\Local\Programs\Python\Python38-32\lib\site-packages\django\template\response.py&quot;, line 105, in render self.content = self.rendered_content File &quot;C:\Users\kurak\AppData\Local\Programs\Python\Python38-32\lib\site-packages\rest_framework\response.py&quot;, line 70, in rendered_content ret = renderer.render(self.data, accepted_media_type, context) File &quot;C:\Users\kurak\AppData\Local\Programs\Python\Python38-32\lib\site-packages\rest_framework\renderers.py&quot;, line 100, in render ret = json.dumps( File &quot;C:\Users\kurak\AppData\Local\Programs\Python\Python38-32\lib\site-packages\rest_framework\utils\json.py&quot;, line 25, in dumps return json.dumps(*args, **kwargs) File &quot;C:\Users\kurak\AppData\Local\Programs\Python\Python38-32\lib\json\__init__.py&quot;, line 234, in dumps return cls( File &quot;C:\Users\kurak\AppData\Local\Programs\Python\Python38-32\lib\json\encoder.py&quot;, line 199, in encode chunks = self.iterencode(o, _one_shot=True) File &quot;C:\Users\kurak\AppData\Local\Programs\Python\Python38-32\lib\json\encoder.py&quot;, line 257, in iterencode return _iterencode(o, 0) File &quot;C:\Users\kurak\AppData\Local\Programs\Python\Python38-32\lib\site-packages\rest_framework\utils\encoders.py&quot;, line 67, in default return super().default(obj) File &quot;C:\Users\kurak\AppData\Local\Programs\Python\Python38-32\lib\json\encoder.py&quot;, line 179, in default raise TypeError(f'Object of type {o.__class__.__name__} ' TypeError: Object of type User is not JSON serializable </code></pre> <p>What's problem in my code? I can't solve this error. Please help me. Here is my code. 
Thanks in advance</p> <p>views.py</p> <pre><code>class ReadPostView (GenericAPIView) : serializer_class = PostSerializer permission_classes = [IsAuthenticated] def get (self, serializer) : serializer = self.serializer_class() posts = Post.objects.all() data = [] for post in posts : comments = Comment.objects.filter(post=post) json = { 'pk': post.pk, 'author': { 'email': post.author_email, 'username': post.author_name, 'profile': post.author_profile }, 'like': post.liker.count, 'liker': post.liker, 'text': post.text, 'images': Image.objects.filter(post=post), 'comments_count': comments.count(), 'view': post.view, 'viewer_liked': None, 'tag': post.tag } data.append(json) return Response(data) </code></pre> <p>models.py</p> <pre><code>class Post (models.Model): author_name = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE, related_name='authorName', null=True) author_email = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE, related_name='authorEmail', null=True) author_profile = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE, related_name='authorProfile', null=True) title = models.CharField(max_length=40) text = models.TextField(max_length=300) tag = models.CharField(max_length=511, null=True) view = models.IntegerField(default=0) viewer = models.ManyToManyField(settings.AUTH_USER_MODEL, related_name='viewer', blank=True) like = models.IntegerField(default=0) liker = models.ManyToManyField(settings.AUTH_USER_MODEL, related_name='liker', blank=True) def __str__ (self) : return self.title class Image (models.Model) : post = models.ForeignKey(Post, on_delete=models.CASCADE) image = models.ImageField(null=True, blank=True) class Comment (models.Model) : author = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE, null=True) post = models.ForeignKey(Post, on_delete=models.CASCADE) text = models.TextField(max_length=200) </code></pre>
<p>There are a few problems with your code:</p> <p><strong>First,</strong> you can't pass an instance or a list of instances to your JSON fields. <code>'email': post.author_email,</code>, <code>'username': post.author_name,</code>, <code>'profile': post.author_profile</code>, <code>'liker': post.liker,</code> and <code>'images': Image.objects.filter(post=post),</code></p> <p>To fix this you either have to create a serializer for their model and pass the serialized data instead or you have to just pass a serializable field of those models like <code>post.liker.email</code></p> <p>You can use DRF <code>ModelSerializer</code>'s to make a model serializer: <a href="https://www.django-rest-framework.org/api-guide/serializers/#modelserializer" rel="nofollow noreferrer">ModelSerializer</a></p> <p><strong>Second,</strong> you don't need all three fields <code>author_name</code>, <code>author_email</code>, and <code>author_profile</code> in your model. all of them are pointing to your default user model and you can access everywhere from one of them:</p> <pre><code>post.author_profile.email # will give you the user email post.author_profile.first_name # will give you the user's first name # and so on ... </code></pre> <p><strong>Third,</strong> you can just use <code>ListAPIView</code> to generate a serialized list of your data: <a href="https://www.django-rest-framework.org/api-guide/generic-views/#listapiview" rel="nofollow noreferrer">ListAPIView</a></p> <p>You are doing the whole thing wrong here. Please consider looking at some more django and rest framework examples.</p>
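<p>For the serializer route, a minimal sketch of what nested <code>ModelSerializer</code>s could look like. The field choices, import path, and reverse-relation defaults (<code>image_set</code>, <code>comment_set</code>) are assumptions based on the models in the question, not a drop-in implementation:</p> <pre><code>from rest_framework import serializers
from .models import Post, Image  # import path assumed

class ImageSerializer(serializers.ModelSerializer):
    class Meta:
        model = Image
        fields = ['image']

class PostSerializer(serializers.ModelSerializer):
    # nested/related data rendered as serializable values
    images = ImageSerializer(source='image_set', many=True, read_only=True)
    liker = serializers.SlugRelatedField(
        slug_field='email', many=True, read_only=True)
    comments_count = serializers.IntegerField(
        source='comment_set.count', read_only=True)

    class Meta:
        model = Post
        fields = ['pk', 'title', 'text', 'tag', 'view',
                  'like', 'liker', 'images', 'comments_count']
</code></pre>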
python|django|django-rest-framework
1
1,909,817
64,070,788
How to do a listing and justify numbers?
<p>I am looking to get some help with this assignment. From the list below, I should find numbers greater than 0, the numbers are written to the file right justified in 10 spaces, with 2 spaces allowed for the fractional portion of the value, and finally write them into a file.</p> <p>Here is what I've got so far:</p> <pre><code>def formatted_file(file_name, nums_list): ''' Test: &gt;&gt;&gt; formatted_file('out1.txt', [1, 23.999, -9, 327.1]) &gt;&gt;&gt; show_file('out1.txt') 1.00 24.00 327.10 &lt;BLANKLINE&gt; &gt;&gt;&gt; formatted_file('out1.txt',[-1, -98.6]) &gt;&gt;&gt; show_file('out1.txt') &lt;BLANKLINE&gt; &gt;&gt;&gt; formatted_file('out1.txt',[]) &gt;&gt;&gt; show_file('out1.txt') &lt;BLANKLINE&gt; ''' with open('out1.txt', 'w') as my_file: for x in nums_list: if x &gt; 0: a = list() a.append(x) if len(a) &gt; 0: my_file.write(f'{i:10.2f}\n') def show_file(file_name): with open(file_name, 'r') as result_file: print(result_file.read()) if __name__ == &quot;__main__&quot;: import doctest doctest.testmod(verbose = True) </code></pre> <p>When I run this function, the file that I get is blank. I got it working last night in pycharm, but when I ran it to IDLE it didn't work. And now it's throwing a bunch of error in pycharm as well.</p> <hr /> <p>Thanks both for your suggestions. Unfortunately none of the methods writes the output in the file :(</p> <p>The test passes in IDLE though.</p>
<p>If you use <code>str.format</code>, you must pass the value to the <code>format()</code> call. I also do not understand what <code>len(a)</code>, or <code>a</code> in particular, is used for here. This is your function with my modification, which passed the first test (I think the other two were just for debugging):</p> <pre><code>def formatted_file(file_name, nums_list): ''' Test: &gt;&gt;&gt; formatted_file('out2.txt', [1, 23.999, -9, 327.1]) &gt;&gt;&gt; show_file('out2.txt') 1.00 24.00 327.10 &lt;BLANKLINE&gt; ''' with open(file_name, 'w') as my_file: for x in nums_list: if x &gt; 0: my_file.write('{:10.2f}\n'.format(x)) def show_file(file_name): with open(file_name, 'r') as result_file: print(result_file.read()) if __name__ == &quot;__main__&quot;: import doctest doctest.testmod(verbose = True) </code></pre> <hr /> <pre><code>Trying: formatted_file('out2.txt', [1, 23.999, -9, 327.1]) Expecting nothing ok Trying: show_file('out2.txt') Expecting: 1.00 24.00 327.10 &lt;BLANKLINE&gt; ok 2 items had no tests: __main__ __main__.show_file 1 items passed all tests: 2 tests in __main__.formatted_file 2 tests in 3 items. 2 passed and 0 failed. Test passed. </code></pre>
python
0
1,909,818
42,935,034
Python/ Boto 3: How to retrieve/download files from AWS S3?
<p>In Python/Boto 3, Found out that to download a file individually from S3 to local can do the following:</p> <pre><code> bucket = self._aws_connection.get_bucket(aws_bucketname) for s3_file in bucket.list(): if filename == s3_file.name: self._downloadFile(s3_file, local_download_directory) break; </code></pre> <p>And to download all files under one chosen directory: </p> <pre><code> else: bucket = self._aws_connection.get_bucket(aws_bucketname) for s3_file in bucket.list(): self._downloadFile(s3_file, local_download_directory) </code></pre> <p>And helper function <code>_downloadFile()</code>:</p> <pre><code> def _downloadFile(self, s3_file, local_download_destination): full_local_path = os.path.expanduser(os.path.join(local_download_destination, s3_file.name)) try: print "Downloaded: %s" % (full_local_path) s3_file.get_contents_to_filename(full_local_path) </code></pre> <p>But both don’t seem to be working. Using Boto 3 and Python, would like to be able to download all files, as a zip preferably, under a defined directory on S3 to my local. </p> <p>What could I be doing wrong, and what’s the correct implementation of the parameters? </p> <p>Thank you in advance, and will be sure to accept/upvote answer</p> <p><strong>UPDATE CODE</strong>: <code>Getting an error: “AttributeError: 'S3' object has no attribute</code></p> <pre><code>import sys import json import os import subprocess import boto3 from boto.s3.connection import S3Connection s3 = boto3.resource('s3') s3client = boto3.client('s3') #This works for bucket in s3.buckets.all(): print(bucket.name) def main(): #Getting an error: “AttributeError: 'S3' object has no attribute 'download’” s3client.download('testbucket', 'fileone.json', 'newfile') if __name__ == "__main__": main() </code></pre>
<p>To download files from S3 to Local FS, use the <a href="https://boto3.readthedocs.io/en/latest/reference/services/s3.html#S3.Client.download_file" rel="noreferrer"><code>download_file()</code></a> method</p> <pre><code>s3client = boto3.client('s3') s3client.download_file(Bucket, Key, Filename) </code></pre> <p>If the S3 object is <code>s3://mybucket/foo/bar/file.txt</code>, then the arguments would be</p> <pre><code>Bucket --&gt; mybucket Key --&gt; foo/bar/file.txt Filename --&gt; /local/path/file.txt </code></pre> <p>There aren't any methods to download the entire bucket. An alternative way would be to list all the objects in the bucket and download them individually as files.</p> <pre><code>for obj in s3client.list_objects(Bucket='mybucket')['Contents']: try: filename = obj['Key'].rsplit('/', 1)[1] except IndexError: filename = obj['Key'] localfilename = os.path.join('/home/username/Downloads/', filename) # The local directory must exist. s3client.download_file('mybucket', obj['Key'], localfilename) </code></pre> <p><strong>Note:</strong> The response of <a href="https://boto3.readthedocs.io/en/latest/reference/services/s3.html#S3.Client.list_objects" rel="noreferrer"><code>list_objects()</code></a> is truncated to 1000 objects. Use the markers in the response to retrieve the remainder of objects in the bucket.</p>
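<p>For buckets with more than 1000 objects, a sketch using boto3's built-in paginator instead of handling the markers by hand; bucket name, prefix, and local directory are placeholders:</p> <pre><code>import os
import boto3

s3client = boto3.client('s3')
paginator = s3client.get_paginator('list_objects_v2')

for page in paginator.paginate(Bucket='mybucket', Prefix='foo/bar/'):
    for obj in page.get('Contents', []):
        filename = obj['Key'].rsplit('/', 1)[-1]
        if not filename:          # skip 'directory' placeholder keys
            continue
        localfilename = os.path.join('/home/username/Downloads/', filename)
        s3client.download_file('mybucket', obj['Key'], localfilename)
</code></pre>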
python|amazon-web-services|amazon-s3|boto|boto3
19
1,909,819
66,359,449
Python Reportlab divide table to fit into different pages
<p>I am trying to build a schedule planner, in a PDF file generated with ReportLab. The schedule will have a different rows depending on the hour of the day: starting with 8:00 a.m., 8:15 a.m., 8:30 a.m., and so on.</p> <p>I made a loop in which the hours will be calculated automatically and the schedule will be filled. However, since my table is too long, it doesn't fit completely in the page. (Although the schedule should end on 7:30 p.m., it is cutted at 2:00 p.m.)</p> <p>The desired result is to have a PageBreak when the table is at around 20 activities. On the next page, the header should be exactly the same as in the first page and below, the continuation of the table. The process should repeat every time it is necessary, until the end of the table.</p> <p><a href="https://i.stack.imgur.com/MVaxQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MVaxQ.png" alt="enter image description here" /></a></p> <p><strong>The Python code is the following:</strong></p> <pre><code>from reportlab.pdfgen.canvas import Canvas from datetime import datetime, timedelta from reportlab.platypus import Table, TableStyle from reportlab.lib import colors from reportlab.lib.pagesizes import letter, landscape class Vendedor: &quot;&quot;&quot; Información del Vendedor: Nombre, sucursal, meta de venta &quot;&quot;&quot; def __init__(self, nombre_vendedor, sucursal, dia_reporte): self.nombre_vendedor = nombre_vendedor self.sucursal = sucursal self.dia_reporte = dia_reporte class Actividades: &quot;&quot;&quot; Información de las Actividades realizadas: Hora de actividad y duración, cliente atendido, tipo de actividad, resultado, monto venta (mxn) + (usd), monto cotización (mxn) + (usd), solicitud de apoyo y comentarios adicionales &quot;&quot;&quot; def __init__(self, hora_actividad, duracion_actividad, cliente, tipo_actividad, resultado, monto_venta_mxn, monto_venta_usd, monto_cot_mxn, monto_cot_usd, requiero_apoyo, comentarios_extra): self.hora_actividad = hora_actividad self.duracion_actividad = duracion_actividad self.cliente = cliente self.tipo_actividad = tipo_actividad self.resultado = resultado self.monto_venta_mxn = monto_venta_mxn self.monto_venta_usd = monto_venta_usd self.monto_cot_mxn = monto_cot_mxn self.monto_cot_usd = monto_cot_usd self.requiero_apoyo = requiero_apoyo self.comentarios_extra = comentarios_extra class PDFReport: &quot;&quot;&quot; Crea el Reporte de Actividades diarias en archivo de formato PDF &quot;&quot;&quot; def __init__(self, filename): self.filename = filename vendedor = Vendedor('John Doe', 'Stack Overflow', datetime.now().strftime('%d/%m/%Y')) file_name = 'cronograma_actividades.pdf' document_title = 'Cronograma Diario de Actividades' title = 'Cronograma Diario de Actividades' nombre_colaborador = vendedor.nombre_vendedor sucursal_colaborador = vendedor.sucursal fecha_actual = vendedor.dia_reporte canvas = Canvas(file_name) canvas.setPageSize(landscape(letter)) canvas.setTitle(document_title) canvas.setFont(&quot;Helvetica-Bold&quot;, 20) canvas.drawCentredString(385+100, 805-250, title) canvas.setFont(&quot;Helvetica&quot;, 16) canvas.drawCentredString(385+100, 785-250, nombre_colaborador + ' - ' + sucursal_colaborador) canvas.setFont(&quot;Helvetica&quot;, 14) canvas.drawCentredString(385+100, 765-250, fecha_actual) title_background = colors.fidblue hour = 8 minute = 0 hour_list = [] data_actividades = [ {'Hora', 'Cliente', 'Resultado de \nActividad', 'Monto Venta \n(MXN)', 'Monto Venta \n(USD)', 'Monto Cotización \n(MXN)', 'Monto Cotización \n(USD)', 
'Comentarios \nAdicionales'}, ] i = 0 for i in range(47): if minute == 0: if hour &lt;= 12: time = str(hour) + ':' + str(minute) + '0 a.m.' else: time = str(hour-12) + ':' + str(minute) + '0 p.m.' else: if hour &lt;= 12: time = str(hour) + ':' + str(minute) + ' a.m.' else: time = str(hour-12) + ':' + str(minute) + ' p.m.' if minute != 45: minute += 15 else: hour += 1 minute = 0 hour_list.append(time) # I TRIED THIS SOLUTION BUT THIS DIDN'T WORK # if i % 20 == 0: # canvas.showPage() data_actividades.append([hour_list[i], i, i, i, i, i, i, i]) i += 1 table_actividades = Table(data_actividades, colWidths=85, rowHeights=30, repeatRows=1) tblStyle = TableStyle([ ('BACKGROUND', (0, 0), (-1, 0), title_background), ('TEXTCOLOR', (0, 0), (-1, 0), colors.whitesmoke), ('ALIGN', (1, 0), (1, -1), 'CENTER'), ('GRID', (0, 0), (-1, -1), 1, colors.black) ]) rowNumb = len(data_actividades) for row in range(1, rowNumb): if row % 2 == 0: table_background = colors.lightblue else: table_background = colors.aliceblue tblStyle.add('BACKGROUND', (0, row), (-1, row), table_background) table_actividades.setStyle(tblStyle) width = 150 height = 150 table_actividades.wrapOn(canvas, width, height) table_actividades.drawOn(canvas, 65, (0 - height) - 240) canvas.save() </code></pre> <p>I tried by adding:</p> <pre><code>if i % 20 == 0: canvas.showPage() </code></pre> <p>However this failed to achieve the desired result.</p> <p>Other quick note: Although I specifically coded the column titles of the table. Once I run the program, the order of the column titles is modified for some reason (see the pasted image). Any idea of why this is happening?</p> <pre><code>data_actividades = [ {'Hora', 'Cliente', 'Resultado de \nActividad', 'Monto Venta \n(MXN)', 'Monto Venta \n(USD)', 'Monto Cotización \n(MXN)', 'Monto Cotización \n(USD)', 'Comentarios \nAdicionales'}, ] </code></pre> <p>Thank you very much in advance, have a great day!</p>
<p>You should use templates, as suggested in Chapter 5, &quot;PLATYPUS - Page Layout and Typography Using Scripts&quot;, of the official documentation.</p> <p>The basic idea is to use frames and append everything you want to render to a list (in my case I call it &quot;contents&quot;). With the command <code>contents.append(FrameBreak())</code> you leave the current frame and move on to the next one; if you want to switch to a different page template, you use <code>contents.append(NextPageTemplate('&lt;template_name&gt;'))</code>.</p> <p><strong>My proposal:</strong></p> <p>For your case I used two templates: the first one contains the header with the sheet information plus the first part of the table, and the other one holds the rest of the content. The templates are named <code>firstpage</code> and <code>laterpages</code>. The code is as follows:</p> <pre><code>from reportlab.pdfgen.canvas import Canvas
from datetime import datetime, timedelta
from reportlab.platypus import Table, TableStyle
from reportlab.lib import colors
from reportlab.lib.pagesizes import letter, landscape
from reportlab.platypus import BaseDocTemplate, Frame, Paragraph, PageBreak, \
    PageTemplate, Spacer, FrameBreak, NextPageTemplate, Image
from reportlab.lib.pagesizes import letter, A4
from reportlab.lib.units import inch, cm
from reportlab.lib.styles import getSampleStyleSheet
from reportlab.lib.enums import TA_JUSTIFY, TA_CENTER, TA_LEFT, TA_RIGHT


class Vendedor:
    &quot;&quot;&quot;
    Información del Vendedor: Nombre, sucursal, meta de venta
    &quot;&quot;&quot;
    def __init__(self, nombre_vendedor, sucursal, dia_reporte):
        self.nombre_vendedor = nombre_vendedor
        self.sucursal = sucursal
        self.dia_reporte = dia_reporte


class Actividades:
    &quot;&quot;&quot;
    Información de las Actividades realizadas: Hora de actividad y duración,
    cliente atendido, tipo de actividad, resultado, monto venta (mxn) + (usd),
    monto cotización (mxn) + (usd), solicitud de apoyo y comentarios adicionales
    &quot;&quot;&quot;
    def __init__(self, hora_actividad, duracion_actividad, cliente, tipo_actividad,
                 resultado, monto_venta_mxn, monto_venta_usd, monto_cot_mxn,
                 monto_cot_usd, requiero_apoyo, comentarios_extra):
        self.hora_actividad = hora_actividad
        self.duracion_actividad = duracion_actividad
        self.cliente = cliente
        self.tipo_actividad = tipo_actividad
        self.resultado = resultado
        self.monto_venta_mxn = monto_venta_mxn
        self.monto_venta_usd = monto_venta_usd
        self.monto_cot_mxn = monto_cot_mxn
        self.monto_cot_usd = monto_cot_usd
        self.requiero_apoyo = requiero_apoyo
        self.comentarios_extra = comentarios_extra


class PDFReport:
    &quot;&quot;&quot;
    Crea el Reporte de Actividades diarias en archivo de formato PDF
    &quot;&quot;&quot;
    def __init__(self, filename):
        self.filename = filename


vendedor = Vendedor('John Doe', 'Stack Overflow', datetime.now().strftime('%d/%m/%Y'))

file_name = 'cronograma_actividades.pdf'
document_title = 'Cronograma Diario de Actividades'
title = 'Cronograma Diario de Actividades'
nombre_colaborador = vendedor.nombre_vendedor
sucursal_colaborador = vendedor.sucursal
fecha_actual = vendedor.dia_reporte

canvas = Canvas(file_name, pagesize=landscape(letter))
doc = BaseDocTemplate(file_name)
contents = []
width, height = A4

left_header_frame = Frame(
    0.2*inch,
    height-1.2*inch,
    2*inch,
    1*inch
)

right_header_frame = Frame(
    2.2*inch,
    height-1.2*inch,
    width-2.5*inch,
    1*inch, id='normal'
)

frame_later = Frame(
    0.2*inch,
    0.6*inch,
    (width-0.6*inch)+0.17*inch,
    height-1*inch,
    leftPadding=0, topPadding=0,
    showBoundary=1,
    id='col'
)

frame_table = Frame(
    0.2*inch,
    0.7*inch,
    (width-0.6*inch)+0.17*inch,
    height-2*inch,
    leftPadding=0, topPadding=0,
    showBoundary=1,
    id='col'
)

laterpages = PageTemplate(id='laterpages', frames=[frame_later])
firstpage = PageTemplate(id='firstpage', frames=[left_header_frame, right_header_frame, frame_table],)

contents.append(NextPageTemplate('firstpage'))

logoleft = Image('logo_power.png')
logoleft._restrictSize(1.5*inch, 1.5*inch)
logoleft.hAlign = 'CENTER'
logoleft.vAlign = 'CENTER'
contents.append(logoleft)
contents.append(FrameBreak())

styleSheet = getSampleStyleSheet()
style_title = styleSheet['Heading1']
style_title.fontSize = 20
style_title.fontName = 'Helvetica-Bold'
style_title.alignment = TA_CENTER

style_data = styleSheet['Normal']
style_data.fontSize = 16
style_data.fontName = 'Helvetica'
style_data.alignment = TA_CENTER

style_date = styleSheet['Normal']
style_date.fontSize = 14
style_date.fontName = 'Helvetica'
style_date.alignment = TA_CENTER

canvas.setTitle(document_title)

contents.append(Paragraph(title, style_title))
contents.append(Paragraph(nombre_colaborador + ' - ' + sucursal_colaborador, style_data))
contents.append(Paragraph(fecha_actual, style_date))
contents.append(FrameBreak())

title_background = colors.fidblue

hour = 8
minute = 0
hour_list = []
data_actividades = [
    {'Hora', 'Cliente', 'Resultado de \nActividad', 'Monto Venta \n(MXN)',
     'Monto Venta \n(USD)', 'Monto Cotización \n(MXN)', 'Monto Cotización \n(USD)',
     'Comentarios \nAdicionales'},
]

i = 0
for i in range(300):
    if minute == 0:
        if hour &lt;= 12:
            time = str(hour) + ':' + str(minute) + '0 a.m.'
        else:
            time = str(hour-12) + ':' + str(minute) + '0 p.m.'
    else:
        if hour &lt;= 12:
            time = str(hour) + ':' + str(minute) + ' a.m.'
        else:
            time = str(hour-12) + ':' + str(minute) + ' p.m.'
    if minute != 45:
        minute += 15
    else:
        hour += 1
        minute = 0
    hour_list.append(time)

    # I TRIED THIS SOLUTION BUT THIS DIDN'T WORK
    # if i % 20 == 0:

    data_actividades.append([hour_list[i], i, i, i, i, i, i, i])
    i += 1

table_actividades = Table(data_actividades, colWidths=85, rowHeights=30, repeatRows=1)
tblStyle = TableStyle([
    ('BACKGROUND', (0, 0), (-1, 0), title_background),
    ('TEXTCOLOR', (0, 0), (-1, 0), colors.whitesmoke),
    ('ALIGN', (1, 0), (1, -1), 'CENTER'),
    ('GRID', (0, 0), (-1, -1), 1, colors.black)
])
rowNumb = len(data_actividades)
for row in range(1, rowNumb):
    if row % 2 == 0:
        table_background = colors.lightblue
    else:
        table_background = colors.aliceblue
    tblStyle.add('BACKGROUND', (0, row), (-1, row), table_background)
table_actividades.setStyle(tblStyle)

width = 150
height = 150

contents.append(NextPageTemplate('laterpages'))
contents.append(table_actividades)
contents.append(PageBreak())

doc.addPageTemplates([firstpage, laterpages])
doc.build(contents)
</code></pre> <p><strong>Results</strong></p> <p>With this you can add as many records as you want; I tried with 300. The table is not fully visible because, for my own convenience, I made an A4-size PDF. However, the principle is the same for any size, so you must play with the size of the frames and the size of the PDF page.</p> <p><a href="https://i.stack.imgur.com/zJ9He.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zJ9He.png" alt="enter image description here" /></a> <a href="https://i.stack.imgur.com/SYEm9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SYEm9.png" alt="enter image description here" /></a></p> <p><strong>EXTRA, add header on each page</strong></p> <p>Since only one template is needed now, the <code>firstpage</code> template can be removed; the same template is used for all pages. As you proposed at the beginning, I cut the table every 21 records (to include the table header) and group the pieces in a list, which is then iterated over, adding the header with the logo in each cycle. The cutting condition also covers the case where fewer than 21 records remain at the end. The code is as follows:</p> <pre><code>canvas = Canvas(file_name, pagesize=landscape(letter))
doc = BaseDocTemplate(file_name)
contents = []
width, height = A4

left_header_frame = Frame(
    0.2*inch,
    height-1.2*inch,
    2*inch,
    1*inch
)

right_header_frame = Frame(
    2.2*inch,
    height-1.2*inch,
    width-2.5*inch,
    1*inch, id='normal'
)

frame_table = Frame(
    0.2*inch,
    0.7*inch,
    (width-0.6*inch)+0.17*inch,
    height-2*inch,
    leftPadding=0, topPadding=0,
    showBoundary=1,
    id='col'
)

laterpages = PageTemplate(id='laterpages', frames=[left_header_frame, right_header_frame, frame_table],)

logoleft = Image('logo_power.png')
logoleft._restrictSize(1.5*inch, 1.5*inch)
logoleft.hAlign = 'CENTER'
logoleft.vAlign = 'CENTER'

styleSheet = getSampleStyleSheet()
style_title = styleSheet['Heading1']
style_title.fontSize = 20
style_title.fontName = 'Helvetica-Bold'
style_title.alignment = TA_CENTER

style_data = styleSheet['Normal']
style_data.fontSize = 16
style_data.fontName = 'Helvetica'
style_data.alignment = TA_CENTER

style_date = styleSheet['Normal']
style_date.fontSize = 14
style_date.fontName = 'Helvetica'
style_date.alignment = TA_CENTER

canvas.setTitle(document_title)

title_background = colors.fidblue

hour = 8
minute = 0
hour_list = []
data_actividades = [
    {'Hora', 'Cliente', 'Resultado de \nActividad', 'Monto Venta \n(MXN)',
     'Monto Venta \n(USD)', 'Monto Cotización \n(MXN)', 'Monto Cotización \n(USD)',
     'Comentarios \nAdicionales'},
]

i = 0
table_group = []
size = 304
count = 0
for i in range(size):
    if minute == 0:
        if hour &lt;= 12:
            time = str(hour) + ':' + str(minute) + '0 a.m.'
        else:
            time = str(hour-12) + ':' + str(minute) + '0 p.m.'
    else:
        if hour &lt;= 12:
            time = str(hour) + ':' + str(minute) + ' a.m.'
        else:
            time = str(hour-12) + ':' + str(minute) + ' p.m.'
    if minute != 45:
        minute += 15
    else:
        hour += 1
        minute = 0
    hour_list.append(time)
    data_actividades.append([hour_list[i], i, i, i, i, i, i, i])
    i += 1

    table_actividades = Table(data_actividades, colWidths=85, rowHeights=30, repeatRows=1)
    tblStyle = TableStyle([
        ('BACKGROUND', (0, 0), (-1, 0), title_background),
        ('TEXTCOLOR', (0, 0), (-1, 0), colors.whitesmoke),
        ('ALIGN', (1, 0), (1, -1), 'CENTER'),
        ('GRID', (0, 0), (-1, -1), 1, colors.black)
    ])
    rowNumb = len(data_actividades)
    for row in range(1, rowNumb):
        if row % 2 == 0:
            table_background = colors.lightblue
        else:
            table_background = colors.aliceblue
        tblStyle.add('BACKGROUND', (0, row), (-1, row), table_background)
    table_actividades.setStyle(tblStyle)

    if ((count &gt;= 20) or (i == size)):
        count = 0
        table_group.append(table_actividades)
        data_actividades = [
            {'Hora', 'Cliente', 'Resultado de \nActividad', 'Monto Venta \n(MXN)',
             'Monto Venta \n(USD)', 'Monto Cotización \n(MXN)', 'Monto Cotización \n(USD)',
             'Comentarios \nAdicionales'},
        ]
        width = 150
        height = 150
    count += 1
    if i &gt; size:
        break

contents.append(NextPageTemplate('laterpages'))
for table in table_group:
    contents.append(logoleft)
    contents.append(FrameBreak())
    contents.append(Paragraph(title, style_title))
    contents.append(Paragraph(nombre_colaborador + ' - ' + sucursal_colaborador, style_data))
    contents.append(Paragraph(fecha_actual, style_date))
    contents.append(FrameBreak())
    contents.append(table)
    contents.append(FrameBreak())

doc.addPageTemplates([laterpages,])
doc.build(contents)
</code></pre> <p><strong>Extra - result:</strong></p> <p><a href="https://i.stack.imgur.com/ah8sZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ah8sZ.png" alt="enter image description here" /></a></p>
python|datatable|pdf-generation|reportlab
2
1,909,820
66,379,390
How to automatically select idle GPU for model training in tensorflow?
<p>I am using the nvidia prebuilt docker container <code>NVIDIA Release 20.12-tf2</code> to run my experiment. I am using <code>TensorFlow Version 2.3.1</code>. Currently, I am running my model on one GPU, and I still have 3 more idle GPUs, so I intend to run my alternative experiments on any of the idle GPUs. Here is the output of <code>nvidia-smi</code>:</p> <pre><code>+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.51.06    Driver Version: 450.51.06    CUDA Version: 11.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla T4            Off  | 00000000:6A:00.0 Off |                    0 |
| N/A   70C    P0    71W /  70W |  14586MiB / 15109MiB |    100%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   1  Tesla T4            Off  | 00000000:6B:00.0 Off |                    0 |
| N/A   39C    P0    27W /  70W |    212MiB / 15109MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   2  Tesla T4            Off  | 00000000:6C:00.0 Off |                    0 |
| N/A   41C    P0    28W /  70W |    212MiB / 15109MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   3  Tesla T4            Off  | 00000000:6D:00.0 Off |                    0 |
| N/A   41C    P0    28W /  70W |    212MiB / 15109MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
+-----------------------------------------------------------------------------+
</code></pre> <p><strong>update: prebuilt container</strong>:</p> <p>I'm using the nvidia prebuilt container as follows:</p> <pre><code>docker run -ti --rm --gpus all --shm-size=1024m -v /home/hamilton/data:/data nvcr.io/nvidia/tensorflow:20.12-tf2-py3
</code></pre> <p>To utilize the idle GPUs for my other experiments, I tried to add the following to my python script:</p> <p><strong>attempt-1</strong></p> <pre><code>import tensorflow as tf
devices = tf.config.experimental.list_physical_devices('GPU')
tf.config.experimental.set_memory_growth(devices[0], True)
</code></pre> <p>but this attempt gave me the following error:</p> <blockquote> <pre><code>raise ValueError(&quot;Memory growth cannot differ between GPU devices&quot;)
ValueError: Memory growth cannot differ between GPU devices
</code></pre> </blockquote> <p>I googled this error, but none of the fixes discussed on GitHub worked for me.</p> <p><strong>attempt-2</strong></p> <p>I also tried this:</p> <pre><code>gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)
</code></pre> <p>but this attempt also gave me an error like this:</p> <blockquote> <p>Error occurred when finalizing GeneratorDataset iterator: Failed precondition: Python interpreter state is not initialized. The process may be terminated.</p> </blockquote> <p>People discussed this error on GitHub, but I am still not able to get rid of it on my side.</p> <p><strong>latest attempt</strong>:</p> <p>I also tried parallel training with TensorFlow and added this to my python script:</p> <pre><code>device_type = &quot;GPU&quot;
devices = tf.config.experimental.list_physical_devices(device_type)
devices_names = [d.name.split(&quot;e:&quot;)[1] for d in devices]
strategy = tf.distribute.MirroredStrategy(devices=devices_names[:3])

with strategy.scope():
    opt = Adam(learning_rate=0.1)
    model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
</code></pre> <p>but this also gave me an error and the program stopped.</p> <p>Can anyone help me automatically select idle GPUs for model training in tensorflow? Does anyone know a workable approach? What's wrong with my attempts above? Is there any way to utilize the idle GPUs while the program is already running on one of them? Any thoughts?</p>
<p>Thanks to @HernánAlarcón's suggestion, I tried this and it worked like a charm:</p> <pre><code>docker run -ti --rm --gpus device=1,3 --shm-size=1024m -v /home/hamilton/data:/data nvcr.io/nvidia/tensorflow:20.12-tf2-py3
</code></pre> <p>This may not be an elegant solution, but it worked like a charm. I am open to other possible remedies for this sort of problem.</p>
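<p>For reference, a similar effect can be achieved inside the script itself by hiding the busy devices from TensorFlow before it initialises CUDA. This is only a sketch, and it assumes you already know which GPU indices are idle (here 1 and 3, matching the docker command above):</p> <pre><code>import os

# must be set before TensorFlow is imported
os.environ['CUDA_DEVICE_ORDER'] = 'PCI_BUS_ID'   # make indices match nvidia-smi numbering
os.environ['CUDA_VISIBLE_DEVICES'] = '1,3'       # assumption: GPUs 1 and 3 are idle

import tensorflow as tf
print(tf.config.list_physical_devices('GPU'))    # shows only the two visible GPUs
</code></pre>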
python|tensorflow|gpu|nvidia-docker
1
1,909,821
72,380,259
Reshape tensors in pytorch?
<p>I'm struggling with the result of a matrix multiplication in pytorch and I don't know how to solve it. In particular, I'm multiplying these two matrices:</p> <pre><code>tensor([[[[209.5000, 222.7500],
          [276.5000, 289.7500]],

         [[208.5000, 221.7500],
          [275.5000, 288.7500]]]], dtype=torch.float64)
</code></pre> <p>and</p> <pre><code>tensor([[[[ 0.,  1.,  2.,  5.,  6.,  7., 10., 11., 12.],
          [ 2.,  3.,  4.,  7.,  8.,  9., 12., 13., 14.],
          [10., 11., 12., 15., 16., 17., 20., 21., 22.],
          [12., 13., 14., 17., 18., 19., 22., 23., 24.]],

         [[25., 26., 27., 30., 31., 32., 35., 36., 37.],
          [27., 28., 29., 32., 33., 34., 37., 38., 39.],
          [35., 36., 37., 40., 41., 42., 45., 46., 47.],
          [37., 38., 39., 42., 43., 44., 47., 48., 49.]],

         [[50., 51., 52., 55., 56., 57., 60., 61., 62.],
          [52., 53., 54., 57., 58., 59., 62., 63., 64.],
          [60., 61., 62., 65., 66., 67., 70., 71., 72.],
          [62., 63., 64., 67., 68., 69., 72., 73., 74.]]]], dtype=torch.float64)
</code></pre> <p>with the following line of code <code>A.view(2,-1) @ B</code>, and then I reshape the result with <code>result.view(2, 3, 3, 3)</code>. The resulting matrix is</p> <pre><code>tensor([[[[ 6687.5000,  7686.0000,  8684.5000],
          [11680.0000, 12678.5000, 13677.0000],
          [16672.5000, 17671.0000, 18669.5000]],

         [[ 6663.5000,  7658.0000,  8652.5000],
          [11636.0000, 12630.5000, 13625.0000],
          [16608.5000, 17603.0000, 18597.5000]],

         [[31650.0000, 32648.5000, 33647.0000],
          [36642.5000, 37641.0000, 38639.5000],
          [41635.0000, 42633.5000, 43632.0000]]],

        [[[31526.0000, 32520.5000, 33515.0000],
          [36498.5000, 37493.0000, 38487.5000],
          [41471.0000, 42465.5000, 43460.0000]],

         [[56612.5000, 57611.0000, 58609.5000],
          [61605.0000, 62603.5000, 63602.0000],
          [66597.5000, 67596.0000, 68594.5000]],

         [[56388.5000, 57383.0000, 58377.5000],
          [61361.0000, 62355.5000, 63350.0000],
          [66333.5000, 67328.0000, 68322.5000]]]], dtype=torch.float64)
</code></pre> <p>Instead I want</p> <pre><code>tensor([[[[ 6687.5000,  7686.0000,  8684.5000],
          [11680.0000, 12678.5000, 13677.0000],
          [16672.5000, 17671.0000, 18669.5000]],

         [[31650.0000, 32648.5000, 33647.0000],
          [36642.5000, 37641.0000, 38639.5000],
          [41635.0000, 42633.5000, 43632.0000]],

         [[56612.5000, 57611.0000, 58609.5000],
          [61605.0000, 62603.5000, 63602.0000],
          [66597.5000, 67596.0000, 68594.5000]]],

        [[[ 6663.5000,  7658.0000,  8652.5000],
          [11636.0000, 12630.5000, 13625.0000],
          [16608.5000, 17603.0000, 18597.5000]],

         [[31526.0000, 32520.5000, 33515.0000],
          [36498.5000, 37493.0000, 38487.5000],
          [41471.0000, 42465.5000, 43460.0000]],

         [[56388.5000, 57383.0000, 58377.5000],
          [61361.0000, 62355.5000, 63350.0000],
          [66333.5000, 67328.0000, 68322.5000]]]], dtype=torch.float64)
</code></pre> <p>Can someone help me? Thanks</p>
<p>This is a common but interesting problem because it involves a combination of <a href="https://pytorch.org/docs/stable/generated/torch.reshape.html" rel="nofollow noreferrer"><code>torch.reshape</code></a>s and <a href="https://pytorch.org/docs/stable/generated/torch.transpose.html" rel="nofollow noreferrer"><code>torch.transpose</code></a> to solve it. More specifically, you will need to</p> <ol> <li>Apply an initial reshape to restructure the tensor and expose the axes you want to swap;</li> <li>Then do so using a transpose operation;</li> <li>Lastly apply a second reshape to get to the desired format.</li> </ol> <p>In your case, you could do:</p> <pre><code>&gt;&gt;&gt; result.reshape(3,2,3,3).transpose(0,1).reshape(2,3,3,3)
tensor([[[[ 6687.5000,  7686.0000,  8684.5000],
          [11680.0000, 12678.5000, 13677.0000],
          [16672.5000, 17671.0000, 18669.5000]],

         [[31650.0000, 32648.5000, 33647.0000],
          [36642.5000, 37641.0000, 38639.5000],
          [41635.0000, 42633.5000, 43632.0000]],

         [[56612.5000, 57611.0000, 58609.5000],
          [61605.0000, 62603.5000, 63602.0000],
          [66597.5000, 67596.0000, 68594.5000]]],

        [[[ 6663.5000,  7658.0000,  8652.5000],
          [11636.0000, 12630.5000, 13625.0000],
          [16608.5000, 17603.0000, 18597.5000]],

         [[31526.0000, 32520.5000, 33515.0000],
          [36498.5000, 37493.0000, 38487.5000],
          [41471.0000, 42465.5000, 43460.0000]],

         [[56388.5000, 57383.0000, 58377.5000],
          [61361.0000, 62355.5000, 63350.0000],
          [66333.5000, 67328.0000, 68322.5000]]]], dtype=torch.float64)
</code></pre> <p>I encourage you to look at the intermediate results to get an idea of how the method works, so you can apply it to other use cases in the future.</p>
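<p>To see the mechanics on something smaller, here is a toy sketch of my own (the labelling is not from the question): six 3x3 blocks numbered 0 to 5 get re-paired by exactly the same reshape-transpose-reshape pattern:</p> <pre><code>import torch

# six 3x3 blocks labelled 0..5, packed the &quot;wrong&quot; way as in the question
result = torch.arange(6).reshape(6, 1, 1).expand(6, 3, 3).reshape(2, 3, 3, 3)

fixed = result.reshape(3, 2, 3, 3).transpose(0, 1).reshape(2, 3, 3, 3)
print(fixed[0, :, 0, 0])  # tensor([0, 2, 4]) -- blocks re-paired across the batch
print(fixed[1, :, 0, 0])  # tensor([1, 3, 5])
</code></pre>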
python|pytorch|tensor
1
1,909,822
65,786,143
How to inherit the Odoo default QWeb reports in .py file?
<p>I want to inherit the Odoo default qweb report &quot;Picking operation&quot; from stock.picking in a Python file. I know how to inherit a default qweb report in XML; please suggest/guide how to inherit a default qweb report in a .py file.</p>
<p>You can use this:</p> <pre><code>return self.env.ref('your_module_name.your_menu_id').report_action(self, data=data)
</code></pre>
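<p>Applied to the picking report specifically, a minimal sketch could look like the following. Note that <code>stock.action_report_picking</code> is my assumption for the XML id of the standard picking report, so verify it against your Odoo version:</p> <pre><code>from odoo import models


class StockPicking(models.Model):
    _inherit = 'stock.picking'

    def print_picking_report(self):
        # 'stock.action_report_picking' is assumed to be the XML id of the
        # default &quot;Picking operations&quot; report; adjust to your Odoo version
        return self.env.ref('stock.action_report_picking').report_action(self)
</code></pre>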
python|xml|report|odoo|qweb
1
1,909,823
65,711,462
How to store user uploaded image from flask server in google storage bucket?
<p>I am trying to find a way to store an image uploaded in the flask server by a user in a google storage bucket. This is my attempt to upload the image. It fails.</p> <pre><code>@app.route(&quot;/upload-image&quot;, methods=[&quot;GET&quot;, &quot;POST&quot;])
def upload_image():
    if request.method == &quot;POST&quot;:
        try:
            if request.files:
                image = request.files[&quot;image&quot;]
                readImg = image.read()
                content = bytes(readImg)
                client = storage.Client().from_service_account_json(os.environ['GOOGLE_APPLICATION_CREDENTIALS'])
                print('1)')
                bucket = storage.Bucket(client, &quot;uploaded-usrimg&quot;)
                print('2)')
                file_blob = bucket.blob(content)
                print('3)')
                return render_template('result.html', request=result.payload[0].display_name)
                # return render_template('homepage.html')
        except Exception as e:
            print('error creating image data')
            print(e)
</code></pre> <p>My blob (image) does not upload to my bucket. I get this error:</p> <pre><code>127.0.0.1 - - [13/Jan/2021 18:40:58] &quot;POST /upload-image HTTP/1.1&quot; 500 -
1)
2)
error creating image data
'utf-8' codec can''t decode byte 0x89 in position 0: invalid start byte
[2021-01-13 18:41:11,663] ERROR in app: Exception on /upload-image [POST]
Traceback (most recent call last):
  File &quot;/Users/me/.pyenv/versions/3.7.3/lib/python3.7/site-packages/flask/app.py&quot;, line 2447, in wsgi_app
    response = self.full_dispatch_request()
  File &quot;/Users/me/.pyenv/versions/3.7.3/lib/python3.7/site-packages/flask/app.py&quot;, line 1953, in full_dispatch_request
    return self.finalize_request(rv)
  File &quot;/Users/me/.pyenv/versions/3.7.3/lib/python3.7/site-packages/flask/app.py&quot;, line 1968, in finalize_request
    response = self.make_response(rv)
  File &quot;/Users/me/.pyenv/versions/3.7.3/lib/python3.7/site-packages/flask/app.py&quot;, line 2098, in make_response
    &quot;The view function did not return a valid response. The&quot;
TypeError: The view function did not return a valid response. The function either returned None or ended without a return statement.
127.0.0.1 - - [13/Jan/2021 18:41:11] &quot;POST /upload-image HTTP/1.1&quot; 500 -
</code></pre> <p>Any idea how to solve this error? Or another method of uploading to a google bucket? Thanks so much.</p>
<p>I believe this error message is due to the way you are handling the image. In your code, <code>readImg = image.read()</code>, you are decoding the image according to UTF-8 rules and encountering a byte sequence that is not allowed in UTF-8 encoding.</p> <p>You need to open the image with <code>b</code> in the <code>open()</code> mode so that the file is read as binary and the contents remain as bytes.</p> <pre class="lang-py prettyprint-override"><code>with open(path, 'rb') as f:
    contents = f.read()
</code></pre> <p>If you were using different file types, <code>byte XXXX in position 0</code> could also mean that the file is encoded differently, so, for example, you could try this or similar:</p> <pre class="lang-py prettyprint-override"><code>with open(path, encoding='utf-16') as f:
    contents = f.read()
</code></pre>
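<p>Beyond the read/encoding issue, note that <code>bucket.blob()</code> expects the blob <em>name</em>, not the file contents, and the posted route never actually uploads the bytes. A minimal sketch of the upload itself (bucket name taken from the question; credentials and error handling omitted):</p> <pre class="lang-py prettyprint-override"><code>from google.cloud import storage

image = request.files['image']

client = storage.Client()
bucket = client.bucket('uploaded-usrimg')

# name the blob after the uploaded file and push the raw bytes
blob = bucket.blob(image.filename)
blob.upload_from_string(image.read(), content_type=image.content_type)
</code></pre>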
python|flask|google-cloud-platform|google-cloud-storage|bucket
0
1,909,824
50,565,514
Find URLs in text and replace them with their domain name
<p>I am working on an NLP project and I want to replace all the URLs in a text with their domain name to simplify my corpora. </p> <p>An example of this could be:</p> <pre><code>Input: Ask questions here https://stackoverflow.com/questions/ask
Output: Ask questions here stackoverflow.com
</code></pre> <p>At this moment I am finding the urls with the following RE:</p> <pre><code>urls = re.findall('https?://(?:[-\w.]|(?:%[\da-fA-F]{2}))+', text)
</code></pre> <p>And then I iterate over them to get the domain name:</p> <pre><code>doms = [re.findall(r'^(?:https?:)?(?:\/\/)?(?:[^@\n]+@)?(?:www\.)?([^:\/\n]+)',url) for url in urls]
</code></pre> <p>And then I simply replace each URL with its dom.</p> <p>This is not an optimal approach and I am wondering if someone has a better solution for this problem!</p>
<p>You can use <code>re.sub</code>:</p> <pre><code>import re

s = 'Ask questions here https://stackoverflow.com/questions/ask, new stuff here https://stackoverflow.com/questions/, Final ask https://stackoverflow.com/questions/50565514/find-urls-in-text-and-replace-them-with-their-domain-name mail server here mail.inbox.com/whatever'

new_s = re.sub('https*://[\w\.]+\.com[\w/\-]+|https*://[\w\.]+\.com|[\w\.]+\.com/[\w/\-]+',
               lambda x: re.findall('(?&lt;=\://)[\w\.]+\.com|[\w\.]+\.com', x.group())[0], s)
</code></pre> <p>Output:</p> <pre><code>'Ask questions here stackoverflow.com, new stuff here stackoverflow.com, Final ask stackoverflow.com mail server here mail.inbox.com'
</code></pre>
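<p>An alternative that avoids hand-rolling the domain regex is to let the standard library parse each URL. A sketch (note it keeps a leading <code>www.</code> if present and only catches URLs with an explicit scheme):</p> <pre><code>import re
from urllib.parse import urlparse

text = 'Ask questions here https://stackoverflow.com/questions/ask'
# replace every scheme-prefixed URL with its network location (the domain)
cleaned = re.sub(r'https?://\S+', lambda m: urlparse(m.group()).netloc, text)
print(cleaned)  # Ask questions here stackoverflow.com
</code></pre>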
python|regex|url
3
1,909,825
50,559,045
How to print line of a file after match an exact string pattern with python?
<p>I have a list </p> <pre><code>list = ['plutino?','res 2:11','Uranus L4','res 9:19','damocloid','cubewano?','plutino']
</code></pre> <p>I want to search every element from the list in a file with the next format and print the line after match </p> <pre><code>1995QY9   | 1995_QY9   | plutino       | 32929 |              | 39.445 | 0.260 | 29.193 | 49.696 |  4.8 |  66 | # 0.400 | 1.21 BR-U  | ?
1997CU29  | 1997_CU29  | cubewano      | 33001 |              | 43.534 | 0.039 | 41.815 | 45.253 |  1.5 | 243 |         | 1.82 RR    |
1998BU48  | 1998_BU48  | Centaur       | 33128 |              | 33.363 | 0.381 | 20.647 | 46.078 | 14.2 | 213 | # 0.052 | 1.59 RR    | ?
1998VG44  | 1998_VG44  | plutino       | 33340 |              | 39.170 | 0.250 | 29.367 | 48.974 |  3.0 | 398 | # 0.028 | 1.51 IR    |
1998SN165 | 1998_SN165 | inner classic | 35671 |              | 37.742 | 0.041 | 36.189 | 39.295 |  4.6 | 393 | # 0.060 | 1.13 BB    |
2000VU2   | 2000_VU2   | unusual       | 37117 | Narcissus    |  6.878 | 0.554 |  3.071 | 10.685 | 13.8 |  11 | # 0.088 |            |
1999HX11  | 1999_HX11  | plutino?      | 38083 | Rhadamanthus | 39.220 | 0.151 | 33.295 | 45.144 | 12.7 | 168 |         | 1.18 BR    |
1999HB12  | 1999_HB12  | res 2:5       | 38084 |              | 56.376 | 0.422 | 32.566 | 80.187 | 13.1 | 176 |         | 1.39 BR-IR |
</code></pre> <p>I am using the next code to do that</p> <pre><code>for i in list:
    with open(&quot;tnolist.txt&quot;) as f:
        for line in f:
            if re.search(i, line):
                print(line)
</code></pre> <p>The code works fine for all elements, except for <strong>plutino</strong>. When the variable <strong>i</strong> is <strong>plutino</strong> the code prints lines for <strong>plutino</strong> and for <strong>plutino?</strong>. </p>
<p>This happens because <strong>plutino</strong> is a substring of <strong>plutino?</strong>, so the regex parser matches the first part of <strong>plutino?</strong> and returns a non-falsey answer. Without a whole lot of additional work, you should be able to fix the problem with <code>re.search(i + r'\s', line)</code>, which requires a whitespace character after the phrase you're searching for. As the file gets longer and more complicated, you might need more such exceptions to make the regex behave as desired.</p> <p><strong>Update:</strong> I also like <a href="http://myregexp.com/" rel="nofollow noreferrer">visual regex editors</a> for reasons like this. They make it easy to see what matches and what doesn't.</p> <p>Another option would be something like <code>i==line.split('|')[2].strip()</code> which extracts the portion of your file you seem to care about. The <code>.strip()</code> method can become inefficient on long lines, but this might fit your use case.</p>
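<p>One more caveat worth adding: several of the search terms (<code>plutino?</code>, <code>res 2:11</code>) contain regex metacharacters, so it is safer to escape them and anchor on what follows. A sketch, assuming the same column layout as the file above:</p> <pre><code>import re

terms = ['plutino?', 'res 2:11', 'plutino']
with open('tnolist.txt') as f:
    for line in f:
        for term in terms:
            # re.escape neutralises '?' etc.; the lookahead requires the term
            # to be followed by optional spaces and the column separator
            if re.search(re.escape(term) + r'(?=\s*\|)', line):
                print(line)
</code></pre>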
python
2
1,909,826
26,789,442
Cancel a task execution in a thread and remove a task from a queue
<p>A Python application I'm developing watches for a folder and uploads certain new or changed files to a server. As a task queue I'm using <a href="https://docs.python.org/2/library/queue.html" rel="nofollow">Queue</a> module and a pool of worker threads.</p> <p>Sometimes during an upload a file changes and the upload needs to be canceled and started all over.</p> <p>I know how to stop thread execution with <a href="https://docs.python.org/2/library/threading.html#event-objects" rel="nofollow">threading.Event</a>, but how do I remove or move around a task in Queue?</p>
<p>The easiest way to do it would be to mark the instance you've loaded into the <code>Queue</code> as cancelled:</p> <pre><code>class Task(object):
    def __init__(self, data):
        self.cancelled = False
        self.data = data

    def cancel(self):
        self.cancelled = True


q = Queue.Queue()
t = Task("some data to put in the queue")
q.put(t)

# Later
t.cancel()
</code></pre> <p>Then in your consuming thread:</p> <pre><code>task = q.get()
if task.cancelled:
    pass  # Skip it
else:
    pass  # handle it.
</code></pre> <p>It is also possible to directly interact with the <code>deque</code> that the <code>Queue</code> uses internally, provided you acquire the internal mutex used to synchronize access to the <code>Queue</code>:</p> <pre><code>&gt;&gt;&gt; import Queue
&gt;&gt;&gt; q = Queue.Queue()
&gt;&gt;&gt; q.put("a")
&gt;&gt;&gt; q.put("b")
&gt;&gt;&gt; q.put("c")
&gt;&gt;&gt; q.put("d")
&gt;&gt;&gt; q.queue[2]
'c'
&gt;&gt;&gt; q.queue[3]
'd'
&gt;&gt;&gt; with q.mutex:  # Always acquire the lock first in actual usage
...     q.queue[3]
...
'd'
</code></pre> <p>While this should work, messing with the internals of the <code>Queue</code> isn't recommended, and could break across Python versions if the implementation of <code>Queue</code> changes. Also, keep in mind that operations other than <code>append</code>/<code>appendleft</code> and <code>pop</code>/<code>popleft</code> on <code>deque</code> objects do not perform as well as they do on <code>list</code> instances; even something as simple as <code>__getitem__</code> is <code>O(n)</code>.</p>
python|multithreading
1
1,909,827
56,444,243
Remove random text from filename based on list
<p>So I have a list of files from glob that are formatted in the following way</p> <pre><code>filename xx xxx moretxt.txt
</code></pre> <p>What I'm trying to do is rename them as follows</p> <pre><code>filename.txt
</code></pre> <p>The first two characters &quot;xx&quot; are one of these:</p> <pre><code>[1B, 2B, 3B, 4B, 5B, 6B, 7B, 8B, 9B, 10B, 11B, 12B, 1A, 2A, 3A, 4A, 5A, 6A, 7A, 8A, 9A, 10A, 11A, 12A]
</code></pre> <p>So how do I remove the &quot;xx xxx moretxt&quot; from the file name and keep the extension?</p> <pre><code>import glob, os

os.chdir("C:\\somepath")
for file in glob.glob("**/*.txt", recursive = True):
    print(file)
</code></pre>
<p>Using <code>str.split</code></p> <p><strong>Ex:</strong></p> <pre><code>filename = "filename xx xxx moretxt.txt"

val = filename.split()
filename = "{}.{}".format(val[0], val[-1].split(".")[-1])
print(filename)
</code></pre> <hr> <p>or using <code>re.match</code></p> <p><strong>Ex:</strong></p> <pre><code>import re

filename = "filename xx xxx moretxt.txt"
filename = re.match(r"(?P&lt;filename&gt;\w+).*\.(?P&lt;ext&gt;.+)", filename)
filename = "{}.{}".format(filename.group('filename'), filename.group('ext'))
print(filename)
</code></pre> <p><strong>Output:</strong></p> <pre><code>filename.txt
</code></pre>
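<p>To actually rename the files on disk, the same idea can be dropped into your glob loop. A sketch, assuming every matched name follows the &quot;name xx xxx moretxt.ext&quot; pattern:</p> <pre><code>import glob
import os

os.chdir('C:\\somepath')
for path in glob.glob('**/*.txt', recursive=True):
    folder, name = os.path.split(path)
    # keep the first whitespace-separated token plus the original extension
    new_name = '{}{}'.format(name.split()[0], os.path.splitext(name)[1])
    os.rename(path, os.path.join(folder, new_name))
</code></pre>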
python
2
1,909,828
45,265,044
How to check a time is between two times in python
<p>For example, if I have </p> <p>test_case1: (9:00 - 16:00) and </p> <p>test_case2: (21:30 - 4:30)</p> <p>that is, the check should work whether or not the range wraps around midnight, i.e. whichever of the two numbers is the bigger one. </p>
<p>You can use pure lexicographical string comparison if you zero-fill your times - then all you need is to determine if the second time is 'smaller' than the first time and for that special case check both days, e.g.:</p> <pre><code>def is_between(time, time_range):
    if time_range[1] &lt; time_range[0]:
        return time &gt;= time_range[0] or time &lt;= time_range[1]
    return time_range[0] &lt;= time &lt;= time_range[1]

print(is_between("11:00", ("09:00", "16:00")))  # True
print(is_between("17:00", ("09:00", "16:00")))  # False
print(is_between("01:15", ("21:30", "04:30")))  # True
</code></pre> <p>This will also work with time tuples (e.g. <code>(9, 0)</code>) instead of strings if that's how you represent your time. It will even work with most time objects.</p>
python
15
1,909,829
64,773,886
find cosine similarity between words
<p>Is it possible to find the similarity between two words? For example:</p> <pre><code>cos_lib = cosine_similarity('kamra', 'cameras')
</code></pre> <p>This gives me an error</p> <pre><code>ValueError: could not convert string to float: 'kamra'
</code></pre> <p>because I haven't converted the words into numerical vectors. How can I do so? I tried this but it wouldn't work either:</p> <pre><code>('kamra').toarray()
</code></pre> <p>My aim is to check the similarity with both values (lists) of my dictionary and return the key with the highest similarity. Is that possible?</p> <pre><code>features = {&quot;CAMERA&quot;: ['camera', 'kamras'], &quot;BATTERY&quot;: ['batterie', 'battery']}
</code></pre> <p>I also tried this but I am not satisfied with the results:</p> <pre><code>print(damerau.distance('dual camera', 'cameras'))
print(damerau.distance('kamra', 'battery'))
</code></pre> <p>since the results are 6 and 5. But the similarity between the first two strings is higher, so the distance should be smaller. That's what I am trying to achieve.</p>
<p>I'd recommend using a pre-trained model from <a href="https://radimrehurek.com/gensim/index.html" rel="nofollow noreferrer">Gensim</a>. You can download a pre-trained model and then get the cosine similarity between two of its vectors.</p> <pre><code>import gensim.downloader as api

# overview of all models in gensim: https://github.com/RaRe-Technologies/gensim-data
model_glove = api.load(&quot;glove-wiki-gigaword-100&quot;)

model_glove.relative_cosine_similarity(&quot;politics&quot;, &quot;vote&quot;)
# output: 0.07345439049627836

model_glove.relative_cosine_similarity(&quot;film&quot;, &quot;camera&quot;)
# output: 0.06281138757741007

model_glove.relative_cosine_similarity(&quot;economy&quot;, &quot;fart&quot;)
# output: -0.01170896437873441
</code></pre> <p>Pretrained models will have a hard time recognising typos though, because they were probably not in the training data. Figuring these out is a separate task from cosine similarity.</p> <pre><code>model_glove.relative_cosine_similarity(&quot;kamra&quot;, &quot;cameras&quot;)
# output: -0.040658474068872255
</code></pre> <p>The following function might be useful though, if you have several words and you want to have the most similar one from the list:</p> <pre><code>model_glove.most_similar_to_given(&quot;camera&quot;, [&quot;kamra&quot;, &quot;movie&quot;, &quot;politics&quot;, &quot;umbrella&quot;, &quot;beach&quot;])
# output: 'movie'
</code></pre>
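<p>For the typo side of the problem specifically, a cheap edit-distance-style option from the standard library is <code>difflib</code>. A sketch of how it could pick the best key from your <code>features</code> dict (the helper name is mine):</p> <pre><code>import difflib

features = {'CAMERA': ['camera', 'kamras'], 'BATTERY': ['batterie', 'battery']}

def best_feature(word):
    # score the word against every variant and keep the closest match per key
    scores = {key: max(difflib.SequenceMatcher(None, word, v).ratio() for v in variants)
              for key, variants in features.items()}
    return max(scores, key=scores.get)

print(best_feature('kamra'))  # CAMERA
</code></pre>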
python|scikit-learn|nlp|sklearn-pandas|cosine-similarity
2
1,909,830
64,834,953
Python - Optimal way to re-assign global variables from function in other module
<p>I have a module which I called entities.py - there are 2 classes within it and 2 global variables, following the pattern below:</p> <pre><code>FIRST_VAR = ...
SECOND_VAR = ...

class FirstClass:
    [...]

class SecondClass:
    [...]
</code></pre> <p>I also have another module (let's call it main.py for now) where I import both classes and the constants as here:</p> <pre><code>from entities import FirstClass, SecondClass, FIRST_VAR, SECOND_VAR
</code></pre> <p>In the same &quot;main.py&quot; module I have another constant: <code>THIRD_VAR = ...</code>, and another class, in which all of the imported names are being used.</p> <p>Now, I have a function which is called only if a certain condition is met (passing a config file path as a CLI argument in my case). As my best bet, I've written it following this pattern:</p> <pre><code>def update_consts_from_config(config: ConfigParser):
    global FIRST_VAR
    global SECOND_VAR
    global THIRD_VAR
    FIRST_VAR = ...
    SECOND_VAR = ...
    THIRD_VAR = ...
</code></pre> <p>This works perfectly fine, although PyCharm indicates two issues, which at least I don't consider accurate.</p> <p><code>from entities import FirstClass, SecondClass, FIRST_VAR, SECOND_VAR</code> - here it warns me that FIRST_VAR and SECOND_VAR are unused imports, but from my understanding and testing they are used and not re-declared elsewhere unless the function <code>update_consts_from_config</code> is invoked.</p> <p>Also, under the <code>update_consts_from_config</code> function:</p> <p><code>global FIRST_VAR</code> - at this and the next line, it says <em>Global variable FIRST_VAR is undefined at the module level</em></p> <p>My question is, should I really care about those warnings (as I think the code is correct and clear), or am I missing something important and should come up with something different here?</p> <p>I know I can do something like:</p> <pre><code>import entities
from entities import FirstClass, SecondClass

FIRST_VAR = entities.FIRST_VAR
SECOND_VAR = entities.SECOND_VAR
</code></pre> <p>and work from there, but this looks like overkill to me; the <code>entities</code> module has only what I have to import in <code>main.py</code>, which also strictly depends on it, therefore I would rather stick to importing those names explicitly than referencing them by <code>entities.</code> just for that reason.</p> <p>What do you think would be best practice here? I would like my code to be clear, unambiguous and somehow optimal.</p>
<p>Import only entities, then refer to variables in its <a href="https://realpython.com/python-namespaces-scope/" rel="nofollow noreferrer">namespace</a> to access/modify them.</p> <p>Note: this pattern, modifying constants in other modules (which then, to purists, aren't so much constants as globals) can be justified. I have tons of cases where I use constants, rather than <a href="https://en.wikipedia.org/wiki/Magic_number_(programming)" rel="nofollow noreferrer">magic variables</a>, as module-level configuration. However, for example for testing, I might reach in and modify these constants. Say to switch a cache expiry from 2 days to 0.1 seconds to test caching. Or, like you propose, to override configuration. Tread carefully, but it can be useful.</p> <p><strong>main.py</strong>:</p> <pre><code>import entities

def update_consts_from_config(FIRST_VAR):
    entities.FIRST_VAR = FIRST_VAR

firstclass = entities.FirstClass()

print(f&quot;{entities.FIRST_VAR=} before override&quot;)
firstclass.debug()
entities.debug()

update_consts_from_config(&quot;override&quot;)

print(f&quot;{entities.FIRST_VAR=} after override&quot;)
firstclass.debug()
entities.debug()
</code></pre> <p><strong>entities.py</strong>:</p> <pre><code>FIRST_VAR = &quot;ori&quot;

class FirstClass:
    def debug(self):
        print(f&quot;entities.py:{FIRST_VAR=}&quot;)

def debug():
    print(f&quot;making sure no closure/locality effects after object instantiation {FIRST_VAR=}&quot;)
</code></pre> <p><strong>$ python main.py</strong></p> <pre><code>entities.FIRST_VAR='ori' before override
entities.py:FIRST_VAR='ori'
making sure no closure/locality effects after object instantiation FIRST_VAR='ori'
entities.FIRST_VAR='override' after override
entities.py:FIRST_VAR='override'
making sure no closure/locality effects after object instantiation FIRST_VAR='override'
</code></pre> <p>Now, if FIRST_VAR wasn't a string, int or another type of immutable, you should I think be able to import it separately and <em>mutate</em> it. Like <code>SECOND_VAR.append(&quot;config override&quot;)</code> in main.py. But <em>assigning</em> to a global in main.py will only affect the main.py <em>binding</em>, so if you want to share actual state between main.py, entities and other modules, everyone, not just main.py, needs to <code>import entities</code> and then access <code>entities.FIRST_VAR</code>.</p> <p>Oh, and if you had:</p> <pre><code>class SecondClass:
    def __init__(self):
        self.FIRST_VAR = FIRST_VAR
</code></pre> <p>then its instance-level value of that immutable string/int would <strong>not</strong> be affected by any overrides done <strong>after</strong> an instance is created. Mutables like lists or dictionaries <strong>would</strong> be affected, because they're all different bindings pointing to the same variable.</p> <p>Last, with regard to those &quot;tricky&quot; namespaces: <code>global</code> in your original code means &quot;don't consider FIRST_VAR as a variable to assign in <code>update_consts_from_config</code>'s local namespace; instead assign it to the <strong>main.py</strong> global, script-level namespace&quot;.</p> <p>It does <strong>not</strong> mean &quot;assign it to some global state magically shared between <strong>entities.py</strong> and <strong>main.py</strong>&quot;. <code>__builtins__</code> <em>might</em> be that beast, but modifying it is considered extremely bad form in Python.</p>
python|import|module|global-variables
3
1,909,831
61,555,205
Convolutional Neural Network seems to be randomly guessing
<p>So I am currently trying to build a race recognition program using a convolutional neural network. I'm inputting 200px by 200px versions of the UTKFaceRegonition <a href="https://drive.google.com/open?id=1vJoaZFzhPKVS0kI2d_ufljWEMPN_7DRa" rel="nofollow noreferrer">dataset</a> (I put my dataset on a google drive if you want to take a look). I'm using 8 different classes (4 races * 2 genders) with keras and tensorflow, each having about 700 images, but I have also tried it with 1000. The problem is when I run the network it gets at best 13.5% accuracy and about 11-12.5% validation accuracy, with a loss around 2.079-2.081; even after 50 epochs or so it won't improve at all. My current hypothesis is that it is randomly guessing / not learning, because 8/100=12.5%, which is about what it is getting, and in other models I have made with 3 classes it was getting about 33%.</p> <p>I noticed the validation accuracy is different on the first and sometimes second epoch, but after that it ends up staying constant. I've increased the pixel resolution, changed the number of layers, the types of layers and the neurons per layer; I've tried optimizers (sgd at the normal lr and at very large and small values (.1 and 10^-6)), and I've tried different loss functions like KLDivergence, but nothing seems to have any effect on it except KLDivergence, which on one run did pretty well (about 16%) but then it flopped again. Some ideas I had are that maybe there is too much noise in the dataset, or maybe it has to do with the number of dense layers, but honestly I don't know why it is not learning.</p> <p>Here is the code to make the tensors</p> <pre><code>import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import os
import cv2
import random
import pickle

WIDTH_SIZE = 200
HEIGHT_SIZE = 200

CATEGORIES = []
for CATEGORY in os.listdir('./TRAINING'):
    CATEGORIES.append(CATEGORY)

DATADIR = &quot;./TRAINING&quot;
training_data = []

def create_training_data():
    for category in CATEGORIES:
        path = os.path.join(DATADIR, category)
        class_num = CATEGORIES.index(category)
        for img in os.listdir(path)[:700]:
            try:
                img_array = cv2.imread(os.path.join(path, img), cv2.IMREAD_COLOR)
                new_array = cv2.resize(img_array, (WIDTH_SIZE, HEIGHT_SIZE))
                training_data.append([new_array, class_num])
            except Exception as error:
                print(error)

create_training_data()

random.shuffle(training_data)

X = []
y = []

for features, label in training_data:
    X.append(features)
    y.append(label)

X = np.array(X).reshape(-1, WIDTH_SIZE, HEIGHT_SIZE, 3)
y = np.array(y)

pickle_out = open(&quot;X.pickle&quot;, &quot;wb&quot;)
pickle.dump(X, pickle_out)

pickle_out = open(&quot;y.pickle&quot;, &quot;wb&quot;)
pickle.dump(y, pickle_out)
</code></pre> <p>Here is my built model</p> <pre><code>from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Activation, Flatten, Conv2D, MaxPooling2D
import pickle

pickle_in = open(&quot;X.pickle&quot;, &quot;rb&quot;)
X = pickle.load(pickle_in)

pickle_in = open(&quot;y.pickle&quot;, &quot;rb&quot;)
y = pickle.load(pickle_in)

X = X/255.0

model = Sequential()
model.add(Conv2D(256, (2,2), activation = 'relu', input_shape = X.shape[1:]))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Conv2D(256, (2,2), activation = 'relu'))
model.add(Conv2D(256, (2,2), activation = 'relu'))
model.add(Conv2D(256, (2,2), activation = 'relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Conv2D(256, (2,2), activation = 'relu'))
model.add(Conv2D(256, (2,2), activation = 'relu'))
model.add(Dropout(0.4))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Conv2D(256, (2,2), activation = 'relu'))
model.add(Conv2D(256, (2,2), activation = 'relu'))
model.add(Dropout(0.4))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Conv2D(256, (2,2), activation = 'relu'))
model.add(Conv2D(256, (2,2), activation = 'relu'))
model.add(Dropout(0.4))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Flatten())
model.add(Dense(8, activation=&quot;softmax&quot;))

model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False), metrics=['accuracy'])

model.fit(X, y, batch_size=16, epochs=100, validation_split=.1)
</code></pre> <p>Here is a log of 10 epochs I ran.</p> <pre><code>5040/5040 [==============================] - 55s 11ms/sample - loss: 2.0803 - accuracy: 0.1226 - val_loss: 2.0796 - val_accuracy: 0.1250
Epoch 2/100
5040/5040 [==============================] - 53s 10ms/sample - loss: 2.0797 - accuracy: 0.1147 - val_loss: 2.0798 - val_accuracy: 0.1161
Epoch 3/100
5040/5040 [==============================] - 53s 10ms/sample - loss: 2.0797 - accuracy: 0.1190 - val_loss: 2.0800 - val_accuracy: 0.1161
Epoch 4/100
5040/5040 [==============================] - 53s 11ms/sample - loss: 2.0797 - accuracy: 0.1173 - val_loss: 2.0799 - val_accuracy: 0.1107
Epoch 5/100
5040/5040 [==============================] - 52s 10ms/sample - loss: 2.0797 - accuracy: 0.1183 - val_loss: 2.0802 - val_accuracy: 0.1107
Epoch 6/100
5040/5040 [==============================] - 52s 10ms/sample - loss: 2.0797 - accuracy: 0.1226 - val_loss: 2.0801 - val_accuracy: 0.1107
Epoch 7/100
5040/5040 [==============================] - 52s 10ms/sample - loss: 2.0797 - accuracy: 0.1238 - val_loss: 2.0803 - val_accuracy: 0.1107
Epoch 8/100
5040/5040 [==============================] - 54s 11ms/sample - loss: 2.0797 - accuracy: 0.1169 - val_loss: 2.0802 - val_accuracy: 0.1107
Epoch 9/100
5040/5040 [==============================] - 52s 10ms/sample - loss: 2.0797 - accuracy: 0.1212 - val_loss: 2.0803 - val_accuracy: 0.1107
Epoch 10/100
5040/5040 [==============================] - 53s 11ms/sample - loss: 2.0797 - accuracy: 0.1177 - val_loss: 2.0802 - val_accuracy: 0.1107
</code></pre> <p>So yeah, any help on why my network seems to be just guessing? Thank you!</p>
<p>The problem lies in the design of your network. </p> <ul> <li>Typically you'd want the first layers to learn high-level features, using a larger kernel with an odd size. Currently you're essentially interpolating neighbouring pixels. Why odd size? Read e.g. <a href="https://datascience.stackexchange.com/questions/23183/why-convolutions-always-use-odd-numbers-as-filter-size">here</a>.</li> <li>The number of filters typically increases from a small number (e.g. 16, 32) to larger values when going deeper into the network. In your network all layers learn the same number of filters. The reasoning is that the deeper you go, the more fine-grained the features you'd like to learn - hence the increase in the number of filters.</li> <li>In your ANN each layer also cuts out valuable information from the image (by default you are using <code>valid</code> padding).</li> </ul> <p>Here's a very basic network that gets me, after 40 seconds and 10 epochs, over 95% training accuracy:</p> <pre><code>import pickle
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten, Conv2D, MaxPooling2D

pickle_in = open("X.pickle","rb")
X = pickle.load(pickle_in)

pickle_in = open("y.pickle","rb")
y = pickle.load(pickle_in)

X = X/255.0

model = Sequential()
model.add(Conv2D(16, (5,5), activation = 'relu', input_shape = X.shape[1:], padding='same'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Conv2D(32, (3,3), activation = 'relu', padding='same'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Conv2D(64, (3,3), activation = 'relu', padding='same'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Flatten())
model.add(Dense(512))
model.add(Dense(8, activation='softmax'))

model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False), metrics=['accuracy'])
</code></pre> <p>Architecture:</p> <pre><code>Model: "sequential_4"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv2d_19 (Conv2D)           (None, 200, 200, 16)      1216
_________________________________________________________________
max_pooling2d_14 (MaxPooling (None, 100, 100, 16)      0
_________________________________________________________________
conv2d_20 (Conv2D)           (None, 100, 100, 32)      4640
_________________________________________________________________
max_pooling2d_15 (MaxPooling (None, 50, 50, 32)        0
_________________________________________________________________
conv2d_21 (Conv2D)           (None, 50, 50, 64)        18496
_________________________________________________________________
max_pooling2d_16 (MaxPooling (None, 25, 25, 64)        0
_________________________________________________________________
flatten_4 (Flatten)          (None, 40000)             0
_________________________________________________________________
dense_7 (Dense)              (None, 512)               20480512
_________________________________________________________________
dense_8 (Dense)              (None, 8)                 4104
=================================================================
Total params: 20,508,968
Trainable params: 20,508,968
Non-trainable params: 0
</code></pre> <p>Training:</p> <pre><code>Train on 5040 samples, validate on 560 samples
Epoch 1/10
5040/5040 [==============================] - 7s 1ms/sample - loss: 2.2725 - accuracy: 0.1897 - val_loss: 1.8939 - val_accuracy: 0.2946
Epoch 2/10
5040/5040 [==============================] - 6s 1ms/sample - loss: 1.7831 - accuracy: 0.3375 - val_loss: 1.8658 - val_accuracy: 0.3179
Epoch 3/10
5040/5040 [==============================] - 6s 1ms/sample - loss: 1.4857 - accuracy: 0.4623 - val_loss: 1.9507 - val_accuracy: 0.3357
Epoch 4/10
5040/5040 [==============================] - 6s 1ms/sample - loss: 1.1294 - accuracy: 0.6028 - val_loss: 2.1745 - val_accuracy: 0.3250
Epoch 5/10
5040/5040 [==============================] - 6s 1ms/sample - loss: 0.8060 - accuracy: 0.7179 - val_loss: 3.1622 - val_accuracy: 0.3000
Epoch 6/10
5040/5040 [==============================] - 6s 1ms/sample - loss: 0.5574 - accuracy: 0.8169 - val_loss: 3.7494 - val_accuracy: 0.2839
Epoch 7/10
5040/5040 [==============================] - 6s 1ms/sample - loss: 0.3756 - accuracy: 0.8813 - val_loss: 4.9125 - val_accuracy: 0.2643
Epoch 8/10
5040/5040 [==============================] - 6s 1ms/sample - loss: 0.3001 - accuracy: 0.9036 - val_loss: 5.6300 - val_accuracy: 0.2821
Epoch 9/10
5040/5040 [==============================] - 6s 1ms/sample - loss: 0.2345 - accuracy: 0.9337 - val_loss: 5.7263 - val_accuracy: 0.2679
Epoch 10/10
5040/5040 [==============================] - 6s 1ms/sample - loss: 0.1549 - accuracy: 0.9581 - val_loss: 7.3682 - val_accuracy: 0.2732
</code></pre> <p>As you can see, the validation score is terrible, but the point was to demonstrate that poor architecture can prevent training altogether.</p>
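<p>To push validation accuracy up as well, the usual next step on top of such a baseline is regularization plus data augmentation. A minimal sketch, assuming the same <code>X</code>/<code>y</code> arrays and <code>model</code> as above (the exact augmentation values are arbitrary choices):</p> <pre><code>from tensorflow.keras.preprocessing.image import ImageDataGenerator

# light augmentation to fight the overfitting visible in the log above
datagen = ImageDataGenerator(horizontal_flip=True,
                             rotation_range=10,
                             width_shift_range=0.1,
                             height_shift_range=0.1,
                             validation_split=0.1)

train_it = datagen.flow(X, y, batch_size=32, subset='training')
val_it = datagen.flow(X, y, batch_size=32, subset='validation')

model.fit(train_it, validation_data=val_it, epochs=10)
</code></pre>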
tensorflow|keras|deep-learning|neural-network|conv-neural-network
4
1,909,832
71,550,037
How do I choose multiple minimum values in a dictionary?
<p>I'm trying to inspect and choose values in a dictionary if they're the same minimum values as the minimum of the whole dictionary.</p> <p>As you can see, my code is deleting duplicate values although they're not the minimum values. How do I correct this error? For example, my code shouldn't delete values (including duplicates) unless there are multiple '7.14'.</p> <pre><code>def tieBreaker(preDictionary):
    while True:
        minValues = min(preDictionary.values())
        minKeys = [k for k in preDictionary if preDictionary[k] == minValues]
        print(minKeys)
        for i in range(0, len(minKeys)):
            for j in range(0, len(minKeys)):
                if minKeys[i] &gt; minKeys[j]:
                    del preDictionary[minKeys[i]]
                    i += 1
                    j += 1
        if len(minKeys) &lt; 2:
            return preDictionary
            break
</code></pre> <p>The current output is {'candidate1': '35.71', 'candidate2': '28.57', 'candidate4': '14.29', 'candidate3': '7.14'}</p> <p>while the input is {'candidate1': '35.71', 'candidate2': '28.57', 'candidate5': '14.29', 'candidate4': '14.29', 'candidate3': '7.14'}</p> <p>candidate5 should not be deleted: although it's a duplicate value, it is not a minimum.</p> <p>Also, minKeys currently is ['candidate5', 'candidate4'] where it should be ['candidate3'].</p>
<p>You do not need a <code>while</code> loop. You can run through each key value pair and construct a new <code>dict</code> without the keys with minimum value.</p> <pre><code>d = {'candidate1': 35.71, 'candidate2': 28.57, 'candidate4': 14.29, 'candidate3': 7.14}

min_value = min(d.values())
d_without_min_value = {k: v for k, v in d.items() if v != min_value}

# output
# {'candidate1': 35.71, 'candidate2': 28.57, 'candidate4': 14.29}
</code></pre> <p><strong>EDIT</strong></p> <p><strong>It seems like you are passing the values as strings instead of <code>float</code>.</strong> Calling <code>min()</code> on a <code>list</code> of <code>str</code> will return the minimum value in <a href="https://en.wikipedia.org/wiki/Lexicographic_order" rel="nofollow noreferrer">lexicographical order</a>. Remove the quotes around the values or convert the values into <code>float</code> before you process the dict:</p> <pre><code>d = {k: float(v) for k, v in d.items()}
</code></pre>
python|dictionary
2
1,909,833
69,409,367
How to create .ts files for Qt Linguist with PySide6?
<p>I have a python project written with PySide2 and now I want to migrate to PySide6. I used Qt Linguist to translate the UI and created the <code>.ts</code> files with the <code>pylupdate5</code> utility from the PyQt5 package (it worked fine for my PySide2 project). Now I plan to get rid of the PySide2 and PyQt5 packages.</p> <p>So, I need to replace <code>pylupdate5</code> with something from the PySide6 package. I assumed <code>lupdate</code> should do the job, but it seems it works only with C++ code. It gives me errors like <code>Unterminated C++ character</code> or <code>Unbalanced opening parenthesis in C++ code</code>, and with <code>lupdate -help</code> I don't see how I could switch it to a python mode (similar to <code>uic -g python</code>).</p> <p>Does anyone know how to create <code>.ts</code> files for Qt Linguist from python source files?</p>
<p>lupdate in Qt 6.1 does not support Python, but in Qt 6.2 that problem no longer exists, so you have 2 options:</p> <ul> <li>Install Qt 6.2 and use its lupdate.</li> <li>Wait for PySide6 6.2.0 to be released (at this moment it is only available for Windows) and use its lupdate.</li> </ul>
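<p>Once you have a Python-aware lupdate, generating the <code>.ts</code> files is a single command-line call; a sketch (the <code>pyside6-lupdate</code> wrapper name and the file names here are assumptions, so adjust them to your installation):</p> <pre><code>pyside6-lupdate main.py dialogs.py -ts translations/app_de.ts
</code></pre>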
python|qt|pyside6
3
1,909,834
69,450,718
asyncio: can a task only start when previous task reach a pre-defined stage?
<p>I am starting with asyncio, which I wish to apply to the following problem:</p> <ul> <li>Data is split in chunks.</li> <li>A chunk is first compressed.</li> <li>Then the compressed chunk is written to the file.</li> <li>A single file is used for all chunks, so I need to process them one by one.</li> </ul> <pre class="lang-py prettyprint-override"><code>with open('my_file', 'w+b') as f:
    for chunk in chunks:
        compress_chunk(chunk)
        f.write(chunk)
</code></pre> <p>From this context, to run this process faster: as soon as the <code>write</code> step of the current iteration starts, could the <code>compress</code> step of the next iteration be triggered as well?</p> <p>Can I do that with <code>asyncio</code>, keeping a similar <code>for</code> loop structure? If yes, could you share some pointers about this?</p> <p>I am guessing another way to run this in parallel is by using <code>ProcessPoolExecutor</code> and splitting the <code>compress</code> phase fully from the <code>write</code> phase. This means compressing all chunks first in different executors.</p> <p>Only when all chunks are compressed would the writing step start. But I would like to investigate the first approach with <code>asyncio</code> first, if it makes sense.</p> <p>Thanks in advance for any help. Bests</p>
<p>You can do this with a producer-consumer model. As long as there is one producer and one consumer, you will have the correct order. For your use-case, that's all you'll benefit from. Also, you should use the <code>aiofiles</code> library; standard file IO will mostly block your main compression/producer thread and you won't see much speedup. Try something like this (imports added and a few name bugs fixed relative to the original sketch; <code>compress_chunk</code> and <code>chunks</code> are assumed to exist):</p> <pre class="lang-py prettyprint-override"><code>import asyncio
import aiofiles

async def produce(queue, chunks):
    for chunk in chunks:
        compress_chunk(chunk)
        await queue.put(chunk)

async def consume(queue):
    async with aiofiles.open('my_file', 'wb') as f:
        while True:
            compressed_chunk = await queue.get()
            await f.write(compressed_chunk)
            queue.task_done()

async def main():
    queue = asyncio.Queue()
    producer = asyncio.create_task(produce(queue, chunks))
    consumer = asyncio.create_task(consume(queue))

    # wait for the producer to finish
    await producer

    # wait for the consumer to finish processing and cancel it
    await queue.join()
    consumer.cancel()

asyncio.run(main())
</code></pre> <p><a href="https://github.com/Tinche/aiofiles" rel="nofollow noreferrer">https://github.com/Tinche/aiofiles</a></p> <p><a href="https://stackoverflow.com/questions/52582685/using-asyncio-queue-for-producer-consumer-flow">Using asyncio.Queue for producer-consumer flow</a></p>
python-asyncio|process-pool
2
1,909,835
57,564,760
Add custom field to ModelSerializer and fill it in post save signal
<p>In my API I have a route to add a resource named Video. I have a post_save signal to this Model where I process this video and generate a string. I want a custom field in my serializer to be able to fill it with the text that was generated, so that I can have this value in my response.</p> <pre class="lang-py prettyprint-override"><code>class VideoSerializer(serializers.ModelSerializer):
    class Meta:
        model = Video
        fields = ('id', 'owner', 'description', 'file')

@receiver(post_save, sender=Video)
def encode_video(sender, instance=None, created=False, **kwargs):
    string_generated = do_stuff()
</code></pre> <p>Right now what I am getting in my response is:</p> <pre><code>{
    "id": 17,
    "owner": "b424bc3c-5792-470f-bac4-bab92e906b92",
    "description": "",
    "file": "https://z.s3.amazonaws.com/videos/sample.mkv"
}
</code></pre> <p>I expect a new key "string" with the value generated by the signal.</p>
<p>In order to append <code>string_generated</code> to your response you need to be able to access that value from your serializer. There are 2 convenient ways to do that:</p> <ol> <li>Add <code>string_generated</code> as a field on your model and expose it in <code>VideoSerializer</code> as a <code>SerializerMethodField</code>. A <code>SerializerMethodField</code> is always read-only, so it will only appear in the response. Finally, delete your post_save signal and override the <code>save()</code> method instead:</li> </ol> <pre class="lang-py prettyprint-override"><code>class VideoSerializer(serializers.ModelSerializer):
    string_generated = serializers.SerializerMethodField(method_name='get_string_generated')

    class Meta:
        model = Video
        fields = ('id', 'owner', 'description', 'file', 'string_generated')

    def get_string_generated(self, obj):
        return obj.string_generated
</code></pre> <pre class="lang-py prettyprint-override"><code># models.py
class Video(models.Model):
    # your fields...

    def save(self, force_insert=False, force_update=False):
        self.string_generated = do_stuff()
        super(Video, self).save(force_insert, force_update)
</code></pre> <ol start="2"> <li>If possible, delete your post-signal. Then, call <code>do_stuff</code> from a <code>SerializerMethodField</code> in your <code>VideoSerializer</code> (note the declared field must also be listed in <code>fields</code>):</li> </ol> <pre class="lang-py prettyprint-override"><code>class VideoSerializer(serializers.ModelSerializer):
    string_generated = serializers.SerializerMethodField()

    class Meta:
        model = Video
        fields = ('id', 'owner', 'description', 'file', 'string_generated')

    def get_string_generated(self, obj):
        return do_stuff()
</code></pre>
python|django|django-rest-framework
1
1,909,836
57,546,250
Unable to send an email by python
<p>Im trying to send a basic email by this script </p> <pre><code>s = smtplib.SMTP(email_user, 587) s.starttls() s.login(email_user, pasword) message = 'Hi There, sending this email from python' s.sendmail(email_user, email_user, message) s.quit() </code></pre> <p>and getting the following error</p> <pre><code>for res in _socket.getaddrinfo(host, port, family, type, proto, flags): socket.gaierror: [Errno 11003] getaddrinfo failed </code></pre>
<p>On your first line:</p> <pre><code>s = smtplib.SMTP(email_user, 587)
</code></pre> <p>the first argument should be the SMTP server's hostname, not your email address, for example <code>'smtp.gmail.com'</code>.</p>
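<p>A corrected sketch, assuming a Gmail account (substitute your provider's SMTP host) and keeping the variable names from the question:</p> <pre><code>import smtplib

s = smtplib.SMTP('smtp.gmail.com', 587)  # SMTP host, not the email address
s.starttls()
s.login(email_user, pasword)
message = 'Hi There, sending this email from python'
s.sendmail(email_user, email_user, message)
s.quit()
</code></pre>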
python-3.x|email
0
1,909,837
57,461,031
How to calculate average values of an array for each class?
<p>I was wondering if there is an efficient way to calculate the average values for each class.</p> <p>For example:</p> <pre class="lang-py prettyprint-override"><code>scores = [1, 2, 3, 4, 5] classes = [0, 0, 1, 1, 1] </code></pre> <p>Expected output is</p> <pre class="lang-py prettyprint-override"><code>output = [[0, 1.5], [1, 4.0]] </code></pre> <p>where output is [[class_indx, avg_value], ...]</p> <p>I can achieve it using the dictionary. But it means I need to convert the array (list in this example) into dict first and then convert back to array when the job is done. It seems like a workaround in this case and I would prefer to operate directly on arrays.</p> <p>I guess someone has invented the wheel but just I haven't dug it out from my search. Are there any approaches to do that efficiently?</p> <p>Thanks.</p>
<p>With the <code>itertools.groupby</code> function:</p> <pre><code>from itertools import groupby

scores = [1, 2, 3, 4, 5]
classes = [0, 0, 1, 1, 1]

res = []
for k, g in groupby(zip(scores, classes), key=lambda x: x[1]):
    group = list(g)
    res.append([k, sum(i[0] for i in group) / len(group)])

print(res)  # [[0, 1.5], [1, 4.0]]
</code></pre> <p>Note that <code>groupby</code> only merges <em>consecutive</em> equal keys, so this works here because equal class labels are adjacent; otherwise sort the pairs by class first.</p> <p>Or with a <code>collections.defaultdict</code> object:</p> <pre><code>from collections import defaultdict

scores = [1, 2, 3, 4, 5]
classes = [0, 0, 1, 1, 1]

d = defaultdict(list)
for sc, cl in zip(scores, classes):
    d[cl].append(sc)

res = [[cl, sum(lst)/len(lst)] for cl, lst in d.items()]
print(res)  # [[0, 1.5], [1, 4.0]]
</code></pre>
python
3
1,909,838
57,709,619
Display graphs in the toplevel window
<p><strong>What I am trying to do :</strong></p> <p>I am trying to display three displays in a loop in the toplevel window rather than the main window. I am getting the error which is mentioned below. So, I haven't been able to run it.</p> <p><strong>Error I am getting :</strong></p> <pre><code>Exception in Tkinter callback Traceback (most recent call last): File "C:\Users\sel\Anaconda3\lib\tkinter\__init__.py", line 1705, in __call__ return self.func(*args) File "&lt;ipython-input-19-df56c3798d6a&gt;", line 91, in on_window show_figure(selected_figure) File "&lt;ipython-input-19-df56c3798d6a&gt;", line 53, in show_figure one_figure = all_figures[number] IndexError: list index out of range </code></pre> <p><strong>Below is my code :</strong></p> <pre><code>import tkinter as tk import matplotlib.pyplot as plt from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg all_figures = [] selected_figure = 0 class MyClass(): def __init__(self): self.sheets = [[1,2,3], [3,1,2], [1,5,1]] self.W = 2 self.L = 5 def plot_sheet(self, data): """plot single figure""" fig, ax = plt.subplots(1) ax.set_xlim([0, self.W]) ax.set_ylim([0, self.L]) ax.plot(data) return fig def generate_all_figures(self): """create all figures and keep them on list""" global all_figures for data in self.sheets: fig = self.plot_sheet(data) all_figures.append(fig) dataPlot = None def on_window(): def show_figure(number): global dataPlot # remove old canvas if dataPlot is not None: # at start there is no canvas to destroy dataPlot.get_tk_widget().destroy() # get figure from list one_figure = all_figures[number] # display canvas with figure dataPlot = FigureCanvasTkAgg(one_figure, master=window) dataPlot.draw() dataPlot.get_tk_widget().grid(row=0, column=0) def on_prev(): global selected_figure # get number of previous figure selected_figure -= 1 if selected_figure &lt; 0: selected_figure = len(all_figures)-1 show_figure(selected_figure) def on_next(): global selected_figure # get number of next figure selected_figure += 1 if selected_figure &gt; len(all_figures)-1: selected_figure = 0 show_figure(selected_figure) top = tk.Toplevel() top.wm_geometry("794x370") top.title('Optimized Map') selected_figure = 0 dataPlot = None # default value for `show_figure` show_figure(selected_figure) frame = tk.Frame(top) frame.grid(row=1, column=0) b1 = tk.Button(frame, text="&lt;&lt;", command=on_prev) b1.grid(row=0, column=0) b2 = tk.Button(frame, text="&gt;&gt;", command=on_next) b2.grid(row=0, column=1) window = tk.Tk() b1 = tk.Button(window, text="Next", command=on_window) b1.grid(row=0, column=0) window.mainloop() </code></pre>
<p>You have created a class with a <code>generate_all_figures</code> method, but you haven't created a <code>MyClass</code> object and run <code>generate_all_figures()</code>, therefore your <code>all_figures</code> list is empty. This is why you get an <code>IndexError</code>.</p> <p>You need to create a <code>MyClass</code> object and run <code>generate_all_figures()</code> to populate your <code>all_figures</code> list before executing <code>on_window()</code>:</p> <pre><code>window = tk.Tk()

mc = MyClass()
mc.generate_all_figures()

b1 = tk.Button(window, text="Next", command=on_window)
b1.grid(row=0, column=0)

window.mainloop()
</code></pre> <p>By the way, you don't need <code>global all_figures</code> in <code>generate_all_figures</code> (see <a href="https://stackoverflow.com/questions/4630543/defining-lists-as-global-variables-in-python">Defining lists as global variables in Python</a>).</p>
python|tkinter
1
1,909,839
42,245,170
How to enable timing magics for every cell in Jupyter notebook?
<p>The <code>%%time</code> and <code>%%timeit</code> magics enable timing of a single cell in a Jupyter or iPython notebook.</p> <p>Is there similar functionality to turn timing on and off for every cell in a Jupyter notebook?</p> <p><a href="https://stackoverflow.com/questions/30490801/how-do-i-set-up-default-cell-magics-for-every-ipython-notebook-cell">This question</a> is related but does not have an answer to the more general question posed of enabling a given magic automatically in every cell.</p>
<p>A hacky way to do this is via a custom.js file (usually placed in <code>~/.jupyter/custom/custom.js</code>)</p> <p>The example of how to create buttons for the toolbar is located <a href="https://github.com/jupyter/notebook/blob/4.0.x/notebook/static/custom/custom.js" rel="nofollow noreferrer">here</a> and it's what I based this answer off of. It merely adds the string form of the magics you want to all cells when pressing the enable button, and the disable button uses <code>str.replace</code> to "turn" it off. </p> <pre><code>define([ 'base/js/namespace', 'base/js/events' ], function(Jupyter, events) { events.on('app_initialized.NotebookApp', function(){ Jupyter.toolbar.add_buttons_group([ { 'label' : 'enable timing for all cells', 'icon' : 'fa-clock-o', // select your icon from http://fortawesome.github.io/Font-Awesome/icons 'callback': function () { var cells = Jupyter.notebook.get_cells(); cells.forEach(function(cell) { var prev_text = cell.get_text(); if(prev_text.indexOf('%%time\n%%timeit\n') === -1) { var text = '%%time\n%%timeit\n' + prev_text; cell.set_text(text); } }); } }, { 'label' : 'disable timing for all cells', 'icon' : 'fa-stop-circle-o', // select your icon from http://fortawesome.github.io/Font-Awesome/icons 'callback': function () { var cells = Jupyter.notebook.get_cells(); cells.forEach(function(cell) { var prev_text = cell.get_text(); var text = prev_text.replace('%%time\n%%timeit\n',''); cell.set_text(text); }); } } // add more button here if needed. ]); }); }); </code></pre>
python|ipython-notebook|jupyter-notebook
2
1,909,840
53,996,947
How to call a function with pymunk's collision handler?
<p>I am trying to implement an AI to solve a simple task: move from A to B, while avoiding obstacles. </p> <p>So far I used <code>pymunk</code> and <code>pygame</code> to build the enviroment and this works quite fine. But now I am facing the next step: to get rewards for my reinforcement learning algorithm I need to detect the collision between the player and, for example, a wall. Or simply to restart the enviroment when a wall/obstacle gets hit. </p> <p>Setting the <code>c_handler.begin</code> function equals the <code>Game.restart</code> fuctions helped me to print out that the player actually hit something. </p> <p>But except from <code>print()</code> I can't access any other function concerning the player position and I don't really know what to do next. </p> <p>So how can i use the pymunk collision to restart the environment? Or are there other ways for resetting or even other libraries to build a proper enviroment?</p> <pre><code>def restart(self, arbiter, data): car.body.position = 50, 50 return True def main(self): [...] c_handler = space.add_collision_handler(1,2) c_handler.begin = Game.restart [...] </code></pre>
<p>In general it seems like it would be useful for you to read up a bit on how classes work in python, particularly how class instance variables work.</p> <p>Anyway, if you already know you want to manipulate the car variable, you can store it on the class instance itself. Then, since you have <code>self</code> available in the restart method, you can do whatever you need there.</p> <p>Or, the other option is to find the body that you want to change from the arbiter that is passed into the callback. (Note that pymunk's <code>begin</code> callback receives its arguments in the order <code>(arbiter, space, data)</code> and should return a bool.)</p> <p>option 1:</p> <pre><code>class MyClass:
    def restart(self, arbiter, space, data):
        self.car.body.position = 50,50
        return True

    def main(self):
        [...]
        self.car = car
        c_handler = space.add_collision_handler(1,2)
        c_handler.begin = self.restart
        [...]
</code></pre> <p>option 2:</p> <pre><code>def restart(arbiter, space, data):
    arbiter.shapes[0].body.position = 50,50
    # or maybe its the other shape, in that case you should do this instead
    # arbiter.shapes[1].body.position = 50,50
    return True
</code></pre>
python|pygame|artificial-intelligence|pymunk
0
1,909,841
65,402,126
The application of self-attention layer raised index error
<p>So I am doing a classification machine learning with the input of (batch, step, features).</p> <p>In order to improve the accuracy of this model, I intended to apply a self-attention layer to it.</p> <p>I am unfamiliar with how to use it for my case since most examples online are concerned with embedding NLP models.</p> <pre><code>def opt_select(optimizer): if optimizer == 'Adam': adamopt = tf.keras.optimizers.Adam(lr=learning_rate, beta_1=0.9, beta_2=0.999, epsilon=1e-8) return adamopt elif optimizer == 'RMS': RMSopt = tf.keras.optimizers.RMSprop(lr=learning_rate, rho=0.9, epsilon=1e-6) return RMSopt else: print('undefined optimizer') def LSTM_attention_model(X_train, y_train, X_test, y_test, num_classes, loss,batch_size=68, units=128, learning_rate=0.005,epochs=20, dropout=0.2, recurrent_dropout=0.2,optimizer='Adam'): class myCallback(tf.keras.callbacks.Callback): def on_epoch_end(self, epoch, logs={}): if (logs.get('acc') &gt; 0.90): print(&quot;\nReached 90% accuracy so cancelling training!&quot;) self.model.stop_training = True callbacks = myCallback() model = tf.keras.models.Sequential() model.add(Masking(mask_value=0.0, input_shape=(X_train.shape[1], X_train.shape[2]))) model.add(Bidirectional(LSTM(units, dropout=dropout, recurrent_dropout=recurrent_dropout))) model.add(SeqSelfAttention(attention_activation='sigmoid')) model.add(Dense(num_classes, activation='softmax')) opt = opt_select(optimizer) model.compile(loss=loss, optimizer=opt, metrics=['accuracy']) history = model.fit(X_train, y_train, batch_size=batch_size, epochs=epochs, validation_data=(X_test, y_test), verbose=1, callbacks=[callbacks]) score, acc = model.evaluate(X_test, y_test, batch_size=batch_size) yhat = model.predict(X_test) return history, that </code></pre> <p>This led to <code>IndexError: list index out of range</code></p> <p>What is the correct way to apply this layer to my model?</p> <hr /> <p>As requested, one may use the following codes to simulate a set of the dataset.</p> <pre><code>import tensorflow as tf from tensorflow.keras.layers import Dense, Dropout,Bidirectional,Masking,LSTM from keras_self_attention import SeqSelfAttention X_train = np.random.rand(700, 50,34) y_train = np.random.choice([0, 1], 700) X_test = np.random.rand(100, 50, 34) y_test = np.random.choice([0, 1], 100) batch_size= 217 epochs = 600 dropout = 0.6 Rdropout = 0.7 learning_rate = 0.00001 optimizer = 'RMS' loss = 'categorical_crossentropy' num_classes = y_train.shape[1] LSTM_attention_his,yhat = LSTM_attention_model(X_train,y_train,X_test,y_test,loss =loss,num_classes=num_classes,batch_size=batch_size,units=32,learning_rate=learning_rate,epochs=epochs,dropout = 0.5,recurrent_dropout=Rdropout,optimizer=optimizer) </code></pre>
<p>Here is how I would rewrite the code -</p> <pre><code>import tensorflow as tf
from tensorflow.keras.layers import Dense, Dropout, Bidirectional, Masking, LSTM, Reshape
from keras_self_attention import SeqSelfAttention
import numpy as np

def opt_select(optimizer):
    if optimizer == 'Adam':
        adamopt = tf.keras.optimizers.Adam(lr=learning_rate, beta_1=0.9, beta_2=0.999, epsilon=1e-8)
        return adamopt

    elif optimizer == 'RMS':
        RMSopt = tf.keras.optimizers.RMSprop(lr=learning_rate, rho=0.9, epsilon=1e-6)
        return RMSopt

    else:
        print('undefined optimizer')

def LSTM_attention_model(X_train, y_train, X_test, y_test, num_classes, loss, batch_size=68, units=128,
                         learning_rate=0.005, epochs=20, dropout=0.2, recurrent_dropout=0.2, optimizer='Adam'):
    class myCallback(tf.keras.callbacks.Callback):
        def on_epoch_end(self, epoch, logs={}):
            if (logs.get('accuracy') &gt; 0.90):
                print(&quot;\nReached 90% accuracy so cancelling training!&quot;)
                self.model.stop_training = True

    callbacks = myCallback()

    model = tf.keras.models.Sequential()
    model.add(Masking(mask_value=0.0, input_shape=(X_train.shape[1], X_train.shape[2])))
    model.add(Bidirectional(LSTM(units, dropout=dropout, recurrent_dropout=recurrent_dropout, return_sequences=True)))
    model.add(SeqSelfAttention(attention_activation='sigmoid'))
    model.add(Reshape((-1, model.output.shape[1]*model.output.shape[2])))
    model.add(Dense(num_classes, activation='softmax'))

    opt = opt_select(optimizer)
    model.compile(loss=loss,
                  optimizer=opt,
                  metrics=['accuracy'])

    history = model.fit(X_train, y_train,
                        batch_size=batch_size,
                        epochs=epochs,
                        validation_data=(X_test, y_test),
                        verbose=1,
                        callbacks=[callbacks])

    score, acc = model.evaluate(X_test, y_test,
                                batch_size=batch_size)

    yhat = model.predict(X_test)

    return history, yhat

X_train = np.random.rand(700, 50, 34)
y_train = np.random.choice([0, 1], (700, 1))
X_test = np.random.rand(100, 50, 34)
y_test = np.random.choice([0, 1], (100, 1))

batch_size = 217
epochs = 600
dropout = 0.6
Rdropout = 0.7
learning_rate = 0.00001
optimizer = 'RMS'
loss = 'categorical_crossentropy'
num_classes = y_train.shape[1]

LSTM_attention_his, yhat = LSTM_attention_model(
    X_train, y_train, X_test, y_test,
    loss=loss, num_classes=num_classes, batch_size=batch_size, units=32,
    learning_rate=learning_rate, epochs=epochs, dropout=0.5, recurrent_dropout=Rdropout, optimizer=optimizer
)
</code></pre> <p>These are the changes I had to make to get this to start training -</p> <ul> <li>The original issue was caused by the LSTM layer outputting the wrong dimensions. The <code>SeqSelfAttention</code> layer needs a 3D input (one dimension corresponding to the sequence of the data), which was missing from the output of the LSTM layer. As mentioned by @today in the comments, this can be solved by adding <code>return_sequences=True</code> to the LSTM layer.</li> <li>But even with that modification, the code still gives an error when trying to compute the cost function. The issue is that the output of the self-attention layer is <code>(None, 50, 64)</code>; when this is passed directly into the <code>Dense</code> layer, the final output of the network becomes <code>(None, 50, 1)</code>. This doesn't make sense for what we are trying to do, because the final output should just contain a single label for each datapoint (it should have the shape <code>(None, 1)</code>). The culprit is the output from the self-attention layer, which is 3 dimensional (each data point has a <code>(50, 64)</code> feature matrix). 
This needs to be reshaped into a single-dimensional feature vector for the computation to make sense. So I added a reshape layer <code>model.add(Reshape((-1, model.output.shape[1]*model.output.shape[2])))</code> between the attention layer and the Dense layer.</li> <li>In addition, the <code>myCallback</code> class is testing if <code>logs.get('acc')</code> is &gt; 0.9 but I think it should be <code>logs.get('accuracy')</code>.</li> </ul> <p>To answer OP's question in the comments on what kind of column should be added: in this case, it was just a matter of extracting the full sequential data from the LSTM layer. Without the <code>return_sequences</code> flag, the output from the LSTM layer is <code>(None, 64)</code>. This is simply the final features of the LSTM without the intermediate sequential data.</p>
python|tensorflow|machine-learning|keras|deep-learning
-1
1,909,842
22,446,857
Python numpy efficiently combining arrays
<p>My question might sound biology heavy, but I am confident anyone could answer this without any knowledge of biology and I could really use some help.</p> <p>Suppose you have a function, create_offspring(mutations, genome1, genome2), that takes a list of mutations, which are in the form of a numpy 2d arrays with 5 rows and 10 columns as such ( each set of 5 vals is a mutation):</p> <pre><code> [ [4, 3, 6 , 7, 8], [5, 2, 6 , 7, 8] ...] </code></pre> <p>The function also takes two genomes which are in the form of numpy 2d arrays with 5 rows and 10 columns. The value at each position in the genomes is either 5 zeros at places where a mutation hasn't occurred, or filled with the values corresponding to the mutation list for spots where a mutation has occurred. The follow is an example of a genome that has yet to have a mutation at pos 0 and has a mutation at position 1 already.</p> <pre><code> [ [0, 0, 0 , 0, 0], [5, 2, 5 , 7, 8] ...] </code></pre> <p>What I am trying to accomplish is to efficiently ( I have a current way that works but it is WAY to slow) generate a child genome from my two genomes that is a numpy array and a random combination of the two parent genomes(AKA the numpy arrays). By random combination, I mean that each position in the child array has a 50% chance of either being the 5 values at position X from parent 1 genome or parent 2. For example if parent 1 is</p> <pre><code>[0,0,0,0,0], [5, 2, 6 , 7, 8] ...] </code></pre> <p>and parent 2 is</p> <pre><code>[ [4, 3, 6 , 7, 8], [0, 0, 0 , 0, 0] ...] </code></pre> <p>the child genome should have a 50% chance of getting all zeros at position 1 and a 50% chance of getting <code>[4, 3, 6 , 7, 8]</code> etc..</p> <p>Additionally, there needs to be a .01% chance that the child genome gets whatever the corresponding mutation is from the mutation list passed in at the beginning.</p> <p>I have a current method for solving this, but it takes far too long: </p> <pre><code> def create_offspring(mutations, genome_1, genome_2 ): ##creates an empty genome child_genome = numpy.array([[0]*5] * 10, dtype=np.float) for val in range(10): random = rand() if random &lt; mutation_rate: child_genome[val] = mutation_list[val] elif random &gt; .5: child_genome[val] = genome1[val] else: child_genome[val] = genome2[val] return child_genome </code></pre>
<p>Thanks for the clarification in the comments. Things work differently with 10000 than with 10 :)</p> <p>First, there's a faster way to make an empty (or full) array:</p> <pre><code>np.zeros(shape=(rows, cols), dtype=np.float)
</code></pre> <p>Then, try generating all the random numbers up front, checking them simultaneously, and working from there:</p> <pre><code>randoms = np.random.rand(len(genome))
below_half = randoms &lt; .5
for i, (rand, use_first_parent) in enumerate(zip(randoms, below_half)):
    your_code
</code></pre> <p>This will at least speed up the random number generation. I'm still thinking on the rest.</p>
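<p>For the full 50/50 split plus mutation, a fully vectorized sketch of the same logic (assuming <code>mutations</code>, <code>genome1</code> and <code>genome2</code> are the (10, 5) arrays from the question and <code>mutation_rate</code> is defined as before):</p> <pre><code>import numpy as np

def create_offspring(mutations, genome1, genome2, mutation_rate):
    randoms = np.random.rand(len(genome1))          # one draw per position
    # take each row from genome1 where rand &gt; .5, otherwise from genome2
    child = np.where((randoms &gt; .5)[:, None], genome1, genome2)
    # with probability mutation_rate, overwrite the row with the mutation
    mutated = randoms &lt; mutation_rate
    child[mutated] = mutations[mutated]
    return child
</code></pre>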
python|arrays|performance|genetics
1
1,909,843
28,613,870
Seemingly nonsensical runtime increases when switching from pure C to C with Numpy objects
<h1>Introduction</h1> <p>I am trying to realise some number crunching on a one-dimensional array in C (herafter: <em>standalone)</em> and as a Numpy module written in C (herafter: <em>module)</em> simultaneously. Since all I need to do with the array is to compare selected elements, I could use an abstraction layer for the array access and thus I can use the same code for the <em>standalone</em> or the <em>module.</em></p> <p>Now, I expect the <em>module</em> to be somewhat slower, since comparing elements of a Numpy array of unknown type using <code>descr-&gt;f-&gt;compare</code> requires extra function calls and similar and thus is more costly than the analogous operation for a C array of known type. However, when looking at the output of a profiler (Valgrind), I found runtime increases in the <em>module</em> for lines which have no obvious connection to the Python methods. I want to understand and avoid this, if possible.</p> <h1>Minimal example</h1> <p>Unfortunately, my minimal example is quite lengthy. Note that the Python variant is no real module anymore due to example reduction.</p> <pre class="lang-c prettyprint-override"><code># include &lt;stdlib.h&gt; # include &lt;stdio.h&gt; # ifdef PYTHON # include &lt;Python.h&gt; # include &lt;numpy/arrayobject.h&gt; // Define Array creation and access routines for Python. typedef PyArrayObject * Array; static inline char diff_sign (Array T, int i, int j) { return T-&gt;descr-&gt;f-&gt;compare ( PyArray_GETPTR1(T,i), PyArray_GETPTR1(T,j), T ); } Array create_array (int n) { npy_intp dims[1] = {n}; Array T = (Array) PyArray_SimpleNew (1, dims, NPY_DOUBLE); for (int i=0; i&lt;n; i++) {* (double *) PyArray_GETPTR1(T,i) = i;} // Line A return T; } #endif # ifdef STANDALONE // Define Array creation and access routines for standalone C. typedef double * Array; static inline char diff_sign (Array T, int i, int j) { return (T[i]&gt;T[j]) - (T[i]&lt;T[j]); } Array create_array (int n) { Array T = malloc (n*sizeof(double)); for (int i=0; i&lt;n; i++) {T[i] = i;} // Line B return T; } # endif int main() { # ifdef PYTHON Py_Initialize(); import_array(); # endif // avoids that the compiler knows the values of certain variables at runtime. int volatile blur = 0; int n = 1000; Array T = create_array (n); # ifdef PYTHON for (int i=0; i&lt;n; i++) {* (double *) PyArray_GETPTR1(T,i) = i;} // Line C # endif # ifdef STANDALONE for (int i=0; i&lt;n; i++) {T[i] = i;} // Line D #endif int a = 333 + blur; int b = 444 + blur; int c = 555 + blur; int e = 666 + blur; int f = 777 + blur; int g = 1 + blur; int h = n + blur; // Line E standa. 
module for (int i=h; i&gt;0; i--) // 4000 8998 { int d = c; do c = (c+a)%b; // 4000 5000 while (c&gt;n-1); // 2000 2000 if (c==e) f*=2; // 3000 3000 if ( diff_sign(T,c,d)==g ) f++; // 5000 5000 } printf("%i\n", f); } </code></pre> <p>I compiled this with the following two commands:</p> <pre class="lang-none prettyprint-override"><code>gcc source.c -o standalone -O3 -g -std=c11 -D STANDALONE gcc source.c -o module -O3 -g -std=c11 -D PYTHON -lpython2.7 -I/usr/include/python2.7 </code></pre> <p>Changing to <code>-O2</code> does not change the following; changing the compiler to Clang does change the minimal example but not the phenomenon with my actual code.</p> <h1>Profiling results</h1> <p>The interesting things happen after Line E and I gave the total runtime spent in those lines as reported by the profiler as comments in the source code: Despite having no direct relation to whether I compile as <em>standalone</em> or <em>module,</em> the runtimes for these lines strongly differ. In particular, in my actual application, the additional time spent in those lines in the <em>module</em> makes up for one fourth of the <em>module’s</em> total runtime.</p> <p>What’s even more weird is that if I remove line C (and D) – which is redundant in the example, as the array’s values are already set (to the same values) in line A (and B) –, the runtime spent in the loop header is reduced from 8998 to 6002 (the other reported runtimes do not change). The same thing happens, if I change <code>int n = 1000;</code> to <code>int n = 1000 + blur;</code>, i.e., if I make <code>n</code> unknown compile time.</p> <p>This does not make much sense to me and since it has a relevant impact on the runtime, I would like to avoid it.</p> <h1>Questions</h1> <ul> <li>Where do these runtime increases come from. I am aware that compilers are not perfect and sometimes work in seemingly mysterious ways, but I would like to understand.</li> <li>How can I avoid these runtime increases?</li> </ul>
<p>You have to be very careful when interpreting callgrind profiles. Callgrind gives you the instruction fetch count, i.e. the number of instructions. This is not directly connected to actual performance on modern cpus, as instructions can have different latencies and throughputs and can be reordered by suitably capable cpus.</p> <p>Also, you are here matching the instruction fetches to the lines the debug symbols associate them with. Those do not correspond exactly; e.g. the module code associates a register copy and a nop instruction (which are essentially free in terms of runtime compared to the following division) with the loop line in the source code, while the standalone binary associates them with the line above. You can see that in the machine code tab when using <code>--dump-instr=yes</code> in kcachegrind. This will have something to do with different registers being available in the two variants due to the different number of function calls that imply spilling stuff onto the stack.</p> <p>Let's look at the modulo loops to see if there is a significant runtime difference:</p> <p>module:</p> <pre><code>  400b58:       42 8d 04 3b             lea    (%rbx,%r15,1),%eax
  400b5c:       99                      cltd
  400b5d:       41 f7 fe                idiv   %r14d
  400b60:       81 fa e7 03 00 00       cmp    $0x3e7,%edx
  400b66:       89 d3                   mov    %edx,%ebx
  400b68:       7f ee                   jg     400b58 &lt;main+0x1b8&gt;
</code></pre> <p>standalone:</p> <pre><code>  4005f0:       8d 04 32                lea    (%rdx,%rsi,1),%eax
  4005f3:       99                      cltd
  4005f4:       f7 f9                   idiv   %ecx
  4005f6:       81 fa e7 03 00 00       cmp    $0x3e7,%edx
  4005fc:       7f f2                   jg     4005f0 &lt;main+0x140&gt;
</code></pre> <p>The difference is one register-to-register copy, <code>mov %edx,%ebx</code> (likely again caused by different register pressure due to earlier function calls). This is one of the cheapest operations available in a cpu, probably around 1-2 cycles with good throughput, so it should have no measurable effect on the actual wall time. The <code>idiv</code> instruction is the expensive part; it should be around 20 cycles with poor throughput. So the instruction fetch count here is grossly misleading.</p> <p>A better tool for such detailed profiling is a sampling profiler like <code>perf record/report</code>. When you run long enough you will be able to single out the instructions that are costing a lot of time, though even then the highest sample counts will not land exactly on the slow instructions, as the cpu may execute later independent instructions in parallel with the slow ones.</p>
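<p>For example, a minimal sampling run (assuming <code>perf</code> is installed and the binaries were built with <code>-g</code> as above):</p> <pre><code>perf record ./standalone
perf report   # interactive view of where the samples landed
</code></pre>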
c|performance|numpy|python-c-api
2
1,909,844
14,588,124
error while using FREAK
<p>I'm trying to create Descriptor extractor using FREAK. but at the following line: <code>freakExtractor = cv2.DescriptorExtractor_create('FREAK')</code></p> <p>I get an error saying: <code>freakExtractor = cv2.DescriptorExtractor_create('FREAK') AttributeError: 'module' object has no attribute 'DescriptorExtractor_create'</code></p> <p>can someone tell me what is the exact problem and why i'm getting this error? </p> <p>I'm using ubuntu 12.10 with opencv 2.4.3 and python 2.7.</p>
<p><code>cv2.DescriptorExtractor_create('FREAK')</code> is not exposed in the Python bindings of OpenCV 2.4.3. Either upgrade to a newer OpenCV release where it is available from Python, or write that part in C++, where the FREAK extractor does exist in your version.</p>
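<p>With a build where the factory is exposed, the original line should work as-is; a minimal sketch (assuming <code>img</code> is a loaded grayscale image and the <code>'FAST'</code> detector is available in your build):</p> <pre><code>import cv2

detector = cv2.FeatureDetector_create('FAST')
keypoints = detector.detect(img)

freakExtractor = cv2.DescriptorExtractor_create('FREAK')
keypoints, descriptors = freakExtractor.compute(img, keypoints)
</code></pre>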
python|opencv|python-2.7|freak
-2
1,909,845
14,857,719
Django - User creation with custom user model results in internal error
<p>Ok, I know this is a silly question but I am blocked and I can't figure out what to do.</p> <p>I have searched on google and stackoverflow but did not found any answer : I tried this :</p> <ul> <li><a href="https://stackoverflow.com/questions/2886987/adding-custom-fields-to-users-in-django">Adding custom fields to users in django</a></li> <li><a href="https://stackoverflow.com/questions/11488974/django-create-user-profile-on-user-creation">Django - Create user profile on user creation</a></li> <li><a href="https://docs.djangoproject.com/en/dev/topics/auth/#storing-additional-information-about-users" rel="nofollow noreferrer">https://docs.djangoproject.com/en/dev/topics/auth/#storing-additional-information-about-users</a></li> </ul> <p>My model is the following :</p> <pre><code>class UserProfile(models.Model): user = models.OneToOneField(User) quota = models.IntegerField(null = True) def create_user_profile(sender, instance, created, **kwargs): if created: UserProfile.objects.create(user=instance) post_save.connect(create_user_profile, sender=User) </code></pre> <p>And my view for user registration is the following : </p> <pre><code>def register(request): if request.method == 'POST': # If the form has been submitted... form = RegistrationForm(request.POST) # A form bound to the POST data if form.is_valid(): # All validation rules pass # Process the data in form.cleaned_data cd = form.cleaned_data #Then we create the user user = User.objects.create_user(cd['username'],cd["email"],cd["password1"]) user.get_profil().quota = 20 user.save() return HttpResponseRedirect('') else: form = RegistrationForm() # An unbound form return render(request, 'registration_form.html', {'form': form,}) </code></pre> <p>The line that launches an InternalError is :</p> <pre><code>user = User.objects.create_user(cd['username'],cd["email"],cd["password1"]) </code></pre> <p>And the error is :</p> <pre><code>InternalError at /register/ current transaction is aborted, commands ignored until end of transaction block </code></pre> <p>Thank you for your help</p>
<pre><code>user = User.objects.create_user(username=form.cleaned_data['username'], password=form.cleaned_data['password'], email=form.cleaned_data['email']) user.is_active = True user.save() </code></pre>
python|django|django-models|django-users|django-managers
3
1,909,846
41,425,673
`OSError: [Errno 9] Bad file descriptor` with socket wrapper on Windows
<p>I was writing a wrapper class for sockets so I could use it as a file-like object for piping into the <code>stdin</code> and <code>stdout</code> of a process created with <code>subprocess.Popen()</code>.</p> <pre><code>def do_task(): global s #The socket class sockIO(): def __init__(self, s):self.s=s def write(self, m): self.s.send(m) def read(self, n=None): return self.s.read() if n is None else self.s.read(n) def fileno(self): return self.s.fileno() #stdio=s.makefile('rw') stdio=sockIO(s) cmd = subprocess.Popen('cmd', shell=True, stdout=stdio, stderr=stdio, stdin=stdio) </code></pre> <p>I didn't use <code>socket.makefile()</code> as it gives a <code>io.UnsupportedOperation: fileno</code> error, but with my present code I'm getting the following error on Windows (works fine on Linux):</p> <pre><code>Traceback (most recent call last): File "C:\Users\admin\Desktop\Projects\Python3\client.py", line 65, in &lt;module&gt; main() File "C:\Users\admin\Desktop\Projects\Python3\client.py", line 62, in main receive_commands2() File "C:\Users\admin\Desktop\Projects\Python3\client.py", line 57, in receive_commands2 stdin=stdio) File "C:\Python3\lib\subprocess.py", line 914, in __init__ errread, errwrite) = self._get_handles(stdin, stdout, stderr) File "C:\Python3\lib\subprocess.py", line 1127, in _get_handles p2cread = msvcrt.get_osfhandle(stdin.fileno()) OSError: [Errno 9] Bad file descriptor </code></pre>
<p>According to the Python documentation about <code>socket.fileno()</code>, it is stated that this won't work in Windows. Quoting from <a href="https://docs.python.org/3/library/socket.html?highlight=socket#module-socket" rel="nofollow noreferrer">Python Documentation</a>:</p> <blockquote> <p><code>socket.fileno()</code></p> <p>Return the socket’s file descriptor (a small integer). This is useful with select.select().</p> <p><strong>Under Windows the small integer returned by this method cannot be used where a file descriptor can be used (such as <code>os.fdopen()</code>)</strong>. Unix does not have this limitation.</p> </blockquote> <p><strong>Note:</strong></p> <p>The above code will work in Linux and other *nix systems.</p>
python|windows|python-3.x|sockets
1
1,909,847
6,481,771
Problems with IMAP in Gmail with Python
<p>I have a problem with IMAP in Python 2.7 For testing purposes, I have created <code>[email protected]</code> with the password <code>testing123testing</code> I am following <a href="http://yuji.wordpress.com/2011/06/22/python-imaplib-imap-example-with-gmail/" rel="nofollow">this tutorial</a> and typed this into my Python Iteractive Shell:</p> <pre><code> Python 2.7.2 (default, Jun 12 2011, 15:08:59) [MSC v.1500 32 bit (Intel)] on win32 Type "copyright", "credits" or "license()" for more information. &gt;&gt;&gt; import imaplib mail = imaplib.IMAP4_SSL('imap.gmail.com') mail.login('[email protected]', 'testing123testing') mail.list() # Out: list of "folders" aka labels in gmail. mail.select("inbox") # connect to inbox. &gt;&gt;&gt; </code></pre> <p>Nothing happens, not even error messages. Note: I have enabled IMAP in Gmail Thanks, -tim</p> <p>Update: In response to this comment:</p> <blockquote> <p>Did you do the next section after the code you quoted above? – Amber </p> </blockquote> <p>I tried this: </p> <pre><code> Python 2.7.2 (default, Jun 12 2011, 15:08:59) [MSC v.1500 32 bit (Intel)] on win32 Type "copyright", "credits" or "license()" for more information. &gt;&gt;&gt; import imaplib mail = imaplib.IMAP4_SSL('imap.gmail.com') mail.login('[email protected]', 'mypassword') mail.list() # Out: list of "folders" aka labels in gmail. mail.select("inbox") # connect to inbox. result, data = mail.search(None, "ALL") ids = data[0] # data is a list. id_list = ids.split() # ids is a space separated string latest_email_id = id_list[-1] # get the latest result, data = mail.fetch(latest_email_id, "(RFC822)") # fetch the email body (RFC822) for the given ID raw_email = data[0] # here's the body, which is raw text of the whole email # including headers and alternate payloads &gt;&gt;&gt; </code></pre> <p>and it still did nothing</p>
<p>It seems to work for me; I created a <code>sarnoldwashere</code> folder via the python API:</p> <pre><code>&gt;&gt;&gt; mail.create("sarnoldwashere") ('OK', ['Success']) &gt;&gt;&gt; mail.list() ('OK', ['(\\HasNoChildren) "/" "INBOX"', '(\\HasNoChildren) "/" "Personal"', '(\\HasNoChildren) "/" "Receipts"', '(\\HasNoChildren) "/" "Travel"', '(\\HasNoChildren) "/" "Work"', '(\\Noselect \\HasChildren) "/" "[Gmail]"', '(\\HasNoChildren) "/" "[Gmail]/All Mail"', '(\\HasNoChildren) "/" "[Gmail]/Drafts"', '(\\HasNoChildren) "/" "[Gmail]/Sent Mail"', '(\\HasNoChildren) "/" "[Gmail]/Spam"', '(\\HasNoChildren) "/" "[Gmail]/Starred"', '(\\HasChildren \\HasNoChildren) "/" "[Gmail]/Trash"', '(\\HasNoChildren) "/" "sarnoldwashere"']) &gt;&gt;&gt; mail.logout() ('BYE', ['LOGOUT Requested']) </code></pre> <p>It ought to still be there in the web interface. (Unless someone else deletes it in the meantime.)</p> <p><strong>Edit</strong> to include the full contents of the session, even including the boring bits where I re-learn The Way of Python:</p> <pre><code>&gt;&gt;&gt; import imaplib &gt;&gt;&gt; mail = imaplib.IMAP4_SSL('imap.gmail.com') &gt;&gt;&gt; mail.login('[email protected]', 'testing123testing') ('OK', ['[email protected] .. .. authenticated (Success)']) &gt;&gt;&gt; mail.list() ('OK', ['(\\HasNoChildren) "/" "INBOX"', '(\\HasNoChildren) "/" "Personal"', '(\\HasNoChildren) "/" "Receipts"', '(\\HasNoChildren) "/" "Travel"', '(\\HasNoChildren) "/" "Work"', '(\\Noselect \\HasChildren) "/" "[Gmail]"', '(\\HasNoChildren) "/" "[Gmail]/All Mail"', '(\\HasNoChildren) "/" "[Gmail]/Drafts"', '(\\HasNoChildren) "/" "[Gmail]/Sent Mail"', '(\\HasNoChildren) "/" "[Gmail]/Spam"', '(\\HasNoChildren) "/" "[Gmail]/Starred"', '(\\HasChildren \\HasNoChildren) "/" "[Gmail]/Trash"']) &gt;&gt;&gt; # Out: list of "folders" aka labels in gmail. ... mail.select("inbox") # connect to inbox. 
('OK', ['3']) &gt;&gt;&gt; mail.dir() Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; File "/usr/lib/python2.6/imaplib.py", line 214, in __getattr__ raise AttributeError("Unknown IMAP4 command: '%s'" % attr) AttributeError: Unknown IMAP4 command: 'dir' &gt;&gt;&gt; dir(mail) ['PROTOCOL_VERSION', '_CRAM_MD5_AUTH', '__doc__', '__getattr__', '__init__', '__module__', '_append_untagged', '_check_bye', '_checkquote', '_cmd_log', '_cmd_log_idx', '_cmd_log_len', '_command', '_command_complete', '_dump_ur', '_get_line', '_get_response', '_get_tagged_response', '_log', '_match', '_mesg', '_new_tag', '_quote', '_simple_command', '_untagged_response', 'abort', 'append', 'authenticate', 'capabilities', 'capability', 'certfile', 'check', 'close', 'continuation_response', 'copy', 'create', 'debug', 'delete', 'deleteacl', 'error', 'expunge', 'fetch', 'getacl', 'getannotation', 'getquota', 'getquotaroot', 'host', 'is_readonly', 'keyfile', 'list', 'literal', 'login', 'login_cram_md5', 'logout', 'lsub', 'mo', 'mustquote', 'myrights', 'namespace', 'noop', 'open', 'partial', 'port', 'print_log', 'proxyauth', 'read', 'readline', 'readonly', 'recent', 'rename', 'response', 'search', 'select', 'send', 'setacl', 'setannotation', 'setquota', 'shutdown', 'sock', 'socket', 'sort', 'ssl', 'sslobj', 'state', 'status', 'store', 'subscribe', 'tagged_commands', 'tagnum', 'tagpre', 'tagre', 'thread', 'uid', 'unsubscribe', 'untagged_responses', 'welcome', 'xatom'] &gt;&gt;&gt; dir(mail).sort() &gt;&gt;&gt; d=dir(mail) &gt;&gt;&gt; d.sort() &gt;&gt;&gt; d ['PROTOCOL_VERSION', '_CRAM_MD5_AUTH', '__doc__', '__getattr__', '__init__', '__module__', '_append_untagged', '_check_bye', '_checkquote', '_cmd_log', '_cmd_log_idx', '_cmd_log_len', '_command', '_command_complete', '_dump_ur', '_get_line', '_get_response', '_get_tagged_response', '_log', '_match', '_mesg', '_new_tag', '_quote', '_simple_command', '_untagged_response', 'abort', 'append', 'authenticate', 'capabilities', 'capability', 'certfile', 'check', 'close', 'continuation_response', 'copy', 'create', 'debug', 'delete', 'deleteacl', 'error', 'expunge', 'fetch', 'getacl', 'getannotation', 'getquota', 'getquotaroot', 'host', 'is_readonly', 'keyfile', 'list', 'literal', 'login', 'login_cram_md5', 'logout', 'lsub', 'mo', 'mustquote', 'myrights', 'namespace', 'noop', 'open', 'partial', 'port', 'print_log', 'proxyauth', 'read', 'readline', 'readonly', 'recent', 'rename', 'response', 'search', 'select', 'send', 'setacl', 'setannotation', 'setquota', 'shutdown', 'sock', 'socket', 'sort', 'ssl', 'sslobj', 'state', 'status', 'store', 'subscribe', 'tagged_commands', 'tagnum', 'tagpre', 'tagre', 'thread', 'uid', 'unsubscribe', 'untagged_responses', 'welcome', 'xatom'] &gt;&gt;&gt; mail.list() ('OK', ['(\\HasNoChildren) "/" "INBOX"', '(\\HasNoChildren) "/" "Personal"', '(\\HasNoChildren) "/" "Receipts"', '(\\HasNoChildren) "/" "Travel"', '(\\HasNoChildren) "/" "Work"', '(\\Noselect \\HasChildren) "/" "[Gmail]"', '(\\HasNoChildren) "/" "[Gmail]/All Mail"', '(\\HasNoChildren) "/" "[Gmail]/Drafts"', '(\\HasNoChildren) "/" "[Gmail]/Sent Mail"', '(\\HasNoChildren) "/" "[Gmail]/Spam"', '(\\HasNoChildren) "/" "[Gmail]/Starred"', '(\\HasChildren \\HasNoChildren) "/" "[Gmail]/Trash"']) &gt;&gt;&gt; mail.select("INBOX") # connect to inbox. 
('OK', ['3']) &gt;&gt;&gt; mail.list() ('OK', ['(\\HasNoChildren) "/" "INBOX"', '(\\HasNoChildren) "/" "Personal"', '(\\HasNoChildren) "/" "Receipts"', '(\\HasNoChildren) "/" "Travel"', '(\\HasNoChildren) "/" "Work"', '(\\Noselect \\HasChildren) "/" "[Gmail]"', '(\\HasNoChildren) "/" "[Gmail]/All Mail"', '(\\HasNoChildren) "/" "[Gmail]/Drafts"', '(\\HasNoChildren) "/" "[Gmail]/Sent Mail"', '(\\HasNoChildren) "/" "[Gmail]/Spam"', '(\\HasNoChildren) "/" "[Gmail]/Starred"', '(\\HasChildren \\HasNoChildren) "/" "[Gmail]/Trash"']) &gt;&gt;&gt; mail.list("INBOX") ('OK', ['(\\HasNoChildren) "/" "INBOX"']) &gt;&gt;&gt; mail.open("INBOX") Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; File "/usr/lib/python2.6/imaplib.py", line 1149, in open self.sock = socket.create_connection((host, port)) File "/usr/lib/python2.6/socket.py", line 547, in create_connection for res in getaddrinfo(host, port, 0, SOCK_STREAM): socket.gaierror: [Errno -2] Name or service not known &gt;&gt;&gt; mail.recent() ('OK', ['0']) &gt;&gt;&gt; mail.create("sarnoldwashere") ('OK', ['Success']) &gt;&gt;&gt; mail.list() ('OK', ['(\\HasNoChildren) "/" "INBOX"', '(\\HasNoChildren) "/" "Personal"', '(\\HasNoChildren) "/" "Receipts"', '(\\HasNoChildren) "/" "Travel"', '(\\HasNoChildren) "/" "Work"', '(\\Noselect \\HasChildren) "/" "[Gmail]"', '(\\HasNoChildren) "/" "[Gmail]/All Mail"', '(\\HasNoChildren) "/" "[Gmail]/Drafts"', '(\\HasNoChildren) "/" "[Gmail]/Sent Mail"', '(\\HasNoChildren) "/" "[Gmail]/Spam"', '(\\HasNoChildren) "/" "[Gmail]/Starred"', '(\\HasChildren \\HasNoChildren) "/" "[Gmail]/Trash"', '(\\HasNoChildren) "/" "sarnoldwashere"']) &gt;&gt;&gt; mail.logout() ('BYE', ['LOGOUT Requested']) &gt;&gt;&gt; </code></pre>
python|gmail|imap|imaplib
3
1,909,848
54,024,428
How to get the raw JSON response of a HTTP request from `driver.page_source` in Selenium webdriver Firefox
<p>If I browse to <a href="https://httpbin.org/headers" rel="noreferrer"><code>https://httpbin.org/headers</code></a> I expect to get the following JSON response:</p> <pre><code>{ "headers": { "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8", "Accept-Encoding": "gzip, deflate, br", "Accept-Language": "en-US,en;q=0.5", "Connection": "close", "Host": "httpbin.org", "Upgrade-Insecure-Requests": "1", "User-Agent": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:64.0) Gecko/20100101 Firefox/64.0" } } </code></pre> <p>However, if I use Selenium</p> <pre><code>from selenium import webdriver from selenium.webdriver.firefox.options import Options options = Options() options.headless = True driver = webdriver.Firefox(options=options) url = 'https://httpbin.org/headers' driver.get(url) print(driver.page_source) driver.close() </code></pre> <p>I get</p> <pre><code>&lt;html platform="linux" class="theme-light" dir="ltr"&gt;&lt;head&gt;&lt;meta http-equiv="Content-Security-Policy" content="default-src 'none' ; script-src resource:; "&gt;&lt;link rel="stylesheet" type="text/css" href="resource://devtools-client-jsonview/css/main.css"&gt;&lt;script type="text/javascript" charset="utf-8" async="" data-requirecontext="_" data-requiremodule="viewer-config" src="resource://devtools-client-jsonview/viewer-config.js"&gt;&lt;/script&gt;&lt;script type="text/javascript" charset="utf-8" async="" data-requirecontext="_" data-requiremodule="json-viewer" src="resource://devtools-client-jsonview/json-viewer.js"&gt;&lt;/script&gt;&lt;/head&gt;&lt;body&gt;&lt;div id="content"&gt;&lt;div id="json"&gt;{ "headers": { "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8", "Accept-Encoding": "gzip, deflate, br", "Accept-Language": "en-US,en;q=0.5", "Connection": "close", "Host": "httpbin.org", "Upgrade-Insecure-Requests": "1", "User-Agent": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:64.0) Gecko/20100101 Firefox/64.0" } } &lt;/div&gt;&lt;/div&gt;&lt;script src="resource://devtools-client-jsonview/lib/require.js" data-main="resource://devtools-client-jsonview/viewer-config.js"&gt;&lt;/script&gt;&lt;/body&gt;&lt;/html&gt; </code></pre> <p>Where do the HTML tags come from? How do I get the raw JSON response of a HTTP request from <code>driver.page_source</code>?</p>
<p>use the "view-source:" parameter in your url</p> <p><strong>Simple Mode:</strong></p> <p>example:</p> <pre><code>url = 'view-source:https://httpbin.org/headers' driver.get(url) content = driver.page_source print(content) </code></pre> <p>output:</p> <pre><code>'&lt;html&gt;&lt;head&gt;&lt;meta name="viewport" content="width=device-width"&gt;&lt;title&gt;https://httpbin.org/headers&lt;/title&gt;&lt;link rel="stylesheet" type="text/css" href="resource://content-accessible/viewsource.css"&gt;&lt;/head&gt;&lt;body id="viewsource" class="highlight" style="-moz-tab-size: 4"&gt;&lt;pre&gt;{\n "headers": {\n "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8", \n "Accept-Encoding": "gzip, deflate, br", \n "Accept-Language": "en-US,en;q=0.5", \n "Host": "httpbin.org", \n "Upgrade-Insecure-Requests": "1", \n "User-Agent": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:67.0) Gecko/20100101 Firefox/67.0"\n }\n}\n&lt;/pre&gt;&lt;/body&gt;&lt;/html&gt;' </code></pre> <p><strong>Best Mode: (for JSON)</strong></p> <p>example:</p> <pre><code>url = 'view-source:https://httpbin.org/headers' driver.get(url) content = driver.page_source content = driver.find_element_by_tag_name('pre').text parsed_json = json.loads(content) print(parsed_json) </code></pre> <p>output:</p> <pre><code>{'headers': {'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8', 'Accept-Encoding': 'gzip, deflate, br', 'Accept-Language': 'en-US,en;q=0.5', 'Host': 'httpbin.org', 'Upgrade-Insecure-Requests': '1', 'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:67.0) Gecko/20100101 Firefox/67.0'}} </code></pre>
python|json|selenium|selenium-webdriver|httpresponse
11
1,909,849
25,810,207
How to programmatically fill in RGB noise in the transparent region of an image using Python?
<p>I need to process a lot of images using Python. All these images have some transparent region (alpha channel) of different sizes.</p> <p>I need to programmatically fill in RGB noise in the transparent region of those images, but keep the non-transparent region unchanged. <a href="https://stackoverflow.com/questions/4761940/opencv-template-matching-and-transparency/20461136#20461136">This is an example of changing the images</a>.</p> <p>How to do this programmatically in Python?</p>
<p>In my opinion you need to:</p> <ol> <li>Create a <code>Mat</code> that contains Gaussian noise (or whatever kind of noise you need to add to the images).</li> <li>For each image, copy the noise <code>Mat</code> into another one based on the alpha channel (used as a mask).</li> <li>Add the two images (<code>initial</code> and <code>noise_mask</code>) together to get the final result, leaving the non-transparent region unchanged.</li> </ol>
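<p>A minimal numpy/OpenCV sketch of these steps, assuming the input is a PNG with an alpha channel and uniform RGB noise is wanted:</p> <pre><code>import cv2
import numpy as np

img = cv2.imread('input.png', cv2.IMREAD_UNCHANGED)  # BGRA image
bgr, alpha = img[:, :, :3], img[:, :, 3]

# random RGB noise with the same shape as the colour channels
noise = np.random.randint(0, 256, bgr.shape).astype(np.uint8)

# fill in noise only where the image is fully transparent;
# bgr is a view into img, so this modifies img in place
mask = alpha == 0
bgr[mask] = noise[mask]

cv2.imwrite('output.png', img)
</code></pre>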
python|opencv|image-processing|simplecv|template-matching
1
1,909,850
44,756,948
ValueError: unknown url type: h
<p>I wrote an application in python to download the file at a specified hour but I received <strong>ValueError: unknown url type: h</strong> Error this is my code</p> <pre><code>import time,os,urllib2 coun=input("Enter count of the movies:") x=0 namelist=[] addresslist=[] os.chdir('D:\\') while(coun &gt; x): name=raw_input("Enter the name of movie:") namelist.append(name) address=raw_input("enter the address of %s:"%(name)) addresslist.append(address) x=x+1 ti= time.localtime().tm_hour print('it\'s wating...') while(ti!=11): ti= time.localtime().tm_hour timi=time.localtime().tm_min tisec=time.localtime().tm_sec if (ti==3 &amp; timi==59 &amp; tisec==59): print('it\'s 3') print('it\'s your time.let start downloating') x=0 while(coun &gt; x): data=urllib2.urlopen(address[x]) file=open(namelist[x],'wb') file.write(data) file.close() x=x+1 </code></pre> <p>And when I run it and answer the questions that return to me this Error:</p> <pre><code>Traceback (most recent call last): File "tidopy.py", line 24, in &lt;module&gt; data=urllib2.urlopen(address[x]) File "C:\Python27\lib\urllib2.py", line 154, in urlopen return opener.open(url, data, timeout) File "C:\Python27\lib\urllib2.py", line 421, in open protocol = req.get_type() File "C:\Python27\lib\urllib2.py", line 283, in get_type raise ValueError, "unknown url type: %s" % self.__original ValueError: unknown url type: h </code></pre> <p><strong>How can I fix it?</strong> please help</p>
<p>This line:</p> <pre><code>data=urllib2.urlopen(address[x]) </code></pre> <p>Should most likely be this:</p> <pre><code>data=urllib2.urlopen(addresslist[x]) </code></pre> <p>You want the element of the list <code>addresslist</code>, not the first character of the string <code>address</code>.</p>
python|urllib2
1
1,909,851
44,570,305
Using Pycharm I don't get any output once code is run, just Process finished with exit code 0
<p>So pretty much, I have a directory that contains many of 2 types of text files, a bunch of files that begin with "summary_" and a bunch of files that begin with "log". All these text are in the same directory. For now I don't care about the "log" text files, I only care about the files that start with "summary".</p> <p>Each "summary" file contains either 7 lines of text or 14 lines of text. At the end of each line it will say either PASS or FAIL depending on the test result. For the test result to be passing all 7 or 14 lines have to say "PASS" at the end. If one of those lines have just one "FAIL" in it, The test has failed. I want to count the number of passes and failures.</p> <pre><code>import os import glob def pass_or_fail_counter(): pass_count = 0 fail_count = 0 os.chdir("/home/dario/Documents/Log_Test") print("Working directory is ") + os.getcwd() data = open('*.txt').read() count = data.count('PASS') if count == 7 or 14: pass_count = pass_count + 1 else: fail_count = fail_count + 1 print(pass_count) print(fail_count) f.close() pass_or_fail_counter() </code></pre>
<p>I don't know about Pycharm specifically, but the following seems to work outside it:</p> <pre><code>import os import glob def pass_or_fail_counter(logdir): pass_count, fail_count = 0, 0 for filename in glob.iglob(os.path.join(logdir, '*.txt')): with open(filename, 'rt') as file: lines = file.read().splitlines() if len(lines) in {7, 14}: # right length? if "PASS" in lines[-1]: # check last line for result pass_count += 1 else: fail_count += 1 print(pass_count, fail_count) pass_or_fail_counter("/home/dario/Documents/Log_Test") </code></pre>
python|pycharm
2
1,909,852
62,009,346
Why or-combined != sometimes does not behave as expected
<p>So I was trying to create a tic tac toe game and I ran into a problem with one of my method where I could not figure out why it was going on an infinite loop. My code is:</p> <pre><code>def player_input(): marker = '' while marker != 'X' or marker != 'O': marker = input('Do you want to be X or O: ').upper() print(marker) if marker == 'X': return ['X','O'] return ['O','X'] </code></pre> <p>What it is currently doing is that it keeps asking the question even when the user inputs X or O. The code works when I use the condition:</p> <pre><code>while not (marker == 'X' or marker == 'O'): </code></pre>
<p>The problem is your logic in checking <code>marker != 'X' or marker != 'O'</code>.</p> <p>Let's pretend <code>marker == 'X'</code>. So our expression evaluates to <code>False or True</code> which evaluates to True. Same goes with <code>marker == 'O'</code>. Our expression here evaluates to <code>True or False</code> which evaluates to True.</p> <p>You should be using <code>and</code>, not <code>or</code>.</p> <p>Your second expression, <code>not (marker == 'X' or marker == 'O')</code> is equivalent to <code>(not marker == 'X') and (not marker == 'O')</code>, so it works. (<a href="https://en.wikipedia.org/wiki/De_Morgan&#39;s_laws" rel="nofollow noreferrer">De Morgan's laws</a>)</p> <pre class="lang-py prettyprint-override"><code>def player_input(): marker = '' while marker != 'X' and marker != 'O': # change from 'or' to 'and' marker = input('Do you want to be X or O: ').upper() print(marker) if marker == 'X': return ['X','O'] return ['O','X'] </code></pre>
python-3.x|comparator
7
1,909,853
23,678,120
Assign variable names to different files in python
<p>I am trying to open various text files simulataneously in python. I want to assign a unique name to each file. I have tried the follwoing but it is not working:</p> <pre><code>for a in [1,2,11]: "DOS%s"%a=open("DOS%s"%a,"r") </code></pre> <p>Instead I get this error:</p> <pre><code>SyntaxError: can't assign to operator </code></pre> <p>What is the correct way to do this?</p>
<p>The target of an assignment must be a plain name (or an attribute/item), not an expression like <code>"DOS%s" % a</code>, which is why you get a <code>SyntaxError</code>. Instead, keep the file objects in a dictionary keyed by the number:</p> <pre><code>files = {f: open("DOS%s" % f) for f in [1, 2, 11]}
</code></pre> <p>then access the files like:</p> <pre><code>files[1].read()
</code></pre>
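<p>When you're done with them, remember to close each handle, e.g.:</p> <pre><code>for f in files.values():
    f.close()
</code></pre>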
python|python-2.7
3
1,909,854
20,413,313
In Pandas How to sort one level of a multi-index based on the values of a column, while maintaining the grouping of the other level
<p>I'm taking a Data Mining course at university right now, but I'm a wee bit stuck on a multi-index sorting problem. </p> <p>The actual data involves about 1 million reviews of movies, and I'm trying to analyze that based on American zip codes, but to test out how to do what I want, I've been using a much smaller data set of 250 randomly generated ratings for 10 movies and instead of zip codes, I'm using age groups.</p> <p>So this is what I have right now, it's a multiindexed DataFrame in Pandas with two levels, 'group' and 'title'</p> <pre><code> rating group title Alien 4.000000 Argo 2.166667 Adults Ben-Hur 3.666667 Gandhi 3.200000 ... ... Alien 3.000000 Argo 3.750000 Coeds Ben-Hur 3.000000 Gandhi 2.833333 ... ... Alien 2.500000 Argo 2.750000 Kids Ben-Hur 3.000000 Gandhi 3.200000 ... ... </code></pre> <p>What I'm aiming for is to sort the titles based on their rating within the group (and only show the most popular 5 or so titles within each group) </p> <p>So something like this (but I'm only going to show two titles in each group):</p> <pre><code> rating group title Alien 4.000000 Adults Ben-Hur 3.666667 Argo 3.750000 Coeds Alien 3.000000 Gandhi 3.200000 Kids Ben-Hur 3.000000 </code></pre> <p>Anyone know how to do this? I've tried sort_order, sort_index, etc and swapping the levels, but they mix up the groups too. So it then looks like:</p> <pre><code> rating group title Adults Alien 4.000000 Coeds Argo 3.750000 Adults Ben-Hur 3.666667 Kids Gandhi 3.666667 Coeds Alien 3.000000 Kids Ben-Hur 3.000000 </code></pre> <p>I'm kind of looking for something like this: <a href="https://stackoverflow.com/questions/17242970/multi-index-sorting-in-pandas">Multi-Index Sorting in Pandas</a>, but instead of sorting based on another level, I want to sort based on the values. Kind of like if that person wanted to sort based on his sales column.</p> <p>Thanks!</p>
<p>You're looking for <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.sort.html" rel="nofollow">sort</a>:</p> <pre><code>In [11]: s = pd.Series([3, 1, 2], [[1, 1, 2], [1, 3, 1]]) In [12]: s.sort() In [13]: s Out[13]: 1 3 1 2 1 2 1 1 3 dtype: int64 </code></pre> <p>Note: this works in place (i.e. it modifies s); to return a copy use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.order.html" rel="nofollow">order</a>:</p> <pre><code>In [14]: s.order() Out[14]: 1 3 1 2 1 2 1 1 3 dtype: int64 </code></pre> <p>Update: I realised what you were actually asking, and I think this ought to be an option in <code>sortlevel</code>, but for now I think you have to reset_index, groupby and apply:</p> <pre><code>In [21]: s.reset_index(name='s').groupby('level_0').apply(lambda s: s.sort('s')).set_index(['level_0', 'level_1'])['s'] Out[21]: level_0 level_1 1 3 1 1 3 2 1 2 Name: 0, dtype: int64 </code></pre> <p><em>Note: you can set the level names to [None, None] afterwards.</em></p>
python|sorting|pandas|multi-index
2
1,909,855
20,432,278
Calculate numpy.std of each pandas.DataFrame's column?
<p>I want to get the <code>numpy.std</code> of each column of my <code>pandas.DataFrame</code>.</p> <p>Here is my code:</p> <pre><code>import pandas as pd import numpy as np prices = pd.DataFrame([[-0.33333333, -0.25343423, -0.1666666667], [+0.23432323, +0.14285714, -0.0769230769], [+0.42857143, +0.07692308, +0.1818181818]]) print(pd.DataFrame(prices.std(axis=0))) </code></pre> <p>Here is my code's output:</p> <pre><code>pd.DataFrame([[ 0.39590933], [ 0.21234018], [ 0.1809432 ]]) </code></pre> <p>And here is the right output (if calculate with <code>np.std</code>)</p> <pre><code>pd.DataFrame([[ 0.32325862], [ 0.17337503], [ 0.1477395 ]]) </code></pre> <p><strong>Why am I having such difference? How can I fix that?</strong></p> <p><em><strong>NOTE</em></strong>: I have tried to do this way:</p> <pre><code>print(np.std(prices, axis=0)) </code></pre> <p>But I had the following error:</p> <pre><code>Traceback (most recent call last): File "C:\Users\*****\Documents\******\******\****.py", line 10, in &lt;module&gt; print(np.std(prices, axis=0)) File "C:\Python33\lib\site-packages\numpy\core\fromnumeric.py", line 2812, in std return std(axis=axis, dtype=dtype, out=out, ddof=ddof) TypeError: std() got an unexpected keyword argument 'dtype' </code></pre> <p>Thank you!</p>
<p>They're both right: they just differ on what the default delta degrees of freedom is. <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.std.html" rel="nofollow noreferrer"><code>np.std</code></a> uses 0, and <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.std.html" rel="nofollow noreferrer"><code>DataFrame.std</code></a> uses 1:</p> <pre><code>&gt;&gt;&gt; prices.std(axis=0, ddof=0) 0 0.323259 1 0.173375 2 0.147740 dtype: float64 &gt;&gt;&gt; prices.std(axis=0, ddof=1) 0 0.395909 1 0.212340 2 0.180943 dtype: float64 &gt;&gt;&gt; np.std(prices.values, axis=0, ddof=0) array([ 0.32325862, 0.17337503, 0.1477395 ]) &gt;&gt;&gt; np.std(prices.values, axis=0, ddof=1) array([ 0.39590933, 0.21234018, 0.1809432 ]) </code></pre>
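<p>To make the difference concrete, here is a small illustrative sketch computing both estimators by hand for the first column (divide by <code>n</code> for the population std, by <code>n - 1</code> for the sample std):</p>

<pre><code>import numpy as np

x = np.array([-0.33333333, 0.23432323, 0.42857143])  # first column
n = len(x)
m = x.mean()

print(np.sqrt(((x - m) ** 2).sum() / n))        # 0.32325862, numpy's default (ddof=0)
print(np.sqrt(((x - m) ** 2).sum() / (n - 1)))  # 0.39590933, pandas' default (ddof=1)
</code></pre>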
python|numpy|pandas
10
1,909,856
35,820,604
python convert sorted Double linked list to BST
<p>The idea is quite similar to most others: do an in-order, in-place traversal, treating left as prevNode and right as nextNode. For some reason it just doesn't work; the recursion doesn't seem to run as expected.</p>

<p>I tested that my doubly linked list is constructed correctly by printing prevNode and nextNode, but it still says <code>'NoneType' object has no attribute 'val'</code>. The problem is in buildTree.</p>

<p>Can anyone please help?</p>

<pre><code>class LinkNode(object):
    def __init__(self, val):
        self.val = val
        self.nextNode = None
        self.prevNode = None

def buildLinkList(arr):
    dummy = head = LinkNode(None)
    dummy.nextNode = head
    for val in arr:
        new_node = LinkNode(val)
        new_node.prevNode = head
        head.nextNode = new_node
        head = head.nextNode
    return dummy.nextNode

def printLink(head):
    while head:
        print head.val
        if not head.nextNode:
            #print head.val
            return head
        head = head.nextNode

def buildTree(head, n):
    if n &lt;= 0:
        return None
    left = buildTree(head, n / 2)
    print head.val
    root = head
    root.prevNode = left
    head = head.nextNode
    root.nextNode = buildTree(head, n - n / 2 - 1)
    return root

def inorder(root):
    if root:
        inorder(root.prevNode)
        print root.val
        inorder(root.nextNode)

arr = [1, 2, 3, 4, 5, 6, 7]
head = buildLinkList(arr)
#print head.val
root = buildTree(head, 7)
</code></pre>
<p><code>head</code> is not updated across recursive calls, so here <code>head</code> should be a global variable (similar to how a double pointer would be used in C).</p>

<p>Update your buildTree function like this:</p>

<pre><code>def buildTree(n):
    global head
    if n&lt;=0 :
        return None
    left = buildTree(n/2)
    root = head
    root.prevNode = left
    head = head.nextNode
    root.nextNode = buildTree(n-n/2-1)
    return root
</code></pre>
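<p>For completeness, here is a self-contained sketch of the whole flow using the global-<code>head</code> version. It is written for Python 3 (so it uses <code>//</code> and <code>print()</code>) but otherwise follows the names in the question:</p>

<pre><code>class LinkNode(object):
    def __init__(self, val):
        self.val = val
        self.nextNode = None  # doubles as the right child after conversion
        self.prevNode = None  # doubles as the left child after conversion

def buildLinkList(arr):
    dummy = head = LinkNode(None)
    for val in arr:
        new_node = LinkNode(val)
        new_node.prevNode = head
        head.nextNode = new_node
        head = head.nextNode
    return dummy.nextNode

def buildTree(n):
    global head
    if n &lt;= 0:
        return None
    left = buildTree(n // 2)           # build the left subtree first
    root = head                        # the current list node becomes the root
    root.prevNode = left
    head = head.nextNode               # advance the shared cursor
    root.nextNode = buildTree(n - n // 2 - 1)
    return root

def inorder(root):
    if root:
        inorder(root.prevNode)
        print(root.val)
        inorder(root.nextNode)

head = buildLinkList([1, 2, 3, 4, 5, 6, 7])
root = buildTree(7)
inorder(root)  # prints 1..7 in order, confirming a valid BST shape
</code></pre>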
python|binary-search-tree|doubly-linked-list
0
1,909,857
29,504,138
passing a variable to urlopen() and reading it again in python using bs4
<p>I am planning to open a bunch of links where the only thing changing is the year at the end of the links. I am using the code below but it is returning a bunch of errors. My aim is to open that link and filter some things on the page but first I need to open all the pages so I have the test code. Code below:</p> <pre><code>from xlwt import * from urllib.request import urlopen from bs4 import BeautifulSoup, SoupStrainer from xlwt.Style import * j=2014 for j in range(2015): conv=str(j) content = urlopen("http://en.wikipedia.org/wiki/List_of_Telugu_films_of_%s").read() %conv j+=1 print(content) </code></pre> <p>Errors:</p> <pre><code>Traceback (most recent call last): File "F:\urltest.py", line 11, in &lt;module&gt; content = urlopen("http://en.wikipedia.org/wiki/List_of_Telugu_films_of_%s").read() %conv File "C:\Python34\lib\urllib\request.py", line 161, in urlopen return opener.open(url, data, timeout) File "C:\Python34\lib\urllib\request.py", line 469, in open response = meth(req, response) File "C:\Python34\lib\urllib\request.py", line 579, in http_response 'http', request, response, code, msg, hdrs) File "C:\Python34\lib\urllib\request.py", line 507, in error return self._call_chain(*args) File "C:\Python34\lib\urllib\request.py", line 441, in _call_chain result = func(*args) File "C:\Python34\lib\urllib\request.py", line 587, in http_error_default raise HTTPError(req.full_url, code, msg, hdrs, fp) urllib.error.HTTPError: HTTP Error 400: Bad Request </code></pre> <p>A little guidance required. If there is any other way to pass the variables[2014, 2015 etc] also it would be great.</p>
<p>The HTTP 400 happens because your string interpolation is in the wrong place: <code>%conv</code> is applied to the result of <code>.read()</code>, so the literal <code>%s</code> is sent to Wikipedia as part of the URL. The interpolation must come immediately after the string, as in <code>print("Hi %s!" % name)</code>. Also, <code>range()</code> already increments <code>j</code> for you, so you don't need <code>j+=1</code>.</p>

<p>Try:</p>

<pre><code>for j in range(2015):
    conv=str(j)
    content = urlopen("http://en.wikipedia.org/wiki/List_of_Telugu_films_of_%s" % conv).read()
</code></pre>

<p>Also, I am assuming you don't want to query from years 0 to 2015. You can call <code>range(start_year, end_year)</code> to iterate from <code>[start_year, end_year)</code>.</p>
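<p>A minimal sketch of the loop restricted to the years you presumably care about (2014 and 2015 here; adjust as needed):</p>

<pre><code>from urllib.request import urlopen

for year in range(2014, 2016):  # the end of the range is exclusive
    url = "http://en.wikipedia.org/wiki/List_of_Telugu_films_of_%s" % year
    content = urlopen(url).read()
    print(year, len(content))   # sanity check: bytes fetched per year
</code></pre>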
python|html|variables|beautifulsoup|logic
1
1,909,858
64,395,846
Pandas UDF in pyspark
<p>I am trying to fill a series of observation on a spark dataframe. Basically I have a list of days and I should create the missing one for each group.<br /> In pandas there is the <code>reindex</code> function, which is not available in pyspark.<br /> I tried to implement a pandas UDF:</p> <pre><code>@pandas_udf(schema, functionType=PandasUDFType.GROUPED_MAP) def reindex_by_date(df): df = df.set_index('dates') dates = pd.date_range(df.index.min(), df.index.max()) return df.reindex(dates, fill_value=0).ffill() </code></pre> <p>This looks like should do what I need, however it fails with this message <code>AttributeError: Can only use .dt accessor with datetimelike values</code> . What am I doing wrong here?<br /> Here the full code:</p> <pre><code>data = spark.createDataFrame( [(1, &quot;2020-01-01&quot;, 0), (1, &quot;2020-01-03&quot;, 42), (2, &quot;2020-01-01&quot;, -1), (2, &quot;2020-01-03&quot;, -2)], ('id', 'dates', 'value')) data = data.withColumn('dates', col('dates').cast(&quot;date&quot;)) schema = StructType([ StructField('id', IntegerType()), StructField('dates', DateType()), StructField('value', DoubleType())]) @pandas_udf(schema, functionType=PandasUDFType.GROUPED_MAP) def reindex_by_date(df): df = df.set_index('dates') dates = pd.date_range(df.index.min(), df.index.max()) return df.reindex(dates, fill_value=0).ffill() data = data.groupby('id').apply(reindex_by_date) </code></pre> <p>Ideally I would like something like this:</p> <pre><code>+---+----------+-----+ | id| dates|value| +---+----------+-----+ | 1|2020-01-01| 0| | 1|2020-01-02| 0| | 1|2020-01-03| 42| | 2|2020-01-01| -1| | 2|2020-01-02| 0| | 2|2020-01-03| -2| +---+----------+-----+ </code></pre>
<h1>Case 1: Each ID has an individual date range.</h1>

<p>I would try to reduce the content of the udf as much as possible. In this case I would only calculate the date range per ID in the udf. For the other parts I would use Spark native functions.</p>

<pre class="lang-py prettyprint-override"><code>from pyspark.sql import types as T
from pyspark.sql import functions as F

# Get min and max date per ID
date_ranges = data.groupby('id').agg(F.min('dates').alias('date_min'), F.max('dates').alias('date_max'))

# Calculate the date range for each ID
@F.udf(returnType=T.ArrayType(T.DateType()))
def get_date_range(date_min, date_max):
    return [t.date() for t in list(pd.date_range(date_min, date_max))]

# To get one row per potential date, we need to explode the UDF output
date_ranges = date_ranges.withColumn(
    'dates',
    F.explode(get_date_range(F.col('date_min'), F.col('date_max')))
)
date_ranges = date_ranges.drop('date_min', 'date_max')

# Add the value for existing entries and add 0 for others
result = date_ranges.join(
    data,
    ['id', 'dates'],
    'left'
)
result = result.fillna({'value': 0})
</code></pre>

<h1>Case 2: All IDs have the same date range</h1>

<p>I think there is no need to use a UDF here. What you want can be achieved in a different way: First, you get all possible IDs and all necessary dates. Second, you crossJoin them, which will provide you with all possible combinations. Third, left join the original data onto the combinations. Fourth, replace the resulting null values with 0.</p>

<pre class="lang-py prettyprint-override"><code># Get all unique ids
ids_df = data.select('id').distinct()

# Get the date series
date_min, date_max = data.agg(F.min('dates'), F.max('dates')).collect()[0]
dates = [[t.date()] for t in list(pd.date_range(date_min, date_max))]
dates_df = spark.createDataFrame(data=dates, schema=&quot;dates:date&quot;)

# Calculate all combinations
all_combinations = ids_df.crossJoin(dates_df)

# Add the value column
result = all_combinations.join(
    data,
    ['id', 'dates'],
    'left'
)

# Replace all null values with 0
result = result.fillna({'value': 0})
</code></pre>

<p>Please be aware of the following limitations with this solution:</p>

<ol>
<li>crossJoins can be quite costly. One potential solution to cope with the issue can be found in <a href="https://stackoverflow.com/questions/62092728/pyspark-crossjoin-between-2-dataframes-with-millions-of-records">this related question</a>.</li>
<li>The collect statement and the use of Pandas result in a not perfectly parallelised Spark transformation.</li>
</ol>

<hr />

<p>[EDIT] Split into two cases as I first thought all IDs have the same date range.</p>
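<p>As an aside: on Spark 2.4+ you can avoid the Python UDF in Case 1 entirely, since Spark SQL has a built-in <code>sequence</code> function that generates the per-ID date range natively. A sketch, using the same column names as above:</p>

<pre class="lang-py prettyprint-override"><code>from pyspark.sql import functions as F

# Min and max date per ID, then one row per day in between
date_ranges = data.groupby('id').agg(
    F.min('dates').alias('date_min'),
    F.max('dates').alias('date_max'))

date_ranges = date_ranges.withColumn(
    'dates',
    F.explode(F.expr('sequence(date_min, date_max, interval 1 day)')))

result = (date_ranges
          .select('id', 'dates')
          .join(data, ['id', 'dates'], 'left')
          .fillna({'value': 0}))
</code></pre>

<p>Everything stays inside the JVM, so this tends to parallelise better than the UDF version.</p>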
python|pandas|apache-spark|pyspark
3
1,909,859
73,052,858
Python Selenium window closing no matter what
<p>I don't really like to ask questions but I just can't find out what is wrong with my code. I'm new to selenium so please excuse me if it's something obvious.</p> <pre><code>from selenium import webdriver from selenium.webdriver.chrome.service import Service from webdriver_manager.chrome import ChromeDriverManager from selenium.webdriver.chrome.options import Options chrome_options = Options() chrome_options.add_experimental_option(&quot;detach&quot;, True) s=Service(ChromeDriverManager().install()) driver = webdriver.Chrome(options=chrome_options, service=s) driver.maximize_window() driver.get('https://www.youtube.com') </code></pre> <p>This code works, and opens up youtube successfully, however, the window will close shortly after opening. To combat this, I added the 'detach True' option into the code as shown above (<a href="https://stackoverflow.com/questions/51865300/python-selenium-keep-browser-open">Python selenium keep browser open</a>), however, this hasn't worked and the window will close a few seconds after opening. There was also this error showing when I ran the code.</p> <p><em>[17708:21796:0720/212826.842:ERROR:device_event_log_impl.cc(214)] [21:28:26.841] USB: usb_device_handle_win.cc:1048 Failed to read descriptor from node connection: A device attached to the system is not functioning. (0x1F)</em></p> <p>I looked at other people on SO who had this issue, but all the resources said to ignore it and that it shouldn't affect the running of the program. To stop the error message from popping up I put this line into my code. <em>chrome_options.add_experimental_option('excludeSwitches', ['enable-logging'])</em> This stopped the error from showing up but didn't stop the window from closing.</p> <p>Any help is appreciated, I'm running the most recent version of VS on windows 10.</p>
<p>After your script finishes running, it closes the browser no matter what. In your case there is nothing after the navigation, so the script, and with it the browser, finishes as soon as it reaches YouTube.</p>

<p>If you would like to stay on YouTube and observe the page once you navigate, you can add a wait so it doesn't close immediately.</p>

<p>Try adding this line at the end so that it will wait for 10 seconds:</p>

<p><code>time.sleep(10)</code></p>
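<p>A minimal sketch of where that goes in the question's script (don't forget the import):</p>

<pre><code>import time
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager

driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()))
driver.maximize_window()
driver.get('https://www.youtube.com')

time.sleep(10)  # keep the window open for 10 seconds before the script exits
</code></pre>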
python|selenium
1
1,909,860
73,050,984
SQLAlchemy db.create_all() doesn't create the tables when my models are defined in a separate file
<p>I'm somewhat new to Flask and I'm having a problem.</p>

<p>I have an app.py whereby I instantiated my Flask app and SQLAlchemy. (Code shown below):</p>

<pre><code>from flask import Flask
from flask_sqlalchemy import SQLAlchemy
import os

basedir = os.path.abspath(os.path.dirname(__file__))

app = Flask(__name__)
app.config['SECRET_KEY'] = 'I5aE2js75KeZHVx88qAm4gPHnDvM7lSD'
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///' + os.path.join(basedir, 'data.db')

db = SQLAlchemy(app)
</code></pre>

<p>Then on a separate file (models.py) I imported the db and used it as follows:</p>

<pre><code>from app import db

class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    username = db.Column(db.String(100), nullable=False)

    def __repr__(self):
        return f'&lt;User {self.username}&gt;'
</code></pre>

<p>The problem is when I use db.create_all(), the tables defined within my models.py file don't get created. I copied the classes over onto my app.py and they got created.</p>

<p>I don't know what I am doing wrong. Any help is greatly appreciated!</p>
<p>I like to put my db initialization into a function. The key point is that the module defining your models must be imported before <code>db.create_all()</code> runs; otherwise SQLAlchemy has never seen the <code>User</code> class and has nothing to create.</p>

<p>models.py</p>

<pre><code>from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()

class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    username = db.Column(db.String(100), nullable=False)

    def __repr__(self):
        return f'&lt;User {self.username}&gt;'
</code></pre>

<p>app.py</p>

<pre><code>from flask import Flask
import os
from models import db

def initialize_db(app):
    app.app_context().push()
    db.init_app(app)
    db.create_all()
    db.session.commit()

basedir = os.path.abspath(os.path.dirname(__file__))

app = Flask(__name__)
app.config['SECRET_KEY'] = 'I5aE2js75KeZHVx88qAm4gPHnDvM7lSD'
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///' + os.path.join(basedir, 'data.db')

initialize_db(app)
</code></pre>
python|sqlite|flask|sqlalchemy
0
1,909,861
73,032,562
How Check ManyRelatedManager is None
<p>models.py:</p>

<pre><code>class Role(models.Model):
    name = models.CharField(max_length=100)
    comp = models.ForeignKey(Company, models.CASCADE, related_name=&quot;comp_roles&quot;)
    users = models.ManyToManyField(User, related_name='user_roles', related_query_name='user_role', through='UserRole')

class UserRole(models.Model):
    user = models.ForeignKey(User, on_delete=models.CASCADE)
    role = models.ForeignKey(Role, on_delete=models.CASCADE)
</code></pre>

<p>Based on certain conditions I have two queries in api_views.py:</p>

<ol>
<li><code>Role.objects.filter(comp_id=self.comp_id).prefetch_related('users')</code></li>
<li><code>Role.objects.filter(comp_id=self.comp_id)</code></li>
</ol>

<p>RoleSerializer.py:</p>

<pre><code>class RoleSerializer(serializers.ModelSerializer):
    users = serializers.SerializerMethodField()

    def get_users(self, obj):
        #users: ManyRelatedManager = obj.users
        logger.info(f&quot;this is obj.users: {obj.users}&quot;)
        logger.info(f&quot;this is n obj.users: {obj.users == None}&quot;)
        # hit the db:
        # if not obj.users.all():
        # if obj.users.exists() == None:
        # no hit the db:
        # if obj.users.all() == None:
        # if obj.users == None:
        if obj.users.all() == None:
            logger.info(f&quot;obj.users is None!&quot;)
            return None
        logger.info(f&quot;obj.users is not None!&quot;)
        serializer = UserReadSerializer(obj.users.all(), many=True)
        return serializer.data
</code></pre>

<p>Both the <code>obj.users == None</code> <em>log</em> and the <code>obj.users.all() == None</code> <em>condition</em> are always false!</p>

<p>My question is: how can I find out whether obj.users or obj.users.all() (in RoleSerializer/get_users) is None, so I can decide whether to return None or the UserReadSerializer data?</p>
<p><code>obj.users.all()</code> returns a queryset, so comparing it against None will not work. Instead you can use the line below to get the count of entries and base your logic on that:</p>

<pre><code>obj.users.count()
</code></pre>

<p>If no users are present in the database for the model, it will return 0.</p>

<p><strong>Edit:</strong> Saw one more answer posted for this question now.</p>

<pre><code>obj.users.exists()
</code></pre>

<p>Using <code>exists()</code> is more efficient than getting the count.</p>

<p><strong>Edit:</strong> Adding complete code.</p>

<p><strong>Views.py</strong></p>

<pre><code>class RoleModelViewset(ModelViewSet):
    serializer_class = RoleSerializer

    def get_queryset(self):
        return Role.objects.filter(comp_id=self.comp_id).prefetch_related('users')
</code></pre>

<p><strong>Serializer.py</strong></p>

<pre><code>class RoleSerializer(serializers.ModelSerializer):
    users = serializers.SerializerMethodField()

    def get_users(self, obj):
        if not obj.users.exists():
            logger.info(f&quot;obj.users is None!&quot;)
            return None
        logger.info(f&quot;obj.users is not None!&quot;)
        serializer = UserReadSerializer(obj.users.all(), many=True)
        return serializer.data
</code></pre>
python|django|database|orm|django-orm
1
1,909,862
55,980,523
Formatting paragraph text in HTML as single line
<p>I have tried to extract text from html page using traditional beautiful soup method. I have followed the code from <a href="https://stackoverflow.com/a/24618186/10895147">another SO answer</a>.</p> <pre><code>import urllib from bs4 import BeautifulSoup url = "http://orizon-inc.com/about.aspx" html = urllib.urlopen(url).read() soup = BeautifulSoup(html) # kill all script and style elements for script in soup(["script", "style"]): script.extract() # rip it out # get text text = soup.get_text() # break into lines and remove leading and trailing space on each lines = (line.strip() for line in text.splitlines()) # break multi-headlines into a line each chunks = (phrase.strip() for line in lines for phrase in line.split(" ")) # drop blank lines text = '\n'.join(chunk for chunk in chunks if chunk) print(text) </code></pre> <p>I am able to extract text using this correctly for most of the pages. But I there occurs new line between the words in the paragraph for some particular pages like the one I've mentioned.</p> <p>result:</p> <pre><code>\nAt Orizon, we use our extensive consulting, management, technology and\nengineering capabilities to design, develop,\ntest, deploy, and sustain business and mission-critical solutions to government\nclients worldwide.\nBy using proven management and technology deployment\npractices, we enable our clients to respond faster to opportunities,\nachieve more from their operations, and ultimately exceed\ntheir mission requirements.\nWhere\nconverge\nTechnology &amp; Innovation\n© Copyright 2019 Orizon Inc., All Rights Reserved.\n&gt;' </code></pre> <p>In the result there occurs a new line between <strong>technology and\nengineering</strong>, <strong>develop,\ntest</strong>,etc.</p> <p>These are all the text inside the same paragraph. </p> <p>If we view it in html source code it is correct:</p> <pre><code>&lt;p&gt; At Orizon, we use our extensive consulting, management, technology and engineering capabilities to design, develop, test, deploy, and sustain business and mission-critical solutions to government clients worldwide. &lt;/p&gt; &lt;p&gt; By using proven management and technology deployment practices, we enable our clients to respond faster to opportunities, achieve more from their operations, and ultimately exceed their mission requirements. &lt;/p&gt; </code></pre> <p>What is the reason for this? and how can I extract it accurately?</p>
<p>Instead of splitting the text per line, you should be splitting the text per HTML tag, since for each paragraph and title, you want the text inside to be stripped of line breaks.</p> <p>You can do that by iterating over all elements of interest (I included <code>p</code>, <code>h2</code> and <code>h1</code> but you can extend the list), and for each element, strip it of any newlines, then append a newline to the end of the element to create a line break before the next element.</p> <p>Here's a working implementation:</p> <pre><code>import urllib.request from bs4 import BeautifulSoup url = "http://orizon-inc.com/about.aspx" html = urllib.request.urlopen(url).read() soup = BeautifulSoup(html,'html.parser') # kill all script and style elements for script in soup(["script", "style"]): script.extract() # rip it out # put text inside paragraphs and titles on a single line for p in soup(['h1','h2','p']): p.string = " ".join(p.text.split()) + '\n' text = soup.text # remove duplicate newlines in the text text = '\n\n'.join(x for x in text.splitlines() if x.strip()) print(text) </code></pre> <p>Output sample:</p> <pre><code>login About Us At Orizon, we use our extensive consulting, management, technology and engineering capabilities to design, develop, test, deploy, and sustain business and mission-critical solutions to government clients worldwide. By using proven management and technology deployment practices, we enable our clients to respond faster to opportunities, achieve more from their operations, and ultimately exceed their mission requirements. </code></pre> <p>If you don't want a gap between paragraphs/titles, use:</p> <pre><code>text = '\n'.join(x for x in text.splitlines() if x.strip()) </code></pre>
python|python-3.x|web-scraping|beautifulsoup
1
1,909,863
65,037,372
How can a webcam be accessed using opencv with python?
<p>This is the code that I currently have:</p> <pre class="lang-py prettyprint-override"><code>import cv2, time video=cv2.VideoCapture(0) check, frame=video.read() print(check) print(frame) cv2.imshow(&quot;Capturing&quot;, frame) cv2.waitkey(0) video.release() </code></pre> <p>The code is showing a syntax error.</p> <p>The error:</p> <pre class="lang-py prettyprint-override"><code>False None Traceback (most recent call last): File &quot;C:/Users/Stagiair/Desktop/aaa.py&quot;, line 9, in &lt;module&gt; cv2.imshow(&quot;Capturing&quot;, frame) cv2.error: OpenCV(4.4.0) C:\Users\appveyor\AppData\Local\Temp\1\pip-req-build-95hbg2jt\opencv\modules\highgui\src\window.cpp:376: error: (-215:Assertion failed) size.width&gt;0 &amp;&amp; size.height&gt;0 in function 'cv::imshow' </code></pre>
<p>Your example is showing that <code>video.read()</code> is returning <code>(False, None)</code>. This is telling you that the read failed. The failure you're seeing is the result of passing <code>None</code> to <code>cv2.imshow()</code>.</p> <p>On one of my laptops, it takes a few attempts for the video to 'warm up' and start returning images. Arranging to ignore some number of <code>(False, None)</code> returns is usually sufficient.</p>
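<p>A minimal sketch of that idea: retry the read until a frame arrives (the cap of 50 attempts is an arbitrary choice). Note the original code also spells <code>waitkey</code> in lowercase; the method is <code>cv2.waitKey</code>:</p>

<pre><code>import cv2

video = cv2.VideoCapture(0)

frame = None
for _ in range(50):  # give the camera a moment to warm up
    check, frame = video.read()
    if check:
        break

if frame is None:
    raise RuntimeError("camera never returned a frame")

cv2.imshow("Capturing", frame)
cv2.waitKey(0)
video.release()
cv2.destroyAllWindows()
</code></pre>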
python|cv2
0
1,909,864
64,036,638
Draw a turtle path without Turtle module
<p>I'm trying to code a program that draws the path of a 'turtle' when given a string. I can't use the turtle module. We suppose the turtle starts at (0,0) and points toward <em>y</em>.</p>

<p>Here are the 4 possible characters:</p>

<p>S: Go forward 1 in the current direction;<br />
R: Turn right 90 degrees;<br />
L: Turn left 90 degrees;<br />
T: Toggle displacement tracking (disable it if it is currently active, otherwise enable it).</p>

<p>For example, a path could be: SSSRSSLSTSST</p>

<p>I see two ways to approach this problem. Either the turtle is always moving straight in a plane that rotates, or the particle can 'recognize' where it is actually pointing and then move left and right.</p>

<p>In both situations, I'm stuck.</p>

<p>Here is the code I have so far:</p>

<pre><code>import matplotlib.pyplot as plt

pathUser=input('Write a path') #User enters a path
path=list(pathUser) #Convert string to a list

x=0
y=0

for p in path: #Check the letters one-by-one
    if p == &quot;S&quot;:
        y=y+1 #Moves straight
        plt.plot(x,y,'k^')
    elif p == &quot;R&quot;:

    elif p == &quot;L&quot;:

    elif p == &quot;T&quot;:

plt.show()
</code></pre>

<p>Is this a good start? I can rotate the point, but not the axis. Could someone help me figure out what to put in the R and L parts? Thank you in advance for your time and your help.</p>
<p>For turtle graphics, we need to store and update both direction and position. Here's a simple way:</p>

<ol>
<li>initialize position and direction</li>
<li>if S: change position according to the direction</li>
<li>if R: change direction by -90 degrees</li>
<li>if L: change direction by +90 degrees</li>
</ol>

<p>Here's sample code:</p>

<pre><code>import matplotlib.pyplot as plt

direction = 90
track = True
move_dir = {
    0: [ 1, 0],
   90: [ 0, 1],
  180: [-1, 0],
  270: [ 0, -1]}

x, y = [0, 0]
prev_x, prev_y = x, y

path = input('write a path: \n&gt;&gt;&gt;')

for p in path:
    if p == 'S':
        prev_x, prev_y = x, y
        x += move_dir[direction][0]
        y += move_dir[direction][1]
        if track:
            plt.plot([prev_x, x], [prev_y, y], color='red', marker='.', markersize=10)
    elif p == 'L':
        direction = (direction + 90) % 360
    elif p == 'R':
        if direction == 0:
            direction = 270
        else:
            direction = direction - 90
    else:
        track = not track

plt.grid()
plt.show()
</code></pre>

<p>Sample test case:</p>

<pre><code>write a path:
&gt;&gt;&gt;SSSRSSLSTSSTS
</code></pre>

<p>outputs:</p>

<p><a href="https://i.stack.imgur.com/Bqwj1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Bqwj1.png" alt="enter image description here" /></a></p>
python|matplotlib
0
1,909,865
72,066,222
What Does Tensorflow.js Use fbjs For?
<p>I was looking over the Tensorflow.js dependencies and noticed that <code>fbjs</code> is included in the dependency list. What functionality requires <code>fbjs</code>? I'm not familiar with the package, but I'm aware that it is a Facebook JavaScript package. It just seems a little strange to me, but as I said, I don't know much about <code>fbjs</code> so maybe there's something useful in the context of Tensorflow.js.</p>
<p>The Facebook JavaScript package, <code>fbjs</code>, used in Tensorflow.js is for the <code>tfjs-vis</code> package. <code>fbjs</code> contains several visual tools utilized in <code>tfjs-vis</code>.</p> <p>Being that Facebook is most known as a social media platform, I found it odd at the time.</p>
javascript|tensorflow.js|fbjs
0
1,909,866
10,561,527
Django - set ForeignKey deferrable foreign key constraint in SQLite3
<p>I seem to be stuck with creating an initially deferrable foreign key relationship between two models in Django, using SQLite3 as my backend storage.</p>

<p>Consider this simple example. This is what <em>models.py</em> looks like:</p>

<pre><code>from django.db import models

class Investigator(models.Model):
    name = models.CharField(max_length=250)
    email = models.CharField(max_length=250)

class Project(models.Model):
    name = models.CharField(max_length=250)
    investigator = models.ForeignKey(Investigator)
</code></pre>

<p>And this is what the output from <em>sqlall</em> looks like:</p>

<pre><code>BEGIN;
CREATE TABLE "moo_investigator" (
    "id" integer NOT NULL PRIMARY KEY,
    "name" varchar(250) NOT NULL,
    "email" varchar(250) NOT NULL
)
;
CREATE TABLE "moo_project" (
    "id" integer NOT NULL PRIMARY KEY,
    "name" varchar(250) NOT NULL,
    "investigator_id" integer NOT NULL REFERENCES "moo_investigator" ("id")
)
;
CREATE INDEX "moo_project_a7e50be7" ON "moo_project" ("investigator_id");
COMMIT;
</code></pre>

<p>"DEFERRABLE INITIALLY DEFERRED" is missing from the <em>investigator_id</em> column in the <em>project</em> table. What am I doing wrong?</p>

<p>p.s. I am new to Python and Django - using Python 2.6.1, Django 1.4 and SQLite 3.6.12</p>
<p>This behavior is now the default. See <a href="https://github.com/django/django/blob/803840abf7dcb6ac190f021a971f1e3dc8f6792a/django/db/backends/sqlite3/schema.py#L16" rel="nofollow noreferrer">https://github.com/django/django/blob/803840abf7dcb6ac190f021a971f1e3dc8f6792a/django/db/backends/sqlite3/schema.py#L16</a></p>
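<p>For reference, at the time of writing the linked SQLite schema editor defines the inline foreign key SQL roughly as below, so the deferrable clause is emitted automatically; treat the exact string as version-dependent:</p>

<pre><code># paraphrased from django/db/backends/sqlite3/schema.py
sql_create_inline_fk = (
    'REFERENCES %(to_table)s (%(to_column)s) DEFERRABLE INITIALLY DEFERRED'
)
</code></pre>

<p>In other words, running the migrations for the models above on a modern Django should produce an <code>investigator_id</code> column whose REFERENCES clause already ends in DEFERRABLE INITIALLY DEFERRED.</p>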
python|django|sqlite
1
1,909,867
62,573,842
How do I create a text file in jupyter notebook, from the output from my code
<p>How can I create and append the output of my code to a txt file? I have a lot of code.</p> <p>This is just a small example of what I'm trying to do:</p> <pre><code>def output_txt(): def blop(a,b):ans = a + b print(ans) blop(2,3) x = 'Cool Stuff' print(x) def main(x): f = open('demo.txt', 'a+') f.write(str(output_txt)) f.close main(x) f = open('demo.txt', 'r') contents = f.read() print(contents) </code></pre> <p>But the output gives me this:</p> <p>cool bananas&lt;function output_txt at 0x0000017AA66F0F78&gt;&lt;function output_txt at 0x0000017AA66F0B88&gt;&lt;function output_txt at 0x0000017AA66F0948&gt;</p>
<p>If you just want to save the output of your code to a text file, you can add this to a Jupyter notebook cell (the variable <code>result</code> holds the content you want to write away):</p> <pre><code>with open('result.txt', 'a') as fp: fp.write(result) </code></pre> <p>If you want to convert your whole Jupyter notebook to a txt file (so including the code and the markdown text), you can use <a href="https://nbconvert.readthedocs.io/en/latest/" rel="nofollow noreferrer">nbconvert</a>. For example, to convert to reStructuredText:</p> <pre><code>jupyter nbconvert --to rst notebook.ipynb </code></pre>
python|file|jupyter-notebook
2
1,909,868
60,681,488
Using numpy.exp to calculate object life length
<p>I can't find any example anywhere on the internet.</p>

<p>I would like to learn how to use the exponential law to calculate a probability.</p>

<p>This is my exponential lambda: 0.0035</p>

<blockquote>
  <ol>
  <li>What is the probability that my object becomes defective before 100 hours of work? P(X &lt; 100)</li>
  </ol>
</blockquote>

<p>How could I write this with numpy or scipy? Thanks!</p>

<p><strong>Edit: this is the math:</strong></p>

<p>P(X &lt; 100) = 1 - e ** (-0.0035 * 100) = 0.3 = 30%</p>

<p><strong>Edit 2:</strong> </p>

<p>Hey guys, I may have found something here: <a href="http://web.stanford.edu/class/archive/cs/cs109/cs109.1192/handouts/pythonForProbability.html" rel="nofollow noreferrer">http://web.stanford.edu/class/archive/cs/cs109/cs109.1192/handouts/pythonForProbability.html</a></p>

<p><strong>Edit 3:</strong></p>

<p>This is my attempt with scipy:</p>

<pre><code>from scipy import stats

B = stats.expon(0.0035)  # Declare B to be an exponential random variable
print(B.pdf(1))          # f(1), the probability density at 1
print(B.cdf(100))        # F(100), which is also P(B &lt; 100)
print(B.rvs())           # Get a random sample from B
</code></pre>

<p>But B.cdf is wrong: it prints 1, while it should print 0.30. Please help! B.pdf prints 0.369: what is this?</p>

<p><strong>Edit 4:</strong> I've done it with the Python math lib like this:</p>

<pre><code>lambdaCalcul = - 0.0035 * 100

MyExponentialProbability = 1 - math.exp(lambdaCalcul)

print("My probability is", MyExponentialProbability * 100, "%");
</code></pre>

<p>Any other solution with numpy or scipy is appreciated, thank you</p>
<p>The <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.expon.html" rel="nofollow noreferrer"><strong><code>expon(..)</code></strong></a> function takes the parameters <code>loc</code> and <code>scale</code>. For the exponential distribution, <code>scale</code> is <strong>1/lambda</strong> (the inverse of the rate, which also equals the mean), not lambda itself. We can thus construct the distribution with:</p>

<pre><code>B = stats.expon(<b>scale=1/0.0035</b>)</code></pre>

<p>Then the cumulative distribution function gives for <code>P(X &lt; 100)</code>:</p>

<pre><code>&gt;&gt;&gt; print(B.cdf(100))
0.2953119102812866
</code></pre>
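<p>A quick sketch to sanity-check this against the closed-form answer <code>1 - e**(-lambda * x)</code> from the question:</p>

<pre><code>import math
from scipy import stats

lam = 0.0035
B = stats.expon(scale=1/lam)

print(B.cdf(100))                # 0.2953119102812866
print(1 - math.exp(-lam * 100))  # same value, straight from the formula
</code></pre>

<p>Passing <code>stats.expon(0.0035)</code>, as in the question, sets <code>loc</code> (a shift of the distribution) rather than the rate, which is why <code>B.cdf(100)</code> came out as essentially 1.</p>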
numpy|scipy|statistics|exponential
0
1,909,869
60,672,065
how to solve '[mov,mp4,m4a,3gp,3g2,mj2 @ 0000021c356d9e00] moov atom not found' in opencv
<p>I'm trying to create a video uploader in a kivy app using OpenCV. However, when I try to upload a video, I get the following error </p> <pre><code>[mov,mp4,m4a,3gp,3g2,mj2 @ 0000021c356d9e00] moov atom not found [mov,mp4,m4a,3gp,3g2,mj2 @ 0000021c356d9e00] moov atom not found [mov,mp4,m4a,3gp,3g2,mj2 @ 0000021c356d9e00] moov atom not found [mov,mp4,m4a,3gp,3g2,mj2 @ 0000021c356d9e00] moov atom not found ... </code></pre> <p>The screen becomes unresponsive during this. I edited the save() function recently and added an uploadClass() because I was getting another error. </p> <p><strong>main.py</strong></p> <pre><code>... class SaveDialog(Screen): save = ObjectProperty(None) text_input = ObjectProperty(None) cancel = ObjectProperty(None) def save(self, path, filename): for letter in os.path.join(path, filename): print(letter) def find(s, ch): return [i for i, letter in enumerate(s) if letter == ch] os_path_simpl = list(os.path.join(path, filename)) for t in range(len(find(os.path.join(path, filename), '\\'))): os_path_simpl[find(os.path.join(path, filename), '\\')[t]] = '\\' class uploadClass(object): video = ''.join(os_path_simpl) def __init__(self, src=video): self.video_selected = cv2.VideoCapture(src) self.vid_cod = cv2.VideoWriter_fourcc(*'mp4v') self.out = cv2.VideoWriter('media/testOne.mp4', self.vid_cod, 20.0, (640,480)) self.thread = Thread(target=self.update, args=()) self.thread.daemon = True self.thread.start() def update(self): while True: if self.video_selected.isOpened(): (self.status, self.frame) = self.video_selected.read() def show_frame(self): if self.status: cv2.imshow('uploading', self.frame) if cv2.waitKey(10) &amp; 0xFF == ord('q'): self.video_selected.release() self.out.release() cv2.destroyAllWindows() exit(1) def save_frame(self): self.out.write(self.frame) rtsp_stream_link = 'media/testOne.mp4' upload_Class = uploadClass(rtsp_stream_link) while True: try: upload_Class.__init__() upload_Class.show_frame() upload_Class.save_frame() except AttributeError: pass sm.current = "home" ... </code></pre>
<p>The moov atom contains various bits of information required to play a video, and the errors you are getting say that this information is either missing or corrupt.</p>

<p>This can happen, for example, when you create a video and then attempt to move/upload the file whilst the creation process is still running.</p>

<p>In your case I think you need to release the cv2.VideoWriter/cv2.VideoCapture objects prior to attempting to upload the file, i.e.</p>

<pre><code>self.video_selected.release()
self.out.release()
</code></pre>

<p>need to be called before the video is uploaded.</p>
python|opencv|ffmpeg
1
1,909,870
71,338,574
how to handle complex json where key is not always present using python?
<p>I need help getting the value of the key <em>issues</em>, which is not always present in the JSON.</p>

<p>My JSON object is as below:</p>

<pre><code>{
   &quot;records&quot;:[
      {
         &quot;previousAttempts&quot;: [],
         &quot;id&quot;: &quot;aaaaa-aaaaaa-aaaa-aaa&quot;,
         &quot;parentId&quot;: null
      },
      {
         &quot;previousAttempts&quot;: [],
         &quot;id&quot;: &quot;aaaaa-aaaaaa-aaaa-aaa&quot;,
         &quot;parentId&quot;: null,
         &quot;issues&quot;:[
            {
               &quot;type&quot;: &quot;warning&quot;,
               &quot;category&quot;: &quot;General&quot;
            },
            {
               &quot;type&quot;: &quot;warning&quot;,
               &quot;category&quot;: &quot;General&quot;
            }
         ]
      }
   ]
}
</code></pre>
<p>This should work for you</p> <pre class="lang-python prettyprint-override"><code>import json data = json.loads(json_data) issues = [r.get('issues', []) for r in data['records']] </code></pre>
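<p>A short usage sketch on the question's JSON, e.g. to count the issues per record (<code>json_data</code> is assumed to hold the JSON string):</p>

<pre class="lang-python prettyprint-override"><code>import json

data = json.loads(json_data)
for record in data['records']:
    issues = record.get('issues', [])  # [] whenever the key is absent
    print(record['id'], len(issues))
</code></pre>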
python|json
2
1,909,871
64,417,559
Unable to find fixture "mocker" (pytest-mock) when running from tox
<p>I have been using <a href="https://pypi.org/project/pytest-mock/" rel="noreferrer">pytest-mock</a> library for mocking with pytest. When I'm trying to run the test using <code>tox</code> command, I am getting the following error:</p> <pre class="lang-sh prettyprint-override"><code>... tests/test_cli.py ....EEEE ... file /path/to/test_cli.py, line 63 def test_cli_with_init_cmd_fails_with_db_error(runner, mocker, context): E fixture 'mocker' not found &gt; available fixtures: cache, capfd, capfdbinary, caplog, capsys, capsysbinary, context, cov, doctest_namespace, fs, monkeypatch, no_cover, pytestconfig, record_property, record_testsuite_property, record_xml_attribute, recwarn, requests_mock, runner, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory &gt; use 'pytest --fixtures [testpath]' for help on them. </code></pre> <p>However, when I try to run the test directly using pytest from within my venv, everything works as expected.</p> <p><code>$ py.test --cov esmigrate --cov-report term-missing</code></p> <pre><code>... platform linux -- Python 3.8.5, pytest-6.1.1, py-1.9.0, pluggy-0.13.1 rootdir: /path/to/project/root, configfile: tox.ini plugins: cov-2.10.1, pyfakefs-4.0.2, mock-3.3.1, requests-mock-1.8.0 collected 50 items tests/test_cli.py ........ [ 16%] tests/test_contexts/test_context_config.py ... [ 22%] tests/test_internals/test_db_manager.py .......... [ 42%] tests/test_internals/test_glob_loader.py ..... [ 52%] tests/test_internals/test_http_handler.py ....... [ 66%] tests/test_internals/test_script_parser.py ................. [100%] ... </code></pre> <p>Which is strange, because, I have added <code>pytest-mock</code> in my <code>requirements.txt</code> file, which was used to install dependencies within the venv, and I have this file added as a dependency for tox testenv as well. This is the content of my <code>tox.ini</code> file.</p> <pre class="lang-sh prettyprint-override"><code>[tox] envlist=py36, py37, py38, flake8 [pytest] filterwarnings = error::DeprecationWarning error::PendingDeprecationWarning [flake8] max-line-length = 120 select = B,C,E,F,W,T4,B9,B950 ignore = E203,E266,E501,W503,D1 [testenv] passenv=USERNAME commands=py.test --cov esmigrate {posargs} --cov-report term-missing deps= -rrequirements.txt [testenv:flake8] basepython = python3.8 deps = flake8 commands = flake8 esmigrate tests </code></pre> <p>A snapshot of <code>requirements.txt</code> file</p> <pre class="lang-sh prettyprint-override"><code>... pyfakefs==4.0.2 pyparsing==2.4.7 pyrsistent==0.17.3 pytest==6.1.1 pytest-cov==2.10.1 pytest-mock==3.3.1 PyYAML==5.3.1 ... </code></pre> <p>This doesn't cause any problem when ran from <code>travis-ci</code> either, but I want to know what's the problem here and what I've been doing wrong. Was tox-env unable to install pytest-mock, or did &quot;mocker&quot; fixture got shadowed by something else?</p>
<p>tox currently does not recreate environments when files it does not manage (such as requirements.txt / setup.py) change; improving this is planned as part of the rewrite that is in progress at the time of writing.</p>

<p>For a related question, you can see my <a href="https://stackoverflow.com/q/23032580/812183">question and workarounds</a>.</p>

<p>The core issue here: if you're not managing tox environment dependencies directly inline in <code>tox.ini</code>, tox will not notice changes (such as adding / removing dependencies from <code>requirements.txt</code>), so you will need to run tox with the <code>--recreate</code> flag to reflect those changes.</p>

<hr />

<p>disclaimer: I'm one of the current tox maintainers</p>
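<p>For example, <code>tox --recreate</code> rebuilds every environment in <code>envlist</code>, and <code>tox -r -e py38</code> (where <code>-r</code> is the short form of <code>--recreate</code>) rebuilds just the one environment you name.</p>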
python-3.x|pytest|tox
4
1,909,872
64,297,560
Add n values from the both sides of a pandas data frame column values
<p>I have a data frame like this,</p>

<pre><code>df

   col1     col2
    A        1
    B        3
    C        2
    D        5
    E        6
    F        8
    G        10
</code></pre>

<p>For each value of col2, I want to add its previous and next n values (together with the value itself) and store the sum in a new column.</p>

<p>So, if n=2, the data frame should look like:</p>

<pre><code> col1     col2    col3
  A        1       6 (only the 2 values below exist, no upper values, so adding 3 numbers)
  B        3       11 (adding one prev, the current and the next two)
  C        2       17 (adding all 5 values)
  D        5       24 (same as above)
  E        6       31 (same as above)
  F        8       29 (adding two prev and the next one, as only one is present)
  G        10      24 (adding only the prev two values)
</code></pre>

<p>When the previous or next 2 values are not available, I add whatever values exist. I can do it using a for loop, but the execution time would be huge, so I'm looking for a pandas shortcut to do it efficiently.</p>
<p>You can use the <code>rolling</code> method.</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd df = pd.read_json('{&quot;col1&quot;:{&quot;0&quot;:&quot;A&quot;,&quot;1&quot;:&quot;B&quot;,&quot;2&quot;:&quot;C&quot;,&quot;3&quot;:&quot;D&quot;,&quot;4&quot;:&quot;E&quot;,&quot;5&quot;:&quot;F&quot;,&quot;6&quot;:&quot;G&quot;},&quot;col2&quot;:{&quot;0&quot;:1,&quot;1&quot;:3,&quot;2&quot;:2,&quot;3&quot;:5,&quot;4&quot;:6,&quot;5&quot;:8,&quot;6&quot;:10}}') df['col3'] = df['col2'].rolling(5, center=True, min_periods=0).sum() </code></pre> <pre><code>col1 col2 col3 0 A 1 6.0 1 B 3 11.0 2 C 2 17.0 3 D 5 24.0 4 E 6 31.0 5 F 8 29.0 6 G 10 24.0 </code></pre>
python|pandas|dataframe|rolling-sum
5
1,909,873
70,379,603
Python script is failing with ('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))
<p>I have a similar issue to many of you, but I cannot get it resolved. I'm producing a self-executable file that runs fine on my VirtualBox Linux 7.3 and 7.9, but when I try to run it somewhere else (on other Linux servers) I get the output below:</p>

<pre><code> Traceback (most recent call last):
  File &quot;urllib3/connectionpool.py&quot;, line 706, in urlopen
  File &quot;urllib3/connectionpool.py&quot;, line 382, in _make_request
  File &quot;urllib3/connectionpool.py&quot;, line 1010, in _validate_conn
  File &quot;urllib3/connection.py&quot;, line 421, in connect
  File &quot;urllib3/util/ssl_.py&quot;, line 429, in ssl_wrap_socket
  File &quot;urllib3/util/ssl_.py&quot;, line 472, in _ssl_wrap_socket_impl
  File &quot;ssl.py&quot;, line 365, in wrap_socket
  File &quot;ssl.py&quot;, line 776, in __init__
  File &quot;ssl.py&quot;, line 1036, in do_handshake
  File &quot;ssl.py&quot;, line 648, in do_handshake
ConnectionResetError: [Errno 104] Connection reset by peer

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File &quot;requests/adapters.py&quot;, line 449, in send
  File &quot;urllib3/connectionpool.py&quot;, line 756, in urlopen
  File &quot;urllib3/util/retry.py&quot;, line 532, in increment
  File &quot;urllib3/packages/six.py&quot;, line 734, in reraise
  File &quot;urllib3/connectionpool.py&quot;, line 706, in urlopen
  File &quot;urllib3/connectionpool.py&quot;, line 382, in _make_request
  File &quot;urllib3/connectionpool.py&quot;, line 1010, in _validate_conn
  File &quot;urllib3/connection.py&quot;, line 421, in connect
  File &quot;urllib3/util/ssl_.py&quot;, line 429, in ssl_wrap_socket
  File &quot;urllib3/util/ssl_.py&quot;, line 472, in _ssl_wrap_socket_impl
  File &quot;ssl.py&quot;, line 365, in wrap_socket
  File &quot;ssl.py&quot;, line 776, in __init__
  File &quot;ssl.py&quot;, line 1036, in do_handshake
  File &quot;ssl.py&quot;, line 648, in do_handshake
urllib3.exceptions.ProtocolError: ('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File &quot;create_incident.py&quot;, line 415, in &lt;module&gt;
    open_incident_ticket()
  File &quot;create_incident.py&quot;, line 368, in open_incident_ticket
    resp = requests.post(endpoint_uri, headers=headers, data = json.dumps(data))
  File &quot;requests/api.py&quot;, line 119, in post
  File &quot;requests/api.py&quot;, line 61, in request
  File &quot;requests/sessions.py&quot;, line 542, in request
  File &quot;requests/sessions.py&quot;, line 655, in send
  File &quot;requests/adapters.py&quot;, line 498, in send
requests.exceptions.ConnectionError: ('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))
</code></pre>

<p>My POST call looks like:</p>

<pre><code>resp = requests.post(endpoint_uri, headers=headers, data = json.dumps(data))
</code></pre>

<p>Could you please advise me where exactly I need to look? Are there multiple issues that I'm struggling with? Many thanks, Mario</p>
<p>In the end, the issue was caused by firewall settings.</p>
python|python-3.x|rest|ssl|python-jsons
1
1,909,874
70,337,889
OpenCV Orb Algorithm QR Code Match Issues
<p>I'm following a video tutorial about feature detection and matching using Python OpenCV. The video uses the ORB (Oriented FAST and Rotated BRIEF) algorithm, as seen in the link below:</p>

<p><a href="https://youtu.be/nnH55-zD38I" rel="nofollow noreferrer">https://youtu.be/nnH55-zD38I</a></p>

<p>So I decided to use it on the two example images I have, with a little modification to the code. There are two input images, one with a single QR code (single.jpg), the other with a few different QR codes inside (multiple.jpg). The aim is to find the most similar region in the bigger image (multiple.jpg). But I am getting matches on totally different QR codes.</p>

<p>Why is it marking a different region, and how can we improve this example?</p>

<pre><code>import cv2

MULTIPLE_NAME =&quot;...multiple.jpg&quot;
SINGLE_NAME = &quot;...single.jpg&quot;

multiple = cv2.imread(MULTIPLE_NAME)
single = cv2.imread(SINGLE_NAME)

orb=cv2.ORB_create()

kpsingle,dessingle = orb.detectAndCompute(single,None)
kpmultiple,desmultiple = orb.detectAndCompute(multiple,None)

bf=cv2.BFMatcher()
matches = bf.knnMatch(dessingle, desmultiple, k=2)

good=[]
for m, n in matches:
    if m.distance &lt; 2*n.distance:
        good.append([m])

img3 = cv2.drawMatchesKnn(single, kpsingle, multiple, kpmultiple, good, None, flags=2)

cv2.imshow(&quot;img&quot;,multiple)
cv2.imshow(&quot;crop&quot;,single)
cv2.imshow(&quot;img3&quot;,img3)
cv2.waitKey()
</code></pre>

<p><a href="https://i.stack.imgur.com/EOBLS.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EOBLS.jpg" alt="single" /></a>
<a href="https://i.stack.imgur.com/DHaTA.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DHaTA.jpg" alt="multiple" /></a>
<a href="https://i.stack.imgur.com/FiSp1.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FiSp1.jpg" alt="img3" /></a>
<a href="https://i.stack.imgur.com/jzC8k.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jzC8k.jpg" alt="expected" /></a></p>
<p><strong>Method 1</strong>: pyzbar QR code (you need to pip or conda install pyzbar)</p>

<p>I tried and got this</p>

<pre class="lang-py prettyprint-override"><code>import cv2
import numpy as np
import pyzbar.pyzbar as pyzbar

YELLOW = (0,255,255)
RED = (0,0,255)
font = cv2.FONT_HERSHEY_SIMPLEX

img = cv2.imread(r&quot;C:\some-path\qr-codes-green.jpg&quot;)

# Create a qrCodeDetector Object
dec_objs = pyzbar.decode(img)

for d in dec_objs:
    pts = [[p.x,p.y] for p in d.polygon]
    txt_org = (d.rect.left,d.rect.top)
    txt = d.data.decode(&quot;utf-8&quot;)
    cv2.putText(img,txt,txt_org,font,0.5,YELLOW,1,cv2.LINE_AA)
    poly = np.int32([pts])
    cv2.polylines(img,poly,True,RED,thickness=1,lineType=cv2.LINE_AA)

cv2.imshow('QR-scanner',img)
cv2.waitKey(0)
</code></pre>

<p><strong>Results</strong>
<a href="https://i.stack.imgur.com/n4elZ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/n4elZ.jpg" alt="enter image description here" /></a></p>

<hr />

<p><strong>Method 2:</strong> wechat QR code</p>

<pre class="lang-py prettyprint-override"><code>import cv2
import numpy as np

YELLOW = (0,255,255)
RED = (0,0,255)
font = cv2.FONT_HERSHEY_SIMPLEX

img = cv2.imread(r&quot;C:\some-path\qr-codes-green.jpg&quot;)

mod_path = r'C:\some-path\model\\'
detector = cv2.wechat_qrcode_WeChatQRCode(mod_path+'detect.prototxt',
                                          mod_path+'detect.caffemodel',
                                          mod_path+'sr.prototxt',
                                          mod_path+'sr.caffemodel')

res, points = detector.detectAndDecode(img)

for i in range(len(res)):
    poly = points[i].astype(np.int32)
    txt = res[i]
    print(poly)
    txt_org = poly[0]
    cv2.putText(img,txt,txt_org,font,0.5,YELLOW,1,cv2.LINE_AA)
    cv2.polylines(img, [poly], True, RED, thickness=1, lineType=cv2.LINE_AA)

cv2.imshow('QR-scanner',img)
cv2.waitKey(0)
</code></pre>

<p><a href="https://i.stack.imgur.com/jNx3z.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jNx3z.jpg" alt="enter image description here" /></a></p>

<p>Good references here</p>

<p><a href="https://learnopencv.com/opencv-qr-code-scanner-c-and-python/" rel="nofollow noreferrer">https://learnopencv.com/opencv-qr-code-scanner-c-and-python/</a></p>

<p><a href="https://learnopencv.com/wechat-qr-code-scanner-in-opencv/" rel="nofollow noreferrer">https://learnopencv.com/wechat-qr-code-scanner-in-opencv/</a></p>
python|opencv|image-processing|sift|orb
1
1,909,875
11,354,544
Read lines containing integers from a file in Python?
<p>I have a file format like this:</p> <pre><code>9 8 1 3 4 1 ... ... </code></pre> <p>Now, I want to get each line as three integers.</p> <p>When I used</p> <pre><code>for line in f.readlines(): print line.split(" ") </code></pre> <p>The script printed this:</p> <pre><code>['9', '8', '1\r\n'] ['3', '4', '1\r\n'] ... ... </code></pre> <p>How can I get each line as three integers?</p>
<p>Using the code you have and addressing your specific question of how to convert your list to integers:</p> <p>You can iterate through each line and convert the strings to <code>int</code> with the following example using <a href="http://docs.python.org/tutorial/datastructures.html#list-comprehensions" rel="noreferrer">list comprehension</a>:</p> <p>Given:</p> <pre><code>line =['3', '4', '1\r\n'] </code></pre> <p>then:</p> <pre><code>int_list = [int(i) for i in line] </code></pre> <p>will yield a list of integers</p> <pre><code>[3, 4, 1] </code></pre> <p>that you can then access via subscripts (0 to 2). e.g. </p> <p><code>int_list[0]</code> contains <code>3</code>, </p> <p><code>int_list[1]</code> contains <code>4</code>, </p> <p>etc.</p> <hr> <p>A more streamlined version for your consideration:</p> <pre><code>with open('data.txt') as f: for line in f: int_list = [int(i) for i in line.split()] print int_list </code></pre> <p>The advantage of using <code>with</code> is that it will automatically close your file for you when you are done, or if you encounter an exception.</p> <p><strong>UPDATE</strong>:</p> <p>Based on your comments below, if you want the numbers in 3 different variables, say <code>a</code>, <code>b</code> and <code>c</code>, you can do the following:</p> <pre><code> for line in f: a, b, c = [int(i) for i in line.split()] print 'a = %d, b = %d, c = %d\n' %(a, b, c) </code></pre> <p>and get this:</p> <pre><code> a = 9, b = 8, c = 1 </code></pre> <p>This counts on there being 3 numbers on each line.</p> <p><em>Aside:</em></p> <p>Note that in place of "list comprehension" (LC) you can also use a "generator expression" (GE) of this form:</p> <pre><code> a, b, c = (int(i) for i in line.split()) </code></pre> <p>for your particular problem with 3 integers this doesn't make much difference, but I show it for completeness. For larger problems, LC requires more memory as it generates a complete list in memory at once, while GE generate a value one by one as needed. This SO question <a href="https://stackoverflow.com/questions/47789/generator-expressions-vs-list-comprehension">Generator Expressions vs. List Comprehension</a> will give you more information if you are curious.</p>
python|file-io|casting
18
1,909,876
63,341,400
Include file and line number in python logs so clicking in PyCharm can go to the source line?
<p>I'm new to Python but experienced in Java. One of the more useful tricks was to have the slf4j backend include file name and line number of the source line for a log statement so IDE's could navigate to that when clicked.</p> <p>I would like to have the same facility in Python but is inexperienced with the ecosystem.</p> <p>How do I configure the Python logging library to add this?</p>
<p>You can use any of the <a href="https://docs.python.org/3/library/logging.html#logrecord-attributes" rel="nofollow noreferrer">LogRecord attributes</a> in the formatter of a logger.</p> <pre><code>import logging logging.basicConfig(format=&quot;File: %(filename)s Line: %(lineno)d Message: %(message)s&quot;) logging.warning(&quot;this is a log&quot;) </code></pre> <p>Output: <code>File: lineno.py Line: 4 Message: this is a log</code></p>
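<p>If the goal is specifically clickable locations in PyCharm's run console, a format worth trying is the full path followed by <code>:line</code>; PyCharm generally turns <code>path:lineno</code> patterns into links (this is IDE behaviour, not something the logging module itself guarantees):</p>

<pre><code>import logging

logging.basicConfig(format=&quot;%(pathname)s:%(lineno)d %(levelname)s %(message)s&quot;)
logging.warning(&quot;this is a log&quot;)
</code></pre>

<p>Output: <code>/home/me/project/lineno.py:4 WARNING this is a log</code> (the path shown here is illustrative).</p>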
python|logging
0
1,909,877
63,467,820
Extracting the last items from nested lists in python
<p>I have some nested lists. I want to extract the last occurring element within each sublist (eg 'bye' for the first sublist). I then want to add all these last occurring elements ('bye', 'bye', 'hello' and 'ciao') to a new list so that I can count them easily, and find out which is the most frequently occurring one ('bye).</p> <p>The problem is that my code leaves me with an empty list. I've looked at <a href="https://stackoverflow.com/questions/62822696/extracting-first-and-last-element-from-sublists-in-nested-list">Extracting first and last element from sublists in nested list</a> and <a href="https://stackoverflow.com/questions/63199461/how-to-extract-the-last-item-from-a-list-in-a-list-of-lists-python">How to extract the last item from a list in a list of lists? (Python)</a> and they're not exactly what I'm looking for.</p> <p>Thanks for any help!</p> <pre><code>my_list = [['hello', 'bye', 'bye'], ['hello', 'bye', 'bye'], ['hello', 'hello', 'hello'], ['hello', 'bye', 'ciao']] # Make a new list of all of the last elements in the sublists new_list = [] for sublist in my_list: for element in sublist: if sublist.index(element) == -1: new_list.append(element) # MY OUTPUT print(new_list) [] # EXPECTED OUTPUT ['bye', 'bye', 'hello', 'ciao'] # I would then use new_list to find out what the most common last element is: most_common = max(set(new_list), key = new_list.count) # Expected final output print(most_common) # 'bye' </code></pre>
<p>You are looking for this:</p>

<pre><code>for sublist in my_list:
    new_list.append(sublist[-1])
</code></pre>

<p>Negative indices like <code>-1</code> are only a way to tell Python to count from the end of the list; <code>list.index()</code> never <em>returns</em> <code>-1</code>, so your comparison is never true.<br />
Additionally, you are walking over every element of every sublist, which is not necessary, as you have &quot;random access&quot; by fetching the last item directly, as you can see in my code.</p>

<p>There is an even more Pythonic way to do this using list comprehensions:</p>

<pre><code>new_list = [sublist[-1] for sublist in my_list]
</code></pre>

<p>Then you do not need any of those <code>for</code> loops.</p>
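<p>To then find the most common last element, which is the question's end goal, <code>collections.Counter</code> is a tidy alternative to <code>max(set(...), key=...)</code>:</p>

<pre><code>from collections import Counter

my_list = [['hello', 'bye', 'bye'],
           ['hello', 'bye', 'bye'],
           ['hello', 'hello', 'hello'],
           ['hello', 'bye', 'ciao']]

new_list = [sublist[-1] for sublist in my_list]
most_common, count = Counter(new_list).most_common(1)[0]
print(most_common)  # 'bye'
</code></pre>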
python|list|for-loop|data-science
0
1,909,878
17,979,260
glib main loop hangs after Popen
<p>I'm trying to build a script that logs the window title when a different window becomes active. This is what I have so far:</p> <pre><code>import glib import dbus from dbus.mainloop.glib import DBusGMainLoop def notifications(bus, message): if message.get_member() == "event": args = message.get_args_list() if args[0] == "activate": print "Hello world" activewindow = Popen("xdotool getactivewindow getwindowname", stdout=PIPE, stderr=PIPE); print activewindow.communicate() DBusGMainLoop(set_as_default=True) bus = dbus.SessionBus() bus.add_match_string_non_blocking("interface='org.kde.KNotify',eavesdrop='true'") bus.add_message_filter(notifications) mainloop = glib.MainLoop() mainloop.run() </code></pre> <p>However, something is apparently wrong with my Popen call, and glib seems to swallow the error. At least, that is what someone on a IRC channel told me. When I remove the <code>Popen</code> and <code>activewindow.communicate()</code> calls, everything keeps working and I get a message "Hello world!" printed in the shell whenever I switch to a new window. </p> <p>With the <code>Popen</code> and <code>communicate()</code> calls, the script prints a single "Hello world" and hangs after that. </p> <p>Does anyone know:</p> <ul> <li>How I can get a proper error message?</li> <li>What I'm doing wrong in my <code>Popen</code> call?</li> </ul> <p>Thanks in advance!</p>
<p>I can't comment yet, so posting as an answer: you haven't imported the <code>subprocess</code> module, or its members (<code>Popen</code>, <code>PIPE</code>).</p>
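<p>Concretely, a sketch of the fix against the question's (Python 2) code: add the import at the top, and pass the command as an argument list so no shell is needed:</p>

<pre><code>from subprocess import Popen, PIPE

# inside notifications():
activewindow = Popen([&quot;xdotool&quot;, &quot;getactivewindow&quot;, &quot;getwindowname&quot;],
                     stdout=PIPE, stderr=PIPE)
print activewindow.communicate()
</code></pre>

<p>Without the list form (or <code>shell=True</code>), Popen would try to execute a single file literally named <code>&quot;xdotool getactivewindow getwindowname&quot;</code> and raise an error, which is easy to miss inside the dbus/glib callback.</p>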
python|popen|glib|dbus
0
1,909,879
17,796,446
Convert a list to a string and back
<p>I have a virtual machine which reads instructions from tuples nested within a list, like so:</p>

<pre><code>[(0,4738),(0,36),
 (0,6376),(0,0)]
</code></pre>

<p>When storing this kind of machine code program, a text file is easiest, and it has to be written as a string, which is obviously quite hard to convert back.</p>

<p>Is there any module which can read a string into a list / store the list in a readable way?</p>

<p>requirements: </p>

<ul>
<li>Must be human readable in stored form (hence "pickle" is not suitable) </li>
<li>Must be relatively easy to implement</li>
</ul>
<p>Use the <a href="http://docs.python.org/2/library/json.html" rel="noreferrer"><code>json</code> module</a>:</p> <pre><code>string = json.dumps(lst) lst = json.loads(string) </code></pre> <p>Demo:</p> <pre><code>&gt;&gt;&gt; import json &gt;&gt;&gt; lst = [(0,4738),(0,36), ... (0,6376),(0,0)] &gt;&gt;&gt; string = json.dumps(lst) &gt;&gt;&gt; string '[[0, 4738], [0, 36], [0, 6376], [0, 0]]' &gt;&gt;&gt; lst = json.loads(string) &gt;&gt;&gt; lst [[0, 4738], [0, 36], [0, 6376], [0, 0]] </code></pre> <p>An alternative could be to use <a href="http://docs.python.org/2/library/functions.html#repr" rel="noreferrer"><code>repr()</code></a> and <a href="http://docs.python.org/2/library/ast.html#ast.literal_eval" rel="noreferrer"><code>ast.literal_eval()</code></a>; for just lists, tuples and integers that also allows you to round-trip:</p> <pre><code>&gt;&gt;&gt; from ast import literal_eval &gt;&gt;&gt; string = repr(lst) &gt;&gt;&gt; string '[[0, 4738], [0, 36], [0, 6376], [0, 0]]' &gt;&gt;&gt; lst = literal_eval(string) &gt;&gt;&gt; lst [[0, 4738], [0, 36], [0, 6376], [0, 0]] </code></pre> <p>JSON has the added advantage that it is a standard format, with support from tools outside of Python support serialising, parsing and validation. The <code>json</code> library is also a lot faster than the <code>ast.literal_eval()</code> function.</p>
python|python-2.7
43
1,909,880
60,984,624
Negative results in ARIMA model
<p>I'm trying to predict daily revenue to end of month by learning previous month. Due to different behavior of the revenue between workdays and weekends I decided to use time series model (ARIMA) in Python.</p> <p>This is the my Python code that I'm using:</p> <pre><code>import itertools import pandas as pd import numpy as np from datetime import datetime, date, timedelta import statsmodels.api as sm import matplotlib.pyplot as plt plt.style.use('fivethirtyeight') import calendar data_temp = [['01/03/2020',53921.785],['02/03/2020',97357.9595],['03/03/2020',95353.56893],['04/03/2020',93319.6761999999],['05/03/2020',88835.79958],['06/03/2020',98733.0856000001],['07/03/2020',61501.03036],['08/03/2020',74710.00968],['09/03/2020',156613.20712],['10/03/2020',131533.9006],['11/03/2020',108037.3002],['12/03/2020',106729.43067],['13/03/2020',125724.79704],['14/03/2020',79917.6726599999],['15/03/2020',90889.87192],['16/03/2020',160107.93834],['17/03/2020',144987.72243],['18/03/2020',146793.40641],['19/03/2020',145040.69416],['20/03/2020',140467.50472],['21/03/2020',69490.18814],['22/03/2020',82753.85331],['23/03/2020',142765.14863],['24/03/2020',121446.77825],['25/03/2020',107035.29359],['26/03/2020',98118.19468],['27/03/2020',82054.8721099999],['28/03/2020',61249.91097],['29/03/2020',72435.6711699999],['30/03/2020',127725.50818],['31/03/2020',77973.61724]] panel = pd.DataFrame(data_temp, columns = ['Date', 'revenue']) pred_result=pd.DataFrame(columns=['revenue']) panel['Date']=pd.to_datetime(panel['Date']) panel.set_index('Date', inplace=True) ts = panel['revenue'] p = d = q = range(0, 2) pdq = list(itertools.product(p, d, q)) seasonal_pdq = [(x[0], x[1], x[2], 7) for x in list(itertools.product(p, d, q))] aic = float('inf') for es in [True,False]: for param in pdq: for param_seasonal in seasonal_pdq: try: mod = sm.tsa.statespace.SARIMAX(ts, order=param, seasonal_order=param_seasonal, enforce_stationarity=es, enforce_invertibility=False) results = mod.fit() if results.aic&lt;aic: param1=param param2=param_seasonal aic=results.aic es1=es #print('ARIMA{}x{} enforce_stationarity={} - AIC:{}'.format(param, param_seasonal,es,results.aic)) except: continue print('Best model parameters: ARIMA{}x{} - AIC:{} enforce_stationarity={}'.format(param1, param2, aic,es1)) mod = sm.tsa.statespace.SARIMAX(ts, order=param1, seasonal_order=param2, enforce_stationarity=es1, enforce_invertibility=False) results = mod.fit() pred_uc = results.get_forecast(steps=calendar.monthrange(datetime.now().year,datetime.now().month)[1]-datetime.now().day+1) pred_ci = pred_uc.conf_int() ax = ts.plot(label='observed', figsize=(12, 5)) pred_uc.predicted_mean.plot(ax=ax, label='Forecast') ax.fill_between(pred_ci.index, pred_ci.iloc[:, 0], pred_ci.iloc[:, 1], color='k', alpha=.25) ax.set_xlabel('Date') plt.legend() plt.show() predict=pred_uc.predicted_mean.to_frame() predict.reset_index(inplace=True) predict.rename(columns={'index': 'date',0: 'revenue_forcast'}, inplace=True) display(predict) </code></pre> <p>The output looks like: <a href="https://i.stack.imgur.com/ZMc9c.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZMc9c.png" alt="enter image description here" /></a></p> <p>How you can see the prediction results have negative value as result of negative slope.</p> <p>Since I'm trying to predict income, the result cannot be lower than zero, and the negative slope also looks very strange.</p> <p>What's wrong with my method? How can I improve it?</p>
<p>You can't force an ARIMA model to take only positive values. However, a classic 'trick' when you want to predict something that's always positive is to model a function of it that maps positive values onto the whole real line. The <code>log</code> function is the standard example:</p> <pre><code>panel['log_revenue'] = np.log(panel['revenue'])
</code></pre> <p>Now fit the model on the <code>log_revenue</code> column instead.</p> <p>If those predictions take negative values, that's fine, because your actual revenue forecast is <code>np.exp(predict)</code>, which is always positive. </p>
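<p>Concretely, with the variable names from the question (a sketch; only the fitting target and a final back-transform change, everything else stays as in the question):</p> <pre><code># fit on the log-transformed series instead of the raw revenue
panel['log_revenue'] = np.log(panel['revenue'])
ts = panel['log_revenue']

# ... grid-search and fit SARIMAX exactly as before ...

# back-transform the forecast to the original, strictly positive scale
predict['revenue_forcast'] = np.exp(predict['revenue_forcast'])
</code></pre>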
python|time-series|arima
3
1,909,881
66,000,857
Can't access text saved in a Quill form field on Django template
<p>In my django template I want to access the bio prop of an instance of my Creator class. This bio is set up as a QuillField in the Creator model class. When I try to access creator.bio, all that renders to the page is the following:</p> <p>&lt;django_quill.fields.FieldQuill object at 0x1084ce518&gt;</p> <p>What I want is the actual paragraph of formatted text (ie. the bio) that I typed into the form and saved. As of now, the QuillField is only accessible through the form in the Django admin page. The problem has nothing to do with the Quill UI, but rather being able to access the text I wrote into that form field and render it to the page in a readable format.</p> <p>From models.py:</p> <pre><code>from django.db import models from django_quill.fields import QuillField class Creator(models.Model): name = models.CharField(max_length=100) title = models.CharField(max_length=100, default='Creator') bio = QuillField() photo = models.ImageField(upload_to='images/', default='static/assets/icons/user-solid.svg') email = models.EmailField(max_length=100) website = models.URLField(max_length=1000, blank=True) facebook = models.URLField(max_length=1000, blank=True) twitter = models.URLField(max_length=1000, blank=True) instagram = models.URLField(max_length=1000, blank=True) def __str__(self): return self.name </code></pre> <p>In views.py:</p> <pre><code>def about(request): context = {&quot;creators&quot; : Creator.objects.all()} return render(request, 'about.html', context) </code></pre> <p>And, in the template:</p> <pre><code> &lt;section id=&quot;creator-container&quot;&gt; {% for creator in creators %} &lt;div class=&quot;creator-square&quot;&gt; &lt;h4&gt;{{ creator.name }}&lt;/h4&gt; &lt;h5&gt;{{ creator.title }}&lt;/h5&gt; &lt;img src=&quot;../../media/{{ creator.photo }}&quot; alt=&quot;{{actor.name}} headshot&quot; id=&quot;creator-photo&quot;&gt; &lt;p class=&quot;creator-bio&quot;&gt;{{ creator.bio }}&lt;/p&gt; &lt;/div&gt; {% endfor %} &lt;/section&gt; </code></pre> <p>If I print the creator.bio object to the console, this is what I get:</p> <pre><code>{&quot;delta&quot;:&quot;{\&quot;ops\&quot;:[{\&quot;attributes\&quot;:{\&quot;background\&quot;:\&quot;transparent\&quot;,\&quot;color\&quot;:\&quot;#000000\&quot;,\&quot;bold\&quot;:true},\&quot;insert\&quot;:\&quot;Sharon Yablon\&quot;},{\&quot;attributes\&quot;:{\&quot;background\&quot;:\&quot;transparent\&quot;,\&quot;color\&quot;:\&quot;#000000\&quot;},\&quot;insert\&quot;:\&quot; is an award-winning playwright who has been writing and directing her plays in Los Angeles for many years. Her work has appeared in a variety of sites, and on stage with The Echo Theater Company, Padua Playwrights, Zombie Joe's Underground Theater, The Lost Studio, Theater Unleashed, Bootleg, Theater of N.O.T.E., and others. 
Her short stories, \\\&quot;Perfidia\\\&quot; and \\\&quot;The Caller,\\\&quot; can be found in journals, and her published plays are in \&quot;},{\&quot;attributes\&quot;:{\&quot;background\&quot;:\&quot;transparent\&quot;,\&quot;color\&quot;:\&quot;#000000\&quot;,\&quot;italic\&quot;:true},\&quot;insert\&quot;:\&quot;Desert Road's One Acts of Note\&quot;},{\&quot;attributes\&quot;:{\&quot;background\&quot;:\&quot;transparent\&quot;,\&quot;color\&quot;:\&quot;#000000\&quot;},\&quot;insert\&quot;:\&quot;, \&quot;},{\&quot;attributes\&quot;:{\&quot;background\&quot;:\&quot;transparent\&quot;,\&quot;color\&quot;:\&quot;#000000\&quot;,\&quot;italic\&quot;:true},\&quot;insert\&quot;:\&quot;Fever Dreams\&quot;},{\&quot;attributes\&quot;:{\&quot;background\&quot;:\&quot;transparent\&quot;,\&quot;color\&quot;:\&quot;#000000\&quot;},\&quot;insert\&quot;:\&quot;, \&quot;},{\&quot;attributes\&quot;:{\&quot;background\&quot;:\&quot;transparent\&quot;,\&quot;color\&quot;:\&quot;#000000\&quot;,\&quot;italic\&quot;:true},\&quot;insert\&quot;:\&quot;Los Angeles Under the Influence\&quot;},{\&quot;attributes\&quot;:{\&quot;background\&quot;:\&quot;transparent\&quot;,\&quot;color\&quot;:\&quot;#000000\&quot;},\&quot;insert\&quot;:\&quot;, \&quot;},{\&quot;attributes\&quot;:{\&quot;background\&quot;:\&quot;transparent\&quot;,\&quot;color\&quot;:\&quot;#000000\&quot;,\&quot;italic\&quot;:true},\&quot;insert\&quot;:\&quot;LA Writers and Their Works\&quot;},{\&quot;attributes\&quot;:{\&quot;background\&quot;:\&quot;transparent\&quot;,\&quot;color\&quot;:\&quot;#000000\&quot;},\&quot;insert\&quot;:\&quot;, and others. She was co-editor of an anthology of plays from the LA underground scene titled \&quot;},{\&quot;attributes\&quot;:{\&quot;background\&quot;:\&quot;transparent\&quot;,\&quot;color\&quot;:\&quot;#000000\&quot;,\&quot;italic\&quot;:true},\&quot;insert\&quot;:\&quot;I Might Be The Person You Are Talking To, \&quot;},{\&quot;attributes\&quot;:{\&quot;background\&quot;:\&quot;transparent\&quot;,\&quot;color\&quot;:\&quot;#000000\&quot;},\&quot;insert\&quot;:\&quot;and most recently, her play \&quot;},{\&quot;attributes\&quot;:{\&quot;background\&quot;:\&quot;transparent\&quot;,\&quot;color\&quot;:\&quot;#000000\&quot;,\&quot;italic\&quot;:true},\&quot;insert\&quot;:\&quot;Hello Stranger\&quot;},{\&quot;attributes\&quot;:{\&quot;background\&quot;:\&quot;transparent\&quot;,\&quot;color\&quot;:\&quot;#000000\&quot;},\&quot;insert\&quot;:\&quot; (Theater of N.O.T.E., 2017) was published by Original Works. She is a frequent writer and sometime co-curator with Susan Hayden's \&quot;},{\&quot;attributes\&quot;:{\&quot;background\&quot;:\&quot;transparent\&quot;,\&quot;color\&quot;:\&quot;#000000\&quot;,\&quot;italic\&quot;:true},\&quot;insert\&quot;:\&quot;Library Girl\&quot;},{\&quot;attributes\&quot;:{\&quot;background\&quot;:\&quot;transparent\&quot;,\&quot;color\&quot;:\&quot;#000000\&quot;},\&quot;insert\&quot;:\&quot;, a \\\&quot;Best of the Westside\\\&quot; monthly literary series centered around a music theme. Her one-acts inspired by crimes in LA history have appeared in \&quot;},{\&quot;attributes\&quot;:{\&quot;background\&quot;:\&quot;transparent\&quot;,\&quot;color\&quot;:\&quot;#000000\&quot;,\&quot;italic\&quot;:true},\&quot;insert\&quot;:\&quot;LA True Crime’s\&quot;},{\&quot;attributes\&quot;:{\&quot;background\&quot;:\&quot;transparent\&quot;,\&quot;color\&quot;:\&quot;#000000\&quot;},\&quot;insert\&quot;:\&quot; quarterly evenings since its inception in 2015. 
\&quot;},{\&quot;insert\&quot;:\&quot;\\n\&quot;}]}&quot;,&quot;html&quot;:&quot;&lt;p&gt;&lt;strong style=\&quot;background-color: transparent; color: rgb(0, 0, 0);\&quot;&gt;Sharon Yablon&lt;/strong&gt;&lt;span style=\&quot;background-color: transparent; color: rgb(0, 0, 0);\&quot;&gt; is an award-winning playwright who has been writing and directing her plays in Los Angeles for many years. Her work has appeared in a variety of sites, and on stage with The Echo Theater Company, Padua Playwrights, Zombie Joe's Underground Theater, The Lost Studio, Theater Unleashed, Bootleg, Theater of N.O.T.E., and others. Her short stories, \&quot;Perfidia\&quot; and \&quot;The Caller,\&quot; can be found in journals, and her published plays are in &lt;/span&gt;&lt;em style=\&quot;background-color: transparent; color: rgb(0, 0, 0);\&quot;&gt;Desert Road's One Acts of Note&lt;/em&gt;&lt;span style=\&quot;background-color: transparent; color: rgb(0, 0, 0);\&quot;&gt;, &lt;/span&gt;&lt;em style=\&quot;background-color: transparent; color: rgb(0, 0, 0);\&quot;&gt;Fever Dreams&lt;/em&gt;&lt;span style=\&quot;background-color: transparent; color: rgb(0, 0, 0);\&quot;&gt;, &lt;/span&gt;&lt;em style=\&quot;background-color: transparent; color: rgb(0, 0, 0);\&quot;&gt;Los Angeles Under the Influence&lt;/em&gt;&lt;span style=\&quot;background-color: transparent; color: rgb(0, 0, 0);\&quot;&gt;, &lt;/span&gt;&lt;em style=\&quot;background-color: transparent; color: rgb(0, 0, 0);\&quot;&gt;LA Writers and Their Works&lt;/em&gt;&lt;span style=\&quot;background-color: transparent; color: rgb(0, 0, 0);\&quot;&gt;, and others. She was co-editor of an anthology of plays from the LA underground scene titled &lt;/span&gt;&lt;em style=\&quot;background-color: transparent; color: rgb(0, 0, 0);\&quot;&gt;I Might Be The Person You Are Talking To, &lt;/em&gt;&lt;span style=\&quot;background-color: transparent; color: rgb(0, 0, 0);\&quot;&gt;and most recently, her play &lt;/span&gt;&lt;em style=\&quot;background-color: transparent; color: rgb(0, 0, 0);\&quot;&gt;Hello Stranger&lt;/em&gt;&lt;span style=\&quot;background-color: transparent; color: rgb(0, 0, 0);\&quot;&gt; (Theater of N.O.T.E., 2017) was published by Original Works. She is a frequent writer and sometime co-curator with Susan Hayden's &lt;/span&gt;&lt;em style=\&quot;background-color: transparent; color: rgb(0, 0, 0);\&quot;&gt;Library Girl&lt;/em&gt;&lt;span style=\&quot;background-color: transparent; color: rgb(0, 0, 0);\&quot;&gt;, a \&quot;Best of the Westside\&quot; monthly literary series centered around a music theme. Her one-acts inspired by crimes in LA history have appeared in &lt;/span&gt;&lt;em style=\&quot;background-color: transparent; color: rgb(0, 0, 0);\&quot;&gt;LA True Crime’s&lt;/em&gt;&lt;span style=\&quot;background-color: transparent; color: rgb(0, 0, 0);\&quot;&gt; quarterly evenings since its inception in 2015. &lt;/span&gt;&lt;/p&gt;&quot;} </code></pre> <p>Does anyone know how to access this so that it renders correctly, as HTML text?</p>
<p>Based on <a href="https://github.com/LeeHanYeong/django-quill-editor/issues/12" rel="nofollow noreferrer">https://github.com/LeeHanYeong/django-quill-editor/issues/12</a> it sounds like you need to use:</p> <pre><code>{{ creator.bio.html|safe }} </code></pre> <p>(though be careful using <a href="https://docs.djangoproject.com/en/3.1/ref/templates/builtins/#safe" rel="nofollow noreferrer"><code>safe</code></a> if you aren't certain the HTML is not malicious!)</p>
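<p>In the template from the question, that means changing the bio line inside the loop to:</p> <pre><code>&lt;p class=&quot;creator-bio&quot;&gt;{{ creator.bio.html|safe }}&lt;/p&gt;
</code></pre> <p>The object you printed stores both representations, the raw Quill <code>delta</code> and the rendered <code>html</code>; <code>.html</code> picks out the latter.</p>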
python|django|quill
4
1,909,882
66,247,522
Binary Search Implementation Using Slicing
<p>Regarding the binary search implementation given below:</p> <pre><code>def bin_search(arr, key):
    n = len(arr)
    if n &lt; 2:
        return (0 if (n == 1 and arr[0] == key) else None)
    m = int(0.5 * n)
    if arr[m] &gt; key:
        return bin_search(arr[:m], key)
    result = bin_search(arr[m:], key)
    return (result + m if result != None else None)
</code></pre> <p>Time complexity is affected because we take a slice of the array on every call, and space complexity too, since list slicing in Python creates a new list object. To improve the implementation, I am thinking of introducing lower and upper bound variables, just as in the original formulation of the algorithm, but that would change the implementation above completely.</p> <p>Can you please let me know how to modify the above implementation so that its time and space complexity improve? And is my understanding of its complexity correct?</p>
<p>Here is an iterative solution with time complexity of <code>O(log(n))</code> and space complexity of <code>O(1)</code>. Instead of slicing the array you just move two pointers, i.e. the <code>left</code>/<code>right</code> bounds.</p> <pre><code>def binary_search(array, target):
    return binary_search_helper(array, target, 0, len(array) - 1)

def binary_search_helper(array, target, left, right):
    while left &lt;= right:
        middle = (left + right) // 2
        match = array[middle]
        if target == match:
            return middle
        elif target &lt; match:
            right = middle - 1
        else:
            left = middle + 1
    return -1
</code></pre> <p><strong>Recursive solution:</strong> I don't see a way to improve complexity with slight changes, since you need to work with positions instead of the array itself. That will affect your base case and function calls. Here is my attempt to reduce space complexity from <code>O(n)</code> to <code>O(log(n))</code>, the recursion depth:</p> <pre><code>def bin_search(arr, key, left=0, right=len(arr) - 1):
    # The base case must change since we modify only pointers
    if left &gt; right:
        return None
    # Since we are not modifying the array, working with its length will not work
    m = int(0.5 * (left + right))
    if arr[m] == key:
        return m
    elif arr[m] &gt; key:
        return bin_search(arr, key, left, m - 1)
    else:
        return bin_search(arr, key, m + 1, right)
</code></pre> <p>PS: You need to create <code>arr</code> beforehand or create another caller function, since the default <code>right=len(arr) - 1</code> in the function definition is evaluated when the function is defined. I would recommend using a caller function like this:</p> <pre><code>def binary_search_caller(arr, key):
    return bin_search(arr, key, 0, len(arr) - 1)
</code></pre> <p>And change the function definition to:</p> <pre><code>def bin_search(arr, key, left, right):
    ...
</code></pre>
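<p>An alternative to the caller function is the usual sentinel-default idiom, which sidesteps the definition-time <code>len(arr)</code> problem entirely (a sketch based on the recursive version above):</p> <pre><code>def bin_search(arr, key, left=0, right=None):
    if right is None:
        right = len(arr) - 1   # evaluated at call time, so no caller helper needed
    if left &gt; right:
        return None
    m = (left + right) // 2
    if arr[m] == key:
        return m
    elif arr[m] &gt; key:
        return bin_search(arr, key, left, m - 1)
    else:
        return bin_search(arr, key, m + 1, right)
</code></pre>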
python|algorithm|time-complexity|binary-search
2
1,909,883
59,349,527
Cholesky implementation in python - Solve Ax=b
<p>I'm using Cholesky decomposition for <code>Ax=b</code> to find <code>x</code>, by doing <code>L*LT=A</code>, then <code>y=L*b</code> and in the end <code>x=LT*b</code>. When I check, though, I don't seem to get the same results as with the classic <code>Ax=b</code>. Here's my code:</p> <pre><code>import numpy as np
import scipy.linalg as sla

myL=np.linalg.cholesky(A)

#check_x = np.dot(A, b)
#check_x = np.dot(A,b)
check_x = sla.solve(A, b)

#check if the composition was done right
myLT=myL.T.conj() #transpose matrix
Ac=np.dot(myL,myLT) #should give the original matrix A

#y=np.dot(myL,b)
y = sla.solve_triangular(myL, b)
#x=np.dot(myL.T.conj(),y)
x = sla.solve_triangular(myLT, b)
</code></pre>
<p>I was sleepless and tired; I got the last line wrong. It actually is</p> <pre><code>x = np.linalg.solve(myLT, y)
</code></pre>
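<p>For completeness, the triangular solves can also be kept, but then the first one needs <code>lower=True</code> (SciPy's <code>solve_triangular</code> assumes an upper-triangular matrix by default), and the second one must be fed <code>y</code>, not <code>b</code> (a sketch):</p> <pre><code># A = L * L^T: solve L y = b, then L^T x = y
y = sla.solve_triangular(myL, b, lower=True)    # L is lower triangular
x = sla.solve_triangular(myLT, y, lower=False)  # L^T is upper triangular
</code></pre>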
python|numpy|linear-algebra
2
1,909,884
59,090,846
how to sort strings in python in a numerical fashion
<p>This is the output, but I want to arrange these file names in numerical order, like (0,1,2,...,1000), and not lexicographically by the first characters (0,1,10,100):</p> <pre><code>D:\deep&gt;python merge.py
./data/00.jpg
./data/01.jpg
./data/010.jpg
./data/0100.jpg
./data/0101.jpg
./data/0102.jpg
./data/0103.jpg
./data/0104.jpg
./data/0105.jpg
./data/0106.jpg
./data/0107.jpg
./data/0108.jpg
./data/0109.jpg
./data/011.jpg
./data/0110.jpg
./data/0111.jpg
./data/0112.jpg
./data/0113.jpg
./data/0114.jpg
./data/0115.jpg
./data/0116.jpg
./data/0117.jpg
./data/0118.jpg
./data/0119.jpg
</code></pre> <p>The code I used is below. I want to sort the filenames in numerical order; I tried using the sort function with the key as int, but it didn't work.</p> <pre><code>import cv2
import os
import numpy as np

image_folder = 'd:/deep/data'
video_name = 'video.avi'

images = [img for img in os.listdir(image_folder) if img.endswith(".jpg")]
print(images)
frame = cv2.imread(os.path.join(image_folder, images[0]))
height, width, layers = frame.shape
fourcc = cv2.VideoWriter_fourcc(*'XVID')
video = cv2.VideoWriter(video_name, fourcc,15.0, (width,height))

for image in images:
    video.write(cv2.imread(os.path.join(image_folder, image)))

cv2.destroyAllWindows()
video.release()
</code></pre>
<p>Let's say that your paths are in a list <code>paths</code>. It could also be a generator, of course.</p> <pre><code>paths = ["./data/00.jpg", "./data/100.jpg", "./data/01.jpg"] </code></pre> <p>Then</p> <pre><code>sorted(paths, key=lambda p: int(p[7:-4])) </code></pre> <p>returns exactly the desired output:</p> <pre><code>['./data/00.jpg', './data/01.jpg', './data/100.jpg'] </code></pre>
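<p>If the <code>./data/</code> prefix can vary, a key that extracts the digits is more robust than the fixed slice <code>p[7:-4]</code> (a sketch using the standard <code>re</code> module, assuming each filename contains one run of digits):</p> <pre><code>import re

def numeric_key(p):
    # pull the digits out of e.g. "./data/0100.jpg" -&gt; 100
    return int(re.search(r"\d+", p).group())

sorted(paths, key=numeric_key)
</code></pre>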
python|python-3.x|python-2.7
1
1,909,885
62,455,511
Error Getting Managed Identity Access Token from Azure Function
<p>I'm having an issue retrieving an Azure Managed Identity access token from my Function App. The function gets a token then accesses a Mysql database using that token as the password.</p> <p>I am getting this response from the function:</p> <p><code>9103 (HY000): An error occurred while validating the access token. Please acquire a new token and retry.</code></p> <p>Code:</p> <pre><code>import logging import mysql.connector import requests import azure.functions as func def main(req: func.HttpRequest) -&gt; func.HttpResponse: def get_access_token(): URL = &quot;http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&amp;resource=https%3A%2F%2Fossrdbms-aad.database.windows.net&amp;client_id=&lt;client_id&gt;&quot; headers = {&quot;Metadata&quot;:&quot;true&quot;} try: req = requests.get(URL, headers=headers) except Exception as e: print(str(e)) return str(e) else: password = req.json()[&quot;access_token&quot;] return password def get_mysql_connection(password): &quot;&quot;&quot; Get a Mysql Connection. &quot;&quot;&quot; try: con = mysql.connector.connect( host='&lt;host&gt;.mysql.database.azure.com', user='&lt;user&gt;@&lt;db&gt;', password=password, database = 'materials_db', auth_plugin='mysql_clear_password' ) except Exception as e: print(str(e)) return str(e) else: return &quot;Connected to DB!&quot; password = get_access_token() return func.HttpResponse(get_mysql_connection(password)) </code></pre> <p>Running a modified version of this code on a VM with my managed identity works. It seems that the Function App is not allowed to get an access token. Any help would be appreciated.</p> <p>Note: I have previously logged in as AzureAD Manager to the DB and created this user with all privileges to this DB.</p> <p><strong>Edit</strong>: No longer calling endpoint for VMs.</p> <pre><code>def get_access_token(): identity_endpoint = os.environ[&quot;IDENTITY_ENDPOINT&quot;] # Env var provided by Azure. Local to service doing the requesting. identity_header = os.environ[&quot;IDENTITY_HEADER&quot;] # Env var provided by Azure. Local to service doing the requesting. api_version = &quot;2019-08-01&quot; # &quot;2018-02-01&quot; #&quot;2019-03-01&quot; #&quot;2019-08-01&quot; CLIENT_ID = &quot;&lt;client_id&gt;&quot; resource_requested = &quot;https%3A%2F%2Fossrdbms-aad.database.windows.net&quot; # resource_requested = &quot;https://ossrdbms-aad.database.windows.net&quot; URL = f&quot;{identity_endpoint}?api-version={api_version}&amp;resource={resource_requested}&amp;client_id={CLIENT_ID}&quot; headers = {&quot;X-IDENTITY-HEADER&quot;:identity_header} try: req = requests.get(URL, headers=headers) except Exception as e: print(str(e)) return str(e) else: try: password = req.json()[&quot;access_token&quot;] except: password = str(req.text) return password </code></pre> <p>But now I am getting this Error:</p> <pre><code>{&quot;error&quot;:{&quot;code&quot;:&quot;UnsupportedApiVersion&quot;,&quot;message&quot;:&quot;The HTTP resource that matches the request URI 'http://localhost:8081/msi/token?api-version=2019-08-01&amp;resource=https%3A%2F%2Fossrdbms-aad.database.windows.net&amp;client_id=&lt;client_idxxxxx&gt;' does not support the API version '2019-08-01'.&quot;,&quot;innerError&quot;:null}} </code></pre> <p>Upon inspection this seems to be a general error. This error message is propagated even if it's not the underlying issue. Noted several times in Github.</p> <p>Is my endpoint correct now?</p>
<p>This problem is caused by requesting the access token from the wrong endpoint. The endpoint <code>http://169.254.169.254/metadata/identity.....</code> can only be used inside an Azure VM; in an Azure Function we cannot use it.</p> <p>In an Azure Function, we need to get the <code>IDENTITY_ENDPOINT</code> from the environment.</p> <pre><code>identity_endpoint = os.environ[&quot;IDENTITY_ENDPOINT&quot;]
</code></pre> <p>The endpoint looks like:</p> <pre><code>http://127.0.0.1:xxxxx/MSI/token/
</code></pre> <p>You can refer to this <a href="https://docs.microsoft.com/en-us/azure/app-service/overview-managed-identity?context=azure%2Factive-directory%2Fmanaged-identities-azure-resources%2Fcontext%2Fmsi-context&amp;tabs=python#obtain-tokens-for-azure-resources" rel="nofollow noreferrer">tutorial</a> about it; the tutorial also contains a Python code sample. <a href="https://i.stack.imgur.com/nJRTI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nJRTI.png" alt="enter image description here" /></a></p> <p>In my function code, I also add the client id of the managed identity I created to the <code>token_auth_uri</code>, but I'm not sure whether the <code>client_id</code> is necessary here (in my case, I use a user-assigned identity, not a system-assigned identity).</p> <pre><code>token_auth_uri = f&quot;{identity_endpoint}?resource={resource_uri}&amp;api-version=2019-08-01&amp;client_id={client_id}&quot;
</code></pre> <p><strong>Update:</strong></p> <pre><code>#r &quot;Newtonsoft.Json&quot;

using System.Net;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Primitives;
using Newtonsoft.Json;

public static async Task&lt;IActionResult&gt; Run(HttpRequest req, ILogger log)
{
    string resource=&quot;https://ossrdbms-aad.database.windows.net&quot;;
    string clientId=&quot;xxxxxxxx&quot;;
    log.LogInformation(&quot;C# HTTP trigger function processed a request.&quot;);

    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(String.Format(&quot;{0}/?resource={1}&amp;api-version=2019-08-01&amp;client_id={2}&quot;, Environment.GetEnvironmentVariable(&quot;IDENTITY_ENDPOINT&quot;), resource,clientId));
    request.Headers[&quot;X-IDENTITY-HEADER&quot;] = Environment.GetEnvironmentVariable(&quot;IDENTITY_HEADER&quot;);
    request.Method = &quot;GET&quot;;
    HttpWebResponse response = (HttpWebResponse)request.GetResponse();
    StreamReader streamResponse = new StreamReader(response.GetResponseStream());
    string stringResponse = streamResponse.ReadToEnd();
    log.LogInformation(&quot;test:&quot;+stringResponse);

    string name = req.Query[&quot;name&quot;];
    string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
    dynamic data = JsonConvert.DeserializeObject(requestBody);
    name = name ?? data?.name;

    return name != null
        ? (ActionResult)new OkObjectResult($&quot;Hello, {name}&quot;)
        : new BadRequestObjectResult(&quot;Please pass a name on the query string or in the request body&quot;);
}
</code></pre>
python|azure|azure-functions|azure-managed-identity
3
1,909,886
62,180,630
Python/Flask - Comparing user input to database value
<p>Im trying some learning-by-doing here with Python and Flask and have run into something I don't quite understand - hoping someone out there might be able to help or explain this behavior.</p> <p>I'm not building anything real, just trying to learn and understand what is going on here.</p> <p>I'm playing with registration / login form in Python/Flask and am struggling with the login part.</p> <p>I have built a registration from which writes name, email and password (unhashed, that comes later) to a simple table 'users' with a | ID | name | email | password | structure, all values being varchar except ID which is INT and auto increments.</p> <p>My imports are</p> <pre><code>import os from flask import Flask, session, request from flask_session import Session from sqlalchemy import create_engine from sqlalchemy.orm import scoped_session, sessionmaker from flask import Flask, render_template app = Flask(__name__) # Set up database engine = create_engine(os.getenv("DATABASE_URL")) db = scoped_session(sessionmaker(bind=engine)) </code></pre> <p>I have a html login form that looks as follows</p> <pre><code>&lt;form action="{{ url_for('login') }}" method="post"&gt; &lt;div class="form-group"&gt; &lt;label for="email"&gt;E-mail:&lt;/label&gt; &lt;input type="email" name="email" class="form-control" placeholder="Enter email"&gt; &lt;/div&gt; &lt;div class="form-group"&gt; &lt;label for="pwd"&gt;Password:&lt;/label&gt; &lt;input type="password" name="pwd" class="form-control" placeholder="Enter password"&gt; &lt;/div&gt; &lt;button type="submit" class="btn btn-primary"&gt;Login&lt;/button&gt; &lt;/form&gt; &lt;p&gt; {{ logintry }} </code></pre> <p>My Flask application route for 'login' is as follows</p> <pre><code>@app.route("/login", methods=['GET', 'POST']) def login(): if request.method == 'POST': uname = request.form.get("email") passwd = request.form.get("pwd") pwCheck = db.execute("SELECT password FROM users WHERE email = :uname", {"uname": uname}).fetchone() if pwCheck == passwd: return render_template("authenticated.html") else: return render_template("login.html", logintry="Login Failure") else: return render_template("login.html") </code></pre> <p>In this sample I am trying to log in with [email protected] email and password is 1234</p> <p>The problem I have is that the database value seems to be returned as </p> <pre><code>('1234',) </code></pre> <p>Whereas the user input is simply presented as</p> <pre><code>1234 </code></pre> <p>And therefore they are not equal, and the login fails.</p> <p>Can anyone help guide me here, or maybe explain what is going on ?</p> <p>Thanks in advance.</p>
<p>There are two main things to understand here:</p> <ol> <li>What the database is returning</li> <li>What your form is returning</li> </ol> <p>To get the login to work, you must make sure that the input your form gives you and the result your database gives you can actually be compared.</p> <p>In your question you said that the database is returning ('1234',). This is a tuple in Python, and it can be indexed. Indexing your tuple, like so</p> <pre><code>pwCheck[0]
</code></pre> <p>would return '1234'. </p> <p>So instead of comparing the raw result of your database query, recognise that the database returns data that needs a little more processing before it can be compared against the form input. </p> <p>You could add an extra line which creates a new variable db_pwd, like so</p> <pre><code>db_pwd = pwCheck[0]
</code></pre> <p>and then check whether db_pwd == passwd.</p>
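<p>Applied to the route from the question, that looks like this (a sketch; it also guards against a non-existent user, since <code>fetchone()</code> returns <code>None</code> when no row matches):</p> <pre><code>pwCheck = db.execute("SELECT password FROM users WHERE email = :uname",
                     {"uname": uname}).fetchone()
if pwCheck is not None and pwCheck[0] == passwd:
    return render_template("authenticated.html")
return render_template("login.html", logintry="Login Failure")
</code></pre>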
python|sql|flask
1
1,909,887
35,466,473
Improve Regex to catch complete emails from Google search?
<p>In order to practice and help my sister get emails from doctors for her baby, I have designed this email harvester. It makes a search, cleans the urls given, adds them to a dictionary and parse them for emails in two different ways. </p> <p>The code has been taken from different places, so if you correct me, please explain clearly your improvement, as I am working at the limit of my knowledge already.</p> <p>The question is how to get emails better (and improve code, if possible). I'll post the code and the exact output below:</p> <p>CODE of my program:</p> <pre><code>import requests, re, webbrowser, bs4 from selenium import webdriver from bs4 import BeautifulSoup import time, random, webbrowser import urllib.request def google_this(): #Googles and gets the first few links search_terms = ['Fiat','Lambrusco'] added_terms = 'email contact? @' #This searches for certain keywords in Google and parses results with BS for el in search_terms: webpage = 'http://google.com/search?q=' + str(el) + str(added_terms) print('Searching for the terms...', el,added_terms) headers = {'User-agent':'Mozilla/5.0'} res = requests.get(webpage, headers=headers) #res.raise_for_status() statusCode = res.status_code if statusCode == 200: soup = bs4.BeautifulSoup(res.text,'lxml') serp_res_rawlink = soup.select('.r a') dicti = [] #This gets the href links for link in serp_res_rawlink: url = link.get('href') if 'pdf' not in url: dicti.append(url) dicti_url = [] #This cleans the "url?q=" from link for el in dicti: if '/url?q=' in el: result = (el.strip('/url?q=')) dicti_url.append(result) #print(dicti_url) dicti_pretty_links = [] #This cleans the gibberish at end of url for el in dicti_url[0:4]: pretty_url = el.partition('&amp;')[0] dicti_pretty_links.append(pretty_url) print(dicti_pretty_links) for el in dicti_pretty_links: #This converts page in BS soup # browser = webdriver.Firefox() # browser.get(el) # print('I have been in the element below and closed the window') # print(el) # time.sleep(1) # browser.close() webpage = (el) headers = {'User-agent':'Mozilla/5.0'} res = requests.get(webpage, headers=headers) #res.raise_for_status() statusCode = res.status_code if statusCode == 200: soup = bs4.BeautifulSoup(res.text,'lxml') #This is the first way to search for an email in soup emailRegex = re.compile(r'([a-zA-Z0-9_.+]+@+[a-zA-Z0-9_.+])', re.VERBOSE) mo = emailRegex.findall(res.text) #mo = emailRegex.findall(soup.prettify()) print('THIS BELOW IS REGEX') print(mo) #This is the second way to search for an email in soup: mailtos = soup.select('a[href^=mailto]') for el in mailtos: print('THIS BELOW IS MAILTOS') print(el.text) time.sleep(random.uniform(0.5,1)) google_this() </code></pre> <p>And here is the OUTPUT when this very same code above. As you can see, some emails seem to be found, but at cut just after the "@" symbol:</p> <pre><code>C:\Users\SK\AppData\Local\Programs\Python\Python35-32\python.exe C:/Users/SK/PycharmProjects/untitled/another_temperase.py Searching for the terms... Fiat email contact? 
@ ['http://www.fcagroup.com/en-US/footer/Pages/contacts.aspx', 'http://www.fiat.co.uk/header-contacts', 'http://www.fiatusa.com/webselfservice/fiat/', 'https://twitter.com/nic_fincher81/status/672505531689394176'] THIS BELOW IS REGEX ['investor.relations@f', 'investor.relations@f', 'sustainability@f', 'sustainability@f', 'mediarelations@f', 'mediarelations@f'] THIS BELOW IS MAILTOS [email protected] THIS BELOW IS MAILTOS [email protected] THIS BELOW IS MAILTOS [email protected] THIS BELOW IS REGEX [] THIS BELOW IS REGEX [] THIS BELOW IS REGEX ['nic_fincher81@y', 'nic_fincher81@y', 'nic_fincher81@y', 'nic_fincher81@y', 'nic_fincher81@y', 'nic_fincher81@y'] Searching for the terms... Lambrusco email contact? @ ['http://www.labattagliola.it/%3Flang%3Den'] Process finished with exit code 0 </code></pre>
<p>I would recommend a more restrictive version that still catches all of the email:</p> <pre><code>([a-zA-Z0-9_.+]+@[a-zA-Z0-9_.+]+)
</code></pre> <p>The reason nothing past the first character after the <code>@</code> is captured is that the second character class in your regex is missing a <code>+</code>:</p> <pre><code>([a-zA-Z0-9_.+]+@+[a-zA-Z0-9_.+]+)
</code></pre> <p>Originally, the part <code>[a-zA-Z0-9_.+]</code> said to catch exactly one of the following characters: <code>a-z</code>, <code>A-Z</code>, <code>0-9</code>, <code>.</code>, <code>_</code>, <code>+</code>.</p> <p>I would also be careful about <code>@+</code>, which says to catch 1 or more "@" symbols.</p> <p>So a potentially valid email could look like this: </p> <blockquote> <p>..................@@@@@@@@@@@@@@@@@@@@@@@@.................</p> </blockquote>
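<p>In the harvester, that means one line changes (sketch):</p> <pre><code>emailRegex = re.compile(r'([a-zA-Z0-9_.+]+@[a-zA-Z0-9_.+]+)', re.VERBOSE)
mo = emailRegex.findall(res.text)
</code></pre>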
python|regex|email|search|beautifulsoup
1
1,909,888
58,876,179
try to print out a matrix NameError: name 'Qb_matrix' is not defined
<p>I tried to print out a matrix using the code below; however, it raised a NameError. I wonder where exactly I should define the matrix. Can Python connect the abbreviation Qb in Qb_matrix to Q_bar?</p> <pre><code>import numpy as np

Q11 = 14.583
Q12 = 1.4583
Q23 = 0
Q22 = 3.646
Q33 = 4.2
theta = 60

def Q_bar(Q11, Q12, Q22, Q33, theta):
    n = np.sin(theta*np.pi/180)
    m = np.cos(theta*np.pi/180)
    Qb_11 = Q11*m**4 + 2*(Q12 + 2*Q33)*n**2*m**2 + Q22*n**4
    Qb_22 = Q11*n**4 + 2*(Q12 + 2*Q33)*n**2*m**2 + Q22*m**4
    Qb_33 = (Q11 + Q22 - 2*Q12 - 2*Q33)*n**2*m**2 + Q33*(m**4 + n**4)
    Qb_12 = (Q11 + Q22 - 4*Q33)*n**2*m**2 + Q12*(m**4 + n**4)
    Qb_13 = (Q11 - Q12 - 2*Q33)*n*m**3 + (Q12 - Q22 + 2*Q33)*n**3*m
    Qb_23 = (Q11 - Q12 - 2*Q33)*n**3*m + (Q12 - Q22 + 2*Q33)*n*m**3
    Qb_matrix = np.array([[Qb_11, Qb_12, Qb_13],[Qb_12, Qb_22, Qb_23],[Qb_13, Qb_23, Qb_33]])
    return(Qb_matrix)

print(Qb_matrix)
</code></pre>
<p>You never call your function, so the code inside it is never executed. Furthermore, even if you did call the function, the variable Qb_matrix created inside it only exists in the function's scope; when you return it, you need to store the returned value.</p> <pre><code>import numpy as np

Q11 = 14.583
Q12 = 1.4583
Q23 = 0
Q22 = 3.646
Q33 = 4.2
theta = 60

def Q_bar(Q11, Q12, Q22, Q33, theta):
    n = np.sin(theta*np.pi/180)
    m = np.cos(theta*np.pi/180)
    Qb_11 = Q11*m**4 + 2*(Q12 + 2*Q33)*n**2*m**2 + Q22*n**4
    Qb_22 = Q11*n**4 + 2*(Q12 + 2*Q33)*n**2*m**2 + Q22*m**4
    Qb_33 = (Q11 + Q22 - 2*Q12 - 2*Q33)*n**2*m**2 + Q33*(m**4 + n**4)
    Qb_12 = (Q11 + Q22 - 4*Q33)*n**2*m**2 + Q12*(m**4 + n**4)
    Qb_13 = (Q11 - Q12 - 2*Q33)*n*m**3 + (Q12 - Q22 + 2*Q33)*n**3*m
    Qb_23 = (Q11 - Q12 - 2*Q33)*n**3*m + (Q12 - Q22 + 2*Q33)*n*m**3
    Qb_matrix = np.array([[Qb_11, Qb_12, Qb_13],[Qb_12, Qb_22, Qb_23],[Qb_13, Qb_23, Qb_33]])
    return(Qb_matrix)

my_qb_matrix = Q_bar(Q11, Q12, Q22, Q33, theta)
print(my_qb_matrix)
</code></pre> <p><strong>OUTPUT</strong></p> <pre><code>[[ 6.659175    1.179375    2.52896738]
 [ 1.179375   12.127675    2.20689254]
 [ 2.52896738  2.20689254  3.921075  ]]
</code></pre>
arrays|python-3.x|matrix|nameerror
0
1,909,889
31,574,105
How to run a kivy script in the kivy VM?
<p>So I am looking into using Kivy for Android development. Defeating the jedi etc.</p> <p>But I have hit a roadblock! I installed the Kivy VM image in VirtualBox, but when I try to run the test script:</p> <pre><code># /usr/bin/kivy __version__ = 1.0 from kivy.app import App from kivy.uix.button import Button class Hello(App): def build(self): btn = Button(text='Hello World') return btn Hello().run() </code></pre> <p>Using: python main.py</p> <p>I get:</p> <pre><code>Traceback (most recent call last): File "main.py", line 3, in &lt;module&gt; from kivy.app import App ImportError: No module named kivy.app </code></pre>
<p>I tried just plain installing Kivy as they say to on their website, and it worked:</p> <pre><code>sudo add-apt-repository ppa:kivy-team/kivy
sudo apt-get install python-kivy
</code></pre>
android|python|virtual-machine|kivy|ubuntu
0
1,909,890
15,747,437
Is it possible to unclutter a graph that uses seconds on x-axis in matplotlib
<p>I have a dataset with datetimes that sometimes contain differences in seconds.</p> <p>I have the datetimes (on x-axis) displayed vertically in hopes that the text won't overlap each other, but from the looks of it they're stacked practically on top of each other. I think this is because I have data that differ in seconds and the dates can range through different days, so the x-axis is very tight. Another problem with this is that the datapoints on the graph are also overlapping because the distance between them is so tight.</p> <p>Here's an example set (already converted using <code>date2num()</code>). It differs in seconds, but spans over several days:</p> <pre><code>dates = [734949.584699074, 734959.4604050926, 734959.4888773148, 734949.5844791667, 734959.037025463, 734959.0425810185, 734959.0522916666, 734959.4607060185, 734959.4891435185, 734949.5819444444, 734959.0348726852, 734959.0390393519, 734959.0432175926, 734959.0515393518, 734959.4864814815, 734949.5842476852, 734959.0367476852, 734959.038125, 734959.0423032407, 734959.052025463, 734959.4603819444, 734959.4895023148, 734949.5819791667, 734959.0348958333, 734959.0390740741, 734959.0432407408, 734959.0515856481, 734959.4579976852, 734959.487175926] values = [39, 68, 27, 57, 22, 33, 70, 19, 60, 53, 52, 33, 87, 63, 78, 34, 26, 42, 24, 97, 20, 1, 32, 60, 61, 48, 30, 48, 17] dformat = mpl.dates.DateFormatter('%m-%d-%Y %H:%M:%S') figure = plt.figure() graph = figure.add_subplot(111) graph.xaxis.set_major_formatter(dformat) plt.xticks(rotation='vertical') figure.subplots_adjust(bottom=.35) graph.plot_date(dates,values) graph.set_xticks(dates) plt.show() </code></pre> <p>I have two questions:</p> <ul> <li><p>Is there a way to create a spacing on the x-axis so that I can see the text and the datapoints clearly? This would result in a very long horizontal graph, but I will save this to an image file.</p></li> <li><p>Relates to first question: to reduce the horizontal length of the graph, is there a way to compress ticks on the x-axis so that areas which have no data will be shortened?</p> <p>For example, if we have three dates with values:</p> <pre><code> March 22 2013 23:11:04 55 March 22 2013 23:11:10 70 April 1 2013 10:43:56 5 </code></pre> <p>Is it possible to condense the spaces between March 22 23:11:10 and April 1 2013 1-:43:56?</p></li> </ul>
<p>You are basically asking for something impossible: you cannot both see a range of days <em>and</em> have differences of a few seconds be apparent while keeping the x-axis linear. If you want to try anyway, you can do something like <a href="http://matplotlib.org/api/figure_api.html#matplotlib.figure.Figure.set_size_inches" rel="nofollow noreferrer">(doc)</a></p> <pre><code>fig.set_size_inches(1000, 2, forward=True)
</code></pre> <p>which will make your figure 1000 inches wide and 2 inches tall, but doing that is rather ungainly. </p> <p>What I think you should do is apply Dermen's link (<a href="https://stackoverflow.com/questions/5656798/python-matplotlib-is-there-a-way-to-make-a-discontinuous-axis">Python/Matplotlib - Is there a way to make a discontinuous axis?</a>) with a break anyplace your data has a big gap. You will end up with multiple sections that are each a few seconds wide, which gives the tick labels enough space to be readable. </p>
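<p>A minimal sketch of that broken-axis idea applied to the data in the question: two subplots sharing the y-axis, each zoomed in on one cluster of timestamps (the xlim values here are rough placeholders read off the data):</p> <pre><code>fig, (ax1, ax2) = plt.subplots(1, 2, sharey=True)
for ax in (ax1, ax2):
    ax.xaxis.set_major_formatter(dformat)
    ax.plot_date(dates, values)

ax1.set_xlim(734949.57, 734949.60)   # first cluster of timestamps
ax2.set_xlim(734959.02, 734959.50)   # second cluster
ax1.spines['right'].set_visible(False)
ax2.spines['left'].set_visible(False)
ax2.yaxis.tick_right()
plt.setp(ax1.get_xticklabels(), rotation='vertical')
plt.setp(ax2.get_xticklabels(), rotation='vertical')
plt.show()
</code></pre>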
python|graph|matplotlib
0
1,909,891
59,776,664
Set margins of a time series plotted with pandas
<p>I have the following code for generating a time series plot</p> <pre><code>import numpy as np fig = plt.figure() ax = fig.add_subplot(111) series = pd.Series([np.sin(ii*np.pi) for ii in range(30)], index=pd.date_range(start='2019-01-01', end='2019-12-31', periods=30)) series.plot(ax=ax) </code></pre> <p><a href="https://i.stack.imgur.com/Z7SRt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Z7SRt.png" alt="enter image description here"></a></p> <p>I want to set an automatic limit for x and y, I tried using ax.margins() but it does not seem to work:</p> <pre><code>ax.margins(y=0.1, x=0.05) # even with # ax.margins(y=0.1, x=5) </code></pre> <p><a href="https://i.stack.imgur.com/jHqbQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jHqbQ.png" alt="enter image description here"></a></p> <p>What I am looking for is an automatic method like padding=0.1 (10% of whitespace around the graph)</p>
<p>Pandas and matplotlib seem to get confused rather often when collaborating on axes with dates. For some reason, in this case <code>ax.margins</code> doesn't work as expected on the x-axis.</p> <p>Here is a workaround which does seem to do the job, explicitly moving the xlims:</p> <pre class="lang-py prettyprint-override"><code>xmargins = 0.05
ymargins = 0.1
ax.margins(y=ymargins)
x0, x1 = plt.xlim()
plt.xlim(x0-xmargins*(x1-x0), x1+xmargins*(x1-x0))
</code></pre> <p>Alternatively, you could plot directly with matplotlib, which does apply the margins to the date axis as expected.</p> <pre><code>ax.plot(series.index, series)
ax.margins(y=0.1, x=0.05)
</code></pre> <p>PS: <a href="https://stackoverflow.com/questions/43159443/matplotlib-margins">This post</a> talks about setting <code>use_sticky_edges</code> to False and calling <code>autoscale_view</code> after setting the margins, but that doesn't seem to work here either.</p> <pre><code>ax.use_sticky_edges = False
ax.autoscale_view(scaley=True, scalex=True)
</code></pre>
python|pandas|plot
4
1,909,892
59,700,437
Customizing and understanding GnuRadio QT GUI Vector Sink
<p>I have created a simple GnuRadio flowgraph in GNU Radio Companion 3.8 where I connect a Vector Source block (with vector [1,2,3,4,5]) to a QT GUI Vector Sink. When I run the flowgraph, I see a two lines: one which goes from 1 to 5 (as expected) and one which is perfectly horizontal at zero. If I set the reference level in the sink to something other than zero (e.g., 1), that line at zero remains (in addition to a line at the reference). Additionally, the legend in the upper right corner contains Min Hold and Max Hold buttons. An example is shown below:</p> <p><a href="https://i.stack.imgur.com/RdU7h.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RdU7h.png" alt="enter image description here"></a></p> <p>I have a few questions:</p> <ol> <li>What is this line at zero? How do I get rid of it?</li> <li>How do I get rid of the Min and Max Hold options in the upper right of the plot?</li> <li>In general, is it true that finer control of the formatting of plots in GNURadio is possible when explicitly writing code (say in a python-based flowgraph) to render the plot instead of using companion?</li> </ol>
<p>The vector plot puts markers (horiz lines) at the "LowerIntensityLevel" and "UpperIntensityLevel". It seems like they are both at 0 unless something sets them. There are functions in <code>VectorDisplayPlot</code> to set the levels, but nothing calls them. <code>VectorDisplayPlot</code> is the graphical Qt-based widget that does the actual plot display.</p> <p>These markers default to on, which seems wrong to me: nothing sets them and they have no default value, so it seems like you wouldn't want them unless you are going to use them.</p> <p>The line style, color, and whether they are enabled at all are style properties of the VectorDisplayPlot. The "dark.qss" theme turns them off, but the default theme has them on.</p> <p><strong>So you can turn them off with a theme.</strong></p> <p>The important parts for the theme are:</p> <pre><code>VectorDisplayPlot {
    qproperty-marker_lower_intensity_visible: false;
    qproperty-marker_upper_intensity_visible: false;
    qproperty-marker_ref_level_visible: false;
}
</code></pre> <p>It should be possible to make a .qss file with just that in it. Get GRC to use it by setting "QSS Theme" in the properties of the flow graph's Options block. The "ref_level" line is only needed to make the ref level marker go away.</p> <p><a href="https://i.stack.imgur.com/zIA2E.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zIA2E.png" alt="Example with upper, lower, and ref level markers off"></a></p> <p>The <code>VectorDisplayPlot</code> is a private member of <code>vector_sink</code>, which is the GNU Radio block that one uses. I see no methods in <code>vector_sink_impl</code> that ever set the upper/lower intensity values, and since only that class has access to the private <code>VectorDisplayPlot</code>, there's no way anything else could set them either. So the feature is totally unusable from any code (Python/C++) using the vector sink, much less from GRC.</p> <p>It looks like these markers are used by some of the other plots, like the spectrum plot. I think someone cut &amp; pasted that code into the vector plot, and this behavior is a bug.</p>
python|qt|gnuradio|gnuradio-companion
1
1,909,893
25,362,614
Force use of scientific style for basemap colorbar labels
<p>String formatting can by used to specify scientific notation for matplotlib.basemap colorbar labels:</p> <pre><code>cb = m.colorbar(cs, ax=ax1, format='%.4e') </code></pre> <p>But then each label is scientifically notated with the base.</p> <p>If numbers are large enough, the colobar automatically reduces them to scientific notation, placing the base (i.e. <code>x10^n</code>) at the top of the color bar, leaving only the coefficient numbers as labels.</p> <p>You can do this with a standard axis with the following:</p> <pre><code>ax.ticklabel_format(style='sci', axis='y', scilimits=(0,0)) </code></pre> <p>Is there an equivalent method for matplotlib.basemap colorbars, or perhaps a standard matplotlib colorbar?</p>
<p>There's no one-line method, but you can do this by updating the colorbar's <code>formatter</code> and then calling <code>colorbar.update_ticks()</code>. </p> <pre><code>import numpy as np import matplotlib.pyplot as plt z = np.random.random((10,10)) fig, ax = plt.subplots() im = ax.imshow(z) cb = fig.colorbar(im) cb.formatter.set_powerlimits((0, 0)) cb.update_ticks() plt.show() </code></pre> <p><img src="https://i.stack.imgur.com/LR7LB.png" alt="enter image description here"></p> <p>The reason for the slightly odd way of doing things is that a colorbar actually has statically assigned ticks and ticklabels. The colorbar's axes (<code>colorbar.ax</code>) actually always ranges between 0 and 1. (Therefore, altering <code>colorbar.ax.yaxis.formatter</code> doesn't do anything useful.) The tick positions and labels are calculated from <code>colorbar.locator</code> and <code>colorbar.formatter</code> and are assigned when the colorbar is created. Therefore, if you need precise control over a colorbar's ticks/ticklables, you need to explicitly call <code>colorbar.update_ticks()</code> after customizing how the ticks are displayed. The colorbar's convenience functions do this for you behind the scenes, but as far as I know, what you want can't be done through another method.</p>
python|matplotlib|matplotlib-basemap
14
1,909,894
25,148,496
Python send bash command output by query string
<p>I am a beginner to Python, so please bear with me. I need my Python program to accept incoming data (stdin) from a command (ibeacon scan -b) and send that data by query string to my server. I am using Raspbian on a Raspberry Pi. The ibeacon_scan command output looks like this: </p> <pre><code>iBeacon Scan ...
3F234454-CFD-4A0FF-ADF2-F4911BA9FFA6 1 4 -71 -69
3F234454-CFD-4A0FF-ADF2-F4911BA9FFA6 6 2 -71 -63
3F234454-CFD-4A0FF-ADF2-F4911BA9FFA6 1 4 -71 -69
3F234454-CFD-4A0FF-ADF2-F4911BA9FFA6 5 7 -71 -64
...keeps updating
</code></pre> <p>I'm piping the command to the Python script.</p> <pre><code>ibeacon scan -b &gt; python.py &amp;
</code></pre> <p>Here is the outline of what I think could work. I need help organizing the code correctly.</p> <pre><code>import httplib, urllib, fileinput

for line in fileinput():
    params = urllib.urlencode({'@UUID': 12524, '@Major': 1, '@Minor': 2, '@Power': -71, '@RSSI': -66})
    headers = {"Content-type": "application/x-www-form-urlencoded","Accept": "text/plain"}
    conn = httplib.HTTPConnection("www.example.com")
    conn.request("POST", "", params, headers)
    response = conn.getresponse()
    print response.status, response.reason
    data = response.read()
    data
    conn.close()
</code></pre> <p>I know there are a lot of problems with this and I could really use any advice. Thank you for your time!</p>
<p>There are several problems in your code.</p> <p><strong>1. Wrong bash command:</strong> At the moment you are redirecting the output into your <code>python.py</code> file, overwriting the script, which is completely wrong. You should use a pipe and execute the python.py script. Your command should look like this:</p> <pre><code>ibeacon scan -b | ./python.py &amp;
</code></pre> <p>Make sure your python.py script is executable (<code>chmod +x python.py</code>). As an alternative you could also try this:</p> <pre><code>ibeacon scan -b | python python.py &amp;
</code></pre> <p><strong>2. Wrong use of fileinput:</strong> Your for loop should look like this:</p> <pre><code>for line in fileinput.input():
    ...
</code></pre>
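<p>Putting both fixes together, the reading side of the script would look something like this (a sketch; each scanner output line has the form UUID, major, minor, power, RSSI, and the request-building part is left as in the question):</p> <pre><code>import fileinput

for line in fileinput.input():
    fields = line.split()
    if len(fields) != 5:
        continue  # skip banner lines like "iBeacon Scan ..."
    uuid, major, minor, power, rssi = fields
    # build params from these values and POST them as before
</code></pre>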
python|bash|http|stdin|ibeacon
0
1,909,895
2,625,701
How to install wexpect?
<p>I'm running 32-bit Windows XP and trying to have Matlab communicate with Cgate, a command line program. I'd like to make this happen using wexpect, which is a port of Python's pexpect module to Windows. I'm having trouble installing or importing wexpect, though. I've put wexpect in the folder Lib, along with all the other modules. I can import those other modules, just not wexpect. Commands I've tried include:</p> <pre><code>import wexpect
import wexpect.py
python wexpect.py install
python wexpect.py install --home=~
wexpect install
</code></pre> <p>Does anyone have any more ideas? </p>
<p>I have created a <a href="https://github.com/raczben/wexpect" rel="nofollow noreferrer">Github repo</a> and <a href="https://pypi.org/project/wexpect/" rel="nofollow noreferrer">PyPI project</a> for wexpect. So now wexpect can be installed with:</p> <p><strong><code>pip install wexpect</code></strong></p>
python|installation|expect|pexpect|wexpect
0
1,909,896
2,390,766
How do I disable and then re-enable a warning?
<p>I'm writing some unit tests for a Python library and would like certain warnings to be raised as exceptions, which I can easily do with the <a href="http://docs.python.org/library/warnings.html#warnings.simplefilter" rel="noreferrer">simplefilter</a> function. However, for one test I'd like to disable the warning, run the test, then re-enable the warning.</p> <p>I'm using Python 2.6, so I'm supposed to be able to do that with the <a href="http://docs.python.org/library/warnings.html#warnings.catch_warnings" rel="noreferrer">catch_warnings</a> context manager, but it doesn't seem to work for me. Even failing that, I should also be able to call <a href="http://docs.python.org/library/warnings.html#warnings.resetwarnings" rel="noreferrer">resetwarnings</a> and then re-set my filter.</p> <p>Here's a simple example which illustrates the problem:</p> <pre><code>&gt;&gt;&gt; import warnings &gt;&gt;&gt; warnings.simplefilter("error", UserWarning) &gt;&gt;&gt; &gt;&gt;&gt; def f(): ... warnings.warn("Boo!", UserWarning) ... &gt;&gt;&gt; &gt;&gt;&gt; f() # raises UserWarning as an exception Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; File "&lt;stdin&gt;", line 2, in f UserWarning: Boo! &gt;&gt;&gt; &gt;&gt;&gt; f() # still raises the exception Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; File "&lt;stdin&gt;", line 2, in f UserWarning: Boo! &gt;&gt;&gt; &gt;&gt;&gt; with warnings.catch_warnings(): ... warnings.simplefilter("ignore") ... f() # no warning is raised or printed ... &gt;&gt;&gt; &gt;&gt;&gt; f() # this should raise the warning as an exception, but doesn't &gt;&gt;&gt; &gt;&gt;&gt; warnings.resetwarnings() &gt;&gt;&gt; warnings.simplefilter("error", UserWarning) &gt;&gt;&gt; &gt;&gt;&gt; f() # even after resetting, I'm still getting nothing &gt;&gt;&gt; </code></pre> <p>Can someone explain how I can accomplish this?</p> <p>EDIT: Apparently this is a known bug: <a href="http://bugs.python.org/issue4180" rel="noreferrer">http://bugs.python.org/issue4180</a></p>
<p>Reading through the docs a few times and poking around the source and shell, I think I've figured it out. The docs could probably be improved to make the behavior clearer.</p> <p>The warnings module keeps a registry at __warningregistry__ to keep track of which warnings have been shown. If a warning (message) is not listed in the registry before the 'error' filter is set, any calls to warn() will not result in the message being added to the registry. Also, the warning registry does not appear to be created until the first call to warn:</p> <pre><code>&gt;&gt;&gt; import warnings
&gt;&gt;&gt; __warningregistry__
------------------------------------------------------------
Traceback (most recent call last):
  File "&lt;ipython console&gt;", line 1, in &lt;module&gt;
NameError: name '__warningregistry__' is not defined

&gt;&gt;&gt; warnings.simplefilter('error')
&gt;&gt;&gt; __warningregistry__
------------------------------------------------------------
Traceback (most recent call last):
  File "&lt;ipython console&gt;", line 1, in &lt;module&gt;
NameError: name '__warningregistry__' is not defined

&gt;&gt;&gt; warnings.warn('asdf')
------------------------------------------------------------
Traceback (most recent call last):
  File "&lt;ipython console&gt;", line 1, in &lt;module&gt;
UserWarning: asdf

&gt;&gt;&gt; __warningregistry__
{}
</code></pre> <p>Now if we ignore warnings, they will get added to the warnings registry:</p> <pre><code>&gt;&gt;&gt; warnings.simplefilter("ignore")
&gt;&gt;&gt; warnings.warn('asdf')
&gt;&gt;&gt; __warningregistry__
{('asdf', &lt;type 'exceptions.UserWarning'&gt;, 1): True}

&gt;&gt;&gt; warnings.simplefilter("error")
&gt;&gt;&gt; warnings.warn('asdf')
&gt;&gt;&gt; warnings.warn('qwerty')
------------------------------------------------------------
Traceback (most recent call last):
  File "&lt;ipython console&gt;", line 1, in &lt;module&gt;
UserWarning: qwerty
</code></pre> <p>So the error filter will only apply to warnings that aren't already in the warnings registry. To make your code work you'll need to clear the appropriate entries out of the warnings registry when you're done with the context manager (or, in general, any time after you've used the ignore filter and want a previously-used message to be picked up by the error filter). Seems a bit unintuitive... </p>
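<p>Concretely, for the scenario in the question, "clearing the appropriate entries" means emptying the registry of the module that issued the warning after leaving the context manager (a sketch; in the interactive example the warning came from __main__):</p> <pre><code>import sys
import warnings

with warnings.catch_warnings():
    warnings.simplefilter("ignore")
    f()   # warning is recorded in __main__.__warningregistry__

# drop the entries recorded while the filter was "ignore"
getattr(sys.modules['__main__'], '__warningregistry__', {}).clear()

f()   # raises UserWarning as an exception again
</code></pre>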
python|warnings
11
1,909,897
2,440,147
How to check the existence of a row in SQLite with Python?
<p>I have the cursor with the query statement as follows:</p> <pre><code>cursor.execute("select rowid from components where name = ?", (name,)) </code></pre> <p>I want to check for the existence of the components: name and return to a python variable. How do I do that?</p>
<p>Since the <code>name</code>s are unique, I really favor your (the OP's) method of using <code>fetchone</code> or Alex Martelli's method of using <code>SELECT count(*)</code> over my initial suggestion of using <code>fetchall</code>.</p> <p><code>fetchall</code> wraps the results (typically multiple rows of data) in a list. Since the <code>name</code>s are unique, <code>fetchall</code> returns either a list with just one tuple in the list (e.g. <code>[(rowid,),]</code> or an empty list <code>[]</code>. If you desire to know the <code>rowid</code>, then using <code>fetchall</code> requires you to burrow through the list and tuple to get to the <code>rowid</code>. </p> <p>Using <code>fetchone</code> is better in this case since you get just one row, <code>(rowid,)</code> or <code>None</code>. To get at the <code>rowid</code> (provided there is one) you just have to pick off the first element of the tuple.</p> <p>If you don't care about the particular <code>rowid</code> and you just want to know there is a hit, then you could use Alex Martelli's suggestion, <code>SELECT count(*)</code>, which would return either <code>(1,)</code> or <code>(0,)</code>.</p> <p>Here is some example code:</p> <p>First some boiler-plate code to setup a toy sqlite table:</p> <pre><code>import sqlite3 connection = sqlite3.connect(':memory:') cursor=connection.cursor() cursor.execute('create table components (rowid int,name varchar(50))') cursor.execute('insert into components values(?,?)', (1,'foo',)) </code></pre> <p><strong>Using <code>fetchall</code>:</strong></p> <pre><code>for name in ('bar','foo'): cursor.execute("SELECT rowid FROM components WHERE name = ?", (name,)) data=cursor.fetchall() if len(data)==0: print('There is no component named %s'%name) else: print('Component %s found with rowids %s'%(name,','.join(map(str, next(zip(*data)))))) </code></pre> <p>yields:</p> <pre><code>There is no component named bar Component foo found with rowids 1 </code></pre> <p><strong>Using <code>fetchone</code>:</strong> </p> <pre><code>for name in ('bar','foo'): cursor.execute("SELECT rowid FROM components WHERE name = ?", (name,)) data=cursor.fetchone() if data is None: print('There is no component named %s'%name) else: print('Component %s found with rowid %s'%(name,data[0])) </code></pre> <p>yields:</p> <pre><code>There is no component named bar Component foo found with rowid 1 </code></pre> <p><strong>Using <code>SELECT count(*)</code>:</strong></p> <pre><code>for name in ('bar','foo'): cursor.execute("SELECT count(*) FROM components WHERE name = ?", (name,)) data=cursor.fetchone()[0] if data==0: print('There is no component named %s'%name) else: print('Component %s found in %s row(s)'%(name,data)) </code></pre> <p>yields: </p> <pre><code>There is no component named bar Component foo found in 1 row(s) </code></pre>
python|sql|sqlite
93
1,909,898
67,974,473
Seaborn FacetGrid multiple page pdf plotting
<p>I'm trying to create a multi-page pdf using FacetGrid from this (<a href="https://seaborn.pydata.org/examples/many_facets.html" rel="nofollow noreferrer">https://seaborn.pydata.org/examples/many_facets.html</a>). There are 20 grid images and I want to save the first 10 grids on the first page of the pdf and the second 10 grids on the second page. I got the idea of creating a multi-page pdf file from this (<a href="https://stackoverflow.com/questions/65805978/export-huge-seaborn-chart-into-pdf-with-multiple-pages">Export huge seaborn chart into pdf with multiple pages</a>). That example works with sns.catplot(), but in my case (sns.FacetGrid) the output pdf file has two pages and each page has all 20 grids instead of 10 grids per page.</p> <pre><code>import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Create a dataset with many short random walks
rs = np.random.RandomState(4)
pos = rs.randint(-1, 2, (20, 5)).cumsum(axis=1)
pos -= pos[:, 0, np.newaxis]
step = np.tile(range(5), 20)
walk = np.repeat(range(20), 5)
df = pd.DataFrame(np.c_[pos.flat, step, walk],
                  columns=[&quot;position&quot;, &quot;step&quot;, &quot;walk&quot;])

# plotting FacetGrid
def grouper(iterable, n, fillvalue=None):
    from itertools import zip_longest
    args = [iter(iterable)] * n
    return zip_longest(*args, fillvalue=fillvalue)

from matplotlib.backends.backend_pdf import PdfPages

with PdfPages(&quot;output.pdf&quot;) as pdf:
    N_plots_per_page = 10
    for cols in grouper(df[&quot;walk&quot;].unique(), N_plots_per_page):
        # Initialize a grid of plots with an Axes for each walk
        grid = sns.FacetGrid(df, col=&quot;walk&quot;, hue=&quot;walk&quot;, palette=&quot;tab20c&quot;,
                             col_wrap=2, height=1.5)

        # Draw a horizontal line to show the starting point
        grid.map(plt.axhline, y=0, ls=&quot;:&quot;, c=&quot;.5&quot;)

        # Draw a line plot to show the trajectory of each random walk
        grid.map(plt.plot, &quot;step&quot;, &quot;position&quot;, marker=&quot;o&quot;)

        # Adjust the tick positions and labels
        grid.set(xticks=np.arange(5), yticks=[-3, 3],
                 xlim=(-.5, 4.5), ylim=(-3.5, 3.5))

        # Adjust the arrangement of the plots
        grid.fig.tight_layout(w_pad=1)
        pdf.savefig()
</code></pre>
<p>You are missing the <code>col_order=cols</code> argument to the <code>grid = sns.FacetGrid(...)</code> call. Without it, <code>FacetGrid</code> builds a facet for every unique value of <code>walk</code> on every iteration of the loop, which is why each page shows all 20 walks instead of the 10 you grouped.</p>
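<p>A minimal sketch of the corrected call inside the loop — the extra filter is my addition, since <code>zip_longest</code> pads the last group with <code>None</code>, which would otherwise be passed to <code>col_order</code>:</p> <pre><code># Drop the None padding added by zip_longest on the final page
cols = [c for c in cols if c is not None]

# Restrict this grid to the walks belonging to the current page
grid = sns.FacetGrid(df, col=&quot;walk&quot;, hue=&quot;walk&quot;, palette=&quot;tab20c&quot;,
                     col_wrap=2, height=1.5, col_order=cols)
</code></pre>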
python|seaborn|facet-grid
1
1,909,899
68,019,358
How to make a dataframe download through browser using python
<p>I have a function which generates a dataframe and exports it as an Excel sheet at the end: <code>df.to_excel('response.xlsx')</code>. This Excel file is saved in my working directory. Now I'm hosting this as a Streamlit web app on Heroku, but I want this Excel file to be downloaded to the user's local disk (a normal browser download) once the function is called. Is there a way to do it?</p>
<p>Snehan Kekre, from Streamlit, wrote the following solution in <a href="https://discuss.streamlit.io/t/how-to-add-a-download-excel-csv-function-to-a-button/4474/9" rel="nofollow noreferrer">this thread</a>.</p> <pre class="lang-py prettyprint-override"><code>import streamlit as st
import pandas as pd
import io
import base64
import json
import pickle
import uuid
import re


def download_button(object_to_download, download_filename, button_text, pickle_it=False):
    &quot;&quot;&quot;
    Generates a link to download the given object_to_download.

    Params:
    ------
    object_to_download: The object to be downloaded.
    download_filename (str): filename and extension of file. e.g. mydata.csv,
        some_txt_output.txt
    button_text (str): Text to display on download button (e.g. 'click here
        to download file')
    pickle_it (bool): If True, pickle file.

    Returns:
    -------
    (str): the anchor tag to download object_to_download

    Examples:
    --------
    download_button(your_df, 'YOUR_DF.csv', 'Click to download data!')
    download_button(your_str, 'YOUR_STRING.txt', 'Click to download text!')

    &quot;&quot;&quot;
    if pickle_it:
        try:
            object_to_download = pickle.dumps(object_to_download)
        except pickle.PicklingError as e:
            st.write(e)
            return None
    else:
        if isinstance(object_to_download, bytes):
            # Wrap raw bytes in a buffer so the fallback below can read them
            towrite = io.BytesIO(object_to_download)
        elif isinstance(object_to_download, pd.DataFrame):
            # to_excel writes into the buffer and returns None
            towrite = io.BytesIO()
            object_to_download.to_excel(towrite, index=False, header=True)
            towrite.seek(0)
        # Try JSON encode for everything else
        else:
            object_to_download = json.dumps(object_to_download)

    try:
        # some strings &lt;-&gt; bytes conversions necessary here
        b64 = base64.b64encode(object_to_download.encode()).decode()
    except AttributeError:
        # bytes and DataFrames have no .encode(); fall back to the buffer
        b64 = base64.b64encode(towrite.read()).decode()

    button_uuid = str(uuid.uuid4()).replace('-', '')
    button_id = re.sub(r'\d+', '', button_uuid)

    custom_css = f&quot;&quot;&quot;
        &lt;style&gt;
            #{button_id} {{
                display: inline-flex;
                align-items: center;
                justify-content: center;
                background-color: rgb(255, 255, 255);
                color: rgb(38, 39, 48);
                padding: .25rem .75rem;
                position: relative;
                text-decoration: none;
                border-radius: 4px;
                border-width: 1px;
                border-style: solid;
                border-color: rgb(230, 234, 241);
                border-image: initial;
            }}
            #{button_id}:hover {{
                border-color: rgb(246, 51, 102);
                color: rgb(246, 51, 102);
            }}
            #{button_id}:active {{
                box-shadow: none;
                background-color: rgb(246, 51, 102);
                color: white;
            }}
        &lt;/style&gt; &quot;&quot;&quot;

    dl_link = custom_css + f'&lt;a download=&quot;{download_filename}&quot; id=&quot;{button_id}&quot; href=&quot;data:application/vnd.openxmlformats-officedocument.spreadsheetml.sheet;base64,{b64}&quot;&gt;{button_text}&lt;/a&gt;&lt;br&gt;&lt;/br&gt;'

    return dl_link


vals = ['A', 'B', 'C']
df = pd.DataFrame(vals, columns=[&quot;Title&quot;])

filename = 'my-dataframe.xlsx'
download_button_str = download_button(df, filename, f'Click here to download {filename}', pickle_it=False)
st.markdown(download_button_str, unsafe_allow_html=True)
</code></pre> <p>I'd recommend searching the thread on that discussion forum. There seem to be at least 3-4 alternatives to this code.</p> <p><a href="https://i.stack.imgur.com/V6MkW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/V6MkW.png" alt="enter image description here" /></a></p>
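<p>For what it's worth, newer Streamlit releases (0.88+) ship a built-in <code>st.download_button</code> that removes the need for the base64/HTML workaround entirely — a minimal sketch:</p> <pre><code>import io

import pandas as pd
import streamlit as st

df = pd.DataFrame(['A', 'B', 'C'], columns=['Title'])

# Write the DataFrame into an in-memory Excel file
# (requires an Excel engine such as openpyxl to be installed)
towrite = io.BytesIO()
df.to_excel(towrite, index=False)

st.download_button(
    label='Download my-dataframe.xlsx',
    data=towrite.getvalue(),
    file_name='my-dataframe.xlsx',
    mime='application/vnd.openxmlformats-officedocument.spreadsheetml.sheet',
)
</code></pre>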
python|pandas|dataframe|heroku|streamlit
2