600
42,257,512
Difference between subprocess.Popen preexec_fn and start_new_session in python
<p>What is the difference between these two options to start a new process with <code>subprocess.Popen</code> for <code>python3.2+</code> under <code>Linux</code>:</p> <pre><code>proc = subprocess.Popen(args, ..., preexec_fn=os.setsid) # 1 proc = subprocess.Popen(args, ..., start_new_session=True) # 2 </code></pre> <p>I need this because I want to set the process group ID so that I can kill this process and all of its children at once. This is used when the process run time exceeds a certain threshold:</p> <pre><code>try: out, err = proc.communicate(timeout=time_max) except subprocess.TimeoutExpired: os.killpg(os.getpgid(proc.pid), signal.SIGTERM) </code></pre> <p>I tested my code with both options (<code>#1</code> &amp; <code>#2</code>) and they both seem to work fine for me. </p> <p>But I wonder which is the better option here - the one with <code>preexec_fn</code> or the one with <code>start_new_session</code>?</p>
<p>According to the official <a href="https://docs.python.org/3.4/library/subprocess.html?highlight=subprocess#popen-constructor" rel="noreferrer">Python Docs</a>,</p> <blockquote> <p>The preexec_fn parameter is not safe to use in the presence of threads in your application. The child process could deadlock before exec is called. If you must use it, keep it trivial! Minimize the number of libraries you call into.</p> <p>If you need to modify the environment for the child use the env parameter rather than doing it in a preexec_fn. The start_new_session parameter can take the place of a previously common use of preexec_fn to call os.setsid() in the child.</p> </blockquote> <p>So I guess the answer to your question is that <code>start_new_session</code> was introduced to replace the common operation of using <code>preexec_fn</code> to set the session id through <code>os.setsid()</code>, which is not thread safe.</p>
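A minimal sketch of the timeout-and-kill pattern from the question, using `start_new_session=True` (this assumes Linux with a `sleep` binary available; the 30-second child and 0.2-second timeout are just illustrative values):

```python
import os
import signal
import subprocess

# Start a long-running child as the leader of a new session
# (and therefore of a new process group).
proc = subprocess.Popen(["sleep", "30"], start_new_session=True)

try:
    proc.communicate(timeout=0.2)
except subprocess.TimeoutExpired:
    # Kill the whole process group: the child and any grandchildren.
    os.killpg(os.getpgid(proc.pid), signal.SIGTERM)
    proc.wait()

# A negative return code means the child was killed by that signal.
assert proc.returncode == -signal.SIGTERM
```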
python|subprocess|popen|kill-process|setsid
18
601
58,248,093
How can I make my dictionary be able to be indexed by a function in python 3.x
<p>I am trying to make a program that finds out how many integers in a list are not the integer that is represented the most in that list. To do that I have a command which creates a dictionary with every value in the list and the number of times it is represented in it. Next I try to create a new list with all items from the older list except the most represented value so I can count the length of the list. The problem is that I cannot access the most represented value in the dictionary as I get an error code.</p> <pre><code>import operator import collections a = [7, 155, 12, 155] dictionary = collections.Counter(a).items() b = [] for i in a: if a != dictionary[max(iter(dictionary), key=operator.itemgetter(1))[0]]: b.append(a) </code></pre> <p>I get this error code: TypeError: 'dict_items' object does not support indexing</p>
<p>The variable you called <code>dictionary</code> is not a <code>dict</code> but a <code>dict_items</code>. </p> <pre><code>&gt;&gt;&gt; type(dictionary) &lt;class 'dict_items'&gt; &gt;&gt;&gt; help(dict.items) items(...) D.items() -&gt; a set-like object providing a view on D's items </code></pre> <p>and sets are iterable, not indexable:</p> <pre><code>for di in dictionary: print(di) # is ok dictionary[0] # triggers the error you saw </code></pre> <p>Note that Counter is very rich, maybe using <code>Counter.most_common</code> would do the trick.</p>
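As the answer suggests, `Counter.most_common` makes the whole task much shorter. A sketch using the question's own data:

```python
from collections import Counter

a = [7, 155, 12, 155]
counts = Counter(a)

# most_common(1) returns [(value, count)] for the most represented value
most_common_value, _ = counts.most_common(1)[0]

b = [item for item in a if item != most_common_value]
print(len(b))  # -> 2  (the items 7 and 12 are not the mode)
```

Equivalently, `len(a) - counts.most_common(1)[0][1]` gives the count directly, without building `b` at all.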
python-3.x|dictionary|indexing
1
602
58,350,799
Difference between id and equality in python
<p>How is the <code>id</code> of an object computed? <a href="https://docs.python.org/3/library/functions.html#id" rel="nofollow noreferrer">https://docs.python.org/3/library/functions.html#id</a></p> <p>It seems there is a place in a class to do equality, with <code>__eq__</code> but where is the <code>is</code> operation done, and how is the <code>id</code> arrived at?</p>
<p>You can think of <code>id(obj)</code> as some sort of address of the object. The way it is computed, and what the value represents, is implementation-dependent, and you should not make any assumptions about the value.</p> <p>What you need to know:</p> <ol> <li>Object's <code>id</code> will not change as long as the object exists</li> <li>Two co-existing objects will have different <code>id</code>s</li> <li>An object may have the same <code>id</code> as another object which has already been deallocated (since it is gone, its address may be reused).</li> <li><code>a is b</code> is equivalent to <code>id(a) == id(b)</code>.</li> </ol> <p>You cannot override the way <code>id</code> is computed, nor the way <code>is</code> behaves, like you'd override operators such as <code>__eq__</code>.</p>
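These points are easy to check directly:

```python
a = [1, 2, 3]
b = a            # a second name bound to the same object
c = [1, 2, 3]    # an equal but distinct object

assert a is b and id(a) == id(b)     # same object, same id
assert a == c and a is not c         # __eq__ says equal, identity differs
assert (a is c) == (id(a) == id(c))  # `is` agrees with id comparison
```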
python
2
603
58,301,581
Command line python and jupyter notebooks use two different versions of torch
<p>On my conda environment importing torch from command line Python and from a jupyter notebook yields two different results.</p> <p>Command line Python:</p> <pre><code>$ source activate GNN (GNN) $ python &gt;&gt;&gt; import torch &gt;&gt;&gt; print(torch.__file__) /home/riccardo/.local/lib/python3.7/site-packages/torch/__init__.py &gt;&gt;&gt; print(torch.__version__) 0.4.1 </code></pre> <p>Jupyter:</p> <pre><code>(GNN) $ jupyter notebook --no-browser --port=8890 import torch print(torch.__file__) /home/riccardo/.local/lib/python3.6/site-packages/torch/__init__.py print(torch.__version__) 1.2.0+cu92 </code></pre> <p>I tried the steps suggested in <a href="https://stackoverflow.com/questions/39604271/conda-environments-not-showing-up-in-jupyter-notebook">Conda environments not showing up in Jupyter Notebook</a> </p> <pre><code>$ conda install ipykernel $ source activate GNN (GNN) $ python -m ipykernel install --user --name GNN --display-name "Python (GNN)" Installed kernelspec GNN in /home/riccardo/.local/share/jupyter/kernels/gnn </code></pre> <p>but that did not solve the problem.</p>
<p>You need to make the Anaconda environment recognized by Jupyter using </p> <pre><code>conda activate myenv conda install -n myenv ipykernel python -m ipykernel install --user --name myenv --display-name "Python (myenv)" </code></pre> <p>Replace <code>myenv</code> with the name of your environment. Afterwards, in your Jupyter Notebook, under the Select Kernel option, you will see a <code>Python (myenv)</code> entry.</p>
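To diagnose this kind of mismatch, it can help to print which interpreter and import path each environment actually uses; run this both at the command line and in a notebook cell and compare (stdlib only):

```python
import sys

# If the two environments disagree (as in the question, where torch came
# from python3.7 in one case and python3.6 in the other), these will differ.
print(sys.executable)   # the interpreter binary actually running
print(sys.prefix)       # the environment it belongs to
print(sys.path[:3])     # where imports (like torch) are resolved from
```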
python|jupyter-notebook|anaconda|pytorch|conda
1
604
65,432,087
Is there a way to use the secrets python module with a seed?
<p><code>random.seed()</code> is less secure than <code>secrets</code>, but I can't find any documentation on using a seed with <code>secrets</code>. Or is <code>random.seed</code> just as fine?</p>
<p>No, there isn't. <a href="https://github.com/python/cpython/blob/master/Lib/secrets.py" rel="noreferrer"><code>secrets</code> uses <code>random</code>'s <code>SystemRandom</code> class</a>, which <a href="https://docs.python.org/3/library/random.html#random.SystemRandom" rel="noreferrer">reads from the operating system's random device</a>, such as <code>/dev/urandom</code> on Linux. This OS randomness is based off hardware entropy, which is what gives it its security, and there is no way to seed it.</p>
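If what you actually need is reproducibility (e.g. for tests or simulations) rather than security, use a seeded `random.Random` instance; `secrets` is deliberately unseedable:

```python
import random
import secrets

# secrets: unpredictable, backed by the OS entropy source, no seeding API
token = secrets.token_hex(16)
assert len(token) == 32  # 16 bytes -> 32 hex characters

# random.Random(seed): reproducible, but NOT suitable for security purposes
rng = random.Random(42)
replay = random.Random(42)
assert [rng.random() for _ in range(3)] == [replay.random() for _ in range(3)]
```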
python|random
6
605
45,600,902
Matplotlib stacked bar chart
<p>Hi I'm fairly new to matplotlib but I'm trying to plot a stacked bar chart. Instead of stacking, my bars are overlapping one another.</p> <p>This is the dictionary where I'm storing data. </p> <pre><code>eventsDict = { 'A' : [30.427007371788505, 3.821656050955414], 'B' : [15.308879925288613, 25.477707006369428], 'C' : [10.846066723627477, 1.910828025477707], 'D' : [0.32586881793073297, 0.6369426751592357], 'E' : [3.110656307747332, 11.464968152866243], 'F' : [8.183480040534901, 1.910828025477707], 'G' : [3.048065650644783, 16.560509554140125], 'H' : [9.950920976811652, 4.45859872611465] } </code></pre> <p>My stacked bar graph has two bars. The first one contains all the data from the first value of the list and the second one contains all the second values from the list. (The list being the values in the dictionary)</p> <p>First, I convert the dictionary to a tuple:</p> <pre><code>allEvents = list(self.eventsDict.items()) </code></pre> <p>This turns the dictionary to this list:</p> <pre><code>all Events = [('A', [30.427007371788505, 3.821656050955414]), ('B', [15.308879925288613, 25.477707006369428]), ('C', [10.846066723627477, 1.910828025477707]), ('D', [0.32586881793073297, 0.6369426751592357]), ('E', [3.110656307747332, 11.464968152866243]), ('F', [8.183480040534901, 1.910828025477707]), ('G', [3.048065650644783, 16.560509554140125]), ('H', [9.950920976811652, 4.45859872611465])] </code></pre> <p>This is where I plot it:</p> <pre><code> range_vals = np.linspace(0, 2, 3) mid_vals = (range_vals[0:-1] + range_vals[1:]) * 0.5 colors = ['#DC7633', '#F4D03F', '#52BE80', '#3498DB', '#9B59B6', '#C0392B', '#2471A3', '#566573', '#95A5A6'] x_label = ['All events. %s total events' % (totalEvents), 'Corrected p-value threshold p &lt; %s. %s total events' % (self.pVal, totalAdjusted)] #Turn the dict to a tuple. That way it is ordered and is subscriptable. 
allEvents = list(self.mod_eventsDict.items()) #print (allEvents) #Use below to index: #list[x] key - value pairing #list[x][0] event name (key) #list[x][1] list of values [val 1(all), val 2(adjusted)] #Plot the Top bar first plt.bar(mid_vals, allEvents[0][1], color = colors[0], label = allEvents[0][0]) #Plot the rest x = 1 for x in range(1, 20): try: plt.bar(mid_vals, allEvents[x-1][1], bottom =allEvents[x-1][1], color = colors[x], label = allEvents[x][0]) x = x + 1 except IndexError: continue plt.xticks(mid_vals) # for classic style plt.xticks(mid_vals, x_label) # for classic style plt.xlabel('values') plt.ylabel('Count/Fraction') plt.title('Stacked Bar chart') plt.legend() plt.axis([0, 2.5, 0, 1]) plt.show() </code></pre> <p>This is the graph output. Ideally, they should all add up to 1 when stacked. I made them all a fraction of one whole so that both bars will have the same height. However, they just overlap each other. Also, note that stacks have a different label from their names on the dictionary. </p> <p><img src="https://i.stack.imgur.com/yuTTp.jpg" alt="stacked bar graph output"></p> <p>Please help me debug!!</p>
<p>You'll need to set <code>bottom</code> differently - this tells matplotlib where to place the bottom of the bar you're plotting, so it needs to be the sum of the heights of all the bars plotted before it.</p> <p>Your loop also has two bugs: <code>bottom=allEvents[x-1][1]</code> places each series on top of only the previous one (not the cumulative total), and <code>current_heights[x] += allEvents[x][1]</code> tries to add a list to a number. Track the running heights with a numpy array instead, one entry per bar position:</p> <pre><code>current_heights = np.zeros(len(mid_vals))
for x in range(len(allEvents)):
    vals = np.asarray(allEvents[x][1])
    plt.bar(mid_vals, vals, bottom=current_heights,
            color=colors[x], label=allEvents[x][0])
    current_heights += vals  # increment bar heights after plotting
</code></pre>
python|matplotlib|bar-chart|stacked-chart
1
606
45,581,807
Muscle Multiple Sequence Alignment with Biopython?
<p>I just learned to use python (and Biopython) so this question may bespeak my inexperience.</p> <p>In order to carry out MSA of sequences in a file (FileA.fasta), I use the following code:</p> <pre><code>from Bio.Align.Applications import MuscleCommandline inp = 'FileA.fasta' outp = 'FileB.fasta' cline = MuscleCommandline(input=inp, out=outp) cline() </code></pre> <p>I get the following error:</p> <pre><code>ApplicationError ... Non-zero return code 127 from 'muscle -in FileA.fasta -out FileB.fasta', message '/bin/sh: muscle: command not found' </code></pre> <p>I know that this has something to do with the executable not being in my working PATH. The Biopython tutorial suggests that I update the PATH to include the location of Muscle Tools and it gives an example of this for Windows, but I don't know how to do this for MAC.</p> <p>Please help. </p> <p>Thank you.</p>
<p>First make sure you know where you installed <code>muscle</code>. If, for example, you installed muscle in:</p> <pre><code>/usr/bin/muscle3.8.31_i86darwin64 </code></pre> <p>then you <a href="https://stackoverflow.com/questions/7703041/editing-path-variable-on-mac">edit <code>/etc/paths</code></a> with:</p> <pre><code>$ sudo vi /etc/paths </code></pre> <p>Each entry is separated by line breaks:</p> <pre><code>/usr/local/bin /bin /usr/sbin /sbin </code></pre> <p>Add the appropriate path (in this example <code>/usr/bin</code>) to the list. Save with <a href="https://www.cyberciti.biz/faq/linux-unix-vim-save-and-quit-command/" rel="nofollow noreferrer"><code>wq!</code></a></p> <p>If the binary is not named exactly <code>muscle</code>, also rename it or create a symlink, e.g. <code>sudo ln -s /usr/bin/muscle3.8.31_i86darwin64 /usr/local/bin/muscle</code>, since Biopython invokes it as <code>muscle</code> by default.</p> <p>Now, make sure <code>muscle</code> is on your path. Try to run</p> <pre><code>muscle -in FileA.fasta -out FileB.fasta </code></pre> <p>If that works, the BioPython code should work as well.</p>
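From Python, you can check up front whether the executable Biopython will call is actually visible on the PATH (a small diagnostic sketch; replace `"muscle"` if your binary has a different name):

```python
import shutil

exe = shutil.which("muscle")  # returns None if not found on PATH
if exe is None:
    print("muscle not found on PATH - MuscleCommandline will fail (exit code 127)")
else:
    print("found:", exe)
```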
python|alignment|bioinformatics|biopython
1
607
45,425,355
Two dimensional FFT using python results in slightly shifted frequency
<p>I know there have been several questions about using the Fast Fourier Transform (FFT) method in python, but unfortunately none of them could help me with my problem:</p> <p>I want to use python to calculate the Fast Fourier Transform of a given two dimensional signal f, i.e. f(x,y). Pythons documentation helps a lot, solving a few issues, which the FFT brings with it, but i still end up with a slightly shifted frequency compared to the frequency i expect it to show. Here is my python code:</p> <pre><code>from scipy.fftpack import fft, fftfreq, fftshift import matplotlib.pyplot as plt import numpy as np import math fq = 3.0 # frequency of signal to be sampled N = 100.0 # Number of sample points within interval, on which signal is considered x = np.linspace(0, 2.0 * np.pi, N) # creating equally spaced vector from 0 to 2pi, with spacing 2pi/N y = x xx, yy = np.meshgrid(x, y) # create 2D meshgrid fnc = np.sin(2 * np.pi * fq * xx) # create a signal, which is simply a sine function with frequency fq = 3.0, modulating the x(!) 
direction ft = np.fft.fft2(fnc) # calculating the fft coefficients dx = x[1] - x[0] # spacing in x (and also y) direction (real space) sampleFrequency = 2.0 * np.pi / dx nyquisitFrequency = sampleFrequency / 2.0 freq_x = np.fft.fftfreq(ft.shape[0], d = dx) # return the DFT sample frequencies freq_y = np.fft.fftfreq(ft.shape[1], d = dx) freq_x = np.fft.fftshift(freq_x) # order sample frequencies, such that 0-th frequency is at center of spectrum freq_y = np.fft.fftshift(freq_y) half = len(ft) / 2 + 1 # calculate half of spectrum length, in order to only show positive frequencies plt.imshow( 2 * abs(ft[:half,:half]) / half, aspect = 'auto', extent = (0, freq_x.max(), 0, freq_y.max()), origin = 'lower', interpolation = 'nearest', ) plt.grid() plt.colorbar() plt.show() </code></pre> <p>And what i get out of this when running it, is:</p> <p><img src="https://i.stack.imgur.com/n3V4e.png" alt="FFT of signal"></p> <p>Now you see that the frequency in x direction is not exactly at <code>fq = 3</code>, but slightly shifted to the left. Why is this? I would assume that is has to do with the fact, that FFT is an algorithm using symmetry arguments and </p> <pre><code>half = len(ft) / 2 + 1 </code></pre> <p>is used to show the frequencies at the proper place. But I don't quite understand what the exact problem is and how to fix it.</p> <p>Edit: I have also tried using a higher sampling frequency (N = 10000.0), which did not solve the issue, but instead shifted the frequency slightly too far to the right. So i am pretty sure that the problem is not the sampling frequency.</p> <p>Note: I'm aware of the fact, that the leakage effect leads to unphysical amplitudes here, but in this post I am primarily interested in the correct frequencies.</p>
<p>I found a number of issues</p> <p>you use <code>2 * np.pi</code> twice, you should choose one of either linspace or the arg to sine as radians if you want a nice integer number of cycles</p> <p>additionally <code>np.linspace</code> defaults to <code>endpoint=True</code>, giving you an extra point for 101 instead of 100</p> <pre><code>fq = 3.0 # frequency of signal to be sampled N = 100 # Number of sample points within interval, on which signal is considered x = np.linspace(0, 1, N, endpoint=False) # creating equally spaced vector from 0 to 2pi, with spacing 2pi/N y = x xx, yy = np.meshgrid(x, y) # create 2D meshgrid fnc = np.sin(2 * np.pi * fq * xx) # create a signal, which is simply a sine function with frequency fq = 3.0, modulating the x(!) direction </code></pre> <p>you can check these issues: </p> <pre><code>len(x) Out[228]: 100 plt.plot(fnc[0]) </code></pre> <p>fixing the linspace endpoint now means you have an even number of fft bins so you drop the <code>+ 1</code> in the <code>half</code> calc</p> <p><code>matshow()</code> appears to have better defaults, your <code>extent = (0, freq_x.max(), 0, freq_y.max()),</code> in <code>imshow</code> appears to fubar the fft bin numbering</p> <pre><code>from scipy.fftpack import fft, fftfreq, fftshift import matplotlib.pyplot as plt import numpy as np import math fq = 3.0 # frequency of signal to be sampled N = 100 # Number of sample points within interval, on which signal is considered x = np.linspace(0, 1, N, endpoint=False) # creating equally spaced vector from 0 to 2pi, with spacing 2pi/N y = x xx, yy = np.meshgrid(x, y) # create 2D meshgrid fnc = np.sin(2 * np.pi * fq * xx) # create a signal, which is simply a sine function with frequency fq = 3.0, modulating the x(!) 
direction plt.plot(fnc[0]) ft = np.fft.fft2(fnc) # calculating the fft coefficients #dx = x[1] - x[0] # spacing in x (and also y) direction (real space) #sampleFrequency = 2.0 * np.pi / dx #nyquisitFrequency = sampleFrequency / 2.0 # #freq_x = np.fft.fftfreq(ft.shape[0], d=dx) # return the DFT sample frequencies #freq_y = np.fft.fftfreq(ft.shape[1], d=dx) # #freq_x = np.fft.fftshift(freq_x) # order sample frequencies, such that 0-th frequency is at center of spectrum #freq_y = np.fft.fftshift(freq_y) half = len(ft) // 2 # calculate half of spectrum length, in order to only show positive frequencies plt.matshow( 2 * abs(ft[:half, :half]) / half, aspect='auto', origin='lower' ) plt.grid() plt.colorbar() plt.show() </code></pre> <p>zoomed the plot: <a href="https://i.stack.imgur.com/v1Nzl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/v1Nzl.png" alt="enter image description here"></a></p>
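The key point - an integer number of cycles sampled on an endpoint-free grid lands exactly in bin `fq`, with no shift - can be verified in 1D (a minimal sketch of the answer's fix):

```python
import numpy as np

fq, N = 3.0, 100
x = np.linspace(0, 1, N, endpoint=False)  # N samples, no duplicated endpoint
sig = np.sin(2 * np.pi * fq * x)

spectrum = np.abs(np.fft.fft(sig))
peak_bin = int(np.argmax(spectrum[: N // 2]))  # positive frequencies only
assert peak_bin == 3                           # exactly at fq, no shift
```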
python|fft|dft
3
608
28,467,719
Monkey patching Python Property setters (and getters?)
<p>So, monkey patching is pretty awesome, but what if I want to monkey patch a <code>@property</code>?</p> <p>For example, to monkey patch a method:</p> <pre><code>def new_method(): print('do stuff') SomeClass.some_method = new_method </code></pre> <p>however, properties in python re-write the = sign.</p> <p>Quick example, lets say I want to modify x to be 4. How would I go about doing that?:</p> <pre><code>class MyClass(object): def __init__(self): self.__x = 3 @property def x(self): return self.__x @x.setter def x(self, value): if value != 3: print('Nice try') else: self.__x = value foo = MyClass() foo.x = 4 print(foo.x) foo.__x = 4 print(foo.x) </code></pre> <blockquote> <p>Nice try </p> <p>3</p> <p>3</p> </blockquote>
<p>Using <code>_ClassName__attribute</code>, you can access the attribute:</p> <pre><code>&gt;&gt;&gt; class MyClass(object): ... def __init__(self): ... self.__x = 3 ... @property ... def x(self): ... return self.__x ... @x.setter ... def x(self, value): ... if value != 3: ... print('Nice try') ... else: ... self.__x = value ... &gt;&gt;&gt; foo = MyClass() &gt;&gt;&gt; foo._MyClass__x = 4 &gt;&gt;&gt; foo.x 4 </code></pre> <p>See <a href="https://docs.python.org/2/tutorial/classes.html#private-variables-and-class-local-references" rel="nofollow">Private Variables and Class-local References - Python tutorial</a>, especially parts that mention about <em>name mangling</em>.</p>
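If the goal really is to monkey patch the property itself (rather than bypass it via name mangling on one instance), you can replace the property object on the class - a sketch:

```python
class MyClass(object):
    def __init__(self):
        self.__x = 3          # stored as _MyClass__x via name mangling

    @property
    def x(self):
        return self.__x

    @x.setter
    def x(self, value):
        if value != 3:
            print('Nice try')
        else:
            self.__x = value

def _get_x(self):
    # Outside the class body there is no name mangling, so the
    # mangled attribute name must be spelled out explicitly.
    return self._MyClass__x

def _set_x(self, value):
    self._MyClass__x = value

# Replace the restrictive property on the class with a permissive one.
MyClass.x = property(_get_x, _set_x)

foo = MyClass()
foo.x = 4
print(foo.x)  # -> 4
```

Because the property lives on the class, this patch affects every instance, not just `foo`.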
python|monkeypatching
2
609
41,410,238
Path Finder code, __getitem__ TypeError
<p>I am trying to make a "path finder"</p> <pre><code>def find_all_paths(start, end, graph, path=[]): path = path + [start] if start == end: return [path] paths = [] for node in graph[start]: if node not in path: newpaths = find_all_paths(graph, node, end, path) for newpath in newpaths: paths.append(newpath) return paths graph={1: ['2'], 2: ['3', '4', '5'], 3: ['4'], 4: ['5', '6'], 5: [], 6: []} </code></pre> <p>if I enter <code>find_all_paths(2,5,graph)</code> in the shell I should get back all the paths that go from key 2 in the graph dictionary to the the 5 values a proper result should be something like</p> <pre><code>path=[[2,5],[2,3,4,5][2,4,5]] </code></pre> <p>the code keeps giving values errors such as </p> <pre><code> for node in graph[start]: TypeError: 'int' object has no attribute '__getitem__' </code></pre> <p>could someone please help me get this thing running</p>
<p>There are several awkwardnesses and errors:</p> <p>Instead of initialising the <em>path</em> parameter with a mutable list default, use <code>None</code>, and build a <em>new</em> list in the function body. (Appending to a shared list would let sibling branches of the recursion corrupt each other's paths.)</p> <pre><code>def find_all_paths(start, end, graph, path=None): path = (path or []) + [start] </code></pre> <p>The values passed in the recursive <code>find_all_paths</code> call don't respect the signature. Write this instead:</p> <pre><code>newpaths = find_all_paths(node, end, graph, path) </code></pre> <p>Since the values are integers, the graph must contain <code>int</code> instead of strings.</p> <pre><code>graph = {1: [2], 2: [3, 4, 5], 3: [4], 4: [5, 6], 5: [], 6: []} </code></pre> <p>Here is the fixed version of your code:</p> <pre><code>def find_all_paths(start, end, graph, path=None):
    path = (path or []) + [start]
    if start == end:
        return [path]
    paths = []
    for node in graph[start]:
        if node not in path:
            newpaths = find_all_paths(node, end, graph, path)
            for newpath in newpaths:
                paths.append(newpath)
    return paths
</code></pre> <p>If you try this:</p> <pre><code>graph = {1: [2], 2: [3, 4, 5], 3: [4], 4: [5, 6], 5: [], 6: []}
print(find_all_paths(2, 5, graph))
</code></pre> <p>You'll get:</p> <pre><code>[[2, 3, 4, 5], [2, 4, 5], [2, 5]]
</code></pre>
python|python-2.7|typeerror
2
610
6,554,455
Loading SQLite3 values into Python variables
<p>I have a project built in Python 2.7, using SQLite3 as the database. I need to know how to load an item from a particular row and column into an existing Python variable.</p> <p>TY!</p>
<p>Here are the basic steps:</p> <pre><code>import sqlite3 conn = sqlite3.connect(':memory:') curs = conn.cursor() results = curs.execute( """SELECT mycol FROM mytable WHERE somecol = ?;""", (some_var,) ).fetchall() curs.close() conn.close() </code></pre> <p>For further research you can look into using a context manager (<code>with</code> statement), and how to fetch results by column name. Here's an example:</p> <pre><code>with sqlite3.connect(':memory:') as conn: curs = conn.cursor() curs.row_factory = sqlite3.Row try: results = curs.execute( """SELECT mycol FROM mytable WHERE somecol = ?;""", (some_var,) ).fetchall() # you would put your exception-handling code here finally: curs.close() </code></pre> <p>The context manager handles transactions for you: it commits on success and rolls back on an exception. Note that it does <em>not</em> close the connection itself - you still need <code>conn.close()</code>. The benefit of the <code>sqlite3.Row</code> factory is that you can access column values by name as opposed to a less-meaningful integer index.</p>
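A minimal, self-contained sketch of loading a single value from one row and column into a Python variable (the table and column names here are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
curs = conn.cursor()
curs.execute("CREATE TABLE mytable (somecol INTEGER, mycol TEXT)")
curs.execute("INSERT INTO mytable VALUES (1, 'hello')")

# fetchone() returns a single row (a tuple) or None if there is no match
row = curs.execute(
    "SELECT mycol FROM mytable WHERE somecol = ?;", (1,)
).fetchone()
my_var = row[0] if row is not None else None
print(my_var)  # -> hello

conn.close()
```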
python|sqlite
3
611
56,909,602
I am getting pip version upgrade message while installing pygmaps
<p>I am getting this error while installing <code>pygmaps</code> package in pycharm.</p> <pre><code>Could not find a version that satisfies the requirement pygmaps (from versions: ) No matching distribution found for pygmaps You are using pip version 10.0.1, however version 19.1.1 is available. You should consider upgrading via the 'python -m pip install --upgrade pip' command. </code></pre> <p>I have already upgraded pip version to 19.1.1, but still showing this error.</p>
<p>Can you try this one?</p> <pre><code>pip install git+https://github.com/thearn/pygmaps-extended </code></pre> <p>If that doesn't work, go to <a href="https://code.google.com/archive/p/pygmaps/downloads" rel="nofollow noreferrer">https://code.google.com/archive/p/pygmaps/downloads</a>, download it manually, and add it to your site-packages.</p>
python|pip|version
0
612
57,089,919
Where is spark/pyspark saving my parquet files?
<p>I'm saving a dataframe in pyspark to a particular location, but cannot see the file/files in the directory. Where are they? How do I get to them out side of pyspark? And how do I delete them? And what is it that I am missing about how spark works? </p> <p>Here's how I save them...</p> <pre><code>df.write.format('parquet').mode('overwrite').save('path/to/filename') </code></pre> <p>And subsequently the following works...</p> <pre><code>df_ntf = spark.read.format('parquet').load('path/to/filename') </code></pre> <p>But no files ever appear in path/to/filename. </p> <p>This is on a cloudera cluster, let me know if any other details are needed to diagnose the problem.</p> <p>EDIT - This is the command I use to set up my spark contexts.</p> <pre><code>os.environ['SPARK_HOME'] = "/opt/cloudera/parcels/Anaconda/../SPARK2/lib/spark2/" os.environ['PYSPARK_PYTHON'] = "/opt/cloudera/parcels/Anaconda/envs/python3/bin/python" conf = SparkConf() conf.setAll([('spark.executor.memory', '3g'), ('spark.executor.cores', '3'), ('spark.num.executors', '29'), ('spark.cores.max', '4'), ('spark.driver.memory', '2g'), ('spark.pyspark.python', '/opt/cloudera/parcels/Anaconda/envs/python3/bin/python'), ('spark.dynamicAllocation.enabled', 'false'), ('spark.sql.execution.arrow.enabled', 'true'), ('spark.sql.crossJoin.enabled', 'true') ]) print("Creating Spark Context at {}".format(datetime.now())) spark_ctx = SparkContext.getOrCreate(conf=conf) spark = SparkSession(spark_ctx) hive_ctx = HiveContext(spark_ctx) sql_ctx = SQLContext(spark_ctx) </code></pre>
<p>Ok, a colleague and I have figured it out. It's not complicated but we are but simple data scientists so it wasn't obvious to us.</p> <p>Basically the files were being saved in a different hdfs drive, not the drive from which we run our queries using Jupyter notebooks.</p> <p>We found them by doing;</p> <pre><code>hdfs dfs -ls -h /user/my.name/path/to </code></pre>
python-3.x|apache-spark|pyspark|cloudera
1
613
57,278,821
How does one fit multiple independent and overlapping Lorentzian peaks in a set of data?
<p>I need to fit several Lorentzian peaks in the same dataset, some of which are overlapping. What I need most from the function is the peak positions (centers) however I can't seem to fit all the peaks in these data. </p> <p>I first tried using scipy's optimize curve fit, however I wasn't able to get the bounds to work and it would try to fit the full range of spectra. I've been using the python package lmfit with decent results, however I seem to be unable to get the fit to pick the overlapping peaks well. </p> <p>you can see the raw spectra with marked peaks <a href="https://i.stack.imgur.com/dpd3V.png" rel="nofollow noreferrer">here</a> and the results of my fitting <a href="https://i.stack.imgur.com/83oE3.png" rel="nofollow noreferrer">here</a></p> <p>You can find the data I am working with <a href="https://github.com/agust2010/test_data" rel="nofollow noreferrer">here</a></p> <pre class="lang-py prettyprint-override"><code>import os import matplotlib.pyplot as plt import numpy as np from lmfit.models import LorentzianModel test=np.loadtxt('filename.txt') plt.figure() # lz1 = LorentzianModel(prefix='lz1_') pars=lz1.guess(y,x=x) pars.update(lz1.make_params()) pars['lz1_center'].set(0.61, min=0.5, max=0.66) pars['lz1_amplitude'].set(0.028) pars['lz1_sigma'].set(0.7) lz2 = LorentzianModel(prefix='lz2_') pars.update(lz2.make_params()) pars['lz2_center'].set(0.76, min=0.67, max=0.84) pars['lz2_amplitude'].set(0.083) pars['lz2_sigma'].set(0.04) lz3 = LorentzianModel(prefix='lz3_') pars.update(lz3.make_params()) pars['lz3_center'].set(0.85,min=0.84, max=0.92) pars['lz3_amplitude'].set(0.048) pars['lz3_sigma'].set(0.05) lz4 = LorentzianModel(prefix='lz4_') pars.update(lz4.make_params()) pars['lz4_center'].set(0.98, min=0.94, max=1.0) pars['lz4_amplitude'].set(0.028) pars['lz4_sigma'].set(0.02) lz5 = LorentzianModel(prefix='lz5_') pars.update(lz5.make_params()) pars['lz5_center'].set(1.1, min=1.0, max=1.2) pars['lz5_amplitude'].set(0.037) pars['lz5_sigma'].set(0.07) 
lz6 = LorentzianModel(prefix='lz6_') pars.update(lz6.make_params()) pars['lz6_center'].set(1.4, min=1.2, max=1.5) pars['lz6_amplitude'].set(0.048) pars['lz6_sigma'].set(0.45) lz7 = LorentzianModel(prefix='lz7_') pars.update(lz7.make_params()) pars['lz7_center'].set(1.54,min=1.4, max=1.6) pars['lz7_amplitude'].set(0.037) pars['lz7_sigma'].set(0.03) lz8 = LorentzianModel(prefix='lz8_') pars.update(lz8.make_params()) pars['lz8_center'].set(1.7, min=1.6, max=1.8) pars['lz8_amplitude'].set(0.04) pars['lz8_sigma'].set(0.17) mod = lz1 + lz2 + lz3 + lz4 + lz5 + lz6 +lz7 + lz8 init = mod.eval(pars,x=x) out=mod.fit(y,pars,x=x) print(out.fit_report(min_correl=0.5)) plt.scatter(x,y, s=1) plt.plot(x,init,'k:') plt.plot(x,out.best_fit, 'r-') </code></pre>
<p>Actually, just adding a quadratic background and lifting the bounds on the centroids should give a decent fit.</p> <p>Using your data, I modified your example a little::</p> <pre><code>#!/usr/bin/env python import matplotlib.pyplot as plt import numpy as np from lmfit.models import LorentzianModel, QuadraticModel test = np.loadtxt('spectra.txt') xdat = test[0, :] ydat = test[1, :] def add_peak(prefix, center, amplitude=0.005, sigma=0.05): peak = LorentzianModel(prefix=prefix) pars = peak.make_params() pars[prefix + 'center'].set(center) pars[prefix + 'amplitude'].set(amplitude) pars[prefix + 'sigma'].set(sigma, min=0) return peak, pars model = QuadraticModel(prefix='bkg_') params = model.make_params(a=0, b=0, c=0) rough_peak_positions = (0.61, 0.76, 0.85, 0.99, 1.10, 1.40, 1.54, 1.7) for i, cen in enumerate(rough_peak_positions): peak, pars = add_peak('lz%d_' % (i+1), cen) model = model + peak params.update(pars) init = model.eval(params, x=xdat) result = model.fit(ydat, params, x=xdat) comps = result.eval_components() print(result.fit_report(min_correl=0.5)) plt.plot(xdat, ydat, label='data') plt.plot(xdat, result.best_fit, label='best fit') for name, comp in comps.items(): plt.plot(xdat, comp, '--', label=name) plt.legend(loc='upper right') plt.show() </code></pre> <p>which prints a report of</p> <pre><code>[[Model]] ((((((((Model(parabolic, prefix='bkg_') + Model(lorentzian, prefix='lz1_')) + Model(lorentzian, prefix='lz2_')) + Model(lorentzian, prefix='lz3_')) + Model(lorentzian, prefix='lz4_')) + Model(lorentzian, prefix='lz5_')) + Model(lorentzian, prefix='lz6_')) + Model(lorentzian, prefix='lz7_')) + Model(lorentzian, prefix='lz8_')) [[Fit Statistics]] # fitting method = leastsq # function evals = 1101 # data points = 800 # variables = 27 chi-square = 7.3824e-04 reduced chi-square = 9.5504e-07 Akaike info crit = -11062.6801 Bayesian info crit = -10936.1956 [[Variables]] bkg_c: 0.03630504 +/- 9.4269e-04 (2.60%) (init = 0) bkg_b: -0.05150031 +/- 0.00272084 
(5.28%) (init = 0) bkg_a: 0.02285577 +/- 0.00109543 (4.79%) (init = 0) lz1_sigma: 0.03853490 +/- 0.00224206 (5.82%) (init = 0.05) lz1_center: 0.60596282 +/- 0.00101699 (0.17%) (init = 0.61) lz1_amplitude: 0.00121362 +/- 8.0862e-05 (6.66%) (init = 0.005) lz1_fwhm: 0.07706979 +/- 0.00448412 (5.82%) == '2.0000000*lz1_sigma' lz1_height: 0.01002487 +/- 3.1221e-04 (3.11%) == '0.3183099*lz1_amplitude/max(2.220446049250313e-16, lz1_sigma)' lz2_sigma: 0.03534226 +/- 3.5893e-04 (1.02%) (init = 0.05) lz2_center: 0.76784323 +/- 1.9002e-04 (0.02%) (init = 0.76) lz2_amplitude: 0.00738785 +/- 8.9378e-05 (1.21%) (init = 0.005) lz2_fwhm: 0.07068452 +/- 7.1786e-04 (1.02%) == '2.0000000*lz2_sigma' lz2_height: 0.06653864 +/- 3.6663e-04 (0.55%) == '0.3183099*lz2_amplitude/max(2.220446049250313e-16, lz2_sigma)' lz3_sigma: 0.03948780 +/- 0.00111507 (2.82%) (init = 0.05) lz3_center: 0.85427526 +/- 5.4206e-04 (0.06%) (init = 0.85) lz3_amplitude: 0.00317016 +/- 1.1244e-04 (3.55%) (init = 0.005) lz3_fwhm: 0.07897560 +/- 0.00223015 (2.82%) == '2.0000000*lz3_sigma' lz3_height: 0.02555459 +/- 3.9771e-04 (1.56%) == '0.3183099*lz3_amplitude/max(2.220446049250313e-16, lz3_sigma)' lz4_sigma: 0.02983045 +/- 0.00283845 (9.52%) (init = 0.05) lz4_center: 0.99544342 +/- 0.00142552 (0.14%) (init = 0.99) lz4_amplitude: 6.9114e-04 +/- 7.6016e-05 (11.00%) (init = 0.005) lz4_fwhm: 0.05966089 +/- 0.00567690 (9.52%) == '2.0000000*lz4_sigma' lz4_height: 0.00737492 +/- 3.6918e-04 (5.01%) == '0.3183099*lz4_amplitude/max(2.220446049250313e-16, lz4_sigma)' lz5_sigma: 0.06666333 +/- 0.00196152 (2.94%) (init = 0.05) lz5_center: 1.10162076 +/- 7.8293e-04 (0.07%) (init = 1.1) lz5_amplitude: 0.00522275 +/- 2.2587e-04 (4.32%) (init = 0.005) lz5_fwhm: 0.13332666 +/- 0.00392304 (2.94%) == '2.0000000*lz5_sigma' lz5_height: 0.02493807 +/- 4.7491e-04 (1.90%) == '0.3183099*lz5_amplitude/max(2.220446049250313e-16, lz5_sigma)' lz6_sigma: 0.11712113 +/- 0.00307555 (2.63%) (init = 0.05) lz6_center: 1.43220451 +/- 0.00102240 
(0.07%) (init = 1.4) lz6_amplitude: 0.01215451 +/- 5.1928e-04 (4.27%) (init = 0.005) lz6_fwhm: 0.23424227 +/- 0.00615109 (2.63%) == '2.0000000*lz6_sigma' lz6_height: 0.03303334 +/- 6.2184e-04 (1.88%) == '0.3183099*lz6_amplitude/max(2.220446049250313e-16, lz6_sigma)' lz7_sigma: 0.02603963 +/- 0.00335175 (12.87%) (init = 0.05) lz7_center: 1.55545329 +/- 0.00152567 (0.10%) (init = 1.54) lz7_amplitude: 4.6978e-04 +/- 7.1036e-05 (15.12%) (init = 0.005) lz7_fwhm: 0.05207926 +/- 0.00670351 (12.87%) == '2.0000000*lz7_sigma' lz7_height: 0.00574266 +/- 3.8805e-04 (6.76%) == '0.3183099*lz7_amplitude/max(2.220446049250313e-16, lz7_sigma)' lz8_sigma: 0.11332337 +/- 0.00336106 (2.97%) (init = 0.05) lz8_center: 1.79132485 +/- 0.00117968 (0.07%) (init = 1.7) lz8_amplitude: 0.00700579 +/- 3.2606e-04 (4.65%) (init = 0.005) lz8_fwhm: 0.22664674 +/- 0.00672212 (2.97%) == '2.0000000*lz8_sigma' lz8_height: 0.01967830 +/- 4.2422e-04 (2.16%) == '0.3183099*lz8_amplitude/max(2.220446049250313e-16, lz8_sigma)' [[Correlations]] (unreported correlations are &lt; 0.500) C(bkg_b, bkg_a) = -0.993 C(bkg_c, bkg_b) = -0.981 C(bkg_c, bkg_a) = 0.966 C(lz6_sigma, lz6_amplitude) = 0.963 C(lz8_sigma, lz8_amplitude) = 0.935 C(lz5_sigma, lz5_amplitude) = 0.933 C(bkg_b, lz6_amplitude) = -0.907 C(lz3_sigma, lz3_amplitude) = 0.905 &lt;snip&gt; </code></pre> <p>and shows a plot of <a href="https://i.stack.imgur.com/aXoyt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aXoyt.png" alt="enter image description here"></a></p> <p>That may not be perfect, but should give you a pretty good start.</p>
python|curve-fitting|data-fitting|model-fitting|scipy-optimize
2
614
25,469,950
Matplot: How to plot true/false or active/deactive data?
<p>I want to plot <code>true/false</code> or <code>active/deactive</code> binary data similar to the following picture: <img src="https://i.stack.imgur.com/GncSv.jpg" alt="True/False Plot"></p> <p>The horizontal axis is time and the vertical axis is some entities (here, some sensors) which are active (white) or deactive (black). How can I plot such graphs using <code>pyplot</code>?</p> <p>I searched for the name of these graphs but I couldn't find it.</p>
<p>What you are looking for is <code>imshow</code>:</p> <pre><code>import matplotlib.pyplot as plt import numpy as np # get some data with true @ probability 80 % data = np.random.random((20, 500)) &gt; .2 fig = plt.figure() ax = fig.add_subplot(111) ax.imshow(data, aspect='auto', cmap=plt.cm.gray, interpolation='nearest') </code></pre> <p>Then you will just have to get the Y labels from somewhere.</p> <p><img src="https://i.stack.imgur.com/rqZ5s.png" alt="enter image description here"></p> <p>It seems that the image in your question has some interpolation in the image. Let us set a few more things:</p> <pre><code>import matplotlib.pyplot as plt import numpy as np # create a bit more realistic-looking data # - looks complicated, but just has a constant switch-off and switch-on probabilities # per column # - the result is a 20 x 500 array of booleans p_switchon = 0.02 p_switchoff = 0.05 data = np.empty((20,500), dtype='bool') data[:,0] = np.random.random(20) &lt; .2 for c in range(1, 500): r = np.random.random(20) data[data[:,c-1],c] = (r &gt; p_switchoff)[data[:,c-1]] data[-data[:,c-1],c] = (r &lt; p_switchon)[-data[:,c-1]] # create some labels labels = [ "label_{0:d}".format(i) for i in range(20) ] # this is the real plotting part fig = plt.figure() ax = fig.add_subplot(111) ax.imshow(data, aspect='auto', cmap=plt.cm.gray) ax.set_yticks(np.arange(len(labels))) ax.set_yticklabels(labels) </code></pre> <p>creates <img src="https://i.stack.imgur.com/K2rdl.png" alt="enter image description here"></p> <p>However, the interpolation is not necessarily a good thing here. 
To make the different rows easier to separate, one might use colors:</p> <pre><code>import matplotlib.pyplot as plt import matplotlib.colors import numpy as np # create a bit more realistic-looking data # - looks complicated, but just has a constant switch-off and switch-on probabilities # per column # - the result is a 20 x 500 array of booleans p_switchon = 0.02 p_switchoff = 0.05 data = np.empty((20,500), dtype='bool') data[:,0] = np.random.random(20) &lt; .2 for c in range(1, 500): r = np.random.random(20) data[data[:,c-1],c] = (r &gt; p_switchoff)[data[:,c-1]] data[-data[:,c-1],c] = (r &lt; p_switchon)[-data[:,c-1]] # create some labels labels = [ "label_{0:d}".format(i) for i in range(20) ] # create a color map with random colors colmap = matplotlib.colors.ListedColormap(np.random.random((21,3))) colmap.colors[0] = [0,0,0] # create some colorful data: data_color = (1 + np.arange(data.shape[0]))[:, None] * data # this is the real plotting part fig = plt.figure() ax = fig.add_subplot(111) ax.imshow(data_color, aspect='auto', cmap=colmap, interpolation='nearest') ax.set_yticks(np.arange(len(labels))) ax.set_yticklabels(labels) </code></pre> <p>creates</p> <p><img src="https://i.stack.imgur.com/NIHua.png" alt="enter image description here"></p> <p>Of course, you will want to use something less strange as the coloring scheme, but that is really up to your artistic views. Here the trick is that all <code>True</code> elements on row <code>n</code> have value <code>n+1</code> and, and all <code>False</code> elements are <code>0</code> in <code>data_color</code>. This makes it possible to create a color map. Naturally, if you want a cyclic color map with two or three colors, just use the modulus of <code>data_color</code> in <code>imshow</code> by, e.g. <code>data_color % 3</code>.</p>
python|matplotlib|plot|scipy
31
615
25,590,456
Can not clone cStringIO object properly
<p>I have the following code to get an image from a url:</p> <pre><code>im = cStringIO.StringIO(image_buffer) </code></pre> <p>Now I have to do different operations on the original image, such as:</p> <pre><code>Image.open(im).crop(box=(1, 1, 1, 1)) </code></pre> <p>But this will edit <code>im</code> itself, so I can't reuse the Image.open command.</p> <p>Therefore I would like to clone the <code>im</code> object. I have tried that by using the following:</p> <pre><code>copy.deepcopy(im) copy.copy(im) im[:] </code></pre> <p>But none of those seem to work; the copy ones even throw the following exception:</p> <pre><code>object.__new__(cStringIO.StringI) is not safe, use cStringIO.StringI.__new__() </code></pre> <p>I have tried to search for this error but it's not clear to me why it refuses to clone the <code>im</code> object.</p> <p>This is written in Python (using the Django framework).</p> <p>I am using the PIL library for image manipulations.</p>
<p>You can create a copy of a <code>cStringIO.StringIO</code> file object by simply getting out the string value and creating a new object, using the <a href="https://docs.python.org/2/library/stringio.html#StringIO.StringIO.getvalue" rel="nofollow"><code>StringIO.getvalue()</code> method</a>:</p> <pre><code>new_file = cStringIO.StringIO(original.getvalue()) </code></pre> <p>That said, store a reference to the <em>image object</em> instead, and apply operations to that:</p> <pre><code>image = Image.open(im) image.crop(box=(1, 1, 1, 1)) </code></pre> <p>This then allows you to also save the image to a new file (in-memory or otherwise) after you applied all the transformations.</p> <p>You can more easily create additional copies of an image object with the <a href="http://pillow.readthedocs.org/en/latest/reference/Image.html#PIL.Image.Image.copy" rel="nofollow"><code>Image.copy()</code> method</a>:</p> <pre><code>image = Image.open(im) image_copy = image.copy() image.crop(box=(1, 1, 1, 1)) </code></pre> <p>Here <code>image_copy</code> remains uncropped.</p>
python|django|python-imaging-library|pillow
4
616
61,700,645
Add wT*x+b after CNN Python
<p>I have a problem.</p> <p>I have to take the output of the last conv layer of EfficientNet (shape=(,7,7,1280); I call this x) and then calculate H = wT*x+b. My w is [49,49]. After that I have to apply softmax on H and then do <a href="https://i.stack.imgur.com/yvjF1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yvjF1.png" alt="enter image description here"></a>.</p> <p>H and x have the same shape=[49,1280].</p> <p>I can't find anything that helps me translate this into Python code. Can you help me? Thanks.</p>
<p>I see you use Tensorflow only (I mean without Keras). </p> <p>If you want to multiply <code>H</code> and <code>X</code> elementwise, and <code>H</code> and <code>X</code> are tensors with the same shape, you may use the elementwise multiplication functionality available in Tensorflow. If they are not tensors, you may convert the variables into tensors. Check <a href="https://riptutorial.com/tensorflow/example/10033/elementwise-multiplication" rel="nofollow noreferrer">here</a> for documentation with all the information.</p>
python|tensorflow|keras|conv-neural-network|efficientnet
1
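Since the answer above describes the elementwise multiplication in prose only, here is a minimal NumPy sketch of the computation described in the question. The softmax axis and the bias shape are assumptions (pick whatever matches your attention definition), and the same operations map one-to-one onto TensorFlow ops if you stay in the graph:

```python
import numpy as np

# Stand-ins for the question's shapes: x is the conv output reshaped to
# [49, 1280], w is [49, 49]; the bias shape is an assumption.
rng = np.random.default_rng(0)
x = rng.random((49, 1280))
w = rng.random((49, 49))
b = np.zeros((49, 1))

H = w.T @ x + b                 # [49, 1280], same shape as x

# Softmax over the spatial axis (axis 0 here -- an assumption).
A = np.exp(H - H.max(axis=0, keepdims=True))
A = A / A.sum(axis=0, keepdims=True)

out = (A * x).sum(axis=0)       # elementwise product, then pooled over space
print(H.shape, out.shape)       # (49, 1280) (1280,)
```

In TensorFlow the equivalents would be `tf.matmul`, `tf.nn.softmax(..., axis=...)`, and plain `*` for the elementwise product.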
617
23,940,520
Removing lines from a text file using python and regular expressions
<p>I have some text files, and I want to remove all lines that begin with the asterisk (“*”).</p> <p>Made-up example:</p> <pre><code>words *remove me words words *remove me </code></pre> <p>My current code fails. It follows below:</p> <pre><code>import re program = open(program_path, "r") program_contents = program.readlines() program.close() new_contents = [] pattern = r"[^*.]" for line in program_contents: match = re.findall(pattern, line, re.DOTALL) if match.group(0): new_contents.append(re.sub(pattern, "", line, re.DOTALL)) else: new_contents.append(line) print new_contents </code></pre> <p>This produces ['', '', '', '', '', '', '<em>', '', '</em>', '', '*', ''], which is no goo.</p> <p>I’m very much a python novice, but I’m eager to learn. And I’ll eventually bundle this into a function (right now I’m just trying to figure it out in an ipython notebook).</p> <p>Thanks for the help! </p>
<p>You <em>don't</em> want to use a <code>[^...]</code> negative character class; you are matching <em>all</em> characters except for the <code>*</code> or <code>.</code> characters now.</p> <p><code>*</code> is a meta character, you want to escape that to <code>\*</code>. The <code>.</code> 'match any character' syntax needs a multiplier to match more than one. Don't use <code>re.DOTALL</code> here; you are operating on a line-by-line basis but don't want to erase the newline.</p> <p>There is no need to test first; if there is nothing to replace the original line is returned.</p> <pre><code>pattern = r"^\*.*" for line in program_contents: new_contents.append(re.sub(pattern, "", line)) </code></pre> <p>Demo:</p> <pre><code>&gt;&gt;&gt; import re &gt;&gt;&gt; program_contents = '''\ ... words ... *remove me ... words ... words ... *remove me ... '''.splitlines(True) &gt;&gt;&gt; new_contents = [] &gt;&gt;&gt; pattern = r"^\*.*" &gt;&gt;&gt; for line in program_contents: ... new_contents.append(re.sub(pattern, "", line)) ... &gt;&gt;&gt; new_contents ['words\n', '\n', 'words\n', 'words\n', '\n'] </code></pre>
python|regex
1
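For completeness, the same filtering can be done without regular expressions at all. A sketch using `str.startswith` on the sample data from the question (note that, unlike the `re.sub` answer, this drops matching lines entirely instead of blanking them):

```python
program_contents = ["words\n", "*remove me\n", "words\n", "words\n", "*remove me\n"]

# Keep only the lines that do not begin with an asterisk.
new_contents = [line for line in program_contents if not line.startswith("*")]
print(new_contents)  # ['words\n', 'words\n', 'words\n']
```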
618
24,335,419
Matplotlib -- mplot3d: triplot projected on z=0 axis in 3d plot?
<p>I'm trying to plot a function in two variables, piecewise defined on a set of known triangles, more or less like so:</p> <pre><code>import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D import random def f( x, y): if x + y &lt; 1: return 0 else: return 1 x = [0, 1, 1, 0] y = [0, 0, 1, 1] tris = [[0, 1, 3], [1, 2,3]] fig = plt.figure() ax = fig.add_subplot( 121) ax.triplot( x, y, tris) xs = [random.random() for _ in range( 100)] ys = [random.random() for _ in range( 100)] zs = [f(xs[i], ys[i]) for i in range( 100)] ax2 = fig.add_subplot( 122, projection='3d') ax2.scatter( xs, ys, zs) plt.show() </code></pre> <p>Ideally, I'd combine both subplots into one by projecting the triangles onto the axis z=0. I know this is possible with other variants of 2d plots, but not with triplot. Is it possible to get what I want?</p> <p>PS. this is a heavily simplified version of the actual implementation I am using right now, therefore the random scattering might seem a bit weird.</p>
<p>I'm not an expert, but this was an interesting problem. After doing some poking around, I think I got something close. I made the Triangulation object manually and then passed it and a z list of zeros into plot_trisurf, and it put the triangles in the right place on z=0.</p> <pre><code>import matplotlib.pyplot as plt import matplotlib.tri as tri from mpl_toolkits.mplot3d import Axes3D import random def f( x, y): if x + y &lt; 1: return 0 else: return 1 x = [0, 1, 1, 0] y = [0, 0, 1, 1] tris = [[0, 1, 3], [1, 2,3]] z = [0] * 4 triv = tri.Triangulation(x, y, tris) fig = plt.figure() ax = fig.add_subplot( 111, projection='3d') trip = ax.plot_trisurf( triv, z ) trip.set_facecolor('white') xs = [random.random() for _ in range( 100)] ys = [random.random() for _ in range( 100)] zs = [f(xs[i], ys[i]) for i in range( 100)] ax.scatter( xs, ys, zs) plt.show() </code></pre> <p>ETA: Added a call to set_facecolor on the Poly3DCollection to make it white rather than follow a colormap. Can be futzed with for desired effect...</p>
python|matplotlib|plot|triangulation|mplot3d
1
619
23,994,233
Calling a method from an existing instance
<p>My understanding of Object Orientated Programming is a little shaky so if you have any links that would help explain the concepts it would be great to see them!</p> <p>I've shortened the code somewhat. The basic principle is that I have a game that starts with an instance of the main Controller class. When the game is opened the Popup class is opened. The events happens as follows:</p> <ol> <li>The start button on the popup is clicked</li> <li>The method start_click() runs</li> <li>Which calls the method start_game() in the Controller instance</li> <li>Which in turn changes the game state to 'True' in the original Controller instance</li> </ol> <p>My problem is with step 3. The error message I get is:</p> <pre><code>TypeError: unbound method start_game() must be called with Controller instance as first argument (got nothing instead) </code></pre> <p>I guess there needs to be some reference to the Controller class in the StartPopUp class. But I don't quite understand how to create that reference? 
</p> <pre><code>import kivy kivy.require('1.8.0') from kivy.app import App from kivy.uix.widget import Widget from kivy.clock import Clock from kivy.properties import BooleanProperty, NumericProperty, ObjectProperty from kivy.uix.popup import Popup from kivy.lang import Builder Builder.load_string(''' &lt;StartPopUp&gt; size_hint: .2, .2 auto_dismiss: False title: 'Welcome' Button: text: 'Play' on_press: root.start_click() on_press: root.dismiss() ''') class StartPopUp(Popup): def __init__(self, **kw): super(StartPopUp, self).__init__(**kw) def start_click(self): Controller.start_game() class Controller(Widget): playing_label = BooleanProperty(False) #Intitial phase of game is off def __init__(self, **kw): super(Controller, self).__init__(**kw) def start_popup(self, dt): sp = StartPopUp() sp.open() def start_game(self): self.playing_label = True print self.playing_label class MoleHuntApp(App): def build(self): game = Controller() Clock.schedule_once(game.start_popup, 1) return game if __name__ == '__main__': MoleHuntApp().run() </code></pre> <p>Thanks in advance!</p>
<p>You can pass the instance like this</p> <pre><code>class StartPopUp(Popup): def __init__(self, controller, **kw): super(StartPopUp, self).__init__(**kw) self.controller = controller def start_click(self): self.controller.start_game() </code></pre> <p>and in Controller</p> <pre><code>def start_popup(self, dt): sp = StartPopUp(self) sp.open() </code></pre>
android|python|python-2.7|kivy
2
620
24,268,020
Compare 2 files in Python
<p>I am trying to compare two files, A and C, in Python and for some reason the double for loop doesn't seem to work properly:</p> <pre><code>with open(locationA + filenameC,'r') as fileC, open(locationA + filenameA,'r') as fileA: for lineC in fileC: fieldC = lineC.split('#') for lineA in fileA: fieldA = lineA.split('#') print 'UserID Clicks' + fieldC[0] print 'UserID Activities' + fieldA[0] if (fieldC[0] == fieldA[0]) and (fieldC[2] == fieldA[2]): print 'OK' </code></pre> <p>Here, only the first line of C seems to be compared; for the other lines, the "A loop" seems to be ignored.</p> <p>Can anyone help me with this?</p>
<p>Your problem is that once you iterate over <code>fileA</code> once you need to change the pointer to the beginning of the file again. So what you might do is create two lists from both files and iterate over them as many times as you want. For example:</p> <pre><code>fileC_list = fileC.readlines() fileA_list = fileA.readlines() for lineC in fileC_list: # do something for lineA in fileA_list: # do somethins </code></pre>
python
1
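The rewinding described in the answer above can also be done with `seek(0)` instead of materializing both files as lists. A runnable sketch with in-memory files (the `#`-separated sample data is made up for illustration):

```python
import io

# In-memory stand-ins for the two files (hypothetical sample data).
fileC = io.StringIO("u1#x#t1\nu2#y#t2\n")
fileA = io.StringIO("u1#a#t1\nu3#b#t3\n")

matches = []
for lineC in fileC:
    fieldC = lineC.rstrip("\n").split("#")
    fileA.seek(0)  # rewind A before every inner pass
    for lineA in fileA:
        fieldA = lineA.rstrip("\n").split("#")
        if fieldC[0] == fieldA[0] and fieldC[2] == fieldA[2]:
            matches.append(fieldC[0])
print(matches)  # ['u1']
```

For large files the list approach is usually faster, since `seek(0)` re-reads the file from disk on every outer iteration.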
621
72,096,196
Connection error between Jupyter and Elasticsearch (Docker)
<p>I am trying to make a connection from Jupyter Notebook to Elasticsearch both in Docker containers but connected to the same network (bridge).</p> <p>Here is my code:</p> <pre class="lang-py prettyprint-override"><code>elastic_client = Elasticsearch(hosts=[&quot;http://localhost:9200/&quot;], http_auth=('generator', 'generator')) elastic_index='data_generator_nvdi_geoloc' df_out = pandas.read_csv(local_destination_path) </code></pre> <p>I get this error:</p> <blockquote> <p>ConnectionError: Connection error caused by: ConnectionError(Connection error caused by: NewConnectionError(&lt;urllib3.connection.HTTPConnection object at 0x7f7ada675710&gt;: Failed to establish a new connection: [Errno 111] Connection refused))</p> </blockquote> <p>I think it may be a problem of the containers, but both are connected to the same network and I don't know how to solve it.</p>
<p>The issue in your case is using <b>localhost:9200</b> as the connection string, because ES is not &quot;localhost&quot; inside your Jupyter container. Each container gets its own localhost reference. You need to adjust the connection string to your Docker DNS record, i.e. the service/container name you set up inside your docker-compose.yml file. Based on your comment you should use <b>elasticsearch:9200</b> as your connection string.</p>
python|docker|elasticsearch|jupyter-notebook
2
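As a concrete sketch of the fix, only the host part of the URL changes. The service name `elasticsearch` is an assumption; use whatever name your docker-compose.yml actually declares:

```python
# Inside the Jupyter container, "localhost" resolves to that container itself,
# so the host must be the Compose service name instead.
service_name = "elasticsearch"  # assumption: the service name in docker-compose.yml
es_url = "http://{}:9200/".format(service_name)
print(es_url)  # http://elasticsearch:9200/

# Then, as in the question (not executed here):
# elastic_client = Elasticsearch(hosts=[es_url], http_auth=('generator', 'generator'))
```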
622
71,791,533
Return entire row and append to value in a dataframe
<p>I am trying to write a function that searches a data frame row by row for the values in a column, then appends the entire row to the right side of the value if that value is found in any row.</p> <pre><code>Dataframe1 Col1 Col2 Col3 Lookup 400 60 80 90 50 90 68 80 </code></pre> <p>What I want is the following dataframe:</p> <pre><code>Dataframe 2 Lookup Col1 Col2 Col3 90 50 90 68 80 400 60 80 </code></pre> <p>Any help is much appreciated.</p>
<p>You can try this out;</p> <pre><code>df1 = df.iloc[:,0:-1] new = pd.DataFrame() for val in df['Lookup']: s = df1[df1.eq(val).any(1)] new = new.append(s,ignore_index = True) new.insert(0,'Lookup',df['Lookup']) print(new) # Lookup Col1 Col2 Col3 # 0 90 50 90 68 # 1 80 400 60 80 </code></pre>
python|pandas|dataframe|loops
2
623
36,106,823
To sum up values of same items in a list of tuples while they are string
<p>If I have a list of tuples like this: </p> <pre><code>my_list = [('books', '$5'), ('books', '$10'), ('ink', '$20'), ('paper', '$15'), ('paper', '$20'), ('paper', '$15')] </code></pre> <p>how can I turn the list into this:</p> <pre><code>[('books', '$15'), ('ink', '$20'), ('paper', '$50')] </code></pre> <p>i.e. add up the expense of the same item while both items in the tuples are strings. My problem is with the price items being strings. Any hint would be greatly appreciated. Thanks a lot!</p> <p>I am getting the first list in this way:</p> <pre><code>my_list=[] for line in data: item, price = line.strip('\n').split(',') cost = ["{:s}".format(item.strip()), "${:.2f}".format(float(price))] my_list.append(tuple(cost)) </code></pre> <p>Now <code>my_list</code> should look like given above.</p>
<p>You can use <a href="https://docs.python.org/3.6/library/collections.html#collections.defaultdict" rel="nofollow"><code>defaultdict</code></a> to do this:</p> <pre><code>&gt;&gt;&gt; from collections import defaultdict &gt;&gt;&gt; my_list = [('books', '$5'), ('books', '$10'), ('ink', '$20'), ('paper', '$15'), ('paper', '$20'), ('paper', '$15')] &gt;&gt;&gt; res = defaultdict(list) &gt;&gt;&gt; for item, price in my_list: ... res[item].append(int(price.strip('$'))) ... &gt;&gt;&gt; total = [(k, "${}".format(sum(v))) for k, v in res.items()] &gt;&gt;&gt; total [('ink', '$20'), ('books', '$15'), ('paper', '$50')] </code></pre>
python|string|list|tuples
2
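An alternative to the `defaultdict` approach above is `collections.Counter`, which handles the summing directly (missing keys default to 0). A sketch using the question's data:

```python
from collections import Counter

my_list = [('books', '$5'), ('books', '$10'), ('ink', '$20'),
           ('paper', '$15'), ('paper', '$20'), ('paper', '$15')]

totals = Counter()
for item, price in my_list:
    totals[item] += int(price.strip('$'))

total = [(item, '${}'.format(cost)) for item, cost in totals.items()]
print(total)  # [('books', '$15'), ('ink', '$20'), ('paper', '$50')]
```

On Python 3.7+ `Counter` preserves first-seen insertion order, so the items come out in the order they first appeared in `my_list`.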
624
29,402,905
How can I resolve recursion depth exceeded (Goose-extractor)
<p>I have a problem with goose-extractor. This is my code:</p> <pre><code> for resultado in soup.find_all('a', href=True,text=re.compile(llave)): url = resultado['href'] article = g.extract(url=url) print article.title </code></pre> <p>And this is the error I get:</p> <pre><code>RuntimeError: maximum recursion depth exceeded </code></pre> <p>Any suggestions?</p> <p>Either I am a lousy programmer or there are hidden errors not visible in Python.</p>
<p>As mentioned in the comments, you can increase the recursion limit with <code>sys.setrecursionlimit()</code> (<a href="https://docs.python.org/2/library/sys.html#sys.setrecursionlimit" rel="nofollow">2</a>/<a href="https://docs.python.org/3/library/sys.html#sys.setrecursionlimit" rel="nofollow">3</a>):</p> <pre><code>import sys sys.setrecursionlimit(10**5) </code></pre> <p>You can check what the default limit is with <code>sys.getrecursionlimit()</code> (<a href="https://docs.python.org/2/library/sys.html#sys.getrecursionlimit" rel="nofollow">2</a>/<a href="https://docs.python.org/3/library/sys.html#sys.getrecursionlimit" rel="nofollow">3</a>).</p> <p>Of course, this won't fix whatever's causing this recursion (there's no way to know what's wrong without more details), and might crash your computer if you don't fix that.</p>
python|extractor|goose
0
625
62,473,201
How do I enable Pylint in VSCode?
<p>I can't get pylint errors to show up in VSCode. I installed pylint globally (sudo apt install pylint), I created venv and installed it there with pip, I selected pylint as linter in VSCode, enabled it, ran it, and it doesnt show any errors in my file. If I check from the command line, it shows many errors in my file.</p> <p>This was working earlier, but not now on VSCode version 1.46.1 and 1.45.1 installed using snap.</p> <p>Same results with the Microsoft and the Jedi python language server.</p> <p>I found the pylint command in the developer console:</p> <pre><code>~/Documents/work/python/.venv/bin/python ~/.vscode/extensions/ms-python.python-2020.6.89148/pythonFiles/pyvsc-run-isolated.py pylint --disable=all --enable=F,unreachable,duplicate-key,unnecessary-semicolon,global-variable-not-assigned,unused-variable,unused-wildcard-import,binary-op-exception,bad-format-string,anomalous-backslash-in-string,bad-open-mode,E0001,E0011,E0012,E0100,E0101,E0102,E0103,E0104,E0105,E0107,E0108,E0110,E0111,E0112,E0113,E0114,E0115,E0116,E0117,E0118,E0202,E0203,E0211,E0213,E0236,E0237,E0238,E0239,E0240,E0241,E0301,E0302,E0303,E0401,E0402,E0601,E0602,E0603,E0604,E0611,E0632,E0633,E0701,E0702,E0703,E0704,E0710,E0711,E0712,E1003,E1101,E1102,E1111,E1120,E1121,E1123,E1124,E1125,E1126,E1127,E1128,E1129,E1130,E1131,E1132,E1133,E1134,E1135,E1136,E1137,E1138,E1139,E1200,E1201,E1205,E1206,E1300,E1301,E1302,E1303,E1304,E1305,E1306,E1310,E1700,E1701 --msg-template='{line},{column},{category},{symbol}:{msg}' --reports=n --output-format=text ~/Documents/work/python/micro.py </code></pre> <p>So pylint is indeed executed! If I run it like this from the command line, the output is:</p> <pre><code>Your code has been rated at 10.00/10 (previous run: 10.00/10, +0.00) </code></pre> <p>But if I execute <code>pylint micro.py</code> I get:</p> <pre><code>Your code has been rated at -2.50/10 (previous run: 10.00/10, -12.50) </code></pre> <p>Why is VSCode using that command line? 
I am testing now without a .pylintrc, but even when I had it, VSCode showed no errors, only the command line! However I just tried it again, <strong>added a .pylintrc and now the errors do show up in the editor for some reason!</strong></p> <p>But this is only with the Jedi server, when trying with the Microsoft server, linting cannot be enabled with its command, nothing happens and it stays off.</p> <p>My .vscode/settings.json:</p> <pre><code>{ &quot;python.linting.pylintEnabled&quot;: true, &quot;python.linting.enabled&quot;: true, &quot;python.linting.pylintArgs&quot;: [ &quot;--rcfile&quot;, &quot;${workspaceFolder}/backend/.pylintrc&quot; ] } </code></pre>
<p>Simplest way using UI:</p> <ol> <li><em>Press</em> &quot;<strong>Ctrl + Shift + P</strong>&quot; <em>to get Command Palette</em></li> <li><em>Type</em> &quot;<strong>Lint</strong>&quot;</li> </ol> <p><a href="https://i.stack.imgur.com/ijYzY.png" rel="noreferrer"><img src="https://i.stack.imgur.com/ijYzY.png" alt="enter image description here" /></a></p> <ol start="3"> <li><em>Select</em> &quot;<strong>Python : Enable/Disable Linting</strong>&quot;, <em>click on &quot;Enable&quot;</em></li> <li><em>Repeat Step 1 &amp; 2, now select</em> &quot;<strong>Python : Select Linter</strong>&quot;, <em>Select <strong>pylint</strong> from options</em></li> </ol> <p><a href="https://i.stack.imgur.com/59Fxr.png" rel="noreferrer"><img src="https://i.stack.imgur.com/59Fxr.png" alt="enter image description here" /></a></p> <ol start="5"> <li><em>Above steps will add below lines in</em> &quot;<strong>settings.json</strong>&quot; <em>under .vscode dir</em></li> </ol> <p><a href="https://i.stack.imgur.com/ROs7X.png" rel="noreferrer"><img src="https://i.stack.imgur.com/ROs7X.png" alt="enter image description here" /></a></p>
python|visual-studio-code|pylint
14
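For reference, the Enable/Disable and Select Linter commands in the steps above write these keys into `.vscode/settings.json` (the same keys that appear in the question's own config):

```json
{
    "python.linting.enabled": true,
    "python.linting.pylintEnabled": true
}
```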
626
70,028,076
pyparsing nestedExpr and double closing characters
<p>I am trying to parse nested column type definitions such as</p> <pre><code>1 string 2 struct&lt;col_1:string,col_2:int&gt; 3 row(col_1 string,array(col_2 string),col_3 boolean) 4 array&lt;struct&lt;col_1:string,col_2:int&gt;,col_3:boolean&gt; 5 array&lt;struct&lt;col_1:string,col2:int&gt;&gt; </code></pre> <p>Using nestedExpr works as expected for cases 1-4, but throws a parse error on case 5. Adding a space between double closing brackets like &quot;&gt; &gt;&quot; seems work, and might be explained by this quote from the author.</p> <blockquote> <p>By default, nestedExpr will look for space-delimited words of printables <a href="https://sourceforge.net/p/pyparsing/bugs/107/" rel="nofollow noreferrer">https://sourceforge.net/p/pyparsing/bugs/107/</a></p> </blockquote> <p>I'm mostly looking for alternatives to pre and post processing the input string</p> <pre><code>type_str = type_str.replace(&quot;&gt;&quot;, &quot;&gt; &quot;) # parse string here type_str = type_str.replace(&quot;&gt; &quot;, &quot;&gt;&quot;) </code></pre> <p>I've tried using the infix_notation but I haven't been able to figure out how to use it in this situation. 
I'm probably just using this the wrong way...</p> <p>Code snippet</p> <pre><code>array_keyword = pp.Keyword('array') row_keyword = pp.Keyword('row') struct_keyword = pp.Keyword('struct') nest_open = pp.Word('&lt;([') nest_close = pp.Word('&gt;)]') col_name = pp.Word(pp.alphanums + '_') col_type = pp.Forward() col_type_delimiter = pp.Word(':') | pp.White(' ') column = col_name('name') + col_type_delimiter + col_type('type') col_list = pp.delimitedList(pp.Group(column)) struct_type = pp.nestedExpr( opener=struct_keyword + nest_open, closer=nest_close, content=col_list | col_type, ignoreExpr=None ) row_type = pp.locatedExpr(pp.nestedExpr( opener=row_keyword + nest_open, closer=nest_close, content=col_list | col_type, ignoreExpr=None )) array_type = pp.nestedExpr( opener=array_keyword + nest_open, closer=nest_close, content=col_type, ignoreExpr=None ) col_type &lt;&lt;= struct_type('children') | array_type('children') | row_type('children') | scalar_type('type') </code></pre>
<p><code>nestedExpr</code> and <code>infixNotation</code> are not really appropriate for this project. <code>nestedExpr</code> is generally a short-cut expression for stuff you don't really want to go into details parsing, you just want to detect and step over some chunk of text that happens to have some nesting in opening and closing punctuation. <code>infixNotation</code> is intended for parsing expressions with unary and binary operators, usually some kind of arithmetic. You <em>might</em> be able to treat the punctuation in your grammar as operators, but it is a stretch, and definitely doing things the hard way.</p> <p>For your project, you will really need to define the different elements, and it will be a recursive grammar (since the array and struct types will themselves be defined in terms of other types, which could also be arrays or structs).</p> <p>I took a stab at a BNF, for a subset of your grammar using scalar types <code>int</code>, <code>float</code>, <code>boolean</code>, and <code>string</code>, and compound types <code>array</code> and <code>struct</code>, with just the '&lt;' and '&gt;' nesting punctuation. An array will take a single type argument, to define the type of the elements in the array. A struct will take one or more struct fields, where each field is an <code>identifier:type</code> pair.</p> <pre><code>scalar_type ::= 'int' | 'float' | 'string' | 'boolean' array_type ::= 'array' '&lt;' type_defn '&gt;' struct_type ::= 'struct' '&lt;' struct_element (',' struct_element)... 
'&gt;' struct_element ::= identifier ':' type_defn type_defn ::= scalar_type | array_type | struct_type </code></pre> <p>(If you later want to add a row definition also, think about what the row is supposed to look like, and how its elements would be defined, and then add it to this BNF.)</p> <p>You look pretty comfortable with the basics of pyparsing, so I'll just start you off with some intro pieces, and then let you fill in the rest.</p> <pre><code># define punctuation LT, GT, COLON = map(pp.Suppress, &quot;&lt;&gt;:&quot;) ARRAY = pp.Keyword('array') STRUCT = pp.Keyword('struct') # create a Forward that will be used in other type expressions type_defn = pp.Forward() # here is the array type, you can fill in the other types following this model # and the definitions in the BNF array_type = pp.Group(ARRAY + LT + type_defn + GT) ... # then finally define type_defn in terms of the other type expressions type_defn &lt;&lt;= scalar_type | array_type | struct_type </code></pre> <p>Once you have that finished, try it out with some tests:</p> <pre><code>type_defn.runTests(&quot;&quot;&quot;\ string struct&lt;col_1:string,col_2:int&gt; array&lt;struct&lt;col_1:string,col2:int&gt;&gt; &quot;&quot;&quot;, fullDump=False) </code></pre> <p>And you should get something like:</p> <pre><code>string ['string'] struct&lt;col_1:string,col_2:int&gt; ['struct', [['col_1', 'string'], ['col_2', 'int']]] array&lt;struct&lt;col_1:string,col2:int&gt;&gt; ['array', ['struct', [['col_1', 'string'], ['col2', 'int']]]]&gt; </code></pre> <p>Once you have that, you can play around with extending it to other types, such as your row type, maybe unions, or arrays that take multiple types (if that was your intention in your posted example). Always start by updating the BNF - then the changes you'll need to make in the code will generally follow.</p>
python|pyparsing|column-types
1
627
45,933,743
Python Loop, .remove, and List Exercise
<p>So we have an exercise that we are giving to the children and we need to use a list(or array), as well as the '.remove()' method, and a loop. </p> <p>I have the following code and the code isn't working. </p> <pre><code>usernames = [ 'Steph','JHG','Greg','Matt','Rodney','David', 'Chris','Sally','Gemma','Pam','Daniel','JHG', 'JHG','Ishmael','Sam', 'JHG','Jacob' ] for i in range(0,3): for name in usernames: usernames.remove('JHG') print(usernames) </code></pre> <h1>Checker</h1> <pre><code>print ( 'Success! all the JHG values have been deleted from the list, onto the next!') </code></pre>
<p>Or as a <code>while</code> loop:</p> <pre><code>while 'JHG' in usernames: usernames.remove('JHG') </code></pre>
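If the list is large, repeated `.remove()` calls rescan the list each time; a list comprehension filters it in a single pass. A minimal sketch using the exercise's data:

```python
usernames = [
    'Steph', 'JHG', 'Greg', 'Matt', 'Rodney', 'David',
    'Chris', 'Sally', 'Gemma', 'Pam', 'Daniel', 'JHG',
    'JHG', 'Ishmael', 'Sam', 'JHG', 'Jacob'
]

# Keep every name that is not 'JHG'; one pass, no mutation while iterating.
usernames = [name for name in usernames if name != 'JHG']
```

Rebinding `usernames` to the new list is usually fine here; use `usernames[:] = [...]` instead if other references to the same list object must see the change.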
python|loops
1
628
54,915,861
How can I delete multiple files using a Python script?
<p>I am playing around with some Python scripts and I ran into a problem with the script I'm writing. It's supposed to find all the files in a folder that meet the criteria and then delete them. However, it finds the files, but at the time of deleting the file, it says that the file is not found.</p> <p>This is my code:</p> <pre><code>import os for filename in os.listdir('C:\\New folder\\'): if filename.endswith(".rdp"): os.unlink(filename) </code></pre> <p>And this is the error I get after running it:</p> <p>FileNotFoundError: [WinError 2] The system cannot find the file specified:</p> <p>Can somebody assist with this?</p>
<p><a href="https://docs.python.org/2.7/library/os.html#os.unlink" rel="nofollow noreferrer"><code>os.unlink</code></a> takes the <strong>path</strong> to the file, not only its <code>filename</code>. Try <strong>pre-pending</strong> your <code>filename</code> with the <code>dirname</code>. Like this</p> <pre><code>import os dirname = 'C:\\New folder\\' for filename in os.listdir(dirname): if filename.endswith(".rdp"): # Add your "dirname" to the file path os.unlink(dirname + filename) </code></pre>
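A sketch of the same idea using `os.path.join` (which avoids hard-coding the path separator) and `glob` (which returns full paths ready for `os.unlink`), shown against a throwaway temp folder rather than the asker's `C:\\New folder\\`:

```python
import glob
import os
import tempfile

# Build a throwaway folder with a couple of .rdp files to demonstrate.
dirname = tempfile.mkdtemp()
for name in ("a.rdp", "b.rdp", "keep.txt"):
    open(os.path.join(dirname, name), "w").close()

# glob returns full paths, so os.unlink can be called on them directly.
for path in glob.glob(os.path.join(dirname, "*.rdp")):
    os.unlink(path)

remaining = os.listdir(dirname)
```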
python
1
629
54,779,518
How to scrape wikipedia infobox and store it into a csv file
<p>I have already done the scraping of Wikipedia's infobox, but I don't know how to store that data in a csv file. Please help me out.</p> <pre><code>from bs4 import BeautifulSoup as bs from urllib.request import urlopen def infobox(query) : query = query url = 'https://en.wikipedia.org/wiki/'+query raw = urlopen(url) soup = bs(raw) table = soup.find('table',{'class':'infobox vcard'}) for tr in table.find_all('tr') : print(tr.text) infobox('Infosys') </code></pre>
<p>You have to collect the required data and write in csv file, you can use csv module see below example:</p> <pre><code>from bs4 import BeautifulSoup as bs from urllib import urlopen import csv def infobox(query) : query = query content_list = [] url = 'https://en.wikipedia.org/wiki/'+query raw = urlopen(url) soup = bs(raw) table = soup.find('table',{'class':'infobox vcard'}) for tr in table.find_all('tr') : if len(tr.contents) &gt; 1: content_list.append([tr.contents[0].text.encode('utf-8'), tr.contents[1].text.encode('utf-8')]) elif tr.text: content_list.append([tr.text.encode('utf-8')]) write_csv_file(content_list) def write_csv_file(content_list): with open(r'd:\Test.csv', mode='wb') as csv_file: writer = csv.writer(csv_file, delimiter=',') writer.writerows(content_list) infobox('Infosys') </code></pre>
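For Python 3 (which the question's `urllib.request` import suggests), the same helper would open the file in text mode with `newline=''` and an encoding instead of byte mode plus per-cell `.encode('utf-8')` calls. A sketch with made-up rows and a temp path standing in for the scraped infobox data:

```python
import csv
import os
import tempfile

def write_csv_file(content_list, path):
    # Python 3: text mode + newline='' (the csv module manages line endings),
    # and an explicit encoding instead of per-cell .encode('utf-8') calls.
    with open(path, mode='w', newline='', encoding='utf-8') as csv_file:
        writer = csv.writer(csv_file, delimiter=',')
        writer.writerows(content_list)

# Hypothetical rows standing in for the scraped infobox label/value pairs.
rows = [["Founded", "1981"], ["Industry", "IT services"]]
path = os.path.join(tempfile.mkdtemp(), "Test.csv")
write_csv_file(rows, path)

with open(path, newline='', encoding='utf-8') as f:
    read_back = list(csv.reader(f))
```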
python|web-scraping|beautifulsoup
0
630
33,114,468
Append input to existing list
<p>I'm running through a beginners guide to Python and I'm currently working on lists. Now I've created this sample code, but I can't seem to add the user input dynamically to the list that I've created. If you enter an item from the list, you get a success message, but if the item isn't on the list I try to append it to the current list and then I return an error message that it's not in the inventory. The last line is just printing out the list in hopes that the new addition is there. I've tried the append method and even tried extending into another list. Can someone spot where I'm going wrong? </p> <pre><code>topping_list_one = ['pepperoni', 'sausage', 'cheese', 'peppers'] error_message = "Sorry we don't have " success_message = "Great, we have " first_topping = raw_input ('Please give me a topping: ') if ( first_topping in topping_list_one ) : print '{}!'.format(success_message + first_topping) elif ( first_topping in topping_list_one ): topping_list_one.append('first_topping') else : print '{}'.format(error_message + first_topping) print 'Heres a list of the items now in our inventory: {}'.format(topping_list_one) </code></pre>
<p>I think you mean to say</p> <pre><code>elif ( first_topping not in topping_list_one ): topping_list_one.append(first_topping) </code></pre> <p>ie. "not in" instead of "in" and remove the quotes from 'first_topping'</p>
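Putting both fixes together, here is a Python 3 sketch of the corrected branch logic (the topping names and messages are invented for illustration, and `input()` is replaced by a function argument so the snippet can run non-interactively):

```python
topping_list_one = ['pepperoni', 'sausage', 'cheese', 'peppers']

def order_topping(topping, toppings):
    # Return a message; append the topping when it isn't stocked yet.
    if topping in toppings:
        return 'Great, we have ' + topping + '!'
    toppings.append(topping)  # note: the variable, not the string 'topping'
    return "Sorry we don't have " + topping

msg1 = order_topping('cheese', topping_list_one)
msg2 = order_topping('olives', topping_list_one)
```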
python
2
631
73,591,884
How to change some characters at the same time in Vscode?
<p>i have Vscode and Anaconda.</p> <p>There are 50+ ipynb tutorial files that i studied. I work with cells. These files have some Turkih characters that i want to change. These characters are both in uppercase and lowercase.</p> <pre><code>Ç --&gt; C Ğ --&gt; G Ö --&gt; O Ş --&gt; S Ü --&gt; U İ --&gt; I ç --&gt; c ğ --&gt; g ö --&gt; o ş --&gt; s ü --&gt; u ı --&gt; i </code></pre> <p>In Vscode there are replace function. How can i change all these characters at the same time for a ipynb file or for all ipynb files</p> <p>Thanks very much.</p>
<p>Use the extension <a href="https://marketplace.visualstudio.com/items?itemName=bhughes339.replacerules" rel="nofollow noreferrer">Replace Rules</a>.</p> <p>Add the following to your settings:</p> <pre class="lang-json prettyprint-override"><code> &quot;replacerules.rules&quot;: { &quot;Replace Turkih&quot;: { &quot;find&quot;: [&quot;Ç&quot;, &quot;Ğ&quot;, &quot;Ö&quot;, &quot;Ş&quot;, &quot;Ü&quot;, &quot;İ&quot;, &quot;ç&quot;, &quot;ğ&quot;, &quot;ö&quot;, &quot;ş&quot;, &quot;ü&quot;, &quot;ı&quot;], &quot;replace&quot;: [&quot;C&quot;, &quot;G&quot;, &quot;O&quot;, &quot;S&quot;, &quot;U&quot;, &quot;I&quot;, &quot;c&quot;, &quot;g&quot;, &quot;o&quot;, &quot;s&quot;, &quot;u&quot;, &quot;i&quot;] } } </code></pre> <ol> <li>Open the file</li> <li>Execute the command: <strong>Replace Rule: Run Rule...</strong></li> <li>Select the <strong>Replace Turkih</strong> rule.</li> </ol> <p>With the extension <a href="https://marketplace.visualstudio.com/items?itemName=rioj7.commandOnAllFiles" rel="nofollow noreferrer">Command on All Files</a> you can apply a command on a selection of files in the workspace.</p> <p>We need the extension <a href="https://marketplace.visualstudio.com/items?itemName=ryuta46.multi-command" rel="nofollow noreferrer">multi-command</a> because we have to add arguments to the command.</p> <pre class="lang-json prettyprint-override"><code> &quot;multiCommand.commands&quot;: [ { &quot;command&quot;: &quot;multiCommand.replaceTurkih&quot;, &quot;sequence&quot;: [ { &quot;command&quot;: &quot;replacerules.runRule&quot;, &quot;args&quot;: { &quot;ruleName&quot;: &quot;Replace Turkih&quot; } } ] } ], &quot;commandOnAllFiles.commands&quot;: { &quot;Replace Turkih in All Files&quot;: { &quot;command&quot;: &quot;multiCommand.replaceTurkih&quot;, &quot;includeFileExtensions&quot;: [&quot;.ipynb&quot;] } } </code></pre>
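If you would rather do the conversion outside VS Code entirely, Python's `str.translate` maps all twelve characters in a single pass. A sketch (looping over the actual `.ipynb` files on disk is left out here):

```python
# The two strings list the same 12 characters in matching order,
# so str.maketrans builds a one-to-one character mapping.
TURKISH_TO_ASCII = str.maketrans("ÇĞÖŞÜİçğöşüı", "CGOSUIcgosui")

def fold_turkish(text):
    return text.translate(TURKISH_TO_ASCII)

sample = "Öğrenci çalışıyor"
folded = fold_turkish(sample)
```

To batch-convert files you could loop over something like `pathlib.Path('.').rglob('*.ipynb')`, read each file, translate, and write it back.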
python|visual-studio-code|jupyter-notebook
1
632
73,717,231
Is it possible to get the xpath (source) of an element using selenium in python?
<p>If you take a look at this <a href="https://example.com/" rel="nofollow noreferrer">site</a>, you will see the title/text &quot;Example Domain&quot;. Is it possible to get its xpath, which is <code>/html/body/div/h1</code>, using selenium? Are there any other possibilities? I mean I want to get the xpath itself and not its content! I know we can get the page source using <code>driver.page_source</code>, but this is not what I'm looking for. I simply expect an output such as <code>/html/body/div/h1</code>.</p> <p>I tried this:</p> <pre><code>test = driver.page_source ps = str(test) root = etree.fromstring(ps) tree = etree.ElementTree(root) find_text = etree.XPath(&quot;//p[text()='my_target_text']&quot;) # in our case Example Domain for target in find_text(root): print(tree.getpath(target)) </code></pre> <p>It returns:</p>
<p>What you need (based on how you worded your question: I honestly doubt this is what you <strong>really</strong> need, and I'm sure that if you would state your end goal, someone would put you on the right path) is this:</p> <p><a href="https://gist.github.com/ergoithz/6cf043e3fdedd1b94fcf" rel="nofollow noreferrer">https://gist.github.com/ergoithz/6cf043e3fdedd1b94fcf</a></p> <p>I figured this would actually represent a full answer to your question as asked, so posting it as a response.</p>
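The `XMLSyntaxError` in the question comes from feeding real-world HTML to a strict XML parser (an HTML parser such as `lxml.etree.HTML` is more forgiving). For well-formed markup, the path-building idea can be sketched with only the standard library; the snippet below builds its own child-to-parent map because `xml.etree` elements have no parent pointers, and it omits positional predicates like `[1]` for brevity:

```python
import xml.etree.ElementTree as ET

# Well-formed stand-in for the page (real HTML would need an HTML parser,
# not ET.fromstring).
doc = "<html><body><div><h1>Example Domain</h1><p>More text</p></div></body></html>"
root = ET.fromstring(doc)

# xml.etree elements have no parent links, so record them ourselves.
parent_of = {child: parent for parent in root.iter() for child in parent}

def xpath_of(elem):
    # Walk upwards from the element to the root, collecting tag names.
    parts = []
    while elem is not None:
        parts.append(elem.tag)
        elem = parent_of.get(elem)
    return "/" + "/".join(reversed(parts))

target = next(e for e in root.iter() if e.text == "Example Domain")
path = xpath_of(target)
```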
python|selenium|web-scraping|xpath|xml.etree
2
633
73,563,343
python PIL image.getdata() returns data returns more data than is expected
<p>the following:</p> <pre><code>image = Image.open(name, 'r') data = np.array(image.getdata()) data.reshape(image.size) </code></pre> <p>returns:</p> <pre><code>Traceback (most recent call last): File &quot;/home/usr/colorviewer/main.py&quot;, line 207, in &lt;module&gt; print(getPalette()) File &quot;/home/usr/colorviewer/main.py&quot;, line 152, in getPalette data.reshape(image.size) ValueError: cannot reshape array of size 921600 into shape (640,480) </code></pre> <p>why does it say the array size is 921600 instead of 307200 as the size would suggest, and how might the image data be reshaped into its normal resolution?</p>
<p>Every pixel uses three values <code>R,G,B</code>, so you have <code>640*480*3</code>, which gives <code>921600</code> values.</p> <p>So you need to reshape into <code>(640,480,3)</code>. And it needs <code>*</code> to unpack <code>size</code>.</p> <pre><code>data.reshape(*image.size, 3) </code></pre> <p>If the image can be transparent then it may have <code>640*480*4</code> values. It can be safer to reshape with <code>(640,480,-1)</code>, and it should automatically use <code>3</code> or <code>4</code>.</p> <pre><code>data.reshape(*image.size, -1) </code></pre> <p>(Strictly speaking, <code>image.size</code> is <code>(width, height)</code> while NumPy shapes are <code>(height, width, channels)</code>, so for a correctly oriented array you would use <code>data.reshape(image.size[1], image.size[0], -1)</code>.)</p> <hr /> <p>Or you can skip <code>.getdata()</code> and get an already reshaped array:</p> <pre><code>data = np.array(image) </code></pre>
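To see why the flat buffer holds three values per pixel, here is a tiny sketch with a fake 2x3 image (no PIL needed); note the (height, width) order on the NumPy side:

```python
import numpy as np

# Fake 3-wide, 2-high RGB image: 3*2*3 = 18 flat channel values,
# standing in for what np.array(image.getdata()) would give.
width, height = 3, 2
flat = np.arange(width * height * 3)

# PIL's image.size is (width, height); NumPy arrays are (rows, cols),
# i.e. (height, width), plus a trailing channel axis.
img = flat.reshape(height, width, 3)
```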
python|image|image-processing|python-imaging-library
0
634
21,432,947
How to compress 300GB file using python
<p>I am trying to compress a virtual machine file with size 300GB.</p> <p>Every single time the python script is killed because the actually memory usage of the <code>gzip</code> module exceeds 30GB (virtual memory). </p> <p>Is there any way to achieve large file(300GB to 64TB) compression using python?</p> <pre><code>def gzipFile(fileName): startTime = time.time() with open(fileName,'rb') as fileHandle: compressedFileName = "%s-1.gz" % fileName with gzip.open(compressedFileName, 'wb') as compressedFH: compressedFH.writelines(fileHandle) finalTime = time.time() - startTime print("gzipFile=%s fileName=%s" % (finalTime,compressFileName)) </code></pre>
<pre><code>with gzip.open(compressedFileName, 'wb') as compressedFH: compressedFH.writelines(fileHandle) </code></pre> <p>writes the file <code>fileHandle</code> <em>line by line</em>, i.e. splits it into chunks separated by the <code>\n</code> character.</p> <p>While it is quite probable that this character occurs from time to time in a binary file as well, this is not guaranteed.</p> <p>It might be better to do</p> <pre><code>with gzip.open(compressedFileName, 'wb') as compressedFH: while True: chunk = fileHandle.read(65536) if not chunk: break # the while loop compressedFH.write(chunk) </code></pre> <p>or, as tqzf writes in a comment (note that the second argument must be the open file object, not the file name, and that <code>shutil</code> must be imported),</p> <pre><code>import shutil with gzip.open(compressedFileName, 'wb') as compressedFH: shutil.copyfileobj(fileHandle, compressedFH) </code></pre>
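A self-contained round trip of the chunked approach, using `shutil.copyfileobj` on a small temp file standing in for the 300 GB image (the `length` argument sets the chunk size, so memory use stays flat regardless of input size):

```python
import gzip
import os
import shutil
import tempfile

# Write a sample input file (stand-in for the huge VM image).
src = os.path.join(tempfile.mkdtemp(), "disk.img")
payload = b"x" * 100_000
with open(src, "wb") as f:
    f.write(payload)

# Stream compression: copyfileobj reads fixed-size chunks, never the
# whole file, so this works for arbitrarily large inputs.
dst = src + ".gz"
with open(src, "rb") as fin, gzip.open(dst, "wb") as fout:
    shutil.copyfileobj(fin, fout, length=64 * 1024)

# Decompress and verify the round trip.
with gzip.open(dst, "rb") as f:
    restored = f.read()
```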
python|compression
3
635
41,057,516
Python: Find all pairwise distances between points and return points along with distance
<p>I have a list containing list of points with a name and coordinates in 3D. Something like this with a <strong>much larger length of the list</strong>:</p> <pre><code>group=[[gr1, 5, 8, 9], [gr2, 7, 4, 5], [gr3, 3, 8, 1], [gr4, 3, 4, 8]] </code></pre> <p>I want to calculate <strong>all possible pairwise distances</strong> among the coordinates and return the <strong>distance along with the corresponding points.</strong> Something like this:</p> <pre><code>distances=[[gr1, gr2, 6.],[gr1, gr3, 8.24621125], [gr1, gr4, 4.58257569], [gr2, gr3, 6.92820323], [gr2, gr4, 5.], [gr3, gr4, 8.06225775]] </code></pre> <p>I tried using scipy.spatial.distance.pdist but that only returns me this</p> <pre><code>array([ 6. , 8.24621125, 4.58257569, 6.92820323, 5. , 8.06225775]) </code></pre> <p>How do I <strong>extract the information along with the groups that were considered for each distance value</strong>?</p> <p>I am a beginner and I am using python 3. Thanks</p>
<p>Maybe you could try two nested for-loops to combine every element in "group" except the last one with every other element to the right:</p> <pre><code>group=[["gr1", 5, 8, 9], ["gr2", 7, 4, 5], ["gr3", 3, 8, 1], ["gr4", 3, 4, 8]] distances=[] for i,g in enumerate(group[:-1]): for h in group[i+1:]: d = ((g[1]-h[1])**2+(g[2]-h[2])**2+(g[3]-h[3])**2)**0.5 distances.append([g[0],h[0],d]) print(*distances,sep="\n") </code></pre> <p>Result:</p> <pre><code>['gr1', 'gr2', 6.0] ['gr1', 'gr3', 8.246211251235321] ['gr1', 'gr4', 4.58257569495584] ['gr2', 'gr3', 6.928203230275509] ['gr2', 'gr4', 5.0] ['gr3', 'gr4', 8.06225774829855] </code></pre>
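If you do want to keep `scipy.spatial.distance.pdist`, note that its condensed output is ordered exactly like `itertools.combinations` over the rows, so the labels can be zipped back onto the distances. The same pairing works without SciPy at all via `math.dist` (Python 3.8+), sketched here with the question's data:

```python
import math
from itertools import combinations

group = [["gr1", 5, 8, 9], ["gr2", 7, 4, 5], ["gr3", 3, 8, 1], ["gr4", 3, 4, 8]]

# combinations(group, 2) enumerates pairs in the same order that pdist
# emits its condensed distance vector, so the two can be zipped together.
distances = [
    [a[0], b[0], math.dist(a[1:], b[1:])]
    for a, b in combinations(group, 2)
]
```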
list|python-3.x|numpy|scipy
1
636
29,127,350
completely connected subgraphs from a larger graph in networkx
<p>I have tried not to repost here, but I think my request is very simple and I am just inexperienced with network graphs. When using the networkx module in python, I would like to recover, from a connected graph, the subgraphs where all nodes are connected to each other (where the number of nodes is greater than 2). Is there a simple way to do this?</p> <p>Here is my example:</p> <p>A simple graph with seven nodes. Nodes 1,2,3 are shared connections, nodes 1,2,4 all share connections, and nodes 5,6,7 all share connections. </p> <pre><code>import networkx as nx G=nx.Graph() #Make the graph G.add_nodes_from([1,2,3,4,5,6,7]) #Add nodes, although redundant because of the line below G.add_edges_from([(1,2),(1,3),(2,3),(1,4),(2,4),(1,5),(5,6),(5,7),(6,7)]) # Adding the edges </code></pre> <p>My desired output would be: ([1,2,3],[1,2,4],[5,6,7])</p> <p>I can think of slightly laborious methods for writing this but was wondering if there was a simple inbuilt function for it.</p>
<p>It sounds like you want to discover the cliques in your graph. For this you could use <a href="http://networkx.lanl.gov/reference/generated/networkx.algorithms.clique.find_cliques.html#networkx.algorithms.clique.find_cliques" rel="nofollow"><code>nx.clique.find_cliques()</code></a>:</p> <pre><code>&gt;&gt;&gt; list(nx.clique.find_cliques(G)) [[1, 2, 3], [1, 2, 4], [1, 5], [6, 5, 7]] </code></pre> <p><code>nx.clique.find_cliques()</code> returns a generator which will yield all cliques in the graph. You can filter out the cliques with fewer than three nodes using list comprehension:</p> <pre><code>&gt;&gt;&gt; [g for g in nx.clique.find_cliques(G) if len(g) &gt; 2] [[1, 2, 3], [1, 2, 4], [6, 5, 7]] </code></pre>
python|nodes|networkx|subgraph
3
637
29,260,404
create a multidimensional random matrix in spark
<p>With the python API of Spark I am able to quickly create an RDD vector with random normal number and perform a calculation with the following code: </p> <pre><code>from pyspark.mllib.random import RandomRDDs RandomRDDs.uniformRDD(sc, 1000000L, 10).sum() </code></pre> <p>where <code>sc</code> is an available SparkContext. The upside of this approach is that it is very performant, the downside is that I am not able to create a random matrix this way. </p> <p>You could create use numpy again, but this isn't performant.</p> <pre><code>%%time sc.parallelize(np.random.rand(1000000,2)).sum() array([ 499967.0714618 , 499676.50123474]) CPU times: user 52.7 ms, sys: 31.1 ms, total: 83.9 ms Wall time: 669 ms </code></pre> <p>For comparison with Spark: </p> <pre><code>%%time RandomRDDs.uniformRDD(sc, 2000000, 10).sum() 999805.091403467 CPU times: user 4.54 ms, sys: 1.89 ms, total: 6.43 ms Wall time: 183 ms </code></pre> <p>Is there a performant way to create random matrices/RDD's that contain more than one dimension with the Python Spark API? </p>
<p>Spark evolved a bit since this question was asked and Spark will probably have better support still in the future. </p> <p>In the meantime you can be a bit creative with the <code>.zip</code> method of RDD's as well as DataFrames to get close to what numpy can do. It is a bit more verbose, but it works. </p> <pre><code>n = 100000 p1 = RandomRDDs.uniformRDD(sc, n).zip(RandomRDDs.uniformRDD(sc, n)) p2 = RandomRDDs.uniformRDD(sc, n).zip(RandomRDDs.uniformRDD(sc, n)) point_rdd = p1.zip(p2)\ .map(lambda r: Row(x1=r[0][0], y1 = r[0][1], x2=r[1][0], y2 = r[1][1])) </code></pre>
python|numpy|multidimensional-array|apache-spark
1
638
52,083,470
Writing to an Excel File With Python
<p>I am doing some webscraping with BeautifulSoup and Selenium and I want to write my data to an excel file </p> <pre><code># coding: utf-8 import requests import bs4 from datetime import datetime import re import os import urllib import urllib2 from bs4 import BeautifulSoup from selenium import webdriver import time initialpage = 'https://www.boxofficemojo.com/yearly/chart/?yr=2017&amp;p=.htm' res = requests.get(initialpage, timeout=None) soup = bs4.BeautifulSoup(res.text, 'html.parser') pages = [] pagelinks=soup.select('a[href^="/yearly/chart/?page"]') for i in range(int(len(pagelinks)/2)): pages.append(str(pagelinks[i])[9:-14]) pages[i]=pages[i].replace("amp;","") pages[i]= "https://www.boxofficemojo.com" + pages[i] pages[i]=pages[i][:-1] pages.insert(0, initialpage) date_dic = {} movie_links = [] titles = [] Domestic_Gross_Arr=[] Genre_Arr=[] Release_Date_Arr = [] Theaters_Arr=[] Budget_Arr = [] Views_Arr = [] Edits_Arr = [] Editors_Arr = [] for i in range(int(len(pagelinks)/2 + 1)): movie_count=0; res1 = requests.get(pages[i]) souppage=bs4.BeautifulSoup(res1.text, 'html.parser') for j in souppage.select('tr &gt; td &gt; b &gt; font &gt; a'): link = j.get("href")[7:].split("&amp;") str1 = "".join(link) final = "https://www.boxofficemojo.com/movies" + str1 if "/?id" in final: movie_links.append(final) movie_count += 1 number_of_theaters=souppage.find("tr", bgcolor="#dcdcdc") for k in range(movie_count): #print(number_of_theaters.next_sibling.contents[4].text) Theaters_Arr.append(number_of_theaters.next_sibling.contents[4].text) number_of_theaters=number_of_theaters.next_sibling k=0 path = os.getcwd() path = path + '/movie_pictures' os.makedirs(path) os.chdir(path) while(k &lt; 2): j = movie_links[k] try: res1 = requests.get(j) soup1 = bs4.BeautifulSoup(res1.text, 'html.parser') c = soup1.select('td[width="35%"]') d=soup1.select('div[class="mp_box_content"]') genre = soup1.select('td[valign="top"]')[5].select('b') image = soup1.select('img')[6].get('src')
budget = soup1.select('tr &gt; td &gt; b') domestic = str(c[0].select('b'))[4:-5] release = soup1.nobr.a title = soup1.select('title')[0].getText()[:-25] print ("-----------------------------------------") print ("Title: " +title) titles.append(title) print ("Domestic Gross: " +domestic) Domestic_Gross_Arr.append(domestic) print ("Genre: "+genre[0].getText()) Genre_Arr.append(genre[0].getText()) print ("Release Date: " +release.contents[0]) Release_Date_Arr.append(release.contents[0]) print ("Production Budget: " +budget[5].getText()) Budget_Arr.append(budget[5].getText()) year1=str(release.contents[0])[-4:] a,b=str(release.contents[0]).split(",") month1, day1=a.split(" ") datez= year1 + month1 + day1 new_date= datetime.strptime(datez , "%Y%B%d") date_dic[title]=new_date with open('pic' + str(k) + '.png', 'wb') as handle: response = requests.get(image, stream=True) if not response.ok: print response for block in response.iter_content(1024): if not block: break handle.write(block) except: print("Error Occured, Page Or Data Not Available") k+=1 def subtract_one_month(t): import datetime one_day = datetime.timedelta(days=1) one_month_earlier = t - one_day while one_month_earlier.month == t.month or one_month_earlier.day &gt; t.day: one_month_earlier -= one_day year=str(one_month_earlier)[:4] day=str(one_month_earlier)[8:10] month=str(one_month_earlier)[5:7] newdate= year + "-" + month +"-" + day return newdate number_of_errors=0 browser = webdriver.Chrome("/Users/Gokce/Downloads/chromedriver") browser.maximize_window() browser.implicitly_wait(20) for i in titles: try: release_date = date_dic[i] i = i.replace(' ', '_') i = i.replace("2017", "2017_film") #end = datetime.strptime(release_date, '%B %d, %Y') end_date = release_date.strftime('%Y-%m-%d') start_date = subtract_one_month(release_date) url = "https://tools.wmflabs.org/pageviews/?project=en.wikipedia.org&amp;platform=all-access&amp;agent=user&amp;start="+ start_date +"&amp;end="+ end_date + "&amp;pages=" + i
browser.get(url) page_views_count = browser.find_element_by_css_selector(" .summary-column--container .legend-block--pageviews .linear-legend--counts:first-child span.pull-right ") page_edits_count = browser.find_element_by_css_selector(" .summary-column--container .legend-block--revisions .linear-legend--counts:first-child span.pull-right ") page_editors_count = browser.find_element_by_css_selector(" .summary-column--container .legend-block--revisions .legend-block--body .linear-legend--counts:nth-child(2) span.pull-right ") print (i) print ("Number of Page Views: " +page_views_count.text) Views_Arr.append(page_views_count.text) print ("Number of Edits: " +page_edits_count.text) Edits_Arr.append(page_edits_count.text) print ("Number of Editors: " +page_editors_count.text) Editors_Arr.append(page_editors_count.text) except: print("Error Occured for this page: " + str(i)) number_of_errors += 1 Views_Arr.append(-1) Edits_Arr.append(-1) Editors_Arr.append(-1) time.sleep(5) browser.quit() import xlsxwriter os.chdir("/home") workbook = xlsxwriter.Workbook('WebScraping.xlsx') worksheet = workbook.add_worksheet() worksheet.write(0,0, "Hello") worksheet.write(0,1, 'Genre') worksheet.write(0,2, 'Production Budget') worksheet.write(0,3, 'Domestic Gross') worksheet.write(0,4, 'Release Date') worksheet.write(0,5, 'Number of Wikipedia Page Views') worksheet.write(0,6, 'Number of Wikipedia Edits') worksheet.write(0,7, 'Number of Wikipedia Editors') row=1 for i in range(len(titles)): worksheet.write(row, 0, titles[i]) worksheet.write(row, 1, Genre_Arr[i]) worksheet.write(row, 2, Budget_Arr[i]) worksheet.write(row, 3, Domestic_Gross_Arr[i]) worksheet.write(row, 4, Release_Date_Arr[i]) worksheet.write(row, 5, Theaters_Arr[i]) worksheet.write(row, 6, Views_Arr[i]) worksheet.write(row, 7, Edits_Arr[i]) worksheet.write(row, 8, Editors_Arr[i]) row += 1 workbook.close() </code></pre> <p>The code works until import xlsxwriter, then I get this error:</p> 
<pre><code>--------------------------------------------------------------------------- IOError Traceback (most recent call last) &lt;ipython-input-9-c99eea52d475&gt; in &lt;module&gt;() 27 28 ---&gt; 29 workbook.close() /Users/Gokce/anaconda2/lib/python2.7/site-packages/xlsxwriter/workbook.pyc in close(self) 309 if not self.fileclosed: 310 self.fileclosed = 1 --&gt; 311 self._store_workbook() 312 313 def set_size(self, width, height): /Users/Gokce/anaconda2/lib/python2.7/site-packages/xlsxwriter/workbook.pyc in _store_workbook(self) 638 639 xlsx_file = ZipFile(self.filename, "w", compression=ZIP_DEFLATED, --&gt; 640 allowZip64=self.allow_zip64) 641 642 # Add XML sub-files to the Zip file with their Excel filename. /Users/Gokce/anaconda2/lib/python2.7/zipfile.pyc in __init__(self, file, mode, compression, allowZip64) 754 modeDict = {'r' : 'rb', 'w': 'wb', 'a' : 'r+b'} 755 try: --&gt; 756 self.fp = open(file, modeDict[mode]) 757 except IOError: 758 if mode == 'a': IOError: [Errno 45] Operation not supported: 'WebScraping.xlsx' </code></pre> <p>What might be the problem? If if cut off the last part and run in a new IDLE with fake data, it works. But it does not work in the main IDLE. So the problem must be in the previous part I believe</p>
<p>The error is triggered when the code tries to write the file. Confirm that you have write permissions to that directory, and that the file doesn't already exist. It's unlikely that you have access to <code>/home</code>.</p>
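A small, library-free sketch of that check: verify the target directory exists and is writable before handing the path to `xlsxwriter`, falling back to a temp directory. The `/home/some-missing-user` path below is just a stand-in for a location you may not own:

```python
import os
import tempfile

def writable_dir(preferred):
    # Fall back to the temp directory when the preferred location is
    # missing or not writable by the current user.
    if os.path.isdir(preferred) and os.access(preferred, os.W_OK):
        return preferred
    return tempfile.gettempdir()

# Stand-in for the question's os.chdir("/home") target.
out_dir = writable_dir("/home/some-missing-user")
out_path = os.path.join(out_dir, "WebScraping.xlsx")

# Prove the path is creatable before any workbook.close() could fail on it.
with open(out_path, "wb"):
    pass
created = os.path.exists(out_path)
```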
python|excel|xlsxwriter
1
639
51,993,599
unable to run print statements from loss function when calling model.fit in Keras
<p>I have created a custom loss function called </p> <p><code>def customLoss(true, pred) //do_stuff //print(variables) return loss</code></p> <p>Now I'm calling compile as <code>model.compile(optimizer='Adamax', loss = customLoss)</code></p> <p>EDIT: I tried tf.Print and this is my result.</p> <pre><code> def customLoss(params): def lossFunc(true, pred): true = tf.Print(true, [true.shape],'loss-func') #obviously this won't work because the tensors aren't the same shape; however, this is what I want to do. #stuff return loss return lossFunc model = Model(inputs=[inputs], outputs=[outputs]) parallel_model = multi_gpu_model(model, gpus=8) parallel_model.compile(opimizer='Adam', loss = customLoss(params), metrics = [mean_iou) history = parallel_model.fit(X_train, Y_train, validation_split=0.25, batch_size = 32, verbose=1) </code></pre> <p>and the output is </p> <pre><code>Epoch 1/10 1159/1159 [==============================] - 75s 65ms/step - loss: 0.1051 - mean_iou: 0.4942 - val_loss: 0.0924 - val_mean_iou: 0.6933 Epoch 2/10 1152/1159 [============================&gt;.] - ETA: 0s - loss: 0.0408 - mean_iou: 0.7608 </code></pre> <p>The print statements still aren't printing. Am I missing something - are my inputs into <code>tf.Print</code> not proper? </p>
<p>It's not that Keras dumps buffers or does magic; it simply never calls your print statements! The loss function is called once to construct the <em>computation graph</em>, and then the symbolic tensor that represents the loss value is returned. TensorFlow uses that to compute the loss, gradients, etc.</p> <p>You might instead be interested in <a href="https://www.tensorflow.org/api_docs/python/tf/Print" rel="nofollow noreferrer">tf.Print</a>, which is a null operation with a side effect of printing the arguments passed. Since <code>tf.Print</code> is part of the computation graph, it will be run during training as well. From the documentation:</p> <blockquote> <p>Prints a list of tensors. This is an identity op (behaves like tf.identity) with the side effect of printing data when evaluating.</p> </blockquote>
python|tensorflow|neural-network|keras
2
640
51,720,851
Issue when trying to login with Docusign API by official python lib on live account
<p>I have issue when trying to login with API on live account, while on sandbox all works great. When doing request on login, I must get on response data object <code>login_accounts</code>. Data in object looks like this(I've delete few symbols in password for sequrity reasons)</p> <pre><code>``` {'api_password': 'ZQQ+oSUO1alRWlCapJ0=', 'login_accounts': [{'account_id': '4342454', 'account_id_guid': '6765dcc3-5dc6-4340-8240-9a53d3e728ab', 'base_url': 'https://demo.docusign.net/restapi/v2/accounts/4342454', 'email': '[email protected]', 'is_default': 'true', 'login_account_settings': None, 'login_user_settings': None, 'name': 'Moonshot Capital', 'site_description': '', 'user_id': '1c960342-458b-4b33-b7ae-68ff9817bbb6', 'user_name': 'Here some name'}]}``` </code></pre> <p>That was my sandbox data. But in live account I've get only empty object</p> <pre><code>{'api_password': None, 'login_accounts': None} </code></pre> <p>If that was an error with login, I sould get something like 'bad auth' error code or something like this, but everywhere is 'http200 ok'. In system logs, that I can download from docusign service, I get only one error 404 on image object because in profile there no image, all other requests have code 'http200'. I thought that problem may be in integrator key and i've create another one, but it gives me same empty object. Also all my integration was working in free trial period on live account, and stops working after swithcing to BasicAPI billing plan. 
I am using official python lib <a href="https://github.com/docusign/docusign-python-client" rel="nofollow noreferrer">https://github.com/docusign/docusign-python-client</a> Here is my code sample</p> <pre><code>def setUp(): # setting local configuration api_client = docusign.ApiClient(BASE_URL) oauth_login_url = api_client.get_jwt_uri(integrator_key, redirect_uri, oauth_base_url) try: api_client.configure_jwt_authorization_flow(private_key_filename, oauth_base_url, integrator_key, api_username, 3600) docusign.configuration.api_client = api_client return api_client except: logger.exception('') print(("If you login for first time please follow the url and give the" " access for app.\n"), oauth_login_url) def docusign_login(api_client): auth_api = AuthenticationApi() try: login_info = auth_api.login(api_password='true', include_account_id_guid='true') assert login_info is not None assert len(login_info.login_accounts) &gt; 0 login_accounts = login_info.login_accounts assert login_accounts[0].account_id is not None logger.info(login_info) base_url, _ = login_accounts[0].base_url.split('/v2') api_client.host = base_url docusign.configuration.api_client = api_client return login_accounts except ApiException: logger.exception('') def request_signature_for_template(client_id): api_client = setUp() try: login_accounts = docusign_login(api_client) envelopes_api = EnvelopesApi() envelope_summary = envelopes_api.create_envelope( login_accounts[0].account_id, envelope_definition=envelope_definition) </code></pre>
<p>The AuthenticationApi.login() method is intended to be used with Legacy Header authentication. For JWT/OAuth you would want to use the Get UserInfo method. Unfortunately, that method hasn't yet been implemented in the Python client. You'll need to make that call manually until the SDK is updated to include that functionality. Here's an example of that using the <a href="http://docs.python-requests.org/en/master/" rel="nofollow noreferrer">Requests</a> package. </p> <pre><code>oauth_base_url = "account-d.docusign.com" #use account.docusign.com for Prod api_client.configure_jwt_authorization_flow(private_key, oauth_base_url, integrator_key, user_id, 3600) docusign.configuration.api_client = api_client # using Requests to manually call userinfo endpoint user_info_url = "https://" + oauth_base_url + "/oauth/userinfo" request_auth_header = {'Authorization' : api_client.default_headers['Authorization']} r = requests.get(user_info_url, headers=request_auth_header) # parse response to pull default account for a in r.json().get('accounts'): if a.get('is_default'): account_id = a.get('account_id') new_base_url = a.get('base_uri') + "/restapi" api_client.host = new_base_url docusign.configuration.api_client = api_client </code></pre>
python|docusignapi
0
641
51,849,871
How to tune scipy interpolate function?
<p>I'm not sure why it's doing such a crappy job. Here's the set of 189 data points I was hoping to get smoothed. Why is it lagging so much?</p> <pre><code>y = data x = range(len(y)) tck, _ = splprep([x,y]) x2, y2 = splev(np.linspace(0,1,len(y)), tck) plt.plot(y, 'b') plt.plot(y2, 'g') plt.show() </code></pre> <p><a href="https://i.stack.imgur.com/MxYw7.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MxYw7.jpg" alt="enter image description here"></a></p>
<p>Smoothing is a fairly common problem in time series analysis. Have you tried out <a href="https://en.wikipedia.org/wiki/Exponential_smoothing" rel="nofollow noreferrer">exponential smoothing</a>? The package <a href="http://www.statsmodels.org/dev/tsa.html" rel="nofollow noreferrer">StatsModels</a> has a lot of callable smoothing functions. (The snippet below is illustrative: it assumes <code>train</code> and <code>test</code> are pandas DataFrames with a <code>Count</code> column, split from your series.)</p> <pre><code>import numpy as np import matplotlib.pyplot as plt from statsmodels.tsa.api import ExponentialSmoothing, SimpleExpSmoothing, Holt y_hat_avg = test.copy() fit2 = SimpleExpSmoothing(np.asarray(train['Count'])).fit(smoothing_level=0.6,optimized=False) y_hat_avg['SES'] = fit2.forecast(len(test)) plt.figure(figsize=(16,8)) plt.plot(train['Count'], label='Train') plt.plot(test['Count'], label='Test') plt.plot(y_hat_avg['SES'], label='SES') plt.legend(loc='best') plt.show() </code></pre>
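The recurrence behind simple exponential smoothing is small enough to sketch by hand, which also makes the `smoothing_level` parameter concrete (toy data, not the library's implementation):

```python
def exp_smooth(series, alpha):
    # s[t] = alpha * x[t] + (1 - alpha) * s[t-1], seeded with the first value.
    # Larger alpha tracks the raw data more closely; smaller alpha smooths more.
    smoothed = [series[0]]
    for x in series[1:]:
        smoothed.append(alpha * x + (1 - alpha) * smoothed[-1])
    return smoothed

noisy = [1.0, 9.0, 1.0, 9.0, 1.0, 9.0]
smooth = exp_smooth(noisy, alpha=0.3)
```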
python|matplotlib|scipy
0
642
59,709,094
Read n tables in csv file to separate pandas DataFrames
<p>I have a single .csv file with four tables, each a different financial statement for Southwest Airlines from 2001-1986. I know I could separate each table into separate files, but they are initially downloaded as one.</p> <p>I would like to read each table to its own pandas DataFrame for analysis. Here is a subset of the data:</p> <pre><code>Balance Sheet
Report Date 12/31/2001 12/31/2000 12/31/1999 12/31/1998
Cash &amp; cash equivalents 2279861 522995 418819 378511
Short-term investments - - - -
Accounts &amp; other receivables 71283 138070 73448 88799
Inventories of parts... 70561 80564 65152 50035

Income Statement
Report Date 12/31/2001 12/31/2000 12/31/1999 12/31/1998
Passenger revenues 5378702 5467965 4499360 3963781
Freight revenues 91270 110742 102990 98500
Charter &amp; other - - - -
Special revenue adjustment - - - -

Statement of Retained Earnings
Report Date 12/31/2001 12/31/2000 12/31/1999 12/31/1998
Previous ret earn... 2902007 2385854 2044975 1632115
Cumulative effect of.. - - - -
Three-for-two stock split 117885 - 78076 -
Issuance of common.. 52753 75952 45134 10184
</code></pre> <p>The tables each have 17 columns, the first being the line item description, but varying numbers of rows, i.e. the balance sheet is 100 rows whereas the statement of cash flows is 65.</p> <h2>What I've Done</h2> <pre><code>import pandas as pd
import numpy as np

# Lines that separate the various financial statements
lines_to_skip = [0, 102, 103, 158, 159, 169, 170]

with open('LUV.csv', 'r') as file:
    fin_statements = pd.read_csv(file, skiprows=lines_to_skip)

balance_sheet = fin_statements[0:100]
</code></pre> <p>I have seen posts with a similar objective noting to utilize nrows and skiprows. I utilized skiprows to read the entire file, then I created the individual financial statements by indexing.</p> <p>I am looking for comments and constructive criticism for creating a dataframe for each respective table in better Pythonic style and best practices. </p>
<p>What you want to do is far beyond what <code>read_csv</code> can do. In fact your input file structure can be modeled as:</p> <pre class="lang-none prettyprint-override"><code>REPEAT:
    Dataframe name
    Header line
    REPEAT:
        Data line
    BLANK LINE OR END OF FILE
</code></pre> <p>IMHO, the simplest way is to parse the file <em>by hand</em>, line by line, feeding a temporary csv file per dataframe, then loading the dataframe. Code could be:</p> <pre><code>import os
import tempfile

import pandas as pd

df = {}  # dictionary of dataframes

def process(tmp, df_name):
    '''Process the temporary file corresponding to one dataframe'''
    # print("Process", df_name, tmp.name)  # uncomment for debugging
    if tmp is not None:
        tmp.close()
        df[df_name] = pd.read_csv(tmp.name)
        os.remove(tmp.name)  # do not forget to remove the temp file

with open('LUV.csv') as file:
    df_name = "NONAME"  # should never be in resulting dict...
    tmp = None
    for line in file:
        # print(line)  # uncomment for debugging
        if len(line.strip()) == 0:  # close temp file on empty line
            process(tmp, df_name)   # and process it
            tmp = None
        elif tmp is None:           # a new part: store the name
            df_name = line.strip()
            tmp = tempfile.NamedTemporaryFile("w", delete=False)
        else:
            tmp.write(line)         # just feed the temp file

    # process the last part if no empty line was present...
    process(tmp, df_name)
</code></pre> <p>This is not really efficient because each line is written to a temporary file and then read again, but it is simple and robust.</p> <p>A possible improvement would be to initially parse the parts with the csv module (it can parse a stream while pandas wants files). The downside is that the csv module only parses into strings and you lose the automatic conversion to numbers that pandas performs. My opinion is that it is worth it only if the file is large and the full operation will have to be repeated.</p>
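The csv-module improvement mentioned at the end of this answer can be taken a step further: split the parts in memory and skip temporary files entirely. A hedged, standard-library-only sketch of that idea (the helper name and sample lines are illustrative, not from the original):

```python
def split_on_blank_lines(lines):
    """Group text lines into (name, body_lines) parts: each part starts
    with a name line and is terminated by a blank line (or end of input)."""
    parts = []
    name, body = None, []
    for line in lines:
        if not line.strip():      # blank line closes the current part
            if name is not None:
                parts.append((name, body))
            name, body = None, []
        elif name is None:        # first non-blank line names the part
            name = line.strip()
        else:
            body.append(line.rstrip("\n"))
    if name is not None:          # input may end without a trailing blank line
        parts.append((name, body))
    return parts

sample = ["Balance Sheet\n", "Report Date 2001 2000\n", "Cash 1 2\n", "\n",
          "Income Statement\n", "Report Date 2001 2000\n", "Revenue 3 4\n"]
for name, body in split_on_blank_lines(sample):
    print(name, "->", len(body), "rows")
```

Each `body` could then be handed to `pd.read_csv(io.StringIO("\n".join(body)), sep=...)` without touching the filesystem.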
python|pandas|file|csv|dataframe
1
643
19,229,389
Beautiful Soup crashes upon special chars like "&quot;" and "&lt;"
<p>I'm trying to scrape an atom based RSS feed using beautiful soup, but it's proving difficult. Capturing the data goes just fine until an <code>&lt;item&gt;</code> comes up that breaks the code and crashes the script. Such <code>&lt;item&gt;</code>s consistently have tags (firefox marks them in orange) like "&amp; lt;" or "&amp; quot;", while <code>&lt;item&gt;</code>s without them work fine. I've tried a bunch of stuff like BeautifulStoneSoup, stripping special chars with regex, and setting the "xml" argument, but nothing works and often they just throw a warning about being deprecated in BS4. </p> <p>Why do these characters appear and how can I deal with them effectively?</p> <p>Here's a page I'm trying to scrape: <a href="http://www.thestar.com/feeds.articles.news.gta.rss" rel="nofollow">http://www.thestar.com/feeds.articles.news.gta.rss</a></p> <p>And here's my code:</p> <pre><code>news_url = "http://www.thestar.com/feeds.articles.news.gta.rss" # Toronto Star RSS Feed

try:
    news_rss = urllib2.urlopen(news_url)
    news = news_rss.read()
    news_rss.close()
    soup = BeautifulSoup(news)
except:
    return "error"

titles = soup.findAll('title')
links = soup.findAll('link')

for link in links:
    link = link.contents # I want the url without the &lt;link&gt; tags

news_stuff = []
for item in titles:
    if item.text == "TORONTO STAR | NEWS | GTA": # These have &lt;title&gt; tags and I don't want them; just skip 'em.
        pass
    else:
        news_stuff.append((item.text, links[i])) # Here's a news story. Grab it.

i = 0
for thing in news_stuff:
    print '&lt;a href="'
    print thing[1]
    print '"target="_blank"&gt;'
    print thing[0]
    print '&lt;/a&gt;&lt;br/&gt;'
    i += 1
</code></pre>
<p>Not sure which problem you are talking about, but I got this error while running your code:</p> <pre><code>UnicodeEncodeError: 'ascii' codec can't encode character u'\u2018' in position 54: ordinal not in range(128)
</code></pre> <p>To fix it I just added encoding:</p> <pre><code>for thing in news_stuff:
    print '&lt;a href="'
    print thing[1]
    print '"target="_blank"&gt;'
    print thing[0].encode("utf-8")
    print '&lt;/a&gt;&lt;br/&gt;'
    i += 1
</code></pre> <p>After that the script executes without any errors.</p>
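Note the traceback here is an encode error (raised while printing a non-ASCII character), not the decode error the question title mentions. A minimal standalone demonstration of both directions, in Python 3 syntax:

```python
ch = "\u2018"                      # left single quotation mark

# UTF-8 can represent it; ASCII cannot, which is the failure above.
utf8_bytes = ch.encode("utf-8")
print(utf8_bytes)                  # b'\xe2\x80\x98'

try:
    ch.encode("ascii")
except UnicodeEncodeError as exc:
    print("encode failed:", exc.reason)

# Round-tripping through UTF-8 recovers the original character.
assert utf8_bytes.decode("utf-8") == ch
```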
python|web-scraping|beautifulsoup
2
644
18,911,903
Python requests, can't log into a site
<p>I am trying to use Python (3.2) requests to login to a site and navigate protected content on subsequent pages. However, when I login it seems to just leave me at the original login page (not navigating to the success page), and the subsequent page call is only showing the unprotected content. Can you please help me identify the bug in my code:</p> <pre><code>import requests import sys class login: def __init__(self): self.payload = None self.c = None def start(self,username,password,loginpage): self.payload = {'login':username,'password':password} self.loginpage = loginpage def login(self,url): self.c = requests.session() response = self.c.post(self.loginpage,data=self.payload) if response.status_code == 200: request = self.c.get(url) print(request.text) if __name__ == '__main__': username = 'username' password = "password" loginpage = "https://www.clubfreetime.com/login/" nextpage = "http://www.clubfreetime.com/new-york-city-nyc/free-theater-performances-shows" login = login() login.start(username,password,loginpage) login.login(nextpage) </code></pre>
<p>I figured out the problem. I needed to change self.payload to: self.payload = {'login':username,'password':password,'submit_login':'Login'}</p>
python|python-3.x|python-requests
0
645
62,056,312
How can I use python to convert multiple date columns into one date column and sum up their values (example below)?
<p>Following is the table I have -</p> <pre><code>Market  05-20  06-20  07-20  08-20
HK      5      5      5      5
US      2      2      2      2
HK      3      3      3      3
UK      7      7      7      7
UK      2      2      2      2
</code></pre> <p>Following is what I want to make of it -</p> <pre><code>Market  Date   Quantity
HK      05-20  8
HK      06-20  8
HK      07-20  8
HK      08-20  8
US      05-20  2
US      06-20  2
US      07-20  2
US      08-20  2
UK      05-20  9
UK      06-20  9
UK      07-20  9
UK      08-20  9
</code></pre> <p>How can I use python to convert multiple date columns into one date column and sum up their values (example below)?</p>
<p>First you can use </p> <pre><code>df = df.groupby("Market").sum()
</code></pre> <p>Result:</p> <pre><code>        05-20  06-20  07-20  08-20
Market
HK          8      8      8      8
UK          9      9      9      9
US          2      2      2      2
</code></pre> <p>Next you can </p> <pre><code>df = df.stack()
</code></pre> <p>Result:</p> <pre><code>Market
HK      05-20    8
        06-20    8
        07-20    8
        08-20    8
UK      05-20    9
        06-20    9
        07-20    9
        08-20    9
US      05-20    2
        06-20    2
        07-20    2
        08-20    2
</code></pre> <p>Now you only have to <code>reset_index()</code>, and add column names.</p> <pre><code>df = df.reset_index()
df.columns = ['Market', 'Data', 'Quantity']
</code></pre> <p>Result: </p> <pre><code>   Market   Data  Quantity
0      HK  05-20         8
1      HK  06-20         8
2      HK  07-20         8
3      HK  08-20         8
4      UK  05-20         9
5      UK  06-20         9
6      UK  07-20         9
7      UK  08-20         9
8      US  05-20         2
9      US  06-20         2
10     US  07-20         2
11     US  08-20         2
</code></pre> <hr> <p>Full example code. I use <code>io.StringIO</code> only to simulate a file.</p> <pre><code>text ='''Market 05-20 06-20 07-20 08-20
HK 5 5 5 5
US 2 2 2 2
HK 3 3 3 3
UK 7 7 7 7
UK 2 2 2 2'''

import pandas as pd
import io

#df = pd.read_csv("filename.csv")
df = pd.read_csv(io.StringIO(text), sep="\s+")

df = df.groupby("Market").sum()
print(df)

df = df.stack()
print(df)

df = df.reset_index()
print(df)

df.columns = ['Market', 'Data', 'Quantity']
print(df)
</code></pre>
python
1
646
67,497,778
Why am I unable to scrape values from a Hidden tooltip of a Highchart using selenium python?
<p>I've particularly asked a couple of questions on the same topic before asking it one final time. To begin with, I am scraping values from <a href="https://www.similarweb.com/website/zalando.de/#overview" rel="nofollow noreferrer">https://www.similarweb.com/website/zalando.de/#overview</a></p> <p>I am trying to scrape the contents from a graph. Take a look at this highchart graph. <a href="https://i.stack.imgur.com/bkdRb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bkdRb.png" alt="enter image description here" /></a></p> <p>I want to scrape value like its value as : 27,100,000. from the hidden tooltip. At present I am able to scrape the Months as [Nov '20,....Apr '21], However, I am unable to scrape its values.</p> <p>Here's my complete code:</p> <pre><code>def website_monitoring(): websites = ['https://www.similarweb.com/website/zalando.de/#overview'] options = webdriver.ChromeOptions() options.add_argument('start-maximized') options.add_experimental_option(&quot;excludeSwitches&quot;, [&quot;enable-automation&quot;]) options.add_experimental_option(&quot;useAutomationExtension&quot;, False) browser = webdriver.Chrome(ChromeDriverManager().install(), options=options) for crawler in websites: browser.get(crawler) wait = WebDriverWait(browser, 10) website_names = browser.find_element_by_xpath('/html/body/div[1]/main/div/div/section[1]/div[1]/div/div[1]/a').get_attribute(&quot;href&quot;) total_visits = browser.find_element_by_xpath('/html/body/div[1]/main/div/div/div[2]/div[2]/div/div[3]/div/div/div/div[2]/div/span[2]/span[1]').text tooltip = wait.until(EC.presence_of_element_located((By.XPATH, &quot;//*[local-name() = 'svg']/*[local-name()='g'][8]/*[local-name()='text']&quot;))) ActionChains(browser).move_to_element(tooltip).perform() month_value = wait.until(EC.presence_of_all_elements_located((By.XPATH, &quot;//*[local-name() = 'svg']/*[local-name()='g' and @class='highcharts-tooltip']/*[local-name()='text']&quot;))) values = [elem.text 
for elem in month_value] print('VALUES--&gt;', values) months = browser.find_elements(By.XPATH, &quot;//*[local-name() = 'svg']/*[local-name()='g'][6]/*/*&quot;) for date in months: print(date.text) # printing all scraped data print('Website Names:', website_names) print('Total visits:', total_visits) if __name__ == &quot;__main__&quot;: website_monitoring() </code></pre> <p>The output that I presently get:</p> <pre><code>VALUES--&gt; ['']
Nov '20
Dec '20
Jan '21
Feb '21
Mar '21
Apr '21
</code></pre> <p>The output that I want:</p> <pre><code>VALUES--&gt; ['27,100,000', .....]
Nov '20
Dec '20
Jan '21
Feb '21
Mar '21
Apr '21
</code></pre> <p>I have been stuck on this issue for 2 days and nothing so far has worked. Please help!</p> <p><strong>EDIT</strong>: I also tried a method to check if a csv file exists upon inspecting the page and then going to the networks tab, as Highcharts graphs usually store a csv file, but I guess the site has blocked it. Is this possible by using json or lxml?</p>
<p>I have a solution that works. I took the time to identify a way to hover over each of the points, print the data, and move to the next. This could be way cleaner, but it works. Here's my python file:</p> <pre><code>from selenium import webdriver import chromedriver_autoinstaller from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.common.by import By from selenium.webdriver.common.action_chains import ActionChains import time def website_monitoring(): chromedriver_autoinstaller.install() websites = ['https://www.similarweb.com/website/zalando.de/#overview'] options = webdriver.ChromeOptions() options.add_argument('start-maximized') options.add_experimental_option(&quot;excludeSwitches&quot;, [&quot;enable-automation&quot;]) options.add_experimental_option(&quot;useAutomationExtension&quot;, False) browser = webdriver.Chrome(options=options) def stringToPrint(): return browser.find_element_by_css_selector('g.highcharts-tooltip &gt; text &gt; tspan:nth-child(1)').text + ': ' + browser.find_element_by_css_selector('tspan:nth-child(3)').text for crawler in websites: browser.get(crawler) wait = WebDriverWait(browser, 10) website_names = browser.find_element_by_xpath('/html/body/div[1]/main/div/div/section[1]/div[1]/div/div[1]/a').get_attribute(&quot;href&quot;) total_visits = browser.find_element_by_xpath('/html/body/div[1]/main/div/div/div[2]/div[2]/div/div[3]/div/div/div/div[2]/div/span[2]/span[1]').text highchartElement = wait.until(EC.presence_of_element_located((By.CSS_SELECTOR, 'g:nth-child(8) &gt; path:nth-child(3)'))) ActionChains(browser).move_to_element(highchartElement).move_by_offset(-300,0).perform() print(stringToPrint()) ActionChains(browser).move_by_offset(120,-10).perform() print(stringToPrint()) ActionChains(browser).move_by_offset(120,0).perform() print(stringToPrint()) ActionChains(browser).move_by_offset(120,-10).perform() print(stringToPrint()) 
ActionChains(browser).move_by_offset(120,10).perform() print(stringToPrint()) ActionChains(browser).move_by_offset(120,0).perform() print(stringToPrint()) # printing all scraped data print('Website Names:', website_names) print('Total visits:', total_visits) if __name__ == &quot;__main__&quot;: website_monitoring() </code></pre> <p>And here's the console output:</p> <pre><code>November, 2020: 29,900,000 December, 2020: 27,100,000 January, 2020: 26,900,000 February, 2021: 22,600,000 March, 2021: 24,700,000 April, 2021: 26,200,000 Website Names: http://zalando.de/ Total visits: 19.94M </code></pre>
python|json|selenium|selenium-webdriver|lxml
1
647
67,313,416
How to multiply tuple values in array?
<p>I've tried everything to multiply the values in the tuple, but I get the error: TypeError: can't multiply sequence by non-int of type 'tuple' .</p> <pre><code>from itertools import product arr = 0 val_x = [] val_y = [] n = int(input('n = ')) def multiply(product, *nums): factor = product for num in nums: factor *= num return factor if __name__ == &quot;__main__&quot;: for x in range(n ** 2): for y in range(n ** 2): if y == 0: arr += 1 if arr == 1: val_x.append(bin(x)[2:].zfill(n)) val_y.append(bin(x)[2:].zfill(n)) arr = 0 res = list(product(val_x, val_y)) print(f'Input x,y = {res}') print(f'Output z = {multiply(*res)}') </code></pre> <p>The result I get is an array which contains tuples, for example [(00, 00), (00, 01)]. How do I multiply them so I get result of [(0000), (0000)] etc. When I run the script, for the last print, I get the error.</p>
<p>As per your description in the comments, you need to change <code>multiply()</code>. In the following implementation, <code>multiply()</code> receives a list of tuples and returns a list by multiplying all elements of each tuple in the input list. It is required to cast <code>str</code> to <code>int</code> before multiplication. The output of <code>multiply()</code> is a list of integers; you can convert the integers to binary representation if needed.</p> <pre><code>from itertools import product

arr = 0
val_x = []
val_y = []
n = int(input('n = '))

def multiply(res):
    output = []
    for i in res:
        output.append(int(i[0], 2)*int(i[1], 2))
    return output

if __name__ == &quot;__main__&quot;:
    for x in range(n ** 2):
        for y in range(n ** 2):
            if y == 0:
                arr += 1
            if arr == 1:
                val_x.append(bin(x)[2:].zfill(n))
                val_y.append(bin(x)[2:].zfill(n))
            arr = 0
    res = list(product(val_x, val_y))
    print(f'Input x,y = {res}')
    print(f'Output z = {multiply(res)}')
</code></pre> <p>sample for <code>n=2</code>:</p> <pre><code>n = 2
Input x,y = [('00', '00'), ('00', '01'), ('00', '10'), ('00', '11'), ('01', '00'), ('01', '01'), ('01', '10'), ('01', '11'), ('10', '00'), ('10', '01'), ('10', '10'), ('10', '11'), ('11', '00'), ('11', '01'), ('11', '10'), ('11', '11')]
Output z = [0, 0, 0, 0, 0, 1, 2, 3, 0, 2, 4, 6, 0, 3, 6, 9]
</code></pre>
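The essential trick in this answer is converting the base-2 strings with `int(s, 2)` before multiplying. Isolated into a helper (the function name and `width` default are illustrative assumptions), including formatting the product back to a zero-padded binary string as the question's desired output suggests:

```python
def multiply_binary_pair(pair, width=4):
    """Multiply a tuple of two base-2 strings; return the product
    as a zero-padded binary string of the given width."""
    a, b = pair
    product = int(a, 2) * int(b, 2)      # parse base 2, then multiply ints
    return format(product, "0{}b".format(width))

pairs = [("00", "00"), ("01", "11"), ("10", "10")]
print([multiply_binary_pair(p) for p in pairs])   # ['0000', '0011', '0100']
```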
python|python-3.x
2
648
63,723,147
'utf-8' codec can't decode byte 0xff in position 0: invalid start byte / unexpected end of data
<p>I am trying to pass some functions from C++ to Python using the Qt library (Pyside2 in python). At the moment everything works correctly passing the code from one side to the other and adapting it to Python, but when I start treating images errors happen.</p> <p>The only thing that I achieve is to correctly parse the shadows of the images, however, the inner part of the image (which would correspond to the rest of the colors is hollow).</p> <p>I should get this</p> <p><img src="https://i.stack.imgur.com/hQ1Dg.png" alt="Expected Image result" /></p> <p>but I get this instead</p> <p><img src="https://i.stack.imgur.com/hJVY3.png" alt="This code Image result" /></p> <p>And every time I treat those bytes, the program crashes with the following errors.</p> <pre><code>'utf-8' codec can't decode byte 0x87 in position 0: invalid start byte 'utf-8' codec can't decode byte 0xba in position 0: invalid start byte 'utf-8' codec can't decode byte 0xcb in position 0: unexpected end of data </code></pre> <p>Debugging the program I discovered that only the bytes that correspond to the colors crashes the program, making it impossible to know the RGBA of the pixel. The problem must be with the way I get GB and AR in Python, since the original C ++ program never had this problem in any of the pixels.</p> <p>I am quite a newbie dealing with bytes and bytearrays. 
Do you think that I can be doing wrong to get GB and AR or what do you think?</p> <p>Thank you all!</p> <p>This is the original function in C++:</p> <pre><code>QImage ImageConverter::convertGBAR4444(QByteArray &amp;array, int width, int height, int startByte) /// GBAR = ARGB (endianness) { qDebug() &lt;&lt; &quot;Opened GBAR4444 image.&quot;; QImage img(width, height, QImage::Format_ARGB32); img.fill(Qt::transparent); for (int y = 0; y &lt; height; ++y) { for (int x = 0; x &lt; width; ++x) { uchar gb = array.at(startByte + y * 2 * width + x * 2); uchar ar = array.at(startByte + y * 2 * width + x * 2 + 1); uchar g = gb &gt;&gt; 4; uchar b = gb &amp; 0xF; uchar a = ar &gt;&gt; 4; uchar r = ar &amp; 0xF; img.setPixel(x, y, qRgba(r * 0x11, g * 0x11, b * 0x11, a * 0x11)); } } return img; } </code></pre> <p>And this is my code for the Python version of the project:</p> <pre><code>from PySide2 import QtGui from PySide2.QtGui import QImage, qRgba, qRgb def convertGBAR4444(array, width, height, startByte = 0): y = 0 img = QImage(width, height, QImage.Format_ARGB32) img.fill(QtGui.QColor(0,0,0,0)) while (y &lt; height): x = 0 while (x &lt; width): gb = ord(array.at(startByte + y * 2 * width + x * 2)) ar = ord(array.at(startByte + y * 2 * width + x * 2 + 1)) g = gb &gt;&gt; 4 b = gb &amp; 0xF a = ar &gt;&gt; 4 r = ar &amp; 0xF img.setPixel(x, y, qRgba(r * 0x11, g * 0x11, b * 0x11, a * 0x11)) x += 1 y += 1 return img </code></pre> <p>If you want to try it yourself this are the data you will need:</p> <pre><code>convertGBAR4444(data, 32, 32, 13) data = b'\x01 \x00 \x00\x10\x00\x10\x00\r\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\ x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00 
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0 0\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x11U&quot;\x873\xbaD\xcbD\xcbD\xcb3\xa8&quot;\x87\x00B\x00\x10\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\ x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00&quot;\x87f\xff\xca\xff\xdb\xff\xca\xff\xeb\xff\xeb\xff\xdb\xff\xca\xff\x b9\xffU\xffD\xdb\x00B\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x11D\xdcU\xff\xa9\xff\xfc \xff\xfc\xff\xfc\xff\xfc\xff\xfc\xff\xfc\xff\xfc\xff\xec\xffv\xffU\xffU\xfe&quot;\x86\x00 \x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x 00\x00\x00\x00\x003\xbaU\xffU\xffU\xff\xb9\xff\xfc\xff\xb9\xff\x98\xffe\xffU\xffU\xffU\xffU\xffU\xffU\xffD\xfe\x11\x95\x000\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00 \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00&quot;fU\xff\xba\xff\xfc\xff\x98\xffU\xffU\xffU\xffD\xfeD\xfdD\xfdD\xfdU\xffU\xffU\xffU\xffD\xfe3\xfc\x00r\x00 \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\ x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00D\xdc\xa9\xff\xfc\xff\xfc\xff\xa8\xffU\xffD\xfe3\xfd3\xfc3\xfc3\xfc3\xfc3\xfcD\xfdU\xffU\xffU\xffD\xfd&quot;\xc7\x00@\x00\x10\x00\x00\x00 \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00&quot;U\xff\xec\xff\xfc\xff\xfc\xffv\xffD\xfe3\xfc3\xfb&quot;\xd7&quot;\xd83\xfb3\xfc3\xfc3\xfcU\xffU\xffU\xffD\xfe3\xea\x00`\x00 \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00&quot;vU\xff\xa9\xff\xfc\xff\xa9\xffU\xffD\xfd3\xea\x00\x90\x00\x80\x00p\x00p3\xfb3\xfc3\xfcU\xffU\xffU\xffD\xfe 
3\xea\x00p\x000\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x11TU\xfeU\xffU\xffU\xffD\xfeU\xff&quot;\xc6\x00p\x00P\x000\x00@3\xec3\xfc3\xfcU\xffU\xffU\xffD\ xfe3\xfb\x00\x81\x000\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x10\x00RD\xfeU\xffU\xffU\xff3\xd9\x00\x81\x00P\x00 \x00 3\xbaD\xfd3\xfcD\xfeU\xff U\xffU\xffD\xfd&quot;\xd8\x00p\x000\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00 \x00@\x00`\x00p\x00p\x00`\x00P\x000\x00!D\xdcD\xfe3\xfcD\xfeU\xf fU\xffU\xffD\xfe3\xfb\x00\xa1\x00`\x00 \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x10\x00 \x000\x000\x000\x00 \x11CU\xfeU\xffD\xf eU\xffU\xffU\xffU\xffD\xfe3\xfb\x11\xc5\x00p\x00@\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x10\x00\x10\x 00\x10\x11CU\xfeU\xffU\xffU\xffU\xffU\xffU\xffD\xfd3\xfc\x11\xb3\x00p\x00P\x00 \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00 \x00\x00\x00\x00\x00\x00\x00\x00\x11D\xedU\xffU\xffU\xffU\xffU\xffU\xff3\xfc3\xfb\x11\xb3\x00p\x00P\x00 \x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0 0\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00&quot;fU\xffU\xffU\xffU\xffU\xffU\xffD\xfe3\xfb\x00\xb2\x00p\x00P\x00 \x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\ x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00D\xccU\xffU\xffU\xffU\xffU\xffU\xff3\xfa&quot;\xe9\x00\x80\x00P\x00 \x00\x10\x00\x00\x00\x00\x00\x00\ x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00D\xeeU\xffU\xffU\xffU\xffU\xffD\xfd3\xfb\x11\xc5\x00`\x000\x 
00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x003\xcaD\xfc3\xfc3\xfc3 \xfc3\xfc3\xfb&quot;\xe8\x00\x81\x00P\x00 \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x 00\x00\x00\x00\x00\x000\x00a\x00\x81\x00\x91\x00\x91\x00\x91\x00\x91\x00\x81\x00P\x000\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\ x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x11C&quot;\x863\xca3\xda&quot;\xc8\x11\x93\x00r\x00P\x000\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00 \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x10U\xff\xa9\xff\xa8\xffU\xffU\xffD\xfdD\xfd\x00@\x00 \x00\x10\x00\x00 \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x003\xcb\x98\xff\xfc\xff\xeb\x ffU\xffU\xffD\xfe3\xfc\x11\xa5\x000\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00 \x00\x00\x00\x00\x00\x00U\xffe\xff\xb9\xffv\xffU\xffU\xffD\xfe3\xfc3\xea\x00P\x00 \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\ x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x10U\xffU\xffU\xffU\xffU\xffU\xff3\xfc3\xfc3\xfb\x00`\x000\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\ x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x103\xba3\xfc3\xfc3\xfc3\xfc3\xfc3\xfc3\xfc&quot;\xd7\x00`\x000\x00\x00\x00\x00\x00\x00\x00\x00\ 
x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0003\xfc3\xfc3\xfc3\xfc3\xfc3\xfc3\xfc\x00\x90\x00 P\x00 \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x10\x000 3\xda3\xfc3\xfc3\xfc3\xea\x00\x90\x00`\x000\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00 \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x10\x000\x00P\x00p\x00p\x00p\x00`\x00@\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00 \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x10\x00 \x000\x000\x000\x00 \x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x 00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x10\x00\x10\x00\ x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00' </code></pre>
<p>The code you’ve posted doesn’t work because there are several errors, but none that would cause the error message you’re observing.</p> <p>Here’s the code with those errors fixed. This should work (though it doesn’t work with the data you provided, since that is truncated):</p> <pre><code>def convertGBAR4444(array, width, height, startByte = 0):
    img = QImage(width, height, QImage.Format_ARGB32)
    img.fill(QtGui.QColor(0,0,0,0))
    for y in range(height):
        for x in range(width):
            gb = array[startByte + y * 2 * width + x * 2]
            ar = array[startByte + y * 2 * width + x * 2 + 1]
            #print(gb, ar)
            g = gb &gt;&gt; 4
            b = gb &amp; 0xF
            a = ar &gt;&gt; 4
            r = ar &amp; 0xF
            img.setPixel(x, y, qRgba(r * 0x11, g * 0x11, b * 0x11, a * 0x11))
    return img
</code></pre>
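One plausible source of the question's `UnicodeDecodeError` is treating raw pixel bytes as text. In Python 3, indexing a `bytes` object already yields an `int`, so neither `decode()` nor `ord()` is needed before the bit operations — a small standalone illustration:

```python
raw = b"\x87\xba\xcb"        # raw GBAR pixel bytes, not UTF-8 text

# Decoding raw image bytes as text raises the very error from the question.
try:
    raw.decode("utf-8")
except UnicodeDecodeError as exc:
    print("decode failed at byte", exc.start)

# Indexing gives ints directly, so the nibbles split arithmetically.
gb = raw[0]                  # 0x87 == 135
g, b = gb >> 4, gb & 0xF
print(g, b)                  # 8 7
```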
python|pyside2
1
649
36,357,036
server doesn't send data to clients
<p>I have this piece of code for server to handle clients. it properly receive data but when i want to send received data to clients nothing happens.</p> <p><strong>server</strong></p> <pre><code>import socket from _thread import * class GameServer: def __init__(self): # Game parameters board = [None] * 9 turn = 1 # TCP parameters specifying self.tcp_ip = socket.gethostname() self.tcp_port = 9999 self.buffer_size = 2048 self.s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) try: self.s.bind((self.tcp_ip, self.tcp_port)) except: print("socket error, Please try again! ") self.s.listen(5) print('Waiting for a connection...') def messaging(self, conn): while True: data = conn.recv(self.buffer_size) if not data: break print("This data from client:", data) conn.send(data) def thread_run(self): while True: conn, addr = self.s.accept() print('connected to: ' + addr[0] + " : " + str(addr[1])) start_new_thread(self.messaging, (conn,)) def main(): gameserver = GameServer() gameserver.thread_run() if __name__ == '__main__': main() ' </code></pre> <p>I want to if data received completely send to clients by retrieve the address of sender and send it to other clients by means of <code>conn.send()</code> but seems there is no way to do this with 'send()' method.</p> <p>The piece of <strong>client side</strong> code '</p> <pre><code>def receive_parser(self): global turn rcv_data = self.s.recv(4096) rcv_data.decode() if rcv_data[:2] == 'c2': message = rcv_data[2:] if message[:3] == 'trn': temp = message[3] if temp == 2: turn = -1 elif temp ==1: turn = 1 elif message[:3] == 'num': self.set_text(message[3]) elif message[:3] == 'txt': self.plainTextEdit_4.appendPlainText('client1: ' + message[3:]) else: print(rcv_data) ' </code></pre> <p>the receiver method does not receive any data.</p>
<p>I modified your code a little(as I have python 2.7) and <code>conn.send()</code> seems to work fine. You can also try <code>conn.sendall()</code>. Here is the code I ran:</p> <p><strong>Server code:</strong></p> <pre><code>import socket from thread import * class GameServer: def __init__(self): # Game parameters board = [None] * 9 turn = 1 # TCP parameters specifying self.tcp_ip = "127.0.0.1"#socket.gethostname() self.tcp_port = 9999 self.buffer_size = 2048 self.s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) try: self.s.bind((self.tcp_ip, self.tcp_port)) except: print("socket error, Please try again! ") self.s.listen(5) print('Waiting for a connection...') def messaging(self, conn): while True: data = conn.recv(self.buffer_size) if not data: break print("This data from client:", data) conn.send(data) def thread_run(self): while True: conn, addr = self.s.accept() print('connected to: ' + addr[0] + " : " + str(addr[1])) start_new_thread(self.messaging, (conn,)) def main(): gameserver = GameServer() gameserver.thread_run() main() </code></pre> <p><strong>Client code:</strong></p> <pre><code>import socket s=socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.connect(("127.0.0.1", 9999)) def receive_parser(): #global turn s.sendall("hello world") rcv_data = s.recv(4096) # rcv_data.decode() # if rcv_data[:2] == 'c2': # message = rcv_data[2:] # if message[:3] == 'trn': # temp = message[3] # if temp == 2: # turn = -1 # elif temp ==1: # turn = 1 # elif message[:3] == 'num': # self.set_text(message[3]) # elif message[:3] == 'txt': # self.plainTextEdit_4.appendPlainText('client1: ' + message[3:]) # else: print(rcv_data) receive_parser() </code></pre>
sockets|python-3.x|networking|network-programming|python-sockets
0
650
19,489,040
Invert alternation regex
<p>I have an alternation regex that I want to invert but can't seem to get it working, it looks like this:</p> <pre><code>( |\w+-\w+| \+\w+|\w) </code></pre> <p>which will extract all special characters except for - in the middle of a word or + in front of a word. The problem is that I want to remove everything that is not covered by this regex but the simple solution of putting ?! in front of this doesn't work.</p> <p>sample input: <code>-xxx xxx- xx-xx +xxx xxx+ xx+xx</code><br/> desired output: <code>xxx xxx xx-xx +xxx xxx xxxx</code></p> <p>Thanks for the help,</p> <p>Mattias</p>
<p>Your question is a bit unclear, are you looking for this?</p> <pre><code>a = "abc def,ghi remove - this keep-that foo + bar +keep!"
import re
print re.sub(r'[^\w\s+-]|(?&lt;!\w)-(?!\w)|\+(?!\w)', '', a)
#abc defghi remove this keep-that foo bar +keep
</code></pre> <p>The more accurate regexp:</p> <pre><code>[^\w\s+-]|^-|-$|\+$|(?&lt;=\W)-|-(?=\W)|\+(?=\W)|(?&lt;=\w)\+
</code></pre>
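Applying the "more accurate" regexp above to the exact sample from the question seems to produce the desired output; a Python 3 sketch (the pattern is copied verbatim from the answer):

```python
import re

# pattern copied from the answer above
pattern = r'[^\w\s+-]|^-|-$|\+$|(?<=\W)-|-(?=\W)|\+(?=\W)|(?<=\w)\+'

sample = "-xxx xxx- xx-xx +xxx xxx+ xx+xx"
cleaned = re.sub(pattern, '', sample)
print(cleaned)  # xxx xxx xx-xx +xxx xxx xxxx
```

The alternation keeps `-` only between word characters and `+` only directly before one, which matches the question's desired output exactly.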
python|regex
2
651
43,808,714
Creating a New DataFrame Column by Using a Comparison Operator
<p>I have a DataFrame that looks like something similar to this:</p> <pre><code>    0
0   3
1  11
2   7
3  15
</code></pre> <p>And I want to add a column using two comparison operators. Something like this:</p> <pre><code>df[1] = np.where(df[1]&lt;= 10,1 &amp; df[1]&gt;10,0)
</code></pre> <p>I want my return to look like this:</p> <pre><code>    0  1
0   3  1
1  11  0
2   7  1
3  15  0
</code></pre> <p>But, I get this error message:</p> <pre><code>TypeError: cannot compare a dtyped [float64] array with a scalar of type [bool]
</code></pre> <p>Any help would be appreciated!</p>
<p><strong>Setup</strong></p> <pre><code>df = pd.DataFrame({'0': {0: 3, 1: 11, 2: 7, 3: 15}})

Out[1292]:
    0
0   3
1  11
2   7
3  15
</code></pre> <p><strong>Solution</strong></p> <pre><code>#compare df['0'] to 10 and convert the results to int and assign it to df['1']
df['1'] = (df['0']&lt;10).astype(int)

df
Out[1287]:
    0  1
0   3  1
1  11  0
2   7  1
3  15  0
</code></pre>
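If you want to keep `np.where`, note that it takes three separate arguments (condition, value where true, value where false); the `TypeError` in the question came from cramming both branches into a single boolean expression. A sketch:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({0: [3, 11, 7, 15]})

# condition, value where True, value where False
df[1] = np.where(df[0] <= 10, 1, 0)
print(df[1].tolist())  # [1, 0, 1, 0]
```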
python|pandas|dataframe
1
652
54,613,623
Return boolean in for-loop evaluating multiple lists
<p>I'm attempting to iterate over multiple text articles, comparing whether these articles have keywords in 2 disparate lists. If the article has a keyword from both lists, then it should return 'true.' If an article only has a keyword from one list, then it should be 'false'.</p> <p>Note: I'm breaking down a larger for-loop into smaller bits to see if I can get it to work, which is why I'm not splitting this into 2 for loops which would check each list and return a '1' for each and then subsetting out anything less than a '2'...which still may be the way to go even if it's a large dataset?</p> <p>Example of Data:</p> <pre><code>Data:
Text                                     result
The co-worker ate all of the candy.      False
Bluejays love peanuts.                   False
Westies will eat avocado, even figs.     True
</code></pre> <p>Here is my code, but I'm struggling with my for loop.</p> <pre><code>def z(etext):
    words = ['candy', 'chocolate', 'mints', 'figs', 'avocado']
    words2 = ['eat', 'slurp', 'chew', 'digest']
    for keywords in words and words2:
        return True

df['result'] = df['Keyterm'].apply(z)
</code></pre> <p>This code returns 'true' for every row of my dataframe, which is not correct. Each row has a list of text in it.</p> <p>EDIT: The solution:</p> <pre><code>def z(etext):
    words = ['candy', 'chocolate', 'mints', 'figs', 'avocado']
    words2 = ['eat', 'slurp', 'chew', 'digest']
    for keyword in words:
        index = etext.find(keyword)
        if index != -1:
            for anotherword in words2:
                index2 = etext.find(anotherword)
                if index2 != -1:
                    return True

df['result'] = df['Text'].apply(z)
</code></pre>
<p>What about</p> <blockquote> <p>Westies will eat avocado, even figs. [eat, avocado, figs]</p> </blockquote> <p>which has multiple keyterms; do you want to check each one of them? I mean, return True when each keyterm is present in both lists, or what?</p> <p>Check if this solution works for you:</p> <pre><code>Text = ["The co-worker ate all of the candy.", "Bluejays love peanuts.", "Westies will eat avocado, even figs."]
Keyterm = [["candy"], [], ["eat", "avocado", "figs"]]
data = pd.DataFrame({'Text': Text, 'Keyterm': Keyterm})

words = ['candy', 'chocolate', 'mints', 'figs', 'avocado']
words2 = ['eat', 'slurp', 'chew', 'digest', 'candy', 'figs']

def checkList(word, lists):
    if word in lists:
        return True
    else:
        return False

def z(etext):
    res = []
    for keyword in etext:
        ############# Using function checkList here ##############
        if checkList(keyword, words) and checkList(keyword, words2):
            res.append(True)
        else:
            res.append(False)
    return res

data['result'] = data['Keyterm'].apply(z)
</code></pre>
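If the goal is just the True/False column shown in the question (a hit in <em>both</em> lists), a simpler substring-based sketch with `any()` also works. Note that `in` does substring matching here, so e.g. `'eat'` would also match `'eating'`:

```python
words = ['candy', 'chocolate', 'mints', 'figs', 'avocado']
words2 = ['eat', 'slurp', 'chew', 'digest']

def has_both(text):
    # True only if at least one keyword from each list occurs in the text
    return any(w in text for w in words) and any(w in text for w in words2)

for text in ["The co-worker ate all of the candy.",
             "Bluejays love peanuts.",
             "Westies will eat avocado, even figs."]:
    print(text, has_both(text))
# only the last sentence prints True
```

With pandas this would be `df['result'] = df['Text'].apply(has_both)`.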
python|python-3.x
0
653
54,332,079
Pip install for warrant fails
<p>I tried installing warrant using pip and got the following error:</p> <pre><code>Command "c:\...\venv\scripts\python.exe -u -c "import setuptools, tokenize; __file__='C:\\...\\AppData\\Local\\Temp\\1\\pip-install-lahy2d9f\\pycryptodome\\setup.py'; f=getattr(tokenize, 'open', open)(__file__); code=f.read().replace('\r\n', '\n'); f.close(); exec(compile(code, __file__, 'exec'))" install --record C:\...\AppData\Local\Temp\1\pip-record-2t9higml\install-record.txt --single-version-externally-managed --compile --install-headers c:\\...\venv\include\site\python3.6\pycryptodome" failed with error code 1 in C:\\...\AppData\Local\Temp\1\pip-install-lahy2d9f\pycryptodome\ </code></pre> <p>Has anyone else faced this issue before?</p> <p>python version: 3.6</p> <p>pip version: 19.0.1</p>
<p>I had the same issue when I ran <code>pip3 install warrant</code>. I fixed the issue by installing a C compiler. Try installing Visual Studio build tools which provides a bunch of compilers.</p> <p><a href="https://visualstudio.microsoft.com/thank-you-downloading-visual-studio/?sku=BuildTools" rel="nofollow noreferrer">Link to Visual Studio build tools download</a></p>
python
1
654
71,295,114
JSON file loaded as one row, one column using pandas read_json; expecting a full dataframe
<p>I was provided with a JSON file which looks something like below when opened with Atom:</p> <pre><code>[&quot;[{\&quot;column1\&quot;:value1,\&quot;column2\&quot;:value2,\&quot;column3\&quot;:value3 ... </code></pre> <p>I tried loading it in Jupyter with pandas read_json as such:</p> <pre><code>data = pd.read_json('filename.json', orient = 'records') </code></pre> <p>And when I print <code>data.head()</code>, it shows the result below: <a href="https://i.stack.imgur.com/RTl45.png" rel="nofollow noreferrer">screenshot of results</a></p> <p>I have also tried the following:</p> <pre><code>import json with open('filename.json', 'r') as file: data = json.load(file) </code></pre> <p>When I check with <code>type(data)</code> I see that it is a <code>list</code>. When I check with <code>data[0][1]</code>, it returns me <code>{</code> i.e. it seems that the characters in the file has been loaded as a single element in the list?</p> <p>Just wondering if I am missing anything? I am expecting the JSON file to be loaded as a dataframe so that I can analyze the data inside. Appreciate any guidance and advice. Thanks in advance!</p>
<p>Ok, so since <code>head()</code> only shows one entry, I think the outer brackets are not needed. I would try to read your file as a string and change the string to something that <code>pd.read_json()</code> can parse. I assume that your file contains data in a form like this:</p> <pre><code>[&quot;[{\&quot;column1\&quot;:2,\&quot;column2\&quot;:\&quot;value2\&quot;,\&quot;column3\&quot;:4}, {\&quot;column1\&quot;:4,\&quot;column2\&quot;:\&quot;value2\&quot;,\&quot;column3\&quot;:8}]&quot;]
</code></pre> <p>Now, I would read it and remove trailing <code>\n</code> if they exist and correct the automatic escaping of the <code>read()</code> method. Then I remove <code>[&quot;</code> and <code>&quot;]</code> from the string with this code:</p> <pre><code>with open('input.json', 'r') as file:
    data = file.read().rstrip()
cleaned_string = data.replace('\\', '')[2:-2]
</code></pre> <p>The result is now a valid json string that looks like this:</p> <pre><code>'[{&quot;column1&quot;:2,&quot;column2&quot;:&quot;value2&quot;,&quot;column3&quot;:4}, {&quot;column1&quot;:4,&quot;column2&quot;:&quot;value2&quot;,&quot;column3&quot;:8}]'
</code></pre> <p>This string can now be easily read by pandas with this line:</p> <pre><code>pd.read_json(cleaned_string, orient = 'records')
</code></pre> <p>Output:</p> <pre><code>   column1 column2  column3
0        2  value2        4
1        4  value2        8
</code></pre> <p>The specifics (e.g. the indices to remove unused characters) could be different for your string as I do not know your input. However, I think this approach allows you to read your data.</p>
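An alternative that avoids the blanket backslash stripping (which would also corrupt any legitimately escaped characters inside values): if the file really is a JSON array wrapping one JSON-encoded string, as the escaped quotes suggest, you can simply decode twice. A sketch; the sample data here is fabricated to mimic that structure:

```python
import json

# build a string shaped like the file: a JSON array holding one JSON-encoded string
inner = json.dumps([{"column1": 2, "column2": "value2", "column3": 4},
                    {"column1": 4, "column2": "value2", "column3": 8}])
raw = json.dumps([inner])

outer = json.loads(raw)         # -> list containing a single JSON string
records = json.loads(outer[0])  # -> list of dicts
print(records[0]["column1"])    # 2
```

`records` can then be passed straight to `pd.DataFrame(records)`.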
python|json|pandas|dataframe
0
655
9,170,638
PyQT - setting the text color for a QTabWidget
<p>Is there any way to set the text color of a certain tab that's part of a QTabWidget? <a href="http://www.riverbankcomputing.co.uk/static/Docs/PyQt4/html/qtabbar.html#setTabTextColor" rel="nofollow">QTabBar</a> seems to have a method to set the tab text color, but I do not see a similar method for <a href="http://www.riverbankcomputing.co.uk/static/Docs/PyQt4/html/qtabwidget.html#setTabText" rel="nofollow">QTabWidget.</a></p>
<p>The tab text color can be set via the tab-widget's <a href="http://www.riverbankcomputing.co.uk/static/Docs/PyQt4/html/qtabwidget.html#tabBar" rel="noreferrer">tabBar</a> method:</p> <pre><code>tabwidget.tabBar().setTabTextColor(index, color) </code></pre>
python|pyqt|qtabwidget
6
656
9,357,268
Django related key of the same model
<p>I'm working on a feature for an app much like a Twitter Retweet.</p> <p>In the model for <code>Item</code>, I want to add a related field for <code>reposted_from</code> that will reference another <code>Item</code>. I dont think I use <code>ForeignKey</code> for this, since it's the same Model, but what do I use instead?</p>
<p>It is common to add a <a href="https://docs.djangoproject.com/en/dev/ref/models/fields/#foreignkey" rel="nofollow">foreign key to self</a> as such:</p> <pre><code>class Item(models.Model):
    parent = models.ForeignKey('self')
</code></pre> <p>You may specify a <a href="https://docs.djangoproject.com/en/dev/ref/models/fields/#django.db.models.ForeignKey.related_name" rel="nofollow">related name</a> as such:</p> <pre><code>class Item(models.Model):
    parent = models.ForeignKey('self', related_name='children')
</code></pre> <p>Because an Item may not have a parent, don't forget null=True and blank=True as such:</p> <pre><code>class Item(models.Model):
    parent = models.ForeignKey('self', related_name='children', null=True, blank=True)
</code></pre> <p>Then you will be able to query children as such:</p> <pre><code>item.children
</code></pre> <p>You might as well use <a href="https://github.com/django-mptt/django-mptt" rel="nofollow">django-mptt</a> and benefit from some optimization and extra tree features:</p> <pre><code>from mptt.models import MPTTModel, TreeForeignKey

class Item(MPTTModel):
    parent = TreeForeignKey('self', null=True, blank=True, related_name='children')
</code></pre>
python|django
7
657
9,121,826
Class and Array Problems
<p>I am absolutely useless with python and I'm struggling to do what seems to be simple things. I need to read a text file which contains a network routing table which contains the distance between each node on the network (below)</p> <pre><code>0,2,4,1,6,0,0
2,0,0,0,5,0,0
4,0,0,0,0,5,0
1,0,0,0,1,1,0
6,5,0,1,0,5,5
0,0,5,1,5,0,0
0,0,0,0,5,0,0
</code></pre> <p>I then need to assign it to a two dimensional array which I have done with the code i have written below..</p> <pre><code>Network = []
NodeTable = []

def readNetwork():
    myFile = open('network.txt','r')
    for line in myFile.readlines():
        line.strip(' \n' '\r')
        line = line.split(',')
        line = [int(num) for num in line]
        Network.append(line)
</code></pre> <p>Once that has been done I then need to iterate through the Network array and add each horizontal line to another array which will hold information about the nodes, but as far as I have been able to get with that is here:</p> <pre><code>class Node(object):
    index = #Needs to start from A and increase with each node
    previousNode = invalid_node
    distFromSource = infinity
    visited = False

NodeTable.append(Node())
</code></pre> <p>So that array will be initialised as:</p> <pre><code>A invalid_node infinity False
B invalid_node infinity False
C invalid_node infinity False
... etc
</code></pre> <p>Could anyone give me a hand with creating each node in the NodeTable array?</p>
<h2>Redundant line</h2> <p>Strings in Python are <strong>immutable</strong>, thus with the following line:</p> <pre><code>line.strip(' \n' '\r') </code></pre> <p>you are only getting a copy of the <code>line</code> string, stripped of some characters, but you do not assign it to anything. Change it into:</p> <pre><code>line = line.strip(' \n' '\r') </code></pre> <p>As DSM pointed out in the comments, it will not change much, as <code>int</code> will just ignore redundant whitespaces.</p> <h2>Mapping string to <code>int</code>s</h2> <p>You also are mapping strings to <code>int</code>s like that:</p> <pre><code>line = [int(num) for num in line] </code></pre> <p>which could be replaced by clearer:</p> <pre><code>line = map(int, line) </code></pre> <p>and should give you some slight performance gain also. To shorten your code, you can also replace the following lines:</p> <pre><code>line.strip(' \n' '\r') line = line.split(',') line = [int(num) for num in line] Network.append(line) </code></pre> <p>with the following:</p> <pre><code>Network.append(map(int, line.split(','))) </code></pre> <h2>How to increase <code>Node</code>'s <code>index</code> attribute with each instance</h2> <p>This could be done like that:</p> <pre><code>&gt;&gt;&gt; class Node(object): baseindex = '@' # sign before "A" def __init__(self): cls = self.__class__ cls.baseindex = chr(ord(cls.baseindex) + 1) self.index = self.baseindex self.previousNode = 'invalid_node' self.distFromSource = 'infinity' self.visited = False &gt;&gt;&gt; a = Node() &gt;&gt;&gt; a.index 'A' &gt;&gt;&gt; b = Node() &gt;&gt;&gt; b.index 'B' &gt;&gt;&gt; a.index 'A' </code></pre> <p>As you can see <code>baseindex</code> is attached to the class, and <code>index</code> is attached to the class's instance. 
I suggest you should attach every instance-specific variable to the instance, as shown in the example above.</p> <h2>Adding <code>Node</code> into the list as list</h2> <p>One of the easiest ways to insert it as list into another list, is to add a method returning it as list (see <code>as_list()</code> method):</p> <pre><code>&gt;&gt;&gt; class Node(object): baseindex = '@' # sign before "A" def __init__(self): cls = self.__class__ cls.baseindex = chr(ord(cls.baseindex) + 1) self.index = self.baseindex self.previousNode = 'invalid_node' self.distFromSource = 'infinity' self.visited = False def as_list(self): return [self.index, self.previousNode, self.distFromSource, self.visited] &gt;&gt;&gt; a = Node() &gt;&gt;&gt; a.index 'A' &gt;&gt;&gt; a.as_list() ['A', 'invalid_node', 'infinity', False] </code></pre> <p>so you should be able to add nodes like this:</p> <pre><code>NodeTable.append(Node().as_list()) </code></pre> <p>But remember - after doing the above, you will not get list of <code>Node</code> instances, you will get list of lists.</p>
python
4
658
39,367,593
Repeating a for in line loop python
<p>How would I repeat this (excluding the opening of the file and the setting of the variables)? This is my code in Python 3:</p> <pre><code>file = open('file.csv','r')
count = 0 #counts number of times i was equal to 1
i = 0 #column number

for line in file:
    line = line.split(",")
    if line[i] == 1:
        count = count + 1

i = i+1
</code></pre>
<p>If I understand the question, try this and adjust for however you want to format. Replace <code>NUM_COLUMNS</code> with the number of times you want it repeating:</p> <pre><code>file = open('file.csv','r')
data = file.readlines()

for i in range(NUM_COLUMNS):
    count = 0
    for line in data:
        line = line.split(",")
        if line[i] == ("1"):
            count = count + 1
    print(count)
</code></pre>
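The same idea can also be written with the `csv` module, counting every column in a single pass instead of re-scanning the lines once per column. A sketch (the file is replaced here by an in-memory sample so it is self-contained):

```python
import csv
import io

# stand-in for open('file.csv')
data = io.StringIO("1,0,1\n1,1,0\n1,1,1\n")

counts = None
for row in csv.reader(data):
    if counts is None:
        counts = [0] * len(row)   # one counter per column
    for i, cell in enumerate(row):
        if cell == "1":
            counts[i] += 1

print(counts)  # [3, 2, 2]
```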
python|list|python-3.x|csv|repeat
1
659
55,474,353
Python syntax for an unless statement
<p>My request is simple but I do not know how to proceed:</p> <p>I would like to translate an unless statement in python as followed:</p> <pre><code>taken_asks -= 1 unless taken_asks == 0 </code></pre> <p>This is just one line of code which is part of a very big function. Any idea? </p> <p>Thank you in advance !</p>
<p><code>taken_asks -= (1 if taken_asks != 0 else 0)</code></p>
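The most literal translation of Ruby's `unless` statement modifier is a guarded `if`; the sketch below behaves the same as the one-liner above:

```python
def decrement_unless_zero(taken_asks):
    # Ruby: taken_asks -= 1 unless taken_asks == 0
    if taken_asks != 0:
        taken_asks -= 1
    return taken_asks

print(decrement_unless_zero(3))  # 2
print(decrement_unless_zero(0))  # 0 (the guard leaves zero untouched)
```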
python
6
660
52,510,176
How to make data persistent when setData method is used
<p>The code below creates a single <code>QComboBox</code>. The combo's <code>QStandardItem</code>s are set with <code>data_obj</code> using <code>setData</code> method. Changing <code>combo</code>'s current index triggers <code>run</code> method which iterates <code>combo</code>' and prints the <code>data_obj</code> which turns to Python dictionary. How to make the <code>data_obj</code> persistent?</p> <p><a href="https://i.stack.imgur.com/0pmZ7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0pmZ7.png" alt="enter image description here"></a></p> <pre><code>app = QApplication(list()) class DataObj(dict): def __init__(self, **kwargs): super(DataObj, self).__init__(**kwargs) class Dialog(QDialog): def __init__(self, parent=None): super(Dialog, self).__init__(parent) self.setLayout(QVBoxLayout()) self.combo = QComboBox(self) for i in range(5): combo_item = QStandardItem('item_%s' % i) data_obj = DataObj(foo=i) print '..out: %s' % type(data_obj) combo_item.setData(data_obj, Qt.UserRole + 1) self.combo.model().appendRow(combo_item) self.combo.currentIndexChanged.connect(self.run) self.layout().addWidget(self.combo) def run(self): for i in range(self.combo.count()): item = self.combo.model().item(i, 0) data_obj = item.data(Qt.UserRole + 1) print ' ...in: %s' % type(data_obj) if __name__ == '__main__': gui = Dialog() gui.resize(400, 100) gui.show() qApp.exec_() </code></pre>
<p>Below is the working solution to this problem:</p> <p><a href="https://i.stack.imgur.com/V8tGy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/V8tGy.png" alt="enter image description here"></a></p> <pre><code>app = QApplication(list()) class DataObj(dict): def __init__(self, **kwargs): super(DataObj, self).__init__(**kwargs) class Object(object): def __init__(self, data_obj): super(Object, self).__init__() self.data_obj = data_obj class Dialog(QDialog): def __init__(self, parent=None): super(Dialog, self).__init__(parent) self.setLayout(QVBoxLayout()) self.combo = QComboBox(self) for i in range(5): combo_item = QStandardItem('item_%s' % i) obj = Object(data_obj=DataObj(foo=i)) print '..out: %s' % type(obj.data_obj) combo_item.setData(obj, Qt.UserRole + 1) self.combo.model().appendRow(combo_item) self.combo.currentIndexChanged.connect(self.run) self.layout().addWidget(self.combo) def run(self): for i in range(self.combo.count()): item = self.combo.model().item(i, 0) obj = item.data(Qt.UserRole + 1) print ' ...in: %s' % type(obj.data_obj) if __name__ == '__main__': gui = Dialog() gui.resize(400, 100) gui.show() qApp.exec_() </code></pre>
python|pyside|qcombobox|qstandarditem
0
661
47,933,019
How to properly sample truncated distributions?
<p>I am trying to learn how to sample truncated distributions. To begin with I decided to try a simple example I found here <a href="https://darrenjw.wordpress.com/2012/06/04/metropolis-hastings-mcmc-when-the-proposal-and-target-have-differing-support/" rel="nofollow noreferrer">example</a></p> <p>I didn't really understand the division by the CDF, therefore I decided to tweak the algorithm a bit. Being sampled is an exponential distribution for values <code>x&gt;0</code>. Here is an example python code:</p> <pre><code># Sample exponential distribution for the case x&gt;0
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

def pdf(x):
    return x*np.exp(-x)

xvec=np.zeros(1000000)
x=1.
for i in range(1000000):
    a=x+np.random.normal()
    xs=x
    if a &gt; 0. :
        xs=a
    A=pdf(xs)/pdf(x)
    if np.random.uniform()&lt;A :
        x=xs
    xvec[i]=x

x=np.linspace(0,15,1000)
plt.plot(x,pdf(x))
plt.hist([x for x in xvec if x != 0],bins=150,normed=True)
plt.show()
</code></pre> <p>And the output is: <a href="https://i.stack.imgur.com/ZWzzH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZWzzH.png" alt="Correctly sampled pdf with the condition a &gt; 0."></a></p> <p>The code above seems to work fine only when using the condition <code>if a &gt; 0. :</code>, i.e. positive <code>x</code>; choosing another condition (e.g. <code>if a &gt; 0.5 :</code>) produces wrong results.</p> <p><a href="https://i.stack.imgur.com/NskVh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NskVh.png" alt="Wrong sampling with the condition a&gt;0.5"></a></p> <p>Since my final goal was to sample a 2D-Gaussian - pdf on a truncated interval I tried extending the simple example using the exponential distribution (see the code below). Unfortunately, since the simple case didn't work, I assume that the code given below would yield wrong results.</p> <p>I assume that all this can be done using the advanced tools of python. 
However, since my primary idea was to understand the principle behind, I would greatly appreciate your help to understand my mistake. Thank you for your help. </p> <p><strong>EDIT:</strong></p> <pre><code># code updated according to the answer of CrazyIvan
from scipy.stats import multivariate_normal

RANGE=100000

a=2.06072E-02
b=1.10011E+00

a_range=[0.001,0.5]
b_range=[0.01, 2.5]

cov=[[3.1313994E-05, 1.8013737E-03],[ 1.8013737E-03, 1.0421529E-01]]

x=a
y=b
j=0

for i in range(RANGE):
    a_t,b_t=np.random.multivariate_normal([a,b],cov)
    # accept if within bounds - all that is neded to truncate
    if a_range[0]&lt;a_t and a_t&lt;a_range[1] and b_range[0]&lt;b_t and b_t&lt;b_range[1]:
        print(a_t,b_t)
</code></pre> <p><strong>EDIT:</strong></p> <p>I changed the code by norming the analytic pdf according to <a href="https://en.wikipedia.org/wiki/Truncated_distribution" rel="nofollow noreferrer">this scheme</a>, and according to the answers given by @Crazy Ivan and @Leandro Caniglia, for the case where the bottom of the pdf is removed. That is dividing by (1-CDF(0.5)) since my accept condition is <code>x&gt;0.5</code>. This seems again to show some discrepancies. Again the mystery prevails ..</p> <p><a href="https://i.stack.imgur.com/9D4qE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9D4qE.png" alt="enter image description here"></a></p> <pre><code>import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

def pdf(x):
    return x*np.exp(-x)

# included the corresponding cdf
def cdf(x):
    return 1. -np.exp(-x)-x*np.exp(-x)

xvec=np.zeros(1000000)
x=1.
for i in range(1000000):
    a=x+np.random.normal()
    xs=x
    if a &gt; 0.5 :
        xs=a
    A=pdf(xs)/pdf(x)
    if np.random.uniform()&lt;A :
        x=xs
    xvec[i]=x

x=np.linspace(0,15,1000)
# new part norm the analytic pdf to fix the area
plt.plot(x,pdf(x)/(1.-cdf(0.5)))
plt.hist([x for x in xvec if x != 0],bins=200,normed=True)
plt.savefig("test_exp.png")
plt.show()
</code></pre> <p>It seems that this can be cured by choosing a larger shift size</p> <pre><code>shift=15.
a=x+np.random.normal()*shift
</code></pre> <p>which is in general an issue of Metropolis-Hastings. See the graph below: <a href="https://i.stack.imgur.com/UWo1E.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UWo1E.png" alt="using the step size a=x+np.random.normal()*15. "></a></p> <p>I also checked <code>shift=150</code> <a href="https://i.stack.imgur.com/DgnUZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DgnUZ.png" alt="shift=150"></a></p> <p>Bottom line is that changing the shift size definitely improves the convergence. The mystery is why, since the Gaussian is unbounded.</p>
<p>You say you want to learn the basic idea of sampling a truncated distribution, but your source is a blog post about <a href="https://en.wikipedia.org/wiki/Metropolis%E2%80%93Hastings_algorithm" rel="noreferrer">Metropolis–Hastings algorithm</a>? Do you actually need this "method for obtaining a sequence of random samples from a probability distribution <em>for which direct sampling is difficult</em>"? Taking this as your starting point is like learning English by reading Shakespeare. </p> <h3>Truncated normal</h3> <p>For truncated normal, basic <strong>rejection sampling</strong> is all you need: generate samples for original distribution, reject those outside of bounds. As Leandro Caniglia noted, you should not expect truncated distribution to have the same PDF except on a shorter interval &mdash; this is plain impossible because the area under the graph of a PDF is always 1. If you cut off stuff from sides, there has to be more in the middle; the PDF gets rescaled. </p> <p>It's quite inefficient to gather samples one by one, when you need 100000. I would grab 100000 normal samples at once, accept only those that fit; then repeat until I have enough. Example of sampling truncated normal between amin and amax:</p> <pre><code>import numpy as np n_samples = 100000 amin, amax = -1, 2 samples = np.zeros((0,)) # empty for now while samples.shape[0] &lt; n_samples: s = np.random.normal(0, 1, size=(n_samples,)) accepted = s[(s &gt;= amin) &amp; (s &lt;= amax)] samples = np.concatenate((samples, accepted), axis=0) samples = samples[:n_samples] # we probably got more than needed, so discard extra ones </code></pre> <p>And here is the comparison with the PDF curve, <strong>rescaled</strong> by division by <code>cdf(amax) - cdf(amin)</code> as explained above. 
</p> <pre><code>from scipy.stats import norm _ = plt.hist(samples, bins=50, density=True) t = np.linspace(-2, 3, 500) plt.plot(t, norm.pdf(t)/(norm.cdf(amax) - norm.cdf(amin)), 'r') plt.show() </code></pre> <p><a href="https://i.stack.imgur.com/djBzh.png" rel="noreferrer"><img src="https://i.stack.imgur.com/djBzh.png" alt="histogram"></a></p> <h3>Truncated multivariate normal</h3> <p>Now we want to keep the first coordinate between amin and amax, and the second between bmin and bmax. Same story, except there will be a 2-column array and the comparison with bounds is done in a relatively sneaky way:</p> <pre><code>(np.min(s - [amin, bmin], axis=1) &gt;= 0) &amp; (np.max(s - [amax, bmax], axis=1) &lt;= 0) </code></pre> <p>This means: subtract amin, bmin from each row and keep only the rows where both results are nonnegative (meaning we had a >= amin and b >= bmin). Also do a similar thing with amax, bmax. Accept only the rows that meet both criteria. </p> <pre><code>n_samples = 10 amin, amax = -1, 2 bmin, bmax = 0.2, 2.4 mean = [0.3, 0.5] cov = [[2, 1.1], [1.1, 2]] samples = np.zeros((0, 2)) # 2 columns now while samples.shape[0] &lt; n_samples: s = np.random.multivariate_normal(mean, cov, size=(n_samples,)) accepted = s[(np.min(s - [amin, bmin], axis=1) &gt;= 0) &amp; (np.max(s - [amax, bmax], axis=1) &lt;= 0)] samples = np.concatenate((samples, accepted), axis=0) samples = samples[:n_samples, :] </code></pre> <p>Not going to plot, but here are some values: naturally, within bounds.</p> <pre><code>array([[ 0.43150033, 1.55775629], [ 0.62339265, 1.63506963], [-0.6723598 , 1.58053835], [-0.53347361, 0.53513105], [ 1.70524439, 2.08226558], [ 0.37474842, 0.2512812 ], [-0.40986396, 0.58783193], [ 0.65967087, 0.59755193], [ 0.33383214, 2.37651975], [ 1.7513789 , 1.24469918]]) </code></pre>
python|numpy|random|probability|mcmc
9
662
37,541,311
Unique ID in html for generating Buttons
<p>Sorry if the title is misleading.</p> <p>I'm having the following problem. I am creating multiple rows in HTML using Genshi. For each row I have a button at the end of the row for delete purposes.</p> <p>The code looks like this:</p> <pre><code>&lt;form action="/deleteAusleihe" method="post"&gt;
  &lt;table&gt;
    &lt;tr&gt;
      &lt;th&gt;ID&lt;/th&gt;
      &lt;th&gt;Person&lt;/th&gt;
      &lt;th&gt;Buch&lt;/th&gt;
      &lt;th&gt;&lt;/th&gt;
    &lt;/tr&gt;
    &lt;tr py:for="v in verleihen"&gt;
      &lt;input type = "hidden" value="v.id" name="toDelete"/&gt;
      &lt;td py:content="v.id"&gt;Vorname der Person&lt;/td&gt;
      &lt;td py:content="v.kundeID"&gt;Name der Person&lt;/td&gt;
      &lt;td py:content="v.buchID"&gt;Straße der Person&lt;/td&gt;
      &lt;td&gt;
        &lt;input type="submit" name="submit" value="Löschen"/&gt;
      &lt;/td&gt;
      &lt;br/&gt;
    &lt;/tr&gt;
  &lt;/table&gt;
&lt;/form&gt;
</code></pre> <p>The input type ="hidden" should store the value of each id so I am able to identify the row later on.</p> <p>When I try to delete now, and let's assume I have 2 rows filled, I get 2 id's as a parameter, which is logical to me, but I don't know how to solve it.</p> <p>The deleteAusleihe function looks like this:</p> <pre><code>@expose()
def deleteAusleihe(self,toDelete,submit):
    Verleih1 = DBSession.query(Verleih).filter_by(id=toDelete)
    for v in Verleih1:
        DBSession.delete(v)
        DBSession.flush()
    transaction.commit()
    redirect("/Verleih")
</code></pre> <p>Thanks in advance for your help!</p>
<p>The issue is that all the hidden inputs inside the <code>&lt;form&gt;</code> element get submitted at once.</p> <p>There are various ways you could solve this. Probably the easiest would be to move the form tag inside the loop, so that there are multiple forms and each one only wraps a single input and button.</p>
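A sketch of what the moved form tag could look like (one small form per row, so only that row's hidden id is submitted). Note that `value="${v.id}"` is an assumption on my part: Genshi's `${...}` expression interpolation, since the original `value="v.id"` would submit the literal string `v.id`:

```html
<!-- one form per row: submitting it sends only that row's id -->
<table>
  <tr>
    <th>ID</th><th>Person</th><th>Buch</th><th></th>
  </tr>
  <tr py:for="v in verleihen">
    <td py:content="v.id">ID</td>
    <td py:content="v.kundeID">Person</td>
    <td py:content="v.buchID">Buch</td>
    <td>
      <form action="/deleteAusleihe" method="post">
        <input type="hidden" name="toDelete" value="${v.id}"/>
        <input type="submit" name="submit" value="Löschen"/>
      </form>
    </td>
  </tr>
</table>
```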
python|html|python-2.7|turbogears2|genshi
2
663
34,291,661
Setuptools pip failed with error code 1 when installing Hue browser for Apache Hadoop
<p>I'm trying to install Hue browser for Apache Hadoop on my mac. So I retrieve the git folder :</p> <pre><code>git clone https://github.com/cloudera/hue.git </code></pre> <p>I followed this tutorial <a href="http://blog.cloudera.com/blog/2015/04/how-to-install-hue-on-a-mac/" rel="nofollow">here</a> </p> <p>But when doing <code>make apps</code> I end up with the following error :</p> <pre><code>python2.7 /Users/leo/Downloads/hue-3.8.1/tools/virtual-bootstrap/virtual-bootstrap.py \ -qq --no-site-packages /Users/leo/Downloads/hue-3.8.1/build/env Traceback (most recent call last): File "/Users/leo/Downloads/hue-3.8.1/tools/virtual-bootstrap/virtual-bootstrap.py", line 2355, in &lt;module&gt; main() File "/Users/leo/Downloads/hue-3.8.1/tools/virtual-bootstrap/virtual-bootstrap.py", line 827, in main symlink=options.symlink) File "/Users/leo/Downloads/hue-3.8.1/tools/virtual-bootstrap/virtual-bootstrap.py", line 995, in create_environment install_wheel(to_install, py_executable, search_dirs) File "/Users/leo/Downloads/hue-3.8.1/tools/virtual-bootstrap/virtual-bootstrap.py", line 963, in install_wheel 'PIP_NO_INDEX': '1' File "/Users/leo/Downloads/hue-3.8.1/tools/virtual-bootstrap/virtual-bootstrap.py", line 905, in call_subprocess % (cmd_desc, proc.returncode)) OSError: Command /Users/leo/Downloads...ld/env/bin/python2.7 -c "import sys, pip; sys...d\"] + sys.argv[1:]))" setuptools pip failed with error code 1 </code></pre> <p>I don't understand what the problem is. Thanks for any help on this.</p>
<p>try <code>sudo make apps</code></p> <p>it works for me on Sierra.</p>
python|pip|hue
0
664
66,289,384
Create Panorama from Non-Sequential Video Frames
<p>There is a <a href="https://stackoverflow.com/questions/23856786/video-to-panorama-image">similar question</a> (not that detailed and no exact solution). I want to create <strong>a single panorama image</strong> from video frames. And for that, I need to get <strong>minimum non-sequential video frames</strong> at first. A demo video file is uploaded <a href="https://drive.google.com/file/d/174gKRNlPkKu9mEEFSPrsKJg58x9O3g5s/view?usp=sharing" rel="nofollow noreferrer">here</a>.</p> <h3>What I Need</h3> <p>A mechanism that can produce not-only non-sequential video frames but also in such a way that can be used to create a panorama image. A sample is given below. As we can see to create a panorama image, all the input samples must contain <strong>minimum overlap regions</strong> to each other otherwise it can not be done.</p> <p><a href="https://i.stack.imgur.com/M9XI3.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/M9XI3.jpg" alt="enter image description here" /></a></p> <p>So, if I have the following video frame's order</p> <pre><code>A, A, A, B, B, B, B, C, C, A, A, C, C, C, B, B, B ... </code></pre> <p>To create a panorama image, I need to get something as follows - reduced sequential frames (or adjacent frames) but with minimum overlapping.</p> <pre><code> [overlap] [overlap] [overlap] [overlap] [overlap] A, A,B, B,C, C,A, A,C, C,B, ... </code></pre> <h3>What I've Tried and Stuck</h3> <p>A demo video clip is given <a href="https://drive.google.com/file/d/174gKRNlPkKu9mEEFSPrsKJg58x9O3g5s/view" rel="nofollow noreferrer">above</a>. 
To get non-sequential video frames, I primarily rely on the <code>ffmpeg</code> software.</p> <p><strong>Trial 1</strong> <a href="https://stackoverflow.com/a/37089629/9215780">Ref.</a></p> <pre><code>ffmpeg -i check.mp4 -vf mpdecimate,setpts=N/FRAME_RATE/TB -map 0:v out.mp4
</code></pre> <p>After that, on the <code>out.mp4</code>, I sliced the video into frames using <code>opencv</code>:</p> <pre><code>import cv2, os
from pathlib import Path

vframe_dir = Path(&quot;vid_frames/&quot;)
vframe_dir.mkdir(parents=True, exist_ok=True)

vidcap = cv2.VideoCapture('out.mp4')
success,image = vidcap.read()
count = 0

while success:
    cv2.imwrite(f&quot;{vframe_dir}/frame%d.jpg&quot; % count, image)
    success,image = vidcap.read()
    count += 1
</code></pre> <p>Next, I rotated these saved images horizontally (as my video is a vertical view).</p> <pre><code>vframe_dir = Path(&quot;out/&quot;)
vframe_dir.mkdir(parents=True, exist_ok=True)

vframe_dir_rot = Path(&quot;vframe_dir_rot/&quot;)
vframe_dir_rot.mkdir(parents=True, exist_ok=True)

for i, each_img in tqdm(enumerate(os.listdir(vframe_dir))):
    image = cv2.imread(f&quot;{vframe_dir}/{each_img}&quot;)[:, :, ::-1] # Read (with BGRtoRGB)
    image = cv2.rotate(image,cv2.cv2.ROTATE_180)
    image = cv2.rotate(image,cv2.ROTATE_90_CLOCKWISE)
    cv2.imwrite(f&quot;{vframe_dir_rot}/{each_img}&quot;, image[:, :, ::-1]) # Save (with RGBtoBGR)
</code></pre> <p>The output is ok for this method (with <code>ffmpeg</code>) but <strong>inappropriate</strong> for creating the panorama image, because it didn't give overlapping frames sequentially in the results.
Thus a panorama can't be generated.</p> <img src="https://i.stack.imgur.com/Sgcki.png" width="450"/> <p><strong>Trial 2</strong> - <a href="https://stackoverflow.com/a/52062421/9215780">Ref</a></p> <pre><code>ffmpeg -i check.mp4 -vf decimate=cycle=2,setpts=N/FRAME_RATE/TB -map 0:v out.mp4
</code></pre> <p>didn't work at all.</p> <p><strong>Trial 3</strong></p> <pre><code>ffmpeg -i check.mp4 -ss 0 -qscale 0 -f image2 -r 1 out/images%5d.png
</code></pre> <p>No luck either. However, I've found this last <code>ffmpeg</code> command was the closest so far, but it wasn't enough. Compared to the others, it gave me a small set of non-duplicate frames (good), but the bad thing is it still gave frames I <code>do not need</code>, so I manually picked some desired frames, and then the <code>opencv</code> stitching algorithm works. So, after picking some frames and rotating (as mentioned before):</p> <pre><code>stitcher = cv2.Stitcher.create()
status, pano = stitcher.stitch(images) # images: manually picked video frames -_-
</code></pre> <img src="https://i.stack.imgur.com/n6pOl.png" width="600"/> <h2>Update</h2> <p>After some trials, I am kinda adopting the non-programming solution. But would love to see <strong>an efficient programmatic approach</strong>.</p> <p>On the given demo video, I used <code>Adobe</code> products (<code>premiere pro</code> and <code>photoshop</code>) to do this task, <a href="https://www.youtube.com/watch?v=71O1M2bAj4c" rel="nofollow noreferrer">video instruction</a>. But the issue was, I took all video frames at first (without dropping any frames, which adds computational cost later) via <code>premiere</code> and used <code>photoshop</code> to stitch them (according to the youtube video instruction). It was too heavy for these editor tools and didn't look like a better way, but the output was better than anything until now.
Though I took only a few (400+) of the 1200+ video frames.</p> <p><a href="https://i.stack.imgur.com/CTiAM.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CTiAM.jpg" alt="enter image description here" /></a></p> <hr /> <p>Here are some big challenges. The original video clips have some <strong>conditions</strong> though, and they are serious ones. Unlike the given demo video clips:</p> <ul> <li>It's not straightforward, i.e. camera shaking</li> <li>Lighting conditions, i.e. a different visual look at the same spot</li> <li>Camera flickering or banding</li> </ul> <p>This scenario is not included in the given demo video. And this brings additional and heavy challenges to creating panorama images from such videos. Even with the non-programming way (using <code>adobe</code> tools) I couldn't make it any good.</p> <hr /> <p>However, for now, all I'm interested in is getting a panorama image from the given demo video, which is without the above conditions. But I would love to know any comment or suggestion on that.</p>
<p>My approach to decimating the video is to pretty much do what a stitching program would do to try and stitch two frames together. I look for matching feature points, and I only save frames once the number of matched points dips below what I think is an acceptable level.</p> <p>To stitch, I just used OpenCV's built-in stitcher. If you want to avoid OpenCV's solution, I can redo the code to go without it (though I won't be able to replicate all of the nice cleaning steps that opencv does). The decimate program is honestly already most of the way there towards doing a generic stitch.</p> <p>I got the video from here: <a href="https://www.videezy.com/nature/48905-rain-forest-pan-shot" rel="nofollow noreferrer">https://www.videezy.com/nature/48905-rain-forest-pan-shot</a></p> <p>And this is the panorama (decimated to 7 frames at cutoff = 50)</p> <p><a href="https://i.stack.imgur.com/ctVRp.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ctVRp.jpg" alt="enter image description here" /></a></p> <p>This is a pretty ideal case though, so this strategy might fail for a more difficult video like the one you described. If you can post that video then we can test out this solution on the actual use case and modify it if need be.</p> <p>I like this program. And these panning shots are cool.
Here's another one from this video: <a href="https://www.videezy.com/abstract/41671-pan-of-bryce-canyon-in-utah-4k" rel="nofollow noreferrer">https://www.videezy.com/abstract/41671-pan-of-bryce-canyon-in-utah-4k</a></p> <p>(decimated to 4 frames at cutoff = 50)</p> <p><a href="https://i.stack.imgur.com/L9mC5.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/L9mC5.jpg" alt="enter image description here" /></a></p> <p><a href="https://www.videezy.com/nature/11664-panning-shot-of-red-peaks-and-green-valleys-in-4k" rel="nofollow noreferrer">https://www.videezy.com/nature/11664-panning-shot-of-red-peaks-and-green-valleys-in-4k</a></p> <p>(decimated to 4 frames at cutoff = 150)</p> <p><a href="https://i.stack.imgur.com/li8ay.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/li8ay.jpg" alt="enter image description here" /></a></p> <p>Decimate</p> <pre class="lang-py prettyprint-override"><code>import cv2
import numpy as np
import os
import shutil

# rescale the images
def rescale(img):
    scale = 0.5;
    h,w = img.shape[:2];
    h = int(h*scale);
    w = int(w*scale);
    return cv2.resize(img, (w,h));

# delete and create directory
folder = &quot;frames/&quot;;
if os.path.isdir(folder):
    shutil.rmtree(folder);
os.mkdir(folder);

# open vidcap
cap = cv2.VideoCapture(&quot;PNG_7501.mp4&quot;); # your video here
counter = 0;

# make an orb feature detector and a brute force matcher
orb = cv2.ORB_create();
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=False);

# store the first frame
_, last = cap.read();
last = rescale(last);
cv2.imwrite(folder + str(counter).zfill(5) + &quot;.png&quot;, last);

# get the first frame's stuff
kp1, des1 = orb.detectAndCompute(last, None);

# cutoff, the minimum number of keypoints
cutoff = 50; # Note: this should be tailored to your video, this is high here since a lot of this video looks like

# count number of frames
prev = None;
while True:
    # get frame
    ret, frame = cap.read();
    if not ret:
        break;

    # resize
    frame = rescale(frame);

    # count keypoints
    kp2, des2 = orb.detectAndCompute(frame, None);

    # match
    matches = bf.knnMatch(des1, des2, k=2);

    # lowe's ratio
    good = []
    for m,n in matches:
        if m.distance &lt; 0.5*n.distance:
            good.append(m);

    # check against cutoff
    print(len(good));
    if len(good) &lt; cutoff:
        # swap and save
        counter += 1;
        last = frame;
        kp1 = kp2;
        des1 = des2;
        cv2.imwrite(folder + str(counter).zfill(5) + &quot;.png&quot;, last);
        print(&quot;New Frame: &quot; + str(counter));

    # show
    cv2.imshow(&quot;Frame&quot;, frame);
    cv2.waitKey(1);
    prev = frame;

# also save last frame
counter += 1;
cv2.imwrite(folder + str(counter).zfill(5) + &quot;.png&quot;, prev);

# check number of saved frames
print(&quot;Counter: &quot; + str(counter));
</code></pre> <p>Stitcher</p> <pre class="lang-py prettyprint-override"><code>import cv2
import numpy as np
import os

# target folder
folder = &quot;frames/&quot;;

# load images
filenames = os.listdir(folder);
images = [];
for file in filenames:
    # get image
    img = cv2.imread(folder + file);

    # save
    images.append(img);

# use built in stitcher
stitcher = cv2.createStitcher();
(status, stitched) = stitcher.stitch(images);
cv2.imshow(&quot;Stitched&quot;, stitched);
cv2.waitKey(0);
</code></pre>
python|opencv|video|ffmpeg|duplicates
1
665
7,275,710
mutagen: how to detect and embed album art in mp3, flac and mp4
<p>I'd like to be able to detect whether an audio file has embedded album art and, if not, add album art to that file. I'm using mutagen.</p> <p>1) Detecting album art. Is there a simpler method than this pseudo code:</p> <pre><code>from mutagen import File

audio = File('music.ext')
test each of audio.pictures, audio['covr'] and audio['APIC:']
if doesn't raise an exception and isn't None, we found album art
</code></pre> <p>2) I found this for embedding album art into an mp3 file: <a href="https://stackoverflow.com/questions/409949/how-do-you-embed-album-art-into-an-mp3-using-python">How do you embed album art into an MP3 using Python?</a></p> <p>How do I embed album art into other formats?</p> <p>EDIT: embed mp4</p> <pre><code>audio = MP4(filename)
data = open(albumart, 'rb').read()

covr = []
if albumart.endswith('png'):
    covr.append(MP4Cover(data, MP4Cover.FORMAT_PNG))
else:
    covr.append(MP4Cover(data, MP4Cover.FORMAT_JPEG))

audio.tags['covr'] = covr
audio.save()
</code></pre>
<p>Embed flac (note the <code>image.mime = mime</code> line - the original computed <code>mime</code> but never assigned it to the picture):</p> <pre><code>from mutagen import File
from mutagen.flac import Picture, FLAC

def add_flac_cover(filename, albumart):
    audio = File(filename)

    image = Picture()
    image.type = 3
    if albumart.endswith('png'):
        mime = 'image/png'
    else:
        mime = 'image/jpeg'
    image.mime = mime
    image.desc = 'front cover'
    with open(albumart, 'rb') as f: # better than open(albumart, 'rb').read() ?
        image.data = f.read()

    audio.add_picture(image)
    audio.save()
</code></pre> <p>For completeness, detect picture</p> <pre><code>def pict_test(audio):
    try:
        x = audio.pictures
        if x:
            return True
    except Exception:
        pass
    if 'covr' in audio or 'APIC:' in audio:
        return True
    return False
</code></pre>
python|metadata|albumart|mutagen
4
666
7,567,318
How to make a list of n numbers in Python and randomly select any number?
<p>I have taken a count of something and it came out to N.</p> <p>Now I would like to have a list containing the numbers 1 to N.</p> <p>Example:</p> <p>N = 5</p> <p>then, <code>count_list = [1, 2, 3, 4, 5]</code></p> <p>Also, once I have created the list, I would like to randomly select a number from that list and use that number.</p> <p>After that I would like to select another number from the remaining numbers of the list (N-1) and then use that also.</p> <p>This goes on until the list is empty.</p>
<p>You can create the list of the numbers 1 to N with something like this:</p> <pre><code>mylist = list(xrange(1, N + 1))
</code></pre> <p>Then you can use the <code>random.choice</code> function to select your items:</p> <pre><code>import random
...
random.choice(mylist)
</code></pre> <p>As Asim Ihsan correctly stated, my answer did not address the full problem of the OP. To remove the values from the list, simply <code>list.remove()</code> can be called:</p> <pre><code>import random
...
value = random.choice(mylist)
mylist.remove(value)
</code></pre> <p>As takataka pointed out, the <code>xrange</code> builtin function was renamed to <code>range</code> in Python 3.</p>
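For the "select until the list is empty" part, the same effect can also be had by shuffling once up front and popping values off the end - a sketch (the function name here is made up for illustration):

```python
import random

def draw_all(n):
    # Build [1, 2, ..., n], shuffle once, then pop until the list is empty.
    # Every number comes out exactly once, in a random order.
    count_list = list(range(1, n + 1))
    random.shuffle(count_list)
    drawn = []
    while count_list:
        value = count_list.pop()  # pop() from the end is O(1)
        drawn.append(value)
    return drawn
```

Popping from the end avoids the O(n) scan that `list.remove()` performs on every draw.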
python|list|random
41
667
16,574,746
How to get id of the tweet posted in tweepy
<p>I want to check if a certain tweet is a reply to the tweet that I sent. Here is how I think I can do it:</p> <p>Step1: Post a tweet and store id of posted tweet</p> <p>Step2: Listen to my handle and collect all the tweets that have my handle in it</p> <p>Step3: Use <code>tweet.in_reply_to_status_id</code> to see if tweet is reply to the stored id</p> <p>In this logic, I am not sure how to get the status id of the tweet that I am posting in step 1. Is there a way I can get it? If not, is there another way in which I can solve this problem?</p>
<p>What one could do is get the last n tweets from a user, and then get the tweet.id of the relevant tweet. This can be done like so:</p> <p><code>latestTweets = api.user_timeline(screen_name = 'user', count = n, include_rts = False)</code></p> <p>I, however, doubt that it is the most efficient way.</p>
python|twitter|tweepy
0
668
31,681,194
Using keyboard actions to open files outside of Python
<p>Brand new to programming....</p> <p>I am trying to set up a program that can be controlled using keyboard shortcuts. I want the keyboard shortcuts to be linked to specific excel files. I have figured out how to open the excel files by themselves but now I want to attach the shortcuts to them. This is all I have:</p> <pre><code>import os
os.system('start C:\mold\MoldFlowMaster_exce.xlsx')
</code></pre>
<p>This makes a Tkinter window which you can type in to. If the 'shortcut' (in this code, 'a') is pressed, you can cause something to happen such as opening your Excel file. </p> <pre><code>import tkinter as tk
import os

def onKeyPress(event):
    if event.char == 'a':
        os.system('start C:\mold\MoldFlowMaster_exce.xlsx')

root = tk.Tk()
root.geometry('300x200')
text = tk.Text(root, background='black', foreground='white', font=('Comic Sans MS', 12))
text.pack()
root.bind('&lt;KeyPress&gt;', onKeyPress)
root.mainloop()
</code></pre>
python|excel
1
669
40,740,525
Pycharm recognizes kwargs in print as wrong (python3)
<p>Whenever I do something like this:</p> <pre><code>print("Hello World", flush=True, file=sys.stderr)
</code></pre> <p>PyCharm complains about</p> <pre><code>End of statement expected
Statement expected, found Py:DEDENT
Statement expected, found Py:RPAR
</code></pre> <p>Because of that "Syntax Error" all definitions after that are buggy and displayed as wrong as well.</p> <p>That confuses me a lot and features (e.g. autocompletion) are not working anymore because of this.</p> <p>Is there any setting I did not set correctly? Is this a bug?</p>
<p>My settings for the python interpreter were wrong. To change it, I went to <code>Settings-&gt;Project-&gt;Project Interpreter</code>. Everything is working fine now!</p> <p>Thanks for the comments which leads to the solution!</p>
python-3.x|pycharm
4
670
10,237,608
Python MYSQLDB Insert Syntax Error
<p>I'm trying to insert data into my database and I get a MYSQL syntax error using this code:</p> <pre><code>import MySQLdb

db=MySQLdb.connect(host="localhost",user="root",passwd="",db="database")
cursor = db.cursor()
sql = "INSERT INTO table1('col1','col2') values ('val1','val2');"
cursor.execute(sql)
db.commit()
</code></pre>
<p>No quotes around the column names.</p> <pre><code>INSERT INTO table1(col1, col2) VALUES ('val1', 'val2'); </code></pre> <p>You could use backticks around the column names, but not single quotes.</p>
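To avoid this class of quoting error altogether, one can build the statement with backticked identifiers and leave the values as `%s` placeholders for the driver (a sketch - the helper name is made up for illustration, and `cursor.execute(sql, values)` does the actual escaping):

```python
def insert_sql(table, columns):
    # Backticks quote identifiers (table and column names); the values are
    # left as %s placeholders so cursor.execute(sql, values) can escape them.
    cols = ", ".join("`%s`" % c for c in columns)
    placeholders = ", ".join(["%s"] * len(columns))
    return "INSERT INTO `%s` (%s) VALUES (%s)" % (table, cols, placeholders)

sql = insert_sql("table1", ["col1", "col2"])
# later: cursor.execute(sql, ("val1", "val2"))
```

Passing the values separately also protects against SQL injection, unlike interpolating them into the string.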
python|mysql
2
671
9,667,462
Backspace behavior in Python statement, what is correct behavior of printing a '\b' in code?
<blockquote> <p><strong>Possible Duplicate:</strong><br> <a href="https://stackoverflow.com/questions/2856344/backspace-character-weirdness">backspace character weirdness</a> </p> </blockquote> <p>I have noticed that:</p> <ol> <li>If I print only backspaces, i.e. a sequence of \b in Python, then the output is completely blank.</li> <li>If I print characters followed by backspaces, i.e. 'sssss\b\b\b\b\b', then it will print the multiple 's' characters.</li> </ol> <p>But if I print something like 'ssss\b\b\b\baaaa', then the backspace, \b, will actually act like I am typing a backspace and delete the 's' characters.</p> <p>I am using Python 2.6 on Windows XP. Is this expected behavior? If I try to get the length of the backspace character, it is printed as 1.</p> <p>Here is my test code -</p> <pre><code>&gt;&gt;&gt; print 'ssss\b\b\b\b\baaaaa'
aaaaa
&gt;&gt;&gt; print 'ssssssss\b\b\b\b\baaaaa'
sssaaaaa
&gt;&gt;&gt; print 'ssssssss\b\b\b\b\b'
ssssssss
&gt;&gt;&gt; print 'ssssssss\b\b\b\b\baaaaa'
sssaaaaa
&gt;&gt;&gt; print '\b\b\b\b\b'

&gt;&gt;&gt;
</code></pre> <p>My question is: what is the expected behavior when I print '\b' in Python, and why does the deletion work in only a particular case?</p>
<p>Expanded answer: The backspace doesn't delete anything, it moves the cursor to the left and it gets covered up by what you write afterwards. If you were writing to a device that can display overstriking (such as an old-fashioned "hard copy" terminal, which works like a typewriter), you'd actually see the new character on top of the old one. That's the <em>real</em> reason backspace has these semantics.</p> <p>On the unix command line, the shell can be set to interpret backspace as meaning "erase"-- unless it's set to only treat delete this way. But that's up to the program reading your input.</p>
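The "moves the cursor left and gets covered up" behavior is easy to simulate; this small sketch models what the terminal does with `\b` (it is a model of the terminal, not of anything Python does), and it reproduces the outputs from the question:

```python
def render(s):
    # Model a terminal: '\b' moves the cursor one column left (never past
    # column 0); any other character overwrites whatever is under the cursor.
    out = []
    pos = 0
    for ch in s:
        if ch == '\b':
            pos = max(0, pos - 1)
        else:
            if pos < len(out):
                out[pos] = ch
            else:
                out.append(ch)
            pos += 1
    return ''.join(out)
```

For example, `render('ssssssss\b\b\b\b\baaaaa')` gives `'sssaaaaa'`, exactly as in the question's session, and `len('\b')` is 1 because the backspace is an ordinary single character in the string.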
python
13
672
26,290,298
Inclusive range of list Python
<p>I'm trying to find the minimum elements within a section of a list. In the following example, a is the start and b is the end. I would like these indexes to partition the list inclusively, so that if the list is [1,2,3,9,4,10] the indexes 1 to 4 would include 2 and 4.</p> <pre><code>def minimum (a,b,list):
    return min(list[a:b])
</code></pre> <p>In other words, is there a way to make <code>list[a:b]</code> inclusive?</p>
<p>By default, no.</p> <p>For this case, it is more conventional to do:</p> <pre><code>min(li[a:b + 1])
</code></pre> <p>Also beware of naming your variable list, as it can have unintended consequences (silent namespace issues), as "list" also names the built-in list container type.</p> <p>If you simply want to write your own minimum method, you can encapsulate this behavior in your minimum method using the above method so you don't have to think about it ever again.</p> <p>Side note: Standard list-slicing uses O(N) space and can get expensive for large lists if minimum is called over and over again. A cheaper O(1) space alternative would be:</p> <pre><code>import itertools

def minimum(a, b, li):
    return min(itertools.islice(li, a, b + 1))
</code></pre> <p><strong>EDIT: Don't use islice unless you are slicing starting at the beginning of the list or have tight memory constraints. It first iterates up to a, rather than directly indexing to a, which can take O(b) run-time.</strong></p> <p>A better solution would be something like this, which runs with O(b-a) run-time and O(1) space:</p> <pre><code>def minimum(li, a=0, b=None):
    if b is None:
        b = len(li) - 1
    if b - a &lt; 0:
        raise ValueError("minimum() arg is an empty sequence")
    current_min = li[a]
    for index in xrange(a, b + 1):
        current_min = min(current_min, li[index])
    return current_min
</code></pre> <p>A fancier solution that you can use if the list is static (elements are not deleted and inserted during the series of queries) is to perform Range Minimum Queries using segment trees: <a href="http://www.geeksforgeeks.org/segment-tree-set-1-range-minimum-query/" rel="nofollow">http://www.geeksforgeeks.org/segment-tree-set-1-range-minimum-query/</a></p> <p>Building the tree requires O(N) run-time and O(N) space, although all queries after that require only O(log(N)) run-time and O(1) additional space.</p>
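A quick sanity check of the inclusive slice against the example list from the question:

```python
def minimum(a, b, li):
    # li[a:b + 1] makes the end index inclusive.
    return min(li[a:b + 1])

li = [1, 2, 3, 9, 4, 10]
# indexes 1..4 inclusive cover the elements 2, 3, 9, 4, whose minimum is 2
```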
python-2.7|python-3.x
2
673
60,110,915
Install specific python library version based on another library version
<p>In setup.py I have</p> <pre><code>install_requires=[
    "python-consul",
    "library_a",
    "library_b"
]
</code></pre> <p>library_b is also imported by library_a but it is pinned in library_a.</p> <p>Is it possible to pin library_b to the version it is pinned to in library_a? I know I can just pin the same version, but then every time library_b's pin is updated in library_a I need to re-pin it in my service.</p>
<p>Probably you can just omit one of the libraries, but not sure without concrete examples.</p> <p>Anyhow, you can use <a href="https://pip.pypa.io/en/stable/reference/pip_install/#requirement-specifiers" rel="nofollow noreferrer">requirement specifiers</a> to define version rules and pin those rules for the libraries you need. Example:</p> <pre><code>install_requires = [
    "python-consul",
    "library_a &gt;=1.2, &lt;2.0",
    "library_b &gt;=1.2, &lt;2.0",
]
</code></pre> <p>A specifier can be an exact version, or greater/less than some version (by major, minor or even build version). The full list of version specifiers (rules) and examples <a href="https://www.python.org/dev/peps/pep-0440/#version-specifiers" rel="nofollow noreferrer">can be found here</a>.</p>
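As an illustration of what a pin like `>=1.2, <2.0` means, here is a toy comparator for plain dotted versions (real resolution is done by pip per PEP 440 and handles much more, e.g. pre-releases; these function names are made up for the sketch):

```python
def parse(version):
    # "1.10.2" -> (1, 10, 2); tuple comparison then orders plain numeric
    # releases correctly (no pre-release/dev suffixes handled here).
    return tuple(int(part) for part in version.split("."))

def satisfies(version, low, high):
    # Mimics a ">=low, <high" style specifier.
    return parse(low) <= parse(version) < parse(high)
```

Note that the tuple comparison correctly treats `1.10` as newer than `1.2`, which a naive string comparison would get wrong.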
python|setup.py
0
674
62,898,802
Git: How to save and return to a specific version
<p>I'm new here so I apologize if this isn't the place to ask this question. I'm writing a python script for my company that looks through files in certain commits and compares them. Well I'm not familiar with git and how commits work so maybe someone more knowledgeable than me can help. What I have so far is something along the lines of this:</p> <pre><code>import git

# Directory for my repo
repo = git.Repo(&lt;repo path&gt;)

# Get the current commit and save it for later use
commit = repo.commit()

&lt; Here is where I search through the current files to get my info &gt;

# Checkout the old commit
repo.git.checkout(&quot;HEAD~1&quot;, force=True)

&lt; Here is where I search through the old files to get my info &gt;

# Re-checkout the current commit
repo.git.checkout(commit.hexsha, force=True)

&lt; Now I want to be back where I started &gt;
</code></pre> <p>This works well and it almost accomplishes what I want it to. However... the whole reason for this script is to check changes to files. In other words, the developer will work on and change many of the files so that they are different from the last commit. The problem is when this checks out the newer commit again, the changes are gone (obviously very frustrating to the developer). So the overall process is something like this:</p> <pre><code>--&gt; new_commit on local machine (with changes from developer)
--&gt; old_commit checked out to see what changed
--&gt; new_commit checked back out (as if the developer never worked on it)
</code></pre> <p>Overall, my question is: is there any way that I can save this new commit with the changes so that when re-checked out it still has the changes? Thank you for any help!</p> <p>Edit: What I want to achieve is storing the uncommitted/unstaged changes to the version currently checked out, then check out an older version, and finally bring back the uncommitted/unstaged changes.</p>
<p>For anyone looking for the answer to this question, what worked was LeGEC's solution in the comments. I used:</p> <pre><code>import subprocess

# Here is where I got info on the current commit with changes

subprocess.run([&quot;git&quot;, &quot;stash&quot;])

# Here is where I got info on the older commit version

subprocess.run([&quot;git&quot;, &quot;stash&quot;, &quot;pop&quot;])

# Back to the commit with changes
</code></pre>
python|python-3.x|git
0
675
32,242,446
try and except in while-loop python
<p>I am working on a live plot of incoming data. The data comes from a spectrum analyser and sometimes I get faulty data. Faulty in the sense that some positions contain letters instead of numbers.</p> <p>I save the incoming data as a list and then I convert it to a <code>numpy.array</code> with</p> <pre><code>trace = np.array(trace, np.float)
</code></pre> <p>So when there are letters instead of numbers in one of the entries, a <code>ValueError</code> is raised, the program is canceled and it doesn't plot anymore.</p> <p>So I was thinking about using <code>try and except</code> inside my <code>while-loop</code>.</p> <p>But here the problem arises: the plot doesn't look like it should anymore.</p> <p>My idea was that if the data is faulty, the live plot should just not plot the data at all and skip the drawing. The pieces with wrong data remain white or aren't updated.</p> <p>That's how the plot should normally look: <a href="https://i.stack.imgur.com/U5Xv4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/U5Xv4.png" alt=""></a></p> <p>I hope you get the idea... With every new piece of data the next sixteenth part of the circle is drawn.</p> <p>But with <code>try and except</code> it looks like this:</p> <p><a href="https://i.stack.imgur.com/HjMA5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HjMA5.png" alt=""></a></p> <p>and only the part on the negative y-axis is updated.</p> <p>Oh and I forgot to mention that the <code>while-loop</code> has no breaking condition.</p> <p>Maybe I have the wrong idea of how <code>try and except</code> work. But I hope you can help me.</p> <p>The code of the <code>while-loop</code>:</p> <pre><code>while True :
    try:
        trace = inst.query(':TRACe:DATA? TRACE1').partition(' ')[2][:-2].split(', ') # the first &amp; last 2 entries are cut off, are random numbers
        f = open(timestamp,'a') # open file
        for value in trace : # write to file
            f.write(value)
            f.write('\n')

        zeroarray = np.zeros(200) # change the length of zeroarray to gain a bigger circle in the middle
        trace = np.array(trace, np.float)

        indexmax = np.argsort(trace) # gives us the index array of the sorted vector maximum
        maximum = np.sort(trace) # sorts the array with the values

        print 'The four maxima are' # prints the four biggest values
        for i in range(-1,-5,-1):
            if indexmax[i] == 0 :
                frequency = start
            elif indexmax[i] == 600 :
                frequency = stop
            else :
                frequency = ( indexmax[i] + 1 ) * (start -stop)/601
            print maximum[i], 'dB at', frequency ,' Hz '
        print '\n'

        trace = np.insert(trace,0,zeroarray)

        a = np.linspace(i*np.pi/8+np.pi/16-np.pi/8, i*np.pi/8+np.pi/16, 2) # Angle, circle is divided into 16 pieces
        b = np.linspace(start -scaleplot, stop,801) # points of the frequency + 200 more points to gain the inner circle
        A, B = np.meshgrid(a, trace)

        # actual plotting
        ax = plt.subplot(111, polar=True)
        ctf = ax.contourf(a, b, B, cmap=cm.jet)

        xCooPoint = i*np.pi/8 + np.pi/16 # shows the user the position of the plot
        yCooPoint = stop
        ax.plot(xCooPoint, yCooPoint, 'or', markersize = 15)

        xCooWhitePoint = (i-1) * np.pi/8 + np.pi/16 # this erases the old red points
        yCooWhitePoint = stop
        ax.plot(xCooWhitePoint, yCooWhitePoint, 'ow', markersize = 15)

        plt.draw()
    except ValueError :
        print('Some data was wrong')
        i+=1
</code></pre> <p>And thanks for the quick help!</p>
<p>I would recommend putting in the <code>try/except</code> clause only what you expect to raise an exception. The code will be clearer and you can be sure that the exception is raised by what you expect to raise it. Something like:</p> <pre><code>try:
    trace = np.array(trace, np.float)
except ValueError:
    print('Some data was wrong')
    i += 1
    continue

#remaining code...
</code></pre> <p>Some more comments:</p> <ol> <li>Do you need to open the file at every iteration? Shouldn’t you close it as well?</li> <li>Do you need to create the subplot at every iteration?</li> <li>You are using the <code>i</code> variable in <code>range</code> and at the end of the <code>except</code>. Shouldn't you use different variable names? Are you sure <code>i</code> only needs to be increased in case of an exception?</li> </ol>
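The same principle can be seen in isolation: only the float conversion is guarded, so rows with letters in them are counted and skipped, while real bugs elsewhere still surface normally (a sketch - `parse_rows` is a made-up stand-in for the `np.array(trace, np.float)` call in the question):

```python
def parse_rows(rows):
    # Guard only the conversion: a row containing letters instead of
    # numbers raises ValueError, gets counted and skipped, and the loop
    # keeps running for the remaining rows.
    good, bad = [], 0
    for row in rows:
        try:
            good.append([float(x) for x in row])
        except ValueError:
            bad += 1
    return good, bad
```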
python|numpy|matplotlib|plot|try-except
0
676
28,155,535
Numpy-style error tracebacks?
<p>In numpy, when you make a mistake, the error doesn't tell you about all the numpy internals, just the user-level error made. For example:</p> <pre><code>import numpy as np
A = np.ones([1,2])
B = np.ones([2,3])
A+B
</code></pre> <p>spits back</p> <pre><code>Traceback (most recent call last):
  File "/home/roderic/Desktop/scratchpad.py", line 5, in &lt;module&gt;
    A+B
ValueError: operands could not be broadcast together with shapes (1,2) (2,3)
</code></pre> <p>Notice how it doesn't tell you about all the internal bouncing around that numpy did in order to determine that you are multiplying incompatible matrices, nor where the ValueError was raised exactly. I want to do the same for my project, where the traceback should stop outside of the module internals (unless I am on debug mode). So, if the traceback is 10 steps long, and the first 4 are on user level, and the last 6 are internal processing from my library, I only want to feature the first 4.</p> <p>I know how to extract the stack, but I don't know how to modify it and re-inject it before raising the exception. I also assume this is considered a bad idea, and if so, I'd like to know what my other options are.</p> <p>My horrible temporary solution is looking like this:</p> <pre><code>except AssertionError as error:
    # something went wrong, the input was not correct
    print( "Traceback (most recent call last):")
    for filepath, line_no, namespace, line in traceback.extract_stack():
        if os.path.basename(filepath)=='MyModuleName.py':
            break
        print( '  File "{filepath}", line {line_no}, in {namespace}\n'
               '    {line}'.format(**locals()))
    exit()
</code></pre>
<p>The only reason that <code>A+B</code> doesn't show any internal stack frames is that <code>numpy.ndarray.__add__()</code> happens to be implemented in C, so there are no Python stack frames after the one containing the <code>A+B</code> to show. numpy is not doing anything special to clean up the stack trace.</p>
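The effect is easy to demonstrate without numpy at all: an operation implemented in C contributes no Python frames beyond the call site, while each pure-Python wrapper adds exactly one (a small illustration; the function names are made up):

```python
import sys
import traceback

def c_level():
    # int.__add__ is implemented in C: the TypeError surfaces at this
    # line with no additional Python frames underneath it.
    return 1 + "x"

def py_level():
    # A pure-Python wrapper adds exactly one more frame to the traceback.
    return c_level()

def count_frames(fn):
    try:
        fn()
    except TypeError:
        return len(traceback.extract_tb(sys.exc_info()[2]))
```

`count_frames(c_level)` sees two frames (the call site plus `c_level`), while `count_frames(py_level)` sees three - which is exactly why `A+B` shows nothing deeper than the line containing it: numpy's C internals never appear as Python frames.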
python|numpy|traceback
1
677
44,058,215
Dissecting a permutation algorithm in Python
<p>I am trying to get my head around how this permutation algorithm works:</p> <pre><code>def perm(n, i):
    if i == len(n) - 1:
        print n
    else:
        for j in range(i, len(n)):
            n[i], n[j] = n[j], n[i]
            perm(n, i + 1)
            n[i], n[j] = n[j], n[i] # swap back, for the next loop

perm([1, 2, 3], 0)
</code></pre> <p>Output:</p> <pre><code>[1, 2, 3]
[1, 3, 2]
[2, 1, 3]
[2, 3, 1]
[3, 2, 1]
[3, 1, 2]
</code></pre> <p><strong>Question</strong></p> <p>How is it that the original list is the <em>first</em> line printed?</p> <p>In this example, the length of <code>n</code> is 3. Initially, <code>i</code> is 0. The code should skip the <code>if</code> statement, and then the first iteration mutates the list. How do we get <code>[1, 2, 3]</code> as the first line of output?</p>
<p>It <em>does</em> skip the <code>if</code> at the top level. It drops into the <code>else</code> and iterates <code>j</code> through the list. The first iteration has i == j == 0, so the swap does nothing, and you recur with ([1, 2, 3], 1).</p> <p>This process repeats for that instance, having i == j == 1. That recurs with ([1, 2, 3], 2). <em>That</em> instance is the one that prints <code>[1, 2, 3]</code> as the first line of output.</p> <p>Does that clear it up?</p> <p>If not, learn how to insert useful <code>print</code> statements to trace execution. Perhaps this makes it more clear.</p> <pre><code>indent = ""

def perm(n, i):
    global indent
    indent += "  "
    print indent, "ENTER", n, i
    if i == len(n) - 1:
        print n
    else:
        for j in range(i, len(n)):
            print indent, "RECUR", i, j
            n[i], n[j] = n[j], n[i]
            perm(n, i + 1)
            n[i], n[j] = n[j], n[i] # swap back, for the next loop
    indent = indent[2:]

perm([1, 2, 3], 0)
</code></pre> <p>Output:</p> <pre><code>  ENTER [1, 2, 3] 0
  RECUR 0 0
    ENTER [1, 2, 3] 1
    RECUR 1 1
      ENTER [1, 2, 3] 2
[1, 2, 3]
    RECUR 1 2
      ENTER [1, 3, 2] 2
[1, 3, 2]
  RECUR 0 1
    ENTER [2, 1, 3] 1
    RECUR 1 1
      ENTER [2, 1, 3] 2
[2, 1, 3]
    RECUR 1 2
      ENTER [2, 3, 1] 2
[2, 3, 1]
  RECUR 0 2
    ENTER [3, 2, 1] 1
    RECUR 1 1
      ENTER [3, 2, 1] 2
[3, 2, 1]
    RECUR 1 2
      ENTER [3, 1, 2] 2
[3, 1, 2]
</code></pre>
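To check that the swap/recurse/swap-back scheme really produces all n! orderings, a variant that collects results instead of printing them can be used (note the `n[:]` copy - the list itself keeps being mutated in place):

```python
def perms(n, i, out):
    if i == len(n) - 1:
        out.append(n[:])             # snapshot: n itself keeps being mutated
    else:
        for j in range(i, len(n)):
            n[i], n[j] = n[j], n[i]
            perms(n, i + 1, out)
            n[i], n[j] = n[j], n[i]  # swap back, for the next loop

result = []
perms([1, 2, 3], 0, result)
```

The first snapshot is `[1, 2, 3]` itself, because the innermost call is reached before any swap has changed anything - the first two swaps are i == j no-ops.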
python|algorithm|permutation
3
678
13,819,019
Add element to a bibtexfile in Python
<p>I have created a script which scrapes many pdfs for abstracts and keywords. I also have a collection of bibtex-files in which I want to place the texts I've extracted. What I'm looking for is a way of adding elements to the bibtex files.</p> <p>I have written a short parser:</p> <pre><code>#!/usr/bin/python
#-*- coding: utf-8

import os
from pybtex.database.input import bibtex

dir_path = "nime_archive/nime/bibtex/"

num_texts = 0

class Bibfile:
    def __init__(self,bibs):
        global num_texts
        self.bibs = bibs
        for a in self.bibs.entries.keys():
            num_texts += 1
            print bibs.entries[a].fields['title']
            #Need to implement a way of getting just the nime-identificator
            try:
                print bibs.entries[a].fields['url']
            except:
                print "couldn't find URL for text: %s " % a

print "creating new bibfile"
bibfiles = []
parser = bibtex.Parser()
for infile in os.listdir(dir_path):
    if infile.endswith(".bib"):
        print infile
        bibfiles = Bibfile(parser.parse_file(dir_path+infile))
</code></pre> <p>My question is if it is possible to use Pybtex to add elements into the existing bibtex-files (or create a copy) so I can merge my extractions with what is already available. If this is not possible in Pybtex, what other bibtex parser can I use?</p>
<p>I've never used pybtex, but from a quick glance, you can add entries. Since <code>self.bibs.entries</code> appears to be a <code>dict</code>, you can come up with a unique key, and add more entries to it. Example:</p> <pre><code>key = "some_unique_string" new_entry = Entry('article', fields={ 'language': u'english', 'title': u'Predicting the Diffusion Coefficient in Supercritical Fluids', 'journal': u'Ind. Eng. Chem. Res.', 'volume': u'36', 'year': u'1997', 'pages': u'888-895', }, persons={'author': [Person(u'Liu, Hongquin'), Person(u'Ruckenstein, Eli')]}, ) self.bibs.entries[key] = new_entry </code></pre> <p>(caveat: untested)</p> <p>If you wonder where I got this example form: have a look in the <code>tests/</code> subdirectory of the source of pybtex. I got the above code example mainly from <code>tests/database_test/data.py</code>. Tests can be a good source of documentation if the actual documentation is lacking.</p>
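<p>A short, hedged sketch of creating an entry and serializing it back to BibTeX text (the entry key and fields are made up; <code>to_string('bibtex')</code> exists in recent pybtex versions, and the fallback branch just keeps the sketch runnable when pybtex is not installed):</p>

```python
try:
    from pybtex.database import BibliographyData, Entry
    HAVE_PYBTEX = True
except ImportError:  # keep the sketch runnable without pybtex installed
    HAVE_PYBTEX = False

if HAVE_PYBTEX:
    bib = BibliographyData()
    # entries behaves like a dict, as shown in the answer above
    bib.entries['doe2012'] = Entry('article', fields={
        'title': 'An Example',
        'year': '2012',
    })
    out = bib.to_string('bibtex')  # or bib.to_file(path) to write a file
else:
    out = '@article{doe2012,\n  title = "An Example",\n  year = "2012"\n}'
```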
python|bibtex
1
679
13,840,697
Xlib control keyboard events
<p>How does one simulate keyboard key presses in python (Xlib) I have been using Xlib-python for simulating mouse pointer events such as movements and clicks. But I haven't been able to find enough help for doing a similar thing for keyboard presses.</p> <p>Preferred platform : python on linux</p>
<p>I'm no expert on Xlib, but managed to piece together this code for the PyAutoGUI module. Here's the minimum viable example that can simulate a <code>keyDown()</code> and <code>keyUp()</code> for a keyboard key:</p> <pre><code># You must run `pip3 install python3-xlib` to get the Xlib modules. import os from Xlib.display import Display from Xlib import X from Xlib.ext.xtest import fake_input import Xlib.XK _display = Display(os.environ['DISPLAY']) # Create the keyboard mapping. KEY_NAMES = ['\t', '\n', '\r', ' ', '!', '"', '#', '$', '%', '&amp;', "'", '(', ')', '*', '+', ',', '-', '.', '/', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', ':', ';', '&lt;', '=', '&gt;', '?', '@', '[', '\\', ']', '^', '_', '`', 'a', 'b', 'c', 'd', 'e','f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z', '{', '|', '}', '~', 'accept', 'add', 'alt', 'altleft', 'altright', 'apps', 'backspace', 'browserback', 'browserfavorites', 'browserforward', 'browserhome', 'browserrefresh', 'browsersearch', 'browserstop', 'capslock', 'clear', 'convert', 'ctrl', 'ctrlleft', 'ctrlright', 'decimal', 'del', 'delete', 'divide', 'down', 'end', 'enter', 'esc', 'escape', 'execute', 'f1', 'f10', 'f11', 'f12', 'f13', 'f14', 'f15', 'f16', 'f17', 'f18', 'f19', 'f2', 'f20', 'f21', 'f22', 'f23', 'f24', 'f3', 'f4', 'f5', 'f6', 'f7', 'f8', 'f9', 'final', 'fn', 'hanguel', 'hangul', 'hanja', 'help', 'home', 'insert', 'junja', 'kana', 'kanji', 'launchapp1', 'launchapp2', 'launchmail', 'launchmediaselect', 'left', 'modechange', 'multiply', 'nexttrack', 'nonconvert', 'num0', 'num1', 'num2', 'num3', 'num4', 'num5', 'num6', 'num7', 'num8', 'num9', 'numlock', 'pagedown', 'pageup', 'pause', 'pgdn', 'pgup', 'playpause', 'prevtrack', 'print', 'printscreen', 'prntscrn', 'prtsc', 'prtscr', 'return', 'right', 'scrolllock', 'select', 'separator', 'shift', 'shiftleft', 'shiftright', 'sleep', 'space', 'stop', 'subtract', 'tab', 'up', 'volumedown', 'volumemute', 'volumeup', 
'win', 'winleft', 'winright', 'yen', 'command', 'option', 'optionleft', 'optionright'] keyboardMapping = dict([(key, None) for key in KEY_NAMES]) keyboardMapping.update({ 'backspace': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('BackSpace')), '\b': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('BackSpace')), 'tab': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('Tab')), 'enter': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('Return')), 'return': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('Return')), 'shift': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('Shift_L')), 'ctrl': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('Control_L')), 'alt': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('Alt_L')), 'pause': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('Pause')), 'capslock': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('Caps_Lock')), 'esc': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('Escape')), 'escape': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('Escape')), 'pgup': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('Page_Up')), 'pgdn': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('Page_Down')), 'pageup': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('Page_Up')), 'pagedown': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('Page_Down')), 'end': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('End')), 'home': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('Home')), 'left': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('Left')), 'up': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('Up')), 'right': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('Right')), 'down': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('Down')), 'select': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('Select')), 'print': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('Print')), 'execute': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('Execute')), 
'prtsc': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('Print')), 'prtscr': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('Print')), 'prntscrn': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('Print')), 'printscreen': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('Print')), 'insert': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('Insert')), 'del': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('Delete')), 'delete': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('Delete')), 'help': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('Help')), 'winleft': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('Super_L')), 'winright': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('Super_R')), 'apps': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('Super_L')), 'num0': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('KP_0')), 'num1': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('KP_1')), 'num2': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('KP_2')), 'num3': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('KP_3')), 'num4': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('KP_4')), 'num5': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('KP_5')), 'num6': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('KP_6')), 'num7': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('KP_7')), 'num8': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('KP_8')), 'num9': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('KP_9')), 'multiply': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('KP_Multiply')), 'add': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('KP_Add')), 'separator': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('KP_Separator')), 'subtract': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('KP_Subtract')), 'decimal': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('KP_Decimal')), 'divide': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('KP_Divide')), 'f1': 
_display.keysym_to_keycode(Xlib.XK.string_to_keysym('F1')), 'f2': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('F2')), 'f3': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('F3')), 'f4': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('F4')), 'f5': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('F5')), 'f6': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('F6')), 'f7': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('F7')), 'f8': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('F8')), 'f9': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('F9')), 'f10': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('F10')), 'f11': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('F11')), 'f12': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('F12')), 'f13': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('F13')), 'f14': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('F14')), 'f15': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('F15')), 'f16': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('F16')), 'f17': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('F17')), 'f18': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('F18')), 'f19': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('F19')), 'f20': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('F20')), 'f21': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('F21')), 'f22': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('F22')), 'f23': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('F23')), 'f24': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('F24')), 'numlock': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('Num_Lock')), 'scrolllock': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('Scroll_Lock')), 'shiftleft': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('Shift_L')), 'shiftright': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('Shift_R')), 'ctrlleft': 
_display.keysym_to_keycode(Xlib.XK.string_to_keysym('Control_L')), 'ctrlright': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('Control_R')), 'altleft': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('Alt_L')), 'altright': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('Alt_R')), # These are added because unlike a-zA-Z0-9, the single characters do not have a ' ': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('space')), 'space': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('space')), '\t': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('Tab')), '\n': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('Return')), # for some reason this needs to be cr, not lf '\r': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('Return')), '\e': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('Escape')), '!': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('exclam')), '#': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('numbersign')), '%': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('percent')), '$': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('dollar')), '&amp;': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('ampersand')), '"': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('quotedbl')), "'": _display.keysym_to_keycode(Xlib.XK.string_to_keysym('apostrophe')), '(': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('parenleft')), ')': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('parenright')), '*': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('asterisk')), '=': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('equal')), '+': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('plus')), ',': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('comma')), '-': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('minus')), '.': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('period')), '/': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('slash')), ':': 
_display.keysym_to_keycode(Xlib.XK.string_to_keysym('colon')), ';': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('semicolon')), '&lt;': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('less')), '&gt;': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('greater')), '?': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('question')), '@': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('at')), '[': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('bracketleft')), ']': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('bracketright')), '\\': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('backslash')), '^': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('asciicircum')), '_': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('underscore')), '`': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('grave')), '{': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('braceleft')), '|': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('bar')), '}': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('braceright')), '~': _display.keysym_to_keycode(Xlib.XK.string_to_keysym('asciitilde')), }) for c in """abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890""": keyboardMapping[c] = _display.keysym_to_keycode(Xlib.XK.string_to_keysym(c)) def isShiftCharacter(character): """Returns True if the key character is uppercase or shifted.""" return character.isupper() or character in '~!@#$%^&amp;*()_+{}|:"&lt;&gt;?' def keyDown(key): """Performs a keyboard key press without the release. This will put that key in a held down state. NOTE: For some reason, this does not seem to cause key repeats like would happen if a keyboard key was held down on a text field. Args: key (str): The key to be pressed down. The valid names are listed in pyautogui.KEY_NAMES. 
Returns: None """ if key not in keyboardMapping or keyboardMapping[key] is None: return if type(key) == int: fake_input(_display, X.KeyPress, key) _display.sync() return needsShift = isShiftCharacter(key) if needsShift: fake_input(_display, X.KeyPress, keyboardMapping['shift']) fake_input(_display, X.KeyPress, keyboardMapping[key]) if needsShift: fake_input(_display, X.KeyRelease, keyboardMapping['shift']) _display.sync() def keyUp(key): """Performs a keyboard key release (without the press down beforehand). Args: key (str): The key to be released up. The valid names are listed in KEY_NAMES. Returns: None """ """ Release a given character key. Also works with character keycodes as integers, but not keysyms. """ if key not in keyboardMapping or keyboardMapping[key] is None: return if type(key) == int: keycode = key else: keycode = keyboardMapping[key] fake_input(_display, X.KeyRelease, keycode) _display.sync() </code></pre>
python|linux|xlib
3
680
8,084,260
How to print a file to stdout?
<p>I've searched and I can only find questions about the other way around: writing stdin to a file.</p> <p>Is there a quick and easy way to dump the contents of a file to <code>stdout</code>?</p>
<p>Sure. Assuming you have a string with the file's name called <code>fname</code>, the following does the trick.</p> <pre><code>with open(fname, 'r') as fin: print(fin.read()) </code></pre>
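<p>For very large files, <code>read()</code> pulls everything into memory at once. A chunked alternative using only the standard library (the sample file here is created just to make the sketch self-contained):</p>

```python
import io
import os
import shutil
import sys
import tempfile

# Create a small file to stand in for the user's file.
fd, fname = tempfile.mkstemp()
with os.fdopen(fd, "w") as f:
    f.write("hello from the file\n")

def dump(path, out=None):
    """Stream a file to stdout in chunks, so huge files
    do not have to fit in memory (unlike read())."""
    with open(path) as fin:
        shutil.copyfileobj(fin, out or sys.stdout)

dump(fname)
```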
python
136
681
1,101,550
Why my Python test generator simply doesn't work?
<p>This is a sample script to test the use of yield... am I doing it wrong? It always returns '1'...</p> <pre><code>#!/usr/bin/python def testGen(): for a in [1,2,3,4,5,6,7,8,9,10]: yield a w = 0 while w &lt; 10: print testGen().next() w += 1 </code></pre>
<p>You're creating a new generator each time. You should only call <code>testGen()</code> once and then use the object returned. Try:</p> <pre><code>w = 0 g = testGen() while w &lt; 10: print g.next() w += 1 </code></pre> <p>Then of course there's the normal, idiomatic generator usage:</p> <pre><code>for n in testGen(): print n </code></pre> <p>Note that this will only call <code>testGen()</code> once at the start of the loop, not once per iteration.</p>
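<p>To see the generator keeping its position between calls, here is a small Python 3 sketch (Python 3 spells it <code>next(g)</code> rather than <code>g.next()</code>):</p>

```python
def test_gen():
    for a in [1, 2, 3]:
        yield a

g = test_gen()     # one generator object, created once
first = next(g)    # 1
second = next(g)   # 2 -- resumes where it left off
rest = list(g)     # [3] -- consumes whatever remains
```

Once `rest` has been built, the generator is exhausted and any further `next(g)` raises `StopIteration`.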
python|testing|generator|yield
10
682
326,254
Packaging a Python library
<p>I have a few Munin plugins which report stats from an Autonomy database. They all use a small library which scrapes the XML status output for the relevant numbers.</p> <p>I'm trying to bundle the library and plugins into a Puppet-installable RPM. The actual RPM-building should be straightforward; once I have a <code>distutils</code>-produced distfile I can make it into an RPM based on a .spec file pinched from the Dag or EPEL repos [1]. It's the <code>distutils</code> bit I'm unsure of -- in fact I'm not even sure my library is correctly written for packaging. Here's how it works:</p> <p>idol7stats.py:</p> <pre><code>import datetime import os import stat import sys import time import urllib import xml.sax class IDOL7Stats: cache_dir = '/tmp' def __init__(self, host, port): self.host = host self.port = port # ... def collect(self): self.data = self.__parseXML(self.__getXML()) def total_slots(self): return self.data['Service:Documents:TotalSlots'] </code></pre> <p>Plugin code:</p> <pre><code>from idol7stats import IDOL7Stats a = IDOL7Stats('db.example.com', 23113) a.collect() print a.total_slots() </code></pre> <p>I guess I want idol7stats.py to wind up in <code>/usr/lib/python2.4/site-packages/idol7stats</code>, or something else in Python's search path. What <code>distutils</code> magic do I need? This:</p> <pre><code>from distutils.core import setup setup(name = 'idol7stats', author = 'Me', author_email = '[email protected]', version = '0.1', py_modules = ['idol7stats']) </code></pre> <p>almost works, except the code goes in <code>/usr/lib/python2.4/site-packages/idol7stats.py</code>, not a subdirectory. I expect this is down to my not understanding the difference between modules/packages/other containers in Python.</p> <p>So, what's the rub?</p> <p>[1] Yeah, I could just plonk the library in <code>/usr/lib/python2.4/site-packages</code> using RPM but I want to know how to package Python code.</p>
<p>You need to create a <em>package</em> to do what you want. You'd need a directory named <code>idol7stats</code> containing a file called <code>__init__.py</code> and any other library modules to package. Also, this will affect your scripts' imports; if you put <code>idol7stats.py</code> in a package called <code>idol7stats</code>, then your scripts need to "<code>import idol7stats.idol7stats</code>".</p> <p>To avoid that, you could just rename <code>idol7stats.py</code> to <code>idol7stats/__init__.py</code>, or you could put this line into <code>idol7stats/__init__.py</code> to "massage" the imports into the way you expect them:</p> <pre><code>from idol7stats.idol7stats import * </code></pre>
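<p>To see why the directory-plus-<code>__init__.py</code> layout works, here is a runnable sketch that builds the package in a temporary directory and imports it (<code>total_slots</code> is a stand-in for the real library code; with this layout, <code>setup(..., packages=['idol7stats'])</code> replaces <code>py_modules</code> in setup.py):</p>

```python
import os
import sys
import tempfile

# Build the layout the answer describes:
#   idol7stats/
#       __init__.py   (the former idol7stats.py)
root = tempfile.mkdtemp()
pkg_dir = os.path.join(root, "idol7stats")
os.mkdir(pkg_dir)
with open(os.path.join(pkg_dir, "__init__.py"), "w") as f:
    f.write("def total_slots():\n    return 42\n")

sys.path.insert(0, root)   # stands in for site-packages
import idol7stats          # works because the directory has __init__.py

result = idol7stats.total_slots()
```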
python|packaging|distutils
2
683
41,978,603
removing outline color of scatter plot in matplotlib python
<p>Suppose I have gridded data with dimensions (x, y) and values in z, so we can simply make a scatter plot of the third dimension by:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt x = np.random.random(10) y = np.random.random(10) z = np.random.random(10) plt.scatter(x, y, c = z, s=150, cmap = 'jet') plt.show() </code></pre> <p><a href="https://i.stack.imgur.com/4IW2w.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4IW2w.png" alt="[1]: https://i.stack.imgur.com/2acLp.png"></a></p> <p>What I am thinking now is to remove the line color of each circular scatter marker. And also, instead of a circle, can we make it a square?</p> <p>I did not find any way to do that. Your help will be highly appreciated.</p>
<ul> <li>Pass the argument <code>edgecolors='none'</code> to <code>plt.scatter</code>. The patch boundary will not be drawn. </li> <li>Pass the argument <code>marker='s'</code> to <code>plt.scatter</code>. The marker style will be square. </li> </ul> <p>Then, we have,</p> <p><a href="https://i.stack.imgur.com/kA440.png" rel="noreferrer"><img src="https://i.stack.imgur.com/kA440.png" alt="enter image description here"></a></p> <hr> <p>The source code,</p> <pre><code>import numpy as np import matplotlib.pyplot as plt x = np.random.random(10) y = np.random.random(10) z = np.random.random(10) plt.scatter(x, y, c = z, s=150, cmap = 'jet', edgecolors='none', marker='s') plt.show() </code></pre> <p>Refer to <a href="http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.scatter" rel="noreferrer"><code>matplotlib.pyplot.scatter</code></a> for more information.</p>
python|matplotlib|plot
8
684
47,418,621
Error compiling python
<p>Whenever I try to execute this code: </p> <pre><code>name = input("What's your name?") print("Hello World", name) </code></pre> <p>By running the command <code>python myprogram.py</code> on the command line, it gives me this error: </p> <pre><code>What's your name?John Traceback (most recent call last): File "HelloWorld.py", line 1, in &lt;module&gt; name = input("What's your name?") File "&lt;string&gt;", line 1, in &lt;module&gt; NameError: name 'John' is not defined </code></pre> <p>It asks me the name but as soon as I type it and press enter it crashes, what does the error mean? Thanks.</p>
<p>In Python 2 you should use <code>raw_input</code> instead of <code>input</code> in this case. Python 2's <code>input</code> evaluates whatever you type as a Python expression, so <code>John</code> is looked up as a variable name, which raises the <code>NameError</code>; <code>raw_input</code> returns the typed text as a plain string. (In Python 3, <code>input</code> behaves like Python 2's <code>raw_input</code>.)</p>
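<p>If the script has to run under both interpreters, a small portability shim is a common workaround (a sketch, not the only way to do it):</p>

```python
import sys

def safe_input(prompt=""):
    """Read one line of text as a plain string on Python 2 and 3."""
    if sys.version_info[0] >= 3:
        return input(prompt)
    return raw_input(prompt)  # noqa: F821 -- only defined on Python 2

# Interactive demo, skipped when stdin is not a terminal.
if __name__ == "__main__" and sys.stdin and sys.stdin.isatty():
    name = safe_input("What's your name?")
    print("Hello World", name)
```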
python
1
685
33,935,076
Python: Add strings to a list full of integers
<p>Simple example:</p> <p>I got a list full of integers which looks like this:</p> <pre><code>mylist1 = [1, 2, 3, 4, 5] print mylist1 [1, 2, 3, 4, 5] </code></pre> <p>Now I want to add a string to every integer in the list. It should look like this afterwards:</p> <pre><code>['1 Hi', '2 How', '3 Are', '4 You', '5 Doing'] </code></pre> <p>By now I should have a list full of strings. How do I do that?</p>
<pre><code>&gt;&gt;&gt; mylist1 = [1, 2, 3, 4, 5] &gt;&gt;&gt; mylist2 = ['Hi', 'How', 'Are', 'You', 'Doing'] &gt;&gt;&gt; map(lambda x,y:str(x)+" "+y, mylist1,mylist2) ['1 Hi', '2 How', '3 Are', '4 You', '5 Doing'] </code></pre>
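<p>A variant that avoids the <code>map</code>/<code>lambda</code> pair, and also works unchanged on Python 3 (where <code>map</code> returns a lazy iterator rather than a list):</p>

```python
mylist1 = [1, 2, 3, 4, 5]
mylist2 = ['Hi', 'How', 'Are', 'You', 'Doing']

# zip pairs the elements up; the comprehension formats each pair
combined = ["{} {}".format(n, w) for n, w in zip(mylist1, mylist2)]
print(combined)
```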
python|string|list|integer
4
686
29,878,151
Why is re.findall matching the string, but not returning the results correctly?
<p>I want to find a substring of the pattern <code>([A-Z][0-9]+)+</code> in another string.</p> <p>One way to do this would be:</p> <pre><code>import re re.findall("([A-Z][0-9]+)+", "asdf A0B52X4 asdf")[0] </code></pre> <p>Curiously, this yields <code>'X4'</code>, not <code>'A0B52X4'</code>, which was the result I expected.</p> <p>Digging a bit into this, I also tried to just match the simple groups the string is composed of:</p> <pre><code>re.findall("[A-Z][0-9]+", "asdf A0B52X4 asdf") </code></pre> <p>Which yields the expected result: <code>['A0', 'B52', 'X4']</code></p> <p>And even more interesting:</p> <pre><code>re.findall("([A-Z][0-9]+){3,}", "asdf A0B52X4 asdf") </code></pre> <p>Which yields <code>['X4']</code>, but still seems to match the whole string I'm interested in, which is confirmed by trying <code>re.search</code> and using the result to obtain the substring manually:</p> <pre><code>m = re.search("([A-Z][0-9]+)+", "asdf A0B52X4 asdf") m.string[m.start():m.end()] </code></pre> <p>This yields <code>'A0B52X4'</code>.</p> <p>Now from what I know about regular expressions in python, parentheses not only just match the RE inside them, but also declare a "group" which lets you do all sorts of things with it. My theory would be that for some reason, <code>re.findall</code> only puts the last match of a group into the result string as opposed to the complete match.</p> <p>Why does <code>re.findall</code> behave like this?</p>
<p>It's because your matching group only matches one instance of the pattern at a time. The <code>+</code> just means to match all of them that occur in a row. Each repetition overwrites the group's captured value, so only the <em>last</em> repetition (<code>'X4'</code>) is kept.</p> <p>Wrap your regex in an outer group and make the inner one non-capturing, like this:</p> <pre><code>((?:[A-Z][0-9]+)+) </code></pre> <p><a href="https://regex101.com/r/gX7pB8/1" rel="nofollow"><strong>Demo</strong></a></p>
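<p>A quick check of the non-capturing fix, plus an equivalent that skips capturing groups entirely so <code>findall</code> returns whole matches:</p>

```python
import re

text = "asdf A0B52X4 asdf"

# Outer group captures the full run; inner (?:...) group captures nothing.
grouped = re.findall(r"((?:[A-Z][0-9]+)+)", text)

# No capturing groups at all, so findall returns the whole match.
whole = re.findall(r"(?:[A-Z][0-9]+)+", text)
```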
python|regex|string
2
687
61,309,144
Python sqlite query based on logged username
<p><em>I am struggling to understand why I cannot get the expected result from my query. I am using flask with SQLite and can easily return the username to the webpage with the</em> <strong>"userlogin = session['username']"</strong> <em>What i am trying to get is to query the database based on the username of the logged user in order to only show information related to this specific user.</em> <strong>mytable</strong> <em>is configured with username column used for the filter.</em></p> <pre><code>@app.route('/dashboard') @is_logged_in def dashboard(): from functions.sqlquery import sql_query userlogin = session['username'] results = sql_query('''SELECT * FROM mytable WHERE username IS "userlogin"''') return render_template('dashboard.html', results=results) </code></pre> <p><strong>sql_query.py</strong></p> <pre><code>def sql_query(query): cur = conn.cursor() cur.execute(query) rows = cur.fetchall() return rows </code></pre>
<p>The SQL query isn't correct. You should use <code>=</code> instead of <code>IS</code>.</p> <p>I would recommend making the following changes:</p> <p>1) use a parameterised query to avoid SQL injection attacks. So pass the parameters to <code>sql_query()</code> as a tuple:</p> <pre><code>def sql_query(query, params): cur = conn.cursor() cur.execute(query, params) rows = cur.fetchall() return rows </code></pre> <p>2) change the call to <code>sql_query</code></p> <pre><code>results = sql_query("SELECT * FROM mytable WHERE username = ?", (userlogin,)) </code></pre>
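<p>Putting both changes together, a self-contained sketch against an in-memory database (the table schema here is invented for illustration):</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (username TEXT, note TEXT)")
conn.executemany("INSERT INTO mytable VALUES (?, ?)",
                 [("alice", "first"), ("bob", "second")])

def sql_query(query, params):
    cur = conn.cursor()
    cur.execute(query, params)
    return cur.fetchall()

# The ? placeholder lets sqlite3 escape the value safely.
rows = sql_query("SELECT * FROM mytable WHERE username = ?", ("alice",))
```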
python|sqlite|flask
1
688
27,891,546
Use built-in setattr simultaneously with index slicing
<p>A class I am writing requires the use of variable-name attributes storing numpy arrays. I would like to assign values to slices of these arrays. I have been using setattr so that I can leave the attribute name to vary. My attempts to assign values to slices are these:</p> <pre><code>class Dummy(object): def __init__(self, varname): setattr(self, varname, np.zeros(5)) d = Dummy('x') ### The following two lines are incorrect setattr(d, 'x[0:3]', [8,8,8]) setattr(d, 'x'[0:3], [8,8,8]) </code></pre> <p>Neither of the above uses of setattr produce the behavior I want, which is for d.x to be a 5-element numpy array with entries [8,8,8,0,0]. Is it possible to do this with setattr?</p>
<p>Think about how you would normally write this bit of code:</p> <pre><code>d.x[0:3] = [8, 8, 8] # an index operation is really a function call on the given object # eg. the following has the same effect as the above d.x.__setitem__(slice(0, 3, None), [8, 8, 8]) </code></pre> <p>Thus, to do the indexing operation you need to get the object referred to by the name <code>x</code> and then perform an indexing operation on it. eg.</p> <pre><code>getattr(d, 'x')[0:3] = [8, 8, 8] </code></pre>
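<p>A dependency-free demonstration of the pattern (a plain list stands in for <code>np.zeros(5)</code>; the <code>getattr</code> trick is identical with a numpy array):</p>

```python
class Dummy(object):
    def __init__(self, varname):
        setattr(self, varname, [0.0] * 5)  # stand-in for np.zeros(5)

d = Dummy('x')
# Look the attribute up by name, then slice-assign into the object itself.
getattr(d, 'x')[0:3] = [8, 8, 8]
```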
python|numpy|slice|setattr
3
689
27,804,600
Python function got 2 lists but only changes 1
<p>Why does Lista1 get changed but Lista2 doesn't? Which methods change the list directly?</p> <pre><code>def altera(L1, L2): for elemento in L2: L1.append(elemento) L2 = L2 + [4] L1[1]= 10 del L2[0] return L2[:] Lista1 = [1, 2, 3] Lista2 = [1, 2, 3] Lista3 = altera(Lista1, Lista2) print(Lista1) print(Lista2) print(Lista3) </code></pre>
<pre><code>L2 = L2 + [4] </code></pre> <p>rebinds the name <code>L2</code> to a brand-new list, so inside the function it no longer refers to the list that was passed in; that's the easy explanation, at least.</p> <p>You can see this by printing <code>id(L2)</code> before and after the assignment.</p> <p>If you changed it to</p> <pre><code>L2.append(4) </code></pre> <p>then it would indeed change <code>Lista2</code>, because <code>append</code> mutates the list in place.</p>
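<p>The rebinding-versus-mutation difference in one runnable sketch:</p>

```python
def rebinds(lst):
    lst = lst + [4]   # builds a new list; the caller's name still points at the old one

def mutates(lst):
    lst.append(4)     # modifies the same list object the caller passed in

a = [1, 2, 3]
rebinds(a)
after_rebind = list(a)   # snapshot: unchanged
mutates(a)
after_mutate = list(a)   # snapshot: changed
```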
python|list|methods|mutability
1
690
43,192,986
Stuck parallelisation with sklearn with large number of features (n_jobs=-1)
<p>When trying to run a simple <code>GridSearchCV</code> with <code>n_job=-1</code> often results in stuck processing. For example, </p> <pre><code>&gt;&gt; parameters_SGD = {'clf__l1_ratio': np.linspace(0,1,30), 'clf__alpha': np.logspace(-5,-1,5), 'clf__penalty':['elasticnet'], 'clf__class_weight': [None, 'balanced'],'clf__loss':['log','hinge']} &gt;&gt; pipe_SGD = Pipeline([('scl', StandardScaler()),('clf', linear_model.SGDClassifier())]) &gt;&gt; grid_search_SGD = GridSearchCV(estimator=pipe_SGD, param_grid=parameters_SGD, verbose=1, scoring=make_scorer(f1_score, average='weighted', pos_label=1), n_jobs = -1) </code></pre> <p>executing on some data (X_train, y_train):</p> <pre><code>&gt;&gt; grid_search_SGD.fit(X_train, y_train) </code></pre> <p>may result in frozen computations -> CPU usage drops to 1-3% and nothing happens.</p> <p><strong>When it happens:</strong> if the number of features of <code>X</code> is (relatively) large (>100). The CPU usage climbs up to 99% (which means all cores work) and then suddenly drops down to 1-3%.</p> <p>If I use only small subset of features (randomly selected), then parallelisation works perfectly (99-100% of CPU and I can see jobs done in parallel).</p> <p><strong>Does anyone have any idea why it happens?</strong> What may cause parallel jobs to be stuck?</p> <p>(sklearn v 0.18, mac osx)</p>
<h1>Reason</h1> <p>Parallelization in this case is based on copying all the data and sending a copy to each of the parallel processes (sklearn is based on <a href="https://pythonhosted.org/joblib/parallel.html" rel="nofollow noreferrer">joblib</a>). This means using <code>X</code> cores needs at least <code>X</code> times the memory of the single-process run.</p> <p>So in your case your memory probably is exhausted and <a href="https://en.wikipedia.org/wiki/Thrashing_(computer_science)" rel="nofollow noreferrer">thrashing occurs</a>.</p> <h1>What you can do</h1> <h2>Stick with a smaller sample size / fewer features</h2> <p>You already observed that this works.</p> <h2>Tune sklearn's GridSearchCV params</h2> <p>As explained <a href="http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html" rel="nofollow noreferrer">here</a>, the parameter <code>pre_dispatch</code> can be very important:</p> <blockquote> <p>Controls the number of jobs that get dispatched during parallel execution. Reducing this number can be useful to avoid an explosion of memory consumption when more jobs get dispatched than CPUs can process. This parameter can be:</p> <ul> <li>None, in which case all the jobs are immediately created and spawned. Use this for lightweight and fast-running jobs, to avoid delays due to on-demand spawning of the jobs</li> <li>An int, giving the exact number of total jobs that are spawned</li> <li>A string, giving an expression as a function of n_jobs, as in ‘2*n_jobs’</li> </ul> </blockquote> <p>I would recommend trying something like:</p> <pre><code>pre_dispatch='1*n_jobs' </code></pre> <p>(A side note: without any links available, I'm fairly confident that OS X is the OS with the most problems in regards to sklearn's parallelization implementation; maybe check the issues on sklearn's GitHub.)</p>
python|parallel-processing|scikit-learn
1
691
43,297,903
is there a way to make movement smoother in the Python Tk canvas?
<p>I am making a dot move around a screen, but it seems to pause (stop moving) for a bit when changing direction. </p> <p>Is there a better way to make the movement smoother, or just stop the delay in changing directions?</p> <p>Here is what I am using to move it:</p> <pre><code>def keypress(event): key = (event.keysym) if key == "w": canvas.move(player,0,-20) if key == "a": canvas.move(player,-20,0) if key == "s": canvas.move(player,0,20) if key == "d": canvas.move(player,20,0) canvas.bind_all("&lt;Key&gt;", keypress) </code></pre>
<p>Naming constants makes it easier to change them and experiment, especially when the same constant is used in multiple places in the code. In the code below, you just need to change one copy of <code>20</code> to experiment, as Bryan suggested.</p> <pre><code>distance = 20 movements = { 'w': (0, -distance), 'a': (-distance, 0), 's': (0, distance), 'd': (distance, 0), } def keypress(event): key = event.keysym.lower() canvas.move(player, *movements.get(key, (0, 0))) </code></pre> <p>While writing this, I took the opportunity to show how to use a dict to replace multiple conditionals by factoring out the common code from the changing code. The <code>*</code> syntax in the <code>move</code> call unpacks the tuple into two arguments, and <code>dict.get</code> supplies a harmless <code>(0, 0)</code> default so keys that are not bound do not raise a <code>KeyError</code>.</p>
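<p>The pause when changing direction usually comes from the OS key-repeat delay: the dot only moves when a new key event arrives. A common smoothing trick is to track which keys are held down and move the dot from a timer loop instead. The movement arithmetic can be sketched without Tk (the Tk wiring in the comments is an untested assumption):</p>

```python
DISTANCE = 4  # smaller steps applied more often look smoother than 20px jumps

MOVEMENTS = {
    'w': (0, -DISTANCE),
    'a': (-DISTANCE, 0),
    's': (0, DISTANCE),
    'd': (DISTANCE, 0),
}

def step(pressed):
    """Combine the vectors of every currently held movement key."""
    dx = sum(MOVEMENTS[k][0] for k in pressed if k in MOVEMENTS)
    dy = sum(MOVEMENTS[k][1] for k in pressed if k in MOVEMENTS)
    return dx, dy

# Tk wiring sketch: keep a set of held keys, updated by
#   canvas.bind_all("<KeyPress>",   lambda e: pressed.add(e.keysym.lower()))
#   canvas.bind_all("<KeyRelease>", lambda e: pressed.discard(e.keysym.lower()))
# and in a loop scheduled with canvas.after(16, loop):
#   canvas.move(player, *step(pressed))
```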
python|tkinter|tk
0
692
36,960,402
PYTHON - input decimal to fraction
<p>When working on <strong>python</strong>, I was able to convert a fraction to a decimal, where the user would input a numerator, then a denominator, and then <code>n/d = the result</code> (fairly simple). But I can't work out how to convert a decimal into a fraction. I want the user to input any decimal (e.g. <code>0.5</code>) and then find the simplest form of <code>x (1/2)</code>. Any help would be greatly appreciated. Thanks.</p>
<p>Use the <code>fractions</code> module.</p> <pre><code>from fractions import Fraction f = Fraction(14, 8) print(f) # Output: 7/4 print(float(f)) # Output: 1.75 f = Fraction(1.75) print(f) # Output: 7/4 print(float(f)) # Output: 1.75 </code></pre> <p>It accepts both a numerator/denominator pair and a float decimal number to construct a Fraction object.</p>
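<p>One caveat when starting from user-typed decimals: many decimals (like <code>0.1</code>) are not exactly representable as binary floats, so <code>Fraction(0.1)</code> gives a huge denominator. Passing the text instead, or calling <code>limit_denominator()</code>, recovers the simple form:</p>

```python
from fractions import Fraction

half = Fraction(0.5)                       # 0.5 is exact in binary: 1/2
tenth_raw = Fraction(0.1)                  # huge denominator (float artifact)
tenth = Fraction(0.1).limit_denominator()  # snapped back to 1/10
from_text = Fraction("0.1")                # parsing the string gives 1/10 directly
```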
python-3.x
1
693
48,825,031
Is there a simple way to copy text from the debug console of PyCharm?
<p>Is there a sane way to copy log text from the PyCharm console, instead of selecting it slowly with the mouse (especially when there's an abundance of text there)? There seems to be no "Select All" in the debug console. Is it on purpose? Is there any way to copy (all of) the text from the console sanely?</p> <p>I do hope the guys and girls at JetBrains understand that Notepad++ is way easier when looking at/analysing logs?</p>
<p>With VIM emulation on:</p> <ol> <li>Use the scrollbar to scroll to the end of what you want to copy (click/drag the bar).</li> <li>Click and drag up to highlight a few lines.</li> <li>Use the scrollbar again to scroll to the start of what you want to copy.</li> <li>Shift-click at the start of the text you want to copy (the whole range should now be highlighted).</li> <li>Right-click and select Copy.</li> </ol> <p>This isn't as quick as Ctrl-A, but it is quicker than turning VIM emulation off and on. This worked for me in the Python Console on Windows 10, PyCharm Community 2018.1.2.</p>
python|pycharm|jetbrains-ide
0
694
48,873,893
List within a dataframe cell - counting the number of items in list
<p>I currently have a dataframe that contains a list of floats within a column, and I want to add a second column to the df that counts the length of the list in the first column (the number of items within that list). What would be the easiest way to go about doing this, and would I have to write a function that iterates over each item in the column? </p>
<p>This should work:</p> <pre><code>df['list_len'] = df['list_column'].str.len() </code></pre>
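<p>For illustration, a small runnable sketch of this approach (the column names are invented). <code>.str.len()</code> works here because pandas falls back to Python's <code>len()</code> for sequence elements; <code>.apply(len)</code> is an equivalent, more explicit alternative:</p>

```python
import pandas as pd

# Hypothetical data: a column whose cells each hold a list of floats.
df = pd.DataFrame({"floats": [[1.0, 2.5], [3.0], []]})

# .str.len() computes len() of each element, including lists.
df["list_len"] = df["floats"].str.len()

# Equivalent alternative, spelling out the per-row call to len().
df["list_len_alt"] = df["floats"].apply(len)

print(df["list_len"].tolist())  # [2, 1, 0]
```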
python|pandas|dataframe
2
695
48,663,698
How to read specific sheets from My XLS file in Python
<p>As of now, I can read all of the sheets in an Excel file:</p> <pre><code>e.msgbox("select Excel File")
updated_deleted_xls = e.fileopenbox()
book = xlrd.open_workbook(updated_deleted_xls, formatting_info=True)
openfile = e.fileopenbox()
for sheet in book.sheets():
    for row in range(sheet.nrows):
        for col in range(sheet.ncols):
            thecell = sheet.cell(row, 0)
            xfx = sheet.cell_xf_index(row, 0)
            xf = book.xf_list[xfx]
</code></pre>
<p>If you open your editor from the desktop or command line, you have to specify the file path when reading the file:</p> <pre><code>import pandas as pd

df = pd.read_excel(r'File path', sheet_name='Sheet name')
</code></pre> <p>Alternatively, if you open your editor in the file's directory, then you can read it directly using the pandas library: </p> <pre><code>import pandas as pd

df = pd.read_excel('KPMG_VI_New_raw_data_update_final.xlsx', sheet_name='Title Sheet')
df1 = pd.read_excel('KPMG_VI_New_raw_data_update_final.xlsx', sheet_name='Transactions')
df2 = pd.read_excel('KPMG_VI_New_raw_data_update_final.xlsx', sheet_name='NewCustomerList')
df3 = pd.read_excel('KPMG_VI_New_raw_data_update_final.xlsx', sheet_name='CustomerDemographic')
df4 = pd.read_excel('KPMG_VI_New_raw_data_update_final.xlsx', sheet_name='CustomerAddress')
</code></pre>
python|excel|xlsx
4
696
48,719,077
Is there a way to raise normal Django form validation through ajax?
<p>I found a <a href="https://stackoverflow.com/questions/7766621/django-form-validation-message-not-render-in-viewa-in-jquery-ajax-post">similar question</a> which is quite a bit outdated. I wonder if it's possible without the use of another library. </p> <p>Currently, the <code>forms.ValidationError</code> will trigger the <code>form_invalid</code> which will only return a JSON response with the error and status code.<br> I have an ajax form and wonder if the usual django field validations can occur on the form field upon an ajax form submit. </p> <p>My form triggering the error: </p> <pre><code>class PublicToggleForm(ModelForm): class Meta: model = Profile fields = [ "public", ] def clean_public(self): public_toggle = self.cleaned_data.get("public") if public_toggle is True: raise forms.ValidationError("ERROR") return public_toggle </code></pre> <p>The corresponding View's mixin for ajax: </p> <pre><code>from django.http import JsonResponse class AjaxFormMixin(object): def form_invalid(self, form): response = super(AjaxFormMixin, self).form_invalid(form) if self.request.is_ajax(): return JsonResponse(form.errors, status=400) else: return response def form_valid(self, form): response = super(AjaxFormMixin, self).form_valid(form) if self.request.is_ajax(): print(form.cleaned_data) print("VALID") data = { 'message': "Successfully submitted form data." } return JsonResponse(data) else: return response </code></pre> <p>The View: </p> <pre><code>class PublicToggleFormView(AjaxFormMixin, FormView): form_class = PublicToggleForm success_url = '/form-success/' </code></pre> <p>On the browser console, errors will come through as a 400 Bad Request, followed by the responseJSON which has the correct ValidationError message. </p> <p>Edit: Any way to get the field validation to show client-side? 
</p> <p>edit: Additional code: </p> <p>Full copy of data received on front-end: </p> <pre><code>{readyState: 4, getResponseHeader: ƒ, getAllResponseHeaders: ƒ, setRequestHeader: ƒ, overrideMimeType: ƒ, …} abort: ƒ (a) always: ƒ () catch: ƒ (a) done: ƒ () fail: ƒ () getAllResponseHeaders: ƒ () getResponseHeader: ƒ (a) overrideMimeType: ƒ (a) pipe: ƒ () progress: ƒ () promise: ƒ (a) readyState: 4 responseJSON: public: ["ERROR"] __proto__: Object responseText: "{"public": ["ERROR"]}" setRequestHeader: ƒ (a,b) state: ƒ () status: 400 statusCode: ƒ (a) statusText: "Bad Request" then: ƒ (b,d,e) __proto__: Object </code></pre> <p>The form in the template is rendered using Django's {{as_p}}: </p> <pre><code>{% if request.user == object.user %} Make your profile public? &lt;form class="ajax-public-toggle-form" method="POST" action='{% url "profile:detail" username=object.user %}' data-url='{% url "profile:public_toggle" %}'&gt; {{public_toggle_form.as_p|safe}} &lt;/form&gt; {% endif %} </code></pre> <p>Javascript: </p> <pre><code>$(document).ready(function(){ var $myForm = $('.ajax-public-toggle-form') $myForm.change(function(event){ var $formData = $(this).serialize() var $endpoint = $myForm.attr('data-url') || window.location.href // or set your own url $.ajax({ method: "POST", url: $endpoint, data: $formData, success: handleFormSuccess, error: handleFormError, }) }) function handleFormSuccess(data, textStatus, jqXHR){ // no need to do anything here console.log(data) console.log(textStatus) console.log(jqXHR) } function handleFormError(jqXHR, textStatus, errorThrown){ // on error, reset form. raise valifationerror console.log(jqXHR) console.log("==2" + textStatus) console.log("==3" + errorThrown) $myForm[0].reset(); // reset form data } }) </code></pre>
<p>So you have your error response in JSON formatted as <code>{field_key: err_codes, ...}</code>. Then all you have to do is, for example, create <code>&lt;div class="error" style="display: none;"&gt;&lt;/div&gt;</code> under every rendered form field, which can be done by manually rendering the form field by field, or you can create a block with errors below the form such as:</p> <pre><code>&lt;div id="public_toggle_form-errors" class="form-error" style="display: none;"&gt;&lt;/div&gt;
</code></pre> <p>add some css to the form:</p> <pre><code>div.form-error {
    margin: 5px;
    -webkit-box-shadow: 0px 0px 5px 0px rgba(255,125,125,1);
    -moz-box-shadow: 0px 0px 5px 0px rgba(255,125,125,1);
    box-shadow: 0px 0px 5px 0px rgba(255,125,125,1);
}
</code></pre> <p>so it'll look like something went wrong, and then add to the handleFormError function code (the message lives in <code>jqXHR.responseJSON</code>, as your console dump shows):</p> <pre><code>function handleFormError(jqXHR, textStatus, errorThrown){
    ...
    $('#public_toggle_form-errors').text(jqXHR.responseJSON["public"]);
    $('#public_toggle_form-errors').show();
    ...
}
</code></pre> <p>I think you'll get the idea.</p>
javascript|python|django|django-forms|django-templates
1
697
4,782,028
How to create a variable containing input from a list in Python
<p>I have a list containing .las files of different lengths. I couldn't figure out how to create a variable containing all the list entries separated by a ";". </p> <p>Thanks for your help,</p> <p>Mauro</p>
<p>Well, I am not sure if I get you, but:</p> <pre><code>some_list = ['file.las', 'another_file.las', 'something.las']
e = ';'.join(some_list)
# e is now 'file.las;another_file.las;something.las'
</code></pre>
python
3
698
48,119,587
Regex Search program, how not to duplicate answers during iterate thru text? (Python3)
<p>I am working on the 'Regex Search' project from the book <em>Automate the Boring Stuff with Python</em>. I tried searching for an answer, but I failed to find a related thread for Python.</p> <p>The task is: <em>"Write a program that opens all .txt files in a folder and searches for any line that matches a user-supplied regular expression. The results should be printed to the screen."</em></p> <p>Below is the part of the code that I have a problem with:</p> <pre><code>import glob, os, re

os.chdir(r'C:\Users\PythonScripts')

for file in glob.glob("*.txt"):
    content = open(file)
    text = content.read()
    print(text)

for i in text:
    whatToFind = re.compile(r'panda|by|NOUN')
    finded = whatToFind.findall(text)
    print(finded)
</code></pre> <p>I would like to find these 3 words: panda|by|NOUN. After iterating through the text, I get output with answers repeated several times. I get the answer <em>'by'</em> twice, but it should appear only once. For example, for the text:</p> <blockquote> <p>'The ADJECTIVE panda walked to the NOUN and then VERB. A nearby NOUN was unaffected by these events.'</p> </blockquote> <p>I get: </p> <blockquote> <p>['panda', 'NOUN', 'by', 'NOUN', 'by']</p> </blockquote> <p>I should get only the first 4 strings. I tried to fix it but have no idea how. Can anyone tell me what I am doing wrong?</p>
<p>That's because you are missing the <em>word boundaries</em> in your regular expression pattern and <code>by</code> from the "nearby" word was also matched:</p> <pre><code>In [3]: import re In [4]: whatToFind = re.compile(r'panda|by|NOUN') In [5]: s = 'The ADJECTIVE panda walked to the NOUN and then VERB. A nearby NOUN was unaffected by these events.' In [6]: whatToFind.findall(s) # no word boundaries Out[6]: ['panda', 'NOUN', 'by', 'NOUN', 'by'] In [7]: whatToFind = re.compile(r'\b(panda|by|NOUN)\b') In [8]: whatToFind.findall(s) # word boundaries Out[8]: ['panda', 'NOUN', 'NOUN', 'by'] </code></pre> <hr> <p>Note that there is probably a better way to look for words in an English text - using a natural language processing toolkit (<a href="http://www.nltk.org/" rel="nofollow noreferrer"><code>nltk</code></a>) and its <code>word_tokenize()</code> function:</p> <pre><code>In [9]: from nltk import word_tokenize In [10]: desired_words = {'panda', 'by', 'NOUN', 'cookie'} In [11]: set(word_tokenize(s)) &amp; desired_words # note: "cookie" was not found Out[11]: {'NOUN', 'by', 'panda'} </code></pre>
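<p>Putting the word boundaries back into the book exercise, a minimal sketch (the sample lines come from the question; real code would read them from the .txt files) that compiles the pattern once and tests line by line, as the task asks:</p>

```python
import re

# Compiled once, with word boundaries so 'nearby' no longer matches 'by'.
PATTERN = re.compile(r'\b(panda|by|NOUN)\b')

def matching_lines(lines):
    """Return only the lines containing at least one whole target word."""
    return [line for line in lines if PATTERN.search(line)]

sample = [
    'The ADJECTIVE panda walked to the NOUN and then VERB.',
    'A nearby NOUN was unaffected by these events.',
    'Nothing to see here.',
]
for line in matching_lines(sample):
    print(line)
```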
python|regex|python-3.x
1
699
48,037,991
Create instances from list of classes
<p>How do I create instances of classes from a list of classes? I've looked at other SO answers but didn't understand them.</p> <p>I have a list of classes:</p> <pre><code>list_of_classes = [Class1, Class2]
</code></pre> <p>Now I want to create instances of those classes, where the variable name storing the class is the name of the class. I have tried:</p> <pre><code>for cls in list_of_classes:
    str(cls) = cls()
</code></pre> <p>but get the error: "SyntaxError: can't assign to function call". Which is of course obvious, but I don't know what else to do.</p> <p>I really want to be able to access the class by name later on. Let's say we store all the instances in a dict and that one of the classes is called ClassA; then I would like to be able to access the instance by dict['ClassA'] later on. Is that possible? Is there a better way?</p>
<p>You say that you want "the variable name storing the class [to be] the name of the class", but that's a very bad idea. Variable names are not data. The names are for programmers to use, so there's seldom a good reason to generate them using code.</p> <p>Instead, you should probably populate a list of instances, or if you are sure that you want to index by class name, use a dictionary mapping names to instances.</p> <p>I suggest something like:</p> <pre><code>list_of_instances = [cls() for cls in list_of_classes]
</code></pre> <p>Or this:</p> <pre><code>class_name_to_instance_mapping = {cls.__name__: cls() for cls in list_of_classes}
</code></pre> <p>One of the rare cases where it can sometimes make sense to automatically generate variables is when you're writing code to create or manipulate class objects themselves (e.g. producing methods automatically). This is somewhat easier and less fraught than creating global variables, since at least the programmatically produced names will be contained within the class namespace rather than polluting the global namespace.</p> <p>The <code>collections.namedtuple</code> class factory from the standard library, for example, creates <code>tuple</code> subclasses on demand, with special descriptors as attributes that allow the tuple's values to be accessed by name. Here's a very crude example of how you could do something vaguely similar yourself, using <code>getattr</code> and <code>setattr</code> to manipulate attributes on the fly:</p> <pre><code>def my_named_tuple(attribute_names):
    class Tup:
        def __init__(self, *args):
            if len(args) != len(attribute_names):
                raise ValueError("Wrong number of arguments")
            for name, value in zip(attribute_names, args):
                setattr(self, name, value)  # this programmatically sets attributes by name!
def __iter__(self): for name in attribute_names: yield getattr(self, name) # you can look up attributes by name too def __getitem__(self, index): name = attribute_names[index] if isinstance(index, slice): return tuple(getattr(self, n) for n in name) return getattr(self, name) return Tup </code></pre> <p>It works like this:</p> <pre><code>&gt;&gt;&gt; T = my_named_tuple(['foo', 'bar']) &gt;&gt;&gt; i = T(1, 2) &gt;&gt;&gt; i.foo 1 &gt;&gt;&gt; i.bar 2 </code></pre>
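<p>To make the dictionary suggestion concrete, a tiny self-contained sketch (the class names are invented for the example; substitute your own Class1/Class2). The instance can then be looked up by class name later on, exactly as the question asks:</p>

```python
# Hypothetical classes standing in for Class1/Class2 from the question.
class ClassA:
    def greet(self):
        return "hello from ClassA"

class ClassB:
    def greet(self):
        return "hello from ClassB"

list_of_classes = [ClassA, ClassB]

# One instance per class, keyed by the class's __name__ attribute.
instances = {cls.__name__: cls() for cls in list_of_classes}

print(instances["ClassA"].greet())  # hello from ClassA
```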
python-3.x
2