Dataset schema (column name, dtype, and value range or string-length range):
Unnamed: 0: int64, 0 to 378k
id: int64, 49.9k to 73.8M
title: string, lengths 15 to 150
question: string, lengths 37 to 64.2k
answer: string, lengths 37 to 44.1k
tags: string, lengths 5 to 106
score: int64, -10 to 5.87k
200
16,049,437
Add new items to some structured array in a dictionary-like way
<p>I want to extend the structured array object in numpy such that I can easily add new elements. </p> <p>For example, for a simple structured array </p> <pre><code>&gt;&gt;&gt; import numpy as np &gt;&gt;&gt; x=np.ndarray((2,),dtype={'names':['A','B'],'formats':['f8','f8']}) &gt;&gt;&gt; x['A']=[1,2] &gt;&gt;&gt; x['B']=[3,4] </code></pre> <p>I would like to easily add a new element <code>x['C']=[5,6]</code>, but then an error appears associated to the undefined name <code>'C'</code>.</p> <p>Just adding a new method to <code>np.ndarray</code> works:</p> <pre><code>import numpy as np class sndarray(np.ndarray): def column_stack(self,i,x): formats=['f8']*len(self.dtype.names) new=sndarray(shape=self.shape,dtype={'names':list(self.dtype.names)+[i],'formats':formats+['f8']}) for key in self.dtype.names: new[key]=self[key] new[i]=x return new </code></pre> <p>Then, </p> <pre><code>&gt;&gt;&gt; x=sndarray((2,),dtype={'names':['A','B'],'formats':['f8','f8']}) &gt;&gt;&gt; x['A']=[1,2] &gt;&gt;&gt; x['B']=[3,4] &gt;&gt;&gt; x=x.column_stack('C',[4,4]) &gt;&gt;&gt; x sndarray([(1.0, 3.0, 4.0), (2.0, 4.0, 4.0)], dtype=[('A', '&lt;f8'), ('B', '&lt;f8'), ('C', '&lt;f8')]) </code></pre> <p>Is there any way that the new element could be added in a dictionary-like way?, e.g</p> <pre><code>&gt;&gt;&gt; x['C']=[4,4] &gt;&gt;&gt; x sndarray([(1.0, 3.0, 4.0), (2.0, 4.0, 4.0)], dtype=[('A', '&lt;f8'), ('B', '&lt;f8'), ('C', '&lt;f8')]) </code></pre> <p>Update:</p> <p>By using <code>__setitem__</code> I am still one step away from the ideal solution because I don't know how: </p> <blockquote> <p>change the object referenced at self</p> </blockquote> <pre><code>import numpy as np class sdarray(np.ndarray): def __setitem__(self, i,x): if i in self.dtype.names: super(sdarray, self).__setitem__(i,x) else: formats=['f8']*len(self.dtype.names) new=sdarray(shape=self.shape,dtype={'names':list(self.dtype.names)+[i],'formats':formats+['f8']}) for key in self.dtype.names: new[key]=self[key] new[i]=x self.with_new_column=new </code></pre> <p>Then</p> <pre><code>&gt;&gt;&gt; x=sndarray((2,),dtype={'names':['A','B'],'formats':['f8','f8']}) &gt;&gt;&gt; x['A']=[1,2] &gt;&gt;&gt; x['B']=[3,4] &gt;&gt;&gt; x['C']=[4,4] &gt;&gt;&gt; x=x.with_new_column #extra uggly step! &gt;&gt;&gt; x sndarray([(1.0, 3.0, 4.0), (2.0, 4.0, 4.0)], dtype=[('A', '&lt;f8'), ('B', '&lt;f8'), ('C', '&lt;f8')]) </code></pre> <p><strong>Update 2</strong> After the right implementation in the selected answer, I figure out that the problem is already solved by <code>pandas</code> <code>DataFrame</code> object:</p> <pre><code>&gt;&gt;&gt; import numpy as np &gt;&gt;&gt; import pandas as pd &gt;&gt;&gt; x=np.ndarray((2,),dtype={'names':['A','B'],'formats':['f8','f8']}) &gt;&gt;&gt; x=pd.DataFrame(x) &gt;&gt;&gt; x['A']=[1,2] &gt;&gt;&gt; x['B']=[3,4] &gt;&gt;&gt; x['C']=[4,4] &gt;&gt;&gt; x A B C 0 1 3 4 1 2 4 4 &gt;&gt;&gt; </code></pre>
<p>Use <code>numpy.recarray</code>instead, in my <code>numpy 1.6.1</code> you get an extra method <code>field</code> that does not exist when you subclass from <code>numpy.ndarray</code>.</p> <p><a href="https://stackoverflow.com/questions/1201817/adding-a-field-to-a-structured-numpy-array">This question</a> or <a href="https://stackoverflow.com/questions/5288736/adding-a-field-to-a-structured-numpy-array-2">this one (if using numpy 1.3)</a> also discuss adding a field to a <code>structured array</code>. From there you will see that using:</p> <pre><code>import numpy.lib.recfunctions as rf rf.append_fields( ... ) </code></pre> <p>can greatly simplify your life. At the first glance I thought this function would append to the original array, but it creates a new instance instead. The <code>class</code>shown below is using your solution for <code>__setitem__()</code>, which is working very well.</p> <p>The issue you found that led you to the ugly solution was <a href="https://stackoverflow.com/questions/5149269/subclassing-numpy-ndarray-problem">reported in another question</a>. The problem is that when you do <code>self=...</code> you are just storing the <code>new</code>object in a variable, but the entity <code>sdarray</code> is not being updated. Maybe it is possible to directly destroy and reconstruct the <code>class</code> from inside its method, but based on <a href="https://stackoverflow.com/questions/5149269/subclassing-numpy-ndarray-problem">that</a> discussion the following <code>class</code> can be created, in which <code>ndarray</code> is not subclassed, but stored and called internally. Some other methods were added to make it work and look like you are working directly with <code>ndarray</code>. I did not test it in detail.</p> <p>For automatic resizing a <a href="https://stackoverflow.com/questions/6405342/automatically-resizing-numpy-recarray">good solution has been presented here</a>. You can also incorporate in your code.</p> <pre><code>import numpy as np class sdarray(object): def __init__(self, *args, **kwargs): self.recarray = np.recarray( *args, **kwargs) def __getattr__(self,attr): if hasattr( self.recarray, attr ): return getattr( self.recarray, attr ) else: return getattr( self, attr ) def __len__(self): return self.recarray.__len__() def __add__(self,other): return self.recarray.__add__(other) def __sub__(self,other): return self.recarray.__sub__(other) def __mul__(self,other): return self.recarray.__mul__(other) def __rmul__(self,other): return self.recarray.__rmul__(other) def __getitem__(self,i): return self.recarray.__getitem__(i) def __str__(self): return self.recarray.__str__() def __repr__(self): return self.recarray.__repr__() def __setitem__(self, i, x): keys = [] formats = [] if i in self.dtype.names: self.recarray.__setitem__(i,x) else: for name, t in self.dtype.fields.iteritems(): keys.append(name) formats.append(t[0]) keys.append( i ) formats.append( formats[-1] ) new = np.recarray( shape = self.shape, dtype = {'names' : keys, 'formats': formats} ) for k in keys[:-1]: new[k] = self[k] new[i] = x self.recarray = new </code></pre>
python|numpy
4
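The answer above points to numpy.lib.recfunctions.append_fields but leaves its arguments out. As a rough, self-contained illustration of that helper (values made up; usemask=False requests a plain structured array rather than a masked one):

<pre><code>import numpy as np
import numpy.lib.recfunctions as rf

# structured array as in the question
x = np.zeros((2,), dtype={'names': ['A', 'B'], 'formats': ['f8', 'f8']})
x['A'] = [1, 2]
x['B'] = [3, 4]

# append_fields builds a new array with the extra column appended
y = rf.append_fields(x, 'C', np.array([5.0, 6.0]), usemask=False)
print(y.dtype.names)   # ('A', 'B', 'C')
print(y)
</code></pre>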
201
57,635,128
Split DataFrame into Chunks according to time or index differences
<p>i'm trying to separate a DataFrame into smaller DataFrames according to the Index value or Time. As you can see in the example below, the time resolution of my data is 5 min, and i would like to create a new dataframe when the time difference between each row is greater than 5 min, or when the Index grows more than 1 (which is the same criteria, so any would work).</p> <p>Here is an example of my data: </p> <pre><code>Index Time Data 0 6:00 A 1 6:05 D 2 6:10 B 58 10:50 C 59 10:55 A 60 11:00 D 92 13:40 A 93 13:45 B </code></pre> <p>And i would like to have the following:</p> <p>Split 1:</p> <pre><code>Index Time Data 0 6:00 A 1 6:05 D 2 6:10 B </code></pre> <p>Split 2:</p> <pre><code>Index Time Data 58 10:50 C 59 10:55 A 60 11:00 D </code></pre> <p>Split 3:</p> <pre><code>Index Time Data 92 13:40 A 93 13:45 B </code></pre>
<p>You have to create a helper series like:</p> <pre><code>s=df.index.to_series().diff().fillna(1).ne(1).cumsum() print(s) Index 0 0 1 0 2 0 58 1 59 1 60 1 92 2 93 2 </code></pre> <p>Then you can store each group in a dictionary and call each key of the dict to refer the df:</p> <pre><code>d={f'df_{i}':g for i,g in df.groupby(s)} </code></pre> <hr> <pre><code>print(d['df_0']) print('\n') print(d['df_1']) print('\n') print(d['df_2']) </code></pre> <hr> <pre><code> Time Data Index 0 6:00 A 1 6:05 D 2 6:10 B Time Data Index 58 10:50 C 59 10:55 A 60 11:00 D Time Data Index 92 13:40 A 93 13:45 B </code></pre> <p>Another way using <code>more_itertools</code>:</p> <pre><code>from more_itertools import consecutive_groups indices=[[*i] for i in consecutive_groups(df.index)] #[[0, 1, 2], [58, 59, 60], [92, 93]] d2={f'df_{e}':df.loc[i] for e,i in enumerate(indices)} </code></pre>
python|pandas
2
202
57,391,946
How to do a scalar product along the right axes with numpy and vectorize the process
<p>I have numpy array 'test' of dimension (100, 100, 16, 16) which gives me a different 16x16 array for points on a 100x100 grid. I also have some eigenvalues and vectors where vals has the dimension (100, 100, 16) and vecs (100, 100, 16, 16) where vecs[x, y, :, i] would be the ith eigenvector of the matrix at the point (x, y) corresponding to the ith eigenvalue vals[x, y, i].</p> <p>Now I want to take the first eigenvector of the array at ALL points on the grid, do a matrix product with the test matrix and then do a scalar product of the resulting vector with all the other eigenvectors of the array at all points on the grid and sum them. The resulting array should have the dimension (100, 100). After this I would like to take the 2nd eigenvector of the array, matrix multiply it with test and then take the scalar product of the result with all the eigenvectors that is not the 2nd and so on so that in the end I have 16 (100, 100) or rather a (100, 100, 16) array. I only succeded sofar with a lot of for loops which I would like to avoid, but using tensordot gives me the wrong dimension and I don't see how to pick the axis which is vectorized along for the np.dot function. I heard that einsum might be suitable to this task, but everything that doesn't rely on the python loops is fine by me. </p> <pre><code>import numpy as np from numpy import linalg as la test = np.arange(16*16*100*100).reshape((100, 100, 16, 16)) vals, vecs = la.eig(test + 1) np.tensordot(vecs, test, axes=[2, 3]).shape &gt;&gt;&gt; (10, 10, 16, 10, 10, 16) </code></pre> <p>EDIT: Ok, so I used np.einsum to get a correct intermediate result.</p> <pre><code>np.einsum('ijkl, ijkm -&gt; ijlm', vecs, test) </code></pre> <p>But in the next step I want to do the scalarproduct only with all the other entries of vec. Can I implement maybe some inverse Kronecker delta in this einsum formalism? Or should I switch back to the usual numpy now? </p>
<p>OK, I played around with np.einsum and found a way to do what is described above. A nice feature of einsum is that if you repeat doubly occurring indices in the 'output' (to the right of the '->') you can have element-wise multiplication along some axes and contraction along others (something that you don't have in handwritten tensor algebra notation).</p> <pre><code>result = np.einsum('ijkl, ijlm -&gt; ijkm', np.einsum('ijkl, ijkm -&gt; ijlm', vecs, test), vecs) </code></pre> <p>This nearly does the trick. Now only the diagonal terms have to be taken out. We could do this by just subtracting the diagonal terms like this:</p> <pre><code>result = result - result * np.eye(np.shape(test)[-1])[None, None, ...] </code></pre>
python|python-3.x|numpy|linear-algebra
0
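A self-contained, scaled-down version of the einsum recipe from the answer above. The grid and matrix sizes are made up, and the final sum over the remaining eigenvector axis is one reading of the (100, 100, 16) result the question asks for, not something stated in the answer:

<pre><code>import numpy as np

# small stand-ins for the (100, 100, 16, 16) arrays in the question
rng = np.random.default_rng(0)
nx, ny, n = 4, 5, 3
test = rng.standard_normal((nx, ny, n, n))
vals, vecs = np.linalg.eig(test)   # vecs[x, y, :, i] is the i-th eigenvector (may be complex here)

# each eigenvector (last axis of vecs) applied to test from the left
inter = np.einsum('ijkl, ijkm -&gt; ijlm', vecs, test)     # (nx, ny, n, n)

# scalar products of those vectors with every eigenvector
result = np.einsum('ijkl, ijlm -&gt; ijkm', inter, vecs)   # (nx, ny, n, n)

# drop the diagonal terms, as in the answer
result = result - result * np.eye(n)[None, None, :, :]

# summing over the second eigenvector axis gives one value per eigenvector and grid point
summed = result.sum(axis=-1)                            # (nx, ny, n)
print(summed.shape)
</code></pre>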
203
57,318,726
Append a word after matching a string from looped list in data frame with external lists to new column in same dataframe
<p>I want to loop over a pandas data frame where each row has a list of strings. But for each row, I want to cross-reference it with another set of lists with predefined strings. If the predefined string within the external lists matches with the string in the row, I want to append the matching string to a new column with the same index as the looped over row. If no string matches then a generic string must be appended to the column with the same index as the looped over row. Once all the rows(1207 to be exact) have been looped over, the column with the appended words must match the number of rows.</p> <pre><code>#these are the predefined lists traffic=['stationary','congest','traffic','slow','heavi','bumper','flow','spectate','emergm','jam','visibl'] #predefined list of strings accident=['outsur','accid','avoid','crash','overturn','massiv','fatalmov','roll'] #predefined list of strings crime=['shootout','lawnessness','robbery','fire','n1shoot','rob','mug','killed','kill','scene','lawness'] #predefined list of strings #this is the code I have already tried for x in test['text']: for y in x: if y in traffic: test['type1']=('traffic') break if y in crime: test['type1']=('crime') break if y in accident: test['type1']=('accident') break else: test['type1']=('ignore') break Below is a sample of my data frame Dataframe name is test [original dataframe][1] [1]: https://i.stack.imgur.com/aZML4.png from what I have tried this is the output [Output of code in dataframe][2] [2]: https://i.stack.imgur.com/iwj1g.png </code></pre>
<p>There might be a simpler way for you to run such a comparison. It was not clear in which order the lists should be compared, but below is one way:</p> <p><em>PS: Created sample data</em>:</p> <pre class="lang-py prettyprint-override"><code>x =[ [['report','shootout','midrand','n1','north','slow']], [['jhbtraffic','lioght','out','citi','deep']], [['jhbtraffic','light','out','booysen','booysen']] ] df = pd.DataFrame(x, columns=['text']) df Out[2]: text 0 [report, shootout, midrand, n1, north, slow] 1 [jhbtraffic, lioght, out, citi, deep] 2 [jhbtraffic, light, out, booysen, booysen] </code></pre> <p>Actual solution:</p> <pre class="lang-py prettyprint-override"><code>### get matched strings per row matched = df['text'].apply(lambda x: [a for a in x for i in crime+accident+traffic if i==a ]) ### merge to the original dataset df.join(pd.DataFrame(matched.tolist(), index= df.index)).fillna('ignored') Out[1]: text 0 1 0 [report, shootout, midrand, n1, north, slow] shootout slow 1 [jhbtraffic, lioght, out, citi, deep] ignored ignored 2 [jhbtraffic, light, out, booysen, booysen] ignored ignored </code></pre>
python|pandas|list|loops
1
204
57,474,737
Selecting rows with pandas.loc using multiple boolean filters in sequence
<p>I have the following DataFrame:</p> <pre><code>import pandas as pd stuff = [ {"num": 4, "id": None}, {"num": 3, "id": "stuff"}, {"num": 6, "id": None}, {"num": 8, "id": "other_stuff"}, ] df = pd.DataFrame(stuff) </code></pre> <p>I need to select rows where "num" is higher than a given number but only if "id" is not None:</p> <p>This doesn't have any effect:</p> <pre><code>df = df.loc[df["num"] &gt;= 5 &amp; ~pd.isnull(df["id"])] </code></pre> <p>What I need is something like this (pseudocode):</p> <pre><code>df = df.loc[ if ~pd.isnull(df["id"]): if df["num"] &gt;= 5: select row ] </code></pre> <p>The expected result:</p> <pre><code>&gt;&gt;&gt; df id num 1 stuff 3 2 None 6 3 other_stuff 8 </code></pre> <p>Any help appreciated.</p>
<p>Add parentheses (because of operator precedence) and use <code>|</code> for bitwise <code>OR</code> instead of <code>&amp;</code> for bitwise <code>AND</code>; also, instead of inverting <code>pd.isnull</code>, it is possible to use <code>notna</code>, or <code>notnull</code> for older pandas versions:</p> <pre><code>df = df[(df["num"] &gt;= 5) | (df["id"].notna())] print (df) num id 1 3 stuff 2 6 None 3 8 other_stuff </code></pre>
python|pandas
2
205
57,588,604
Multiplying Pandas series rows containing NaN
<p>given this Dataframe :</p> <pre><code>import pandas as pd import numpy as np data = {'column1': [True,False, False, True, True], 'column2' : [np.nan,0.21, np.nan, 0.2222, np.nan], 'column3': [1000, 0, 0, 0, 0 ]} df = pd.DataFrame.from_dict(data) print(df) </code></pre> <hr> <pre><code> column1 column2 column3 0 True NaN 1000 1 False 0.2100 0 2 False NaN 0 3 True 0.2222 0 4 True NaN 0 </code></pre> <p>How can I multiply the result from <strong>column2</strong> with the previous value of <strong>column3</strong> when the <strong>column2</strong> row isn't a NaN otherwise just return the previous value of <strong>column3</strong> ?</p> <p>The results should be something like this :</p> <pre><code> column1 column2 column3 0 True NaN 1000 1 False 0.2100 210 2 False NaN 210 3 True 0.2222 46.662 4 True NaN 46.662 </code></pre> <p>I've been browsing through similar questions but I just can't get my head around it ..</p> <p>I'd appreciate your input :)</p>
<p>You can give this a try:</p> <pre><code>#replace 0 with nan and create a copy of the df m=df.assign(column3=df.column3.replace(0,np.nan)) #ffill on axis 1 where column2 is not null , and filter the last col then cumprod final=(df.assign(column3=m.mask(m.column2.notna(),m.ffill(1)).iloc[:,-1].cumprod().ffill())) </code></pre> <hr> <pre><code> column1 column2 column3 0 True NaN 1000.000 1 False 0.2100 210.000 2 False NaN 210.000 3 True 0.2222 46.662 4 True NaN 46.662 </code></pre>
python|python-3.x|pandas
2
206
43,745,792
Apply function across all columns using the column name - Python, Pandas
<h2>Basically:</h2> <p>Is there a way to apply a function that uses the column name of a dataframe in Pandas? Like this:</p> <pre><code>df['label'] = df.apply(lambda x: '_'.join(labels_dict[column_name][x]), axis=1) </code></pre> <p>Where column name is the column that the <code>apply</code> is 'processing'.</p> <hr> <h2>Details:</h2> <p>I'd like to create a label for each row of a dataframe, based on a dictionary.</p> <p>Let's take the dataframe <code>df</code>:</p> <pre><code>df = pd.DataFrame({ 'Application': ['Compressors', 'Fans', 'Fans', 'Material Handling'], 'HP': ['0.25', '0.25', '3.0', '15.0'], 'Sector': ['Commercial', 'Industrial', 'Commercial', 'Residential']}, index=[0, 1, 2, 3]) </code></pre> <p>After I apply the label:</p> <pre><code>In [139]: df['label'] = df.apply(lambda x: '_'.join(x), axis=1) In [140]: df Out[140]: Application HP Sector label 0 Compressors 0.25 Commercial Compressors_0.25_Commercial 1 Fans 0.25 Industrial Fans_0.25_Industrial 2 Fans 3.0 Commercial Fans_3.0_Commercial 3 Material Handling 15.0 Residential Material Handling_15.0_Residential </code></pre> <p>But the label is too long, especially when I consider the full dataframe, which contains a lot more columns. What I want is to use a dictionary to shorten the fields that come from the columns (I pasted the code for the dictionary at the end of the question). </p> <p>I can do that for one field:</p> <pre><code>In [145]: df['application_label'] = df['Application'].apply( lambda x: labels_dict['Application'][x]) In [146]: df Out[146]: Application HP Sector application_label 0 Compressors 0.25 Commercial cmp 1 Fans 0.25 Industrial fan 2 Fans 3.0 Commercial fan 3 Material Handling 15.0 Residential mat </code></pre> <p>But I want to do it for all the fields, like I did in snippet #2. So I'd like to do something like:</p> <pre><code>df['label'] = df.apply(lambda x: '_'.join(labels_dict[column_name][x]), axis=1) </code></pre> <p>Where column name is the column of <code>df</code> to which the function is being applied. Is there a way to access that information?</p> <p>Thank you for your help!</p> <hr> <p>I defined the dictionary as:</p> <pre><code>In [141]: labels_dict Out[141]: {u'Application': {u'Compressors': u'cmp', u'Fans': u'fan', u'Material Handling': u'mat', u'Other/General': u'oth', u'Pumps': u'pum'}, u'ECG': {u'Polyphase': u'pol', u'Single-Phase (High LRT)': u'sph', u'Single-Phase (Low LRT)': u'spl', u'Single-Phase (Med LRT)': u'spm'}, u'Efficiency Level': {u'EL0': u'el0', u'EL1': u'el1', u'EL2': u'el2', u'EL3': u'el3', u'EL4': u'el4'}, u'HP': {0.25: 1.0, 0.33: 2.0, 0.5: 3.0, 0.75: 4.0, 1.0: 5.0, 1.5: 6.0, 2.0: 7.0, 3.0: 8.0, 10.0: 9.0, 15.0: 10.0}, u'Sector': {u'Commercial': u'com', u'Industrial': u'ind', u'Residential': u'res'}} </code></pre>
<p>I worked out one way to do it, but it seems clunky. I'm hoping there's something more elegant out there.</p> <pre><code>df['label'] = pd.DataFrame([df[column_name].apply(lambda x: labels_dict[column_name][x]) for column_name in df.columns]).apply('_'.join) </code></pre>
python|pandas
1
207
43,538,399
Deleting time series rows that have too many consecutive equal values
<p>I have a time series, namely a pandas.DataFrame with one column (containing the values) and the index (containing the timestamps). There are many values with 0 and I want to check for consecutive 0s. If there are too many 0 one after the other, I want to delete the 0s that are too much.</p> <p>For example, if I allow 0 only for 5 seconds, then I want all rows that represent time spans more than 5 seconds of 0s to be reduced to the first 5 seconds of 0s:</p> <pre><code> value time 12:01:01.001 1 12:01:01.002 0 12:01:01.004 6 12:01:01.010 4 12:01:03.010 0 12:01:05.010 0 12:01:08.010 0 12:01:10.010 0 12:01:10.510 0 12:01:11.101 3 12:01:12.101 3 12:01:15.101 0 </code></pre> <p>should become</p> <pre><code> value time 12:01:01.001 1 12:01:01.002 0 12:01:01.004 6 12:01:01.010 4 12:01:03.010 0 12:01:05.010 0 12:01:08.010 0 12:01:11.101 3 12:01:12.101 3 12:01:15.101 0 </code></pre> <h3>Possible Solution</h3> <p>A possible solution would loop through the DataFrame having two variables: The first remembering when the first 0 of after a non-0 and the second iterating further until the time (e.g. 5sec) is exceeded. Then the first variables at the second variables position and the second moves until it reaches a non-0. All the zeros between the first and the second variable are deleted.</p> <p>This is probably very efficient in C, but in Python, using a library is probably faster. <strong>How do I do this elegantly with a Python library?</strong></p>
<p>Here's a solution using pandas groupby. Updating answer to show how to apply filter based on one column of dataframe.</p> <p><strong>IMPORT DATA</strong></p> <pre><code>from io import StringIO import pandas as pd import numpy as np inp_str = u""" time value 12:01:01.001 1 12:01:01.002 0 12:01:01.004 6 12:01:01.010 4 12:01:03.010 0 12:01:05.010 0 12:01:08.010 0 12:01:10.010 0 12:01:10.510 0 12:01:11.101 3 12:01:12.101 3 12:01:15.101 0 """ frame = pd.read_csv(StringIO(inp_str), sep = " ").set_index('time') # make sure we have a datetime index frame.index = pd.to_datetime(frame.index) # EDIT: ADD ANOTHER COLUM frame = frame.assign(other = range(len(frame))) # EDIT: REPLACE ts with the relevant column ts = frame['value'] # Everything else remain unchanged! # Group by consecutive values `ts != ts.shift()` out = ts.groupby([(ts != ts.shift()).cumsum(), ts]) # for all sequences of zeros, identify where more than 5 seconds passed from beginning of sequence def seconds_elapsed(ts): return ts.index.map(lambda x: (x - ts.index[0]).total_seconds()) to_drop = [group.index[np.where(map(lambda x: x&gt;5, seconds_elapsed(group)))] for key, group in out if key[1] == 0] # Collapse everything to flat list of dates to_drop = reduce(lambda x, y: x.union(y), to_drop) # Remove from dataframe frame.drop(to_drop) </code></pre> <p>In order to apply multiple filters, there can be two situations:</p> <ol> <li><p>Apply filters based on values in the original dataframe: For each filtering column, apply procedure above without overwriting the original dataframe, but always creating a new one. To have the final result, do an inner joins of the dataframes filtered by one column at the time</p></li> <li><p>Apply filters consecutively: use the approach above for each filtering column, every time overwriting the original dataframe (order matters!)</p></li> </ol>
python|pandas
2
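For reference, the run-grouping idea from the answer above written as a short sketch against current pandas and Python 3 (the sample values and the 5 second cutoff come from the question; this is not the answer's exact code):

<pre><code>import pandas as pd

times = pd.to_datetime([
    "12:01:01.001", "12:01:01.002", "12:01:01.004", "12:01:01.010",
    "12:01:03.010", "12:01:05.010", "12:01:08.010", "12:01:10.010",
    "12:01:10.510", "12:01:11.101", "12:01:12.101", "12:01:15.101",
])
frame = pd.DataFrame({"value": [1, 0, 6, 4, 0, 0, 0, 0, 0, 3, 3, 0]}, index=times)

ts = frame["value"]
run_id = (ts != ts.shift()).cumsum()   # label runs of identical consecutive values

# seconds elapsed since the start of the run each row belongs to
start = frame.index.to_series().groupby(run_id).transform("first")
elapsed = (frame.index.to_series() - start).dt.total_seconds()

# drop rows that sit in a run of zeros and lie more than 5 s after that run began
trimmed = frame[~((ts == 0) &amp; (elapsed &gt; 5))]
print(trimmed)
</code></pre>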
208
43,720,973
Subtracting dataframes with unequal numbers of rows
<p>I have two dataframes like this </p> <pre><code>import pandas as pd import numpy as np np.random.seed(0) df1 = pd.DataFrame(np.random.randint(10, size=(5, 4)), index=list('ABCDE'), columns=list('abcd')) df2 = pd.DataFrame(np.random.randint(10, size=(2, 4)), index=list('CE'), columns=list('abcd')) a b c d A 5 0 3 3 B 7 9 3 5 C 2 4 7 6 D 8 8 1 6 E 7 7 8 1 a b c d C 5 9 8 9 E 4 3 0 3 </code></pre> <p>The index of <code>df2</code> is always a subset of the index of <code>df1</code> and the column names are identical.</p> <p>I want to create a third dataframe <code>df3 = df1 - df2</code>. If one does that, one obtains</p> <pre><code> a b c d A NaN NaN NaN NaN B NaN NaN NaN NaN C -3.0 -5.0 -1.0 -3.0 D NaN NaN NaN NaN E 3.0 4.0 8.0 -2.0 </code></pre> <p>I don't want the <code>NAs</code> in the ouput but the respective values of <code>df1</code>. Is there a smart way of using e.g. <code>fillna</code> with the values of <code>df1</code> in the rows not contained in <code>df2</code>?</p> <p>A workaround would be to do the subtract only the required rows like:</p> <pre><code>sub_ind = df2.index df3 = df1.copy() df3.loc[sub_ind, :] = df1.loc[sub_ind, :] - df2.loc[sub_ind, :] </code></pre> <p>which gives me the desired output</p> <pre><code> a b c d A 5 0 3 3 B 7 9 3 5 C -3 -5 -1 -3 D 8 8 1 6 E 3 4 8 -2 </code></pre> <p>but maybe there is a more straightforward way of achieving this?</p>
<p>I think this is what you want:</p> <pre><code>(df1-df2).fillna(df1) Out[40]: a b c d A 5.0 0.0 3.0 3.0 B 7.0 9.0 3.0 5.0 C -3.0 -5.0 -1.0 -3.0 D 8.0 8.0 1.0 6.0 E 3.0 4.0 8.0 -2.0 </code></pre> <p>Just subtract the dataframes like you would normally, but "package" the result using parentheses and run the <code>pandas.DataFrame.fillna</code> method on the result. Or, a bit more verbosely:</p> <pre><code>diff = df1-df2 diff.fillna(df1, inplace=True) </code></pre>
python|pandas|dataframe
4
209
43,855,103
Calling a basic LSTM cell within a custom Tensorflow cell
<p>I'm trying to implement the MATCH LSTM from this paper: <a href="https://arxiv.org/pdf/1608.07905.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1608.07905.pdf</a></p> <p>I'm using Tensorflow. One part of the architecture is an RNN that uses the input and the previous state to compute an attention vector which it applies to a context before concatenating the result with the inputs and sending them into an LSTM. To build the first part of this RNN, I wrote a custom cell for Tensorflow to call. But I'm not sure how to send the results into an LSTM. Is it possible to call the basic LSTM cell within the custom cell I'm writing? I tried this a few different ways but kept getting the error "module' object has no attribute 'rnn_cell'" at the line where the LSTM cell is called. Any help would be much appreciated!</p> <p>EDIT to add code:</p> <p>import numpy as np import tensorflow as tf</p> <p>class MatchLSTMCell(tf.contrib.rnn.RNNCell):</p> <pre><code>def __init__(self, state_size, question_tensor, encoded_questions, batch_size): self._state_size = state_size self.question_tensor = question_tensor self.encoded_questions = encoded_questions self.batch_size = batch_size @property def state_size(self): return self._state_size @property def output_size(self): return self._state_size def __call__(self, inputs, state, scope=None): scope = scope or type(self).__name__ with tf.variable_scope(scope): W_p = tf.get_variable("W_p", dtype=tf.float64, shape=[self.state_size, self.state_size], initializer=tf.contrib.layers.xavier_initializer()) W_r = tf.get_variable("W_r", dtype=tf.float64, shape=[self.state_size, self.state_size], initializer=tf.contrib.layers.xavier_initializer()) b_p = tf.get_variable("b_p", dtype=tf.float64, shape=[self.state_size]) w = tf.get_variable("w", dtype=tf.float64, shape=[1,self.state_size]) b = tf.get_variable("b", dtype=tf.float64, shape=[]) #print 'question tensor', np.shape(self.question_tensor) #print 'inputs', np.shape(inputs) #print 'insides', np.shape(tf.matmul(inputs, W_p) + tf.matmul(state, W_r) + b_p) G = tf.nn.tanh( tf.transpose(tf.transpose(self.question_tensor, perm=[1,0,2]) + (tf.matmul(inputs, W_p) + tf.matmul(state, W_r) + b_p), perm=[1,0,2]) ) #print 'big G', np.shape(G) attention_list = [] for i in range(self.batch_size): attention_matrix = tf.matmul(G[i,:,:], tf.transpose(w)) attention_list.append(attention_matrix) attention_scores = tf.stack(attention_list) a = tf.nn.softmax(attention_scores + b) a = tf.reshape(a, [self.batch_size, -1]) #print 'a shape is', np.shape(a) weighted_question_list = [] for i in range(self.batch_size): attention_vector = tf.matmul(tf.reshape(a[i], [1,-1]), self.encoded_questions[i]) weighted_question_list.append(attention_vector) weighted_questions = tf.stack(weighted_question_list) weighted_questions = tf.reshape(weighted_questions, [32, -1]) #print'weighted questions', np.shape(weighted_questions) z = tf.concat([inputs, weighted_questions], 1) lstm_cell = tf.nn.rnn_cell.LSTMCell(self.state_size) output, new_state = lstm_cell.__call__(z, state) return output, new_state </code></pre>
<p>I'm also trying to reimplement Match_LSTM for Squad for experiment. I use <a href="https://github.com/MurtyShikhar/Question-Answering" rel="nofollow noreferrer">MurtyShikhar's</a> as reference. It works! However, he had to customize AttentionWrapper and use existed BasicLSTM cell.</p> <p>I also try to create a Match_LSTM_cell by putting z and state as (inputs,state) pair in <a href="https://github.com/tensorflow/tensorflow/blob/927f811b0303e51126531c135f8b093383de2d6d/tensorflow/python/ops/rnn_cell_impl.py#L374" rel="nofollow noreferrer">Basic_LSTM</a>:</p> <pre><code> def __call__(self, inputs,state): #c is not a output. c somehow is a "memory keeper". #Necessary to update and pass new_c through LSTM c,h=state #...Calculate your z #...inputs will be each tokens in context(passage) respectively #...Calculate alpha_Q z=tf.concat([inputs,alpha_Q],axis=1) ########This part is reimplement of Basic_LSTM with vs.variable_scope("LSTM_core"): sigmoid=math_ops.sigmoid concat=_linear([z,h],dimension*4,bias=True) i,j,f,o=array_ops.split(concat,num_or_size_splits=4,axis=1) new_c=(c*sigmoid(f+self._forget_bias)+sigmoid(i)*self._activation(j)) new_h = self._activation(new_c) * sigmoid(o) new_state=(new_c,new_h) return new_h,new_state </code></pre>
tensorflow
1
210
43,583,572
How can I rename strings of indices?
<p>I am looking forward to rename the indices names 'Juan Gonzalez' to 'Jason', 'Jorge Sanchez' to 'George' and 'Miguel Sanz' to 'Michael'  </p> <pre><code>                            age height(cm) weight(kg) People Juan Gonzalez               22     181    60 Jorge Sanchez          34     190     84 Miguel Sanz 50 166 59 </code></pre> <p>I thought it worked like when renaming columns:</p> <pre><code>df.rename(columns={,,}, inplace=True) </code></pre> <p>However when I try </p> <pre><code>df.rename(index={'Juan Gonzalez':'Jason','Jorge Sanchez':'George','Miguel Sanz':'Michael'}, inplace=True) </code></pre> <p>It doesn't works, it returns the same dataframe with same indices names</p>
<p>It seems there are some whitespaces in index values.</p> <p>For remove it use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.strip.html" rel="nofollow noreferrer"><code>strip</code></a>:</p> <pre><code>df.index = df.index.str.strip() </code></pre> <p>Or add parameter <code>skipinitialspace=True</code> to <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html" rel="nofollow noreferrer"><code>read_csv</code></a>.</p> <p>Samples:</p> <pre><code>import pandas as pd from pandas.compat import StringIO temp=u"""People,age height(cm),weight(kg) Juan Gonzalez,22,181,60 Jorge Sanchez,34,190,84 Miguel Sanz,50,166,59""" #after testing replace 'StringIO(temp)' to 'filename.csv' df = pd.read_csv(StringIO(temp), skipinitialspace=True) print (df) People age height(cm) weight(kg) Juan Gonzalez 22 181 60 Jorge Sanchez 34 190 84 Miguel Sanz 50 166 59 print (df.index) Index(['Juan Gonzalez', 'Jorge Sanchez', 'Miguel Sanz'], dtype='object') d = {'Juan Gonzalez':'Jason','Jorge Sanchez':'George','Miguel Sanz':'Michael'} df.rename(index=d, inplace=True) print (df) People age height(cm) weight(kg) Jason 22 181 60 George 34 190 84 Michael 50 166 59 </code></pre> <hr> <pre><code>import pandas as pd from pandas.compat import StringIO temp=u"""People,age height(cm),weight(kg) Juan Gonzalez,22,181,60 Jorge Sanchez,34,190,84 Miguel Sanz,50,166,59""" #after testing replace 'StringIO(temp)' to 'filename.csv' df = pd.read_csv(StringIO(temp)) print (df) People age height(cm) weight(kg) Juan Gonzalez 22 181 60 Jorge Sanchez 34 190 84 Miguel Sanz 50 166 59 print (df.index) Index([' Juan Gonzalez', ' Jorge Sanchez', 'Miguel Sanz'], dtype='object') df.index = df.index.str.strip() print (df.index) Index(['Juan Gonzalez', 'Jorge Sanchez', 'Miguel Sanz'], dtype='object') d = {'Juan Gonzalez':'Jason','Jorge Sanchez':'George','Miguel Sanz':'Michael'} df.rename(index=d, inplace=True) print (df) People age height(cm) weight(kg) Jason 22 181 60 George 34 190 84 Michael 50 166 59 </code></pre>
python|pandas
2
211
73,043,768
Find first row after a specific row with higher value in a column in pandas
<p>I have a pandas dataframe like this:</p> <pre class="lang-py prettyprint-override"><code> first second third 0 2 2 False 1 3 1 True 2 1 4 False 3 0 6 False 4 5 7 True 5 4 2 False 6 3 4 False 7 3 6 True </code></pre> <p>and it could be created with the code:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd df = pd.DataFrame( { 'first': [2, 3, 1, 0, 5, 4, 3, 3], 'second': [2, 1, 4, 6, 7, 2, 4, 6], 'third': [False, True, False, False, True, False, False, True] } ) </code></pre> <p>For any row with a <code>True</code> value in the third column, I want to find the first row in the next rows which has a value in the second column greater than the value in the first column.</p> <p>So the output should be:</p> <pre class="lang-py prettyprint-override"><code> first second third 2 1 4 False 6 3 4 False </code></pre> <p>And also it is my priority not to use any for-loop.</p> <p>Have you any idea about this?</p>
<p>You can try</p> <pre class="lang-py prettyprint-override"><code>m = df['third'].cumsum() out = (df[m.gt(0) &amp; (~df['third'])] # filter out heading False row and the middle True row .groupby(m, as_index=False) # select the first row that value in the second column greater than in the first column .apply(lambda g: g[g['second'].gt(g['first'])].iloc[:1])) </code></pre> <pre><code>print(out) first second third 0 1 4 False 1 3 4 False </code></pre>
python|pandas|dataframe
1
212
73,137,976
pandas dataframe replace multiple substring of column
<p>I have the code below:</p> <pre><code>import pandas as pd df = pd.DataFrame({'A': ['$5,756', '3434', '$45', '1,344']}) pattern = ','.join(['$', ',']) df['A'] = df['A'].str.replace('$|,', '', regex=True) print(df['A']) </code></pre> <p>I am trying to remove every occurrence of '$' or ',', so I am trying to replace them with an empty string.</p> <p>But it's replacing only ','.</p> <p>The output I am getting:</p> <pre><code>0 $5756 1 3434 2 $45 3 1344 </code></pre> <p>It should be:</p> <pre><code>0 5756 1 3434 2 45 3 1344 </code></pre> <p>What am I doing wrong?</p> <p>Any help appreciated</p> <p>Thanks</p>
<p>Use:</p> <pre><code>import pandas as pd df = pd.DataFrame({'A': ['$5,756', '3434', '$45', '1,344']}) df['A'] = df['A'].str.replace('[$,]', '', regex=True) print(df) </code></pre> <p><strong>Output</strong></p> <pre><code> A 0 5756 1 3434 2 45 3 1344 </code></pre> <p>The problem is that the character <code>$</code> has a special meaning in regular expressions. From the <a href="https://docs.python.org/3/library/re.html" rel="nofollow noreferrer">documentation</a> (emphasis mine):</p> <blockquote> <p><strong>$</strong><br /> Matches the end of the string or just before the newline at the end of the string, and in MULTILINE mode also matches before a newline. foo matches both ‘foo’ and ‘foobar’, while the regular expression foo$ matches only ‘foo’. More interestingly, searching for foo.$ in 'foo1\nfoo2\n' matches ‘foo2’ normally, but ‘foo1’ in MULTILINE mode; searching for a single $ in 'foo\n' will find two (empty) matches: one just before the newline, and one at the end of the string.</p> </blockquote> <p>So you need to <em>escape</em> the character or put it inside a character class.</p> <p>As an alternative use:</p> <pre><code>df['A'].str.replace('\$|,', '', regex=True) # note the escaping \ </code></pre>
python|pandas|dataframe
3
213
73,016,142
I can't import yfinance and pandas in JupyterNotebook or pycharm. (Mac M1)
<p>I'm working on M1. I tried to import pandas in Jupyter. But it doesn't work.</p> <p>When I check it using 'pip show pandas' in Jupyter, it appears like this. <a href="https://i.stack.imgur.com/UpaRE.png" rel="nofollow noreferrer">enter image description here</a></p> <p>But I can't import Pandas in Jupyter. Error appears. <a href="https://i.stack.imgur.com/bhW6V.jpg" rel="nofollow noreferrer">enter image description here</a></p> <p>The image is too big, so error message is cropped. Here is the last sentence of the error message.</p> <blockquote> <p>ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/pandas/_libs/interval.cpython-38-darwin.so, 0x0002): tried: '/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/pandas/_libs/interval.cpython-38-darwin.so' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64e'))</p> </blockquote> <p>When I checked the place where python3 is installed, it appears like this.</p> <p>~&gt; which python3 /Library/Frameworks/Python.framework/Versions/3.8/bin/python3</p> <p>Jupyter's Version : 3.4.3</p> <p>pip's Version : 22.1.2 from /Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/pip (python 3.8)</p> <p>What do you think is problem?</p>
<p>I used to have some similar issues with importing after I pip installed the needed modules piecemeal. To fix this, I started from scratch and used Anaconda on both mac and windows machines to install python and the needed modules without any further issues.</p> <p><a href="https://www.anaconda.com/products/distribution#Downloads" rel="nofollow noreferrer">https://www.anaconda.com/products/distribution#Downloads</a></p> <p>I also have been using more virtual environments, which I would recommend looking up afterwards.</p>
python|python-3.x|pandas|jupyter-notebook|python-import
0
214
73,071,223
The kernel appears to have died. It will restart automatically. Problem with matplotlib.pyplot
<p>Whenever I am trying to plot the history of a model my kernel outputs this error.</p> <blockquote> <p>Error: &quot;The kernel appears to have died. It will restart automatically.&quot;</p> </blockquote> <p>It only occurs when I am trying to plot the history of any model otherwise matplotlib.pyplot works perfectly. I have tried <code>conda install freetype=2.10.4</code> but didn't get any positive results.</p>
<p>After investing a lot of time, I found that matplotlib.pyplot wasn't installed properly in the current env. Creating a new env and then installing the library resolved my problem.</p>
python|tensorflow|matplotlib|keras|conda
0
215
72,910,978
My dataframe column output NaN for all values
<pre><code>41-45 93 46-50 81 36-40 73 51-55 71 26-30 67 21-25 62 31-35 61 56-70 29 56-60 26 61 or older 23 15-20 10 Name: age, dtype: int64 </code></pre> <pre><code> pd.to_numeric(combined['age'], errors='coerce') </code></pre> <p>I used the above code to convert my dataframe column to numeric, but all it does is convert everything to NaN values.</p> <p>Here is my output:</p> <pre><code>3 NaN 5 NaN 8 NaN 9 NaN 11 NaN .. 696 NaN 697 NaN 698 NaN 699 NaN 701 NaN Name: age, Length: 651, dtype: float64 </code></pre>
<p>try the below:</p> <pre><code>import pandas as pd df = pd.DataFrame({&quot;age&quot;: [&quot;41-45&quot;, &quot;46-50&quot;,&quot;61 or older&quot;], &quot;Col2&quot;: [93, 81, 23]}) Cols = [&quot;Lower_End_Age&quot;, &quot;Higher_End_Age&quot;,] # list of column names for later # replacing whitespace by delimiter and splitting only once `n=1` using the same delimiter df[Cols] = df[&quot;age&quot;].str.replace(' ', '-').str.split(&quot;-&quot;, n=1, expand = True) print(df) age Col2 Lower_End_Age Higher_End_Age 0 41-45 93 41 45 1 46-50 81 46 50 2 61 or older 23 61 or-older </code></pre> <p>later:</p> <pre><code>df['Lower_End_Age'] = pd.to_numeric(df['Lower_End_Age'], errors='coerce') df.dtypes age object Col2 int64 Lower_End_Age int64 Higher_End_Age object </code></pre> <p>and if you want to get rid of <code>or-older</code>, simply repeat</p> <pre><code>df['Higher_End_Age'] = pd.to_numeric(df['Higher_End_Age'], errors='coerce') print(df) age Col2 Lower_End_Age Higher_End_Age 0 41-45 93 41 45.0 1 46-50 81 46 50.0 2 61 or older 23 61 NaN </code></pre>
python|pandas|dataframe|nan
0
216
70,669,308
Python: Only keep section of string after second dash and before third dash
<p>I have a column 'Id' that has data like this:</p> <p>'10020-100-700-800-2'</p> <p>How can I create a new column that would only contain the third number, in this case 700, for each row?</p> <p>Here is an example dataframe:</p> <p>d = {'id': {0: '10023_11_762553_762552_11', 1: '10023_14_325341_359865_14', 2: '10023_17_771459_771453_17', 3: '10023_20_440709_359899_20', 4: '10023_24_773107_625033_24', 5: '10023_27_771462_771463_27', 6: '10023_30_771262_771465_30', 7: '10023_33_761971_762470_33'}, 'values': {0: 10023, 1: 10023, 2: 10023, 3: 10023, 4: 10023, 5: 10023, 6: 10023, 7: 10023}}</p>
<p>Use <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.str.split.html" rel="nofollow noreferrer"><code>str.split</code></a> and take the third argument of the list:</p> <pre><code>df = pd.DataFrame({'Col': ['10020-100-700-800-2']}) df['NewCol'] = df['Col'].str.split('-').str[2].astype(int) print(df) # Output Col NewCol 0 10020-100-700-800-2 700 </code></pre> <p><strong>Update</strong> with your sample:</p> <pre><code>data = {'Id': ['10020-100-700-800-2', '10022-400-900-900-2', '10045-600-800-900-3', '10000-300-400-300-3', '10020-200-200-200-2'], 'Employed': [1, 0, 0, 1, 1], 'Name': ['Alan', 'Joe', 'Sam', 'Amy', 'Chloe']} df = pd.DataFrame(data) df['Id2'] = df['Id'].str.split('-').str[2].astype(int) print(df) # Output Id Employed Name Id2 0 10020-100-700-800-2 1 Alan 700 1 10022-400-900-900-2 0 Joe 900 2 10045-600-800-900-3 0 Sam 800 3 10000-300-400-300-3 1 Amy 400 4 10020-200-200-200-2 1 Chloe 200 </code></pre> <p><strong>Update 2</strong> with your new data</p> <pre><code>data = {'id': ['10023_11_762553_762552_11', '10023_14_325341_359865_14', '10023_17_771459_771453_17', '10023_20_440709_359899_20', '10023_24_773107_625033_24', '10023_27_771462_771463_27', '10023_30_771262_771465_30', '10023_33_761971_762470_33'], 'values': [10023, 10023, 10023, 10023, 10023, 10023, 10023, 10023]} df = pd.DataFrame(data) df['id2'] = df['id'].str.split('_').str[2].astype(int) print(df) # Output id values id2 0 10023_11_762553_762552_11 10023 762553 1 10023_14_325341_359865_14 10023 325341 2 10023_17_771459_771453_17 10023 771459 3 10023_20_440709_359899_20 10023 440709 4 10023_24_773107_625033_24 10023 773107 5 10023_27_771462_771463_27 10023 771462 6 10023_30_771262_771465_30 10023 771262 7 10023_33_761971_762470_33 10023 761971 </code></pre>
python|pandas
0
217
70,507,719
Fill pd.Series from list of values if value exist in another df.Series description
<p>I need to solve tricky problem, and minimize big O notation problem.</p> <p>I have two pandas dataframes:</p> <p>The first df is like as:</p> <pre><code>| source | searchTermsList | |:---- |:------:| | A | [t1, t2, t3,...tn] | | B | [t4, t5, t6,...tn] | | C | [t7, t8, t9,...tn] | </code></pre> <p>Where the first column is string, the second one is a list of strings no duplicated, just unique values.</p> <p>The second dataframe, which I need to create a new pd.Series with first column (df1.source) if term in searchTerm list, exist in the follow df2.Series, called &quot;description&quot;.</p> <pre><code>Example. | objID | dataDescr | |:---- |:------:| | 1 | The first description name has t2 | | 2 | The second description name has t6 and t7| | 3 | The third description name has t8, t1, t9| </code></pre> <p><strong>Expected results</strong></p> <pre><code>| objID | dataDescr | source | |:---- |:------:| -----:| | 1 | The first description name | A | | 2 | The second description name | B | | 3 | The third description name | C | </code></pre> <p><strong>Explanation</strong></p> <ul> <li><p>The first description has t2, so the column filled with A, because t2 appears in the term list.</p> </li> <li><p>The second description has two terms, t6 and t7, in that case match only the first one with the second list, so the source will be filled B</p> </li> <li><p>The third description has three terms, as above, only get the first one with the list and source will be filled with C.</p> </li> </ul> <p><strong>My approach</strong></p> <p>If I split descrName and finally search that word in the all lists, maybe the computational cost will be very huge. The idea with map, doesn't work, because with haven't ordered dataframe, in the first just we have 10-20 rows, only unique values, in the second will be to matching with each terms n times.</p> <p>Any suggestion,please?</p>
<p>First create a <code>Series</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.explode.html" rel="nofollow noreferrer"><code>DataFrame.explode</code></a>, indexed by the <code>searchTermsList</code> values, to use for mapping:</p> <pre><code>s = df1.explode('searchTermsList').set_index('searchTermsList')['source'] print (s) t1 A t2 A t3 A t4 B t5 B t6 B t7 C t8 C t9 C Name: source, dtype: object </code></pre> <p>Then join the values of <code>s.index</code> with <code>|</code> for a regex <code>or</code>; the <code>\b</code> anchors are word boundaries. To get the first matched value, use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.extract.html" rel="nofollow noreferrer"><code>Series.str.extract</code></a> and then map with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.map.html" rel="nofollow noreferrer"><code>Series.map</code></a>:</p> <pre><code>pat = r&quot;\b({})\b&quot;.format(&quot;|&quot;.join(s.index)) df2['searchTermsList'] = df2['dataDescr'].str.extract(pat, expand=False).map(s) print (df2) objID dataDescr searchTermsList 0 1 The first description name has t2 A 1 2 The second description name has t6 and t7 B 2 3 The third description name has t8, t1, t9 C </code></pre> <p>Another solution is to extract words with <code>re.findall</code> and map with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.get.html" rel="nofollow noreferrer"><code>Series.get</code></a>:</p> <pre><code>import re s = df1.explode('searchTermsList').set_index('searchTermsList')['source'] f = lambda x: next((s.get(y) for y in re.findall(r'\b\w+\b',x) if y in s), np.nan) df2['searchTermsList'] = df2['dataDescr'].apply(f) print (df2) objID dataDescr searchTermsList 0 0 The first description n NaN 1 1 The first description name has t2 A 2 2 The second description name has t6 and t7 B 3 3 The third description name has t8, t1, t9 C </code></pre>
python|pandas|dataframe|dictionary
1
218
70,718,030
Is there a function similar to ncdisp to view .npy files?
<p>Is there a function similar to <code>ncdisp</code> in MATLAB to view .npy files?</p> <p>Alternatively, it would be helpful to have some command that would spit out header titles in a .npy file. I can see everything in the file, but it is absolutely enormous. There has to be a way to view what categories of data are in this file.</p>
<p>Looking at the code for <code>np.lib.npyio.load</code> we see that it calls <code>np.lib.format.read_array</code>. That in turn calls <code>np.lib.format._read_array_header</code>.</p> <p>This can be studied and perhaps even used, but it isn't in the public API.</p> <p>But if you are such a MATLAB fan as you claim, you already know (?) that you can explore <code>.m</code> files to see the MATLAB code. Same with <code>python/numpy</code>. Read the files/functions until you hit compiled 'builtin' functions.</p> <p>Since a <code>npy</code> file contains only one array, the header isn't that interesting by itself - just the array dtype and shape (and total size). This isn't like the matlab save file with lots of variables. <code>scipy.io.loadmat</code> can read those.</p> <p>But looking up <code>ncdisp</code> I see that's part of its <code>NetCDF</code> reader. That's a whole different kind of file.</p>
python|numpy|file-type
-1
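As the answer above notes, a .npy file holds exactly one array, so the closest analogue to ncdisp is just its shape and dtype. One way to peek at those without reading the whole array into memory is a memory-mapped load; a small sketch with a made-up file name:

<pre><code>import numpy as np

np.save("example.npy", np.zeros((1000, 50), dtype=np.float32))

arr = np.load("example.npy", mmap_mode="r")   # memory-mapped, data not read eagerly
print(arr.shape, arr.dtype)                   # (1000, 50) float32
</code></pre>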
219
42,943,979
Python & Pandas: How can I skip creating intermediate data file when reading data?
<p>I have data files looks like this:</p> <pre><code>ABE200501.dat ABE200502.dat ABE200503.dat ... </code></pre> <p>So I first combine these files into <code>all.dat</code>, and do a little bit clean up </p> <pre><code>fout=open("all.dat","w") for year in range(2000,2017): for month in range(1,13): try: for line in open("ABE"+ str(year) +"%02d"%(month)+".dat"): fout.write(line.replace("[", " ").replace("]", " ").replace('"', " ").replace('`', " ")) except: pass fout.close() </code></pre> <p>And I later on read the final file in pandas </p> <pre><code>df = pd.read_csv("all.dat", skipinitialspace=True, error_bad_lines=False, sep=' ', names = ['stationID','time','vis','day_type','vis2','day_type2','dir','speed','dir_max','speed_max','visual_range', 'unknown']) </code></pre> <p>I want to know, if it is possible to save combine files in directly in RAM instead in my hard disk? This can save me a lot of unnecessary space.</p> <p>Thanks! </p>
<p>The <a href="https://docs.python.org/2/library/stringio.html" rel="nofollow noreferrer"><code>StringIO</code></a> module lets you treat strings as files.</p> <p>Example from the docs:</p> <pre><code>import StringIO output = StringIO.StringIO() output.write('First line.\n') print &gt;&gt;output, 'Second line.' # Retrieve file contents -- this will be # 'First line.\nSecond line.\n' contents = output.getvalue() # Close object and discard memory buffer -- # .getvalue() will now raise an exception. output.close() </code></pre> <p>For your own code:</p> <pre><code>fout = StringIO.StringIO() # treat fout as a file handle like usual # parse input files, writing to fout file = fout.getvalue() # file is kind of a virtual file now # and can be "opened" by StringIO fout.close() # ... using StringIO.StringIO(file) as fin: df = pd.read_csv(fin, skipinitialspace=True, error_bad_lines=False, sep=' ', names = ['stationID','time','vis','day_type','vis2','day_type2','dir','speed','dir_max','speed_max','visual_range', 'unknown']) </code></pre> <p>pandas accepts both pathname strings and file handles as input.</p>
python|pandas
1
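The answer above uses Python 2's StringIO module. For Python 3, the same idea works with io.StringIO; a sketch reusing the file-name pattern and read_csv arguments from the question (note that error_bad_lines is deprecated in newer pandas):

<pre><code>import io
import pandas as pd

buf = io.StringIO()
for year in range(2000, 2017):
    for month in range(1, 13):
        try:
            with open("ABE{}{:02d}.dat".format(year, month)) as f:
                for line in f:
                    buf.write(line.replace("[", " ").replace("]", " ")
                                  .replace('"', " ").replace("`", " "))
        except IOError:
            pass

buf.seek(0)   # rewind so read_csv starts from the top
df = pd.read_csv(buf, skipinitialspace=True, error_bad_lines=False, sep=" ",
                 names=["stationID", "time", "vis", "day_type", "vis2", "day_type2",
                        "dir", "speed", "dir_max", "speed_max", "visual_range", "unknown"])
</code></pre>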
220
27,184,204
Write a method that accesses to a parametric dimension numpy array
<p>I'm using <strong>python 2.7</strong> and <strong>numpy 1.9</strong>.<br> I have 3 methods that applies a transformation to a couple of numpy arrays.</p> <pre><code>def sum_arrays2(a, b): c = np.zeros(a.shape) c[:, 0:-1] = (a[:, 1:] + b[:, 0:-1]) ** 2 return a[0:-1, 1:] + c[1:, 0:-1] def sum_arrays3(a, b): c = np.zeros(a.shape) c[:, :, 0:-1] = (a[:, :, 1:] + b[:, :, 0:-1]) ** 2 return a[0:-1, :, 1:] + c[1:, :, 0:-1] def sum_arrays4(a, b): c = np.zeros(a.shape) c[:, :, :, 0:-1] = (a[:, :, :, 1:] + b[:, :, :, 0:-1]) ** 2 return a[0:-1, :, :, 1:] + c[1:, :, :, 0:-1] </code></pre> <p>As you can see, they are very similar. The only difference is the required input array size.<br> Depending on the size of my data, I have to call the first one, the second or the third.</p> <p>Actually I have to do something like this:</p> <pre><code>if a.ndims == 2: result = sum_arrays2(a, b) elif a.ndims == 3: result = sum_arrays3(a, b) elif a.ndims == 4: result = sum_arrays4(a, b) </code></pre> <p>How can I make a more general method that computes this for n-dimensional inputs?</p> <p>The only solution I found is something like this:</p> <pre><code>def n_size_sum_arrays(a, b): c = np.zeros(a.shape) c[(Ellipsis, np.r_[0:c.shape[-1]-1])] = (a[(Ellipsis, np.r_[0:a.shape[-1]])] + b[(Ellipsis, np.r_[0:b.shape[-1]])]) ** 2 return a[(r_[0:a.shape[0]-1], Ellipsis, np.r_[1:a.shape[-1]])] + c[(np.r_[1:c.shape[0]], Ellipsis, np.r_[0:c.shape[-1]-1])] </code></pre> <p>But it absolutely unclear and I'm not sure that it's correct.</p> <p>Is there a better approach?</p>
<p>You can do the following:</p> <pre><code>def sum_arrays(a, b): c = np.zeros(a.shape) c[..., :-1] = (a[..., 1:] + b[..., :-1]) ** 2 return a[:-1, ..., 1:] + c[1:, ..., :-1] </code></pre>
python|arrays|python-2.7|numpy
3
221
26,991,363
Numpy: Extract particular rows in matrix
<p>I have a matrix W and two vectors y1 and y2. I want to extract rows from W. The rows I am interested in are in the range [y1:y2]. What is the best way of doing this in Numpy? <strong>Can this be done without using any for-loops or map method</strong>? For e.g.:</p> <pre><code>W = [[ 1., 2., 3., 4.], [ 5., 6., 7., 8.], [ 9., 10., 11., 12.], [ 13., 21., 33., 41.], [ 55., 66., 74., 83.], [ 92., 106., 711., 142.], [ 19., 27., 33., 24.], [ 54., 66., 74., 38.], [ 29., 210., 131., 412.]] y1 = [[0], [0], [6], [3]] y2 = [[3], [3], [9], [6]] I want w[y1:y2,:] ., i.e. newW = [[ 1., 2., 3., 4.], [ 5., 6., 7., 8.], [ 9., 10., 11., 12.], [ 1., 2., 3., 4.], [ 5., 6., 7., 8.], [ 9., 10., 11., 12.], [ 19., 27., 33., 24.], [ 54., 66., 74., 38.], [ 29., 210., 131., 412.], [ 13., 21., 33., 41.], [ 55., 66., 74., 83.], [ 92., 106., 711., 142.]] </code></pre>
<p>You need to build the slices for yourself as indices and then use them:</p> <pre><code>indices = np.concatenate([np.arange(iy1, iy2) for iy1, iy2 in zip(y1.ravel(), y2.ravel())]) newW = W[indices] </code></pre>
python|numpy|scipy
1
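A quick self-contained check of the answer above, using array versions of the question's y1 and y2 (the values in W are just a stand-in of the same shape):

<pre><code>import numpy as np

W = np.arange(1, 37, dtype=float).reshape(9, 4)   # same 9x4 shape as the question's W
y1 = np.array([0, 0, 6, 3])
y2 = np.array([3, 3, 9, 6])

indices = np.concatenate([np.arange(a, b) for a, b in zip(y1.ravel(), y2.ravel())])
newW = W[indices]
print(indices)       # [0 1 2 0 1 2 6 7 8 3 4 5]
print(newW.shape)    # (12, 4)
</code></pre>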
222
30,695,034
set column of pandas.DataFrame object
<p>Ideally, I want to be able something like: </p> <pre><code>cols = ['A', 'B', 'C'] df = pandas.DataFrame(index=range(5), columns=cols) df.get_column(cols[0]) = [1, 2, 3, 4, 5] </code></pre> <p>What is the pythonic/pandonic way to do this?</p> <p>Edit: I know that I can access the column 'A' by <code>df.A</code>, but in general I do not know what the column names are. </p>
<p>Okay, this is particularly straightforward. </p> <pre><code>df[cols[0]] = [1, 2, 3, 4, 5] </code></pre>
python|pandas
0
223
30,730,718
Merge two daily series into one hour series
<p>I have two daily series and I have to merge them into one hour series with the 1st series for the first 12 hours and the 2nd series for the remaining hours. </p> <p>Is there a more efficient way instead of building a list manually and convert it to series? Thanks </p> <pre><code>a = pd.Series(np.random.rand(5), pd.date_range('2015-01-01', periods=5)) b = pd.Series(np.random.rand(5), pd.date_range('2015-01-01', periods=5)) c = an hourly series </code></pre>
<p>possibly:</p> <pre><code>&gt;&gt;&gt; import datetime as dt &gt;&gt;&gt; b.index += dt.timedelta(hours=12) &gt;&gt;&gt; pd.concat((a, b), axis=0).sort_index() 2015-01-01 00:00:00 0.150 2015-01-01 12:00:00 0.970 2015-01-02 00:00:00 0.746 2015-01-02 12:00:00 0.937 2015-01-03 00:00:00 0.523 2015-01-03 12:00:00 0.392 2015-01-04 00:00:00 0.606 2015-01-04 12:00:00 0.459 2015-01-05 00:00:00 0.221 2015-01-05 12:00:00 0.029 dtype: float64 </code></pre> <p>and, <code>ts.asfreq('H', method='ffill')</code> to have hourly frequency.</p>
python|pandas
1
224
30,550,295
Pandas data frame creation inconsistencies
<p>I was creating a pandas dataframe from a python dictionary: </p> <pre><code>import numpy as np import pandas as pd obs_dict = { 'pos':[[0,0],[10,10]], 'vel':[[0,0],[0,0]], 'mass':np.array([10000,10000]) } print pd.DataFrame(obs_dict) </code></pre> <p>Returns:</p> <pre><code> mass pos vel 0 10000 [0, 0] [0, 0] 1 10000 [10, 10] [0, 0] </code></pre> <p>Notice that I have a 2d list as a items in the position ('pos') and velocity ('vel') column. However when I replace the 2d list with a 2d numpy array:</p> <pre><code>obs_dict = { 'pos':np.array([[0,0],[10,10]]), 'vel':np.array([[0,0],[0,0]]), 'mass':np.array([10000,10000]) } </code></pre> <p>I get an exception:</p> <pre><code>Exception: Data must be 1-dimensional </code></pre> <p>Unfortunately the data that I am using is contained within a numpy array and I don't want to convert it to a python list, is there any way to make this work?</p>
<p>In this code:</p> <pre><code>obs_dict = { 'pos':[[0,0],[10,10]], 'vel':[[0,0],[0,0]], 'mass':np.array([10000,10000]) } print pd.DataFrame(obs_dict)` </code></pre> <p>you just got a dataframe with some object columns, try dtypes:</p> <pre><code>tt = pd.DataFrame(obs_dict) tt.dtypes </code></pre> <p>shows: </p> <pre><code>mass int64 pos object vel object dtype: object </code></pre> <p>I don't think this can be done.</p>
python|arrays|numpy|pandas
0
225
19,395,019
for loop over 20 data frames on each day simultaneously
<p>I have the following data structure (still subject to changes):</p> <pre><code>pp = ([Pair1, Pair2, Pair3, ..., Pair25]) </code></pre> <p>Each Pair has has the following format: </p> <pre><code> &lt;class 'pandas.core.frame.DataFrame'&gt; DatetimeIndex: 2016 entries, 2005-09-19 00:00:00 to 2013-09-12 00:00:00 Data columns (total 2 columns): CA 2016 non-null values Na 2016 non-null values </code></pre> <p>I have lots of functions which need to be applied on each day for each DataFrame. However, the For-Loop may not run step-by-step such as for Pair1, Pair2, Pair3. The For-loop should rather run each day, for example: </p> <pre><code> 2005-09-19: do function for each pair! 2005-09-20 and continue 2005-09-21 2005-09-22 </code></pre> <p>Is there a way to do that or do I need to completely change my data structure as well as codes?</p> <p><strong>Update 1</strong></p> <p>This where I am right now, however, is it efficient?</p> <pre><code>for i in range(len(ps[1])): for row in ps: print row[i:i+1] A C DE Date 2005-09-19 -0.600021 4.649857 3 A G DE Date 2005-09-19 -0.600021 6.39693 0.105716 A W DE Date 2005-09-19 -0.600021 6.950815 5 A C DE Date 2005-09-20 -0.59711 4.637831 3 A G DE Date 2005-09-20 -0.59711 6.396263 0.109079 A W DE Date 2005-09-20 -0.59711 6.951772 5 A C DE Date 2005-09-21 -0.594207 4.641213 3 A G DE Date 2005-09-21 -0.594207 6.40059 0.109055 A W DE Date 2005-09-21 -0.594207 6.955593 5 </code></pre>
<p>If there is no reason they need to be separate data frames you should combine them into one dataframe with a multi index or simply a column indicating which pair they belong to. Then you can group by to perform your function applications.</p> <pre><code>DF.groupby(['Date','pair']).apply(function) </code></pre>
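<p>A rough sketch of how the combined frame could be built (untested; assumes <code>pp</code> is the list of DataFrames from the question and that their <code>DatetimeIndex</code> holds the dates):</p> <pre><code>import pandas as pd

# stack all pairs into one frame with a (pair, Date) MultiIndex
combined = pd.concat(pp, keys=['Pair%d' % i for i in range(1, len(pp) + 1)],
                     names=['pair', 'Date'])

# group per day and pair and apply the per-day function
result = combined.groupby(level=['Date', 'pair']).apply(my_function)
</code></pre> <p>Here <code>my_function</code> is just a placeholder for whatever daily computation you need; grouping on the <code>Date</code> level means all pairs for a given day are handled before moving on to the next day.</p>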
python|numpy|pandas
2
226
19,584,599
scatter function in matplotlib
<pre><code>from numpy import array import matplotlib import matplotlib.pyplot as plt from fileread import file2matrix datingDataMat,datingLabels = file2matrix('iris_data.txt') fig = plt.figure() ax = fig.add_subplot(111) ax.scatter(datingDataMat[:,1], datingDataMat[:,2],15.0*array(datingLabels), 15.0*array(datingLabels)) plt.show() </code></pre> <p>This code is displaying error as::</p> <pre><code>TypeError: unsupported operand type(s) for *: 'float' and 'numpy.ndarray' </code></pre> <p>According to the author i should be able to generate different colors based on datalabels.</p>
<p>The array should contain numeric values.</p> <pre><code>&gt;&gt;&gt; 15.0 * array([1,2]) array([ 15., 30.]) </code></pre> <hr> <pre><code>&gt;&gt;&gt; 15.0 * array(['1','2']) Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; TypeError: unsupported operand type(s) for *: 'float' and 'numpy.ndarray' </code></pre> <p>Check the values in <code>datingLabels</code>; they are probably strings.</p>
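<p>If <code>datingLabels</code> does turn out to hold strings (e.g. class names), one possible fix is to map them to integers before scaling — a hypothetical sketch:</p> <pre><code>from numpy import array

datingLabels = ['setosa', 'versicolor', 'setosa']   # example string labels
label_to_int = {name: i + 1 for i, name in enumerate(sorted(set(datingLabels)))}
numeric_labels = array([label_to_int[name] for name in datingLabels])

# the scaling now works and can be passed to ax.scatter for size/colour
print(15.0 * numeric_labels)
</code></pre>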
python|numpy|matplotlib
3
227
13,167,391
Filter out groups with a length equal to one
<p>I am creating a <code>groupby</code> object from a Pandas <code>DataFrame</code> and want to select out all the groups with &gt; 1 size.</p> <p>Example:</p> <pre><code> A B 0 foo 0 1 bar 1 2 foo 2 3 foo 3 </code></pre> <p>The following doesn't seem to work:</p> <pre><code>grouped = df.groupby('A') grouped[grouped.size &gt; 1] </code></pre> <p>Expected Result:</p> <pre><code>A foo 0 2 3 </code></pre>
<p>As of pandas 0.12 you can do:</p> <pre><code>&gt;&gt;&gt; grouped.filter(lambda x: len(x) &gt; 1) A B 0 foo 0 2 foo 2 3 foo 3 </code></pre>
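<p>With more recent pandas versions, an alternative that is often faster on large frames is to build a boolean mask with <code>transform</code> instead of calling <code>filter</code>:</p> <pre><code>&gt;&gt;&gt; df[df.groupby('A')['A'].transform('size') &gt; 1]
     A  B
0  foo  0
2  foo  2
3  foo  3
</code></pre> <p>Both return the rows whose group has more than one member; the <code>transform</code> version avoids the Python-level callback per group.</p>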
python|pandas|group-by
46
228
13,187,778
Convert pandas dataframe to NumPy array
<p>How do I convert a pandas dataframe into a NumPy array?</p> <p>DataFrame:</p> <pre><code>import numpy as np import pandas as pd index = [1, 2, 3, 4, 5, 6, 7] a = [np.nan, np.nan, np.nan, 0.1, 0.1, 0.1, 0.1] b = [0.2, np.nan, 0.2, 0.2, 0.2, np.nan, np.nan] c = [np.nan, 0.5, 0.5, np.nan, 0.5, 0.5, np.nan] df = pd.DataFrame({'A': a, 'B': b, 'C': c}, index=index) df = df.rename_axis('ID') </code></pre> <p>gives</p> <pre><code>label A B C ID 1 NaN 0.2 NaN 2 NaN NaN 0.5 3 NaN 0.2 0.5 4 0.1 0.2 NaN 5 0.1 0.2 0.5 6 0.1 NaN 0.5 7 0.1 NaN NaN </code></pre> <p>I would like to convert this to a NumPy array, like so:</p> <pre><code>array([[ nan, 0.2, nan], [ nan, nan, 0.5], [ nan, 0.2, 0.5], [ 0.1, 0.2, nan], [ 0.1, 0.2, 0.5], [ 0.1, nan, 0.5], [ 0.1, nan, nan]]) </code></pre> <hr /> <p>Also, is it possible to preserve the dtypes, like this?</p> <pre><code>array([[ 1, nan, 0.2, nan], [ 2, nan, nan, 0.5], [ 3, nan, 0.2, 0.5], [ 4, 0.1, 0.2, nan], [ 5, 0.1, 0.2, 0.5], [ 6, 0.1, nan, 0.5], [ 7, 0.1, nan, nan]], dtype=[('ID', '&lt;i4'), ('A', '&lt;f8'), ('B', '&lt;f8'), ('B', '&lt;f8')]) </code></pre>
<h1>Use <code>df.to_numpy()</code></h1> <p>It's better than <code>df.values</code>, here's why.<sup>*</sup></p> <p>It's time to deprecate your usage of <code>values</code> and <code>as_matrix()</code>.</p> <p>pandas v0.24.0 introduced two new methods for obtaining NumPy arrays from pandas objects:</p> <ol> <li><strong><code>to_numpy()</code></strong>, which is defined on <code>Index</code>, <code>Series</code>, and <code>DataFrame</code> objects, and</li> <li><strong><code>array</code></strong>, which is defined on <code>Index</code> and <code>Series</code> objects only.</li> </ol> <p>If you visit the v0.24 docs for <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.values.html#pandas.DataFrame.values" rel="nofollow noreferrer"><code>.values</code></a>, you will see a big red warning that says:</p> <blockquote> <h3>Warning: We recommend using <code>DataFrame.to_numpy()</code> instead.</h3> </blockquote> <p>See <a href="https://pandas.pydata.org/pandas-docs/stable/whatsnew/v0.24.0.html#accessing-the-values-in-a-series-or-index" rel="nofollow noreferrer">this section of the v0.24.0 release notes</a>, and <a href="https://stackoverflow.com/a/54324513/4909087">this answer</a> for more information.</p> <p><sub>* - <code>to_numpy()</code> is my recommended method for any production code that needs to run reliably for many versions into the future. However if you're just making a scratchpad in jupyter or the terminal, using <code>.values</code> to save a few milliseconds of typing is a permissable exception. You can always add the fit n finish later.</sub></p> <hr /> <hr /> <h2><strong>Towards Better Consistency: <a href="http://pandas.pydata.org/pandas-docs/version/0.24.0rc1/api/generated/pandas.Series.to_numpy.html" rel="nofollow noreferrer"><code>to_numpy()</code></a></strong></h2> <p>In the spirit of better consistency throughout the API, a new method <code>to_numpy</code> has been introduced to extract the underlying NumPy array from DataFrames.</p> <pre><code># Setup df = pd.DataFrame(data={'A': [1, 2, 3], 'B': [4, 5, 6], 'C': [7, 8, 9]}, index=['a', 'b', 'c']) # Convert the entire DataFrame df.to_numpy() # array([[1, 4, 7], # [2, 5, 8], # [3, 6, 9]]) # Convert specific columns df[['A', 'C']].to_numpy() # array([[1, 7], # [2, 8], # [3, 9]]) </code></pre> <p>As mentioned above, this method is also defined on <code>Index</code> and <code>Series</code> objects (see <a href="https://stackoverflow.com/a/54324513/4909087">here</a>).</p> <pre><code>df.index.to_numpy() # array(['a', 'b', 'c'], dtype=object) df['A'].to_numpy() # array([1, 2, 3]) </code></pre> <p>By default, a view is returned, so any modifications made will affect the original.</p> <pre><code>v = df.to_numpy() v[0, 0] = -1 df A B C a -1 4 7 b 2 5 8 c 3 6 9 </code></pre> <p>If you need a copy instead, use <code>to_numpy(copy=True)</code>.</p> <hr /> <h3>pandas &gt;= 1.0 update for ExtensionTypes</h3> <p>If you're using pandas 1.x, chances are you'll be dealing with extension types a lot more. 
You'll have to be a little more careful that these extension types are correctly converted.</p> <pre><code>a = pd.array([1, 2, None], dtype=&quot;Int64&quot;) a &lt;IntegerArray&gt; [1, 2, &lt;NA&gt;] Length: 3, dtype: Int64 # Wrong a.to_numpy() # array([1, 2, &lt;NA&gt;], dtype=object) # yuck, objects # Correct a.to_numpy(dtype='float', na_value=np.nan) # array([ 1., 2., nan]) # Also correct a.to_numpy(dtype='int', na_value=-1) # array([ 1, 2, -1]) </code></pre> <p>This is <a href="https://pandas.pydata.org/pandas-docs/stable/whatsnew/v1.0.0.html#arrays-integerarray-now-uses-pandas-na" rel="nofollow noreferrer">called out in the docs</a>.</p> <hr /> <h3>If you need the <code>dtypes</code> in the result...</h3> <p>As shown in another answer, <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_records.html#pandas-dataframe-to-records" rel="nofollow noreferrer"><code>DataFrame.to_records</code></a> is a good way to do this.</p> <pre><code>df.to_records() # rec.array([('a', 1, 4, 7), ('b', 2, 5, 8), ('c', 3, 6, 9)], # dtype=[('index', 'O'), ('A', '&lt;i8'), ('B', '&lt;i8'), ('C', '&lt;i8')]) </code></pre> <p>This cannot be done with <code>to_numpy</code>, unfortunately. However, as an alternative, you can use <code>np.rec.fromrecords</code>:</p> <pre><code>v = df.reset_index() np.rec.fromrecords(v, names=v.columns.tolist()) # rec.array([('a', 1, 4, 7), ('b', 2, 5, 8), ('c', 3, 6, 9)], # dtype=[('index', '&lt;U1'), ('A', '&lt;i8'), ('B', '&lt;i8'), ('C', '&lt;i8')]) </code></pre> <p>Performance wise, it's nearly the same (actually, using <code>rec.fromrecords</code> is a bit faster).</p> <pre><code>df2 = pd.concat([df] * 10000) %timeit df2.to_records() %%timeit v = df2.reset_index() np.rec.fromrecords(v, names=v.columns.tolist()) 12.9 ms ± 511 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) 9.56 ms ± 291 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) </code></pre> <hr /> <hr /> <h2><strong>Rationale for Adding a New Method</strong></h2> <p><code>to_numpy()</code> (in addition to <code>array</code>) was added as a result of discussions under two GitHub issues <a href="https://github.com/pandas-dev/pandas/issues/19954" rel="nofollow noreferrer">GH19954</a> and <a href="https://github.com/pandas-dev/pandas/issues/23623" rel="nofollow noreferrer">GH23623</a>.</p> <p>Specifically, the docs mention the rationale:</p> <blockquote> <p>[...] with <code>.values</code> it was unclear whether the returned value would be the actual array, some transformation of it, or one of pandas custom arrays (like <code>Categorical</code>). For example, with <code>PeriodIndex</code>, <code>.values</code> generates a new <code>ndarray</code> of period objects each time. [...]</p> </blockquote> <p><code>to_numpy</code> aims to improve the consistency of the API, which is a major step in the right direction. <code>.values</code> will not be deprecated in the current version, but I expect this may happen at some point in the future, so I would urge users to migrate towards the newer API, as soon as you can.</p> <hr /> <hr /> <h2><strong>Critique of Other Solutions</strong></h2> <p><code>DataFrame.values</code> has inconsistent behaviour, as already noted.</p> <p><code>DataFrame.get_values()</code> was <a href="https://github.com/pandas-dev/pandas/pull/29989" rel="nofollow noreferrer">quietly removed in v1.0</a> and was previously deprecated in v0.25. 
Before that, it was simply a wrapper around <code>DataFrame.values</code>, so everything said above applies.</p> <p><code>DataFrame.as_matrix()</code> was removed in v1.0 and was previously deprecated in v0.23. Do <strong>NOT</strong> use!</p>
python|arrays|pandas|numpy|dataframe
544
229
29,236,097
cuts in the dataset with python
<p>I'm studying solar frequencies, sampled every minute over one month. So I have one matrix M with 43200 elements, one per minute.</p> <p>The way to do the power spectrum for all the elements is:</p> <pre><code>import numpy as np import pylab as pl from scipy import fftpack M=np.loadtxt('GOLF-SOHO-Sol.dat') N=len(M) dt=1 t= np.arange(0., N, dt) yt = M frecs= fftpack.fftfreq(yt.size, dt) fft_yt = fftpack.fft(yt) vector_amp = np.abs(fft_yt) pl.subplot(212) pl.xlim(-0.5, 0.5) pl.plot(frecs, vector_amp, color="blue", linestyle="-", linewidth=1.5) pl.xlabel('Frecuencia (Hz)') pl.ylabel('Espec. amplitud') pl.title('Espectro de amplitud') marcasx = np.arange(-0.5, 0.5, 0.1) # x tick positions pl.xticks(marcasx) pl.show() </code></pre> <p>The problem is that now I want to make some cuts in this matrix. I only need the data for 12 hours of each day (when the Sun is visible). So in my matrix M I need, for example, the first 720 values to stay the same, the next 720 to be 0, the next 720 the original values, the next 720 zero again, etc. </p> <p>How can I compute this? I suppose I should write a while loop in which the behaviour changes every 720 values.</p> <p>Thank you in advance. I hope I was clear.</p>
<pre><code>&gt;&gt;&gt; import numpy # your data values (all ones for illustration) &gt;&gt;&gt; values = numpy.ones( (43200,) ) # reshape to matrix with rows of 720 samples &gt;&gt;&gt; mat = values.reshape( (60, 720) ) # now it's easy to set alternating rows to 0.0 &gt;&gt;&gt; mat[1::2, :] = 0 # and because mat is a view of your data, "values" now # has zeroes in the right places &gt;&gt;&gt; values[710:730] array([ 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]) </code></pre>
python|python-2.7|python-3.x|numpy
1
230
29,056,156
How to pad multiple lists with trailing zeros?
<p>Suppose I have two lists containing the same number of elements which are lists of integers. For instance:</p> <pre><code>a = [[1, 7, 3, 10, 4], [1, 3, 8], ..., [2, 5, 10, 91, 54, 0]] b = [[5, 4, 23], [1, 2, 0, 4], ..., [5, 15, 11]] </code></pre> <p>For each index, I want to pad the shorter list with trailing zeros. The example above should look like:</p> <pre><code>a = [[1, 7, 3, 10, 4], [1, 3, 8, 0], ..., [2, 5, 10, 91, 54, 0]] b = [[5, 4, 23, 0, 0], [1, 2, 0, 4], ..., [5, 15, 11, 0, 0, 0]] </code></pre> <p>Is there an elegant way to perform this comparison and padding built into Python lists or perhaps numpy? I am aware that <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.pad.html" rel="nofollow">numpy.pad</a> can perform the padding, but it's the iteration and comparison over the lists that has got me stuck.</p>
<p>I'm sure there's an elegant Python one-liner for this sort of thing, but sometimes a straightforward imperative solution will get the job done:</p> <pre><code>for i in xrange(0, len(a)): x = len(a[i]) y = len(b[i]) diff = max(x, y) a[i].extend([0] * (diff - x)) b[i].extend([0] * (diff - y)) print a, b </code></pre> <p>Be careful with "elegant" solutions too, because they can be very difficult to comprehend (I can't count the number of times I've come back to a piece of code I wrote using <code>reduce()</code> and had to struggle to figure out how it worked).</p>
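<p>For completeness, a more compact (untested) version of the same idea using a list comprehension — it builds new padded lists rather than extending in place:</p> <pre><code>padded = [(x + [0] * (max(len(x), len(y)) - len(x)),
           y + [0] * (max(len(x), len(y)) - len(y)))
          for x, y in zip(a, b)]
a, b = [p[0] for p in padded], [p[1] for p in padded]
</code></pre> <p>Whether that is more readable than the explicit loop above is a matter of taste.</p>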
python|arrays|numpy|padding
3
231
22,895,405
How to check if there exists a row with a certain column value in pandas dataframe
<p>Very new to pandas.</p> <p>Is there a way to check given a pandas dataframe, if there exists a row with a certain column value. Say I have a column 'Name' and I need to check for a certain name if it exists.</p> <p>And once I do this, I will need to make a similar query, but with a bunch of values at a time. I read that there is 'isin', but I'm not sure how to use it. So I need to make a query such that I get all the rows which have 'Name' column matching to any of the values in a big array of names.</p>
<pre><code>import numpy as np import pandas as pd df = pd.DataFrame(data = np.arange(8).reshape(4,2), columns=['name', 'value']) </code></pre> <p>Result:</p> <pre><code>&gt;&gt;&gt; df name value 0 0 1 1 2 3 2 4 5 3 6 7 &gt;&gt;&gt; any(df.name == 4) True &gt;&gt;&gt; any(df.name == 5) False </code></pre> <p>Second Part:</p> <pre><code>my_data = np.arange(8).reshape(4,2) my_data[0,0] = 4 df = pd.DataFrame(data = my_data, columns=['name', 'value']) </code></pre> <p>Result:</p> <pre><code>&gt;&gt;&gt; df.loc[df.name == 4] name value 0 4 1 2 4 5 </code></pre> <p>Update:</p> <pre><code>my_data = np.arange(8).reshape(4,2) my_data[0,0] = 4 df = pd.DataFrame(data = my_data, index=['a', 'b', 'c', 'd'], columns=['name', 'value']) </code></pre> <p>Result:</p> <pre><code>&gt;&gt;&gt; df.loc[df.name == 4] # gives relevant rows name value a 4 1 c 4 5 &gt;&gt;&gt; df.loc[df.name == 4].index # give "row names" of relevant rows Index([u'a', u'c'], dtype=object) </code></pre>
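<p>For the second part of the question — matching against a whole list of names at once — <code>isin</code> is indeed the tool. Reusing the first example frame from above:</p> <pre><code>&gt;&gt;&gt; names_to_find = [0, 4, 99]           # your big array of names goes here
&gt;&gt;&gt; df[df.name.isin(names_to_find)]      # rows whose 'name' matches any of them
   name  value
0     0      1
2     4      5
&gt;&gt;&gt; df.name.isin(names_to_find).any()    # True if at least one such row exists
True
</code></pre>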
python|pandas|dataframe
9
232
22,881,825
How to deal with SettingWithCopyWarning in this case
<p>I've read the answers in <a href="https://stackoverflow.com/questions/20625582/how-to-deal-with-this-pandas-warning">How to deal with this Pandas warning?</a> but I can't figure out if I should ignore the SettingWithCopyWarning warning or if I'm doing something really wrong.</p> <p>I have this function that resamples some data to a specific time frame (1h for instance) and then fills the NaN values accordingly.</p> <pre><code>def resample_data(raw_data, time_frame): # resamples the ticker data in ohlc ohlc_dict = { 'open': 'first', 'high': 'max', 'low': 'min', 'close': 'last', 'price': 'mean' } volume_dict = {'volume': 'sum', 'volume_quote': 'sum'} resampled_data = raw_data.resample(time_frame, how={'price': ohlc_dict, 'amount': volume_dict}) resampled_data['amount'] = resampled_data['amount']['volume'].fillna(0.0) resampled_data['amount']['volume_quote'] = resampled_data['amount']['volume'] resampled_data['price']['close'] = resampled_data['price']['close'].fillna(method='pad') resampled_data['price']['open'] = resampled_data['price']['open'].fillna(resampled_data['price']['close']) resampled_data['price']['high'] = resampled_data['price']['high'].fillna(resampled_data['price']['close']) resampled_data['price']['low'] = resampled_data['price']['low'].fillna(resampled_data['price']['close']) resampled_data['price']['price'] = resampled_data['price']['price'].fillna(resampled_data['price']['close']) # ugly hack to remove multi index, must be better way output_data = resampled_data['price'] output_data['volume'] = resampled_data['amount']['volume'] output_data['volume_quote'] = resampled_data['amount']['volume_quote'] return output_data </code></pre> <p>Is this the right way to do it and should I ignore the warning?</p> <p>Edit: If I try to use .loc as sugested in the warning:</p> <pre><code>resampled_data = raw_data.resample(time_frame, how={'price': ohlc_dict, 'amount': volume_dict}) resampled_data.loc['amount'] = resampled_data['amount']['volume'].fillna(0.0) resampled_data.loc['amount']['volume_quote'] = resampled_data['amount']['volume'] resampled_data.loc['price']['close'] = resampled_data['price']['close'].fillna(method='pad') resampled_data.loc['price']['open'] = resampled_data['price']['open'].fillna(resampled_data['price']['close']) resampled_data.loc['price']['high'] = resampled_data['price']['high'].fillna(resampled_data['price']['close']) resampled_data.loc['price']['low'] = resampled_data['price']['low'].fillna(resampled_data['price']['close']) resampled_data.loc['price']['price'] = resampled_data['price']['price'].fillna(resampled_data['price']['close']) </code></pre> <p>I get the following error refering to line <code>resampled_data.loc['price']['close'] = resampled_data['price']['close'].fillna(method='pad')</code></p> <blockquote> <p>KeyError: 'the label [price] is not in the [index]'</p> </blockquote>
<p>As Jeff points out, since this is a MultiIndex column you should use a tuple to access it:</p> <pre><code>resampled_data['price']['close'] resampled_data[('price', 'close')] resampled_data.loc[:, ('price', 'close')]  # equivalent </code></pre> <p>This also disambiguates it from selecting a row and a column:</p> <pre><code>resampled_data.loc['close', 'price'] </code></pre> <p>(which is what pandas was trying to do when it gave the KeyError.)</p> <p>You'll usually see the SettingWithCopy warning if you use consecutive [] in your code, and these are best combined into one [], e.g. using loc:</p> <pre><code>resampled_data.loc['price']['close'] = ... # this *may* set to a copy </code></pre> <p>If you do set to a copy (sometimes the above may actually not be a copy, but pandas makes no guarantee here), the copy will be correctly updated but then immediately garbage collected.</p> <p><em>Aside: as mentioned in comments, resample offers <code>how='ohlc'</code>, so you may be best off doing this, padding, filling and then joining with the resampled volumes.</em></p>
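<p>To make the aside concrete, a rough (untested) sketch of that approach, using the same old-style <code>resample(..., how=...)</code> API and column names as the question:</p> <pre><code>prices = raw_data['price'].resample(time_frame, how='ohlc')
prices['price'] = raw_data['price'].resample(time_frame, how='mean')
volume = raw_data['amount'].resample(time_frame, how='sum').fillna(0.0)

prices['close'] = prices['close'].fillna(method='pad')
for col in ['open', 'high', 'low', 'price']:
    prices[col] = prices[col].fillna(prices['close'])

prices['volume'] = volume
prices['volume_quote'] = volume
output_data = prices
</code></pre> <p>This keeps a flat set of columns, so the multi-index flattening at the end of the original function is no longer needed.</p>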
python|pandas|resampling
2
233
13,736,115
How to create values from Zipf Distribution with range n in Python?
<p>I would like to create an array of Zipf-distributed values within the range [0, 1000].</p> <p>I am using <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.random.zipf.html" rel="nofollow">numpy.random.zipf</a> to create the values but I cannot create them within the range I want.</p> <p>How can I do that?</p>
<p>Normalize and multiply by 1000?</p> <pre><code>import numpy as np a = 2 s = np.random.zipf(a, 1000) result = (s/float(max(s)))*1000 print min(s), max(s) print min(result), max(result) </code></pre> <p>Although, isn't the whole point of Zipf that the range of values is a function of the number of values generated?</p>
python|numpy|distribution
3
234
13,691,485
Issue with merging two dataframes in Pandas
<p>I am attempting to left merge two dataframes, but I am running into an issue. I get only NaN's in columns that are in the right dataframe.</p> <p>This is what I did:</p> <pre><code>X = read_csv('fileA.txt',sep=',',header=0); print "-----FILE DATA-----" print X; X = X.astype(object); # convert every column to string type? does it do it? print "-----INTERNALS-----" pprint(vars(X)); Y = file_to_dataframe('fileB.txt',',',0); print "-----FILE DATA-----" print Y; print "-----INTERNALS-----" pprint(vars(Y)); Z = merge(X,Y,how='left'); print Z; sys.exit(); Y = file_to_dataframe('tmp.chr20.thresh.frq.count','\t',0); print Y.dtypes; def file_to_dataframe(filename,sep,header): # list of dict's i = 0; k = 0; cols = list(); colNames = list(); for line in fileinput.input([filename]): line = line.rstrip('\n'); lst = line.split(sep); if i == header: # row number to use as the column names for colName in lst: colNames.append(colName); elif i &gt; header: j = 0; record = dict(); for j in range(0,len(lst)): # iterate over all tokens in the current line if j &gt;= len(colNames): colNames.append('#Auto_Generated_Label_'+ str(k)); k += 1; record[colNames[j]] = lst[j]; cols.append(record); # push the record onto stack i += 1; return DataFrame.from_records(cols); </code></pre> <p>Here's the output:</p> <p>-----FILE DATA-----</p> <pre><code> Chrom Gene Position 0 20 DZANK1 18446022 1 20 TGM6 2380332 2 20 C20orf96 271226 </code></pre> <p>-----INTERNALS-----</p> <pre><code>{'_data': BlockManager Items: array([Chrom, Gene, Position], dtype=object) Axis 1: array([0, 1, 2]) ObjectBlock: array([Chrom, Gene, Position], dtype=object), 3 x 3, dtype object, '_item_cache': {}} </code></pre> <p>-----FILE DATA-----</p> <pre><code> Chrom Position Random 0 20 18446022 ABC 1 20 2380332 XYZ 2 20 271226 PQR </code></pre> <p>-----INTERNALS-----</p> <pre><code>{'_data': BlockManager Items: array([Chrom, Position, Random], dtype=object) Axis 1: array([0, 1, 2]) ObjectBlock: array([Chrom, Position, Random], dtype=object), 3 x 3, dtype object, '_item_cache': {}} Chrom Gene Position Random 0 20 C20orf96 271226 NaN 1 20 TGM6 2380332 NaN 2 20 DZANK1 18446022 NaN </code></pre> <p>As you see, there's a column of NaN's there where there should be values from Random column in Y. Any ideas on how to debug this?</p>
<p>Working for me (v0.10.0b1, though I am somewhat confident--but haven't checked-- this would also work in 0.9.1):</p> <pre><code>In [7]: x Out[7]: Chrom Gene Position 0 20 DZANK1 18446022 1 20 TGM6 2380332 2 20 C20orf96 271226 In [8]: y Out[8]: Chrom Position Random 0 20 18446022 ABC 1 20 2380332 XYZ 2 20 271226 PQR In [9]: pd.merge(x, y, how='left') Out[9]: Chrom Gene Position Random 0 20 DZANK1 18446022 ABC 1 20 TGM6 2380332 XYZ 2 20 C20orf96 271226 PQR </code></pre> <p>I'm very surprised that all the columns are object dtype. There must be some kind of parsing problem with your data-- examine the values in each column (not what they look like, but what they actually <em>are</em>, strings, ints, what?)</p>
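<p>If it turns out the join keys are stored as strings in one frame and as integers in the other (plausible here, since <code>file_to_dataframe</code> never converts its tokens), one possible fix is to force a common dtype on the keys before merging — an untested sketch:</p> <pre><code>for col in ['Chrom', 'Position']:
    X[col] = X[col].astype(int)
    Y[col] = Y[col].astype(int)

Z = merge(X, Y, how='left')
</code></pre> <p>With matching dtypes on the key columns, the left merge should pick up the <code>Random</code> values instead of NaN.</p>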
python|pandas|dataset
1
235
29,518,923
numpy.asarray: how to check up that its result dtype is numeric?
<p>I have to create a <code>numpy.ndarray</code> from array-like data with int, float or complex numbers.</p> <p>I hope to do it with <code>numpy.asarray</code> function.</p> <p>I don't want to give it a strict <code>dtype</code> argument, because I want to convert complex values to <code>complex64</code> or <code>complex128</code>, floats to <code>float32</code> or <code>float64</code>, etc.</p> <p>But if I just simply run <code>numpy.ndarray(some_unknown_data)</code> and look at the dtype of its result, how can I understand, that the data is numeric, not object or string or something else?</p>
<p>You could check if the dtype of the array is a sub-dtype of <code>np.number</code>. For example:</p> <pre><code>&gt;&gt;&gt; np.issubdtype(np.complex128, np.number) True &gt;&gt;&gt; np.issubdtype(np.int32, np.number) True &gt;&gt;&gt; np.issubdtype(np.str_, np.number) False &gt;&gt;&gt; np.issubdtype('O', np.number) # 'O' is object False </code></pre> <p>Essentially, this just checks whether the dtype is below 'number' in the <a href="http://docs.scipy.org/doc/numpy/reference/arrays.scalars.html" rel="noreferrer">NumPy dtype hierarchy</a>:</p> <p><img src="https://i.stack.imgur.com/ziJg1.png" alt="enter image description here"> </p>
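<p>Applied to the situation in the question, that check might look something like this (where <code>some_unknown_data</code> and <code>process</code> are just placeholders for your own input and handling code):</p> <pre><code>arr = np.asarray(some_unknown_data)   # let NumPy pick float32/float64/complex128/...
if np.issubdtype(arr.dtype, np.number):
    process(arr)                      # int, float or complex data
else:
    raise TypeError('expected numeric data, got dtype %r' % arr.dtype)
</code></pre>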
python|arrays|numpy|types
67
236
62,235,278
Hotencoded values & DataFrame for logistic regression
<p>I am trying to run a logistic regression on a dataset that has features with some categorical values. In order to process those features through regression, I was planning to encode them:</p> <pre><code>#Select categorical features only &amp; encode name numerically with LabelEncoder cat_features = df.select_dtypes(include=[object]) label_enc = preprocessing.LabelEncoder() le_features = cat_features.apply(label_enc.fit_transform) #Aggregate all encoded values into a binary matrix enc = preprocessing.OneHotEncoder() enc.fit(le_features) final_cat_features = enc.transform(le_features).toarray() </code></pre> <p>After running this code, I can confirm it returns an encoded matrix:</p> <pre><code>(4665, 290) &lt;class 'numpy.ndarray'&gt; </code></pre> <p>This is where I get stuck. How am I supposed to regenerate a dataframe from that exactly?! Should I concatenate the 290 columns together in order to end up with a new feature to add to my new dataframe? If not, I have to say I am stuck here.</p>
<p>You should add all 290 columns to your dataframe with the remaining (i.e. non-categorical or numerical) values. For that you can create a dataframe from the array and join it to the original dataframe:</p> <pre><code>final_cat_features_df = pd.DataFrame(final_cat_features, index=df.index) df = df.join(final_cat_features_df) </code></pre> <p>As an alternative, you may want to have a look at pandas <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.get_dummies.html" rel="nofollow noreferrer"><code>get_dummies</code></a>.</p>
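<p>For reference, the <code>get_dummies</code> route mentioned above could replace the whole LabelEncoder/OneHotEncoder step — a hypothetical sketch:</p> <pre><code>cat_cols = df.select_dtypes(include=[object]).columns.tolist()

# one-hot encode the categorical columns in place, keeping the numeric ones untouched
df_encoded = pd.get_dummies(df, columns=cat_cols)
</code></pre> <p>The resulting <code>df_encoded</code> already contains both the numeric features and the dummy columns, so no separate join is needed before fitting the logistic regression.</p>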
python|pandas|dataframe|encoding|scikit-learn
1
237
62,381,286
How to obtain sequence of submodules from a pytorch module?
<p>For a pytorch <a href="https://pytorch.org/docs/master/generated/torch.nn.Module.html" rel="nofollow noreferrer">module</a>, I suppose I could use <code>.named_children</code>, <code>.named_modules</code>, etc. to obtain a list of the submodules. However, I suppose the list is not given in order, right? An example: </p> <pre><code>In [19]: import transformers In [20]: model = transformers.DistilBertForSequenceClassification.from_pretrained('distilb ...: ert-base-cased') In [21]: [name for name, _ in model.named_children()] Out[21]: ['distilbert', 'pre_classifier', 'classifier', 'dropout'] </code></pre> <p>The order of <code>.named_children()</code> in the above model is given as distilbert, pre_classifier, classifier, and dropout. However, if you examine the <a href="https://github.com/huggingface/transformers/blob/9931f817b75ecb2c8bb08b6e9d4cbec4b0933935/src/transformers/modeling_distilbert.py#L641" rel="nofollow noreferrer">code</a>, it is evident that <code>dropout</code> happens before <code>classifier</code>. So how do I get the order of these submodules? </p>
<p>In Pytorch, the results of <code>print(model)</code> or <code>.named_children()</code>, etc are listed based on the order they are declared in <code>__init__</code> of the model's class e.g.</p> <p><strong>Case 1</strong></p> <pre><code>class Model(nn.Module): def __init__(self): super().__init__() self.conv1 = nn.Conv2d(1, 10, kernel_size=5) self.conv2 = nn.Conv2d(10, 20, kernel_size=5) self.fc1 = nn.Linear(320, 50) self.fc2 = nn.Linear(50, 10) self.conv2_drop = nn.Dropout2d() def forward(self, x): x = F.relu(F.max_pool2d(self.conv1(x), 2)) x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2)) x = x.view(-1, 320) x = F.relu(self.fc1(x)) x = F.dropout(x, p=0.6) x = self.fc2(x) return F.log_softmax(x, dim=1) model = Model() print(model) [name for name, _ in model.named_children()] # output ['conv1', 'conv2', 'fc1', 'fc2', 'conv2_drop'] </code></pre> <p><strong>Case 2</strong></p> <p>Changed order of <code>fc1</code> and <code>fc2</code> layers in constructor.</p> <pre><code>class Model(nn.Module): def __init__(self): super().__init__() self.conv1 = nn.Conv2d(1, 10, kernel_size=5) self.conv2 = nn.Conv2d(10, 20, kernel_size=5) self.fc2 = nn.Linear(50, 10) self.fc1 = nn.Linear(320, 50) self.conv2_drop = nn.Dropout2d() def forward(self, x): x = F.relu(F.max_pool2d(self.conv1(x), 2)) x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2)) x = x.view(-1, 320) x = F.relu(self.fc1(x)) x = F.dropout(x, p=0.6) x = self.fc2(x) return F.log_softmax(x, dim=1) model = Model() print(model) [name for name, _ in model.named_children()] # output ['conv1', 'conv2', 'fc2', 'fc1', 'conv2_drop'] </code></pre> <hr> <p>That's why <code>classifier</code> is printed before <code>dropout</code> as it's declared so in constructor:</p> <pre><code>class DistilBertForSequenceClassification(DistilBertPreTrainedModel): ... self.distilbert = DistilBertModel(config) self.pre_classifier = nn.Linear(config.dim, config.dim) self.classifier = nn.Linear(config.dim, config.num_labels) self.dropout = nn.Dropout(config.seq_classif_dropout) </code></pre> <hr> <p>Nevertheless, you can play with model's submodules using <code>.modules()</code>, etc. but they'll be listed only in the order they are declared in <code>__init__</code>. If you only want to print structure based on <code>forward</code> method, you may try using <a href="https://github.com/Fangyh09/pytorch-summary" rel="nofollow noreferrer">pytorch-summary</a>.</p>
pytorch|huggingface-transformers
1
238
62,433,943
Trying to apply a function on a Pandas DataFrame in Python
<p>I'm trying to apply this function to fill the <code>Age</code> column based on <code>Pclass</code> and <code>Sex</code> columns. But I'm unable to do so. How can I make it work?</p> <pre><code>def fill_age(): Age = train['Age'] Pclass = train['Pclass'] Sex = train['Sex'] if pd.isnull(Age): if Pclass == 1: return 34.61 elif (Pclass == 1) and (Sex == 'male'): return 41.2813 elif (Pclass == 2) and (Sex == 'female'): return 28.72 elif (Pclass == 2) and (Sex == 'male'): return 30.74 elif (Pclass == 3) and (Sex == 'female'): return 21.75 elif (Pclass == 3) and (Sex == 'male'): return 26.51 else: pass else: return Age train['Age'] = train['Age'].apply(fill_age(),axis=1) </code></pre> <p>I'm getting the following error:</p> <blockquote> <p>ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().</p> </blockquote>
<p>You should consider using parentheses to separate the arguments (which you already did) and change the boolean operator <code>and</code> for the bitwise operator <code>&amp;</code> to avoid this type of error. Also, keep in mind that if you want to use <code>apply</code> then you should use a parameter <code>x</code> for the function, which will be part of a lambda in the <code>apply</code> call:</p> <pre><code>def fill_age(x): Age = x['Age'] Pclass = x['Pclass'] Sex = x['Sex'] if pd.isnull(Age): if Pclass == 1: return 34.61 elif (Pclass == 1) &amp; (Sex == 'male'): return 41.2813 elif (Pclass == 2) &amp; (Sex == 'female'): return 28.72 elif (Pclass == 2) &amp; (Sex == 'male'): return 30.74 elif (Pclass == 3) &amp; (Sex == 'female'): return 21.75 elif (Pclass == 3) &amp; (Sex == 'male'): return 26.51 else: pass else: return Age </code></pre> <p>Now, using apply with the lambda (note that it is applied to the whole frame, row by row, not to the <code>Age</code> column alone):</p> <pre><code>train['Age'] = train.apply(lambda x: fill_age(x), axis=1) </code></pre> <p>In a sample dataframe:</p> <pre><code>df = pd.DataFrame({'Age':[1,np.nan,3,np.nan,5,6], 'Pclass':[1,2,3,3,2,1], 'Sex':['male','female','male','female','male','female']}) </code></pre> <p>Using the answer provided above:</p> <pre><code>df['Age'] = df.apply(lambda x: fill_age(x),axis=1) </code></pre> <p>Output:</p> <pre><code> Age Pclass Sex 0 1.00 1 male 1 28.72 2 female 2 3.00 3 male 3 21.75 3 female 4 5.00 2 male 5 6.00 1 female </code></pre>
python|pandas
1
239
62,362,478
What is the difference between batch size in the data pipeline and batch size in model.fit()?
<p>Are these 2 the same batch size, or do they have different meanings?</p> <pre><code>BATCH_SIZE=10 dataset = tf.data.Dataset.from_tensor_slices((filenames, labels)) dataset = dataset.batch(BATCH_SIZE) </code></pre> <p>2nd:</p> <pre><code>history = model.fit(train_ds, epochs=EPOCHS, validation_data=create_dataset(X_valid, y_valid_bin), max_queue_size=1, workers=1, batch_size=10, use_multiprocessing=False) </code></pre> <p>I'm also running out of RAM during training. I have roughly 333,000 training images, 30 GB of RAM and a 12 GB GPU. What should the batch size be for this?</p> <p><a href="https://github.com/mobinalhassan/Multi-label-image-classification/blob/master/iMaterialist%20c-2020.ipynb" rel="nofollow noreferrer">Full Code Here</a></p>
<p><strong>Dataset (Batch Size)</strong></p> <p>This batch size only describes how much data passes through the pipeline you defined. In the case of a <code>Dataset</code>, it is how many records are handed over in one iteration. For example, if you build a data generator and set the batch size to 8, then on every iteration the generator yields 8 records.</p> <p><strong>Model.fit (Batch Size)</strong></p> <p>In <code>model.fit</code>, the batch size means the model will calculate the loss after seeing that many records. If you know how deep learning models work, they compute a loss on the forward pass and then improve themselves through back-propagation. So if you set a batch size of 8 in <code>model.fit</code>, 8 records are passed to the model, the loss is calculated over those 8 records, and the model is updated from that loss.</p> <p><strong>Example:</strong></p> <p>If you set the dataset batch size to 4 and the <code>model.fit</code> batch size to 8, then your dataset generator has to iterate twice to supply 8 images to the model, while <code>model.fit</code> performs only 1 iteration to calculate the loss.</p> <p><strong>RAM Issue</strong></p> <p>What is your image size? Try reducing <code>batch_size</code>: steps per epoch are not related to RAM, but the batch size is. With a batch size of 10, 10 images have to be loaded into RAM at once for processing, and your RAM may not be able to hold them all. Try a batch size of 4 or 2; that may help.</p>
tensorflow|machine-learning|neural-network|data-science|batchsize
2
240
48,331,736
Efficiently computing Khatri-Rao like sum (pairwise row sum)
<p>I'm trying to compute <a href="https://en.wikipedia.org/wiki/Kronecker_product#Khatri%E2%80%93Rao_product" rel="nofollow noreferrer">Khatri-Rao</a> like sum (i.e. pairwise row sum) and was able to come up with this solution:</p> <pre><code>In [15]: arr1 Out[15]: array([[1, 2, 3], [2, 3, 4], [3, 4, 5]]) In [16]: arr2 Out[16]: array([[11, 12, 13], [12, 13, 14], [13, 14, 15]]) # for every row in `arr1`, sum it with all rows in `arr2` (in pairwise manner) In [17]: np.repeat(arr1, arr2.shape[0], 0) + np.tile(arr2, (arr1.shape[0], 1)) Out[17]: array([[12, 14, 16], [13, 15, 17], [14, 16, 18], [13, 15, 17], [14, 16, 18], [15, 17, 19], [14, 16, 18], [15, 17, 19], [16, 18, 20]]) # thus `axis0` in the result will become `arr1.shape[0] * arr2.shape[0]` In [18]: (np.repeat(arr1, arr2.shape[0], 0) + np.tile(arr2, (arr1.shape[0], 1))).shape Out[18]: (9, 3) </code></pre> <p>It works perfectly fine. However, I was wondering whether is this the <strong>optimized way</strong> to do this computation. I also timed the computation time for a (fairly) large array</p> <pre><code># inputs In [69]: arr1 = np.arange(9000).reshape(100, 90) In [70]: arr2 = np.arange(45000).reshape(500, 90) In [71]: (np.repeat(arr1, arr2.shape[0], 0) + np.tile(arr2, (arr1.shape[0], 1))).shape Out[71]: (50000, 90) In [72]: %timeit np.repeat(arr1, arr2.shape[0], 0) + np.tile(arr2, (arr1.shape[0], 1)) 22.5 ms ± 420 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) </code></pre> <p>Is it possible to optimize it further, maybe using more sophisticated approaches? </p> <p>Also, I'm not completely sure about whether <a href="https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.einsum.html" rel="nofollow noreferrer"><code>numpy.einsum()</code></a> can be leveraged here.. Because, as far as I understand, it can't be used to increase the shape of the resultant array, which is what is happening here. I welcome corrections, suggestions, and improvements to my solution :)</p>
<p>We can leverage <code>broadcasting</code> -</p> <pre><code>(arr1[:,None] + arr2).reshape(-1,arr1.shape[1]) </code></pre> <p>For large arrays, we can gain some further speedup with <a href="http://numexpr.readthedocs.io/en/latest/intro.html#how-it-works" rel="nofollow noreferrer"><code>numexpr</code></a> to transfer the <code>broadcasting</code> part -</p> <pre><code>import numexpr as ne arr1_3D = arr1[:,None] out = ne.evaluate('arr1_3D + arr2').reshape(-1,arr1.shape[1]) </code></pre> <p>Runtime test -</p> <pre><code>In [545]: arr1 = np.random.rand(500,500) In [546]: arr2 = np.random.rand(500,500) In [547]: %timeit (arr1[:,None] + arr2).reshape(-1,arr1.shape[1]) 1 loop, best of 3: 215 ms per loop In [548]: %%timeit ...: arr1_3D = arr1[:,None] ...: out = ne.evaluate('arr1_3D + arr2').reshape(-1,arr1.shape[1]) 10 loops, best of 3: 174 ms per loop </code></pre>
python|performance|numpy|multidimensional-array|linear-algebra
2
241
48,351,276
Pandas Creating Normal Dist series
<p>I'm trying to convert an excel "normal distribution" formula into python.</p> <p>(1-NORM.DIST(a+col,b,c,TRUE))/(1-NORM.DIST(a,b,c,TRUE)))</p> <p>For example: Here's my given df</p> <pre><code>Id a b c ijk 4 3.5 12.53 xyz 12 3 10.74 </code></pre> <p>My goal: </p> <pre><code>Id a b c 0 1 2 3 ijk 4 3.5 12.53 1 .93 .87 .81 xyz 12 3 10.74 1 .87 .76 .66 </code></pre> <p>Here's the math behind it:</p> <p>column 0: always 1</p> <p>column 1: (1-NORM.DIST(a+1,b,c,TRUE))/(1-NORM.DIST(a,b,c,TRUE))</p> <p>column 2: (1-NORM.DIST(a+2,b,c,TRUE))/(1-NORM.DIST(a,b,c,TRUE))</p> <p>column 3: (1-NORM.DIST(a+3,b,c,TRUE))/(1-NORM.DIST(a,b,c,TRUE))</p> <p>This is what I have so far:</p> <pre><code>df1 = pd.DataFrame(df, columns=np.arange(0,4)) result = pd.concat([df, df1], axis=1, join_axes=[df.index]) result[0] = 1 </code></pre> <p>I'm not sure what to do after this.</p> <p>This is how I use the normal distribution function: <a href="https://support.office.com/en-us/article/normdist-function-126db625-c53e-4591-9a22-c9ff422d6d58" rel="nofollow noreferrer">https://support.office.com/en-us/article/normdist-function-126db625-c53e-4591-9a22-c9ff422d6d58</a></p> <p>Many many thanks!</p>
<p><code>NORM.DIST(..., TRUE)</code> means the cumulative distribution function and <code>1 - NORM.DIST(..., TRUE)</code> means the survival function. These are available under scipy's stats module (see <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.html" rel="nofollow noreferrer">ss.norm</a>). For example,</p> <pre><code>import scipy.stats as ss ss.norm.cdf(4, 3.5, 12.53) Out: 0.51591526057026538 </code></pre> <p>For your case, you can first define a function:</p> <pre><code>def normalize(a, b, c, col): return ss.norm.sf(a+col, b, c) / ss.norm.sf(a, b, c) </code></pre> <p>and call that function with <code>apply</code>:</p> <pre><code>for col in range(4): df[col] = df.apply(lambda x: normalize(x.a, x.b, x.c, col), axis=1) df Out: Id a b c 0 1 2 3 0 ijk 4 3.5 12.53 1.0 0.934455 0.869533 0.805636 1 xyz 12 3.0 10.74 1.0 0.875050 0.760469 0.656303 </code></pre> <p>This is not the most efficient approach as it calculates the survival function for same values again and involves two loops. One level of loops can be omitted by passing an array of values to <code>ss.sf</code>:</p> <pre><code>out = df.apply( lambda x: pd.Series( ss.norm.sf(x.a + np.arange(4), x.b, x.c) / ss.norm.sf(x.a, x.b, x.c) ), axis=1 ) Out: 0 1 2 3 0 1.0 0.934455 0.869533 0.805636 1 1.0 0.875050 0.760469 0.656303 </code></pre> <p>And you can use join to add this to your original DataFrame:</p> <pre><code>df.join(out) Out: Id a b c 0 1 2 3 0 ijk 4 3.5 12.53 1.0 0.934455 0.869533 0.805636 1 xyz 12 3.0 10.74 1.0 0.875050 0.760469 0.656303 </code></pre>
pandas|normal-distribution
2
242
47,995,208
Deep neural network: could a random batch size be useful for training?
<p>I have been working with Keras for about three months and now I wonder whether it could be useful to train on batches of different (random) sizes (16, 32, 64, 128), to combine the benefits of the different batch sizes.</p> <p>I didn't find any document that answers this question. Am I totally wrong?</p>
<p>I've seen two most popular strategies of working with the batch size:</p> <ul> <li><p>Select it as large as possible so that the model still fits in GPU memory. This is done mostly to speed up training due to parallelism and vectorization.</p></li> <li><p>Tune batch size, just like any other hyper-parameter, either via random search or via <a href="https://stackoverflow.com/q/41860817/712995">Bayesian Optimization</a>. Surprisingly, bigger batch size doesn't always mean better model performance (though in many cases it does). See <a href="https://stats.stackexchange.com/q/140811/130598">this discussion</a> on this matter: the main idea is that extra noise in training can be beneficial to generalization. Remember that L2 regularization is equivalent to adding Gaussian noise to <code>x</code>. Reducing batch size also adds noise to the training process, especially if you're using batch-norm.</p></li> </ul> <p>I don't know any work on changing the batch size for the <em>same</em> model during training. But choosing the random batch size for <em>different</em> models can have benefits for sure.</p> <p>Caveat: in some settings, e.g., in deep reinforcement learning, extra noise actually hurts the performance, so in case reducing batch size can be a bad idea. So, as always, it greatly depends on your problem.</p>
tensorflow|machine-learning|deep-learning|keras
0
243
48,629,866
declaring matrix as np.float32(np.array())
<pre><code> matrix = np.float32(np.array([[0.0 for i in range(dimension)] for j in range(dimension)])) </code></pre> <p>If I want to do matrix operation in single precision, is the declaring array as above sufficient, or do I have to truncate for every arithmetic operation as follows?</p> <pre><code>np.float32(matrix[a][b] op matrix[c][d]) </code></pre>
<p>No, you can specify the <code>dtype</code> of the array:</p> <pre><code>np.array([[0.0 for i in range(dimension)] for j in range(dimension)], <b>dtype=np.float32</b>)</code></pre> <p>Note that if you work with zeros, you can also use:</p> <pre><code>np.zeros((dimension, dimension), dtype=np.float32) </code></pre> <p>By specifying <code>dtype</code>, <em>all</em> elements of the array have the same type. You can also specify the type of data if for instance <em>each</em> column has the same type, but the columns might differ, like is described in <a href="https://stackoverflow.com/a/24833188/67579">this answer</a>.</p>
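<p>To address the second part of the question: once the array is <code>float32</code>, arithmetic between its elements (or between whole <code>float32</code> arrays) already stays in single precision, so there is no need to wrap every operation in <code>np.float32(...)</code>. A quick check:</p> <pre><code>m = np.zeros((3, 3), dtype=np.float32)

print((m[0, 1] + m[1, 2]).dtype)   # float32
print(m.dot(m).dtype)              # float32
</code></pre> <p>Mixing in <code>float64</code> operands (for example another double-precision array) is what would promote the result to double precision.</p>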
python|numpy
3
244
48,605,961
how to change the datatype of numpy.random.rand array in python?
<p>I'm learning Numpy module from scratch going through the official Numpy documentation. <br> Now my goal is to create a random (3, 3) array.</p> <pre><code>&gt;&gt;&gt; np.random.rand(3,3) </code></pre> <p>However the output I recieved is a bunch of random float values.</p> <pre><code>array([[ 0.33519419, 0.53502883, 0.35485002], [ 0.73419236, 0.85315716, 0.64135169], [ 0.51732791, 0.27411508, 0.32482598]]) </code></pre> <p>I wonder how can I do the same with <code>int</code><br><br> Referring the Official documentation of Numpy for the <code>numpy.random.rand</code> <a href="https://docs.scipy.org/doc/numpy-dev/reference/generated/numpy.random.rand.html#numpy.random.rand" rel="nofollow noreferrer">here</a> I don't find a clue on converting or specifying the data type from float values to integer.<br><br> How can I achieve random values in <code>int</code>?</p>
<p>If you want to generate random <code>int</code>, use <a href="https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.random.randint.html" rel="nofollow noreferrer">np.random.randint()</a>.</p> <p>If you want to convert <code>float</code> to <code>int</code>, use <code>np.int32(something)</code>.</p>
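<p>For example, to get a random (3, 3) array of integers:</p> <pre><code>&gt;&gt;&gt; np.random.randint(0, 10, size=(3, 3))   # integers drawn from [0, 10)
array([[3, 7, 0],
       [5, 2, 9],
       [1, 8, 4]])
</code></pre> <p>(The actual numbers will of course differ on each run.)</p>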
python|numpy|random
2
245
48,593,014
Launching tensorboard error - ImportError: cannot import name weakref
<p>With python 2.7 on a mac with tensorflow and running <code>tensorboard --logdir= directory/wheremylog/fileis</code> produces the following error <code>ImportError: cannot import name weakref</code></p> <p>I've seen several folks remedy the issue with <code>pip install backports.weakref </code> but that requirement is already satisfied for me. <code>Requirement already satisfied: backports.weakref in /usr/local/lib/python2.7/site-packages </code></p> <p>I am out of ideas and am really keen to get tensorboard working. </p> <p>Thanks</p>
<p>For those with the same problem, I was able to fix it as follows:</p> <p>1) Find where your TensorFlow installation lives: run <code>pip show tensorflow</code>, look at the Location line and copy it.</p> <p>2) For me it was <code>cd /usr/local/lib/python2.7/site-packages/</code></p> <p>3) <code>cd tensorflow/python/lib</code></p> <p>4) <code>open tf_should_use.py</code></p> <p>5) In your Python editor replace line 28, <code>from backports import weakref</code>, with <code>import weakref</code> and save the file.</p>
python|tensorflow|tensorboard
2
246
48,843,521
Training a RNN to output word2vec embedding instead of logits
<p>Traditionally it seems that RNNs use logits to predict the next time step in the sequence. In my case I need the RNN to output a word2vec (50-dimensional) vector prediction. This means that the cost function has to be based on 2 vectors: <code>Y</code>, the actual vector of the next word in the series, and <code>Y_hat</code>, the network prediction.</p> <p>I've tried using a <code>cosine distance</code> cost function but the network does not seem to learn (I've let it run for over 10 hours on an AWS P3 and the cost is always around 0.7).</p> <p>Is such a model possible at all? If so, what cost function should be used?</p> <p>Cosine distance in TF:</p> <pre><code>cosine_distance = tf.losses.cosine_distance(tf.nn.l2_normalize(outputs, 2), tf.nn.l2_normalize(targets, 2), axis=2) </code></pre> <p><strong>Update:</strong> I am trying to predict a word2vec vector so that during sampling I can pick the next word based on the closest neighbors of the predicted vector. </p>
<p>What is the reason that you want to predict a word embedding? Where are you getting the "ground truth" word embeddings from? For word2vec models you typically will re-use the trained word embeddings in future models. If you trained a <code>word2vec</code> model with an embedding size of 50, then you would have 50-d embeddings that you could save and use in future models. If you just want to re-create an existing ground truth word2vec model, then you could just use those values. A typical word2vec model would have regular softmax outputs via <a href="https://www.quora.com/What-are-the-continuous-bag-of-words-and-skip-gram-architectures" rel="noreferrer">continuous-bag-of-words or skip-gram</a> and then save the resulting word embeddings. </p> <p>If you really do have a reason for trying to build a model that tries to match word2vec, then, looking at your loss function, here are a few suggestions. I do not believe that you should be normalizing your outputs or your targets -- you probably want those to remain unaffected (the targets are no longer the "ground truth" targets if you have normalized them). Also, it appears you are using <code>dim=0</code>, which has now been deprecated and replaced with <code>axis</code>. Did you try different values for <code>dim</code>? This should represent the dimension along which to compute the cosine distance, and I think that the <code>0th</code> dimension would be the wrong dimension (as this is likely the batch size). I would try values of <code>axis=-1</code> (last dimension) or <code>axis=1</code> and see if you observe any difference. </p> <p>Separately, what is your optimizer/learning rate? If the learning rate is too small then you may not actually be able to move enough in the right direction. </p>
python|tensorflow|machine-learning|deep-learning
5
247
70,897,392
How to add rows to a matrix with pad?
<p>I have a matrix like this:</p> <pre><code>profile=np.array([[0,0,0.5,0.1], [0.3,0,0,0], [0,0,0.1,0.9], [0,0,0,0.1], [0,0.5,0,0]]) </code></pre> <p>And I want to add a row before and after filled with zeros. How can I do that? I thought of using <code>np.pad</code> but not sure how.</p> <p>Output should be:</p> <pre><code>np.array([[0,0,0,0], [0,0,0.5,0.1], [0.3,0,0,0], [0,0,0.1,0.9], [0,0,0,0.1], [0,0.5,0,0] [0,0,0,0]]) </code></pre>
<p>You can use <code>np.pad</code>:</p> <pre><code>out = np.pad(profile, 1)[:, 1:-1] </code></pre> <p>Output:</p> <pre><code>&gt;&gt;&gt; out array([[0. , 0. , 0. , 0. ], [0. , 0. , 0.5, 0.1], [0.3, 0. , 0. , 0. ], [0. , 0. , 0.1, 0.9], [0. , 0. , 0. , 0.1], [0. , 0.5, 0. , 0. ], [0. , 0. , 0. , 0. ]]) </code></pre> <p>Because <code>np.pad</code> pads it on <em>all sides</em> (left and right, in addition to top and bottom), <code>[:, 1:-1]</code> slices off the first and last columns.</p>
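<p>Alternatively, you can tell <code>np.pad</code> to pad only the row axis, which avoids the slicing step — a small variation on the same call:</p> <pre><code>&gt;&gt;&gt; np.pad(profile, ((1, 1), (0, 0)))   # (before, after) per axis: 1 row each side, 0 columns
array([[0. , 0. , 0. , 0. ],
       [0. , 0. , 0.5, 0.1],
       [0.3, 0. , 0. , 0. ],
       [0. , 0. , 0.1, 0.9],
       [0. , 0. , 0. , 0.1],
       [0. , 0.5, 0. , 0. ],
       [0. , 0. , 0. , 0. ]])
</code></pre>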
python|python-3.x|numpy
1
248
51,806,675
How can I determine whether a intermediate results has or has no data?
<p>How can I implement "if there exist items in a Tensor then calculate the average value of it, else assign it a certain value"? Take <strong>tf.gather_nd()</strong> for example, choosing some rows from source_tensor with <strong>shape (?, 2)</strong>:</p> <pre><code>result = tf.gather_nd(source_tensor, indices) </code></pre> <p>This should get the items from source_tensor according to indices, but if indices is <strong>an empty list []</strong>, the program will continue and there is nothing in <strong>result</strong>.</p> <p>So I wonder: is there a way to determine whether <strong>result</strong> is empty (that is, it has no data) when building the computational graph in TensorFlow? If so, I want to assign it a constant value manually.</p> <p>Because what I'm going to do next is</p> <pre><code>tf.reduce_mean(result) </code></pre> <p>and if <strong>result</strong> has no data, tf.reduce_mean(result) will produce <strong>nan</strong>.</p>
<p>You should be able to do this via <code>tf.cond</code>, which executes one of two branches depending on some condition. I haven't tested the below code so please report whether it works.</p> <pre><code>mean = tf.cond(tf.size(result) &gt; 0, lambda: tf.reduce_mean(result), lambda: some_constant) </code></pre> <p>The idea is to check whether <code>result</code> contains any items via <code>tf.size</code> (which returns 0 if <code>result</code> is empty); <code>tf.cond</code> expects a boolean predicate, which is what the comparison with 0 provides. Note that <code>some_constant</code> must be (or be convertible to) a tensor of the same dtype as the mean.</p>
python|tensorflow|shapes|dimensions|tensor
1
249
51,743,033
How to set values in a data frame based on index
<p>Here's my data:</p> <pre><code> customer_id feature_1 feature_2 feature_3 0 1 78 73 63 1 2 79 71 66 2 2 82 76 69 3 3 43 32 53 4 3 63 42 54 </code></pre> <p>I want to label the dataframe one row at a time. For example, for index = 3 the target is "bad":</p> <pre><code> customer_id feature_1 feature_2 feature_3 target 0 1 78 73 63 1 2 79 71 66 2 2 82 76 69 3 3 43 32 53 bad 4 3 63 42 54 </code></pre> <p>Basically, I go through the rows one by one with my annotation specialist and set the label for each.</p> <p>Best regards</p>
<p>Use the <code>set_value</code> function. The syntax is:</p> <pre><code>DataFrame.set_value(index, col, value, takeable=False) </code></pre> <p>So for your question the answer would be</p> <pre><code>df.set_value(3, 'target', 'bad') </code></pre>
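<p>Note that <code>set_value</code> was deprecated in later pandas versions and has since been removed; the equivalent scalar setter is <code>.at</code> (or <code>.loc</code>):</p> <pre><code>df.at[3, 'target'] = 'bad'       # label-based scalar assignment
# or, more generally
df.loc[3, 'target'] = 'bad'
</code></pre>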
python|pandas|dataframe
2
250
41,918,795
Minimize a function of one variable in Tensorflow
<p>I am new to Tensorflow and was wondering whether it would be possible to minimize a function of one variable using Tensorflow.</p> <p>For example, can we use Tensorflow to minimize 2*x^2 - 5*x + 4 using an initial guess (say x = 1)?</p> <p>I am trying the following:</p> <pre><code>import tensorflow as tf import numpy as np X = tf.placeholder(tf.float32, shape = ()) xvar = tf.Variable(np.random.randn()) f = 2*mul(X,X) - 5*X + 4 opt = tf.train.GradientDescentOptimizer(0.5).minimize(f) with tf.Session() as sess: tf.global_variables_initializer().run() y = sess.run(opt, feed_dict = {X : 5.0}) #initial guess = 5.0 print(y) </code></pre> <p>But this gives the following error:</p> <pre><code>ValueError: No gradients provided for any variable, check your graph for ops that do not support gradients, between variables </code></pre> <p>Please help me understand what I am doing wrong here.</p>
<p>If you want to minimize a single parameter you could do the following (I've avoided using a placeholder since you are trying to train a parameter - placeholders are often used for hyper-parameters and input and aren't considered trainable parameters):</p> <pre><code>import tensorflow as tf x = tf.Variable(10.0, trainable=True) f_x = 2 * x* x - 5 *x + 4 loss = f_x opt = tf.train.GradientDescentOptimizer(0.1).minimize(f_x) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) for i in range(100): print(sess.run([x,loss])) sess.run(opt) </code></pre> <p>This will output the following list of pairs (x,loss):</p> <pre><code>[10.0, 154.0] [6.5, 56.0] [4.4000001, 20.720001] [3.1400001, 8.0192013] [2.3840001, 3.4469128] [1.9304, 1.8008881] [1.65824, 1.2083197] [1.494944, 0.99499512] [1.3969663, 0.91819811] [1.3381798, 0.89055157] [1.3029079, 0.88059855] [1.2817447, 0.87701511] [1.2690468, 0.87572551] [1.2614281, 0.87526155] [1.2568569, 0.87509394] [1.2541142, 0.87503386] [1.2524685, 0.87501216] [1.2514811, 0.87500429] [1.2508886, 0.87500143] [1.2505331, 0.87500048] [1.2503198, 0.875] [1.2501919, 0.87500024] [1.2501152, 0.87499976] [1.2500691, 0.875] [1.2500415, 0.875] [1.2500249, 0.87500024] [1.2500149, 0.87500024] [1.2500089, 0.875] [1.2500054, 0.87500024] [1.2500032, 0.875] [1.2500019, 0.875] [1.2500012, 0.87500024] [1.2500007, 0.87499976] [1.2500005, 0.875] [1.2500002, 0.87500024] [1.2500001, 0.87500024] [1.2500001, 0.87500024] [1.2500001, 0.87500024] [1.2500001, 0.87500024] [1.2500001, 0.87500024] [1.2500001, 0.87500024] [1.2500001, 0.87500024] [1.2500001, 0.87500024] [1.2500001, 0.87500024] [1.2500001, 0.87500024] [1.2500001, 0.87500024] [1.2500001, 0.87500024] [1.2500001, 0.87500024] [1.2500001, 0.87500024] [1.2500001, 0.87500024] </code></pre>
python|python-2.7|tensorflow
18
251
41,860,817
Hyperparameter optimization for Deep Learning Structures using Bayesian Optimization
<p>I have constructed a CLDNN (Convolutional, LSTM, Deep Neural Network) structure for a raw signal classification task.</p> <p>Each training epoch runs for about 90 seconds and the hyperparameters seem to be very difficult to optimize.</p> <p>I have been researching various ways to optimize the hyperparameters (e.g. random or grid search) and found out about Bayesian Optimization.</p> <p>Although I am still not fully understanding the optimization algorithm, I feed like it will help me greatly.</p> <p>I would like to ask a few questions regarding the optimization task.</p> <ol> <li>How do I set up the Bayesian Optimization with regards to a deep network? (What is the cost function we are trying to optimize?)</li> <li>What is the function I am trying to optimize? Is it the cost of the validation set after N epochs?</li> <li>Is spearmint a good starting point for this task? Any other suggestions for this task?</li> </ol> <p>I would greatly appreciate any insights into this problem.</p>
<blockquote> <p>Although I am still not fully understanding the optimization algorithm, I feed like it will help me greatly.</p> </blockquote> <p>First up, let me briefly explain this part. Bayesian Optimization methods aim to deal with exploration-exploitation trade off in the <a href="https://en.wikipedia.org/wiki/Multi-armed_bandit" rel="noreferrer">multi-armed bandit problem</a>. In this problem, there is an <em>unknown</em> function, which we can evaluate in any point, but each evaluation costs (direct penalty or opportunity cost), and the goal is to find its maximum using as few trials as possible. Basically, the trade off is this: you know the function in a finite set of points (of which some are good and some are bad), so you can try an area around the current local maximum, hoping to improve it (exploitation), or you can try a completely new area of space, that can potentially be much better or much worse (exploration), or somewhere in between.</p> <p>Bayesian Optimization methods (e.g. PI, EI, UCB), build a model of the target function using a <a href="https://en.wikipedia.org/wiki/Gaussian_process" rel="noreferrer">Gaussian Process</a> (GP) and at each step choose the most "promising" point based on their GP model (note that "promising" can be defined differently by different particular methods).</p> <p>Here's an example:</p> <p><a href="https://i.stack.imgur.com/ksQFy.png" rel="noreferrer"><img src="https://i.stack.imgur.com/ksQFy.png" alt="sin(x)*x"></a></p> <p>The true function is <code>f(x) = x * sin(x)</code> (black curve) on <code>[-10, 10]</code> interval. Red dots represent each trial, red curve is the GP <em>mean</em>, blue curve is the mean plus or minus one <em>standard deviation</em>. As you can see, the GP model doesn't match the true function everywhere, but the optimizer fairly quickly identified the "hot" area around <code>-8</code> and started to exploit it.</p> <blockquote> <p>How do I set up the Bayesian Optimization with regards to a deep network?</p> </blockquote> <p>In this case, the space is defined by (possibly transformed) hyperparameters, usually a multidimensional unit hypercube. </p> <p>For example, suppose you have three hyperparameters: a learning rate <code>α in [0.001, 0.01]</code>, the regularizer <code>λ in [0.1, 1]</code> (both continuous) and the hidden layer size <code>N in [50..100]</code> (integer). The space for optimization is a 3-dimensional cube <code>[0, 1]*[0, 1]*[0, 1]</code>. Each point <code>(p0, p1, p2)</code> in this cube corresponds to a trinity <code>(α, λ, N)</code> by the following transformation:</p> <pre><code>p0 -&gt; α = 10**(p0-3) p1 -&gt; λ = 10**(p1-1) p2 -&gt; N = int(p2*50 + 50) </code></pre> <blockquote> <p>What is the function I am trying to optimize? Is it the cost of the validation set after N epochs?</p> </blockquote> <p>Correct, the target function is neural network validation accuracy. Clearly, each evaluation is expensive, because it requires at least several epochs for training.</p> <p>Also note that the target function is <em>stochastic</em>, i.e. two evaluations on the same point may slightly differ, but it's not a blocker for Bayesian Optimization, though it obviously increases the uncertainty.</p> <blockquote> <p>Is spearmint a good starting point for this task? Any other suggestions for this task?</p> </blockquote> <p><a href="https://github.com/kuz/caffe-with-spearmint" rel="noreferrer">spearmint</a> is a good library, you can definitely work with that. 
I can also recommend <a href="http://hyperopt.github.io/hyperopt/" rel="noreferrer">hyperopt</a>.</p> <p>In my own research, I ended up writing my own tiny library, basically for two reasons: I wanted to code exact Bayesian method to use (in particular, I found a <a href="https://arxiv.org/pdf/1009.5419.pdf" rel="noreferrer">portfolio strategy</a> of UCB and PI converged faster than anything else, in my case); plus there is another technique that can save up to 50% of training time called <a href="http://aad.informatik.uni-freiburg.de/papers/15-IJCAI-Extrapolation_of_Learning_Curves.pdf" rel="noreferrer">learning curve prediction</a> (the idea is to skip full learning cycle when the optimizer is confident the model doesn't learn as fast as in other areas). I'm not aware of any library that implements this, so I coded it myself, and in the end it paid off. If you're interested, the code is <a href="https://github.com/maxim5/hyper-engine" rel="noreferrer">on GitHub</a>.</p>
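<p>As an illustration only (not from the answer above), a minimal <code>hyperopt</code> sketch for the three example hyperparameters could look like the following; <code>train_and_validate</code> is a hypothetical function that trains the CLDNN and returns the validation loss to be minimized, and note that hyperopt's TPE algorithm is a sequential model-based optimizer rather than a GP:</p> <pre><code>import numpy as np
from hyperopt import fmin, tpe, hp

space = {
    'alpha': hp.loguniform('alpha', np.log(0.001), np.log(0.01)),  # learning rate
    'lam': hp.loguniform('lam', np.log(0.1), np.log(1.0)),         # regularizer
    'N': hp.quniform('N', 50, 100, 1),                             # hidden layer size
}

def objective(params):
    # train_and_validate is assumed to exist: it trains the network with the
    # given hyperparameters and returns the validation loss after a few epochs
    return train_and_validate(params['alpha'], params['lam'], int(params['N']))

best = fmin(fn=objective, space=space, algo=tpe.suggest, max_evals=50)
print(best)
</code></pre>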
optimization|machine-learning|tensorflow|deep-learning|bayesian
23
252
64,357,821
How to change column values to rows value based on condition
<p>df:</p> <pre><code>items  M1  v1  v2  v3
A      c1  56  52  25
A      c2  66  63  85
B      c1  29  76  36
B      c2  14  24  63
</code></pre> <p>df_output:</p> <pre><code>items  M1  C1  C2
A      V1  56  66
A      V2  52  63
A      V3  25  85
B      V1  29  14
B      V2  76  24
B      V3  36  63
</code></pre> <p>I need to change the column values to row values as in the example. I tried the stack() function but it didn't work.</p>
<p>You are looking to combine <code>stack</code> and <code>unstack</code>:</p> <pre><code>(df.set_index(['items','M1'])
   .unstack('M1')      # unstack promotes M1 to columns
   .stack(level=0)     # stack turns original columns to index level
   .rename_axis(columns=None, index=['item','M1'])  # rename to match output
   .reset_index()
)
</code></pre> <p>Output:</p> <pre><code>  item  M1  c1  c2
0    A  v1  56  66
1    A  v2  52  63
2    A  v3  25  85
3    B  v1  29  14
4    B  v2  76  24
5    B  v3  36  63
</code></pre>
python|pandas
2
253
64,195,126
Order Pandas DataFrame by groups and Timestamp
<p>I have the below sample DataFrame</p> <pre><code>             Timestamp  Item  Char  Value
4  1/7/2020 1:22:22 AM     B   C.B    3.2
0  1/7/2020 1:23:23 AM     A   C.A    1.0
2  1/7/2020 1:23:23 AM     A   C.B    1.3
1  1/7/2020 1:23:24 AM     A   C.A    2.0
5  1/7/2020 1:23:29 AM     B   C.B    3.0
3  1/7/2020 1:25:23 AM     B   C.B    2.0
</code></pre> <p>I would like to add a new column that tells the order in which an Item appears for the same Char, based on the Timestamp. In particular, I would like to assign 1 to the last value, 2 to the second-to-last value and so on.</p> <p>The result should look as follows</p> <pre><code>             Timestamp  Item  Char  Value  Order
0  1/7/2020 1:23:23 AM     A   C.A    1.0      2
1  1/7/2020 1:23:24 AM     A   C.A    2.0      1
2  1/7/2020 1:23:23 AM     A   C.B    1.3      1
3  1/7/2020 1:22:22 AM     B   C.B    3.2      3
4  1/7/2020 1:23:29 AM     B   C.B    3.0      2
5  1/7/2020 1:25:23 AM     B   C.B    2.0      1
</code></pre> <p>As you can see, item B appears several times for Char C.B. I would assign 1 to its most recent value based on the Timestamp.</p> <p>My idea is to group the DataFrame by Item and by Char, then order the rows of each group by the Timestamp in descending order, and finally assign 1 to the first row, 2 to the second and so on. But I don't actually know how to do this.</p> <p>Can you help me out?</p> <p>Thank you very much!</p>
<p>Let's <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.groupby.html" rel="nofollow noreferrer"><code>groupby</code></a> the column <code>Timestamp</code> on <code>Char</code> and <code>Item</code> and compute the <a href="https://pandas.pydata.org/pandas-docs/version/0.23.4/generated/pandas.core.groupby.GroupBy.rank.html" rel="nofollow noreferrer"><code>rank</code></a> using <code>method=first</code>, then use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.sort_values.html" rel="nofollow noreferrer"><code>sort_values</code></a> to sort the dataframe based on <code>Char</code> and <code>Item</code>:</p> <pre><code>df['Order'] = pd.to_datetime(df['Timestamp'])\
                .groupby([df['Char'], df['Item']])\
                .rank(method='first', ascending=False)

df = df.sort_values(['Char', 'Item'], ignore_index=True)
</code></pre> <hr /> <pre><code>             Timestamp Item Char  Value  Order
0  1/7/2020 1:23:23 AM    A  C.A    1.0    2.0
1  1/7/2020 1:23:24 AM    A  C.A    2.0    1.0
2  1/7/2020 1:23:23 AM    A  C.B    1.3    1.0
3  1/7/2020 1:22:22 AM    B  C.B    3.2    3.0
4  1/7/2020 1:23:29 AM    B  C.B    3.0    2.0
5  1/7/2020 1:25:23 AM    B  C.B    2.0    1.0
</code></pre>
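<p>Note that <code>rank</code> returns floats; if you want the <code>Order</code> column as integers like in the desired output, you can cast it afterwards:</p> <pre><code>df['Order'] = df['Order'].astype(int)
</code></pre>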
python|pandas
4
254
64,455,172
ValueError: could not convert string to float: 'W'
<p>I've been trying to get a Shallow neural network using pandas breast cancer and i keep gettin this error, i would greatly appreciate if someone can tell whats actually wrong and how to fix it.</p> <pre><code>File &quot;D:\Users\USUARIO\Desktop\una carpeta para los oasda proyectos\Ex_Files_Python_EssT\Exercise Files\basic_hands_on.py&quot;, line 55, in predict np.array(WT, dtype=np.float32) ValueError: could not convert string to float: 'W' </code></pre> <p>I tried to convert the value of W on my dictionary to float32 because i need it to actually procces the equation on the predict function but i keep getting that the type of &quot;W&quot; is a string despite the fact that <strong>print([W])</strong> giving me a matrix.</p> <p>this is my code for context sake</p> <pre><code>import pandas as pd from sklearn.model_selection import train_test_split from sklearn.datasets import load_breast_cancer def initialiseNetwork(num_features): W = np.zeros((num_features, 1)) b = 0 parameters = {&quot;W&quot;: W, &quot;b&quot;: b} return parameters def sigmoid(z): a = 1/(1 + np.exp(-z)) return a def forwardPropagation(X, parameters): W = parameters[&quot;W&quot;] b = parameters[&quot;b&quot;] Z = np.dot(W.T,X) + b A = sigmoid(Z) return A def cost(A, Y, num_samples): cost = -1/num_samples *np.sum(Y*np.log(A) + (1-Y)*(np.log(1-A))) return cost def backPropagration(X, Y, A, num_samples): dZ = A - Y dW = (np.dot(X,dZ.T))/num_samples db = np.sum(dZ)/num_samples return dW, db def updateParameters(parameters, dW, db, learning_rate): W = parameters[&quot;W&quot;] - (learning_rate * dW) b = parameters[&quot;b&quot;] - (learning_rate * db) return {&quot;W&quot;: W, &quot;b&quot;: b} def model(X, Y, num_iter, learning_rate): num_features = X.shape[0] num_samples = (X.shape[1]) print(num_samples) parameters = initialiseNetwork(num_features) for i in range(num_iter): A = forwardPropagation(X, parameters) if(i%100 == 0): print(&quot;cost after {} iteration: {}&quot;.format(i, cost(A, Y, num_samples))) dW, db = backPropagration(X, Y, A, num_samples) parameters = updateParameters(parameters, dW, db, learning_rate) return parameters def predict(W, b, X): WT = np.transpose([&quot;W&quot;]) np.array(WT, dtype=np.float32) np.array(WT,dtype=float) Z = np.dot(WT,X) + b Y = np.array([1 if y &gt; 0.5 else 0 for y in sigmoid(Z[0])]).reshape(1,len(Z[0])) return Y (X_cancer, y_cancer) = load_breast_cancer(return_X_y = True) X_train, X_test, y_train, y_test = train_test_split(X_cancer, y_cancer, random_state = 25) def normalize(data): col_max = np.max(data, axis = 0) col_min = np.min(data, axis = 0) return np.divide(data - col_min, col_max - col_min) X_train_n = normalize(X_train) X_test_n = normalize(X_test) X_trainT = X_train_n.T X_testT = X_test_n.T y_trainT = y_train.reshape(1, (X_trainT.shape[1])) y_testT = y_test.reshape(1, (X_testT.shape[1])) parameters = model(X_trainT, y_trainT, 4000, 0.75) print(parameters) print(X_trainT) yPredTrain = predict(['W'], ['b'], X_trainT) # pass weigths and bias from parameters dictionary and X_trainT as input to the function yPredTest = predict(['W'], ['b'], X_testT) # pass the same parameters but X_testT as input data accuracy_train = 100 - np.mean(np.abs(yPredTrain - y_trainT)) * 100 accuracy_test = 100 - np.mean(np.abs(yPredTest - y_testT)) * 100 print(&quot;train accuracy: {} %&quot;.format(accuracy_train)) print(&quot;test accuracy: {} %&quot;.format(accuracy_test)) with open(&quot;Output.txt&quot;, &quot;w&quot;) as text_file: text_file.write(&quot;train= %f\n&quot; % accuracy_train) 
text_file.write(&quot;test= %f&quot; % accuracy_test) </code></pre>
<p>I was about to cuss you out for not showing us where the problem was occurring. But then I happened to match the error message with</p> <pre><code>def predict(W, b, X):
    WT = np.transpose([&quot;W&quot;])
    np.array(WT, dtype=np.float32)
    ...
</code></pre> <p>Of course that would produce this error. An array with the character &quot;W&quot; in it certainly can't be turned into a float.</p> <p>You use <code>predict(['W'], ...)</code> in a number of places as well; inside <code>predict</code>, <code>np.transpose([&quot;W&quot;])</code> builds an array from the literal string instead of using the actual weights.</p>
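<p>A minimal sketch of how the calls could be fixed, assuming the weights and bias should come from the <code>parameters</code> dictionary returned by <code>model()</code> (this is an illustration, not the only possible fix):</p> <pre><code>def predict(W, b, X):
    # W and b are the actual arrays here, not the strings 'W' and 'b'
    Z = np.dot(W.T, X) + b
    Y = np.array([1 if y &gt; 0.5 else 0 for y in sigmoid(Z[0])]).reshape(1, len(Z[0]))
    return Y

yPredTrain = predict(parameters[&quot;W&quot;], parameters[&quot;b&quot;], X_trainT)
yPredTest = predict(parameters[&quot;W&quot;], parameters[&quot;b&quot;], X_testT)
</code></pre>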
python|pandas|numpy|neural-network
0
255
49,078,385
Tensorflow: Select one column index per row using a list of indices
<p>I saw questions <a href="https://stackoverflow.com/questions/39684415/tensorflow-getting-elements-of-every-row-for-specific-columns">39684415</a>, <a href="https://stackoverflow.com/questions/37026425/elegant-way-to-select-one-element-per-row-in-tensorflow">37026425</a>, and <a href="https://stackoverflow.com/questions/40722200/tensorflow-index-per-row">40722200</a> which have answers and asked the same question. </p> <p>However, these were asked over a year ago, and I was wondering if there was an updated answer to do this more efficiently. In addition, I wasn't sure if they were differentiable due to the use of gather_nd.</p>
<p>It's easy to implement. For example, suppose you have a tensor <code>data = [[1, 3], [2, 4]]</code> and you want to get the first column: you can use <strong>tf.transpose(data, perm=[1, 0])</strong> and <strong>tf.gather_nd(data, [[0]])</strong>.</p> <pre><code>import tensorflow as tf
import numpy as np

data = [[1, 3], [2, 4]]
a = tf.Variable(initial_value=data, dtype=tf.float32)
b = tf.transpose(a, perm=[1, 0])
fc = tf.gather_nd(b, [[0]])

init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    column = sess.run([fc])[0]
    print(column)
</code></pre>
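<p>The question asks for one column index <em>per row</em>; a possible sketch for that (an illustration, not from the answer above) is to build explicit (row, column) pairs and pass them to <code>tf.gather_nd</code>, which stays differentiable with respect to the input values:</p> <pre><code>import tensorflow as tf

data = tf.constant([[1., 2.], [3., 4.], [5., 6.]])
cols = tf.constant([1, 0, 1])                 # one column index per row
rows = tf.range(tf.shape(data)[0])            # [0, 1, 2]
indices = tf.stack([rows, cols], axis=1)      # [[0, 1], [1, 0], [2, 1]]
selected = tf.gather_nd(data, indices)        # [2., 3., 6.]
</code></pre>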
tensorflow
0
256
59,025,803
What will happen if I try to use GPU delegate under android 8.1
<p>Below is the system architecture for NNAPI: <a href="https://i.stack.imgur.com/NAzIF.png" rel="nofollow noreferrer">NNAPI system architecture</a></p> <p>NNAPI is available on Android 8.1 (API level 27) or higher. What will happen if I try to use the GPU delegate on a version below Android 8.1?</p>
<p>Tensorflow's GPU delegate is not using NNAPI (see the <a href="https://www.tensorflow.org/lite/performance/delegates" rel="nofollow noreferrer">TFLite documentation</a>).</p> <p>A couple of corrections on Shree's answer.</p> <ul> <li>NNAPI will be delegating to GPU or any other available device even in Android 8.1 depending on the availability of NNAPI implementations.</li> <li>The <strong>ANeuralNetworksCompilation_createForDevices()</strong> API is used when you want to address a specific set of devices, if you use <strong>ANeuralNetworksCompilation_create()</strong> the device selection will be done by NNAPI. </li> <li>The choice between the two different API (<strong>createForDevices</strong> or <strong>create</strong>) is controlled by the acceleratorName option in the delegate creation.</li> </ul>
android|tensorflow|gpu|tensorflow-lite|nnapi
1
257
58,716,298
How do I set up a custom input-pipeline for sequence classification for the huggingface transformer models?
<p>I want to use one of the models for sequence classification provided by huggingface. It seems they provide a function called <code>glue_convert_examples_to_features()</code> for preparing the data so that it can be input into the models.</p> <p>However, it seems this conversion function only applies to the glue dataset. I can't find an easy solution to apply the conversion to my custom data. Am I overlooking a prebuilt function like the one above? What would be an easy way to convert my custom data with one sequence and two labels into the format the model expects?</p>
<p>Huggingface added a <a href="https://huggingface.co/transformers/custom_datasets.html" rel="nofollow noreferrer">fine-tuning with custom datasets</a> guide that contains a lot of useful information. I was able to use the information in the <a href="https://huggingface.co/transformers/custom_datasets.html#sequence-classification-with-imdb-reviews" rel="nofollow noreferrer">IMDB sequence classification</a> section to successfully adapt a notebook using a glue dataset with my own pandas dataframe.</p> <pre class="lang-py prettyprint-override"><code>from transformers import (
    AutoConfig,
    AutoTokenizer,
    TFAutoModelForSequenceClassification,
    AdamW
)
import tensorflow as tf
import pandas as pd
from sklearn.model_selection import train_test_split

model_name = 'bert-base-uncased'
tokenizer = AutoTokenizer.from_pretrained(model_name)

df = pd.read_pickle('data.pkl')

train_texts = df.text.values    # an array of strings
train_labels = df.label.values  # an array of integers

train_texts, val_texts, train_labels, val_labels = train_test_split(train_texts, train_labels, test_size=.2)

train_encodings = tokenizer(train_texts.tolist(), truncation=True, max_length=96, padding=True)
val_encodings = tokenizer(val_texts.tolist(), truncation=True, max_length=96, padding=True)

train_dataset = tf.data.Dataset.from_tensor_slices((
    dict(train_encodings),
    train_labels
))
val_dataset = tf.data.Dataset.from_tensor_slices((
    dict(val_encodings),
    val_labels
))

num_labels = 3
num_train_examples = len(train_dataset)
num_dev_examples = len(val_dataset)

# hyperparameters are defined before they are used to batch the datasets
learning_rate = 2e-5
train_batch_size = 8
eval_batch_size = 8
num_epochs = 1

train_dataset = train_dataset.shuffle(100).batch(train_batch_size)
val_dataset = val_dataset.shuffle(100).batch(eval_batch_size)

train_steps_per_epoch = int(num_train_examples / train_batch_size)
dev_steps_per_epoch = int(num_dev_examples / eval_batch_size)

config = AutoConfig.from_pretrained(model_name, num_labels=num_labels)
model = TFAutoModelForSequenceClassification.from_pretrained(model_name, config=config)

optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
metrics = [tf.keras.metrics.SparseCategoricalAccuracy('accuracy', dtype=tf.float32)]
model.compile(optimizer=optimizer, loss=loss, metrics=metrics)

history = model.fit(train_dataset,
                    epochs=num_epochs,
                    steps_per_epoch=train_steps_per_epoch,
                    validation_data=val_dataset,
                    validation_steps=dev_steps_per_epoch)
</code></pre> <p>Notebook credits: <a href="https://github.com/digitalepidemiologylab/covid-twitter-bert#colab" rel="nofollow noreferrer">digitalepidemiologylab covid-twitter-bert colab</a></p>
python-3.x|tensorflow|machine-learning|nlp
1
258
70,237,135
Iterating over a multi-dimensional array
<p>I have an array of shape (3, 5, 96, 96), where channels = 3, number of frames = 5, and height and width = 96. I want to iterate over the frame dimension (the axis of size 5) to get images of size (3, 96, 96). The code I have tried is below.</p> <pre><code>b = frame.shape[1]
for i in range(b):
    fr = frame[:, i, :, :]
</code></pre> <p>But this is not working.</p>
<p>You could swap axes (using <a href="https://numpy.org/doc/stable/reference/generated/numpy.swapaxes.html#numpy.swapaxes" rel="nofollow noreferrer"><code>numpy.swapaxes(a, axis1, axis2)</code></a>) to move the second axis (the frames) into the first position:</p> <pre><code>import numpy as np

m = np.zeros((3, 5, 96, 96))
n = np.swapaxes(m, 0, 1)
print(n.shape)
</code></pre> <pre><code>(5, 3, 96, 96)
</code></pre>
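<p>After the swap, iterating over the first axis of <code>n</code> then yields one <code>(3, 96, 96)</code> image per frame, for example:</p> <pre><code>for fr in n:
    print(fr.shape)  # (3, 96, 96)
</code></pre>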
python|numpy-ndarray
0
259
70,224,360
Faster implementation of fractional encoding (similar to one-hot encoding)
<h1>Problem Statement</h1> <p>Create an efficient fractional encoding (similar to a one-hot encoding) for a ragged list of components and corresponding compositions.</p> <h2>Toy Example</h2> <p>Take a composite material with the following <code>class: ingredient</code> combinations:</p> <ul> <li><a href="https://www.totalboat.com/2021/02/11/when-and-how-to-use-an-epoxy-filler/" rel="nofollow noreferrer">Filler</a>: Colloidal Silica (<code>filler_A</code>)</li> <li>Filler: Milled Glass Fiber (<code>filler_B</code>)</li> <li><a href="https://www.thomasnet.com/articles/plastics-rubber/types-of-resins/" rel="nofollow noreferrer">Resin</a>: Polyurethane (<code>resin_A</code>)</li> <li>Resin: Silicone (<code>resin_B</code>)</li> <li>Resin: Epoxy (<code>resin_C</code>)</li> </ul> <h2>Dummy Data</h2> <pre class="lang-py prettyprint-override"><code>components = np.array( [ [&quot;filler_A&quot;, &quot;filler_B&quot;, &quot;resin_C&quot;], [&quot;filler_A&quot;, &quot;resin_B&quot;], [&quot;filler_A&quot;, &quot;filler_B&quot;, &quot;resin_B&quot;], [&quot;filler_A&quot;, &quot;resin_B&quot;, &quot;resin_C&quot;], [&quot;filler_B&quot;, &quot;resin_A&quot;, &quot;resin_B&quot;], [&quot;filler_A&quot;, &quot;resin_A&quot;], [&quot;filler_B&quot;, &quot;resin_A&quot;, &quot;resin_B&quot;], ], dtype=object, ) compositions = np.array( [ [0.4, 0.4, 0.2], [0.5, 0.5], [0.5, 0.3, 0.2], [0.5, 0.5, 0.0], [0.6, 0.4, 0.0], [0.6, 0.4], [0.6, 0.2, 0.2], ], dtype=object, ) </code></pre> <h2>Desired Output</h2> <p><code>X_train</code>:</p> <pre class="lang-py prettyprint-override"><code> filler_A filler_B resin_A resin_B resin_C 0 0.4 0.4 0.0 0.0 0.2 1 0.5 0.0 0.0 0.5 0.0 2 0.5 0.3 0.0 0.2 0.0 3 0.5 0.0 0.0 0.5 0.0 4 0.0 0.6 0.4 0.0 0.0 5 0.6 0.0 0.4 0.0 0.0 6 0.0 0.6 0.2 0.2 0.0 </code></pre> <h1>What I've tried</h1> <p>I have a slow <code>fractional_encode</code> implementation, a <code>fractional_decode</code> for reference, and basic usage.</p> <h2>My (Slow) Implementation</h2> <p>After struggling with making a faster implementation, I resorted to making a slow, 2-level nested <code>for</code> loop implementation of creating the one-hot-like fractional or prevalence encoding.</p> <pre class="lang-py prettyprint-override"><code>def fractional_encode(components, compositions, drop_last=False): &quot;&quot;&quot;Fractionally encode components and compositions similar to one-hot encoding. In one-hot encoding, components are assigned a &quot;1&quot; if it exists for a particular compound, and a &quot;0&quot; if it does not. However, this ignores the case where the composition (i.e. the fractional prevalence) of each component is known. For example, NiAl is 50% Ni and 50% Al. This function computes the fractional components (albeit manually using for loops) where instead of a &quot;1&quot; or a &quot;0&quot;, the corresponding fractional prevalence is assigned (e.g. 0.2, 0.5643, etc.). Parameters ---------- components : list of lists of strings or numbers The components that make up the compound for each compound. If strings, then each string corresponds to a category. If numbers, then each number must uniquely describe a particular category. compositions : list of lists of floats The compositions of each component that makes up the compound for each compound. drop_last : bool, optional Whether to drop the last component. 
This is useful since compositions are constrained to sum to one, and therefore there is `n_components - 1` degrees of freedom, by default False Returns ------- X_train : 2D array Fractionally encoded matrix. Raises ------ ValueError Components and compositions should have the same shape. See also -------- &quot;Convert jagged array to Pandas dataframe&quot; https://stackoverflow.com/a/63496196/13697228 &quot;&quot;&quot; # lengths, unique components, and initialization n_compounds = len(components) unique_components = np.unique(list(flatten(components))) n_unique = len(unique_components) X_train = np.zeros((n_compounds, n_unique)) for i in range(n_compounds): # unpack component = components[i] composition = compositions[i] # lengths n_component = len(component) n_composition = len(composition) if n_component != n_composition: raise ValueError(&quot;Components and compositions should have the same shape&quot;) for j in range(n_unique): # unpack unique_component = unique_components[j] if unique_component in component: # assign idx = component.index(unique_component) X_train[i, j] = composition[idx] if drop_last: # remove last column: https://stackoverflow.com/a/6710726/13697228 X_train = np.delete(X_train, -1, axis=1) X_train = pd.DataFrame(data=X_train, columns=unique_components) return X_train </code></pre> <h2>An Inverse Implementation (Decoding)</h2> <p>For reference, I also made a function for decoding <code>X_train</code>, which uses higher-level operations:</p> <pre class="lang-py prettyprint-override"><code>def fractional_decode(X_train): &quot;&quot;&quot;Fractionally decode components and compositions similar to one-hot encoding. In one-hot encoding, components are assigned a &quot;1&quot; if it exists for a particular compound, and a &quot;0&quot; if it does not. However, this ignores the case where the composition (i.e. the fractional prevalence) of each component is known. For example, NiAl is 50% Ni and 50% Al. This function decodes the fractional encoding where instead of &quot;1&quot; or a &quot;0&quot;, the corresponding fractional prevalence is used (e.g. 0.2, 0.5643, etc.). Parameters ---------- X_train : DataFrame Fractionally encoded matrix (similar to a one-hot encoded matrix). last_dropped : bool, optional Whether the last component is already dropped. This is useful since compositions are constrained to sum to one, and therefore there is `n_components - 1` degrees of freedom. If `drop_last` from `fractional_encode` is set to True, and you want to decode, set to True. By default False Returns ------- components : list of lists of strings or numbers The components that make up the compound for each compound. If strings, then each string corresponds to a category. If numbers, then each number must uniquely describe a particular category. compositions : list of lists of floats The compositions of each component that makes up the compound for each compound. Raises ------ ValueError Components and compositions should have the same shape. 
&quot;&quot;&quot; # lengths, unique components, and sparse matrix attributes unique_components = X_train.columns n_unique = len(unique_components) sparse_mat = coo_matrix(X_train.values) row_ids, col_ids = sparse_mat.row, sparse_mat.col idx_pairs = list(zip(row_ids, col_ids)) comps = sparse_mat.data # lookup dictionaries to replace col_ids with components component_lookup = { component_idx: unique_component for (component_idx, unique_component) in zip(range(n_unique), unique_components) } # lookup dictionaries to replace idx_pairs with compositions composition_lookup = {idx_pair: comp for (idx_pair, comp) in zip(idx_pairs, comps)} # contains placeholder col_ids and idx_pairs which will get replaced by components # and compositions, respectively tmp_df = pd.DataFrame( data=[(idx_pair[1], idx_pair) for idx_pair in idx_pairs], columns=[&quot;component&quot;, &quot;composition&quot;], ) # NOTE: component_lookup should be mapped before composition_lookup tmp_df.component = tmp_df.component.map(component_lookup) tmp_df.composition = tmp_df.composition.map(composition_lookup) # add a row_id column to use for grouping into ragged entries cat_df = pd.concat([pd.DataFrame(row_ids, columns=[&quot;row_id&quot;]), tmp_df], axis=1) # combine components and compositions compound-wise df = ( cat_df.reset_index() .groupby(by=&quot;row_id&quot;) .agg({&quot;component&quot;: lambda x: tuple(x), &quot;composition&quot;: lambda x: tuple(x)}) ) # extract and convert to ragged lists components, compositions = [df[key] for key in [&quot;component&quot;, &quot;composition&quot;]] components = list(components) compositions = list(compositions) return components, compositions </code></pre> <h2>Example Usage</h2> <pre class="lang-py prettyprint-override"><code>X_train = fractional_encode(components, compositions) components, compositions = fractional_decode(X_train) </code></pre> <h2>Question</h2> <p>What is a faster implementation of <code>fractional_encode</code>?</p>
<p>A solution that initializes an array with zeros then updates the fields:</p> <pre><code>columns = sorted(list(set(sum(list(components), []))))
data = np.zeros((len(components), len(columns)))

for i in range(data.shape[0]):
    for component, composition in zip(components[i], compositions[i]):
        j = columns.index(component)
        data[i, j] = composition

df = pd.DataFrame(columns=columns, data=data)
</code></pre> <p>Output:</p> <pre><code>   filler_A  filler_B  resin_A  resin_B  resin_C
0       0.4       0.4      0.0      0.0      0.2
1       0.5       0.0      0.0      0.5      0.0
2       0.5       0.3      0.0      0.2      0.0
3       0.5       0.0      0.0      0.5      0.0
4       0.0       0.6      0.4      0.0      0.0
5       0.6       0.0      0.4      0.0      0.0
6       0.0       0.6      0.2      0.2      0.0
</code></pre>
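<p>If the Python-level loops are still too slow on roughly 2 million rows, one possible variation (a sketch, not part of the answer above) is to let pandas do the column alignment by building the frame from per-row dictionaries:</p> <pre><code>import numpy as np
import pandas as pd

# each row becomes a {component: composition} dict; missing components become NaN
records = [dict(zip(c, w)) for c, w in zip(components, compositions)]
X_train = pd.DataFrame.from_records(records).fillna(0.0)
X_train = X_train.reindex(sorted(X_train.columns), axis=1)  # sorted column order
</code></pre>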
python|arrays|dataframe|pandas-groupby|one-hot-encoding
2
260
70,092,417
Create Folium map with discrete color fill
<p>I would like to fill in a Folium map with discrete color values. But every shape is returned with the same value (i.e. - the same color)</p> <pre><code>Name Fill_Color Geometry A orange .... B yellow .... C purple .... D red .... m = folium.Map(location=[40.6725, -73.985], zoom_start=14, tiles='CartoDB positron') title_html = ''' &lt;h3 align=&quot;center&quot; style=&quot;font-size:16px&quot;&gt;&lt;b&gt;{}&lt;/b&gt;&lt;/h3&gt; '''.format('Gowanus Projects') for _, r in gzones.iterrows(): sim_geo = gpd.GeoSeries(r['geometry']) geo_j = sim_geo.to_json() geo_j = folium.GeoJson(data=geo_j, style_function=lambda x: {'fillColor': r['fill_color']}) folium.Tooltip(r['Name']).add_to(geo_j) geo_j.add_to(m) m.get_root().html.add_child(folium.Element(title_html)) m </code></pre>
<p>The question is, if you use geopandas that contain geometry information, it will be internally converted to geojson when referenced in the style function, so the reference will be <code>r['properties']['fill_color']</code>. Also, no looping is required and the entire data frame can be handled. I have modified <a href="https://github.com/python-visualization/folium/blob/main/examples/GeoJsonPopupAndTooltip.ipynb" rel="nofollow noreferrer">this official sample</a> to fit your code.</p> <pre><code>import geopandas url = (&quot;https://raw.githubusercontent.com/python-visualization/folium/master/examples/data&quot;) nybb = f&quot;{url}/nybb.zip&quot; boros = geopandas.read_file(nybb) # colors added fill_color = ['orange','yellow','purple','red','blue'] boros['fill_color'] = fill_color boros BoroCode BoroName Shape_Leng Shape_Area geometry fill_color 0 5 Staten Island 330454.175933 1.623847e+09 MULTIPOLYGON (((970217.022 145643.332, 970227.... orange 1 3 Brooklyn 741227.337073 1.937810e+09 MULTIPOLYGON (((1021176.479 151374.797, 102100... yellow 2 4 Queens 896875.396449 3.045079e+09 MULTIPOLYGON (((1029606.077 156073.814, 102957... purple 3 1 Manhattan 358400.912836 6.364308e+08 MULTIPOLYGON (((981219.056 188655.316, 980940.... red 4 2 Bronx 464475.145651 1.186822e+09 MULTIPOLYGON (((1012821.806 229228.265, 101278... blue import folium from folium.features import GeoJsonTooltip m = folium.Map(location=[40.6725, -73.985], zoom_start=10, tiles='CartoDB positron') title_html = ''' &lt;h3 align=&quot;center&quot; style=&quot;font-size:16px&quot;&gt;&lt;b&gt;{}&lt;/b&gt;&lt;/h3&gt; '''.format('Gowanus Projects') tooltip = GeoJsonTooltip( fields=[&quot;BoroName&quot;], labels=True ) folium.GeoJson(boros, style_function=lambda x: {'fillColor':x['properties']['fill_color']}, tooltip=tooltip ).add_to(m) m.get_root().html.add_child(folium.Element(title_html)) m </code></pre> <p><a href="https://i.stack.imgur.com/MTuot.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MTuot.png" alt="enter image description here" /></a></p>
python|pandas|folium
0
261
56,152,408
How to convert String to Numpy Datetime64 from a dataframe
<p>I'm dealing with a DataFrame that contains column of time like "hh:mm:ss" and I need to convert those values to the NumPy <code>datetime64</code> type.</p> <pre><code>import pandas as pd data = [dict(voie="V3", Start="06:10", End='06:20'), dict(voie="V5", Start='06:26', End='06:29'), dict(voie="V3", Start='06:20', End='06:30'), dict(voie="V5", Start='06:32', End='06:35')] df = pd.DataFrame(data) #df = df['Start'].to-datetime64() </code></pre> <p>I need to convert the column <code>Start</code> and <code>End</code> from type <code>string</code> to <code>datetime64</code></p>
<p>Just use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.to_datetime.html" rel="nofollow noreferrer"><code>pandas.to_datetime</code></a> for each column. For example:</p> <pre><code>df.End = pd.to_datetime(df.End) df.End 0 2019-05-15 06:20:00 1 2019-05-15 06:29:00 2 2019-05-15 06:30:00 3 2019-05-15 06:35:00 Name: End, dtype: datetime64[ns] </code></pre> <p>You can also use the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.astype.html" rel="nofollow noreferrer"><code>pandas.DataFrame.astype</code></a> method of the DataFrame.</p> <pre><code>df.End = df.End.astype('datetime64[ns]') df.End 0 2019-05-15 06:20:00 1 2019-05-15 06:29:00 2 2019-05-15 06:30:00 3 2019-05-15 06:35:00 Name: End, dtype: datetime64[ns] </code></pre> <h3>Regarding <code>pd.Timestamp</code> and <code>np.datetime64</code></h3> <p>That is a complicated relationship. The <code>.values</code> attribute of the series will be an array of type <code>np.datetime64</code>, while the type of a single entry will be <code>pd.Timestamp</code>. As far as I know there is nothing you can do with <code>np.datetime64</code> that you can't with <code>pd.Timestamp</code>. There is a nice little graphic in <a href="https://stackoverflow.com/questions/13703720/converting-between-datetime-timestamp-and-datetime64/21916253#21916253">Converting between datetime, Timestamp and datetime64</a> that might help. Deep down within the <code>pd.to_datetime</code> code you will see that in fact when passed a <code>pd.Series</code> each entry is converted to <code>np.datetime64</code>. It isn't until you access an item in the series that it is converted into a <code>pd.Timestamp</code> (see <a href="https://github.com/pandas-dev/pandas/blob/0.24.x/pandas/_libs/index.pyx#L43-L45" rel="nofollow noreferrer">pandas._libs.index.get_value_at</a>).</p>
python|pandas|numpy|matplotlib
1
262
56,317,060
Disabling `@tf.function` decorators for debugging?
<p>In TensorFlow 2, the <a href="https://www.tensorflow.org/alpha/tutorials/eager/tf_function" rel="noreferrer"><code>@tf.function</code></a> decorator allows for Python functions to become TensorFlow graphs (more or less) and can lead to some performance improvements. However, when decorated this way, <a href="https://www.tensorflow.org/alpha/tutorials/eager/tf_function#tracing_and_polymorphism" rel="noreferrer">Python no longer traces the functions each time they run</a>. This makes debugging the functions with Python debuggers a bit more difficult. Is there a way to disable all <code>@tf.function</code> decorators temporarily to allow for easy debugging?</p>
<p>Use <a href="https://www.tensorflow.org/api_docs/python/tf/config/run_functions_eagerly" rel="nofollow noreferrer"><code>tf.config.run_functions_eagerly(True)</code></a>.</p>
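<p>A small usage sketch (the non-experimental name is available in TF 2.3+; earlier 2.x versions expose it as <code>tf.config.experimental_run_functions_eagerly</code>):</p> <pre><code>import tensorflow as tf

tf.config.run_functions_eagerly(True)   # tf.function-decorated code now runs eagerly

@tf.function
def f(x):
    # breakpoints and print() behave like plain Python while eager mode is forced
    return x * 2

print(f(tf.constant(3)))

tf.config.run_functions_eagerly(False)  # restore graph tracing when done debugging
</code></pre>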
python|tensorflow|tensorflow2.0
8
263
56,079,886
how to use previous row value as well as values in other column in same row to compute value of a column in pandas
<p>I have a dataframe <code>df</code>:</p> <pre><code>import pandas as pd df = pd.DataFrame({'A': [1, 1, 1,2,2,2,2], 'B': [10, 0, 0,5,0,0,0], 'C': [1,1,2,2,3,3,3], 'D': [2,3,4,5,2,3,4]}) </code></pre> <p>which looks like:</p> <pre><code> A B C D 0 1 10 1 2 1 1 0 1 3 2 1 0 2 4 3 2 5 2 5 4 2 0 3 2 5 2 0 3 3 6 2 0 3 4 </code></pre> <p>I want to compute the value in column <code>B</code> only for those locations where it is 0 for all groups (1,2 as per example data) denoted in column <code>A</code>. </p> <p>value of column <code>B</code> = value of column B in previous record + value of col <code>C</code> in same record + value of col <code>D</code> in same record.</p> <p>My expected output is:</p> <pre><code> A B C D 0 1 10 1 2 1 1 14 1 3 2 1 20 2 4 3 2 5 2 5 4 2 10 3 2 5 2 16 3 3 6 2 23 3 4 </code></pre> <p>How can I do it in pandas ?</p>
<p>This should do it:</p> <pre><code>def f(g):
    g.B = (g.B.shift() + g.C + g.D).cumsum()
    return g

df.B = df.B.replace(0, df.groupby('A').apply(f).B)
</code></pre> <p>The result is:</p> <pre><code>   A   B  C  D
0  1  10  1  2
1  1  14  1  3
2  1  20  2  4
3  2   5  2  5
4  2  10  3  2
5  2  16  3  3
6  2  23  3  4
</code></pre>
python|pandas
4
264
56,141,142
How to pass "step" to ExponentialDecay in GradientTape
<p>I tried to use an optimizers.schedules.ExponentialDecay instance as the learning_rate of an Adam optimizer, but I don't know how to pass "step" to it when training the model with GradientTape.</p> <p>I use tensorflow-gpu-2.0-alpha0 and Python 3.6. I read the doc <a href="https://tensorflow.google.cn/versions/r2.0/api_docs/python/tf/optimizers/schedules/ExponentialDecay" rel="nofollow noreferrer">https://tensorflow.google.cn/versions/r2.0/api_docs/python/tf/optimizers/schedules/ExponentialDecay</a> but have no idea how to tackle it.</p> <pre><code>initial_learning_rate = 0.1
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate,
    decay_steps=100000,
    decay_rate=0.96)
optimizer = tf.optimizers.Adam(learning_rate=lr_schedule)

for epoch in range(self.Epoch):
    ...
    ...
    with tf.GradientTape() as tape:
        pred_label = model(images)
        loss = calc_loss(pred_label, ground_label)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    # I tried this but the result seems not right.
    # I want to pass "epoch" as "step" to lr_schedule
</code></pre>
<p>Using an instance of <code>tf.keras.optimizers.schedules.ExponentialDecay()</code> might not work with <code>GradientTape</code>; it is more suited to Keras's <code>model.fit()</code>. What I understand is that you need to reduce or schedule the learning rate after a certain number of iterations/steps, so there is a workaround for that: you can schedule the learning rate manually by using the <code>get_config()</code> and <code>from_config()</code> methods of the optimizer's class.</p> <pre><code>def exponential_decay(optimizer, decay_rate):
    # get the optimizer configuration dictionary
    opt_cfg = optimizer.get_config()
    &quot;&quot;&quot;The opt_cfg dictionary will look like this, given that you set the
    initial learning rate to 0.1:
    {'name': 'Adam', 'learning_rate': 0.1, 'decay': 0.0, 'beta_1': 0.9,
     'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}&quot;&quot;&quot;

    # change the learning rate by multiplying it with the decay rate
    opt_cfg['learning_rate'] = opt_cfg['learning_rate'] * decay_rate
    &quot;&quot;&quot;The changed opt_cfg dictionary will look like this, if you have an
    initial learning rate of 0.1 and a decay rate of 0.96:
    {'name': 'Adam', 'learning_rate': 0.096, 'decay': 0.0, 'beta_1': 0.9,
     'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}&quot;&quot;&quot;

    # now just pass this updated optimizer configuration dictionary to the
    # from_config() method and you are done; the optimizer will use the new
    # learning rate
    optimizer = optimizer.from_config(opt_cfg)
    return optimizer


decay_steps = 100000
decay_rate = 0.96
optimizer = tf.optimizers.Adam(learning_rate=0.1)  # plain learning rate, no schedule

# loop over epochs
for epoch in range(self.Epoch):
    ...
    ...
    # loop to iterate over batches
    for itr, images in enumerate(batch_images):
        if itr &gt; 0 and itr % decay_steps == 0:
            optimizer = exponential_decay(optimizer, decay_rate)

        with tf.GradientTape() as tape:
            pred_label = model(images)
            loss = calc_loss(pred_label, ground_label)
        grads = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
</code></pre> <p>So what we are doing here is: when the number of iterations/steps reaches our defined limit (the decay steps; let's say you want to change the learning rate after every 100K steps), we call our <code>exponential_decay()</code> method, pass in the instance of our optimizer and the decay rate, and that method changes the learning rate of our optimizer and returns the optimizer instance with the updated learning rate; you can verify this using the <code>optimizer.get_config()</code> method. For more information please have a look at <a href="https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/Optimizer#get_config" rel="nofollow noreferrer">get_config()</a> &amp; <a href="https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/Optimizer#from_config" rel="nofollow noreferrer">from_config()</a>.</p>
python|tensorflow2.0
0
265
56,095,054
Cleaning up a column based on spelling? Pandas
<p>I've got two very important, user entered, information columns in my data frame. They are mostly cleaned up except for one issue: the spelling, and the way names are written differ. For example I have five entries for one name: "red rocks canyon", "redrcks", "redrock canyon", "red rocks canyons". This data set is too large for me to go through and clean this manually (2 million entries). Are there any strategies to clean these features up with code?</p> <p><a href="https://i.stack.imgur.com/IBcAY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IBcAY.png" alt="Screen_shot_of_data"></a></p>
<p>I would look into doing <a href="https://en.wikipedia.org/wiki/Phonetic_algorithm" rel="nofollow noreferrer">phonetic string matching</a> here. The basic idea behind this approach is to obtain a phonetic encoding for each entered string, and then group spelling variations by their encoding. Then, you could choose the most frequent variation in each group to be the "correct" spelling.</p> <p>There are several different variations on phonetic encoding, and a great package in Python for trying some of them out is <a href="https://github.com/jamesturk/jellyfish" rel="nofollow noreferrer">jellyfish</a>. Here is an example of how to use it with the <a href="https://en.wikipedia.org/wiki/Soundex" rel="nofollow noreferrer">Soundex</a> encoding:</p> <pre class="lang-py prettyprint-override"><code>import jellyfish import pandas as pd data = pd.DataFrame({ "name": [ "red rocks canyon", "redrcks", "redrock canyon", "red rocks canyons", "bosque", "bosque escoces", "bosque escocs", "borland", "borlange" ] }) data["soundex"] = data.name.apply(lambda x: jellyfish.soundex(x)) print(data.groupby("soundex").agg({"name": lambda x: ", ".join(x)})) </code></pre> <p>This prints:</p> <pre><code> name soundex B200 bosque B222 bosque escoces, bosque escocs B645 borland, borlange R362 red rocks canyon, redrcks, redrock canyon, red... </code></pre> <p>This definitely won't be perfect and you'll have to be careful as it might group things too aggressively, but I hope it gives you something to try!</p>
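<p>As a possible follow-up (a sketch, untested on your full data): once the groups exist, you could map every variant to the most frequent spelling in its group, for example:</p> <pre><code># use the most common spelling per soundex group as the canonical name
data["name_clean"] = data.groupby("soundex")["name"].transform(lambda s: s.mode().iloc[0])
</code></pre>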
python-3.x|pandas|pandas-groupby|sklearn-pandas
2
266
55,637,437
How create a simple animated graph with matplotlib from a dataframe
<p>Someone can help me to correct the code below to visualize this data with animated matplotlib?</p> <p>The dataset for X and Y axis are describe below.</p> <pre><code>X- Range mydata.iloc[:,[4]].head(10) Min_pred 0 1.699189 1 0.439975 2 2.989244 3 2.892075 4 2.221990 5 3.456261 6 2.909323 7 -0.474667 8 -1.629343 9 2.283976 Y - range dataset_meteo.iloc[:,[2]].head(10) Out[122]: Min 0 0.0 1 -1.0 2 2.0 3 -2.0 4 -4.0 5 -4.0 6 -5.0 7 -7.0 8 -3.0 9 -1.0 </code></pre> <p>I've tried the code below,</p> <pre><code>import numpy as np import matplotlib.pyplot as plt import matplotlib.animation as animation d = pd.read_excel("mydata.xls") x = np.array(d.index) y = np.array(d.iloc[:,[2]]) mydata = pd.DataFrame(y,x) fig = plt.figure(figsize=(10,6)) plt.xlim(1999, 2016) plt.ylim(np.min(x), np.max(x)) plt.xlabel('Year',fontsize=20) plt.ylabel(title,fontsize=20) plt.title('Meteo Paris',fontsize=20) def animate(i): data = mydata.iloc[:int(i+1)] #select data range p = sns.lineplot(x=data.index, y=data[title], data=data, color="r") p.tick_params(labelsize=17) plt.setp(p.lines,linewidth=7) ani = matplotlib.animation.FuncAnimation(fig, animate, frames=17, repeat=True) </code></pre> <p>The idea is to create a graph where the predicted (Y) would be animated kind a same like this one in the link below. <a href="https://www.courspython.com/animation-matplotlib.html" rel="nofollow noreferrer">https://www.courspython.com/animation-matplotlib.html</a></p> <p>Thanks if you can help</p>
<p>Is this what you are trying to get?</p> <pre><code>import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation

x = np.arange(1999, 2017)
y = np.random.random(size=x.shape)

fig = plt.figure(figsize=(4, 3))
plt.xlim(1999, 2016)
plt.ylim(np.min(y), np.max(y))
plt.xlabel('Year', fontsize=20)
plt.ylabel('Y', fontsize=20)
plt.title('Meteo Paris', fontsize=20)
plt.tick_params(labelsize=17)

line, = plt.plot([], [], 'r-', lw=7)

def animate(i):
    x_, y_ = x[:i+1], y[:i+1]
    line.set_data(x_, y_)
    return line,

ani = matplotlib.animation.FuncAnimation(fig, animate, frames=len(x), repeat=True)
</code></pre> <p><a href="https://i.stack.imgur.com/Pggxj.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Pggxj.gif" alt="enter image description here"></a></p>
python|numpy|matplotlib
2
267
55,594,537
Fillna not working when combined with groupby and mean
<p>The below code filters my Dataframe for 5 rows with Zambia as the Country Name.</p> <pre><code>df2.loc[df2['Country Name'] == 'Zambia'].head(5) Country Name Year CO2 262 Zambia 1960 NaN 526 Zambia 1961 NaN 790 Zambia 1962 NaN 1054 Zambia 1963 NaN 1318 Zambia 1964 0.949422 </code></pre> <p>Next, shown below is the average Zambia CO2 value.</p> <pre><code>df2.groupby('Country Name', as_index=False)['CO2'].mean().loc[df2['Country Name'] == 'Zambia'] Country Name CO2 262 Zambia 0.484002 </code></pre> <p>Finally, I now try to fill in all the NaN values with the average value. Notice only the first NaN value actually gets filled in. Why is this and how can I make sure all NaN values get filled in with the average of each country?</p> <pre><code>df2['CO2'] = df2['CO2'].fillna(value = df2.groupby('Country Name', as_index=False)['CO2'].mean()['CO2']) Country Name Year CO2 262 Zambia 1960 0.484002 526 Zambia 1961 NaN 790 Zambia 1962 NaN 1054 Zambia 1963 NaN 1318 Zambia 1964 0.949422 </code></pre>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.transform.html" rel="nofollow noreferrer"><code>GroupBy.transform</code></a> for return <code>Series</code> filled by aggregate values with same size like original <code>DataFrame</code>, so <code>fillna</code> working nice:</p> <pre><code>s = df2.groupby('Country Name')['CO2'].transform('mean') df2['CO2'] = df2['CO2'].fillna(value = s) </code></pre>
python|python-3.x|pandas|dataframe
1
268
64,954,738
Getting error "AttributeError: 'numpy.ndarray' object has no attribute 'lower' " in word tokenizer
<p>I am trying to train a model to classify multi-label data set by referring <a href="https://stackabuse.com/python-for-nlp-multi-label-text-classification-with-keras/" rel="nofollow noreferrer">this</a> article. I am entirely new to this field and I am getting this error &quot;AttributeError: 'numpy.ndarray' object has no attribute 'lower'&quot;</p> <p>Here is my code</p> <pre><code>reviews = pd.read_csv(&quot;/content/drive/My Drive/Interim Project Data/score.csv&quot;) Review_labels = reviews[[&quot;natral score&quot;, &quot;man-made score&quot;, &quot;sport score&quot;, &quot;special event score&quot;]] Review_labels.head() def preprocess_text(sen): # Remove punctuations and numbers sentence = re.sub('[^a-zA-Z]', ' ', sen) sentence = re.sub(r&quot;\s+[a-zA-Z]\s+&quot;, ' ', sentence) sentence = re.sub(r'\s+', ' ', sentence) return sentence X = [] sentences = list(reviews[&quot;User Review&quot;]) for sen in sentences: X.append(preprocess_text(sen)) y = Review_labels.values X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=123) print(len(X_train)) print(len(X_test)) from numpy import array from keras.preprocessing.sequence import pad_sequences from keras.preprocessing.text import Tokenizer tokenizer = Tokenizer(num_words=5000) tokenizer.fit_on_texts(X) X_train = tokenizer.texts_to_sequences(X_train) X_test = tokenizer.texts_to_sequences(X_test) </code></pre> <p>The error is here</p> <pre><code>--------------------------------------------------------------------------- AttributeError Traceback (most recent call last) &lt;ipython-input-128-ff374d8c5eb4&gt; in &lt;module&gt;() 7 tokenizer.fit_on_texts(X) 8 ----&gt; 9 X_train = tokenizer.texts_to_sequences(X_train) 10 X_test = tokenizer.texts_to_sequences(X_test) 11 2 frames /usr/local/lib/python3.6/dist-packages/keras_preprocessing/text.py in text_to_word_sequence(text, filters, lower, split) 41 &quot;&quot;&quot; 42 if lower: ---&gt; 43 text = text.lower() 44 45 if sys.version_info &lt; (3,): AttributeError: 'numpy.ndarray' object has no attribute 'lower' </code></pre> <p><a href="https://drive.google.com/file/d/1z7oPHzcA0QTdCRT3VUDxXyUPN57Q86OW/view?usp=sharing" rel="nofollow noreferrer">here is the data set that I used in the code</a></p> <p>It will be really helpful if someone can help with this issue.</p>
<p>Is <em>text</em> a str variable? If it's not, maybe you could do</p> <pre><code>text = str(text).lower()
</code></pre> <p>as long as its value is something that can be turned into a string.</p>
python|tensorflow|machine-learning|keras|multilabel-classification
1
269
64,924,224
Getting a view of a zarr array slice
<p>I would like to produce a zarr array pointing to <em>part</em> of a zarr array on disk, similar to how <code>sliced = np_arr[5]</code> gives me a view into <code>np_arr</code>, such that modifying the data in <code>sliced</code> modifies the data in <code>np_arr</code>. Example code:</p> <pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt import numpy as np import zarr arr = zarr.open( 'temp.zarr', mode='a', shape=(4, 32, 32), chunks=(1, 16, 16), dtype=np.float32, ) arr[:] = np.random.random((4, 32, 32)) fig, ax = plt.subplots(1, 2) arr[2, ...] = 0 # works fine, &quot;wipes&quot; slice 2 ax[0].imshow(arr[2]) # all 0s arr_slice = arr[1] # returns a NumPy array — loses ties to zarr on disk arr_slice[:] = 0 ax[1].imshow(arr[1]) # no surprises — shows original random data plt.show() </code></pre> <p>Is there anything I can write instead of <code>arr_slice = arr[1]</code> that will make <code>arr_slice</code> be a (writeable) view into the <code>arr</code> array on disk?</p>
<p>The <a href="https://github.com/google/tensorstore" rel="noreferrer">TensorStore</a> library is specifically designed to do this --- all indexing operations produce lazy views:</p> <pre class="lang-py prettyprint-override"><code>import tensorstore as ts import numpy as np arr = ts.open({ 'driver': 'zarr', 'kvstore': { 'driver': 'file', 'path': '.', }, 'path': 'temp.zarr', 'metadata': { 'dtype': '&lt;f4', 'shape': [4, 32, 32], 'chunks': [1, 16, 16], 'order': 'C', 'compressor': None, 'filters': None, 'fill_value': None, }, }, create=True).result() arr[1] = 42 # Overwrites, just like numpy/zarr library view = arr[1] # Returns a lazy view, no I/O performed np.array(view) # Reads from the view # Returns JSON spec that can be passed to `ts.open` to reopen the view. view.spec().to_json() </code></pre> <p>You can read more about the &quot;index transform&quot; mechanism that underlies these lazy views here: <a href="https://google.github.io/tensorstore/index_space.html#index-transform" rel="noreferrer">https://google.github.io/tensorstore/index_space.html#index-transform</a> <a href="https://google.github.io/tensorstore/python/indexing.html" rel="noreferrer">https://google.github.io/tensorstore/python/indexing.html</a></p> <p>Disclaimer: I'm an author of TensorStore.</p>
python|numpy|zarr
5
270
40,077,188
Pandas - Data Frame - Reshaping Values in Data Frame
<p>I am new to Pandas and have a data frame with the two teams and scores of each game in separate rows. This is what I have.</p> <pre><code>Game_ID  Teams   Score
1        Team A  95
1        Team B  85
2        Team C  90
2        Team D  72
</code></pre> <p>This is what I would ideally like to get to.</p> <pre><code>1  Team A  95  Team B  85
2  Team C  90  Team D  72
</code></pre>
<p>You can try something as follows: create a <code>row_id</code> within each group by the <code>Game_ID</code> and then unstack by the <code>row_id</code>, which will transform your data to wide format (in newer pandas versions the deprecated <code>sortlevel</code> is replaced by <code>sort_index</code>):</p> <pre><code>import pandas as pd

df['row_id'] = df.groupby('Game_ID').Game_ID.transform(lambda g: pd.Series(range(g.size)))
df.set_index(['row_id', 'Game_ID']).unstack(level=0).sort_index(level=1, axis=1)
</code></pre> <p><a href="https://i.stack.imgur.com/Lsl0a.png" rel="nofollow"><img src="https://i.stack.imgur.com/Lsl0a.png" alt="enter image description here"></a></p> <p><em>Update</em>:</p> <p>If the <code>row_id</code> is preferred to be dropped, you can drop the level from the columns:</p> <pre><code>df1 = df.set_index(['row_id', 'Game_ID']).unstack(level=0).sort_index(level=1, axis=1)
df1.columns = df1.columns.droplevel(level=1)
df1
</code></pre> <p><a href="https://i.stack.imgur.com/aF5Gc.png" rel="nofollow"><img src="https://i.stack.imgur.com/aF5Gc.png" alt="enter image description here"></a></p>
python|pandas|dataframe
4
271
40,196,986
Numpy arrays changing id
<p>What is happening here? It seems like the id locations of the array elements are not remaining steady. The <code>is</code> operator returns False even though the ids are the same, and after printing the array the ids of the elements change. Any explanations?</p> <pre><code>import numpy as np
a = np.arange(27)
b = a[1:5]

a[0] is b[1]   # False
id(a[0])       # 40038736L
id(b[1])       # 40038736L

a              # prints the array

id(b[1])       # 40038712L
id(a[0])       # 40038712L

b[0]           # 1
a[1]           # 1
id(b[0])       # 40038712L
id(a[1])       # 40038784L
</code></pre>
<p>First test with a list:</p> <pre><code>In [1109]: a=[0,1,2,3,4] In [1112]: b=a[1:3] In [1113]: id(a[1]) Out[1113]: 139407616 In [1114]: id(b[0]) Out[1114]: 139407616 In [1115]: a[1] is b[0] Out[1115]: True </code></pre> <p>later I tried</p> <pre><code>In [1129]: id(1) Out[1129]: 139407616 </code></pre> <p>So the object in <code>a[1]</code> is consistently the integer <code>1</code> (<code>id</code> of integers is a bit tricky, and implementation dependent).</p> <p>But with an array:</p> <pre><code>In [1118]: aa=np.arange(5) In [1119]: ba=aa[1:] In [1121]: aa[1] Out[1121]: 1 In [1122]: ba[0] Out[1122]: 1 In [1123]: id(aa[1]) Out[1123]: 2925837264 In [1124]: id(ba[0]) Out[1124]: 2925836912 </code></pre> <p><code>id</code> are totally different; in fact they change with each access:</p> <pre><code>In [1125]: id(aa[1]) Out[1125]: 2925837136 In [1126]: id(ba[0]) Out[1126]: 2925835104 </code></pre> <p>That's because <code>aa[1]</code> isn't just the integer <code>1</code>. It is a <code>np.int32</code> object.</p> <pre><code>In [1127]: type(aa[1]) Out[1127]: numpy.int32 </code></pre> <p>In contrast to a list, values of an array are stored as bytes in a <code>databuffer</code>. <code>b[1:]</code> is a <code>view</code> and accesses the same data buffer. But <code>a[1]</code> is a new object that contains a reference to that data buffer. In contrast to the list case, <code>a[1]</code> is not the 2nd object in <code>a</code>.</p> <p>In general, <code>id</code> is not useful when working with arrays, and the <code>is</code> test is also not useful. Use <code>==</code> or <code>isclose</code> (for floats).</p> <p>================</p> <p>A way to see where the values of <code>aa</code> are stored is with:</p> <pre><code>In [1137]: aa.__array_interface__ Out[1137]: {'data': (179274256, False), # 'id' so to speak of the databuffer 'descr': [('', '&lt;i4')], 'shape': (5,), 'strides': None, 'typestr': '&lt;i4', 'version': 3} In [1138]: ba.__array_interface__ Out[1138]: {'data': (179274260, False), # this is 4 bytes larger 'descr': [('', '&lt;i4')], 'shape': (4,), 'strides': None, 'typestr': '&lt;i4', 'version': 3} </code></pre> <p>the <code>data</code> pointer for the 2 arrays is related because <code>ba</code> is a <code>view</code>.</p> <p><code>aa[1]</code> is array-like, and too has a data buffer, but it isn't a view.</p> <pre><code>In [1139]: aa[1].__array_interface__ Out[1139]: {'__ref': array(1), 'data': (182178952, False), ...} </code></pre>
python|python-2.7|numpy
2
272
69,369,036
Tensorflow, possible to downweigh gradients for certain data items
<p>Say I have a multi-output model with outputs y_0 and y_1. For some data examples I am confident that y_0 is correct, but I know that y_1 may be a complete guess. My idea was to use a custom training loop and multiply the loss by a calculated weight, but this does not seem to be working. Is there a way to do this through the Keras API that may be simpler than this?</p> <pre class="lang-py prettyprint-override"><code>@tf.function
def train_on_batch(x, y):
    y_true = y[:, 0]
    weights = y[:, 1]

    with tf.GradientTape() as tape:
        y_pred = model(x, training=True)
        print("ytrainpred ", y_pred)
        loss_value_pre = loss(y_true, y_pred)
        loss_value = loss_value_pre * weights

    # compute gradient
    grads = tape.gradient(loss_value, model.trainable_weights)

    # update weights
    optimizer.apply_gradients(zip(grads, model.trainable_weights))

    # update metrics
    loss_1_train.update_state(y_true[:, 0], loss_value[:, 0])
    loss_2_train.update_state(y_true[:, 1], loss_value[:, 1])

    return loss_value
</code></pre>
<p>In the <a href="https://www.tensorflow.org/api_docs/python/tf/keras/Model#compile" rel="nofollow noreferrer">compile</a> method of the Keras model there is a <code>loss_weights</code> parameter for that: you only need to implement loss functions that take one output or the other and pass them as a list of losses to the <code>loss</code> parameter. However, this becomes quite impractical if you have many outputs.</p>
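<p>For per-example down-weighting inside a custom loop, a possible sketch (assuming <code>loss_fn</code> is a <code>tf.keras.losses.Loss</code> instance, which is an assumption rather than the author's exact setup) is to use the built-in <code>sample_weight</code> argument of Keras losses, which avoids the manual multiplication:</p> <pre><code>loss_fn = tf.keras.losses.MeanSquaredError()

with tf.GradientTape() as tape:
    y_pred = model(x, training=True)
    # weights has one entry per example; confident examples get 1.0,
    # guessed labels get a small weight such as 0.1
    loss_value = loss_fn(y_true, y_pred, sample_weight=weights)

grads = tape.gradient(loss_value, model.trainable_weights)
optimizer.apply_gradients(zip(grads, model.trainable_weights))
</code></pre>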
tensorflow|keras|gradient
0
273
69,631,906
Correlation of dataframe with itself
<p>I have a dataframe that looks like this:</p> <pre><code>import pandas as pd a=pd.DataFrame([[name1, name2, name3, name4],[text1, text2, text3, text4]], columns=(['names','texts'])) </code></pre> <p>I have implemented a function to perform a cosine similarity between the words in each text using <a href="https://nlp.stanford.edu/projects/glove/" rel="nofollow noreferrer">GloVe</a>.</p> <pre><code>def cosine_distance_wordembedding_method(s1, s2): import scipy import scipy.spatial vector_1 = np.mean([glove[word] if word in glove else 0 for word in preprocess(s1)], axis=0) vector_2 = np.mean([glove[word] if word in glove else 0 for word in preprocess(s2)], axis=0) cosine = scipy.spatial.distance.cosine(vector_1, vector_2) return 1 - cosine </code></pre> <p>Now, I want to apply this function to all rows of my dataframe and compare the texts column with itself. So the resulting dataframe should be something like (namely a correlation matrix):</p> <pre><code> name1 name2 name3 name4 name1 1 0.95 0.79 0.4 name2 0.95 1 0.85 0.65 name3 0.79 0.85 1 0.79 name4 0.66 0.65 0.79 1.00000 </code></pre> <p><strong>I have done 2 ways to implement this and they are very slow. I want to know if there's another maybe faster.</strong></p> <p>First way:</p> <pre><code>df = a.texts.apply(lambda text1: a.texts.apply(lambda text2: cosine_distance_wordembedding_method(text1, text2))) </code></pre> <p>Second way:</p> <pre><code># Create a dataframe to store output. df = pd.DataFrame(index=a.index, columns = a.index) # Compute the similarities for index1, row1 in a.iterrows(): for index2, row2 in a.iterrows(): df.loc[index1, index2] = cosine_distance_wordembedding_method(row1[&quot;eng_text&quot;], row2[&quot;eng_text&quot;]) </code></pre>
<p><code>scipy.spatial.distance.cdist</code> is vectorized. So you can calculate all the vector representations for the texts first and call <code>cdist</code> once:</p> <pre><code>from scipy.spatial.distance import cdist
import numpy as np

vectors = [np.mean([glove[word] if word in glove else 0 for word in preprocess(s1)], axis=0)
           for s1 in df['texts']]

distance = 1 - cdist(vectors, vectors, metric='cosine')
</code></pre> <p>Remember that this approach might require a lot of memory, especially when you have big text data.</p>
python|pandas|correlation
0
274
69,413,841
Evaluating a data frame through another Indicator data frame
<p>I have a source dataframe <strong>input_df</strong>:</p> <pre> PatientID KPI_Key1 KPI_Key2 KPI_Key3 0 1 (C602+C603) C601 NaN 1 2 (C605+C606) C602 NaN 2 3 75 L239+C602 NaN 3 4 (32*(C603+234)) 75 NaN 4 5 L239 NaN C601 </pre> <p>I have another indicator dataframe <strong>indicator_df</strong></p> <pre> 99 75 C604 C602 C601 C603 C605 C606 44 L239 32 PatientID 1 1 0 1 0 1 0 0 0 1 0 1 2 0 0 0 0 0 0 1 1 0 0 0 3 1 1 1 1 0 1 1 1 1 1 1 4 0 0 0 0 0 1 0 1 0 1 0 5 1 0 1 1 1 1 0 1 1 1 1 6 0 1 0 0 0 0 0 0 0 0 0 7 1 1 1 1 1 1 1 1 1 1 1 8 0 0 0 0 0 0 0 0 0 0 0 </pre> <p>Now, I need to generate an output like this <strong>output_df</strong></p> <pre> PatientID KPI_Key1 KPI_Key2 KPI_Key3 0 1 0 1 0 1 2 1 0 0 2 3 1 1 0 3 4 0 0 0 4 5 1 0 1 </pre> <p>the output_df is obtained by &quot;Evaluating&quot; the input formulas in the input_df against the indicator_df.<br> The + represents OR condition 1 + 1 = 1 ; 1 + 0 = 1 ; 0 + 0 = 0 <br> The * represents AND condition. 1 * 1 = 1 ; 0 * 0 = 0 ; 1 * 0 = 0</p> <p>source :</p> <pre> input_df = pd.DataFrame({'PatientID': [1,2,3,4,5], 'KPI_Key1': ['(C602+C603)','(C605+C606)','75','(32*(C603+234))','L239'] , 'KPI_Key2' : ['C601','C602','L239+C602','75',''] , 'KPI_Key3' : ['','','','','C601']}) </pre> <pre> indicator_df = pd.DataFrame({'PatientID': [1,2,3,4,5,6,7,8],'99' : ['1','0','1','0','1','0','1','0'],'75' : ['0','0','1','0','0','1','1','0'],'C604' : ['1','0','1','0','1','0','1','0'],'C602' : ['0','0','1','0','1','0','1','0'],'C601' : ['1','0','0','0','1','0','1','0'],'C603' : ['0','0','1','1','1','0','1','0'],'C605' : ['0','1','1','0','0','0','1','0'],'C606' : ['0','1','1','1','1','0','1','0'],'44' : ['1','0','1','0','1','0','1','0'],'L239' : ['0','0','1','1','1','0','1','0'], '32' : ['1','0','1','0','1','0','1','0'],}).set_index('PatientID') </pre> <pre> output_df = pd.DataFrame({'PatientID': [1,2,3,4,5], 'KPI_Key1': ['0','1','1','0','1'] , 'KPI_Key2' : ['1','0','1','0','0'] , 'KPI_Key3' : ['0','0','0','0','1']}) </pre>
<p>Finally I was able to solve it:</p> <pre><code>import re

final_out_df = pd.DataFrame()
for i in range(len(input_df)):
    # one output frame per input row, indexed by all PatientIDs of indicator_df
    out_df = pd.DataFrame(index=indicator_df.index)
    out_df.reset_index(level=0, inplace=True)
    for j in ['KPI_Key1', 'KPI_Key2', 'KPI_Key3']:
        exp = input_df[j].iloc[i]
        # checking for NaN values
        if exp == exp:
            temp_out_df = indicator_df.eval(re.sub(r'(\w+)', r'`\1`', exp)).reset_index(name=j)
            out_df['KPI_Key'] = input_df['PatientID'].iloc[i]
            out_df = out_df.merge(temp_out_df, on='PatientID', how='left')
    final_out_df = final_out_df.append(out_df)

final_out_df.index = range(len(final_out_df))

# fill NaN values with 0 and convert everything to int
final_out_df.fillna(0, inplace=True)
final_out_df[[&quot;KPI_Key1&quot;, &quot;KPI_Key2&quot;, &quot;KPI_Key3&quot;]] = final_out_df[[&quot;KPI_Key1&quot;, &quot;KPI_Key2&quot;, &quot;KPI_Key3&quot;]].astype(int)

# clip any value &gt;= 1 down to 1
final_out_df.loc[final_out_df['KPI_Key1'] &gt;= 1, 'KPI_Key1'] = 1
final_out_df.loc[final_out_df['KPI_Key2'] &gt;= 1, 'KPI_Key2'] = 1
final_out_df.loc[final_out_df['KPI_Key3'] &gt;= 1, 'KPI_Key3'] = 1
</code></pre>
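<p>The key trick in that solution is wrapping every token of the formula in backticks so that <code>DataFrame.eval</code> treats names such as <code>75</code> or <code>C602</code> as column references (this needs a pandas version with backtick support, 0.25 or newer). A minimal, hypothetical illustration of just that part, with a toy indicator frame rather than the real data:</p> <pre><code>import re
import pandas as pd

ind = pd.DataFrame({'C602': [0, 1, 1], 'C603': [1, 0, 1], '75': [0, 0, 1]},
                   index=pd.Index([1, 2, 3], name='PatientID'))

exp = '(C602+C603)'
quoted = re.sub(r'(\w+)', r'`\1`', exp)   # '(`C602`+`C603`)'

summed = ind.eval(quoted)                 # per-patient value of the formula
print(summed.ge(1).astype(int))           # clip anything above 1 down to 1 (the OR semantics)
</code></pre> <p>One caveat worth knowing: numeric literals inside a formula (like the 234 in <code>(32*(C603+234))</code>) are also wrapped in backticks by this regex and would then be looked up as column names, so the pattern assumes every token in the formula is a real column.</p>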
python|pandas|date-range
0
275
69,641,146
Tensorflow urllib.error.URLError: <urlopen error Errno 60 Operation timed out>
<p>Recently, I tried to use the elmo in tensorflow, but I meet some mistakes, if you can help me, I would be really appreciate.</p> <p>this is my test_code:</p> <pre><code>import tensorflow as tf import tensorflow_hub as hub import numpy as np import urllib.request if __name__ == '__main__': elmo = hub.Module('https://tfhub.dev/google/elmo/3',trainable=True) x = [&quot;Hi my friend&quot;] embeddings = elmo(tf.constant(x),signature=&quot;default&quot;,as_dict=True)[&quot;elmo&quot;] print(embeddings.numpy()) </code></pre> <p>and I run it, an error woudl be reported in my computer.</p> <pre><code>urllib.error.URLError: &lt;urlopen error [Errno 60] Operation timed out&gt; </code></pre> <p>I found lots of methods, but still cannot resolve this question, so if you can help me, I would really appreciate. and there is all error report.</p> <pre><code>Traceback (most recent call last): File &quot;/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py&quot;, line 1346, in do_open h.request(req.get_method(), req.selector, req.data, headers, File &quot;/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/http/client.py&quot;, line 1255, in request self._send_request(method, url, body, headers, encode_chunked) File &quot;/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/http/client.py&quot;, line 1301, in _send_request self.endheaders(body, encode_chunked=encode_chunked) File &quot;/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/http/client.py&quot;, line 1250, in endheaders self._send_output(message_body, encode_chunked=encode_chunked) File &quot;/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/http/client.py&quot;, line 1010, in _send_output self.send(msg) File &quot;/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/http/client.py&quot;, line 950, in send self.connect() File &quot;/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/http/client.py&quot;, line 1417, in connect super().connect() File &quot;/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/http/client.py&quot;, line 921, in connect self.sock = self._create_connection( File &quot;/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/socket.py&quot;, line 843, in create_connection raise err File &quot;/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/socket.py&quot;, line 831, in create_connection sock.connect(sa) TimeoutError: [Errno 60] Operation timed out During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;/Users/jinsongyang/PycharmProjects/pythonProject11/elmo_test.py&quot;, line 7, in &lt;module&gt; elmo = hub.Module('https://tfhub.dev/google/elmo/3',trainable=True) File &quot;/Users/jinsongyang/PycharmProjects/pythonProject11/lib/python3.9/site-packages/tensorflow_hub/module.py&quot;, line 157, in __init__ self._spec = as_module_spec(spec) File &quot;/Users/jinsongyang/PycharmProjects/pythonProject11/lib/python3.9/site-packages/tensorflow_hub/module.py&quot;, line 30, in as_module_spec return load_module_spec(spec) File &quot;/Users/jinsongyang/PycharmProjects/pythonProject11/lib/python3.9/site-packages/tensorflow_hub/module.py&quot;, line 64, in load_module_spec path = registry.resolver(path) File &quot;/Users/jinsongyang/PycharmProjects/pythonProject11/lib/python3.9/site-packages/tensorflow_hub/registry.py&quot;, line 51, in __call__ return impl(*args, **kwargs) File 
&quot;/Users/jinsongyang/PycharmProjects/pythonProject11/lib/python3.9/site-packages/tensorflow_hub/compressed_module_resolver.py&quot;, line 67, in __call__ return resolver.atomic_download(handle, download, module_dir, File &quot;/Users/jinsongyang/PycharmProjects/pythonProject11/lib/python3.9/site-packages/tensorflow_hub/resolver.py&quot;, line 418, in atomic_download download_fn(handle, tmp_dir) File &quot;/Users/jinsongyang/PycharmProjects/pythonProject11/lib/python3.9/site-packages/tensorflow_hub/compressed_module_resolver.py&quot;, line 63, in download response = self._call_urlopen(request) File &quot;/Users/jinsongyang/PycharmProjects/pythonProject11/lib/python3.9/site-packages/tensorflow_hub/resolver.py&quot;, line 522, in _call_urlopen return urllib.request.urlopen(request) File &quot;/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py&quot;, line 214, in urlopen return opener.open(url, data, timeout) File &quot;/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py&quot;, line 517, in open response = self._open(req, data) File &quot;/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py&quot;, line 534, in _open result = self._call_chain(self.handle_open, protocol, protocol + File &quot;/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py&quot;, line 494, in _call_chain result = func(*args) File &quot;/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py&quot;, line 1389, in https_open return self.do_open(http.client.HTTPSConnection, req, File &quot;/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py&quot;, line 1349, in do_open raise URLError(err) urllib.error.URLError: &lt;urlopen error [Errno 60] Operation timed out&gt; </code></pre>
<p>Which <code>Tensorflow</code> version are you using?</p> <p>You need to disable TensorFlow's eager execution mode to run this code.</p> <p>Use the code below:</p> <pre><code>import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
</code></pre> <p>before the rest of your code, and remove <code>.numpy()</code>, as <code>.numpy</code> is only supported in eager mode. If you are in graph mode, it will not be supported.</p>
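<p>Putting the pieces together, the adjusted script could look roughly like the sketch below. This is only a sketch under the assumption that the module download itself succeeds (the original <code>URLError</code> is a network timeout, so the tfhub.dev URL must be reachable from your machine); in graph mode the tensor also has to be evaluated inside a session:</p> <pre><code>import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
import tensorflow_hub as hub

elmo = hub.Module('https://tfhub.dev/google/elmo/3', trainable=True)
x = ['Hi my friend']
embeddings = elmo(tf.constant(x), signature='default', as_dict=True)['elmo']

with tf.Session() as sess:
    # hub.Module creates graph variables and lookup tables that need initialising
    sess.run([tf.global_variables_initializer(), tf.tables_initializer()])
    print(sess.run(embeddings))
</code></pre>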
python|tensorflow|url|nlp|elmo
0
276
41,069,771
Single column matrix and its transpose to create Symmetric matrix in python, numpy scipy
<p>Is there any existing function in numpy or scipy to do following operation?</p> <pre><code>z = [a, b] z*z.T (transpose of the z) = [[a**2, a*b] [b*a, b**2]] </code></pre> <p>Thank you!</p>
<p>You can use numpy's <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.outer.html" rel="nofollow noreferrer">outer</a> function:</p> <pre><code>np.outer([2, 4], [2, 4])
array([[ 4,  8],
       [ 8, 16]])
</code></pre>
python|numpy|scipy
2
277
54,148,669
Counting rows back and forth based on time column
<p>I have a dataframe with user ids and two different times. <code>time1</code> is the same for one user, but <code>time2</code> is different. </p> <pre><code>test = pd.DataFrame({ 'id': [1,1,1,1,1,1,1,1,1,1,2,2,2,2,2], 'time1': ['2018-11-01 21:19:32', '2018-11-01 21:19:32', '2018-11-01 21:19:32','2018-11-01 21:19:32','2018-11-01 21:19:32', '2018-11-01 21:19:32', '2018-11-01 21:19:32', '2018-11-01 21:19:32','2018-11-01 21:19:32','2018-11-01 21:19:32', '2018-11-02 11:20:12', '2018-11-02 11:20:12','2018-11-02 11:20:12','2018-11-02 11:20:12','2018-11-02 11:20:12'], 'time2': ['2018-11-01 10:19:32', '2018-11-01 22:19:32', '2018-11-01 12:19:32','2018-11-01 23:44:32','2018-11-01 14:19:32', '2018-11-01 15:19:32', '2018-11-01 11:19:32', '2018-11-01 23:19:32','2018-11-01 13:22:32','2018-11-01 23:56:32', '2018-11-02 11:57:12', '2018-11-02 10:20:12','2018-11-02 11:25:12','2018-11-02 11:32:12','2018-11-02 09:15:12'] }) </code></pre> <p>I would like to create a <code>row_num</code> column which sorts and counts <code>time2</code> according to <code>time1</code>. Everything which happened before <code>time1</code> is counted in reverse:</p> <pre><code> id time1 time2 row_num 0 1 2018-11-01 21:19:32 2018-11-01 10:19:32 -6 1 1 2018-11-01 21:19:32 2018-11-01 11:19:32 -5 2 1 2018-11-01 21:19:32 2018-11-01 12:19:32 -4 3 1 2018-11-01 21:19:32 2018-11-01 13:19:32 -3 4 1 2018-11-01 21:19:32 2018-11-01 14:19:32 -2 5 1 2018-11-01 21:19:32 2018-11-01 15:19:32 -1 6 1 2018-11-01 21:19:32 2018-11-01 22:19:32 1 7 1 2018-11-01 21:19:32 2018-11-01 23:19:32 2 8 1 2018-11-01 21:19:32 2018-11-01 23:44:32 3 9 1 2018-11-01 21:19:32 2018-11-01 23:56:32 4 10 2 2018-11-02 11:20:12 2018-11-02 09:20:12 -2 11 2 2018-11-02 11:20:12 2018-11-02 10:20:12 -1 12 2 2018-11-02 11:20:12 2018-11-02 11:25:12 1 13 2 2018-11-02 11:20:12 2018-11-02 11:32:12 2 14 2 2018-11-02 11:20:12 2018-11-02 11:57:12 3 </code></pre> <p>Will appreciate your help and advice!</p>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.cumcount.html" rel="nofollow noreferrer"><code>cumcount</code></a> with no parameter and also with <code>ascending=False</code>:</p> <pre><code>#necessary unique default RangeIndex test = test.reset_index(drop=True) #convert columns to datetimes test[['time1','time2']] = test[['time1','time2']].apply(pd.to_datetime) #sorting both columns test = test.sort_values(['id','time1','time2']) #boolean mask m = test['time2'] &lt; test['time1'] #filter and get counter, last join togather test['row_num'] = pd.concat([(test[m].groupby('id').cumcount(ascending=False) +1) * -1, test[~m].groupby('id').cumcount() + 1]) print (test) id time1 time2 row_num 0 1 2018-11-01 21:19:32 2018-11-01 10:19:32 -6 6 1 2018-11-01 21:19:32 2018-11-01 11:19:32 -5 2 1 2018-11-01 21:19:32 2018-11-01 12:19:32 -4 8 1 2018-11-01 21:19:32 2018-11-01 13:22:32 -3 4 1 2018-11-01 21:19:32 2018-11-01 14:19:32 -2 5 1 2018-11-01 21:19:32 2018-11-01 15:19:32 -1 1 1 2018-11-01 21:19:32 2018-11-01 22:19:32 1 7 1 2018-11-01 21:19:32 2018-11-01 23:19:32 2 3 1 2018-11-01 21:19:32 2018-11-01 23:44:32 3 9 1 2018-11-01 21:19:32 2018-11-01 23:56:32 4 14 2 2018-11-02 11:20:12 2018-11-02 09:15:12 -2 11 2 2018-11-02 11:20:12 2018-11-02 10:20:12 -1 12 2 2018-11-02 11:20:12 2018-11-02 11:25:12 1 13 2 2018-11-02 11:20:12 2018-11-02 11:32:12 2 10 2 2018-11-02 11:20:12 2018-11-02 11:57:12 3 </code></pre>
python-3.x|pandas
2
278
53,908,646
Pandas dataframe to numpy array
<p>I am very new to Python and have very little experience. I've managed to get some code working by copying and pasting and substituting the data I have, but I've been looking up how to select data from a dataframe but can't make sense of the examples and substitute my own data in.</p> <p><strong>The overarching goal</strong>: (if anyone could actually help me write the entire thing, that would be helpful, but highly unlikely and probably not allowed)</p> <p>I am trying to use <code>scipy</code> to fit the curve of a temperature change when two chemicals react. There are 40 trials. The model I am hoping to use is a generalized logistic function with six parameters. All I need are the 40 functions, and nothing else. I have no idea how to achieve this, but I will ask another question when I get there.</p> <p><strong>The current issue</strong>:</p> <p>I had imported 40 <code>.csv</code> files, compiled/shortened the data into 2 sections so that there are 20 trials in 1 file. Now the data has 21 columns and 63 rows. There is a title in the first row for each column, and the first column is a consistent time interval. </p> <p>However, each trial is not necessarily that long. One of them does, though. So I've managed to write the following code for a dataframe:</p> <pre><code>import pandas as pd df = pd.read_csv("~/Truncated raw data hcl.csv") print(df) </code></pre> <p>It prints the table out, but as expected, there are NaNs where there exists no data. </p> <p>So I would like to know how to arrange it into workable array with 2 columns , time and a trial like an (x,y) for a graph for future workings with <code>numpy</code> or <code>scipy</code> such that the rows that there is no data would not be included.</p> <p>Part of the <code>.csv</code> file begins after the horizontal line. I'm too lazy to put it in a code block, sorry. 
Thank you.</p> <hr> <pre><code>time,1mnaoh trial 1,1mnaoh trial 2,1mnaoh trial 3,1mnaoh trial 4,2mnaoh trial 1,2mnaoh trial 2,2mnaoh trial 3,2mnaoh trial 4,3mnaoh trial 1,3mnaoh trial 2,3mnaoh trial 3,3mnaoh trial 4,4mnaoh trial 1,4mnaoh trial 2,4mnaoh trial 3,4mnaoh trial 4,5mnaoh trial 1,5mnaoh trial 2,5mnaoh trial 3,5mnaoh trial 4 0.0,23.2,23.1,23.1,23.8,23.1,23.1,23.3,22.0,22.8,23.4,23.3,24.0,23.0,23.8,23.8,24.0,23.3,24.3,24.1,24.1 0.5,23.2,23.1,23.1,23.8,23.1,23.1,23.3,22.1,22.8,23.4,23.3,24.0,23.0,23.8,23.8,24.0,23.4,24.3,24.1,24.1 1.0,23.2,23.1,23.1,23.7,23.1,23.1,23.3,22.3,22.8,23.4,23.3,24.0,23.0,23.8,23.8,24.0,23.5,24.3,24.1,24.1 1.5,23.2,23.1,23.1,23.7,23.1,23.1,23.3,22.4,22.8,23.4,23.3,24.0,23.0,23.8,23.8,23.9,23.6,24.3,24.1,24.1 2.0,23.3,23.2,23.2,24.2,23.6,23.2,24.3,22.5,23.0,23.7,24.4,24.1,23.1,23.9,24.4,24.2,23.7,24.5,24.7,25.1 2.5,24.0,23.5,23.5,25.4,25.3,23.3,26.4,22.7,23.5,25.8,27.9,25.1,23.1,23.9,27.4,26.8,23.8,27.2,26.7,28.1 3.0,25.4,24.4,24.1,26.5,27.8,23.3,28.5,22.8,24.6,28.6,31.2,27.2,23.2,23.9,30.9,30.5,23.9,31.4,29.8,31.3 3.5,26.9,25.5,25.1,27.4,29.9,23.4,30.1,22.9,26.4,31.4,34.0,30.0,23.3,24.2,33.8,34.0,23.9,35.1,33.2,34.4 4.0,27.8,26.5,26.2,27.9,31.4,23.4,31.3,23.1,28.8,34.0,36.1,32.6,23.3,26.6,36.0,36.7,24.0,37.7,35.9,36.8 4.5,28.5,27.3,27.0,28.2,32.6,23.5,32.3,23.1,31.2,36.0,37.5,34.8,23.4,30.0,37.7,38.7,24.0,39.7,38.0,38.7 5.0,28.9,27.9,27.7,28.5,33.4,23.5,33.1,23.2,33.2,37.6,38.6,36.5,23.4,33.2,39.0,40.2,24.0,40.9,39.6,40.2 5.5,29.2,28.2,28.3,28.9,34.0,23.5,33.7,23.3,35.0,38.7,39.4,37.9,23.5,35.6,39.9,41.2,24.0,41.9,40.7,41.0 6.0,29.4,28.5,28.6,29.1,34.4,24.9,34.2,23.3,36.4,39.6,40.0,38.9,23.5,37.3,40.6,42.0,24.1,42.5,41.6,41.2 6.5,29.5,28.8,28.9,29.3,34.7,27.0,34.6,23.3,37.6,40.4,40.4,39.7,23.5,38.7,41.1,42.5,24.1,43.1,42.3,41.7 7.0,29.6,29.0,29.1,29.5,34.9,28.8,34.8,23.5,38.6,40.9,40.8,40.2,23.5,39.7,41.4,42.9,24.1,43.4,42.8,42.3 7.5,29.7,29.2,29.2,29.6,35.1,30.5,35.0,24.9,39.3,41.4,41.1,40.6,23.6,40.5,41.7,43.2,24.0,43.7,43.1,42.9 8.0,29.8,29.3,29.3,29.7,35.2,31.8,35.2,26.9,40.0,41.6,41.3,40.9,23.6,41.1,42.0,43.4,24.2,43.8,43.3,43.3 8.5,29.8,29.4,29.4,29.8,35.3,32.8,35.4,28.9,40.5,41.8,41.4,41.2,23.6,41.6,42.2,43.5,27.0,43.9,43.5,43.6 9.0,29.9,29.5,29.5,29.9,35.4,33.6,35.5,30.5,40.8,41.8,41.6,41.4,23.6,41.9,42.4,43.7,30.8,44.0,43.6,43.8 9.5,29.9,29.6,29.5,30.0,35.5,34.2,35.6,31.7,41.0,41.8,41.7,41.5,23.6,42.2,42.5,43.7,33.9,44.0,43.7,44.0 10.0,30.0,29.7,29.6,30.0,35.5,34.6,35.7,32.7,41.1,41.9,41.8,41.7,23.6,42.4,42.6,43.8,36.2,44.0,43.7,44.1 10.5,30.0,29.7,29.6,30.1,35.6,35.0,35.7,33.3,41.2,41.9,41.8,41.8,23.6,42.6,42.6,43.8,37.9,44.0,43.8,44.2 11.0,30.0,29.7,29.6,30.1,35.7,35.2,35.8,33.8,41.3,41.9,41.9,41.8,24.0,42.9,42.7,43.8,39.3,,43.8,44.3 11.5,30.0,29.8,29.7,30.1,35.8,35.4,35.8,34.1,41.4,41.9,42.0,41.8,26.6,43.1,42.7,43.9,40.2,,43.8,44.3 12.0,30.0,29.8,29.7,30.1,35.8,35.5,35.9,34.3,41.4,42.0,42.0,41.9,30.3,43.3,42.7,43.9,40.9,,43.9,44.3 12.5,30.1,29.8,29.7,30.2,35.9,35.7,35.9,34.5,41.5,42.0,42.0,,33.4,43.4,42.7,44.0,41.4,,43.9,44.3 13.0,30.1,29.8,29.8,30.2,35.9,35.8,36.0,34.7,41.5,42.0,42.1,,35.8,43.5,42.7,44.0,41.8,,43.9,44.4 13.5,30.1,29.9,29.8,30.2,36.0,36.0,36.0,34.8,41.5,42.0,42.1,,37.7,43.5,42.8,44.1,42.0,,43.9,44.4 14.0,30.1,29.9,29.8,30.2,36.0,36.1,36.0,34.9,41.6,,42.2,,39.0,43.5,42.8,44.1,42.1,,,44.4 14.5,,29.9,29.8,,36.0,36.2,36.0,35.0,41.6,,42.2,,40.0,43.5,42.8,44.1,42.3,,,44.4 15.0,,29.9,,,36.0,36.3,,35.0,41.6,,42.2,,40.7,,42.8,44.1,42.4,,, 15.5,,,,,36.0,36.4,,35.1,41.6,,42.2,,41.3,,,,42.4,,, </code></pre>
<p>To convert a whole <code>DataFrame</code> into a numpy array, use</p> <p><code>df = df.values</code></p> <p>(note that <code>values</code> is a property, not a method). If I understood you correctly, you want separate arrays for every trial though. This can be done like this:</p> <p><code>data = [df.iloc[:, [0, i]].values for i in range(1, 21)]</code></p> <p>which will make a list of numpy arrays, each one containing the first column (the time) together with one of the 20 trial columns.</p>
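<p>Since each trial stops at a different time, the trailing <code>NaN</code> rows can also be dropped per trial before converting. A small sketch of that idea, reusing the asker's file and the column names from the CSV header shown above:</p> <pre><code>import pandas as pd

df = pd.read_csv('Truncated raw data hcl.csv')

trial_arrays = {}
for col in df.columns[1:]:                       # every column except 'time'
    pair = df[['time', col]].dropna(subset=[col])
    trial_arrays[col] = pair.values              # 2-column array: time, temperature

print(trial_arrays['1mnaoh trial 1'][:5])
</code></pre>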
python|pandas|numpy|scipy
1
279
54,131,872
How to make vectorized computation instead of 'for' loops for all grid?
<p>I have a double for loop among all the grid, and I want to make it work faster. <code>r, vec1, vec2, theta</code> are the vectors of the same length <code>N</code>. <code>c</code> is a constant.</p> <pre><code>import numpy as np N = 30 x_coord, y_coord = 300, 300 m1 = np.zeros((x_coord, y_coord)) vec1, vec2 = np.ones(N), np.ones(N) theta = np.ones(N) for x in np.arange(x_coord): for y in np.arange(y_coord): m1[x,y] = np.sum(np.cos(2.*np.pi*(r*(vec1*x + vec2*y))+theta)) * c </code></pre> <p>The time for two loops was: </p> <p><code>1.03 s ± 8.96 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)</code></p> <p>Also I tried to use <code>np.meshgrid</code>:</p> <pre><code>def f1(x, y): sum1 = vec1*x + vec2*y mltpl1 = r * sum1 sum2 = 2.*np.pi * mltpl1 + theta sum3 = np.sum(np.cos(sum2)) mltpl2 = sum3 * c return mltpl2 msh1, msh2 = np.meshgrid(range(x_coord), range(y_coord)) pairs = np.vstack((np.ravel(msh1), np.ravel(msh2))).T m1 = np.reshape(list(map(lambda x: f1(x[0], x[1]), pairs)), (m1.shape[0], m1.shape[1])).T </code></pre> <p>Trying meshgrid time was more:</p> <p><code>1.25 s ± 48.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)</code></p> <p>So I need a solution how to do it on a vectors and matrices level. Are there any ideas? Thank you in advance.</p>
<p>We can use a trigonometric trick here -</p> <pre><code>cos(A + B) = cos A cos B − sin A sin B </code></pre> <p>This lets us leverage <code>matrix-multiplication</code> for a solution that would look something like this -</p> <pre><code># Get x and y as 1D arrays x = np.arange(x_coord) y = np.arange(y_coord) # Get the common constant for scaling vec1 and vec2 parts k1 = 2.*np.pi*r # Use outer multiplications for the two vectors against x,y and also scale them p1 = k1*vec1*x[:,None] + theta p2 = (k1*vec2)[:,None]*y # Finally use trigonometry+matrix-multiplication for sum reductions out = c*(np.cos(p1).dot(np.cos(p2)) - np.sin(p1).dot(np.sin(p2))) </code></pre> <p>Timings -</p> <pre><code># Setup In [151]: np.random.seed(0) ...: c = 2.3 ...: N = 30 ...: x_coord, y_coord = 300, 300 ...: vec1 = np.random.rand(N) ...: vec2 = np.random.rand(N) ...: r = np.random.rand(N) ...: theta = np.ones(N) # Original solution In [152]: %%timeit ...: m1 = np.zeros((x_coord, y_coord)) ...: for x in np.arange(x_coord): ...: for y in np.arange(y_coord): ...: m1[x,y] = np.sum(np.cos(2.*np.pi*(r*(vec1*x + vec2*y))+theta)) * c 1 loop, best of 3: 960 ms per loop # Proposed solution In [153]: %%timeit ...: x = np.arange(x_coord) ...: y = np.arange(y_coord) ...: k1 = 2.*np.pi*r ...: p1 = k1*vec1*x[:,None] + theta ...: p2 = (k1*vec2)[:,None]*y ...: out = c*(np.cos(p1).dot(np.cos(p2)) - np.sin(p1).dot(np.sin(p2))) 100 loops, best of 3: 2.54 ms per loop </code></pre> <p><strong><code>375x+</code></strong> speedup!</p>
python|numpy|vectorization
3
280
53,872,063
Removing quote and hidden new line
<p>I am reading a excel file using pd.read_excel and in one the column, few rows have quotes(") and hidden new lines. I want to remove both of them before doing some further transformation. The sample string is as follows</p> <pre><code>col1 col2 col3 IC201829 100234 "Valuation of GF , Francis Street D8. I number: 106698 " </code></pre> <p>I am using following code to remove the quote and hidden new line (between D8. and I number),</p> <pre><code>df['col3'] = df['col3'].str.replace('"','') df['col3'] = df['col3'].replace(r'\\n',' ', regex=True) </code></pre> <p>Any suggestion is much appreciated. Thank you</p>
<p>You can do it this way with a single chained <code>replace()</code>:</p> <pre><code>import pandas as pd

text = '''"Valuation of "GF , Francis Street D8.\nI number: 106698"'''  # renamed so the built-in str is not shadowed
df = pd.DataFrame({'Col3': [text]})
print(df)

df = df.replace('\n', ' ', regex=True).replace('"', '', regex=True)
print(df)
</code></pre> <p><strong>RUN DEMO:</strong> <a href="https://repl.it/@SanyAhmed/EarnestTatteredRepo" rel="nofollow noreferrer">https://repl.it/@SanyAhmed/EarnestTatteredRepo</a></p>
python-3.x|pandas
1
281
52,664,433
Efficiently add column to Pandas DataFrame with values from another DataFrame
<p>I have a simple database consisting of 2 tables (say, Items and Users), where a column of the Users is their <strong>User_ID</strong>, a column of the Items is their <strong>Item_ID</strong> and another column of the Items is <strong>a foreign key to a User_ID</strong>, for instance:</p> <pre><code>Items Users Item_ID Value_A Its_User_ID ... User_ID Name ... 1 35 1 1 Alice 2 991 1 2 John 3 20 2 </code></pre> <p>Imagine I want to <a href="https://en.wikipedia.org/wiki/Denormalization" rel="nofollow noreferrer">denormalize</a> this database, i.e. I'm adding the value of column Name from table Users into table Items for performance reasons when querying the data. My current solution is the following:</p> <pre><code>items['User_Name'] = pd.Series([users.loc[users['User_ID']==x, 'Name'].iloc[0] for x in items['Its_User_ID']]) </code></pre> <p>That is, I'm adding the column as a Pandas Series constructed from a comprehension list, which uses <strong>.loc[]</strong> to retrieve the names of the users with a specific ID, and <strong>.iloc[0]</strong> to get the first element of the selection (which is the only one because user IDs are unique).</p> <p>But this solution is really slow for large sets of items. I did the following tests:</p> <ul> <li>For 1000 items and ~200K users: 20 seconds.</li> <li>For ~400K items and ~200K users: 2.5 hours. (and this is the real data size).</li> </ul> <p>Because this approach is column-wise, its execution time grows multiplicatively by the number of columns for which I'm doing this process, and gets too time-expensive. While I haven't tried using <strong>for</strong> loops to fill the new Series row by row, I expect that it should be much more costly. Are there other approaches that I'm ignoring? Is there a possible solution that takes a few minutes instead of a few hours?</p>
<p>I think it would be more straightforward if you used table <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.merge.html" rel="nofollow noreferrer">merges</a>.</p> <pre><code>items.merge(users[['User_ID', 'Name']], left_on='Its_User_ID', right_on='User_ID', how='left') </code></pre> <p>This will add the column Name to the new dataset, which you can of-course rename later. This will be much more efficient that doing the operation via a for loop column-wise.</p>
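<p>A small end-to-end example with data shaped like the tables in the question, including the usual cleanup of dropping the duplicated key and renaming the new column:</p> <pre><code>import pandas as pd

items = pd.DataFrame({'Item_ID': [1, 2, 3],
                      'Value_A': [35, 991, 20],
                      'Its_User_ID': [1, 1, 2]})
users = pd.DataFrame({'User_ID': [1, 2],
                      'Name': ['Alice', 'John']})

items = items.merge(users[['User_ID', 'Name']],
                    left_on='Its_User_ID', right_on='User_ID', how='left')
items = items.drop(columns='User_ID').rename(columns={'Name': 'User_Name'})
print(items)
</code></pre>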
python|pandas|performance|dataframe|series
2
282
52,771,770
Different results for IRR from numpy.irr and the Excel IRR function
<p>I have the following annual cash flows:</p> <pre><code>w=np.array([ -56501, -14918073, -1745198, -20887403, -9960686, -31076934, 0, 0, 11367846, 26736802, -2341940, 20853917, 22166416, 19214094, 23056582, -11227178, 18867100, 24947517, 28733869, 24707603, -17030396, 7753089, 27526723, 31534327, 26726270, -24607953, 11532035, 29444013, 24350595, 30140678, -33262793, 5640172, 32846900, 38165710, 31655489, -74343373, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -8727068]) </code></pre> <p>I calculate IRR using np.irr</p> <pre><code>np.irr(w) Out[141]: -0.05393588064654964 </code></pre> <p>When I use IRR function in Excel for the same cash flows, I get 12%. These two functions usually produce the same result. Does anyone know why in this case the results are so different? Thanks!</p>
<p>For the given cash flows, the IRR is not unique; see <a href="https://en.wikipedia.org/wiki/Internal_rate_of_return#Multiple_IRRs" rel="nofollow noreferrer">Multiple IRRs</a>. Both the numpy and Excel values for <code>r</code> satisfy <code>NPV(r) = 0</code>, where NPV is the net present value.</p> <p>Here's a plot of NPV(r) for the data in <code>w</code>. The red stars mark the IRR values (where NPV(r) is zero).</p> <p><a href="https://i.stack.imgur.com/NH2Cy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NH2Cy.png" alt="plot"></a></p> <p>Here's the script that generates the plot:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt w = np.array([ -56501, -14918073, -1745198, -20887403, -9960686, -31076934, 0, 0, 11367846, 26736802, -2341940, 20853917, 22166416, 19214094, 23056582, -11227178, 18867100, 24947517, 28733869, 24707603, -17030396, 7753089, 27526723, 31534327, 26726270, -24607953, 11532035, 29444013, 24350595, 30140678, -33262793, 5640172, 32846900, 38165710, 31655489, -74343373, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -8727068]) r_excel = 0.1200963665 r_numpy = np.irr(w) rr = np.linspace(-0.055, 0.16, 500) npvals = np.array([np.npv(r, w) for r in rr]) plt.plot(rr, npvals/1e6, alpha=0.8) plt.plot(r_numpy, 0, 'r*') plt.plot(r_excel, 0, 'r*') plt.grid(True) plt.xlabel('r') plt.ylabel('NPV(r) [millions]') plt.show() </code></pre>
python|numpy|excel-formula
4
283
52,548,382
How to replace column values in one dataframe from other dataframe on condition?
<pre><code>import pandas as pd df = pd.DataFrame({"A":["foo", "foo", "foo", "bar","Panda", "Panda", "Zootopia", "Zootopia"],"B":[0,1,1,1,0,1,1,1]}) df1 = pd.DataFrame({"A":["foo", "foo", "foo", "bar","Panda", "Panda", "Zootopia", "Zootopia","Toy Story"]}) </code></pre> <p>If column <strong>A</strong> in df and df1 matches then replace df1 column A by df column <strong>B</strong> value also count number of values not replaced and take a list of them.</p> <p>Expected Output</p> <pre><code> A 0 0.0 1 1.0 2 1.0 3 1.0 4 0.0 5 1.0 6 1.0 7 1.0 8 Toy Story </code></pre> <p>Not Replaced Count: 1</p> <p>Not Replaced DataFrame:</p> <pre><code> 0 A Toy Story </code></pre>
<p>Here is one solution:</p> <pre><code>num = len(df['A']) if all(df1['A'][:num]==df['A'][:num]): df1['A'][:num] = df['B'] </code></pre> <p>Output for df1:</p> <pre><code> A 0 0 1 1 2 1 3 1 4 0 5 1 6 1 7 1 8 Toy Story </code></pre> <hr> <p>If you did not want to alter df1, you could make a copy of it and run the code:</p> <pre><code>num = len(df['A']) df1_copy = df1.copy() if all(df1_copy['A'][:num]==df['A'][:num]): df1_copy['A'][:num] = df['B'] </code></pre> <p>Output for df1:</p> <pre><code> A 0 foo 1 foo 2 foo 3 bar 4 Panda 5 Panda 6 Zootopia 7 Zootopia 8 Toy Story </code></pre> <p>Output for df1_copy:</p> <pre><code> A 0 0 1 1 2 1 3 1 4 0 5 1 6 1 7 1 8 Toy Story </code></pre>
python|pandas
1
284
46,250,972
Split columns into MultiIndex with missing columns in pandas
<p>This is similar to the problem I asked <a href="https://stackoverflow.com/questions/46247302/pandas-split-columns-into-multilevel">here</a>. However, I found out that the data I am working is not always consistent. For, example say :</p> <pre><code>import pandas as pd df = pd.DataFrame(pd.DataFrame([[1,2,3,4],[5,6,7,8],[9,10,11,12]],columns=["X_a","Y_c","X_b","Y_a"])) X_a Y_c X_b Y_a 0 1 2 3 4 1 5 6 7 8 2 9 10 11 12 </code></pre> <p>Now you can see that <code>X</code> does not have corresponding <code>c</code> column and <code>Y</code> does not have corresponding <code>b</code> column. Now when I want to create the multi-level index, I want the dataframe to look like this:</p> <pre><code> X Y a b c a b c 0 1 3 -1 4 -1 2 1 5 7 -1 8 -1 6 2 9 11 -1 12 -1 10 </code></pre> <p>So as you can see, I want the split in such a way that all upper level columns should have the same lower level columns. Since, the dataset is positve, I am thinking of filling the missing columns with -1, although I am open for suggestions on this. The closest thing I found to my problem was <a href="https://stackoverflow.com/questions/28097222/pandas-merge-two-dataframes-with-different-columns">this answer</a>. However, I cannot make it to somehow work with MultiLevel Index like in my previous question. Any help is appreciated.</p>
<p>Create a <code>MultiIndex</code> and set <code>df.columns</code>.</p> <pre><code>idx = df.columns.str.split('_', expand=True) idx MultiIndex(levels=[['X', 'Y'], ['a', 'b', 'c']], labels=[[0, 1, 0, 1], [0, 2, 1, 0]]) df.columns = idx </code></pre> <p>Now, with the existing <code>MultiIndex</code>, create a new index and use that to <code>reindex</code> the original. </p> <pre><code>idx = pd.MultiIndex.from_product([idx.levels[0], idx.levels[1]]) idx MultiIndex(levels=[['X', 'Y'], ['a', 'b', 'c']], labels=[[0, 0, 0, 1, 1, 1], [0, 1, 2, 0, 1, 2]]) df.reindex(columns=idx, fill_value=-1) X Y a b c a b c 0 1 3 -1 4 -1 2 1 5 7 -1 8 -1 6 2 9 11 -1 12 -1 10 </code></pre>
python|pandas|dataframe|multi-index
20
285
58,290,318
Create new Python DataFrame column based on conditions of multiple other columns
<p>I'm trying to create a new DataFrame column (Column C) based on the inputs of two other columns. The two criteria I have is if either "Column A is > 0" OR "Column B contains the string "Apple",* then Column C should have the value of "Yes" otherwise it should have the value of "No"</p> <p>*Bonus points if answer is not case-sensitive (that is, it'll pick up the "apple" in "Pineapple" as well as in "Apple Juice"</p> <p>Data might look like (and what Column C should result in)</p> <pre><code>Column_A Column_B Column_C 23 Orange Juice Yes 2 Banana Smoothie Yes 8 Pineapple Juice Yes 0 Pineapple Smoothie Yes 0 Apple Juice Yes 0 Lemonade No 34 Coconut Water Yes </code></pre> <p>I've tried several things, including: </p> <pre><code>df['Keep6']= np.where((df['Column_A'] &gt;0) | (df['Column_B'].find('Apple')&gt;0) , 'Yes','No') </code></pre> <p>But get the error message: <code>"AttributeError: 'Series' object has no attribute 'find'"</code></p>
<p>Use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.contains.html" rel="nofollow noreferrer">Series.str.contains</a> with <code>case=False</code> to <strong>not case-sensitive</strong>:</p> <pre><code>df['Column_C']= np.where((df['Column_A']&gt;0) | (df['Column_B'].str.contains('apple', case=False)) ,'Yes','No') print(df) </code></pre> <hr> <pre><code> Column_A Column_B Column_C 0 23 Orange_Juice Yes 1 2 Banana_Smoothie Yes 2 8 Pineapple_Juice Yes 3 0 Pineapple_Smoothie Yes 4 0 Apple_Juice Yes 5 0 Lemonade No 6 34 Coconut_Water Yes </code></pre>
python|pandas
1
286
68,997,462
Cannot import name Label from pandas._typing
<p>I faced this issue while was trying to export a dataframe to a csv file. I cannot find any similar issue online on this issue. Any help would be highly appreciated.</p> <p>I am using pandas 1.3 with python 3.7.1.</p> <pre class="lang-py prettyprint-override"><code> ImportError Traceback (most recent call last) &lt;timed exec&gt; in &lt;module&gt; /tmp/ipykernel_30888/2657956500.py in image_viewer(chunk_num, img_dir, chunk_size, zoom) 89 df = df[~df.img_name.duplicated(keep='last')] 90 ---&gt; 91 df.to_csv(output_name, index=False) 92 93 return df ~/.local/share/virtualenvs/labelvalidation-cagv8R9l/lib/python3.7/site-packages/pandas/core/generic.py in to_csv(self, path_or_buf, sep, na_rep, float_format, columns, header, index, index_label, mode, encoding, compression, quoting, quotechar, line_terminator, chunksize, date_format, doublequote, escapechar, decimal, errors, storage_options) 3480 self._check_setitem_copy(stacklevel=5, t=&quot;referent&quot;) 3481 -&gt; 3482 if clear: 3483 self._clear_item_cache() 3484 ~/.local/share/virtualenvs/labelvalidation-cagv8R9l/lib/python3.7/site-packages/pandas/io/formats/format.py in to_csv(self, path_or_buf, encoding, sep, columns, index_label, mode, compression, quoting, quotechar, line_terminator, chunksize, date_format, doublequote, escapechar, errors, storage_options) 1076 quotechar=quotechar, 1077 date_format=date_format, -&gt; 1078 doublequote=doublequote, 1079 escapechar=escapechar, 1080 storage_options=storage_options, ~/.local/share/virtualenvs/labelvalidation-cagv8R9l/lib/python3.7/site-packages/pandas/io/formats/csvs.py in &lt;module&gt; 10 11 from pandas._libs import writers as libwriters ---&gt; 12 from pandas._typing import ( 13 CompressionOptions, 14 FilePathOrBuffer, ImportError: cannot import name 'Label' from 'pandas._typing' </code></pre>
<p>I downgraded to pandas 0.20.3 and the issue is now gone!</p>
pandas|dataframe
0
287
69,257,364
TensorBoard ValueError: Expected scalar shape, saw shape: (1,)
<p>When I use the callback function TensorBoard as the following:</p> <pre><code>skip_training = False tensorboad_cb = TensorBoard('logs') def train_model(model, callbacks_list): ''' Input: Model and callback list, Return: Model with best-checkpoint weights. ''' ## TYPE YOUR CODE for task 10 here: #history = model.fit(X_tr, y_tr, batch_size=4096, epochs=20, verbose=1, validation_data=(X_va, y_va), callbacks=[callbacks_list]) if not skip_training: history = model.fit(X_tr, y_tr, batch_size=4096, epochs=1, verbose=1, callbacks=[tensorboad_cb]) model.save(checkpoint_name) else: model = load_model(checkpoint_name) return model model = train_model(model, callbacks_list) </code></pre> <p>I get this error:</p> <blockquote> <p>287/287 [==============================] - 180s 626ms/step - loss: 0.1103 - f1_score: 0.6010 - acc: 0.9565 --------------------------------------------------------------------------- ValueError Traceback (most recent call last) in () 17 return model 18 ---&gt; 19 model = train_model(model, callbacks_list) 20 21 #history = model.fit(train_data, epochs=15, validation_data=valid_data, validation_steps=50, callbacks=[checkpoint_callback])</p> <p>9 frames /usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/check_ops.py in assert_scalar(tensor, name, message) 2282 if context.executing_eagerly(): 2283 raise ValueError('%sExpected scalar shape, saw shape: %s.' -&gt; 2284 % (message or '', shape,)) 2285 else: 2286 raise ValueError('%sExpected scalar shape for %s, saw shape: %s.'</p> <p>ValueError: Expected scalar shape, saw shape: (1,).</p> </blockquote>
<p>I had the same issue on tensorflow 2.6.2, but on tensorflow 2.4.1 it works well.</p> <p>Finally, I fixed it by commenting out the code:</p> <p><a href="https://i.stack.imgur.com/5go6N.png" rel="nofollow noreferrer">comment tensorboard code</a></p>
python|tensorflow|tensorboard
1
288
44,520,581
How can I create a model which is starts with a single convolution layer and then its output given to two different convolution layers
<p>Can I generate a Model in keras which is not sequencial, i.e can I design a model with two train of cascaded convolution layer but the starting input is a common convolution output.</p>
<p>Yes! You just need to use the <a href="https://keras.io/getting-started/functional-api-guide/#multi-input-and-multi-output-models" rel="nofollow noreferrer">Functional API</a>.</p> <p>e.g.</p> <pre><code>main_input = Input(shape=input_shape)
conv_1 = Conv1D(32, 3)(main_input)
conv_2 = Conv1D(64, 3)(conv_1)
conv_3 = Conv1D(64, 3)(conv_1)
</code></pre>
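<p>A slightly fuller sketch showing how the shared stem and the two branches end up in one <code>Model</code>; the input shape, filter counts and activations below are placeholders, not values from the question:</p> <pre><code>from tensorflow.keras.layers import Input, Conv1D
from tensorflow.keras.models import Model

input_shape = (128, 1)                                  # placeholder shape

main_input = Input(shape=input_shape)
stem = Conv1D(32, 3, activation='relu')(main_input)     # the common convolution

# two cascaded branches that both start from the stem's output
branch_a = Conv1D(64, 3, activation='relu')(stem)
branch_b = Conv1D(64, 3, activation='relu')(stem)

model = Model(inputs=main_input, outputs=[branch_a, branch_b])
model.summary()
</code></pre>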
python|tensorflow|keras|keras-layer|keras-2
1
289
44,787,423
Fill zero values for combinations of unique multi-index values after groupby
<p>To better explain by problem better lets pretend i have a shop with 3 unique customers and my dataframe contains every purchase of my customers with weekday, name and paid price.</p> <pre><code> name price weekday 0 Paul 18.44 0 1 Micky 0.70 0 2 Sarah 0.59 0 3 Sarah 0.27 1 4 Paul 3.45 2 5 Sarah 14.03 2 6 Paul 17.21 3 7 Micky 5.35 3 8 Sarah 0.49 4 9 Micky 17.00 4 10 Paul 2.62 4 11 Micky 17.61 5 12 Micky 10.63 6 </code></pre> <p>The information i would like to get is the average price per unique customer per weekday. What i often do in similar situations is to group by several columns with sum and then take the average of a subset of the columns.</p> <pre><code>df = df.groupby(['name','weekday']).sum() price name weekday Micky 0 0.70 3 5.35 4 17.00 5 17.61 6 10.63 Paul 0 18.44 2 3.45 3 17.21 4 2.62 Sarah 0 0.59 1 0.27 2 14.03 4 0.49 df = df.groupby(['weekday']).mean() price weekday 0 6.576667 1 0.270000 2 8.740000 3 11.280000 4 6.703333 5 17.610000 6 10.630000 </code></pre> <p>Of course this only works if all my unique customers would have at least one purchase per day. Is there an elegant way to get a zero value for all combinations between unique index values that have no sum after the first groupby?</p> <p>My solutions has been so far to either to reindex on a multi index i created from the unique values of the grouped columns or the combination of unstack-fillna-stack but both solutions do not really satisfy me.</p> <p>Appreciate your help!</p>
<p>IIUC, let's use <code>unstack</code> and <code>fillna</code> then <code>stack</code>:</p> <pre><code>df_out = df.groupby(['name','weekday']).sum().unstack().fillna(0).stack() </code></pre> <p>Output:</p> <pre><code> price name weekday Micky 0 0.70 1 0.00 2 0.00 3 5.35 4 17.00 5 17.61 6 10.63 Paul 0 18.44 1 0.00 2 3.45 3 17.21 4 2.62 5 0.00 6 0.00 Sarah 0 0.59 1 0.27 2 14.03 3 0.00 4 0.49 5 0.00 6 0.00 </code></pre> <p>And,</p> <pre><code>df_out.groupby('weekday').mean() </code></pre> <p>Output:</p> <pre><code> price weekday 0 6.576667 1 0.090000 2 5.826667 3 7.520000 4 6.703333 5 5.870000 6 3.543333 </code></pre>
python|pandas
2
290
44,569,021
Removing NaN from dictionary inside dictionary (Dict changes size during runtime error)
<p>using to_dict() I come up with the following dictionary. I need to drop all nan values. This approach doesn't work because it changes size during iteration. Is there another way to accomplish this?</p> <pre><code>{'k': {'a': nan, 'b': 1.0, 'c': 1.0}, 'u': {'a': 1.0, 'b': nan, 'c': 1.0}, 'y': {'a': nan, 'b': 1.0, 'c': nan}} In [196]: for listing, rate in master_dict.items(): for key in rate: if pd.isnull(master_dict[listing][key]) == True: del master_dict[listing][key] --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) &lt;ipython-input-200-8859eb717bb9&gt; in &lt;module&gt;() 1 for listing, rate in master_dict.items(): ----&gt; 2 for key in rate: 3 if pd.isnull(master_dict[listing][key]) == True: 4 del master_dict[listing][key] 5 RuntimeError: dictionary changed size during iteration </code></pre>
<p>You can use double <code>dict comprehension</code> with filtering with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.notnull.html" rel="noreferrer"><code>pandas.notnull</code></a> or <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.isnull.html" rel="noreferrer"><code>pandas.isnull</code></a>:</p> <pre><code>y = {k1:{k:v for k,v in v1.items() if pd.notnull(v)} for k1, v1 in d.items()} print (y) {'k': {'c': 1.0, 'b': 1.0}, 'u': {'c': 1.0, 'a': 1.0}, 'y': {'b': 1.0}} </code></pre> <p>Similar solution:</p> <pre><code>y = {k1:{k:v for k,v in v1.items() if not pd.isnull(v)} for k1, v1 in d.items()} print (y) {'k': {'c': 1.0, 'b': 1.0}, 'u': {'c': 1.0, 'a': 1.0}, 'y': {'b': 1.0}} </code></pre>
python|pandas|dictionary
5
291
60,896,045
Pandas Shift Rows and Backfill (Time-Series Alignment)
<p>I have time-series customer data with running totals that look like this:</p> <pre><code> week1 | week2 | week3 | week4 | week5 user1 20 40 40 50 50 user2 0 10 20 30 40 user3 0 0 0 10 10 </code></pre> <p>I am looking for spending trends, so I want to shift all my rows to start at week one and backfill with their last value, resulting in:</p> <pre><code> week1 | week2 | week3 | week4 | week5 user1 20 40 40 50 50 user2 10 20 30 40 40 user3 10 10 10 10 10 </code></pre> <p>Any help would be amazing!</p>
<p>You can do this quite compactly as:</p> <pre><code>df.iloc[:, 1:] = df.iloc[:, 1:]. \ apply(lambda row: row.shift(-np.argmax(row &gt; 0)), axis=1). \ ffill(axis=1) </code></pre> <p>but there is a lot going on in that 1 statement</p> <p><code>iloc[:, 1:]</code> selects all rows, and all but the first column (since we are not interested in touching the user column. My answer <em>assumes</em> that the user is a column, if the user is an index instead, then you can remove both the occurrences of <code>[:, 1:]</code> in this answer.</p> <p><code>apply(&lt;function&gt;, axis=1)</code> applies the provided function to each <em>row</em></p> <p><code>np.argmax</code> <em>[as I used here]</em> finds the first index in an array that meets a condition. in this case the first position with value > 0</p> <p><code>row.shift(-np.argmax(row &gt; 0))</code> shifts the row backwards dynamically, based on the position of the first greater-than-0-value.</p> <p><code>ffill</code> forward fills null values with the last non-null value.</p>
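<p>Applied to the data from the question (with the users kept as a regular column), the statement produces the expected result:</p> <pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame({'user': ['user1', 'user2', 'user3'],
                   'week1': [20, 0, 0],
                   'week2': [40, 10, 0],
                   'week3': [40, 20, 0],
                   'week4': [50, 30, 10],
                   'week5': [50, 40, 10]})

df.iloc[:, 1:] = df.iloc[:, 1:]. \
    apply(lambda row: row.shift(-np.argmax(row &gt; 0)), axis=1). \
    ffill(axis=1)
print(df)
</code></pre>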
python|pandas
2
292
61,007,840
fill dataframe column with list of tuples
<p>I have dataframe like below:</p> <pre><code>miles uid 12 235 13 234 14 233 15 236 </code></pre> <p>a list <strong>list1</strong> like below:</p> <pre><code>[(39.14973, -77.20692), (33.27569, -86.35877), (42.55214, -83.18532), (41.3278, -95.96396)] </code></pre> <p>The output dataframe I want</p> <pre><code>miles uid lat-long lat long 12 235 (39.14973, -77.20692) 39.14973 -77.20692 13 234 (33.27569, -86.35877) 33.27569 -86.35877 </code></pre> <p>How can I achieve this</p>
<p>You can use:</p> <pre><code>df['lat-long'] = my_list
df['lat'] = [e[0] for e in my_list]
df['long'] = [e[1] for e in my_list]
</code></pre> <p>Output:</p> <p><a href="https://i.stack.imgur.com/rzYxx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rzYxx.png" alt="enter image description here"></a></p>
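<p>If you prefer not to build the two list comprehensions by hand, the tuples can also be unpacked in one step; this is just an equivalent alternative:</p> <pre><code>import pandas as pd

my_list = [(39.14973, -77.20692), (33.27569, -86.35877),
           (42.55214, -83.18532), (41.3278, -95.96396)]
df = pd.DataFrame({'miles': [12, 13, 14, 15], 'uid': [235, 234, 233, 236]})

df['lat-long'] = my_list
df['lat'], df['long'] = zip(*my_list)
print(df)
</code></pre>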
python|pandas
2
293
71,619,322
Iterate over function and concatinate results
<p>So let's say I have a function x which I want to iterate over a list of 3 What I wat to do is to iterate over this list and concatinate the results. Like this:</p> <pre><code>number1 = func(1) number2 = func(2) number3 = func(3) </code></pre> <p>And then</p> <pre><code>Results = pd.concat([number1, number2, number3], axis = 1) </code></pre> <p>I tried it like this</p> <pre><code>numbers = list(range(1:3) for i in numbers: Results = pd.concat([func(i)], axis = 1) </code></pre> <p>it didn't work...</p> <p>Anybody have an idea?</p>
<p>You could use list-comprehension and loops like the other answers suggested, or, if <code>func</code> takes only one parameter, you could use <code>map</code>:</p> <pre><code>df = pd.concat(map(func, range(1, 4)), axis=1) </code></pre> <p>Example output:</p> <pre><code>&gt;&gt;&gt; def func(x): ... return pd.DataFrame({f'col{x}':[x, x, x]},) &gt;&gt;&gt; df = pd.concat(map(func, range(1, 4)), axis=1) &gt;&gt;&gt; df col1 col2 col3 0 1 2 3 1 1 2 3 2 1 2 3 </code></pre>
python|pandas|loops|concatenation
1
294
71,653,696
Counting values greater than 0 in a given area (specific Rows * Columns) - Python, Excel, Pandas
<p>Based on the following data:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Participant</th> <th>Condition</th> <th>RT</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>1</td> <td>0.10</td> </tr> <tr> <td>1</td> <td>1</td> <td></td> </tr> <tr> <td>1</td> <td>2</td> <td>0.48</td> </tr> <tr> <td>2</td> <td>1</td> <td>1.2</td> </tr> <tr> <td>2</td> <td>2</td> <td></td> </tr> <tr> <td>2</td> <td>2</td> <td>0.58</td> </tr> </tbody> </table> </div> <p>What is the appropriate code to count the values which are greater than 0 based on the term:</p> <blockquote> <p>Participant == 1 and Condition == 1</p> </blockquote> <p>The answer Should be: <strong>N = 1</strong> noting that an empty slot is not considered</p> <p>Looking forward to your reply with gratitude, Avishai</p>
<p>Use a boolean mask and sum it:</p> <pre><code>N = sum((df['Participant'] == 1) &amp; (df['Condition'] == 1) &amp; (df['RT'].notna())) print(N) # Output 1 </code></pre> <p>Details:</p> <pre><code>m1 = df['Participant'] == 1 m2 = df['Condition'] == 1 m3 = df['RT'].notna() df[['m1', 'm2', 'm3']] = pd.concat([m1, m2, m3], axis=1) print(df) # Output Participant Condition RT m1 m2 m3 0 1 1 0.10 True True True # All True, N = 1 1 1 1 NaN True True False 2 1 2 0.48 True False True 3 2 1 1.20 False True True 4 2 2 NaN False False False 5 2 2 0.58 False False True </code></pre>
python|excel|pandas|conditional-statements|counting
1
295
42,471,183
specificy gpu devices failed when using tensorflow c++ api
<p>I trained my tf model in python:</p> <pre><code> with sv.managed_session(master='') as sess: with tf.device("/gpu:1"):#my systerm has 4 nvidia cards </code></pre> <p>and use the command line to abstract the model:</p> <pre><code> freeze_graph.py --clear_devices False </code></pre> <p>and during test phase, I set the device as follow:</p> <pre><code> tensorflow::graph::SetDefaultDevice("/gpu:1", &amp;tensorflow_graph); </code></pre> <p>but someting is wrong:</p> <pre><code> ould not create Tensorflow Graph: Invalid argument: Cannot assign a device to node '.../RNN_backword/while/Enter': Could not satisfy explicit device specification '/gpu:1' because no devices matching that specification are registered in this process; available devices: /job:localhost/replica:0/task:0/cpu:0 </code></pre> <p>so,how can I use gpu i correctly??</p> <p>anyone could help??</p>
<p>Is it possible you're using a version of TensorFlow without GPU support enabled? If you're building a binary you may need to add additional BUILD rules from //tensorflow that enable GPU support. Also ensure you enabled GPU support when running configure.</p> <p><strong>EDIT</strong>: Can you file a bug on TF's github issues with:</p> <p>1) your BUILD rule</p> <p>2) much more of your code so we can see how you're building your model and creating your session</p> <p>3) how you ran configure</p> <p>While this API is not yet marked "public"; we want to see if there's indeed a bug you are running into so we can fix it.</p>
python|c++|tensorflow|gpu
0
296
69,740,806
How could I create a column with matchin values from different datasets with different lengths
<p>I want to create a new column in the dataset in which a ZipCode is assigned to a specific Region.</p> <p>There are in total 5 Regions. Every Region consists of an <code>x</code> amount of ZipCodes. I would like to use the two different datasets to create a new column.</p> <p>I tried some codes already, however, I failed because the series are not identically labeled. How should I tackle this problem?</p> <p>I have two datasets, one of them has <strong>1518 rows</strong> x <strong>3 columns</strong> and the other one has<br /> <strong>46603 rows</strong> x <strong>3 columns</strong>.</p> <img src="https://i.stack.imgur.com/AeYb6.png"> <p>As you can see in the picture:</p> <ul> <li><p><code>df1</code> is the first dataset with the <code>Postcode</code> and <code>Regio</code> columns, which are the ZipCodes assigned to the corresponding <code>Regio</code>.</p> </li> <li><p><code>df2</code> is the second dataset where the <code>Regio</code> column is missing as you can see. I would like to add a new column into the df2 dataset which contains the corresponding <code>Regio</code>.</p> </li> </ul> <p>I hope someone could help me out.</p> <p>Kind regards.</p>
<p>I believe you need to map the zip codes in dataframe 2 to the region column of the first dataframe, assuming Postcode and ZipCode are the same thing.</p> <p>First create a dictionary from df1 and then translate the zip code values through that dictionary, assigning the result to a new column:</p> <pre><code>zip_dict = dict(zip(df1.Postcode, df1.Regio))
df2['Regio'] = df2.ZipCode.replace(zip_dict)
</code></pre>
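<p>A small hypothetical example of the idea (the column names are assumptions, since the real headers are only visible in the screenshot); <code>map</code> is used here so that zip codes missing from df1 come out as <code>NaN</code> instead of being left as raw codes:</p> <pre><code>import pandas as pd

df1 = pd.DataFrame({'Postcode': [1011, 1012, 2511],
                    'Regio': ['Region A', 'Region A', 'Region B']})
df2 = pd.DataFrame({'ZipCode': [1012, 2511, 9999],
                    'SomeValue': [10, 20, 30]})

zip_dict = dict(zip(df1.Postcode, df1.Regio))
df2['Regio'] = df2['ZipCode'].map(zip_dict)   # unmatched codes become NaN
print(df2)
</code></pre>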
python|pandas|numpy|dataset|matching
0
297
69,694,584
Python SQL Query Execution
<p>I trying to run a SQL query to perform a lookup between table and add column and update the result in new table in SQL and then pass the new table in pandas dataframe.</p> <p>But when i execute i get the following error:</p> <p>&quot;</p> <pre><code>File &quot;C:\Users\Sundar_ars\Desktop\Code\SQL_DB_Extract_1.py&quot;, line 27, in &lt;module&gt; df1 = pd.read_sql(Sql_Query,conn) File &quot;C:\Users\Sundar_ars\AppData\Local\Programs\Python\Python39\lib\site-packages\pandas\io\sql.py&quot;, line 602, in read_sql return pandas_sql.read_query( File &quot;C:\Users\Sundar_ars\AppData\Local\Programs\Python\Python39\lib\site-packages\pandas\io\sql.py&quot;, line 2117, in read_query columns = [col_desc[0] for col_desc in cursor.description] TypeError: 'NoneType' object is not iterable&quot; </code></pre> <p>Below is the Code:</p> <pre><code>import pypyodbc import pandas as pd SERVER_NAME ='DESKTOP-LBM9IMO\SQLEXPRESS' DATABASE_NAME='721991dc-b510-40b2-ac5b-4d005a3cfd14' conn = pypyodbc.connect(&quot;&quot;&quot; Driver={{SQL Server Native Client 11.0}}; Server={0}; Database={1}; Trusted_Connection=yes;&quot;&quot;&quot;.format(SERVER_NAME, DATABASE_NAME) ) Sql_Query = &quot;&quot;&quot; SELECT [EventFrequency].[ResultID] ,[EventFrequency].[EventFrequency] ,[EventFrequency].[ImmediateIgnitionFrequency] ,[EventFrequency].[DelayedIgnitionFrequency] ,[Result].[OnshoreCaseID] INTO [%s].[SafetiRisk].[UpdatedEventFreq] FROM [%s].[SafetiRisk].[EventFrequency] LEFT OUTER JOIN [%s].[SafetiRisk].[Result] ON [EventFrequency].[ResultID] = [Result].[ResultID] &quot;&quot;&quot; %(DATABASE_NAME, DATABASE_NAME, DATABASE_NAME) df1 = pd.read_sql(Sql_Query,conn) print(df1) </code></pre> <p>Anyone can guide me what am I doing wrong. Thanks</p>
<p><a href="https://pandas.pydata.org/docs/reference/api/pandas.read_sql_query.html#pandas-read-sql-query" rel="nofollow noreferrer"><code>read_sql()</code></a> assumes that your query returns rows - but your SQL query <em>modifies</em> rows (without returning any).</p> <p>If you want to fetch the content of <code>[SafetiRisk].[UpdatedEventFreq]</code>, you should perform a<code>read_sql('SELECT *', ...)</code> on that table <em>after</em> executing your first query.</p>
python|sql|pandas
0
298
69,816,918
loss: nan when fitting training data in tensorflow keras regression network
<p>I am trying to do replicate a regression network from a book but no matter what I try I only get nan losses during the fitting. I have checked and this might happen because of:</p> <ul> <li>bad input data: my data is clean</li> <li>unscaled input data: I have tried with StandardScaler and MinMaxScaler but no dice</li> <li>unscaled output data: I have also tried scaling between 0 and 1 using the train set, but new instances will fall outside.</li> <li>exploding gradients: this might be the case but even with regularization it still happens</li> <li>learning rate too steep: not even setting it to a low number fixes it</li> <li>unbounded steps: not even clipping fixes it</li> <li>error measurement: changing from mse to mean absolute doesn't work either</li> <li>too big of a batch: reducing the training data to the first 200 entries doesn't work either</li> </ul> <p>What else can be the reason for nans in the loss function?</p> <p>EDIT: This also happens with all example models around the internet</p> <p>I am truly out of ideas.</p> <p>The data looks like this:</p> <pre><code>X_train[:5] Out[4]: array([[-3.89243447e-01, -6.10268198e-01, 7.23982383e+00, 7.68512713e+00, -9.15360303e-01, -4.34319791e-02, 1.69375104e+00, -2.66593858e-01], [-1.00512751e+00, -6.10268198e-01, 5.90241386e-02, 6.22319189e-01, -7.82304360e-01, -6.23993472e-02, -8.17899555e-01, 1.52950349e+00], [ 5.45617265e-01, 5.78632450e-01, -1.56942033e-01, -2.49063893e-01, -5.28447626e-01, -3.67342889e-02, -8.31983577e-01, 7.11281365e-01], [-1.53276576e-01, 1.84679314e+00, -9.75702024e-02, 3.03921163e-01, -5.96726334e-01, -6.73883756e-02, -7.14616727e-01, 6.56400612e-01], [ 1.97163670e+00, -1.56138872e+00, 9.87949430e-01, -3.36887553e-01, -3.42869600e-01, 5.08919289e-03, -6.86448683e-01, 3.12148621e-01]]) X_valid[:5] Out[5]: array([[ 2.06309546e-01, 1.21271280e+00, -7.86614121e-01, 1.36422365e-01, -6.81637034e-01, -1.12999850e-01, -8.78930317e-01, 7.21259683e-01], [ 7.12374210e-01, 1.82332234e-01, 2.24876920e-01, -2.22866905e-02, 1.51713346e-01, -2.62325989e-02, 8.01762978e-01, -1.20954497e+00], [ 5.86851369e+00, 2.61592277e-01, 1.86656568e+00, -9.86220816e-02, 7.11794858e-02, -1.50302387e-02, 9.05045806e-01, -1.38915470e+00], [-1.81402984e-01, -5.54478959e-02, -6.23050382e-02, 3.15382948e-02, -2.41326907e-01, -4.58773896e-02, -8.74235643e-01, 7.86118754e-01], [ 5.02584914e-01, -6.10268198e-01, 8.08807908e-01, 1.22787966e-01, -3.13107087e-01, 4.73927994e-03, 1.14447418e+00, -8.00433903e-01]]) y_train[:5] Out[6]: array([[-0.4648844 ], [-1.26625476], [-0.11064919], [ 0.55441007], [ 1.19863195]]) y_valid[:5] Out[7]: array([[ 2.018235 ], [ 1.25593471], [ 2.54525539], [ 0.04215816], [-0.39716296]]) </code></pre> <p>The code: <code>keras.__version__ 2.4.0</code></p> <pre><code>from sklearn.datasets import fetch_california_housing from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler, MinMaxScaler from tensorflow import keras import numpy as np housing = fetch_california_housing() X_train_full, X_test, y_train_full, y_test = train_test_split(housing.data, housing.target) X_train, X_valid, y_train, y_valid = train_test_split(X_train_full, y_train_full) scaler = StandardScaler() scaler.fit(X_train) X_train = scaler.transform(X_train) X_valid = scaler.transform(X_valid) X_test = scaler.transform(X_test) print(f'X_train:{X_train.shape}, X_valid: { X_valid.shape}, y_train: {y_train.shape}, y_valid:{y_valid.shape}') print(f'X_test: {X_test.shape}, y_test: {y_test.shape}') assert not np.nan in 
X_train assert not np.nan in X_valid scalery=StandardScaler() y_train=scalery.fit_transform(y_train.reshape(len(y_train),1)) y_valid=scalery.transform(y_valid.reshape(len(y_valid),1)) y_test=scalery.transform(y_test.reshape(len(y_test),1)) #initializers: relu:he_uniform, tanh:glorot model = keras.models.Sequential([ keras.layers.Dense(30, activation=&quot;relu&quot;,input_shape=X_train.shape[1:] , kernel_initializer=&quot;he_uniform&quot; , kernel_regularizer='l1') ,keras.layers.Dense(1) ]) optimizer = keras.optimizers.SGD(lr=0.0001, clipvalue=1) model.compile(loss=keras.losses.MeanSquaredError() , optimizer=optimizer) history = model.fit(X_train[0:200], y_train[0:200] , epochs=5 ,validation_data=(X_valid[0:20], y_valid[0:20])) </code></pre> <p>Output:</p> <pre><code>X_train:(11610, 8), X_valid: (3870, 8), y_train: (11610,), y_valid:(3870,) X_test: (5160, 8), y_test: (5160,) Epoch 1/5 7/7 [==============================] - 0s 24ms/step - loss: nan - val_loss: nan Epoch 2/5 7/7 [==============================] - 0s 4ms/step - loss: nan - val_loss: nan Epoch 3/5 7/7 [==============================] - 0s 4ms/step - loss: nan - val_loss: nan Epoch 4/5 7/7 [==============================] - 0s 5ms/step - loss: nan - val_loss: nan Epoch 5/5 7/7 [==============================] - 0s 4ms/step - loss: nan - val_loss: nan </code></pre> <p>Interesting reads (that haven't helped):</p> <ul> <li><a href="https://stats.stackexchange.com/questions/362461/is-it-better-to-avoid-relu-as-activation-function-if-input-data-has-plenty-of-ne">https://stats.stackexchange.com/questions/362461/is-it-better-to-avoid-relu-as-activation-function-if-input-data-has-plenty-of-ne</a></li> <li><a href="https://discuss.tensorflow.org/t/getting-nan-for-loss/4826" rel="nofollow noreferrer">https://discuss.tensorflow.org/t/getting-nan-for-loss/4826</a></li> <li><a href="https://stackoverflow.com/questions/54011173/what-is-the-default-weight-initializer-in-keras">What is the default weight initializer in Keras?</a></li> <li><a href="https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/" rel="nofollow noreferrer">https://machinelearningmastery.com/how-to-improve-neural-network-stability-and-modeling-performance-with-data-scaling/</a></li> <li><a href="https://keras.io/api/optimizers/sgd/" rel="nofollow noreferrer">https://keras.io/api/optimizers/sgd/</a></li> <li><a href="https://keras.io/api/layers/regularizers/" rel="nofollow noreferrer">https://keras.io/api/layers/regularizers/</a></li> <li><a href="https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MinMaxScaler.html" rel="nofollow noreferrer">https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MinMaxScaler.html</a></li> </ul>
<p>I have found the answer to my own question:</p> <p>As it turns out, Tensorflow doesn't work as of now in python 3.10. After downgrading my python version to 3.8 everything started working.</p>
tensorflow|machine-learning|keras|nan
0
299
50,480,517
How to take difference of several Timestamp series in a dataframe in Pandas?
<p>I want to obtain the timedelta interval between several timestamp columns in a dataframe. Also, several entries are NaN. </p> <p>Original DF:</p> <pre><code> 0 1 2 3 4 5 0 date1 date2 NaN NaN NaN NaN 1 date3 date4 date5 date6 date7 date8 </code></pre> <p>Desired Output:</p> <pre><code> 0 1 2 3 4 0 date2-date1 NaN NaN NaN NaN 1 date4-date3 date5-date4 date6-date5 date7-date6 date8-date7 </code></pre>
<p>I think you can use if consecutive <code>NaN</code>s to end of rows:</p> <pre><code>df = pd.DataFrame([['2015-01-02','2015-01-03', np.nan, np.nan], ['2015-01-02','2015-01-05','2015-01-07','2015-01-12']]) print (df) 0 1 2 3 0 2015-01-02 2015-01-03 NaN NaN 1 2015-01-02 2015-01-05 2015-01-07 2015-01-12 df = df.apply(pd.to_datetime).ffill(axis=1).diff(axis=1) print (df) 0 1 2 3 0 NaT 1 days 0 days 0 days 1 NaT 3 days 2 days 5 days </code></pre> <p><strong>Details</strong>:</p> <p>First convert all columns to datetimes:</p> <pre><code>print (df.apply(pd.to_datetime)) 0 1 2 3 0 2015-01-02 2015-01-03 NaT NaT 1 2015-01-02 2015-01-05 2015-01-07 2015-01-12 </code></pre> <p>Replace <code>NaN</code>s by forward filling last value per rows:</p> <pre><code>print (df.apply(pd.to_datetime).ffill(axis=1)) 0 1 2 3 0 2015-01-02 2015-01-03 2015-01-03 2015-01-03 1 2015-01-02 2015-01-05 2015-01-07 2015-01-12 </code></pre> <p>Get difference by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.diff.html" rel="nofollow noreferrer"><code>diff</code></a>: </p> <pre><code>print (df.apply(pd.to_datetime).ffill(axis=1).diff(axis=1)) 0 1 2 3 0 NaT 1 days 0 days 0 days 1 NaT 3 days 2 days 5 days </code></pre>
python|pandas|time-series|timedelta
1