Unnamed: 0 (int64) | id (int64) | title (string) | question (string) | answer (string) | tags (string) | score (int64)
---|---|---|---|---|---|---|
700 | 49,622,836 | Search for a keyword in a user's tweets | <p>Whenever I run the code below, it gives me the most recent 10 tweets from <code>@Spongebob</code>, instead of giving me the tweets that include "<code>Bikini Bottom</code>" from the last 10 tweets. How do I make it conditional to the keyword?</p>
<pre><code>user = api.get_user('SpongeBob')
public_tweets = api.user_timeline("Bikini Bottom", count=10, screen_name = "SpongeBob")
for tweet in public_tweets:
    print(tweet.text)
</code></pre> | <p>You need to use the <a href="https://developer.twitter.com/en/docs/tweets/search/api-reference/get-search-tweets" rel="nofollow noreferrer">Twitter Search API</a>, with the correct <a href="https://developer.twitter.com/en/docs/tweets/search/guides/standard-operators" rel="nofollow noreferrer">search operators</a>.</p>
<p>In this case, you want to search for the string "<code>bikini bottom from:spongebob</code>"</p>
<p>With Tweepy, this will be:</p>
<p><code>public_tweets = api.search("bikini bottom from:spongebob")</code></p> | pandas|api|twitter|tweepy | 1 |
701 | 49,776,841 | How to split a dataframe column into two other columns in pandas? | <p>I am trying to split the column named "variable" into two other columns, "Type" and "Parameter".</p>
<pre><code> BatchNumber PhaseNumber SiteID variable Values
0 4552694035 0020B 2 min_tempC 27.0
1 4552694035 OverAll 2 max_tempF 24.0
</code></pre>
<p>I tried to use the code below:</p>
<pre><code>weatherData = weatherData['variable'].str.split('_', 1)
</code></pre>
<p>But I am not getting the expected result. The expected result is as below.</p>
<pre><code> BatchNumber PhaseNumber SiteID variable Values Type Parameter
0 4552694035 0020B 2 min_tempC 27.0 min tempC
1 4552694035 OverAll 2 max_tempF 24.0 max tempF
</code></pre>
<p>Does anybody know how to get it?</p> | <p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.pop.html" rel="nofollow noreferrer"><code>DataFrame.pop</code></a> to extract the column, combined with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.split.html" rel="nofollow noreferrer"><code>split</code></a> and the parameter <code>expand=True</code> to return a <code>DataFrame</code>:</p>
<pre><code>weatherData[['Type','Parameter']]=weatherData.pop('variable').str.split('_', 1, expand=True)
print (weatherData)
BatchNumber PhaseNumber SiteID Values Type Parameter
0 4552694035 0020B 2 27.0 min tempC
1 4552694035 OverAll 2 24.0 max tempF
</code></pre>
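<p>Note: on newer pandas versions (1.5 and later - an assumption about your environment), arguments to <code>str.split</code> other than the pattern should be passed by keyword, so the same call would be written as:</p>
<pre><code>weatherData[['Type','Parameter']] = weatherData.pop('variable').str.split('_', n=1, expand=True)
</code></pre>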
<p>If you also want to keep the original column, remove <code>pop</code>:</p>
<pre><code>weatherData[['Type','Parameter']] = weatherData['variable'].str.split('_', 1, expand=True)
print (weatherData)
BatchNumber PhaseNumber SiteID variable Values Type Parameter
0 4552694035 0020B 2 min_tempC 27.0 min tempC
1 4552694035 OverAll 2 max_tempF 24.0 max tempF
</code></pre> | python|pandas|dataframe | 2 |
702 | 67,223,048 | From a specific row in df_a, count its occurrences in the past year in df_b | <p>I have two dataframes as below, and I want to return how many Successes (Yes) a specific person has in the year prior to his/her specific date, i.e. each entry in <code>to_check</code> defines the range in <code>history</code>.</p>
<p>For example, in <code>to_check</code>, Mike 20200602, I want to know how many Success (Yes) in Mike's history (1 year before, until 20200602).</p>
<p><a href="https://i.stack.imgur.com/nWCxp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nWCxp.png" alt="enter image description here" /></a></p>
<p>By using the "to_check" as a list, I came up with a clumsy way:</p>
<pre><code>import pandas as pd
import datetime
import numpy as np
from io import StringIO
import time
from datetime import datetime, date, time, timedelta
csvfile = StringIO("""
Name Check
Mike 20200602
David 20210415
Kate 20201109""")
csvfile_1 = StringIO("""
Name History Success
David 20180312 Yes
David 20180811 Yes
David 20191223 Yes
David 20210311 Yes
Kate 20180906 Yes
Kate 20180912 Yes
Kate 20191204 Yes
Kate 20200505 Yes
Mike 20180912 Yes
Mike 20190312 Yes
Mike 20190806 Yes
Mike 20191204 Yes""")
df_check = pd.read_csv(csvfile, sep = ' ', engine='python')
df_history = pd.read_csv(csvfile_1, sep = ' ', engine='python')
df_history['Date'] = pd.to_datetime(df_history['History'], format='%Y%m%d')
to_check = ["Mike 20200602","David 20210415","Kate 20201109"]
for t in to_check:
    name, d = t.split(" ")
    date_obj = datetime.strptime(d, '%Y%m%d')
    delta = timedelta(days = 365)
    day_before = date_obj - delta
    m1 = df_history['Name'] == name
    m2 = df_history['Date'] >= day_before
    df_history['OP'] = np.where(m1 & m2, "fit", '')
    how_many = df_history['OP'].value_counts().tolist()[1]
    print (t, how_many)
</code></pre>
<p>Output:</p>
<pre><code>Mike 20200602 2
David 20210415 1
Kate 20201109 2
</code></pre>
<p>What's a better and smarter way to achieve it? Thank you.</p> | <p>Use <code>merge</code> and <code>query</code>, but I would suggest leaving the dates as numbers for an easy offset:</p>
<pre><code># both `Check` and `History` are numbers, not dates
(df_check.merge(df_history, on='Name', how='left')
.query('History<=Check<History+10000')
.groupby('Name').agg({'History':'first', 'Success':'size'})
)
</code></pre>
<p>Output:</p>
<pre><code> History Success
Name
David 20210311 1
Kate 20191204 2
Mike 20190806 2
</code></pre> | python|pandas|dataframe | 3 |
703 | 67,327,268 | Split a Pandas column with lists of tuples into separate columns | <p>I have data in a pandas dataframe and I'm trying to separate and extract data out of a specific column <code>col</code>. The values in <code>col</code> are all lists of various sizes that store 4-value tuples (previous 4 key-value dictionaries). These values are always in the same relative order for the tuple.</p>
<p>For each of those tuples, I'd like to have a separate row in the final dataframe as well as having the respective value from the tuple stored in a new column.</p>
<p>The DataFrame <code>df</code> looks like this:</p>
<pre><code>ID col
A [(123, 456, 111, False), (124, 456, 111, true), (125, 456, 111, False)]
B []
C [(123, 555, 333, True)]
</code></pre>
<p>I need to split <code>col</code> into four columns but also lengthen the dataframe for each record so each tuple has its own row in <code>df2</code>. DataFrame <code>df2</code> should look like this:</p>
<pre><code>ID col1 col2 col3 col4
A 123 456 111 False
A 124 456 111 True
A 125 456 111 False
B None None None None
C 123 555 333 True
</code></pre>
<p>I have some sort of workaround loop-based code that seems to get the job done but I'd like to find a better and more efficient way that I can run on a huge data set. Perhaps using vectorization or <code>NumPy</code> if possible. Here's what I have so far:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'ID': ['A', 'B', 'C'],
'col': [[('123', '456', '111', False),
('124', '456', '111', True),
('125', '456', '111', False)],
[],
[('123', '555', '333', True)]]
})
final_rows = []
for index, row in df.iterrows():
    if not row.col: # if list is empty
        final_rows.append(row.ID)
    for tup in row.col:
        new_row = [row.ID]
        vals = list(tup)
        new_row.extend(vals)
        final_rows.append(new_row)
df2 = pd.DataFrame(final_rows, columns=['ID', 'col1', 'col2', 'col3', 'col4'])
</code></pre> | <p>Here is another solution, you can try out using <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.explode.html#pandas-dataframe-explode" rel="nofollow noreferrer"><code>explode</code></a> + <a href="https://pandas.pydata.org/docs/reference/api/pandas.concat.html#pandas.concat" rel="nofollow noreferrer"><code>concat</code></a></p>
<pre><code>df_ = df.explode('col').reset_index(drop=True)
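# explode gives each tuple its own row; an empty list (ID 'B') becomes a single NaN row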
pd.concat(
[df_[['ID']], pd.DataFrame(df_['col'].tolist()).add_prefix('col')], axis=1
)
</code></pre>
<hr />
<pre><code> ID col0 col1 col2 col3
0 A 123 456 111 False
1 A 124 456 111 True
2 A 125 456 111 False
3 B NaN None None None
4 C 123 555 333 True
</code></pre> | python|pandas|dataframe|numpy | 3 |
704 | 67,588,650 | Pandas Grouper "Cumulative" sum() | <p>I'm trying to calculate the cumulative total for the next 4 weeks.</p>
<p>Here is an example of my data frame</p>
<pre><code>d = {'account': [10, 10, 10, 10, 10, 10, 10, 10],
'volume': [25, 60, 40, 100, 50, 100, 40, 50]}
df = pd.DataFrame(d)
df['week_starting'] = pd.date_range('05/02/2021',
periods=8,
freq='W')
</code></pre>
<pre><code>df['volume_next_4_weeks'] = [225, 250, 290, 290, 240, 190, 90, 50]
df['volume_next_4_weeks_cumulative'] = ['(25+60+40+100)', '(60+40+100+50)', '(40+100+50+100)', '(100+50+100+40)', '(50+100+40+50)', '(100+40+50)', '(40+50)', '(50)']
df.head(10)
</code></pre>
<p><a href="https://i.stack.imgur.com/8uDgL.png" rel="nofollow noreferrer">dataframe_table_view</a></p>
<p>I would like to find a way to calculate the cumulative amount with pd.Grouper freq = 4W.</p> | <p>This should work:</p>
<pre><code>df['volume_next_4_weeks'] = [sum(df['volume'][i:i+4]) for i in range(len(df))]
</code></pre>
<p>For the other column showing the addition as <code>string</code>, I have stored the values in a list using the same logic above but not applying sum and then joining the list elements as <code>string</code>:</p>
<pre><code>df['volume_next_4_weeks_cumulative'] = [df['volume'][i:i+4].to_list() for i in range(len(df))]
df['volume_next_4_weeks_cumulative'] = df['volume_next_4_weeks_cumulative'].apply(lambda row: ' + '.join(str(x) for x in row))
</code></pre>
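<p>As an aside, the numeric forward-looking sum can also be vectorized with a reversed rolling window (a sketch, assuming the frame is already sorted by <code>week_starting</code>):</p>
<pre><code># reverse, take a backward-looking window of 4, then reverse again -> forward-looking sum
df['volume_next_4_weeks'] = df['volume'][::-1].rolling(4, min_periods=1).sum()[::-1]
</code></pre>
<p>With several accounts this can be applied per group, e.g. via <code>df.groupby('account')['volume'].transform(lambda s: s[::-1].rolling(4, min_periods=1).sum()[::-1])</code>.</p>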
<p>Now, as you mentioned, you have multiple accounts and want to do this separately for each of them, so create a custom function and then use <code>groupby</code> and <code>apply</code> to create the columns:</p>
<pre><code>def create_mov_cols(df):
    df['volume_next_4_weeks'] = [sum(df['volume'][i:i+4]) for i in range(len(df))]
    df['volume_next_4_weeks_cumulative'] = [df['volume'][i:i+4].to_list() for i in range(len(df))]
    df['volume_next_4_weeks_cumulative'] = df['volume_next_4_weeks_cumulative'].apply(lambda row: ' + '.join(str(x) for x in row))
    return df
</code></pre>
<p>Apply the function to the DataFrame:</p>
<pre><code>df = df.groupby(['account']).apply(create_mov_cols)
</code></pre> | python|pandas|numpy|pandas-groupby|cumsum | 0 |
705 | 59,927,915 | Pandas: get unique elements then merge | <p>I think this should be simple, but I'm having difficulty searching for solutions to this problem, perhaps because I don't know the best vocabulary. But to illustrate, say I have three data frames:</p>
<p><code>df1 = df({'id1':['1','2','3'], 'val1':['a','b','c']})</code></p>
<p><code>df2 = df({'id2':['1','2','4'], 'val2':['d','e','f']})</code></p>
<p><code>df3 = df({'id3':['1','5','6'], 'val3':['g','h','i']})</code></p>
<p>What I want to get is:</p>
<pre><code>comb_id val1 val2 val3
1 a d g
2 b e n.d.
3 c n.d. n.d.
4 n.d. f n.d.
5 n.d. n.d. h
6 n.d. n.d. i
</code></pre>
<p>I think it must be an outer merge of some kind but so far I haven't gotten it to work. Anyone know the best way to go about this?</p> | <p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.concat.html" rel="nofollow noreferrer"><code>concat</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.set_index.html" rel="nofollow noreferrer"><code>DataFrame.set_index</code></a> for all <code>DataFrame</code>s:</p>
<pre><code>df = pd.concat([df1.set_index('id1'),
df2.set_index('id2'),
df3.set_index('id3')], axis=1, sort=True)
print (df)
val1 val2 val3
1 a d g
2 b e NaN
3 c NaN NaN
4 NaN f NaN
5 NaN NaN h
6 NaN NaN i
</code></pre>
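<p>For comparison, the outer merge you suspected gives the same result (a sketch using <code>functools.reduce</code>; the renames only align the differing id column names):</p>
<pre><code>from functools import reduce

dfs = [df1.rename(columns={'id1': 'comb_id'}),
       df2.rename(columns={'id2': 'comb_id'}),
       df3.rename(columns={'id3': 'comb_id'})]
df = (reduce(lambda left, right: pd.merge(left, right, on='comb_id', how='outer'), dfs)
        .set_index('comb_id')
        .sort_index())
</code></pre>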
<p>If you need to replace the missing values, add <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.fillna.html" rel="nofollow noreferrer"><code>DataFrame.fillna</code></a>:</p>
<pre><code>df = pd.concat([df1.set_index('id1'),
df2.set_index('id2'),
df3.set_index('id3')], axis=1, sort=True).fillna('n.d.')
print (df)
val1 val2 val3
1 a d g
2 b e n.d.
3 c n.d. n.d.
4 n.d. f n.d.
5 n.d. n.d. h
6 n.d. n.d. i
</code></pre> | python|pandas|dataframe|merge | 4 |
706 | 60,210,750 | How can I solve this problem nested renamer is not supported | <pre><code>SpecificationError Traceback (most recent call last)
<ipython-input-42-d850d85f8342> in <module>
----> 1 train_label=extract_feature(train,train_label)
<ipython-input-33-23ab8dbf7d96> in extract_feature(df, train)
1 def extract_feature(df,train):
----> 2 t=groupy_feature(df,'ship','x',['max','min','mean','std','median','std','skew','sum'])
3 train=pd.merge(train,t,on='ship',how='left')
4 t=groupy_feature(df,'ship','y',['max','min','mean','std','median','std','skew','sum'])
5 train=pd.merge(train,t,on='ship',how='left')
<ipython-input-32-63d47754fe81> in groupy_feature(df, key, target, aggs)
4 agg_dict[f'{target}_{agg}']=agg
5 print(agg_dict)
----> 6 t=df.groupby(key)[target].agg(agg_dict).reset_index()
7 return t
~\AppData\Roaming\Python\Python37\site-packages\pandas\core\groupby\generic.py in aggregate(self, func, *args, **kwargs)
251 # but not the class list / tuple itself.
252 func = _maybe_mangle_lambdas(func)
--> 253 ret = self._aggregate_multiple_funcs(func)
254 if relabeling:
255 ret.columns = columns
~\AppData\Roaming\Python\Python37\site-packages\pandas\core\groupby\generic.py in _aggregate_multiple_funcs(self, arg)
292 # GH 15931
293 if isinstance(self._selected_obj, Series):
--> 294 raise SpecificationError("nested renamer is not supported")
295
296 columns = list(arg.keys())
**SpecificationError: nested renamer is not supported**
</code></pre> | <p>I see two times the term 'std' in </p>
<p>t=groupy_feature(df,'ship','x',['max','min','mean','std','median','std','skew','sum'])</p> | python|pandas | 0 |
707 | 65,163,818 | Transpose part of dataframe Python | <p>I have a dataset as below:</p>
<pre class="lang-py prettyprint-override"><code>>>>df = pd.DataFrame(
[
["site1", "2020-12-05T15:50:00", "0", "0"],
["site1", "2020-12-05T15:55:00", "0.5", "0"],
["site2", "2020-12-05T15:50:00", "0.5", "0"],
["site2", "2020-12-05T15:55:00", "1", "0"],
],
columns=["code", "site_time", "r1", "r2"],
)
>>>df
code site_time r1 r2
0 site1 2020-12-05T15:50:00 0 0
1 site1 2020-12-05T15:55:00 0.5 0
2 site2 2020-12-05T15:50:00 0.5 0
3 site2 2020-12-05T15:55:00 1 0
</code></pre>
<p>Then I would like to transpose it to the table as below:</p>
<pre><code>code site_time trace value
site1 2020-12-05T15:50:00 r1 0
site1 2020-12-05T15:50:00 r2 0
site1 2020-12-05T15:55:00 r1 0.5
site1 2020-12-05T15:55:00 r2 0
site2 2020-12-05T15:50:00 r1 0.5
site2 2020-12-05T15:50:00 r2 0
site2 2020-12-05T15:55:00 r1 1
site2 2020-12-05T15:55:00 r2 0
</code></pre>
<p>Could I ask how i accomplish this?</p> | <p>use melt:</p>
<pre><code>df.melt(id_vars=['code','site_time']).rename(columns={'variable':'trace'}).sort_values(by=['code','trace',])
</code></pre>
<p>desired result:</p>
<pre><code> code site_time trace value
0 site1 2020-12-05T15:50:00 r1 0
1 site1 2020-12-05T15:55:00 r1 0.5
4 site1 2020-12-05T15:50:00 r2 0
5 site1 2020-12-05T15:55:00 r2 0
2 site2 2020-12-05T15:50:00 r1 0.5
3 site2 2020-12-05T15:55:00 r1 1
6 site2 2020-12-05T15:50:00 r2 0
7 site2 2020-12-05T15:55:00 r2 0
</code></pre> | python|pandas|dataframe|transpose | 2 |
708 | 65,280,524 | Matplotlib: Why does interpolated points fall outside the plotted line? | <p>I have recreated a common geoscientific plot using Matplotlib. It shows the grain size distribution of a soil sample and is used for soil classification.</p>
<p>Basically, a soil sample is placed in a stack of sieves, which is then shaked for a certain amount of time, and the remaining weight of each grain fraction is then plotted onto the diagram (see attached image below).</p>
<p><a href="https://i.stack.imgur.com/NWlo3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NWlo3.png" alt="enter image description here" /></a></p>
<p>An important use for this type of diagram, is to determine two parameters known as D60 and D10, which is the grain size at 60 and 10 percent passing, respectively (see orange dots in diagram). I have interpolated these values with a function using <code>np.interp</code>, but oddly enough these points fall outside of the line plotted by Matplotlib. Can anyone give me a hint where I'm going wrong with this? They should intersect the line where y = 10 and y = 60 exactly.</p>
<p>The data looks like this:</p>
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.ticker import FormatStrFormatter
d = {
'x': [0.063, 0.125, 0.250, 0.500, 1.000, 2.000, 4.000, 8.000],
'y': [5.9, 26.0, 59.0, 87.0, 95.0, 97.0, 97.0, 100.0]
}
df = pd.DataFrame(d)
df
x y
0 0.063 5.9
1 0.125 26.0
2 0.250 59.0
3 0.500 87.0
4 1.000 95.0
5 2.000 97.0
6 4.000 97.0
7 8.000 100.0
</code></pre>
<p>The function for interpolating values looks like this (I have tried a similar approach using Scipy with the same results):</p>
<pre><code>def interpolate(xval, df, xcol, ycol):
    return np.interp([xval], df[ycol], df[xcol])
</code></pre>
<p>The code for creating the plot itself looks like this:</p>
<pre><code>fig, ax = plt.subplots(figsize=(10,5))
ax.scatter(df['x'], df['y']) #Show datapoints
# Beginning of table
cell_text = [
['Clay','FSi','MSi','CSi', 'FSa', 'MSa', 'CSa', 'FGr', 'MGr', 'CGr', 'Co']
]
table = ax.table(
cellText=cell_text,
colWidths=[0.06, 0.1, 0.1, 0.1,0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.04],
cellLoc = 'center',
rowLoc = 'center',
loc='top')
h = table.get_celld()[(0,0)].get_height()
w = table.get_celld()[(0,0)].get_width()
header = [table.add_cell(-1,pos, 0.1, h, loc="center", facecolor="none") for pos in [i for i in range(1,10)]]
table.auto_set_font_size(False)
table.set_fontsize(12)
table.scale(1, 1.5)
for i in [0,3,6]:
    header[i].visible_edges = 'TBL'
for i in [1,4,7]:
    header[i].visible_edges = 'TB'
for i in [2,5,8]:
    header[i].visible_edges = 'TBR'
header[1].get_text().set_text('Silt')
header[4].get_text().set_text('Sand')
header[7].get_text().set_text('Gravel')
# End of table
plt.grid(b=True, which='major', color='k', linestyle='--', alpha=0.5)
plt.grid(b=True, which='minor', color='k', linestyle='--', alpha=0.5)
ax.set_yticks(np.arange(0, 110, 10))
ax.set_xscale('log')
ax.xaxis.set_major_formatter(FormatStrFormatter('%g'))
ax.axis(xmin=0.001,xmax=100, ymin=0, ymax=100)
#Interpolate D10 and D60
x2 = np.concatenate((interpolate(10, df, 'x', 'y'), interpolate(60, df, 'x', 'y')))
y2 = np.array([10,60])
#Plot D10 and D60
ax.scatter(x2, y2)
#Plot the line
ax.plot(df['x'], df['y'])
ax.set_xlabel('Grain size (mm)'), ax.set_ylabel('Percent passing (%)')
</code></pre>
<p>Can anyone help me figure out why the orange dots fall slightly outside of the lines, what am I doing wrong? Thanks!</p> | <p>The problem is that you are using linear interpolation to find the points, while the plot has straight lines on a log scale. This can be accomplished via interpolation in log space:</p>
<pre class="lang-py prettyprint-override"><code>def interpolate(yval, df, xcol, ycol):
    return np.exp(np.interp([yval], df[ycol], np.log(df[xcol])))
</code></pre>
<p>If you furthermore write <code>np.array(yval)</code> instead of <code>[yval]</code>, vector <code>x2</code> can be simplified. Providing a <code>z-order</code> of <code>3</code> draws the new dots on top of the line. Optionally some text could be added:</p>
<pre class="lang-py prettyprint-override"><code>def interpolate(yval, df, xcol, ycol):
    return np.exp(np.interp(np.array(yval), df[ycol], np.log(df[xcol])))
y2 = [10, 60]
x2 = interpolate(y2, df, 'x', 'y')
ax.scatter(x2, y2, zorder=3, color='crimson')
for x, y in zip(x2, y2):
    ax.text(x, y, f' D{y}={x:.4f}', color='crimson', ha='left', va='top', size=12)
</code></pre>
<p><a href="https://i.stack.imgur.com/dEskB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dEskB.png" alt="example plot" /></a></p> | python|pandas|numpy|matplotlib | 3 |
709 | 65,157,499 | How do i specify to a model what to take as input of a custom loss function? | <p>I'm having issues in understanding/implementing a custom loss function in my model.</p>
<p>I have a keras model which is composed by 3 sub models as you can see here in the model architecture,</p>
<p><a href="https://i.stack.imgur.com/l1gJs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/l1gJs.png" alt="model architecture" /></a></p>
<p>Now, I'd like to use the outputs of <em>model</em> and <em>model_2</em> in my custom loss function.
I understand that in the loss function definition I can write:</p>
<pre class="lang-py prettyprint-override"><code> def custom_mse(y_true, y_pred):
    *calculate stuff*
    return loss
</code></pre>
<p>But how do I tell <strong>the model</strong> to take its 2 outputs as inputs of the loss function?</p>
<p>Maybe, and i hope so, it's super trivial but I didn't find anything online, if you could help me it'd be fantastic.</p>
<p>Thanks in advance</p>
<p><strong>Context:</strong>
<em>model</em> and <em>model_2</em> are the same pretrained model, a binary classifier, which predicts the interaction between 2 inputs (of image-like type).
<em>model_1</em> is a generative model which will edit one of the inputs.</p>
<p>Therefore:</p>
<pre class="lang-py prettyprint-override"><code> complete_model = Model(inputs=[input_1, input_2], outputs=[out_model, out_model2])
opt = *an optimizer*
complete_model.compile(loss=custom_mse,
??????,
optimizer = opt,
metrics=['whatever'])
</code></pre>
<p>The main goal is to compare the prediction with the edited input against the one with the un-edited input, therefore the model will outputs the 2 interactions, which i need to use in the loss function.</p>
<p><strong>EDIT:</strong>
Thank you Andrey for the solution,</p>
<p>Now, however, I can't manage to implement the 2 loss functions together, namely the one with add_loss(func) and a classic binary_crossentropy in model.compile(loss='binary_crossentropy', ...).</p>
<p>Can I maybe add an add_loss specifying model_2.output and the label? If yes do you know how?</p>
<p>They work by themselves but not together; when I try to run the code, they raise:</p>
<p><code>ValueError: Shapes must be equal rank, but are 0 and 4 From merging shape 0 with other shapes. for '{{node AddN}} = AddN[N=2, T=DT_FLOAT](binary_crossentropy/weighted_loss/value, complete_model/generator/tf_op_layer_SquaredDifference_3/SquaredDifference_3)' with input shapes: [], [?,500,400,1].</code></p> | <p>You can add loss with <code>compile()</code> only for standard loss function signature (y_true, y_pred). You can not use it because your signature is something like (y_true, (y_pred1, y_pred2)). Use <code>add_loss()</code> API instead. See here: <a href="https://keras.io/api/losses/" rel="nofollow noreferrer">https://keras.io/api/losses/</a></p> | python|tensorflow|keras|deep-learning|loss-function | 0 |
710 | 65,380,768 | TensorflowJS: how to reset input/output shapes for pretrained model in TFJS | <p>For the <a href="https://github.com/HasnainRaz/Fast-SRGAN/blob/master/models/generator.h5" rel="nofollow noreferrer">pre-trained model in python</a> we can reset input/output shapes:</p>
<pre><code>from tensorflow import keras
# Load the model
model = keras.models.load_model('models/generator.h5')
# Define arbitrary spatial dims, and 3 channels.
inputs = keras.Input((None, None, 3))
# Trace out the graph using the input:
outputs = model(inputs)
# Override the model:
model = keras.models.Model(inputs, outputs)
</code></pre>
<p><a href="https://github.com/HasnainRaz/Fast-SRGAN" rel="nofollow noreferrer">The source code</a></p>
<p>I'm trying to do the same in TFJS:</p>
<pre><code> // Load the model
this.model = await tf.loadLayersModel('/assets/fast_srgan/model.json');
// Define arbitrary spatial dims, and 3 channels.
const inputs = tf.layers.input({shape: [null, null, 3]});
// Trace out the graph using the input.
const outputs = this.model.apply(inputs) as tf.SymbolicTensor;
// Override the model.
this.model = tf.model({inputs: inputs, outputs: outputs});
</code></pre>
<p>TFJS does not support one of the layers in the model:</p>
<pre><code> ...
u = keras.layers.Conv2D(filters, kernel_size=3, strides=1, padding='same')(layer_input)
u = tf.nn.depth_to_space(u, 2) # <- TFJS does not support this layer
u = keras.layers.PReLU(shared_axes=[1, 2])(u)
...
</code></pre>
<p>I wrote my own:</p>
<pre><code>import * as tf from '@tensorflow/tfjs';
export class DepthToSpace extends tf.layers.Layer {
  constructor() {
    super({});
  }

  computeOutputShape(shape: Array<number>) {
    // I think the issue is here
    // because the error occurs during initialization of the model
    return [null, ...shape.slice(1, 3).map(x => x * 2), 32];
  }

  call(input): tf.Tensor {
    const result = tf.depthToSpace(input[0], 2);
    return result;
  }

  static get className() {
    return 'TensorFlowOpLayer';
  }
}
</code></pre>
<p>Using the model:</p>
<pre><code> tf.tidy(() => {
let img = tf.browser.fromPixels(this.imgLr.nativeElement, 3);
img = tf.div(img, 255);
img = tf.expandDims(img, 0);
let sr = this.model.predict(img) as tf.Tensor;
sr = tf.mul(tf.div(tf.add(sr, 1), 2), 255).arraySync()[0];
tf.browser.toPixels(sr as tf.Tensor3D, this.imgSrCanvas.nativeElement);
});
</code></pre>
<p>but I get the error:</p>
<blockquote>
<p>Error: Input 0 is incompatible with layer p_re_lu: expected axis 1 of input shape to have value 96 but got shape 1,128,128,32.</p>
</blockquote>
<p>The pre-trained model was trained with 96x96 pixels images. If I use the 96x96 image, it works. But if I try to use other sizes (for example 128x128), It doesn't work. In python, we can easily reset input/output shapes. Why it doesn't work in JS?</p> | <p>To define a new model from the layers of the previous model, you need to use <code>tf.model</code></p>
<pre><code>this.model = tf.model({inputs: inputs, outputs: outputs});
</code></pre> | tensorflow|tensorflow.js|tensorflowjs-converter | 1 |
711 | 65,431,015 | Object Detection Few-Shot training with TensorflowLite | <p>I am trying to create a mobile app that uses object detection to detect a specific type of object. To do this I am starting with the <a href="https://github.com/tensorflow/examples/tree/master/lite/examples/object_detection/android" rel="nofollow noreferrer">Tensorflow object detection example Android app</a>, which uses TF2 and <a href="https://tfhub.dev/tensorflow/lite-model/ssd_mobilenet_v1/1/metadata/2?lite-format=tflite" rel="nofollow noreferrer">ssd_mobilenet_v1</a>.</p>
<p>I'd like to try <a href="https://colab.research.google.com/github/tensorflow/models/blob/master/research/object_detection/colab_tutorials/eager_few_shot_od_training_tflite.ipynb" rel="nofollow noreferrer">Few-Shot training</a> (Colab link) so I started by replacing the example app's SSD Mobilenet v1 download with the Colab's output file <code>model.tflite</code>, however this causes the the app to crash with following error:</p>
<pre><code>java.lang.IllegalStateException: This model does not contain associated files, and is not a Zip file.
at org.tensorflow.lite.support.metadata.MetadataExtractor.assertZipFile(MetadataExtractor.java:313)
at org.tensorflow.lite.support.metadata.MetadataExtractor.getAssociatedFile(MetadataExtractor.java:164)
at org.tensorflow.lite.examples.detection.tflite.TFLiteObjectDetectionAPIModel.create(TFLiteObjectDetectionAPIModel.java:126)
at org.tensorflow.lite.examples.detection.DetectorActivity.onPreviewSizeChosen(DetectorActivity.java:99)
</code></pre>
<p>I realize the Colab uses <a href="http://download.tensorflow.org/models/object_detection/tf2/20200711/ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8.tar.gz" rel="nofollow noreferrer">ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8.tar.gz</a> - does this mean there are changes needed in the app code - or is there something more fundamentally wrong with my approach?</p>
<p>Update: I also tried the Lite output of the Colab <a href="https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/tf2_image_retraining.ipynb" rel="nofollow noreferrer">tf2_image_retraining</a> and got the same error.</p> | <p>The fix apparently was <a href="https://github.com/tensorflow/examples/compare/master...cachvico:darren/fix-od" rel="nofollow noreferrer">https://github.com/tensorflow/examples/compare/master...cachvico:darren/fix-od</a> - .tflite files can now be zip files including the labels, but the example app doesn't work with the old format.</p>
<p>This doesn't throw an error when using the Few-Shot Colab output, although I'm not getting results yet - pointing the app at pictures of rubber ducks does not work yet.</p> | tensorflow2.0|object-detection|tensorflow-lite | 0 |
712 | 65,161,630 | Getting the index of a timestamp element in a pandas data frame | <p>I have a pandas data frame that I created as follows:</p>
<pre><code>dates = pd.date_range('12-01-2020','12-10-2020')
my_df = pd.DataFrame(dates, columns = ['Date'])
</code></pre>
<p>So this gives</p>
<pre><code> Date
0 2020-12-01
1 2020-12-02
2 2020-12-03
3 2020-12-04
4 2020-12-05
5 2020-12-06
6 2020-12-07
7 2020-12-08
8 2020-12-09
9 2020-12-10
</code></pre>
<p>My question is very elementary: What is the correct function to use for returning the index of a given date? I have tried <code>my_df['Date'].index('2020-12-05')</code>, expecting to get 4, but instead I got the following error: 'RangeIndex' object is not callable. I also tried</p>
<pre><code>d = pd.TimeStamp('12-05-2020' + '00:00:00')
my_df['Date'].index(d)
</code></pre>
<p>but I got the same error...I'm confused because I've used .index successfully in similar situations, such as on lists with integers. Any help would be appreciated.</p> | <p>You can also use <code>query</code> without having to reset the index</p>
<pre><code>my_df.query("Date == '2020-12-05'").index.values[0]
</code></pre>
<p>or if you want to assign the value to search:</p>
<pre><code>d = pd.to_datetime('12-05-2020')
my_df.query("Date == @d").index.values[0]
</code></pre>
<p>or without <code>loc</code> or <code>query</code></p>
<pre><code>my_df[my_df.Date == '12-05-2020'].index.values[0]
</code></pre>
<p>And your answer:</p>
<pre><code>4
</code></pre> | python|pandas|dataframe|indexing|timestamp | 1 |
713 | 65,324,352 | Pandas df.equals() returning False on identical dataframes? | <p>Let <code>df_1</code> and <code>df_2</code> be:</p>
<pre><code>In [1]: import pandas as pd
...: df_1 = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6]})
...: df_2 = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6]})
In [2]: df_1
Out[2]:
a b
0 1 4
1 2 5
2 3 6
</code></pre>
<p>We add a row <code>r</code> to <code>df_1</code>:</p>
<pre><code>In [3]: r = pd.DataFrame({'a': ['x'], 'b': ['y']})
...: df_1 = df_1.append(r, ignore_index=True)
In [4]: df_1
Out[4]:
a b
0 1 4
1 2 5
2 3 6
3 x y
</code></pre>
<p>We now remove the added row from <code>df_1</code> and get the original <code>df_1</code> back again:</p>
<pre><code>In [5]: df_1 = pd.concat([df_1, r]).drop_duplicates(keep=False)
In [6]: df_1
Out[6]:
a b
0 1 4
1 2 5
2 3 6
In [7]: df_2
Out[7]:
a b
0 1 4
1 2 5
2 3 6
</code></pre>
<p>While <code>df_1</code> and <code>df_2</code> are identical, <code>equals()</code> returns <code>False</code>.</p>
<pre><code>In [8]: df_1.equals(df_2)
Out[8]: False
</code></pre>
<p>I did research on SO but could not find a related question.
Am I doing something wrong? How do I get the correct result in this case?
<code>(df_1==df_2).all().all()</code> returns <code>True</code> but is not suitable for the case where <code>df_1</code> and <code>df_2</code> have different lengths.</p> | <p>This again is a subtle one, well done for spotting it.</p>
<pre><code>import pandas as pd
df_1 = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6]})
df_2 = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6]})
r = pd.DataFrame({'a': ['x'], 'b': ['y']})
df_1 = df_1.append(r, ignore_index=True)
df_1 = pd.concat([df_1, r]).drop_duplicates(keep=False)
df_1.equals(df_2)
from pandas.util.testing import assert_frame_equal
assert_frame_equal(df_1,df_2)
</code></pre>
<p>Now we can see the issue as the assert fails.</p>
<pre><code>AssertionError: Attributes of DataFrame.iloc[:, 0] (column name="a") are different
Attribute "dtype" are different
[left]: object
[right]: int64
</code></pre>
<p>As you added strings to integer columns, the integers became objects. This is why <code>equals</code> fails as well.</p> | python|pandas|dataframe|equals|dtype | 10 |
714 | 65,221,982 | How to clear a numpy array? | <p>How can I clear a one dimensional numpy array?</p>
<p>if it was a list, I would do this:</p>
<pre><code>my_list = []
</code></pre>
<p>How to do this with numpy?</p>
<p><strong>edited:</strong></p>
<p>What I mean by clearing is to remove all elements from the same array.
And for a list I had to use <code>my_list.clear()</code>.</p> | <p>What do you mean by 'clear'? The code</p>
<pre><code>my_list = []
</code></pre>
<p>just overwrites whatever was stored as <code>my_list</code> previously, regardless whether it was a list, array or whatever. If that is what you wish to do, you use the exact same syntax if <code>my_list</code> was an array.</p>
<p>If by clear you mean delete all elements in the list, then use</p>
<pre><code>my_list.clear()
</code></pre>
<p>which deletes all the elements in the list and keeps the <code>id</code>:</p>
<pre><code>>>> a = [4]*2
>>> a
[4, 4]
>>> id(a)
140202631135872
>>> a.clear()
>>> a
[]
>>> id(a)
140202631135872
</code></pre>
<p>There exists no equivalent to this for numpy arrays, since it is not possible to change the number of elements in a numpy array.</p>
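<p>If the goal is simply an "emptied" array to refill later, the usual pattern is to rebind the name to a new zero-length array (a sketch - note this creates a new object, so the <code>id</code> changes, unlike <code>list.clear()</code>):</p>
<pre><code>arr = np.empty((0,), dtype=arr.dtype)  # new array with zero elements, same dtype
</code></pre>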
<p>If you wish to simply set all the values in the array to 0:</p>
<pre><code>>>> arr = np.random.randn(3)
>>> arr
array([-1.11074625, -0.12784997, -0.53716969])
>>> id(arr)
140203438053536
>>> arr[:] = 0
>>> arr
array([0., 0., 0.])
>>> id(arr)
140203438053536
</code></pre> | python|numpy | 1 |
715 | 50,125,055 | Merging multiple CSV files and dropping duplicates by field | <p>I need to match data from multiple CSV files.
For example, if I have three CSV files.</p>
<p>input 1 csv</p>
<pre class="lang-none prettyprint-override"><code>PANYNJ LGA WEST 1,available, LGA West GarageFlushing
PANYNJ LGA WEST 4,unavailable,LGA West Garage
iPark - Tesla,unavailable,530 E 80th St
</code></pre>
<p>input 2 csv</p>
<pre class="lang-none prettyprint-override"><code>PANYNJ LGA WEST 4,unavailable,LGA West Garage
PANYNJ LGA WEST 5,available,LGA West Garage
</code></pre>
<p>input 3 csv</p>
<pre class="lang-none prettyprint-override"><code>PANYNJ LGA WEST 5,available,LGA West Garage
imPark - Tesla,unavailable,611 E 83rd St
</code></pre>
<p>The first column is <code>name</code>, the second one is <code>status</code>, and the last one is <code>address</code>. I would like to merge these three documents into one csv file if they have the same name. My desire output file is like</p>
<p>output csv</p>
<pre class="lang-none prettyprint-override"><code>PANYNJ LGA WEST 1,available, LGA West GarageFlushing
PANYNJ LGA WEST 4,unavailable,LGA West Garage
iPark - Tesla,unavailable,530 E 80th St
PANYNJ LGA WEST 5,available,LGA West Garage
imPark - Tesla,unavailable,611 E 83rd St
</code></pre>
<p>I'm trying to fix this with <code>pandas</code> or <code>CSV</code> but I'm unsure how to go about this.</p>
<p>Any help is greatly appreciated!</p> | <p>With <code>pandas</code>, you can use <code>pd.concat</code> followed by <code>pd.drop_duplicates</code>:</p>
<pre><code>import pandas as pd
from io import StringIO
str1 = StringIO("""PANYNJ LGA WEST 1,available, LGA West GarageFlushing
PANYNJ LGA WEST 4,unavailable,LGA West Garage
iPark - Tesla,unavailable,530 E 80th St""")
str2 = StringIO("""PANYNJ LGA WEST 4,unavailable,LGA West Garage
PANYNJ LGA WEST 5,available,LGA West Garage""")
str3 = StringIO("""PANYNJ LGA WEST 5,available,LGA West Garage
imPark - Tesla,unavailable,611 E 83rd St""")
# replace str1, str2, str3 with 'file1.csv', 'file2.csv', 'file3.csv'
df1 = pd.read_csv(str1, header=None)
df2 = pd.read_csv(str2, header=None)
df3 = pd.read_csv(str3, header=None)
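# stack the three frames, then keep only the first occurrence of each name (column 0)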
res = pd.concat([df1, df2, df3], ignore_index=True)\
.drop_duplicates(0)
print(res)
0 1 2
0 PANYNJ LGA WEST 1 available LGA West GarageFlushing
1 PANYNJ LGA WEST 4 unavailable LGA West Garage
2 iPark - Tesla unavailable 530 E 80th St
4 PANYNJ LGA WEST 5 available LGA West Garage
6 imPark - Tesla unavailable 611 E 83rd St
</code></pre> | python|python-3.x|pandas|csv | 1 |
716 | 50,077,712 | Replacing 2D subarray in 3D array if condition is met | <p>I have a matrix that looks like this:</p>
<pre><code>a = np.random.rand(3, 3, 3)
[[[0.04331462, 0.30333583, 0.37462236],
[0.30225757, 0.35859228, 0.57845153],
[0.49995805, 0.3539933, 0.11172398]],
[[0.28983508, 0.31122743, 0.67818926],
[0.42720309, 0.24416101, 0.5469823 ],
[0.22894097, 0.76159389, 0.80416832]],
[[0.25661154, 0.64389696, 0.37555374],
[0.87871659, 0.27806621, 0.3486518 ],
[0.26388296, 0.8993144, 0.7857116 ]]]
</code></pre>
<p>I want to check every block for a value smaller than 0.2. If any value in a block is smaller than 0.2, then the whole block should be set to 0.2. In this case:</p>
<pre><code>[[[0.2 0.2 0.2]
[0.2 0.2 0.2]
[0.2 0.2 0.2]]
[[0.28983508 0.31122743 0.67818926]
[0.42720309 0.24416101 0.5469823 ]
[0.22894097 0.76159389 0.80416832]]
[[0.25661154 0.64389696 0.37555374]
[0.87871659 0.27806621 0.3486518 ]
[0.26388296 0.8993144 0.7857116 ]]]
</code></pre> | <p>Here is a vectorized way to get what you want.<br>
Taking <code>a</code> from your example:</p>
<pre><code>a[(a < 0.2).any(axis=1).any(axis=1)] = 0.2
print(a)
</code></pre>
<p>gives:</p>
<pre><code>array([[[ 0.2 , 0.2 , 0.2 ],
[ 0.2 , 0.2 , 0.2 ],
[ 0.2 , 0.2 , 0.2 ]],
[[ 0.28983508, 0.31122743, 0.67818926],
[ 0.42720309, 0.24416101, 0.5469823 ],
[ 0.22894097, 0.76159389, 0.80416832]],
[[ 0.25661154, 0.64389696, 0.37555374],
[ 0.87871659, 0.27806621, 0.3486518 ],
[ 0.26388296, 0.8993144 , 0.7857116 ]]])
</code></pre>
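<p>Equivalently, the two chained <code>any</code> calls (explained below) can be collapsed into one by passing a tuple of axes, since <code>np.any</code> accepts <code>axis=(1, 2)</code>:</p>
<pre><code>a[(a < 0.2).any(axis=(1, 2))] = 0.2
</code></pre>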
<hr>
<p><strong>Explanation:</strong> </p>
<p>Taking another example where each step will be more clear: </p>
<pre><code>a = np.array([[[0.51442898, 0.90447442, 0.45082496],
[0.59301203, 0.30025497, 0.43517362],
[0.28300437, 0.64143037, 0.73974422]],
[[0.228676 , 0.59093859, 0.14441217],
[0.37169639, 0.57230533, 0.81976775],
[0.95988687, 0.43372407, 0.77616701]],
[[0.03098771, 0.80023031, 0.89061113],
[0.86998351, 0.39619143, 0.16036088],
[0.24938437, 0.79131954, 0.38140462]]])
</code></pre>
<p>Let's see which elements are less than 0.2: </p>
<pre><code>print(a < 0.2)
</code></pre>
<p>gives:</p>
<pre><code>array([[[False, False, False],
[False, False, False],
[False, False, False]],
[[False, False, True],
[False, False, False],
[False, False, False]],
[[ True, False, False],
[False, False, True],
[False, False, False]]])
</code></pre>
<p>From here we would like to get indices of those 2D arrays that have at least one <code>True</code> element: <code>[False, True, True]</code>. We require <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.any.html" rel="nofollow noreferrer"><code>np.any</code></a> for this. Note that I will be using <a href="https://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.ndarray.any.html" rel="nofollow noreferrer"><code>np.ndarray.any</code></a> method chaining here instead of nesting function calls of <code>np.any</code>. <sup>1</sup></p>
<p>Now just using <code>(a < 0.2).any()</code> will give just <code>True</code> because by default it performs logical OR over all dimensions. We have to specify <code>axis</code> parameter. In our case we will be fine with either <code>axis=1</code> or <code>axis=2</code>.<sup>2</sup> </p>
<pre><code>print((a < 0.2).any(axis=1))
</code></pre>
<p>gives<sup>3</sup>:</p>
<pre><code>array([[False, False, False],
[False, False, True],
[ True, False, True]])
</code></pre>
<p>From here we get desired boolean indices by applying another <code>.any()</code> along the rows:</p>
<pre><code>print((a < 0.2).any(axis=1).any(axis=1))
</code></pre>
<p>gives:</p>
<pre><code>array([False, True, True])
</code></pre>
<p>Fianlly, we can simply use this <a href="https://docs.scipy.org/doc/numpy-1.14.0/user/basics.indexing.html#boolean-or-mask-index-arrays" rel="nofollow noreferrer">boolean index array</a> to replace the values of the original array: </p>
<pre><code>a[(a < 0.2).any(axis=1).any(axis=1)] = 0.2
print(a)
</code></pre>
<p>gives:</p>
<pre><code>array([[[0.51442898, 0.90447442, 0.45082496],
[0.59301203, 0.30025497, 0.43517362],
[0.28300437, 0.64143037, 0.73974422]],
[[0.2 , 0.2 , 0.2 ],
[0.2 , 0.2 , 0.2 ],
[0.2 , 0.2 , 0.2 ]],
[[0.2 , 0.2 , 0.2 ],
[0.2 , 0.2 , 0.2 ],
[0.2 , 0.2 , 0.2 ]]])
</code></pre>
<hr>
<p><sup>1</sup>Just compare chaining: </p>
<pre><code>a[(a < 0.2).any(axis=1).any(axis=1)] = 0.2
</code></pre>
<p>with nesting:</p>
<pre><code>a[np.any(np.any(a < 0.2, axis=1), axis=1)] = 0.2
</code></pre>
<p>I think the latter is more confusing.</p>
<p><sup>2</sup>For me this was difficult to comprehend at first. What helped me was to draw an image of a 3x3x3 cube, print results for different axis, and check which axis correspond to which directions. Also, here is an explanation of using axis with <code>np.sum</code> in 3D case: <a href="https://stackoverflow.com/questions/24281263/axis-in-numpy-multidimensional-array">Axis in numpy multidimensional array.
</a></p>
<p><sup>3</sup>One could expect to get <code>[False, True, True]</code> at once which is not the case. For explanation see this: <a href="https://stackoverflow.com/questions/7684501/small-clarification-needed-on-numpy-any-for-matrices">Small clarification needed on numpy.any for matrices
</a></p> | python|numpy|vectorization | 3 |
717 | 46,964,363 | Filtering out outliers in Pandas dataframe with rolling median | <p>I am trying to filter out some outliers from a scatter plot of GPS elevation displacements with dates</p>
<p>I'm trying to use df.rolling to compute a median and standard deviation for each window and then remove the point if it is greater than 3 standard deviations.</p>
<p>However, I can't figure out a way to loop through the column and compare the the median value rolling calculated.</p>
<p>Here is the code I have so far</p>
<pre><code>import pandas as pd
import numpy as np
def median_filter(df, window):
cnt = 0
median = df['b'].rolling(window).median()
std = df['b'].rolling(window).std()
for row in df.b:
#compare each value to its median
df = pd.DataFrame(np.random.randint(0,100,size=(100,2)), columns = ['a', 'b'])
median_filter(df, 10)
</code></pre>
<p>How can I loop through and compare each point and remove it?</p> | <p>Just filter the dataframe</p>
<pre><code>df['median']= df['b'].rolling(window).median()
df['std'] = df['b'].rolling(window).std()
#filter setup
df = df[(df.b <= df['median']+3*df['std']) & (df.b >= df['median']-3*df['std'])]
</code></pre> | pandas|median|outliers|rolling-computation | 17 |
718 | 46,640,945 | Grouping by multiple columns to find duplicate rows pandas | <p>I have a <code>df</code></p>
<pre><code>id val1 val2
1 1.1 2.2
1 1.1 2.2
2 2.1 5.5
3 8.8 6.2
4 1.1 2.2
5 8.8 6.2
</code></pre>
<p>I want to group by <code>val1 and val2</code> and get a similar dataframe containing only the rows which have multiple occurrences of the same <code>val1 and val2</code> combination.</p>
<p>Final <code>df</code>:</p>
<pre><code>id val1 val2
1 1.1 2.2
4 1.1 2.2
3 8.8 6.2
5 8.8 6.2
</code></pre> | <p>You need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.duplicated.html" rel="noreferrer"><code>duplicated</code></a> with parameter <code>subset</code> for specify columns for check with <code>keep=False</code> for all duplicates for mask and filter by <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="noreferrer"><code>boolean indexing</code></a>:</p>
<pre><code>df = df[df.duplicated(subset=['val1','val2'], keep=False)]
print (df)
id val1 val2
0 1 1.1 2.2
1 1 1.1 2.2
3 3 8.8 6.2
4 4 1.1 2.2
5 5 8.8 6.2
</code></pre>
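<p>If you also want the rows ordered by the duplicated combination, as in the expected output, sort afterwards (a sketch):</p>
<pre><code>df = df[df.duplicated(subset=['val1','val2'], keep=False)].sort_values(['val1','val2'])
</code></pre>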
<p>Detail:</p>
<pre><code>print (df.duplicated(subset=['val1','val2'], keep=False))
0 True
1 True
2 False
3 True
4 True
5 True
dtype: bool
</code></pre> | python|pandas|dataframe | 57 |
719 | 67,990,177 | Use pandas to determine how many of the previous rows are required to sum to a threshold | <p>I have a pandas data frame and I would like to create a new column <code>desired_col</code>, which contains the number of previous rows of <code>col1</code> required (together with the current row) for the sum to reach a threshold value.</p>
<p>For example if I choose my threshold value to be 10 and I have</p>
<pre><code>d = {'col1': [10,1,2,6,1,6,4,3,2,3,1,1,4,5,5]}
df = pd.DataFrame(data=d)
display(df)
</code></pre>
<p>What I would like to end up with is</p>
<pre><code>new_d = {'col1': [10,1, 2,6,1,6,4,3,2,3,1,1,4,5,5], 'desired_output':[0,1,2,3,3,2,1,2,3,3,4,4,4,2,1]}
new_df = pd.DataFrame(data=new_d)
display(new_df)
</code></pre> | <p>This code will produce your desired output:</p>
<pre><code>import pandas as pd
d = {'col1': [10,1,2,6,1,6,4,3,2,3,1,1,4,5,5]}
df = pd.DataFrame(data=d)
target = []
for i in range(len(df)):
    element = df.iloc[i].col1
    print(element)
    s = element
    threshold = 10
    j = i
    steps = 0
    while s < threshold:
        j -= 1
        steps += 1
        if j < 0:
            steps = None
            s = threshold + 1
        else:
            s += df.iloc[j].col1
    target.append(steps)
df["desired_output"] = target
</code></pre> | python|pandas|dataframe | 0 |
720 | 61,190,366 | Creating several weight tensors for each object in Multi-Object Tracking (MOT) using TensorFlow | <p>I am using TensorFlow V1.10.0 and developing a Multi-Object Tracker based on MDNet. I need to assign a separate weight matrix for each detected object for the fully connected layers in order to get different embedding for each object during online training. I am using this tf.map_fn in order to generate a higher-order weight tensor (n_objects, flattened layer, hidden_units), </p>
<pre><code> def dense_fc4(n_objects):
    initializer = lambda: tf.contrib.layers.xavier_initializer()(shape=(1024, 512))
    return tf.Variable(initial_value=initializer, name='fc4/kernel',
                       shape=(n_objects.shape[0], 1024, 512))
W4 = tf.map_fn(dense_fc4, samples_flat)
b4 = tf.get_variable('fc4/bias', shape=512, initializer=tf.zeros_initializer())
fc4 = tf.add(tf.matmul(samples_flat, W4), b4)
fc4 = tf.nn.relu(fc4)
</code></pre>
<p>However, during execution, when I run the session for W4 I get the weight matrices, but they all have the same values. Any help?</p>
<p>TIA</p> | <p>Here is a workaround: I was able to generate the multiple kernels outside the graph in a for loop and then give them to the graph:</p>
<pre><code>w6 = []
for n_obj in range(pos_data.shape[0]):
    w6.append(tf.get_variable("fc6/kernel-" + str(n_obj), shape=(512, 2),
                              initializer=tf.contrib.layers.xavier_initializer()))

print("modeling fc6 branches...")
prob, train_op, accuracy, loss, pred, initialize_vars, y, fc6 = build_branches(fc5, w6)

def build_branches(fc5, w6):
    y = tf.placeholder(tf.int64, [None, None])
    b6 = tf.get_variable('fc6/bias', shape=2, initializer=tf.zeros_initializer())
    fc6 = tf.add(tf.matmul(fc5, w6), b6)
    loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y,
                                                                         logits=fc6))
    train_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope="fc6")
    with tf.variable_scope("", reuse=tf.AUTO_REUSE):
        optimizer = tf.train.AdamOptimizer(learning_rate=0.001, name='adam')
        train_op = optimizer.minimize(loss, var_list=train_vars)
    initialize_vars = train_vars
    initialize_vars += [optimizer.get_slot(var, name)
                        for name in optimizer.get_slot_names()
                        for var in train_vars]
    if isinstance(optimizer, tf.train.AdamOptimizer):
        initialize_vars += optimizer._get_beta_accumulators()
    prob = tf.nn.softmax(fc6)
    pred = tf.argmax(prob, 2)
    correct_pred = tf.equal(pred, y)
    accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
    return prob, train_op, accuracy, loss, pred, initialize_vars, y, fc6
</code></pre>
<p><a href="https://i.stack.imgur.com/S3rWO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/S3rWO.png" alt="Graph"></a></p> | tensorflow | 0 |
721 | 61,401,720 | Return a matrix by applying a boolean mask (a boolean matrix of same size) in python | <p>I have generated a square matrix of size 4 and a boolean matrix of same size by:</p>
<pre><code>import numpy as np
A = np.random.randn(4,4)
B = np.full((4,4), True, dtype = bool)
B[[0],:] = False
B[:,[0]] = False
</code></pre>
<p>The following code returns two matrices of size 4: A has all the random numbers, and B has all the boolean values, where the entire first row and column is False</p>
<pre><code>B = [[False, False, False, False],
[False, True, True, True],
[False, True, True, True],
[False, True, True, True]]
</code></pre>
<p>What I want is to apply the boolean matrix B to A, such that I get a 3 by 3 matrix of A where B is True (the elements where B == True).
Is there any logical operator in numpy to perform this operation, or do I have to go through each element of A and B, compare them, and then assign it to a new matrix?</p> | <pre><code>In [214]: A = np.random.randn(4,4)
...: B = np.full((4,4), True, dtype = bool)
...: B[[0],:] = False
...: B[:,[0]] = False
In [215]: A
Out[215]:
array([[-0.80676817, -0.20810386, 1.28448594, -0.52667651],
[ 0.6292733 , -0.05575997, 0.32466482, -0.23495175],
[-0.70896794, -1.60571282, -1.43718839, -0.42032337],
[ 0.01541418, -2.00072652, -1.54197002, 1.2626283 ]])
In [216]: B
Out[216]:
array([[False, False, False, False],
[False, True, True, True],
[False, True, True, True],
[False, True, True, True]])
</code></pre>
<p>Boolean indexing (with matching size array) always produces a 1d array. In this case it did not select any values for <code>A[0,:]</code>:</p>
<pre><code>In [217]: A[B]
Out[217]:
array([-0.05575997, 0.32466482, -0.23495175, -1.60571282, -1.43718839,
-0.42032337, -2.00072652, -1.54197002, 1.2626283 ])
</code></pre>
<p>But because the other 3 rows all have 3 <code>True</code>, reshaping the result does produce a reasonable result:</p>
<pre><code>In [218]: A[B].reshape(3,3)
Out[218]:
array([[-0.05575997, 0.32466482, -0.23495175],
[-1.60571282, -1.43718839, -0.42032337],
[-2.00072652, -1.54197002, 1.2626283 ]])
</code></pre>
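<p>A more general way to pull out the sub-matrix without a manual reshape (a sketch, assuming the <code>True</code> entries form a full rectangular block, as they do here) is <code>np.ix_</code> on the row and column masks:</p>
<pre><code>A[np.ix_(B.any(axis=1), B.any(axis=0))]  # rows and columns that contain at least one True
</code></pre>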
<p>Whether the reshape makes sense depends on the total number of elements, and your own interpretation of the data.</p> | python-3.x|numpy|boolean|boolean-logic | 0 |
722 | 68,837,542 | Dividing by Rows in Pandas | <p>I have a dataframe like this:</p>
<pre><code>df = pd.DataFrame()
df['Stage'] = ['A','B','A','B']
df['Value'] = ['3','0','2','4']
Stage Value
A 3
B 0
A 2
B 4
</code></pre>
<p>I want to be able to transform it into something like this:</p>
<pre><code>df = pd.DataFrame()
df['Stage'] = ['B/A','B/A']
df['Ratio'] = ['0','2']
Stage Ratio
B/A 0
B/A 2
</code></pre>
<p>Is this possible in pandas?</p> | <p>You can <code>unstack</code> and <code>eval</code>:</p>
<pre><code>df['Value'] = df['Value'].astype(int)
df['group'] = df.groupby('Stage').cumcount()
df.set_index(['group', 'Stage'])['Value'].unstack().eval('B/A')
</code></pre>
<p>output:</p>
<pre><code>group
0 0.0
1 2.0
Name: Ratio, dtype: float64
</code></pre> | python|pandas | 1 |
723 | 68,631,781 | pandas Series: Remove and replace NaNs with interpolated data point | <p>I have a pandas Series and I have currently just resampled it using</p>
<pre><code>signal = pd.Series(thick, index = pd.TimedeltaIndex(time_list_thick,unit = 's'))
resampled_signal = signal.resample('1S').mean()
</code></pre>
<p>However, my resampled data contains NaNs which I would like to remove:</p>
<pre><code>00:00:00.415290 451.369402
00:00:01.415290 NaN
00:00:02.415290 451.358724
00:00:03.415290 451.356055
00:00:04.415290 451.350716
00:00:05.415290 451.340039
00:00:06.415290 NaN
00:00:07.415290 451.332031
00:00:08.415290 451.326692
00:00:09.415290 451.318684
00:00:10.415290 451.310675
00:00:11.415290 NaN
00:00:12.415290 451.302667
00:00:13.415290 451.291990
00:00:14.415290 NaN
00:00:15.415290 451.286651
00:00:16.415290 451.278643
00:00:17.415290 451.274639
00:00:18.415290 451.265296
00:00:19.415290 NaN
00:00:20.415290 451.255953
00:00:21.415290 NaN
00:00:22.415290 451.243941
00:00:23.415290 NaN
00:00:24.415290 451.234598
00:00:25.415290 NaN
00:00:26.415290 451.225255
00:00:27.415290 451.219916
00:00:28.415290 451.211908
00:00:29.415290 451.201231
</code></pre>
<p>What I would like to do is replace these NaNs with an interpolated point whose value lies in between the nearest finite data points (for example, line 2 in my data would be around 451.364). Is this possible, and if so how?</p> | <p>You can use <code>df.interpolate()</code> to perform this operation. Additionally, the <code>time</code> method can be used in order to take a time-based index into account.</p>
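<p>Applied directly to the resampled series from the question (which sits on a regular 1-second grid, so the default linear method is enough), that would be a one-liner - a sketch reusing the names from the question:</p>
<pre><code>resampled_signal = signal.resample('1S').mean().interpolate()
</code></pre>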
<p>As follows:</p>
<pre><code>import pandas as pd
import datetime
import numpy as np
todays_date = datetime.datetime.now().date()
index = pd.date_range(todays_date, periods=10, freq='D')
A = np.random.rand(10)
A[1::4] = np.nan
df = pd.DataFrame({'A': A }, index=index)
df['A'] = df['A'].interpolate(method='time')
</code></pre>
<p>Before:</p>
<pre><code> A
2021-08-03 0.360953
2021-08-04 NaN
2021-08-05 0.801508
2021-08-06 0.927827
2021-08-07 0.532153
2021-08-08 NaN
2021-08-09 0.897129
2021-08-10 0.713843
2021-08-11 0.709481
2021-08-12 NaN
</code></pre>
<p>After:</p>
<pre><code> A
2021-08-03 0.360953
2021-08-04 0.581230
2021-08-05 0.801508
2021-08-06 0.927827
2021-08-07 0.532153
2021-08-08 0.714641
2021-08-09 0.897129
2021-08-10 0.713843
2021-08-11 0.709481
2021-08-12 0.709481
</code></pre> | python|pandas|dataframe|interpolation|series | 0 |
724 | 53,155,749 | Replace elements in numpy array avoiding loops | <p>I have a quite large 1d numpy array Xold with given values. These values shall be
replaced according to the rule specified by a 2d numpy array Y:
An example would be</p>
<pre><code>Xold=np.array([0,1,2,3,4])
Y=np.array([[0,0],[1,100],[3,300],[4,400],[2,200]])
</code></pre>
<p>Whenever a value in Xold is identical to a value in Y[:,0], the new value in Xnew should be the corresponding value in Y[:,1]. This is accomplished by two nested for loops:</p>
<pre><code>Xnew=np.zeros(len(Xold))
for i in range(len(Xold)):
    for j in range(len(Y)):
        if Xold[i]==Y[j,0]:
            Xnew[i]=Y[j,1]
</code></pre>
<p>With the given example, this yields <code>Xnew=[0,100,200,300,400]</code>.
However, for large data sets this procedure is quite slow. What is a faster and more elegant way to accomplish this task?</p> | <p><strong>SELECTING THE FASTEST METHOD</strong></p>
<p>Answers to this question provided a nice assortment of ways to replace elements in numpy array. Let's check, which one would be the quickest. </p>
<p><em>TL;DR:</em> Numpy indexing is the winner</p>
<pre><code> def meth1(): # suggested by @Slam
    for old, new in Y:
        Xold[Xold == old] = new

def meth2(): # suggested by myself, convert y_dict = dict(Y) first
    [y_dict[i] if i in y_dict.keys() else i for i in Xold]

def meth3(): # suggested by @Eelco Hoogendoom, import numpy_indexed as npi first
    npi.remap(Xold, keys=Y[:, 0], values=Y[:, 1])

def meth4(): # suggested by @Brad Solomon, import pandas as pd first
    pd.Series(Xold).map(pd.Series(Y[:, 1], index=Y[:, 0])).values

# suggested by @jdehesa. create Xnew = Xold.copy() and index
# idx = np.searchsorted(Xold, Y[:, 0]) first
def meth5():
    Xnew[idx] = Y[:, 1]
</code></pre>
<p><em>Not so surprising results</em></p>
<pre><code> In [39]: timeit.timeit(meth1, number=1000000)
Out[39]: 12.08
In [40]: timeit.timeit(meth2, number=1000000)
Out[40]: 2.87
In [38]: timeit.timeit(meth3, number=1000000)
Out[38]: 55.39
In [12]: timeit.timeit(meth4, number=1000000)
Out[12]: 256.84
In [50]: timeit.timeit(meth5, number=1000000)
Out[50]: 1.12
</code></pre>
<p>So, the good old list comprehension is the second fastest, and the winning approach is numpy indexing combined with <code>searchsorted()</code>. </p> | python|numpy|for-loop|numpy-slicing | 5 |
725 | 52,910,615 | Python: Using a list with TF-IDF | <p>I have the following piece of code that currently compares all the words in the 'Tokens' with each respective document in the 'df'. Is there any way I would be able to compare a predefined list of words with the documents instead of the 'Tokens'. </p>
<pre><code>from sklearn.feature_extraction.text import TfidfVectorizer
tfidf_vectorizer = TfidfVectorizer(norm=None)
list_contents =[]
for index, row in df.iterrows():
    list_contents.append(' '.join(row.Tokens))
# list_contents = df.Content.values
tfidf_matrix = tfidf_vectorizer.fit_transform(list_contents)
df_tfidf = pd.DataFrame(tfidf_matrix.toarray(),columns= [tfidf_vectorizer.get_feature_names()])
df_tfidf.head(10)
</code></pre>
<p>Any help is appreciated. Thank you!</p> | <p>Not sure if I understand you correctly, but if you want to make the Vectorizer consider a fixed list of words, you can use the <code>vocabulary</code> parameter.</p>
<pre><code>my_words = ["foo","bar","baz"]
# set the vocabulary parameter with your list of words
tfidf_vectorizer = TfidfVectorizer(
norm=None,
vocabulary=my_words)
list_contents =[]
for index, row in df.iterrows():
list_contents.append(' '.join(row.Tokens))
# this matrix will have only 3 columns because we have forced
# the vectorizer to use just the words foo bar and baz
# so it'll ignore all other words in the documents.
tfidf_matrix = tfidf_vectorizer.fit_transform(list_contents)
</code></pre> | python|pandas|text|tf-idf|tfidfvectorizer | 1 |
726 | 65,774,348 | how to store title of two urls in an excel file | <pre><code>import bs4
from bs4 import BeautifulSoup
from pandas.core.base import DataError
from pandas.core.frame import DataFrame
import requests
import pandas as pd
from fake_useragent import UserAgent
urls = ['https://www.digikala.com/search/category-mobile', 'https://www.digikala.com/search/category-tablet-ebook-reader']
user_agent = UserAgent()
for url in urls:
data = requests.get(url, headers={"user-agent": user_agent.chrome})
soup = bs4.BeautifulSoup(data.text, "html.parser")
title = soup.find_all("title")
bar_list = []
for b in title:
result = b.text.strip()
bar_list.append(result)
print(bar_list)
ex = pd.DataFrame({"title": bar_list,})
print(ex)
ex.to_excel('sasa.xlsx', index=False)
</code></pre>
<p>I want to get both urls but my code gives me only one <a href="https://i.stack.imgur.com/YhRr6.jpg" rel="nofollow noreferrer">shown in the picture </a></p>
<p>Any other methods are appreciated; I'm new to these libraries.</p> | <p>Your <code>for url in urls</code> loop is indeed iterating over both urls; however, the <code>ex.to_excel('sasa.xlsx', index=False)</code> line will overwrite <code>'sasa.xlsx'</code> on the second loop.</p>
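<p>A rough sketch of the second option recommended below, writing each page's titles to its own sheet with <code>pandas.ExcelWriter</code> (the sheet names here are made up):</p>
<pre><code>with pd.ExcelWriter('sasa.xlsx') as writer:
    for i, url in enumerate(urls):
        data = requests.get(url, headers={"user-agent": user_agent.chrome})
        soup = bs4.BeautifulSoup(data.text, "html.parser")
        titles = [t.text.strip() for t in soup.find_all("title")]
        pd.DataFrame({"title": titles}).to_excel(writer, sheet_name=f"page_{i}", index=False)
</code></pre>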
<p>I would recommend either:</p>
<ul>
<li>Changing the filename on the second loop, or</li>
<li>Writing the results to different sheets of the same Excel file (as sketched above), like <a href="https://datascience.stackexchange.com/questions/46437/how-to-write-multiple-data-frames-in-an-excel-sheet">here</a></li>
</ul> | python-3.x|pandas|python-requests | 1 |
727 | 65,757,175 | Saving image from numpy array gives errors | <p>This function shows the adversarial image and its probability; I only want to download the image.</p>
<pre><code>def visualize(x, x_adv, x_grad, epsilon, clean_pred, adv_pred, clean_prob, adv_prob):
x = x.squeeze(0) #remove batch dimension # B X C H X W ==> C X H X W
x = x.mul(torch.FloatTensor(std).view(3,1,1)).add(torch.FloatTensor(mean).view(3,1,1)).numpy()#reverse of normalization op- "unnormalize"
x = np.transpose( x , (1,2,0)) # C X H X W ==> H X W X C
x = np.clip(x, 0, 1)
x_adv = x_adv.squeeze(0)
x_adv = x_adv.mul(torch.FloatTensor(std).view(3,1,1)).add(torch.FloatTensor(mean).view(3,1,1)).numpy()#reverse of normalization op
x_adv = np.transpose( x_adv , (1,2,0)) # C X H X W ==> H X W X C
x_adv = np.clip(x_adv, 0, 1)
x_grad = x_grad.squeeze(0).numpy()
x_grad = np.transpose(x_grad, (1,2,0))
x_grad = np.clip(x_grad, 0, 1)
figure, ax = plt.subplots(1,3, figsize=(80,80))
ax[0].imshow(x_adv)
im = Image.fromarray(x_adv)
im.save("car.jpeg")
files.download('car.jpeg')
plt.show()
</code></pre>
<p>I am getting this error here</p>
<p>TypeError: Cannot handle this data type: (1, 1, 3), <f4</p> | <p>Try changing this:</p>
<pre><code>im = Image.fromarray(x_adv)
</code></pre>
<p>to this, so that the float array with values in [0, 1] is scaled to 8-bit integers that PIL can handle:</p>
<pre><code>im = Image.fromarray((x_adv * 255).astype(np.uint8))
</code></pre> | python|numpy|matplotlib|numpy-ndarray | 0 |
728 | 65,491,886 | class wise object detection product count in tensorflow 2 | <p>Can we count detected objects in <strong>Tensorflow 2</strong> object detection API class wise?</p>
<p>As I am new to this, I am having a hard time manipulating the output of the object detection model according to my use case described below.</p>
<p>Say you have two classes, tomato and potato, on a supermarket shelf; I would like to count each object class-wise,</p>
<p>for example: potato_count: 5, tomato_count: 3.</p>
<p>For reference, the output I have looks like this, just to give an idea:</p>
<pre><code>{'raw_detection_scores': array([[3.8237274e-03, 3.1729043e-03, 5.1983595e-03, ..., 1.0126382e-02,4.1468740e-03, 3.5721064e-03],
[3.7932396e-03, 1.9723773e-03, 2.3661852e-03, ..., 3.4036636e-03,
9.3266368e-04, 4.3996871e-03],
[3.2063425e-03, 2.9956400e-03, 3.9784312e-03, ..., 5.5939257e-03,
2.3936033e-03, 2.7040839e-03],
...,
[4.1239262e-03, 3.9246678e-04, 4.5044391e-05, ..., 1.2922287e-04,
2.8958917e-04, 2.2355914e-03],
[1.4656782e-03, 4.8859119e-03, 1.4899671e-03, ..., 2.8479993e-03,
2.8250813e-03, 2.1298528e-03],
[1.8135607e-03, 2.2478402e-03, 1.1820495e-03, ..., 9.5197558e-04,
1.3802052e-03, 2.2761822e-03]], dtype=float32), 'detection_anchor_indices': array([44692., 44710., 44728., 44818., 39652., 44674., 39670., 40036.,
40018., 44800., 39634., 39371., 44830., 38090., 44728., 44731.,
44728., 44710., 10078., 39796., 27838., 37604., 16933., 24833.,
39778., 44659., 45058., 38084., 44791., 44710., 44692., 30244.,
5284., 38090., 37604., 38204., 33593., 38192., 37982., 39796.,
44692., 6635., 33118., 24389., 37604., 44910., 33112., 39601.,
16133., 3845., 39918., 48370., 19204., 44740., 39778., 16792.,
6629., 25763., 38150., 48187., 15839., 38180., 23524., 44914.,
40036., 1438., 25763., 38078., 35992., 38012., 39888., 38084.,
44578., 40018., 51075., 38204., 15833., 37976., 40258., 37604.,
48751., 39906., 31684., 16453., 38054., 5140., 42568., 36484.,
38012., 38202., 37946., 14024., 2404., 40002., 5764., 39870.,
48823., 26878., 38198., 39430.], dtype=float32), 'detection_multiclass_scores': array([[1.6726255e-03, 3.2217724e-07, 2.8865278e-02, ..., 2.6032329e-04,
2.0583175e-05, 4.5886636e-04],
[1.0796189e-03, 5.8811463e-07, 3.6984652e-02, ..., 6.6033006e-04,
3.2279208e-05, 4.3705106e-04],
[6.1860681e-04, 2.4921978e-06, 6.0835034e-02, ..., 2.0631850e-03,
5.2474130e-05, 3.7664175e-04],
...,
[2.0163953e-03, 1.0121465e-03, 1.6601086e-03, ..., 5.1327944e-03,
1.6998947e-03, 9.6607208e-04],
[1.2855232e-03, 5.4006279e-03, 1.0573506e-02, ..., 1.3051391e-02,
1.0753423e-02, 1.3659596e-03],
[8.1962347e-04, 9.5169604e-01, 1.5044212e-03, ..., 5.1358938e-03,
8.4767938e-03, 3.2877922e-04]], dtype=float32),
'detection_classes': array([4, 4, 4, 4, 1, 4, 1, 1, 1, 4, 1, 1, 4, 6, 6, 4, 2, 6, 7, 7, 4, 6,
1, 6, 7, 4, 7, 9, 4, 2, 6, 7, 1, 3, 9, 6, 1, 6, 6, 3, 2, 5, 1, 2,
3, 6, 1, 1, 8, 1, 1, 1, 7, 4, 3, 1, 9, 6, 6, 8, 9, 6, 8, 2, 8, 7,
2, 6, 1, 6, 9, 5, 2, 9, 9, 3, 1, 9, 8, 5, 9, 9, 3, 5, 6, 1, 6, 6,
9, 6, 6, 1, 2, 1, 2, 9, 9, 4, 4, 7]), 'detection_boxes': array([[5.92004418e-01, 1.69490814e-01, 7.45701075e-01, 2.46565759e-01],
[5.89631081e-01, 2.46157080e-01, 7.39599228e-01, 3.18454713e-01],
[5.87109149e-01, 3.14503819e-01, 7.36972034e-01, 3.85336846e-01],
[5.87837219e-01, 7.05797434e-01, 7.28340387e-01, 7.74214983e-01],
[6.35630414e-02, 1.69735521e-01, 2.04962432e-01, 2.47269154e-01],
[5.92036664e-01, 9.53008384e-02, 7.46890843e-01, 1.73706189e-01],
[6.85142130e-02, 2.45773658e-01, 2.10277155e-01, 3.21099281e-01],
[9.23785418e-02, 7.77337551e-01, 2.25108251e-01, 8.45668435e-01],
[9.24619362e-02, 7.06092656e-01, 2.26126671e-01, 7.77547657e-01],
[5.85118353e-01, 6.37673438e-01, 7.26098835e-01, 7.05848277e-01],
[5.94619289e-02, 9.38070714e-02, 2.03622460e-01, 1.72308475e-01],
[3.44502553e-02, 7.54546002e-03, 2.08990484e-01, 9.39401984e-02],
[5.87913811e-01, 7.74250209e-01, 7.28712976e-01, 8.34733903e-01],
[9.84132528e-01, 3.18221241e-01, 9.96858120e-01, 3.95583898e-01],
[5.87109149e-01, 3.14503819e-01, 7.36972034e-01, 3.85336846e-01],
[5.89539468e-01, 3.49524260e-01, 7.35065162e-01, 4.21008408e-01],
[5.87109149e-01, 3.14503819e-01, 7.36972034e-01, 3.85336846e-01],
[5.89631081e-01, 2.46157080e-01, 7.39599228e-01, 3.18454713e-01],
[1.87163889e-01, 9.88169909e-01, 3.71130943e-01, 9.98176932e-01],
[9.36717317e-02, 7.77330160e-01, 2.24804163e-01, 8.45728278e-01],
[6.63008153e-01, 9.89469707e-01, 8.10642183e-01, 1.00000000e+00],
[9.70665693e-01, 3.16653520e-01, 9.95440483e-01, 3.85887355e-01],
[3.70503038e-01, 2.54840344e-01, 4.76123840e-01, 3.14984292e-01],
[5.87433934e-01, 7.05650687e-01, 7.27492571e-01, 7.73511648e-01],
[9.28397924e-02, 7.06507027e-01, 2.26004675e-01, 7.77664006e-01],
[5.82323313e-01, 1.54982358e-02, 7.39678025e-01, 1.03125945e-01],
[5.87077260e-01, 7.05565095e-01, 7.27602482e-01, 7.74259925e-01],
[9.83516991e-01, 3.11883837e-01, 9.97174442e-01, 3.89778942e-01],
[5.88727355e-01, 6.20116591e-01, 7.30183959e-01, 6.90428734e-01],
[5.89631081e-01, 2.46157080e-01, 7.39599228e-01, 3.18454713e-01],
[5.92004418e-01, 1.69490814e-01, 7.45701075e-01, 2.46565759e-01],
[7.42125034e-01, 0.00000000e+00, 8.81110668e-01, 8.84985179e-03],
[5.44907227e-02, 0.00000000e+00, 2.06223458e-01, 1.28744319e-02],
[9.84132528e-01, 3.18221241e-01, 9.96858120e-01, 3.95583898e-01],
[9.70665693e-01, 3.16653520e-01, 9.95440483e-01, 3.85887355e-01],
[9.84484971e-01, 5.55115521e-01, 9.97788608e-01, 6.32665694e-01],
[7.99783111e-01, 9.75158930e-01, 9.83929038e-01, 9.97444630e-01],
[9.85278428e-01, 5.28306305e-01, 9.97080624e-01, 6.08543932e-01],
[9.88123000e-01, 9.33963358e-02, 9.99226749e-01, 1.72955215e-01],
[9.36717317e-02, 7.77330160e-01, 2.24804163e-01, 8.45728278e-01],
[5.92004418e-01, 1.69490814e-01, 7.45701075e-01, 2.46565759e-01],
[9.08265784e-02, 7.78585851e-01, 2.25109786e-01, 8.43916476e-01],
[7.94785440e-01, 9.86553550e-01, 9.70936120e-01, 9.98435020e-01],
[5.84929466e-01, 7.75964439e-01, 7.24675894e-01, 8.34971726e-01],
[9.70665693e-01, 3.16653520e-01, 9.95440483e-01, 3.85887355e-01], 3.21474910e-01]],
dtype=float32), 'raw_detection_boxes': array([[-0.0132168 , -0.00798112, 0.03437265, 0.02366759],
[-0.01795438, -0.01333077, 0.04313567, 0.03091241],
[-0.00845873, -0.01297706, 0.02555573, 0.02979016],
[-0.01206583, -0.01901898, 0.03632494, 0.04061931],
[-0.01634497, -0.00570066, 0.04027664, 0.01987169],
[-0.02299639, -0.01094626, 0.05078602, 0.02601441],
[-0.01034649, -0.00047059, 0.03106559, 0.04336115],
[-0.01548673, -0.00679935, 0.03944379, 0.05214766],
[-0.00469762, -0.00637354, 0.02257038, 0.05068764],
[-0.00889431, -0.01532986, 0.03383063, 0.06445184],
[-0.01338234, 0.00258018, 0.03299785, 0.03899822],
[-0.02030504, -0.00274394, 0.04193052, 0.04610612],
[-0.0114202 , 0.00825354, 0.0315875 , 0.05609718],
[-0.01720474, 0.00155611, 0.03969076, 0.06473814],
[-0.0055348 , 0.00137738, 0.02347516, 0.06321988],
[-0.0093858 , -0.00954537, 0.03353771, 0.0789085 ],
[-0.01528691, 0.0120711 , 0.03230394, 0.05128276],
[-0.02242971, 0.00611713, 0.04139108, 0.0590462 ],
[-0.01265933, 0.01957938, 0.03226281, 0.06821183],
[-0.0190082 , 0.01264081, 0.04051029, 0.07676097],
[-0.00625486, 0.01262659, 0.02384217, 0.07535952],
[-0.01057751, 0.00036938, 0.03408406, 0.09211845],
[-0.01712188, 0.02387175, 0.03272626, 0.0631646 ],
[-0.02457684, 0.01729448, 0.04191976, 0.07130254],
[-0.01416131, 0.03209703, 0.03322188, 0.08013913],
[-0.02092581, 0.02524993, 0.04159252, 0.08845924],
[-0.00731821, 0.02507119, 0.02447667, 0.08743346],
[-0.01213621, 0.01294496, 0.03459452, 0.10395688],
[-0.01857999, 0.0361888 , 0.03388733, 0.07542843],
[-0.02637036, 0.02969162, 0.04293538, 0.08341235],
[-0.01507254, 0.04520991, 0.03351783, 0.09184141],
[-0.02222046, 0.03861695, 0.04212021, 0.10008947],
[-0.00780608, 0.03797973, 0.02448018, 0.09932629],
[-0.01303079, 0.02687315, 0.03459996, 0.1151351 ],
[-0.0191509 , 0.04890272, 0.03473954, 0.08777986],
[-0.02749499, 0.04277577, 0.04370061, 0.09534387],
[-0.01489433, 0.05867497, 0.03314201, 0.10344677],
[-0.02239214, 0.05207732, 0.04205906, 0.11197228],
[-0.00734611, 0.05139816, 0.02392033, 0.11116292],
[-0.01289164, 0.0412713 , 0.03449183, 0.12679553],
[-0.01872004, 0.06203329, 0.03483813, 0.09988385],
[-0.02761277, 0.05606709, 0.04412681, 0.10715124], 0.2496243 ]],
dtype=float32), 'detection_scores': array([0.9957284 , 0.9954956 , 0.9948391 , 0.9935589 , 0.9928843 ,
0.9922596 , 0.99091065, 0.9904872 , 0.9904753 , 0.9836049 ,
0.97076845, 0.76198786, 0.11483946, 0.08861226, 0.06485316,
0.06403089, 0.06083503, 0.05606595, 0.05304798, 0.05192479,
0.05068725, 0.0497607 , 0.04650801, 0.04170695, 0.04141748,
0.0396772 , 0.03875464, 0.03834933, 0.03700855, 0.03698465,
0.03656569, 0.03464538, 0.03429574, 0.03408125, 0.033981 ,
0.03356522, 0.03337869, 0.03140217, 0.03058183, 0.02957818,
0.02886528, 0.02712101, 0.02674139, 0.02655837, 0.02634463,
0.02611795, 0.02595255, 0.02580112, 0.0251711 , 0.02473494,
0.02423027, 0.02406707, 0.02352765, 0.02347961, 0.02342641,
0.02327773, 0.02312759, 0.0229713 , 0.02272761, 0.02240831,
0.02240023, 0.02203956, 0.02200234, 0.02167007, 0.02112213,
0.0210447 , 0.02079707, 0.02007249, 0.01999336, 0.01993376,
0.01986268, 0.0196887 , 0.01967749, 0.01877454, 0.01874545,
0.01856974, 0.01855248, 0.01853141, 0.01839408, 0.01838818,
0.01830906, 0.01829055, 0.01759666, 0.01758116, 0.01747909,
0.01745978, 0.01728415, 0.01719788, 0.0171611 , 0.01715598,
0.01704106, 0.01684934, 0.01672551, 0.01663077, 0.01645952,
0.01627839, 0.01607156, 0.01592609, 0.01579505, 0.01570672],
dtype=float32), 'num_detections': 100}
</code></pre>
<p>Please help me out with this; thanks in advance.</p> | <p>Check out this link: <a href="https://www.tensorflow.org/api_docs/python/tf/data/Dataset" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/data/Dataset</a></p>
<p>Here you can find how to iterate over a Dataset using <code>.as_numpy_iterator()</code>, as well as other methods to manipulate the input dataset.
Hope this will be useful.</p> | python|tensorflow|object-detection-api|tensorflow2 | 0
729 | 63,382,734 | Comparing strings within two columns in pandas with SequenceMatcher | <p>I am trying to determine the similarity of two columns in a pandas dataframe:</p>
<pre><code>Text1 All
Performance results achieved by the approaches submitted to this Challenge. The six top approaches and three others outperform the strong baseline.
Accuracy is one of the basic principles of perfectionist. Where am I?
</code></pre>
<p>I would like to compare <code>'Performance results ... '</code> with <code>'The six...'</code> and '<code>Accuracy is one...'</code> with <code>'Where am I?'</code>.
The first row should have a higher similarity degree between the two columns as it includes some words; the second one should be equal to 0 as no words are in common between the two columns.</p>
<p>To compare the two columns I've used <code>SequenceMatcher</code> as follows:</p>
<pre><code>from difflib import SequenceMatcher
ratio = SequenceMatcher(None, df.Text1, df.All).ratio()
</code></pre>
<p>but using <code>df.Text1, df.All</code> in this way seems to be wrong.</p>
<p>Can you tell me why?</p> | <ul>
<li><a href="https://docs.python.org/3/library/difflib.html#sequencematcher-objects" rel="nofollow noreferrer"><code>SequenceMatcher</code></a> isn't designed for a pandas series.</li>
<li>You could <code>.apply</code> the function.</li>
<li><a href="https://docs.python.org/3/library/difflib.html#sequencematcher-examples" rel="nofollow noreferrer"><code>SequenceMatcher</code> Examples</a>
<ul>
<li>With <code>isjunk=None</code> even spaces are not considered junk.</li>
<li>With <code>isjunk=lambda y: y == " "</code>, spaces are considered junk.</li>
</ul>
</li>
</ul>
<pre class="lang-py prettyprint-override"><code>from difflib import SequenceMatcher
import pandas as pd
data = {'Text1': ['Performance results achieved by the approaches submitted to this Challenge.', 'Accuracy is one of the basic principles of perfectionist.'],
'All': ['The six top approaches and three others outperform the strong baseline.', 'Where am I?']}
df = pd.DataFrame(data)
# isjunk=lambda y: y == " "
df['ratio'] = df[['Text1', 'All']].apply(lambda x: SequenceMatcher(lambda y: y == " ", x[0], x[1]).ratio(), axis=1)
# display(df)
Text1 All ratio
0 Performance results achieved by the approaches submitted to this Challenge. The six top approaches and three others outperform the strong baseline. 0.356164
1 Accuracy is one of the basic principles of perfectionist. Where am I? 0.088235
# isjunk=None
df['ratio'] = df[['Text1', 'All']].apply(lambda x: SequenceMatcher(None, x[0], x[1]).ratio(), axis=1)
# display(df)
Text1 All ratio
0 Performance results achieved by the approaches submitted to this Challenge. The six top approaches and three others outperform the strong baseline. 0.410959
1 Accuracy is one of the basic principles of perfectionist. Where am I? 0.117647
</code></pre> | python|pandas|nlp|sequencematcher | 1 |
730 | 53,497,334 | Error in groupby by datetime | <p>I have a sample dataset with 2 columns, Dates and eVal, like this:</p>
<pre><code> eVal Dates
0 3.622833 2015-01-01
1 3.501333 2015-01-01
2 3.469167 2015-01-01
3 3.436333 2015-01-01
4 3.428000 2015-01-01
5 3.400667 2015-01-01
6 3.405667 2015-01-01
7 3.401500 2015-01-01
8 3.404333 2015-01-01
9 3.424833 2015-01-01
10 3.489500 2015-01-01
11 3.521000 2015-01-01
12 3.527833 2015-01-01
13 3.523500 2015-01-01
14 3.511667 2015-01-01
15 3.602500 2015-01-01
16 3.657667 2015-01-01
17 3.616667 2015-01-01
18 3.534500 2015-01-01
19 3.529167 2015-01-01
20 3.548167 2015-01-01
21 3.565500 2015-01-01
22 3.539833 2015-01-01
23 3.485667 2015-01-01
24 3.493167 2015-01-02
25 3.434667 2015-01-02
26 3.422500 2015-01-02
... ...
3304546 3.166000 2015-01-31
3304547 3.138500 2015-01-31
3304548 3.128000 2015-01-31
3304549 3.078833 2015-01-31
3304550 3.106000 2015-01-31
3304551 3.116167 2015-01-31
3304552 3.087500 2015-01-31
3304553 3.089167 2015-01-31
3304554 3.126667 2015-01-31
3304555 3.191667 2015-01-31
3304556 3.227500 2015-01-31
3304557 3.263833 2015-01-31
3304558 3.263667 2015-01-31
3304559 3.255333 2015-01-31
3304560 3.265500 2015-01-31
3304561 3.234167 2015-01-31
3304562 3.231167 2015-01-31
3304563 3.236333 2015-01-31
3304564 3.274667 2015-01-31
3304565 3.223167 2015-01-31
3304566 3.238333 2015-01-31
3304567 3.235000 2015-01-31
3304568 3.227333 2015-01-31
3304569 3.185333 2015-01-31
</code></pre>
<p>I want to aggregate by day and take the mean of column eVal for each day. I tried to use:</p>
<pre><code>me = time['eVal'].groupby(time['Dates']).mean()
</code></pre>
<p>but it returns wrong mean values:</p>
<pre><code>me.head(10)
Out[149]:
Dates
2015-01-01 4.014973
2015-01-02 4.006548
2015-01-03 4.010406
2015-01-04 4.034531
2015-01-05 3.988262
2015-01-06 3.972111
2015-01-07 3.989347
2015-01-08 3.959556
2015-01-09 3.995394
2015-01-10 4.048786
Name: eVal, dtype: float64
</code></pre>
<p>If I apply describe to the groupby, it groups incorrectly: the maximum, minimum and mean values for the individual days are wrong.</p> | <p>You can use the line below:</p>
<blockquote>
<p>time.groupby('Dates').mean()</p>
</blockquote>
<p>I have tried this on your sample and below is the sample output.</p>
<pre><code>eVal Dates
2015-01-01 3.506160
2015-01-02 3.450111
</code></pre> | python-3.x|mean|pandas-groupby | 1 |
731 | 53,580,946 | Problems with shifting a matrix | <p>This is my code and I have a few problems:</p>
<pre><code>import numpy as np
from scipy.ndimage.interpolation import shift
index_default = np.array([2, 4])
b = np.zeros((5, 5), dtype=float)
f = np.array([[0, 0, 0, 0, 1],
[0, 0, 0, 0, 0],
[0, 0, 0, 0, 1],
[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0]])
m = np.array([[1, 2, 1, 2, 1],
[1, 2, 1, 2, 1],
[1, 2, 1, 2, 0],
[1, 2, 1, 2, 1],
[1, 2, 1, 2, 1]])
for index, x in np.ndenumerate(f):
if x == 1:
a = np.asarray(index)
y = np.subtract(a, index_default)
m_shift = shift(m, (y[0], y[1]), cval=0)
b += np.add(m_shift, b)
print(m_shift)
print(b)
</code></pre>
<p>When I just want to print m_shift, the code shows me only two m_shift arrays. If I run the code as depicted, with print(b), it shows me THREE m_shift arrays. Furthermore, the calculation is not what I expect. The output should be:</p>
<pre><code>f = np.array([[2, 4, 2, 4, 1],
[2, 4, 2, 4, 2],
[2, 4, 2, 4, 1],
[1, 2, 1, 2, 1],
[1, 2, 1 ,2, 1]])
</code></pre>
<p>I think the += operator creates the problem, but I think I have to use it because I want to keep the result from the loop and not overwrite it.</p> | <p>Change the <code>b +=</code> to <code>b =</code>.</p>
<pre><code>b += np.add(m_shift, b)   # corresponds to b = b + (m_shift + b), i.e. b = 2*b + m_shift
# the intended accumulation is b = b + m_shift, so use either
b = np.add(m_shift, b)    # change += to =
# or simply
b += m_shift
</code></pre> | python|arrays|numpy|shift | 0 |
732 | 71,996,972 | Amend a column in pandas dataframe based on the string value in another column | <p>I have a text column with chat information. I want to pass in a list of words so that, if they appear in that text column, the error column is set to 1, and to 0 if the words do not appear:</p>
<pre><code>chatid text card_declined booking_error website_error
401 hi my card declined.. 0 0 0
402 you website crashed.. 0 0 0
403 hi my card declined.. 0 0 0
</code></pre>
<p>For example:</p>
<pre><code>carddeclined = ['card declined', 'Card error']
for i in df[df['textchat'].str.contains('|'.join(carddeclined), na=False)]:
    df['card declined'] = 1
</code></pre>
<p>This currently just sets the "card declined" flag to 1 for every row.</p> | <p>You can assign the boolean Series converted to integers; because the list contains some uppercase values, the <code>case=False</code> parameter is added to <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.contains.html" rel="nofollow noreferrer"><code>Series.str.contains</code></a>:</p>
<pre><code>carddeclined = ['card declined', 'Card error']
df['card_declined'] = df['text'].str.contains('|'.join(carddeclined),
na=False,
case=False).astype(int)
print (df)
chatid text card_declined booking_error website_error
0 401 hi my card declined.. 1 0 0
1 402 you website crashed.. 0 0 0
2 403 hi my card declined.. 1 0 0
</code></pre> | python|pandas|dataframe | 1 |
733 | 56,544,931 | Replace entry in specific numpy array stored in dictionary | <p>I have a dictionary containing a variable number of numpy arrays (all same length), each array is stored in its respective key. </p>
<p>For each index I want to replace the value in one of the arrays by a newly calculated value. (This is a very simplified version of what I'm actually doing.)</p>
<p>The problem is that when I try this as shown below, the value at the current index of every array in the dictionary is replaced, not just the one I specify.</p>
<p>Sorry if the formatting of the example code is confusing, it's my first question here (Don't quite get how to show the line <code>example_dict["key1"][idx] = idx+10</code> properly indented in the next line of the for loop...).</p>
<pre><code>>>> import numpy as np
>>> example_dict = dict.fromkeys(["key1", "key2"], np.array(range(10)))
>>> example_dict["key1"]
</code></pre>
<blockquote>
<p>array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])</p>
</blockquote>
<pre><code>>>> example_dict["key2"]
</code></pre>
<blockquote>
<p>array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])</p>
</blockquote>
<pre><code>>>> for idx in range(10):
example_dict["key1"][idx] = idx+10
>>> example_dict["key1"]
</code></pre>
<blockquote>
<p>array([10, 11, 12, 13, 14, 15, 16, 17, 18, 19])</p>
</blockquote>
<pre><code>>>> example_dict["key2"]
</code></pre>
<blockquote>
<p>array([10, 11, 12, 13, 14, 15, 16, 17, 18, 19])</p>
</blockquote>
<p>I expected the loop to only access the array in <code>example_dict["key1"]</code>, but somehow the same operation is applied to the array stored in <code>example_dict["key2"]</code> as well.</p> | <pre class="lang-py prettyprint-override"><code>>>> hex(id(example_dict["key1"]))
'0x26a543ea990'
>>> hex(id(example_dict["key2"]))
'0x26a543ea990'
</code></pre>
<p><code>example_dict["key1"]</code> and <code>example_dict["key2"]</code> are pointing at the same address. To fix this, you can use a dict comprehension.</p>
<pre><code>import numpy
keys = ["key1", "key2"]
example_dict = {key: numpy.array(range(10)) for key in keys}
</code></pre> | python|dictionary|numpy-ndarray | 1 |
734 | 67,034,981 | Pandas dataframe values and row condition both depend on other columns | <p>I have a Pandas DataFrame:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'col1': ['a','a','b','b'],
'col2': [1,2,3,4],
'col3': [11,12,13,14]})
col1 col2 col3
0 a 1 11
1 a 2 12
2 b 3 13
3 b 4 14
</code></pre>
<p>I need to replace an entry in <code>col2</code> by some function of the row's <code>col2</code> and <code>col3</code> values if the value in <code>col1</code> is a <code>b</code>, but leave the rows unchanged if the value in <code>col1</code> is not a <code>b</code>. Say the function is <code>col3 * exp(col2)</code>, then applying this to <code>df</code> above would yield</p>
<pre><code> col1 col2 col3
0 a 1 11
1 a 2 12
2 b 261.1 13
3 b 764.4 14
</code></pre>
<p>Ideally this would be vectorised and in-place as my real DataFrame has a few million rows.</p>
<p>This differs from other questions on Stack Overflow as they only require either the new value to not depend on other columns or to change all rows at once. Thank you.</p>
<p>Edit: Corrected the target DataFrame. Had changed the function from <code>exp(col2)+col3</code> to <code>exp(col2)*col3</code> without updating the values in the example.</p> | <p>Use <code>DataFrame.loc</code>:</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame({'col1': ['a', 'a', 'b', 'b'], 'col2': [1, 2, 3, 4], 'col3': [11, 12, 13, 14]})
df.loc[df['col1'] == 'b', 'col2'] = df['col3'] * np.exp(df['col2'])
print(df)
</code></pre>
<p>Giving the correct</p>
<pre><code> col1 col2 col3
0 a 1.00000 11
1 a 2.00000 12
2 b 261.11198 13
3 b 764.37410 14
</code></pre> | python|pandas|dataframe|numpy|slice | 1 |
735 | 68,383,897 | Why is this numba code to store and process pointcloud data slower than the pure Python version? | <p>I need to store some data structure like that:</p>
<pre><code>{'x1,y1,z1': [[p11_x,p11_y,p11_z], [p12_x,p12_y,p12_z], ..., [p1n_x,p1n_y,p1n_z]],
'x2,y2,z2': [[p21_x,p21_y,p21_z], [p22_x,p22_y,p22_z], ..., [p2n_x,p2n_y,p2n_z]],
...
'xn,yn,zn': [[pn1_x,pn1_y,pn1_z], [pn2_x,pn2_y,pn2_z], ..., [pnm_x,pnm_y,pnm_z]]}
</code></pre>
<p>Every key is a grid cell index and the value is a list of classified points. The list can be variable length but I can set it static, for example 1000 elements.</p>
<p>For now I tried something like this:</p>
<pre><code>np.zeros(shape=(100,100,100,50,3))
</code></pre>
<p>But if I use <code>numba.jit</code> with that function, the execution time is a few times worse than with pure Python.</p>
<p>Simple Python example of what I want to do:</p>
<pre><code>def split_into_grid_py(points: np.array):
grid = {}
for point in points:
r_x = round(point[0])
r_y = round(point[1])
r_z = round(point[2])
try:
grid[(r_x, r_y, r_z)].append(point)
except KeyError:
grid[(r_x, r_y, r_z)] = [point]
return grid
</code></pre>
<p>Is there any efficient way of doing that with numba?
Timings per 10 executions in a loop are like:</p>
<ul>
<li>numba: 7.050494909286499</li>
<li>pure Python: 1.0014197826385498</li>
</ul>
<p>With the same data set, so the optimization is clearly not working.</p>
<p>My numba code:</p>
<pre><code>@numba.jit(nopython=True)
def split_into_grid(points: np.array):
grid = np.zeros(shape=(100,100,100,50,3))
for point in points:
r_x = round(point[0])
r_y = round(point[1])
r_z = round(point[2])
i = 0
for cell in grid[r_x][r_y][r_z]:
if not np.sum(cell):
grid[r_x][r_y][r_z][i] = point
break
i += 1
return grid
</code></pre> | <p>The pure Python version appends items in <code>O(1)</code> time thanks to the dictionary container, while the Numba version uses an O(n) array search (bounded by 50). Moreover, <code>np.zeros(shape=(100,100,100,50,3))</code> allocates an array of about 1 GiB, which results in many cache misses because the computation has to be done in RAM. Meanwhile, the pure Python version may fit in the CPU caches. There are two strategies to solve that.</p>
<p>The first strategy is to use 3 containers: an array <code>keyGrid</code> mapping each grid cell to an offset in a second array <code>valueGrid</code>, or -1 if there is no point associated with this cell. <code>valueGrid</code> contains all the points for a given grid cell. Finally, <code>countingGrid</code> counts the number of points per grid cell. Here is an untested example:</p>
<pre class="lang-py prettyprint-override"><code>@numba.jit(nopython=True)
def split_into_grid(points: np.array):
    # Note: a smaller signed type can be used if memory matters; the -1 sentinel requires a signed dtype
    keyGrid = np.full((100, 100, 100), -1, dtype=np.int32)
i = 0
for point in points:
r_x = round(point[0])
r_y = round(point[1])
r_z = round(point[2])
if keyGrid[r_x,r_y,r_z] < 0:
keyGrid[r_x,r_y,r_z] = i
i += 1
uniqueCloundPointCount = i
# Note the number of points per grid cell is also bounded by the type
countingGrid = np.zeros(uniqueCloundPointCount, dtype=np.uint8)
valueGrid = np.full((uniqueCloundPointCount, 50, 3), -1, dtype=np.int32)
for point in points:
r_x = round(point[0])
r_y = round(point[1])
r_z = round(point[2])
key = keyGrid[r_x,r_y,r_z]
addingPos = countingGrid[key]
valueGrid[key, addingPos] = point
countingGrid[key] += 1
return (keyGrid, valueGrid, countingGrid)
</code></pre>
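<p>For reference, reading the points of one cell back out of the three returned arrays could look like this (the cell indices below are illustrative):</p>
<pre class="lang-py prettyprint-override"><code>keyGrid, valueGrid, countingGrid = split_into_grid(points)

key = keyGrid[10, 20, 30]          # pick some grid cell
if key >= 0:                       # -1 means the cell holds no points
    cell_points = valueGrid[key, :countingGrid[key]]
</code></pre>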
<p>Note that the arrays are quite small as long as not all grid cells contain points, resulting in fewer cache misses. Moreover, the mapping of each point is done in (small) constant time, resulting in much faster code.</p>
<p>The second strategy is to use the same method as in the pure Python implementation, but with Numba types. Indeed, Numba experimentally <a href="https://stackoverflow.com/questions/55078628/using-dictionaries-with-numba-njit-function">supports dictionaries</a>. You can replace the exception with a dictionary check (<code>(r_x, r_y, r_z) in grid</code>), which will cause fewer compilation issues and likely speed up the resulting code. Note that Numba dicts are often only as fast as CPython ones (if not slower), so the resulting code may not be much faster.</p> | python|numpy|performance|numba | 2
736 | 68,116,678 | pandas to_json, django and d3.js for visualisation | <p>I have a pandas dataframe that I converted to JSON in order to create graphs and visualize them with d3.js. I would like to know how to send this JSON obtained in Django (in the view or the template) in order to visualize it with d3.js.</p>
<pre><code>def parasol_view(request):
parasol = function_parasol()
parasol_json = parasol.to_json(orient='records')
parasol = parasol.to_html(index = False, table_id="table_parasol")
context = {
        'parasol': parasol,
'parasol_json':parasol_json
}
return render(request,'parasol.html',context)
</code></pre>
<p>template :</p>
<pre><code>{%block content%}
{{parasol| safe}}
{{parasol_json| safe}}
{%endblock content%}
</code></pre> | <p>I'm not sure what the parasol.to_html is for, therefore I left that part untouched.
But this is what I would do in order to use your .json file:</p>
<p>Views.py:</p>
<pre><code>def parasol_view(request):
parasol = function_parasol()
# parasol_json = parasol.to_json(orient='records')
parasol = parasol.to_html(index = False, table_id="table_parasol")
context = {
'parasol': parasol
# 'parasol_json':parasol_json
}
return render(request,'parasol.html', context)
</code></pre>
<p>Function parasol:</p>
<pre><code>def function_parasol():
    # whatever code you have here builds the parasol DataFrame
    # parasol2 is a separate variable so that the parasol that gets returned
    # stays the same DataFrame as before and can still be used for parasol.to_html
    parasol2 = parasol.to_json(orient='records')
    text_file = open("parasol.json", "w")  # name of the file, make sure it ends with .json
    text_file.write(parasol2)
    text_file.close()
    return parasol
</code></pre>
<p>The javascript file where you want to make a graph with d3:</p>
<pre><code>//so basically var_name = parasol.json
d3.json("/parasol.json", function(var_name) {
//Go make your graphs
});
</code></pre>
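<p>Note: the callback form above is for d3 v4 and earlier; from d3 v5 on, <code>d3.json</code> returns a promise, so the equivalent call is <code>d3.json("/parasol.json").then(function(var_name) { ... })</code>.</p>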
<p>P.S. If you upload another file that gets parsed into JSON, parasol.json will be overwritten every time,
so you won't end up with an abundance of JSON files at some point.</p> | json|django|pandas|d3.js | 0
737 | 68,138,679 | GridSearchCV results heatmap | <p>I am trying to generate a heatmap for the GridSearchCV results from sklearn. The thing I like about <a href="https://sklearn-evaluation.readthedocs.io/en/stable/user_guide/grid_search.html" rel="nofollow noreferrer">sklearn-evaluation</a> is that it is really easy to generate the heatmap. However, I have hit one issue. When I give a parameter as None, for e.g.</p>
<pre><code>max_depth = [3, 4, 5, 6, None]
</code></pre>
<p>while generating the heatmap, it shows an error saying:</p>
<pre><code>TypeError: '<' not supported between instances of 'NoneType' and 'int'
</code></pre>
<p>Is there any workaround for this?
I have found other ways to generate heatmaps, like using matplotlib and seaborn, but nothing gives as beautiful heatmaps as sklearn-evaluation.</p>
<p><a href="https://i.stack.imgur.com/GU7YF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GU7YF.png" alt="enter image description here" /></a></p> | <p>I fiddled around with the <code>grid_search.py</code> file <code>/lib/python3.8/site-packages/sklearn_evaluation/plot/grid_search.py</code>. At line 192/193 change the lines</p>
<p>From</p>
<pre><code>row_names = sorted(set([t[0] for t in matrix_elements.keys()]),
key=itemgetter(1))
col_names = sorted(set([t[1] for t in matrix_elements.keys()]),
key=itemgetter(1))
</code></pre>
<p>To:</p>
<pre><code>row_names = sorted(set([t[0] for t in matrix_elements.keys()]),
key=lambda x: (x[1] is None, x[1]))
col_names = sorted(set([t[1] for t in matrix_elements.keys()]),
key=lambda x: (x[1] is None, x[1]))
</code></pre>
<p>Moving all <code>None</code> to the end of a list while sorting is based on a previous <a href="https://stackoverflow.com/a/18411610/3896008">answer</a>
from Andrew Clarke.</p>
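<p>As a quick illustration of what that sort key does on a plain list:</p>
<pre><code>vals = [3, None, 1, None, 2]
sorted(vals, key=lambda x: (x is None, x))
# [1, 2, 3, None, None]
</code></pre>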
<p>Using this tweak, my demo script is shown below:</p>
<pre><code>import numpy as np
import sklearn.datasets as datasets
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn_evaluation import plot
data = datasets.make_classification(n_samples=200, n_features=10, n_informative=4, class_sep=0.5)
X = data[0]
y = data[1]
hyperparameters = {
"max_depth": [1, 2, 3, None],
"criterion": ["gini", "entropy"],
"max_features": ["sqrt", "log2"],
}
est = RandomForestClassifier(n_estimators=5)
clf = GridSearchCV(est, hyperparameters, cv=3)
clf.fit(X, y)
plot.grid_search(clf.cv_results_, change=("max_depth", "criterion"), subset={"max_features": "sqrt"})
import matplotlib.pyplot as plt
plt.show()
</code></pre>
<p>The output is as shown below:
<a href="https://i.stack.imgur.com/o034f.png" rel="noreferrer"><img src="https://i.stack.imgur.com/o034f.png" alt="enter image description here" /></a></p> | python|matplotlib|scikit-learn|seaborn|sklearn-pandas | 5 |
738 | 59,391,569 | Is word embedding + other features possible for classification problem? | <p>My task was to create a classifier model for a review dataset. I have 15000 train observations, 5000 dev and 5000 test. </p>
<p>The task specified that 3 features needed to be used: I used <code>TFIDF</code> (5000 features there), <code>BOW</code> (2000 more features) and the <code>review length</code> (1 more feature). So, for example, my X_train is an array shaped (15000,7001).</p>
<p>I was investigating and I found that word embedding (<code>word2vec</code> especially) could be a nice alternative to BOW.
My question is, can it be used (can it be put in the same "array" format as my other features?) in addition to my other features? </p>
<p>I did some research on it, but it didn't quite answer my question.</p> | <p>Theoretically <strong>yes</strong>.</p>
<p>Every document (let's say sentences) in your text corpus can be quantified as an array, and stacking all these arrays gives you a matrix. Let's say that this quantification was done using BOW and now you want to apply word2vec: the only thing you need to make sure of is that your arrays (the quantified representations of individual sentences) have the same length as the BOW arrays, and then you can simply add them row-wise. (This is only in theory; there are better pooling methods to combine them.) There are also some neat sklearn modules; take a look at <a href="https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html" rel="nofollow noreferrer">pipeline</a>.</p>
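<p>A minimal sketch of one common way to combine such features, by concatenating them column-wise (the array shapes and the use of averaged word vectors per document are just illustrative assumptions):</p>
<pre><code>import numpy as np

n_docs = 15000
X_other = np.random.rand(n_docs, 7001)      # stand-in for the existing TFIDF + BOW + length features
doc_vectors = np.random.rand(n_docs, 100)   # e.g. the mean word2vec vector of each review

X_combined = np.hstack([X_other, doc_vectors])   # shape (15000, 7101)
</code></pre>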
<p><strong>However</strong>, sometimes it will be overkill (depending on the data, of course): combining TF-IDF and BOW can give you a lot of redundant information.</p> | python|machine-learning|scikit-learn|text-classification|sklearn-pandas | 0
739 | 59,260,854 | What does required_grad do in PyTorch? (Not requires_grad) | <p>I have been trying to carry out transfer learning on a multiclass classification task using resnet as my backbone.</p>
<p>In many tutorials, it was stated that it would be wise to try and train only the last layer (usually a fully connected layer) again, while freezing the other layers. The freezing would be done as so:</p>
<pre><code>for param in model.parameters():
param.requires_grad = False
</code></pre>
<p>However, I just realized that all of my layers were actually <strong>not frozen</strong>, and while checking my code I realized I had made a typo:</p>
<pre><code>for param in model.parameters():
param.required_grad = False
</code></pre>
<p>In a way that I wrote <code>required_grad</code> instead of <code>requires_grad</code>.</p>
<p>I can't seem to find information on <code>required_grad</code> - what it is, nor what it does. The only thing I found out was that it did not change the <code>requires_grad</code> flag, and that there is a separate <code>required_grad</code> flag which is set to False instead. </p>
<p>Can anyone explain what <code>required_grad</code> does? Have I been 'not freezing' my other layers all this time?</p> | <p>Ok, this was really silly.</p>
<pre><code>for param in model.parameters():
param.required_grad = False
</code></pre>
<p>In this case, a new 'required_grad' attribute is created due to the typo I made.
For example, even the following wouldn't raise an error:</p>
<pre><code>for param in model.parameters():
param.what_in_the_world = False
</code></pre>
<p>And all the parameters of the model would now have a <code>what_in_the_world</code> attribute.</p>
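<p>For reference, the intended freezing uses the real attribute, note the trailing "s":</p>
<pre><code>for param in model.parameters():
    param.requires_grad = False
</code></pre>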
<p>I hope no one else wastes their time due to this.</p> | python|pytorch|backpropagation|resnet | 3 |
740 | 56,966,499 | Feeding Torch objects from csv to learn | <p>I am trying to give PyTorch the input to build a very simple neural network. Here is my problem:
I have all the data I want to use in a CSV file and I am using pandas to read it.
Here is my code:</p>
<pre><code>data = pd.read_csv("../myFile.csv")
input = [x for x in data]
input = np.asarray(input)
input = torch.from_numpy(input)
</code></pre>
<p>This returns the error:</p>
<pre><code>line 42, in <module>
input = torch.from_numpy(input)
TypeError: can't convert np.ndarray of type numpy.str_. The only supported types are: float64, float32, float16, int64, int32, int16, int8, and uint8.
</code></pre>
<p>I understand what the error means. The main problem is that there are a few columns in my CSV that cannot be cast to int or float, as they are basically strings; for example, the customer ID would be something like AAABBBCCC, and I cannot cast that to float or int. Do you have any idea what I can do?</p>
<p>EDIT: Here is my new updated code with the proposed answer by Anubhav:</p>
<pre><code>data = pd.read_csv("myFile.csv", names=col_names)
data = data.drop(["Customer-ID", "name"], axis=1)
for column in list(data):
# one hot encode of Object Columns
one_hot = pd.get_dummies(data[column])
# drop encoded columns
data = data.drop(column, axis=1)
# join the encoded data
data = data.join(one_hot)
print(data.dtypes)
inp = [x for x in data]
inp = np.asarray(inp, dtype=np.float32)
inp = torch.from_numpy(inp)
</code></pre>
<p>But I still get the following error:</p>
<pre><code>line 52, in <module>
inp = np.asarray(inp, dtype=np.float32)
File "C:\Users\Paul\anaconda3\lib\site-packages\numpy\core\numeric.py", line 538, in asarray
return array(a, dtype, copy=False, order=order)
ValueError: could not convert string to float: '01139_Lichtenau'
</code></pre>
<p>Looking up this value from print(data.dtypes) it clearly says:</p>
<pre><code>01139_Lichtenau uint8
</code></pre>
<p>Did the encoding fail for some reason?</p> | <p>Have you checked the output of <code>[x for x in data]</code>? It's just the list of column names, which are of type string. That's why you are getting the above error. Now, I will help you solve your problem using a sample <code>csv</code> file.</p>
<p>Filename: <code>data.csv</code></p>
<pre><code>custID name age salary label
EMP1 Mahi 23 120000 Yes
EMP2 Tom 28 200000 No
EMP3 Chris 25 123000 No
EMP4 Matt 29 2130000 Yes
EMP5 Brown 27 324675 Yes
</code></pre>
<p>As you can see, above file contains both string and integer values. First, I will show the output of your code:</p>
<pre><code>data = pd.read_csv('data.csv', sep=' ')
inp = [x for x in data] # ['custID', 'name', 'age', 'salary', 'label']
inp = np.asarray(inp) # array(['custID', 'name', 'age', 'salary', 'label'], dtype='<U6')
inp = torch.from_numpy(inp)
</code></pre>
<blockquote>
<p>TypeError: can't convert np.ndarray of type numpy.str_. The only
supported types are: double, float, float16, int64, int32, and uint8</p>
</blockquote>
<p>What you can do is keep only those string columns that are important for the neural network and that you can <code>one-hot</code> encode. Columns like <code>custID</code> and <code>name</code> don't have any significance as far as the neural network is concerned.</p>
<pre><code>data = pd.read_csv('data.csv', sep=' ')
# one-hot encode of column 'label'
# Get one hot encoding of column 'label'
one_hot = pd.get_dummies(data['label'])
# Drop column 'label' as it is now encoded
data = data.drop('label',axis = 1)
# Join the encoded data
data = data.join(one_hot)
inp = [data[x] for x in data]
inp = np.asarray(inp[2:], dtype=np.float32)
inp = torch.from_numpy(inp)
</code></pre>
<p>output:</p>
<pre><code>tensor([[ 23., 28., 25., 29., 27.],
[ 120000., 200000., 123000., 2130000., 324675.],
[ 0., 1., 1., 0., 0.],
[ 1., 0., 0., 1., 1.]])
</code></pre>
<p>In the above code, I have first one-hot encoded the column <code>label</code>, then dropped that column and joined the encoded data. After that, I read all the columns of the <code>csv</code> file (including <code>custID</code> and <code>name</code>), ignored the <code>custID</code> and <code>name</code> columns, converted the others to float, and finally used <code>torch.from_numpy</code> to convert the result to a tensor.</p>
<p>In the above output, every row corresponds to a column in <code>one-hot</code> encoded data.</p> | python|csv|pytorch|torch | 0 |
741 | 46,143,091 | How do I add a new column and insert (calculated) data in one line? | <p>I'm pretty new to python so it's a basic question.</p>
<p>I have data that I imported from a csv file. Each row reflects a person and his data. Two attributes are Sex and Pclass. I want to add a new column (predictions) that is fully dependent on those two, in one line. If both attributes' values are 1 it should assign 1 to the person's predictions field, 0 otherwise.</p>
<p>How do I do it in one line (let's say with Pandas)?</p> | <p>Use:</p>
<pre><code>np.random.seed(12)
df = pd.DataFrame(np.random.randint(3,size=(10,2)), columns=['Sex','Pclass'])
df['prediction'] = ((df['Sex'] == 1) & (df['Pclass'] == 1)).astype(int)
print (df)
Sex Pclass prediction
0 2 1 0
1 1 2 0
2 0 0 0
3 2 1 0
4 0 1 0
5 1 1 1
6 2 2 0
7 2 0 0
8 1 0 0
9 0 1 0
</code></pre>
<p>If all values are <code>1</code> and <code>0</code> only use <a href="https://stackoverflow.com/questions/46143091/pyhton-how-do-i-add-a-new-column-and-insert-calculated-data-in-one-line/46143110#comment79247988_46143106">John Galt</a> solutions:</p>
<pre><code>#only 0, 1 values
df['predictions'] = df.all(axis=1).astype(int)
#if more possible values
df['predictions'] = df.eq(1).all(axis=1).astype(int)
print (df)
Sex Pclass predictions
0 2 1 0
1 1 2 0
2 0 0 0
3 2 1 0
4 0 1 0
5 1 1 1
6 2 2 0
7 2 0 0
8 1 0 0
9 0 1 0
</code></pre> | python|pandas | 1 |
742 | 45,991,452 | Sorting the columns of a pandas dataframe | <pre><code>Out[1015]: gp2
department MOBILE QA TA WEB MOBILE QA TA WEB
minutes minutes minutes minutes growth growth growth growth
period
2016-12-24 NaN NaN 140.0 400.0 NaN NaN 0.0 260.0
2016-12-25 NaN NaN NaN 80.0 NaN NaN NaN -320.0
2016-12-26 NaN NaN NaN 20.0 NaN NaN NaN -60.0
2016-12-27 NaN 45.0 NaN 180.0 NaN 25.0 NaN 135.0
2016-12-28 600.0 NaN NaN 15.0 420.0 NaN NaN -585.0
... ... ... ... ... ... ... ... ...
2017-01-03 NaN NaN NaN 80.0 NaN NaN NaN -110.0
2017-01-04 20.0 NaN NaN NaN -60.0 NaN NaN NaN
2017-02-01 120.0 NaN NaN NaN 100.0 NaN NaN NaN
2017-02-02 45.0 NaN NaN NaN -75.0 NaN NaN NaN
2017-02-03 NaN 45.0 NaN 30.0 NaN 0.0 NaN -15.0
</code></pre>
<p>I need MOBILE.minutes and MOBILE.growth to be one after another.</p>
<p>I tried this</p>
<pre><code>In [1019]:gp2.columns = gp2.columns.sort_values()
In [1020]: gp2
Out[1020]:
department MOBILE QA TA WEB
growth minutes growth minutes growth minutes growth minutes
period
2016-12-24 NaN NaN 140.0 400.0 NaN NaN 0.0 260.0
2016-12-25 NaN NaN NaN 80.0 NaN NaN NaN -320.0
2016-12-26 NaN NaN NaN 20.0 NaN NaN NaN -60.0
2016-12-27 NaN 45.0 NaN 180.0 NaN 25.0 NaN 135.0
2016-12-28 600.0 NaN NaN 15.0 420.0 NaN NaN -585.0
... ... ... ... ... ... ... ... ...
2017-01-03 NaN NaN NaN 80.0 NaN NaN NaN -110.0
2017-01-04 20.0 NaN NaN NaN -60.0 NaN NaN NaN
2017-02-01 120.0 NaN NaN NaN 100.0 NaN NaN NaN
2017-02-02 45.0 NaN NaN NaN -75.0 NaN NaN NaN
2017-02-03 NaN 45.0 NaN 30.0 NaN 0.0 NaN -15.0
</code></pre>
<p>It sorted just the column labels but didn't move the underlying values with them.</p> | <p>Just use <code>df.sort_index</code>, which reorders the data together with the labels:</p>
<pre><code>df = df.sort_index(level=[0, 1], axis=1)
print(df)
MOBILE QA TA WEB
growth minutes growth minutes growth minutes growth minutes
period
2016-12-24 NaN NaN NaN NaN 0.0 140.0 260.0 400.0
2016-12-25 NaN NaN NaN NaN NaN NaN -320.0 80.0
2016-12-26 NaN NaN NaN NaN NaN NaN -60.0 20.0
2016-12-27 NaN NaN 25.0 45.0 NaN NaN 135.0 180.0
2016-12-28 420.0 600.0 NaN NaN NaN NaN -585.0 15.0
2017-01-03 NaN NaN NaN NaN NaN NaN -110.0 80.0
2017-01-04 -60.0 20.0 NaN NaN NaN NaN NaN NaN
2017-02-01 100.0 120.0 NaN NaN NaN NaN NaN NaN
2017-02-02 -75.0 45.0 NaN NaN NaN NaN NaN NaN
2017-02-03 NaN NaN 0.0 45.0 NaN NaN -15.0 30.0
</code></pre> | python|pandas|sorting|dataframe | 6 |
743 | 50,712,246 | Pytrends anaconda install conflict with TensorFlow | <p>It seems I have a conflict when trying to install pytrends via anaconda. After submitting "pip install pytrends" the following error arises:</p>
<p>tensorflow-tensorboard 1.5.1 has requirement bleach==1.5.0, but you'll have bleach 2.0.0 which is incompatible.
tensorflow-tensorboard 1.5.1 has requirement html5lib==0.9999999, but you'll have html5lib 0.999999999 which is incompatible.</p>
<p>I also have tensorflow but don't necessarily need it. However, I'd prefer a means to operate with both.</p> | <p>Try upgrading your version of tensorflow. I tried it with TensorFlow 1.6.0 and tensorboard 1.5.1 and it worked fine; I was able to import pytrends.</p>
744 | 51,086,721 | convert pandas dataframe of strings to numpy array of int | <p>My input is a pandas dataframe with strings inside:</p>
<pre><code>>>> data
218.0
221.0
222.0
224.0 71,299,77,124
227.0 50,283,81,72
229.0
231.0 84,349
233.0
235.0
240.0 53,254
Name: Q25, dtype: object
</code></pre>
<p>now i want a shaped ( .reshape(-1,2) ) numpy array of ints for every row like that:</p>
<pre><code>>>> data
218.0 []
221.0 []
222.0 []
224.0 [[71,299], [77,124]]
227.0 [[50,283], [81,72]]
229.0 []
231.0 [[84,349]]
233.0 []
235.0 []
240.0 [[53,254]]
Name: Q25, dtype: object
</code></pre>
<p>I don't know how to get there with vector operations.
Can someone help?</p> | <p>You can use <code>apply</code>; this isn't a vectorized operation though:</p>
<pre><code>In [277]: df.val.fillna('').apply(
lambda x: np.array(x.split(','), dtype=int).reshape(-1, 2) if x else [])
Out[277]:
0 []
1 []
2 []
3 [[71, 299], [77, 124]]
4 [[50, 283], [81, 72]]
5 []
6 [[84, 349]]
7 []
8 []
9 [[53, 254]]
Name: val, dtype: object
</code></pre> | python|pandas|numpy | 3 |
745 | 66,707,461 | Read dataframe values by certain criteria | <p>I'm reading some data using pandas and this is the dataframe:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;"></th>
<th style="text-align: center;">PARAMETER</th>
<th style="text-align: right;">VALUE</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">0</td>
<td style="text-align: center;">Param1</td>
<td style="text-align: right;">1.2</td>
</tr>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: center;">Param2</td>
<td style="text-align: right;">5.0</td>
</tr>
<tr>
<td style="text-align: left;">2</td>
<td style="text-align: center;">Param3</td>
<td style="text-align: right;">9.3</td>
</tr>
<tr>
<td style="text-align: left;">3</td>
<td style="text-align: center;">Param4</td>
<td style="text-align: right;">30</td>
</tr>
<tr>
<td style="text-align: left;">4</td>
<td style="text-align: center;">Param5</td>
<td style="text-align: right;">1500</td>
</tr>
</tbody>
</table>
</div>
<p>What would be the best way to access values by parameter name? For example, I need the value of Param4; is there any way to say something like read['Param'].value?</p> | <p>One way would be using <code>loc</code>, like:</p>
<pre><code>float(df.loc[df['PARAMETER']=='Param4']['VALUE']) # locate col PARAMETER and get VALUE
Out[81]: 30.0
# Or
df.loc[df['PARAMETER']=='Param4']['VALUE'].values
Out[94]: array([30.])
</code></pre>
<p>Another way would be to create a <code>dict</code> and access them like:</p>
<pre><code># Using a dictionary
d = dict(zip(df.PARAMETER,df.VALUE))
d['Param4']
Out[82]: 30.0
d['Param3']
Out[90]: 9.3
</code></pre> | python|pandas|dataframe | 1 |
746 | 57,316,557 | tf.keras.layers.pop() doesn't work, but tf.keras._layers.pop() does | <p>I want to pop the last layer of the model. So I use the <code>tf.keras.layers.pop()</code>, but it doesn't work.</p>
<pre><code>base_model.summary()
</code></pre>
<p><a href="https://i.stack.imgur.com/wRz02.png" rel="noreferrer"><img src="https://i.stack.imgur.com/wRz02.png" alt="enter image description here"></a></p>
<pre><code>base_model.layers.pop()
base_model.summary()
</code></pre>
<p><a href="https://i.stack.imgur.com/msGnY.png" rel="noreferrer"><img src="https://i.stack.imgur.com/msGnY.png" alt="enter image description here"></a></p>
<p>When I use <code>tf.keras._layers.pop()</code>, it works.</p>
<pre><code>base_model.summary()
</code></pre>
<p><a href="https://i.stack.imgur.com/CAq45.png" rel="noreferrer"><img src="https://i.stack.imgur.com/CAq45.png" alt="enter image description here"></a></p>
<pre><code>base_model._layers.pop()
base_model.summary()
</code></pre>
<p><a href="https://i.stack.imgur.com/viCxT.png" rel="noreferrer"><img src="https://i.stack.imgur.com/viCxT.png" alt="enter image description here"></a></p>
<p>I can't find docs about this usage. Could someone help explain this?</p> | <p>I agree this is confusing. The reason is that <code>model.layers</code> returns a shallow copy of the layers list, so popping from that copy does not change the model itself.</p>
<p>The TL;DR is: don't use <code>model.layers.pop()</code> to remove the last layer. Instead, we should create a new model with all but the last layer, perhaps something like this:</p>
<pre class="lang-py prettyprint-override"><code>new_model = tf.keras.models.Sequential(base_model.layers[:-1])
</code></pre>
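<p>A new output layer can then be attached to the truncated model; the layer below is only an illustration (its size and activation are made up):</p>
<pre class="lang-py prettyprint-override"><code>new_model.add(tf.keras.layers.Dense(10, activation='softmax'))
</code></pre>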
<p>Check out this <a href="https://github.com/tensorflow/tensorflow/issues/22479" rel="noreferrer">GitHub issue</a> for more details.</p> | python|tensorflow|tensorflow2.0 | 9
747 | 57,655,992 | How to Concatenate Mnist Test and Train Images? | <p>I'm trying to train a Generative Adversarial Network. To train the network I'm using the MNIST dataset, and I will train the network with concatenated test and train images.</p>
<pre><code>import numpy as np
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist=input_data.read_data_sets("data/mnist",one_hot=False)
images=np.concatenate(mnist.test.images,mnist.train.images)
</code></pre>
<p>An error occurred when I ran the code:</p>
<pre><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-9-02ac414642a1> in <module>()
3 from tensorflow.examples.tutorials.mnist import input_data
4 mnist=input_data.read_data_sets("data/mnist",one_hot=False)
----> 5 images=np.concatenate(mnist.test.images,mnist.train.images)
TypeError: only integer scalar arrays can be converted to a scalar index
</code></pre>
<p>How to solve that or is there another way to concatenate <code>mnist.test.images</code> and <code>mnist.train.images</code> arrays?</p> | <p>That's not how you should use <code>numpy.concatenate</code>. You can do it like this:</p>
<pre><code>images = np.concatenate([mnist.test.images, mnist.train.images], axis=0)
</code></pre>
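<p>If the corresponding labels are needed as well, the same pattern applies (a sketch, assuming the dataset was loaded with <code>one_hot=False</code> as in the question):</p>
<pre><code>labels = np.concatenate([mnist.test.labels, mnist.train.labels], axis=0)
</code></pre>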
<p>If you go through the <code>numpy.concatenate</code> <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.concatenate.html" rel="nofollow noreferrer">documentation</a>, you will see that as a first argument the <code>numpy.concatenate</code> expects:</p>
<blockquote>
<p>a1, a2, … : sequence of array_like</p>
</blockquote>
<p>Therefore, combining <code>mnist.test.images</code> and <code>mnist.train.images</code> in a sequence (here a list), as in the snippet above, solves your problem. Additionally, even though the default value of the second argument <code>axis</code> is <code>axis=0</code>, I tend to specify it for clarity.</p> | python|numpy|tensorflow | 1
748 | 72,996,661 | Pivot from tabular to matrix with row and column multiindex and given order | <p>Given the following dataframe containing a matrix in tabular format:</p>
<pre><code>values = list(range(5))
var1 = ['01', '02', '03', '0', '0']
var2 = ['a', 'b', 'c', 'd', 'e']
var3 = ['01', '02', '03', '0', '0']
var4 = ['a', 'b', 'c', 'd', 'e']
var5 = ['S1', 'S1','S1', 'S3', 'S2']
var6 = ['P1', 'P1','P1', 'P3', 'P2']
df = pd.DataFrame({'var1': var1,
'var2': var2,
'var3': var3,
'var4': var4,
'var5': var5,
'var6': var6,
'value': values})
</code></pre>
<p>And the following imposed order for var5 and var6:</p>
<pre><code>var5_order = ['S1', 'S2', 'S3', 'S4']
var6_order = ['P1', 'P2', 'P3', 'P4']
</code></pre>
<p>How to pivot the dataframe in a way that var6, var1, and var2 (in this order) define a row multiindex and var5, var3, and var4 (in this order) define a column multiindex? In addition, how to impose var5_order and var6_order in the pivoted dataframe?</p> | <p>Yes, you can do <code>pd.Categorical</code> with <code>ordered=True</code>:</p>
<pre><code>df['var5'] = pd.Categorical(df['var5'], categories=var5_order, ordered=True)
df['var6'] = pd.Categorical(df['var6'], categories=var6_order, ordered=True)
df.pivot_table(index=['var6','var1','var2'],
columns=['var5','var3','var4'],
values='value')
</code></pre>
<p>Output:</p>
<pre><code>var5 S1 S2 S3
var3 01 02 03 0 0
var4 a b c e d
var6 var1 var2
P1 01 a 0.0 NaN NaN NaN NaN
02 b NaN 1.0 NaN NaN NaN
03 c NaN NaN 2.0 NaN NaN
P2 0 e NaN NaN NaN 4.0 NaN
P3 0 d NaN NaN NaN NaN 3.0
</code></pre> | python|pandas | 1 |
749 | 73,059,534 | Combine (merge/join/concat) two dataframes by mask (leave only first matches) in pandas [python] | <p>I have <code>df1</code>:</p>
<pre class="lang-py prettyprint-override"><code> match
0 a
1 a
2 b
</code></pre>
<p>And I have <code>df2</code>:</p>
<pre class="lang-py prettyprint-override"><code> match number
0 a 1
1 b 2
2 a 3
3 a 4
</code></pre>
<p>I want to combine these two dataframes so that only first matches remained, like this:</p>
<pre class="lang-py prettyprint-override"><code> match_df1 match_df2 number
0 a a 1
1 a a 3
2 b b 2
</code></pre>
<p>I've tried different combinations of inner <code>join</code>, <code>merge</code> and <code>pd.concat</code>, but nothing gave me anything close to the desired output. Is there any pythonic way to make it without any loops, just with pandas methods?</p>
<p>Update:</p>
<p>For now I came up with this solution. Not sure if it's the most efficient. Your help would be appreciated!</p>
<pre class="lang-py prettyprint-override"><code>df = pd.merge(df1, df2, on='match').drop_duplicates('number')
for match, count in df1['match'].value_counts().iteritems():
df = df.drop(index=df[df['match'] == match][count:].index)
</code></pre> | <p>In your case you can do it with <code>groupby</code> and <code>cumcount</code> before the <code>merge</code>. Notice I do not keep two match columns, since they are the same:</p>
<pre><code>df1['key'] = df1.groupby('match').cumcount()
df2['key'] = df2.groupby('match').cumcount()
out = df1.merge(df2)
Out[418]:
match key number
0 a 0 1
1 a 1 3
2 b 0 2
</code></pre> | python|pandas|join | 1 |
750 | 72,992,807 | converting the duration column into short, medium, long values | <p>How do I convert the duration column into short, medium, long values? Come up with the boundaries by splitting the duration range into 3 equal-size ranges.</p>
<pre class="lang-py prettyprint-override"><code>df['duration'].head()
</code></pre>
<p>Output:</p>
<pre class="lang-py prettyprint-override"><code>0 01:13
1 01:09
2 01:40
3 00:48
4 00:49
...
98 00:52
99 01:10
100 01:05
101 01:09
102 00:58
Name: duration, Length: 103, dtype: object
</code></pre> | <p>You can use <a href="https://pandas.pydata.org/docs/reference/api/pandas.cut.html" rel="nofollow noreferrer"><code>pandas.cut</code></a> on the Series of Timedelta after conversion with <a href="https://pandas.pydata.org/docs/reference/api/pandas.to_timedelta.html" rel="nofollow noreferrer"><code>pandas.to_timedelta</code></a>:</p>
<pre><code>df['bin'] = pd.cut(pd.to_timedelta(df['duration']+':00'),
bins=3, labels=['low', 'medium', 'high'])
</code></pre>
<p><em>NB. assuming a <code>HH:MM</code> format here, for <code>MM:SS</code>, use <code>'00:'+df['duration']</code></em></p>
<p>output:</p>
<pre><code> duration bin
0 01:13 medium
1 01:09 medium
2 01:40 high
3 00:48 low
4 00:49 low
... ... ...
98 00:52 low
99 01:10 medium
100 01:05 low
101 01:09 medium
102 00:58 low
</code></pre>
<p>If you also want the boundaries:</p>
<pre><code>df['bin'], boundaries = pd.cut(pd.to_timedelta(df['duration']+':00'),
bins=3, labels=['low', 'medium', 'high'],
retbins=True
)
print(boundaries)
</code></pre>
<p>output:</p>
<pre><code>TimedeltaIndex(['0 days 00:47:56.880000', '0 days 01:05:20',
'0 days 01:22:40', '0 days 01:40:00'],
dtype='timedelta64[ns]', freq=None)
</code></pre> | python|pandas|dataframe|binning | 1 |
751 | 73,090,849 | How to transform Pandas df for stacked bargraph | <p>I have the following df resulting from:</p>
<pre><code>plt_df = df.groupby(['Aufnahme_periode','Preisgruppe','Land']).count()
INDEX Aufnahme_periode Preisgruppe Land Anzahl
1344 2021-11-01 1 NL 2
1345 2021-12-01 1 AT 8
1346 2021-12-01 1 BE 1
1347 2021-12-01 1 CH 1
1348 2021-12-01 1 DE 154
1349 2021-12-01 1 GB 3
1350 2021-12-01 1 JP 1
1351 2021-12-01 1 NL 2
1352 2022-01-01 1 AT 1
1353 2022-01-01 1 DE 19
1354 2022-01-01 1 IT 1
1355 2022-01-01 2 CH 1
1356 2022-01-01 2 DE 1
1357 2022-02-01 1 DE 16
1358 2022-02-01 1 FR 1
1359 2022-02-01 2 CH 1
1360 2022-03-01 1 DE 23
</code></pre>
<p>I would like to make two bar charts:</p>
<p>1:</p>
<p>Monthly signup by "Preisgruppe" so stack group 2 on top of group 1:</p>
<p>The df should probably have columns for group 1 and 2 and rows for each month.</p>
<p>2:
Monthly signup by country:</p>
<p>Same as above, one column for each country and rows for the months.</p>
<p>I think I know how to make the charts, I just need help transforming the df.</p> | <p>You need a <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.pivot_table.html" rel="nofollow noreferrer"><code>pivot_table</code></a>:</p>
<pre><code>(plt_df
.pivot_table(index='Aufnahme_periode', columns='Preisgruppe',
values='Anzahl', aggfunc='sum')
.plot.bar(stacked=True)
)
</code></pre>
<p>output:</p>
<p><a href="https://i.stack.imgur.com/H6lBx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/H6lBx.png" alt="enter image description here" /></a></p>
<p>Same thing for the countries, just use <code>'Land'</code> in place of <code>'Preisgruppe'</code>:</p>
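<p>That is, the same call with only the <code>columns</code> argument changed:</p>
<pre><code>(plt_df
 .pivot_table(index='Aufnahme_periode', columns='Land',
              values='Anzahl', aggfunc='sum')
 .plot.bar(stacked=True)
)
</code></pre>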
<p>output:</p>
<p><a href="https://i.stack.imgur.com/FX8c8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FX8c8.png" alt="enter image description here" /></a></p> | python|pandas|matplotlib|bar-chart | 0 |
752 | 70,988,713 | Normalizing rows of pandas DF when there's string columns? | <p>I'm trying to normalize a Pandas DF by row and there's a column which has string values which is causing me a lot of trouble. Anyone have a neat way to make this work?</p>
<p>For example:</p>
<pre><code> system Fluency Terminology No-error Accuracy Locale convention Other
19 hyp.metricsystem2 111 28 219 98 0 133
18 hyp.metricsystem1 97 22 242 84 0 137
22 hyp.metricsystem5 107 11 246 85 0 127
17 hyp.eTranslation 49 30 262 80 0 143
20 hyp.metricsystem3 86 23 263 89 0 118
21 hyp.metricsystem4 74 17 274 70 0 111
</code></pre>
<p>I am trying to normalize each row from <code>Fluency</code>, <code>Terminology</code>, etc. Other over the total. In other words, divide each integer column entry over the total of each row (<code>Fluency[0]/total_row[0]</code>, <code>Terminology[0]/total_row[0]</code>, ...)</p>
<p>I tried using this command, but it's giving me an error because I have a column of strings</p>
<pre><code>bad_models.div(bad_models.sum(axis=1), axis = 0)
</code></pre>
<p>Any help would be greatly appreciated...</p> | <p>Use <code>select_dtypes</code> to select numeric only columns:</p>
<pre><code>subset = bad_models.select_dtypes('number')
bad_models[subset.columns] = subset.div(subset.sum(axis=1), axis=0)
print(bad_models)
# Output
system Fluency Terminology No-error Accuracy Locale convention Other
19 hyp.metricsystem2 0.211832 0.21374 0.145418 0.193676 0 0.172952
18 hyp.metricsystem1 0.185115 0.167939 0.160691 0.166008 0 0.178153
22 hyp.metricsystem5 0.204198 0.083969 0.163347 0.167984 0 0.16515
17 hyp.eTranslation 0.093511 0.229008 0.173971 0.158103 0 0.185956
20 hyp.metricsystem3 0.164122 0.175573 0.174635 0.175889 0 0.153446
21 hyp.metricsystem4 0.141221 0.129771 0.181939 0.13834 0 0.144343
</code></pre> | python|pandas|dataframe | 0 |
753 | 51,731,389 | Issue for scraping website data for every webpage automatically and save it in csv by using beautiful Soup, pandas and request | <p>The code can't scrape the web data page by page successfully, and the CSV output doesn't match the web data records. I want the code to run through all web pages automatically. Right now, it only scrapes the first page. How can it move on to the second and third pages by itself? Secondly, the 'hospital_name', 'name' and 'license_type' columns are empty in the CSV; their values all show up at the end of the file.</p>
<pre><code>import requests
from bs4 import BeautifulSoup as soup
import pandas as pd
url = "https://www.abvma.ca/client/roster/clientRosterView.html?
clientRosterId=168"
url_page_2 = url + '&page=' + str(2)
def get_data_from_url(url):
#output the data
data = requests.get(url)
page_data = soup(data.text,'html.parser')
AB_data = page_data.find_all('div',{"class":"col-md-4 roster_tbl"})
#create a table
#for each in AB_data:
#print (each.text)
df=pd.DataFrame(AB_data)
df.head()
df.drop([0,1,2,9,3,4,5,6,7,8,10,12],axis=1, inplace=True)
for each in AB_data:
hospital = each.find('a').text
name = each.find('strong').text
license_type=each.find('font').text
#print(hospital)
#df['hospital_name']= hospital
df=df.append(pd.DataFrame({'hospital_name':hospital,
'name':name,'license_type':license_type},index=[0]), sort=False)
pd.set_option('display.max_columns',None)
print (df)
df.to_csv('AB_Vets_2018.csv',index=False)
</code></pre> | <h3>* Python 3 *</h3>
<pre><code>import csv
import requests
from bs4 import BeautifulSoup
FIELDNAMES = (
'first_name',
'last_name',
'license_type',
'location',
'reg_num'
)
def get_page(page_num):
base_url = "https://www.abvma.ca/client/roster/clientRosterView.html"
params = {
'clientRosterId': 168,
'page': page_num
}
r = requests.get(base_url, params=params)
r.raise_for_status()
return r.text
def parse_page(page_html):
result = []
soup = BeautifulSoup(page_html, 'lxml')
for vet in soup.find_all('div', class_='col-md-4 roster_tbl'):
        # each roster entry yields: "Last, First", an optional location, the title,
        # the licence type and the registration number, in that order
        name, *location, title, licence_type, reg_num = vet.stripped_strings
        last_name, first_name = name.split(', ', maxsplit=1)
result.append({
'first_name': first_name,
'last_name': last_name,
'license_type': licence_type,
'location': '' if not location else location[0],
'reg_num': int(reg_num.split()[-1])
})
return result
if __name__ == '__main__':
result = []
for page_num in range(1, 35):
page_html = get_page(page_num)
parsed_page = parse_page(page_html)
result.extend(parsed_page)
with open('output.csv', 'w') as f:
writer = csv.DictWriter(f, fieldnames=FIELDNAMES)
writer.writeheader()
writer.writerows(result)
</code></pre>
<p><a href="https://bpaste.net/show/54c9b59721f3" rel="nofollow noreferrer">Output CSV</a></p> | python|pandas|csv|beautifulsoup|python-requests | 0 |
754 | 51,790,793 | Pandas Resample Upsample last date / edge of data | <p>I'm trying to upsample weekly data to daily data, however, I'm having difficulty upsampling the last edge. How can I go about this?</p>
<pre><code>import pandas as pd
import datetime
df = pd.DataFrame({'wk start': ['2018-08-12', '2018-08-12', '2018-08-19'],
'car': [ 'tesla model 3', 'tesla model x', 'tesla model 3'],
'sales':[38000,98000, 40000]})
df['wk start'] = df['wk start'].apply(lambda x: datetime.datetime.strptime(x, '%Y-%m-%d'))
df.set_index('wk start').groupby('car').resample('D').pad()
</code></pre>
<p>This returns:</p>
<pre><code> car sales
car wk start
tesla model 3 2018-08-12 tesla model 3 38000
2018-08-13 tesla model 3 38000
2018-08-14 tesla model 3 38000
2018-08-15 tesla model 3 38000
2018-08-16 tesla model 3 38000
2018-08-17 tesla model 3 38000
2018-08-18 tesla model 3 38000
2018-08-19 tesla model 3 40000
tesla model x 2018-08-12 tesla model x 98000
</code></pre>
<p>My desired output is:</p>
<pre><code> car sales
car wk start
tesla model 3 2018-08-12 tesla model 3 38000
2018-08-13 tesla model 3 38000
2018-08-14 tesla model 3 38000
2018-08-15 tesla model 3 38000
2018-08-16 tesla model 3 38000
2018-08-17 tesla model 3 38000
2018-08-18 tesla model 3 38000
2018-08-19 tesla model 3 40000
2018-08-20 tesla model 3 40000
2018-08-21 tesla model 3 40000
2018-08-22 tesla model 3 40000
2018-08-23 tesla model 3 40000
2018-08-24 tesla model 3 40000
2018-08-25 tesla model 3 40000
tesla model x 2018-08-12 tesla model x 98000
2018-08-13 tesla model x 98000
2018-08-14 tesla model x 98000
2018-08-15 tesla model x 98000
2018-08-16 tesla model x 98000
2018-08-17 tesla model x 98000
2018-08-18 tesla model x 98000
</code></pre>
<p>I looked at <a href="https://stackoverflow.com/questions/41696355/resample-upsample-period-index-and-using-both-extreme-time-edges-of-the-data">this</a>, but they're using periods and I'm looking at datetimes. Thanks in advance!</p> | <p>Yes, you are right: the last edge data are excluded. The solution is to add them to the input <code>DataFrame</code>. My solution creates a helper <code>DataFrame</code> using <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.drop_duplicates.html" rel="nofollow noreferrer"><code>drop_duplicates</code></a>, adds <code>6</code> days and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html" rel="nofollow noreferrer"><code>concat</code></a>'s it to the original <code>df</code> before applying your solution:</p>
<pre><code>df1 = df.sort_values('wk start').drop_duplicates('car', keep='last').copy()
df1['wk start'] = df1['wk start'] + pd.Timedelta(6, unit='d')
df = pd.concat([df, df1], ignore_index=True)
df = df.set_index('wk start').groupby('car').resample('D').pad()
print (df)
car sales
car wk start
tesla model 3 2018-08-12 tesla model 3 38000
2018-08-13 tesla model 3 38000
2018-08-14 tesla model 3 38000
2018-08-15 tesla model 3 38000
2018-08-16 tesla model 3 38000
2018-08-17 tesla model 3 38000
2018-08-18 tesla model 3 38000
2018-08-19 tesla model 3 40000
2018-08-20 tesla model 3 40000
2018-08-21 tesla model 3 40000
2018-08-22 tesla model 3 40000
2018-08-23 tesla model 3 40000
2018-08-24 tesla model 3 40000
2018-08-25 tesla model 3 40000
tesla model x 2018-08-12 tesla model x 98000
2018-08-13 tesla model x 98000
2018-08-14 tesla model x 98000
2018-08-15 tesla model x 98000
2018-08-16 tesla model x 98000
2018-08-17 tesla model x 98000
2018-08-18 tesla model x 98000
</code></pre> | python|python-3.x|pandas|datetime|reindex | 2 |
755 | 36,045,510 | Matrix multiplication with iterator dependency - NumPy | <p>Sometime back <a href="https://stackoverflow.com/questions/36042556/numpy-multiplication-anything-more-than-tensordot"><code>this question</code></a> (now deleted but 10K+ rep users can still view it) was posted. It looked interesting to me and I learnt something new there while trying to solve it and I thought that's worth sharing. I would like to post those ideas/solutions and would love to see people post other possible ways to solve it. I am posting the gist of the question next.</p>
<p>So, we have two NumPy ndarrays <code>a</code> and <code>b</code> of shapes :</p>
<pre><code>a : (m,n,N)
b : (n,m,N)
</code></pre>
<p>Let's assume we are dealing with cases where <code>m</code>,<code>n</code> & <code>N</code> are comparable.</p>
<p>The problem is to solve the following multiplication and summation with focus on performance :</p>
<pre><code>def all_loopy(a,b):
P,Q,N = a.shape
d = np.zeros(N)
for i in range(N):
for j in range(i):
for k in range(P):
for n in range(Q):
d[i] += a[k,n,i] * b[n,k,j]
return d
</code></pre> | <p>I learnt a few things along the way while trying to find vectorized and faster ways to solve it.</p>
<p>1) First off, there is a dependency of iterators at <code>"for j in range(i)"</code>. From my previous experience, especially with trying to solve such problems on <code>MATLAB</code>, it appeared that such dependency could be taken care of with a <a href="http://mathworld.wolfram.com/LowerTriangularMatrix.html" rel="nofollow"><code>lower triangular matrix</code></a>, so <a href="http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.tril.html" rel="nofollow"><code>np.tril</code></a> should work there. Thus, a fully vectorized solution and not so memory efficient solution (as it creates an intermediate <code>(N,N)</code> shaped array before finally reducing to <code>(N,)</code> shaped array) would be -</p>
<pre><code>def fully_vectorized(a,b):
return np.tril(np.einsum('ijk,jil->kl',a,b),-1).sum(1)
</code></pre>
<p>2) Next trick/idea was to keep one loop for the iterator <code>i</code> in <code>for i in range(N)</code>, but insert that dependency with indexing and using <a href="http://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.einsum.html" rel="nofollow"><code>np.einsum</code></a> to perform all those multiplications and summations. The advantage would be memory-efficiency. The implementation would look like this -</p>
<pre><code>def einsum_oneloop(a,b):
d = np.zeros(N)
for i in range(N):
d[i] = np.einsum('ij,jik->',a[:,:,i],b[:,:,np.arange(i)])
return d
</code></pre>
<p>There are two more obvious ways to solve it. So, if we start working from the original <code>all_loopy</code> solution, one could keep the outer two loops and use <code>np.einsum</code> or <a href="http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.tensordot.html" rel="nofollow"><code>np.tensordot</code></a> to perform those operations and thus remove the inner two loops, like so -</p>
<pre><code>def tensordot_twoloop(a,b):
d = np.zeros(N)
for i in range(N):
for j in range(i):
d[i] += np.tensordot(a[:,:,i],b[:,:,j], axes=([1,0],[0,1]))
return d
def einsum_twoloop(a,b):
d = np.zeros(N)
for i in range(N):
for j in range(i):
d[i] += np.einsum('ij,ji->',a[:,:,i],b[:,:,j])
return d
</code></pre>
<p><strong>Runtime test</strong></p>
<p>Let's compare all the five approaches posted thus far to solve the problem, including the one posted in the question.</p>
<p>Case #1 :</p>
<pre><code>In [26]: # Input arrays with random elements
...: m,n,N = 20,20,20
...: a = np.random.rand(m,n,N)
...: b = np.random.rand(n,m,N)
...:
In [27]: %timeit all_loopy(a,b)
...: %timeit tensordot_twoloop(a,b)
...: %timeit einsum_twoloop(a,b)
...: %timeit einsum_oneloop(a,b)
...: %timeit fully_vectorized(a,b)
...:
10 loops, best of 3: 79.6 ms per loop
100 loops, best of 3: 4.97 ms per loop
1000 loops, best of 3: 1.66 ms per loop
1000 loops, best of 3: 585 µs per loop
1000 loops, best of 3: 684 µs per loop
</code></pre>
<p>Case #2 :</p>
<pre><code>In [28]: # Input arrays with random elements
...: m,n,N = 50,50,50
...: a = np.random.rand(m,n,N)
...: b = np.random.rand(n,m,N)
...:
In [29]: %timeit all_loopy(a,b)
...: %timeit tensordot_twoloop(a,b)
...: %timeit einsum_twoloop(a,b)
...: %timeit einsum_oneloop(a,b)
...: %timeit fully_vectorized(a,b)
...:
1 loops, best of 3: 3.1 s per loop
10 loops, best of 3: 54.1 ms per loop
10 loops, best of 3: 26.2 ms per loop
10 loops, best of 3: 27 ms per loop
10 loops, best of 3: 23.3 ms per loop
</code></pre>
<p>Case #3 (Leaving out all_loopy for being very slow) :</p>
<pre><code>In [30]: # Input arrays with random elements
...: m,n,N = 100,100,100
...: a = np.random.rand(m,n,N)
...: b = np.random.rand(n,m,N)
...:
In [31]: %timeit tensordot_twoloop(a,b)
...: %timeit einsum_twoloop(a,b)
...: %timeit einsum_oneloop(a,b)
...: %timeit fully_vectorized(a,b)
...:
1 loops, best of 3: 1.08 s per loop
1 loops, best of 3: 744 ms per loop
1 loops, best of 3: 568 ms per loop
1 loops, best of 3: 866 ms per loop
</code></pre>
<p>Going by the numbers, <code>einsum_oneloop</code> looks pretty good to me, whereas <code>fully_vectorized</code> could be used when dealing with small to decent sized arrays!</p> | python|arrays|performance|numpy|multiplication | 3 |
756 | 36,015,685 | Python Pandas co-occurrence after groupby | <p>I would like to compute the co-occurrence percentages after grouping. I am unable to determine the best method for doing so. I can think of ways to brute-force the answers, but this means lots of hard-coded calculations that may break as more source data is added. There must be a more elegant method, but I don't see it. I appreciate any suggestions.</p>
<p>(perhaps a little similar to <a href="https://stackoverflow.com/q/19804241/893518">Python Pandas check if a value occurs more then once in the same day</a>)</p>
<p>Goal: Table of co-occurrence percentages for a data column after grouping.
For example: When A occurred, B was found with A 45% of the time in January. When A occurred, C was found with A 21% of the time for week 6.</p>
<p>Sample Data (df):</p>
<pre><code>Date ID Region Event
1/01/2016 1001 S C
1/01/2016 1001 S D
1/01/2016 1001 N E
1/01/2016 1002 E D
1/02/2016 1003 E A
1/04/2016 1005 N B
1/04/2016 1005 N B
1/04/2016 1005 N B
1/04/2016 1006 N A
1/04/2016 1006 N F
2/12/2016 1008 E C
2/12/2016 1008 E B
</code></pre>
<p>To calculate the percentages, I need to find Events that happen in with the same ID. So, for the whole dataset C when B is 50%, B isolated is 50% and all others are 0%. But, if i groupby Month, then B isolated is 100% for Jan, and C when B is 100% for Feb.</p>
<p>Currently, I have code using .isin and .drop_duplicates to find and reduce the lists:</p>
<pre><code>b_ids = df[df.Event == 'B'].ID.drop_duplicates()
x = len(b_ids)
c_when_b = df[(df.ID.isin(b_ids)) & (df.Event == 'C')].ID.drop_duplicates()
y = len(c_when_b)
pct_cb = float(x)/y
</code></pre>
<p>Problems:</p>
<ul>
<li>How can this be extended to all binary combinations of Events (the real data has 25 events)</li>
<li>How do I modify this for easy grouping by date (week, month, quarter, etc.)?</li>
<li>How can the Region also be a grouping?</li>
<li>How can it easily be extended to multiple criteria ( (A | B) & (C | D) )?</li>
<li>Is there something easy that I'm completely missing?
Please let me know if this is unclear. Thanks in advance.</li>
</ul>
<p>EDIT:
Expected output would be a multiple column series for each event for a given time grouping for plotting (ignore these actual numbers):</p>
<pre><code>EVENT A
A B C ...
1 96.19 1.23 2.22
2 96.23 1.56 1.12
3 95.24 2.58 3.02
4 78.98 20.31 1.11
... .... ... ...
EVENT B
A B C ...
1 96.19 1.23 3.33
2 96.23 1.56 1.08
3 95.24 2.58 1.78
4 78.98 20.31 5.12
... .... ... ...
</code></pre> | <p>I think you want crosstabs:</p>
<p><a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.crosstab.html" rel="nofollow">http://pandas.pydata.org/pandas-docs/stable/generated/pandas.crosstab.html</a></p>
<p>This will give you just the raw frequencies. You can then divide each cell by the total number of occurrences to get joint probabilities.</p>
<p>EDIT: I'm reading your question more thoroughly, and I think you're going to need to do a lot of data wrangling beyond just tossing pd.crosstabs at your original dataset. For example, you'll probably want to create a new column df['Week'], which is just a value 1-#ofWeeks based on df['Date'].</p>
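<p>A rough sketch of what that could look like (column names taken from your sample data; <code>isocalendar().week</code> needs a recent pandas, older versions have <code>dt.week</code>):</p>
<pre><code>df['Date'] = pd.to_datetime(df['Date'])
df['Week'] = df['Date'].dt.isocalendar().week   # or df['Date'].dt.month for monthly grouping

# raw co-occurrence counts of each Event per (Week, ID)
counts = pd.crosstab([df['Week'], df['ID']], df['Event'])

# divide each cell by the total number of occurrences for joint probabilities
probs = counts / counts.values.sum()
</code></pre>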
<p>But this question is a little old, so maybe you already figured this out.</p> | python|pandas|find-occurrences|bigdata | 0 |
757 | 35,943,101 | How to delete every nth row of an array if one contains a zero? | <p>I have an array containing data for three different indicators (X-Z) in five different categories (A-E).
 Now I want to check the dataset for zeros. If a row contains a 0, I want to delete all rows of that indicator type.</p>
<p>In my minimum example it should find the zero in one of the Y rows and consequently delete all Y rows.</p>
<pre><code>AA =(['0','A','B','C','D','E'],
['X','2','3','3','3','4'],
['Y','3','4','9','7','3'],
['Z','3','4','6','3','4'],
['X','2','3','3','3','4'],
['Y','3','4','8','7','0'],
['Z','3','4','6','3','4'],
['X','2','5','3','3','4'],
['Y','3','4','0','7','3'],
['Z','3','4','6','3','4'])
</code></pre>
<p>My code is the following:</p>
<pre><code> import numpy as np
sequence = 3 #number of columns per sequence X,Y,Z
AA = np.array(AA)
for i in range(1,AA.shape[0]):
for j in range(1,AA.shape[1]):
if j == 0.0:
for k in range(np.min((j-1)/sequence,1),AA.shape[0],sequence):
np.delete(AA, k, 0)
</code></pre>
<p>and should give me:</p>
<pre><code>AA =(['0','A','B','C','D','E'],
['X','2','3','3','3','4'],
['Z','3','4','6','3','4'],
['X','2','3','3','3','4'],
['Z','3','4','6','3','4'],
['X','2','5','3','3','4'],
['Z','3','4','6','3','4'])
</code></pre>
<p>But somehow my code does not delete anything. So I guess I have a problem with the delete function, but I can't figure out what exactly the problem is.</p>
<p>EDIT:
In my real data the indicators (X-Z) don't have all exactly the same name but rather 'asdf - X' or 'qwer - Y - asdf'. So always the label part after the first '-' separator is identical.</p>
<p>So I cannot use a set() function on them but rather have to select the rows to delete by the distances from the row where the 0 was detected.</p> | <p>I would do it in two passes. It is a lot cleaner, and it might even be faster under some circumstances. Here's an implementation without numpy; feel free to convert it to use <code>array()</code>.</p>
<pre><code>AA =(['0','A','B','C','D','E'],
['X','2','3','3','3','4'],
['Y','3','4','9','7','3'],
['Z','3','4','6','3','4'],
['X','2','3','3','3','4'],
['Y','3','4','8','7','0'],
['Z','3','4','6','3','4'],
['X','2','5','3','3','4'],
['Y','3','4','0','7','3'],
['Z','3','4','6','3','4'])
todrop = set(row[0] for row in AA[1:] if '0' in row)
filtered = list(row for row in AA[1:] if row[0] not in todrop)
</code></pre>
<p>Since <code>row[0]</code> does not contain the exact indicator label, write a simple function that will extract the label and use that instead of the entire <code>row[0]</code>. Details depend on what your data <em>actually</em> looks like.</p>
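<p>For example, if the indicator label is always the text after the first '-' separator (an assumption based on the edit in your question), that helper could look like:</p>
<pre><code>def label(row):
    # 'asdf - X' -> 'X'; adjust the split if your real labels differ
    return row[0].split('-', 1)[-1].strip()

todrop = set(label(row) for row in AA[1:] if '0' in row)
filtered = [row for row in AA[1:] if label(row) not in todrop]
</code></pre>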
<p><strong>Option 2:</strong> In case you <em>really</em> want to do it by counting the rows (which I don't recommend): Save the row numbers modulo 3, instead of the row ID. It's about the same amount of work:</p>
<pre><code>relabeled = list((n % 3, row) for n, row in enumerate(AA[1:]))
todrop = set(n for n, row in relabeled if '0' in row) # Will save {1} for Y
filtered = list(row for n, row in relabeled if n not in todrop)
</code></pre> | python|arrays|numpy | 4 |
758 | 37,546,491 | How does a neural network work with correlated image data | <p>I am new to TensorFlow and deep learning. I am trying to create a fully connected neural network for image processing. I am somewhat confused.</p>
<p>We have an image, say 28x28 pixels. This will have 784 inputs to the NN. For non-correlated inputs, this is fine, but image pixels are generally correlated. For instance, consider a picture of a cow's eye. How can a neural network understand this when we have all pixels lined up in an array for a fully-connected network. How does it determine the correlation?</p> | <p>Please research some tutorials on CNN (Convolutional Neural Network); <a href="http://deeplearning.net/tutorial/lenet.html" rel="nofollow">here</a> is a starting point for you. A fully connected layer of a NN surrenders <em>all</em> of the correlation information it might have had with the input. Structurally, it implements the principle that the inputs are statistically independent.</p>
<p>Alternately, a convolution layer depends upon the physical organization of the inputs (such as pixel adjacency), using that to find simple combinations (convolutions) of features from one layer to another.</p>
<p>Bottom line: your NN doesn't find the correlation: the topology is wrong, and cannot do the job you want.</p>
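<p>For a rough idea of a topology that does exploit pixel adjacency, here is a minimal sketch using the TensorFlow API (all names are mine, not from your code):</p>
<pre><code>import tensorflow as tf

# keep the 28x28 image as a 2-D grid (plus batch and channel dims), not a flat 784-vector
x = tf.placeholder(tf.float32, [None, 28, 28, 1])

# 5x5 filters slide over neighbouring pixels, so spatial correlation is used, not discarded
w = tf.Variable(tf.truncated_normal([5, 5, 1, 32], stddev=0.1))
b = tf.Variable(tf.zeros([32]))

conv = tf.nn.relu(tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding='SAME') + b)
pool = tf.nn.max_pool(conv, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
</code></pre>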
<hr>
<p>Also, please note that a layered network consisting of fully-connected neurons with linear weight combinations, is <em>not</em> <strong>deep learning</strong>. Deep learning has at least one hidden layer, a topology which fosters "understanding" of intermediate structures. A purely linear, fully-connected layering provides no such hidden layers. Even if you program hidden layers, the outputs remain a simple linear combination of the inputs.</p>
<p>Deep learning requires some other discrimination, such as convolutions, pooling, rectification, or other non-linear combinations.</p> | neural-network|tensorflow|deep-learning | 0 |
759 | 41,851,044 | Python Median Filter for 1D numpy array | <p>I have a <code>numpy.array</code> with a dimension <code>dim_array</code>. I'm looking forward to obtain a median filter like <code>scipy.signal.medfilt(data, window_len)</code>. </p>
<p>This in fact doesn't work with <code>numpy.array</code> may be because the dimension is <code>(dim_array, 1)</code> and not <code>(dim_array, )</code>.</p>
<p>How to obtain such filter?</p>
<p>Next, another question, how can I obtain other filter, i.e., min, max, mean?</p> | <p>Based on <a href="https://stackoverflow.com/a/40085052/3293881"><code>this post</code></a>, we could create sliding windows to get a <code>2D</code> array of such windows being set as rows in it. These windows would merely be views into the <code>data</code> array, so no memory consumption and thus would be pretty efficient. Then, we would simply use those <code>ufuncs</code> along each row <code>axis=1</code>.</p>
<p>Thus, for example, a sliding <code>median</code> could be computed like so -</p>
<pre><code>np.median(strided_app(data, window_len,1),axis=1)
</code></pre>
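<p>For reference, the <code>strided_app</code> helper from the linked post is along these lines (reproduced here as a sketch; it expects a 1D array, so flatten a <code>(dim_array, 1)</code> input with <code>ravel()</code> first):</p>
<pre><code>def strided_app(a, L, S):  # Window len = L, Stride len/stepsize = S
    nrows = ((a.size - L) // S) + 1
    n = a.strides[0]
    return np.lib.stride_tricks.as_strided(a, shape=(nrows, L), strides=(S * n, n))
</code></pre>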
<p>For the other <code>ufuncs</code>, just use the respective <code>ufunc</code> names there : <code>np.min</code>, <code>np.max</code> & <code>np.mean</code>. Please note this is meant to give a generic solution to use <code>ufunc</code> supported functionality. </p>
<p>For the best performance, one must still look into specific functions that are built for those purposes. For the four requested functions, we have the builtins, like so -</p>
<p>Median : <a href="https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.signal.medfilt.html" rel="noreferrer"><code>scipy.signal.medfilt</code></a>.</p>
<p>Max : <a href="https://docs.scipy.org/doc/scipy-0.15.1/reference/generated/scipy.ndimage.filters.maximum_filter1d.html" rel="noreferrer"><code>scipy.ndimage.filters.maximum_filter1d</code></a>.</p>
<p>Min : <a href="https://docs.scipy.org/doc/scipy-0.15.1/reference/generated/scipy.ndimage.filters.minimum_filter1d.html" rel="noreferrer"><code>scipy.ndimage.filters.minimum_filter1d</code></a>.</p>
<p>Mean : <a href="https://docs.scipy.org/doc/scipy-0.15.1/reference/generated/scipy.ndimage.filters.uniform_filter1d.html" rel="noreferrer"><code>scipy.ndimage.filters.uniform_filter1d</code></a></p> | python|arrays|numpy|filtering|median | 9 |
760 | 41,970,426 | m Smallest values from upper triangular matrix with their indices as a list of tuples | <p>I have a np.ndarray as follows: </p>
<pre><code>[[ inf 1. 3. 2. 1.]
[ inf inf 2. 3. 2.]
[ inf inf inf 5. 4.]
[ inf inf inf inf 1.]
[ inf inf inf inf inf]]
</code></pre>
<p>Is there a way to get the indices and values of the m smallest items in that nd array? So, if I wanted the 4 smallest it would be</p>
<pre><code>[(0,1,1),(0,4,1),(3,4,1),(0,3,2)]
</code></pre>
<p>where (row,col,val) is the notation above. </p>
<p>If there are multiple values then one of them is just randomly chosen. For instance, there were 3 ones and then next smallest is a value 2 but (0,3,2), (1,2,2),(1,4,2) were all possible choices. </p>
<p>Essentially, Can I extract the k smallest values in that format from the upper triangular matrix efficiently (the matrix is much larger than the example above). I tried flattening it, using square form, nsmallest, but am having trouble getting the indices and values to align. Thanks!</p> | <p>For an <code>Inf</code> filled array -</p>
<pre><code>r,c = np.unravel_index(a.ravel().argsort()[:4], a.shape)
out = zip(r,c,a[r,c])
</code></pre>
<p>For performance, consider using <code>np.argpartition</code>. So, replace <code>a.ravel().argsort()[:4]</code> with <code>np.argpartition(a.ravel(), range(4))[:4]</code>. </p>
<p>Sample run -</p>
<pre><code>In [285]: a
Out[285]:
array([[ inf, 1., 3., 2., 1.],
[ inf, inf, 2., 3., 2.],
[ inf, inf, inf, 5., 4.],
[ inf, inf, inf, inf, 1.],
[ inf, inf, inf, inf, inf]])
In [286]: out
Out[286]: [(0, 1, 1.0), (0, 4, 1.0), (3, 4, 1.0), (0, 3, 2.0)]
</code></pre>
<hr>
<p>For a generic case -</p>
<pre><code>R,C = np.triu_indices(a.shape[1],1)
idx = a[R,C].argsort()[:4]
r,c = R[idx], C[idx]
out = zip(r,c,a[r,c])
</code></pre>
<p>Sample run -</p>
<pre><code>In [351]: a
Out[351]:
array([[ 68., 67., 81., 23., 16.],
[ 84., 83., 20., 66., 48.],
[ 58., 72., 98., 63., 30.],
[ 61., 40., 1., 86., 22.],
[ 29., 95., 38., 22., 95.]])
In [352]: out
Out[352]: [(0, 4, 16.0), (1, 2, 20.0), (3, 4, 22.0), (0, 3, 23.0)]
</code></pre>
<p>For performance, consider using <code>np.argpartition</code>. So, replace <code>a[R,C].argsort()[:4]</code> with <code>np.argpartition(a[R,C], range(4))[:4]</code>.</p> | python|arrays|pandas|numpy|min | 2 |
761 | 37,852,924 | How to imagine convolution/pooling on images with 3 color channels | <p>I am a beginner and I understood the MNIST tutorials. Now I want to get something going on the SVHN dataset. In contrast to MNIST, it comes with 3 color channels. I am having a hard time visualizing how convolution and pooling work with the additional dimensionality of the color channels.</p>
<p>Does anyone have a good way to think about it, or a link for me?</p>
<p>I appreciate all input :)</p> | <p>This is very simple, the difference only lies in the <strong>first convolution</strong>:</p>
<ul>
<li>in grey images, the input shape is <code>[batch_size, W, H, 1]</code> so your first convolution (let's say 3x3) has a filter of shape <code>[3, 3, 1, 32]</code> if you want to have 32 dimensions after.</li>
<li>in RGB images, the input shape is <code>[batch_size, W, H, 3]</code> so your first convolution (still 3x3) has a filter of shape <code>[3, 3, 3, 32]</code>.</li>
</ul>
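<p>A small sketch of the two filter declarations in TensorFlow (variable names are mine; SVHN images are 32x32):</p>
<pre><code>import tensorflow as tf

x_grey = tf.placeholder(tf.float32, [None, 32, 32, 1])               # grey input, 1 channel
w_grey = tf.Variable(tf.truncated_normal([3, 3, 1, 32], stddev=0.1))

x_rgb = tf.placeholder(tf.float32, [None, 32, 32, 3])                # RGB input, 3 channels
w_rgb = tf.Variable(tf.truncated_normal([3, 3, 3, 32], stddev=0.1))  # only the filter's 3rd dim changes

out = tf.nn.conv2d(x_rgb, w_rgb, strides=[1, 1, 1, 1], padding='SAME')  # -> [batch, 32, 32, 32]
</code></pre>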
<p>In both cases, the output shape (with stride 1) is <code>[batch_size, W, H, 32]</code></p> | tensorflow|convolution|pooling | 4 |
762 | 47,892,097 | Broadcast rotation matrices multiplication | <p>How to do the line marked with <code># <----</code> in a more direct way?</p>
<p>In the program, each row of <code>x</code> is coordinates of a point, <code>rot_mat[0]</code> and <code>rot_mat[1]</code> are two rotation matrices. The program rotates <code>x</code> by each rotation matrix.</p>
<p>Changing the order of multiplication between each rotation matrix and the coordinates is fine, if it makes things simpler. I want to have each row of <code>x</code> or the result representing coordinate of a point.</p>
<p>The result should match the checks.</p>
<p>Program:</p>
<pre><code># Rotation of coordinates of 4 points by
# each of the 2 rotation matrices.
import numpy as np
from scipy.stats import special_ortho_group
rot_mats = special_ortho_group.rvs(dim=3, size=2) # 2 x 3 x 3
x = np.arange(12).reshape(4, 3)
result = np.dot(rot_mats, x.T).transpose((0, 2, 1)) # <----
print("---- result ----")
print(result)
print("---- check ----")
print(np.dot(x, rot_mats[0].T))
print(np.dot(x, rot_mats[1].T))
</code></pre>
<p>Result:</p>
<pre><code>---- result ----
[[[ 0.20382264 1.15744672 1.90230739]
[ -2.68064533 3.71537598 5.38610452]
[ -5.56511329 6.27330525 8.86990165]
[ -8.44958126 8.83123451 12.35369878]]
[[ 1.86544623 0.53905202 -1.10884323]
[ 5.59236544 -1.62845022 -4.00918928]
[ 9.31928465 -3.79595246 -6.90953533]
[ 13.04620386 -5.9634547 -9.80988139]]]
---- check ----
[[ 0.20382264 1.15744672 1.90230739]
[ -2.68064533 3.71537598 5.38610452]
[ -5.56511329 6.27330525 8.86990165]
[ -8.44958126 8.83123451 12.35369878]]
[[ 1.86544623 0.53905202 -1.10884323]
[ 5.59236544 -1.62845022 -4.00918928]
[ 9.31928465 -3.79595246 -6.90953533]
[ 13.04620386 -5.9634547 -9.80988139]]
</code></pre> | <p>Use <a href="https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.tensordot.html" rel="nofollow noreferrer"><code>np.tensordot</code></a> for multiplication involving such <code>tensors</code> -</p>
<pre><code>np.tensordot(rot_mats, x, axes=((2),(1))).swapaxes(1,2)
</code></pre>
<p>Here's some timings to convince ourselves why <code>tensordot</code> works better with <code>tensors</code> -</p>
<pre><code>In [163]: rot_mats = np.random.rand(20,30,30)
...: x = np.random.rand(40,30)
# With numpy.dot
In [164]: %timeit np.dot(rot_mats, x.T).transpose((0, 2, 1))
1000 loops, best of 3: 670 µs per loop
# With numpy.tensordot
In [165]: %timeit np.tensordot(rot_mats, x, axes=((2),(1))).swapaxes(1,2)
10000 loops, best of 3: 75.7 µs per loop
In [166]: rot_mats = np.random.rand(200,300,300)
...: x = np.random.rand(400,300)
# With numpy.dot
In [167]: %timeit np.dot(rot_mats, x.T).transpose((0, 2, 1))
1 loop, best of 3: 1.82 s per loop
# With numpy.tensordot
In [168]: %timeit np.tensordot(rot_mats, x, axes=((2),(1))).swapaxes(1,2)
10 loops, best of 3: 185 ms per loop
</code></pre> | python|numpy | 4 |
763 | 58,764,104 | How to upload a new file in a image-recognitionmodel | <p>I am trying to make my first image-recognition model by following a tutorial on the site:
<a href="https://towardsdatascience.com/all-the-steps-to-build-your-first-image-classifier-with-code-cf244b015799" rel="nofollow noreferrer">Tutorial Towardsdatascience.com</a></p>
<p>After building the model you should be able to load up your own image and let the model do a prediction. But at that point I get stuck. I keep getting errors and cannot figure out what I am doing wrong.</p>
<p>This is the code for the last part:</p>
<pre><code>def prepare(file):
IMG_SIZE = 224
img_array = cv2.imread(file, cv2.IMREAD_GRAYSCALE)
new_array = cv2.resize(img_array, (IMG_SIZE, IMG_SIZE))
return new_array.reshape(-1, IMG_SIZE, IMG_SIZE, 1)
image = prepare('users/docs/proj/test/test.jpg')
model = tf.keras.models.load_model("CNN.model")
prediction = model.predict([image])
prediction = list(prediction[0])
print(CATEGORIES[prediction.index(max(prediction))])
</code></pre>
<p>The <strong>error</strong> I get is:</p>
<blockquote>
<p>ValueError: Python inputs incompatible with input_signature: inputs:
(Tensor("IteratorGetNext:0", shape=(None, 224,224,3),dtype=uint8))
input_signature: (TensorSpec(shape=None,None,None,3),
dtype=tf.float32, name=None))</p>
</blockquote>
<p>I tested it with different jpg-files but keep getting the same error.</p>
<p>Full-log:</p>
<pre><code>---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-25-ac0118ff787e> in <module>
12 model = tf.keras.models.load_model('CNN.model')
13
---> 14 prediction = model.predict([image])
15 prediction = list(prediction[0])
16 print(categories[prediction.index(max(prediction))])
/anaconda3/envs/Tensor_env/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py in predict(self, x, batch_size, verbose, steps, callbacks, max_queue_size, workers, use_multiprocessing)
907 max_queue_size=max_queue_size,
908 workers=workers,
--> 909 use_multiprocessing=use_multiprocessing)
910
911 def reset_metrics(self):
/anaconda3/envs/Tensor_env/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2.py in predict(self, model, x, batch_size, verbose, steps, callbacks, **kwargs)
460 return self._model_iteration(
461 model, ModeKeys.PREDICT, x=x, batch_size=batch_size, verbose=verbose,
--> 462 steps=steps, callbacks=callbacks, **kwargs)
463
464
/anaconda3/envs/Tensor_env/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2.py in _model_iteration(self, model, mode, x, y, batch_size, verbose, sample_weight, steps, callbacks, **kwargs)
442 mode=mode,
443 training_context=training_context,
--> 444 total_epochs=1)
445 cbks.make_logs(model, epoch_logs, result, mode)
446
/anaconda3/envs/Tensor_env/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2.py in run_one_epoch(model, iterator, execution_function, dataset_size, batch_size, strategy, steps_per_epoch, num_samples, mode, training_context, total_epochs)
121 step=step, mode=mode, size=current_batch_size) as batch_logs:
122 try:
--> 123 batch_outs = execution_function(iterator)
124 except (StopIteration, errors.OutOfRangeError):
125 # TODO(kaftan): File bug about tf function and errors.OutOfRangeError?
/anaconda3/envs/Tensor_env/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2_utils.py in execution_function(input_fn)
84 # `numpy` translates Tensors to values in Eager mode.
85 return nest.map_structure(_non_none_constant_value,
---> 86 distributed_function(input_fn))
87
88 return execution_function
/anaconda3/envs/Tensor_env/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py in __call__(self, *args, **kwds)
455
456 tracing_count = self._get_tracing_count()
--> 457 result = self._call(*args, **kwds)
458 if tracing_count == self._get_tracing_count():
459 self._call_counter.called_without_tracing()
/anaconda3/envs/Tensor_env/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py in _call(self, *args, **kwds)
501 # This is the first call of __call__, so we have to initialize.
502 initializer_map = object_identity.ObjectIdentityDictionary()
--> 503 self._initialize(args, kwds, add_initializers_to=initializer_map)
504 finally:
505 # At this point we know that the initialization is complete (or less
/anaconda3/envs/Tensor_env/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py in _initialize(self, args, kwds, add_initializers_to)
406 self._concrete_stateful_fn = (
407 self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access
--> 408 *args, **kwds))
409
410 def invalid_creator_scope(*unused_args, **unused_kwds):
/anaconda3/envs/Tensor_env/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs)
1846 if self.input_signature:
1847 args, kwargs = None, None
-> 1848 graph_function, _, _ = self._maybe_define_function(args, kwargs)
1849 return graph_function
1850
/anaconda3/envs/Tensor_env/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py in _maybe_define_function(self, args, kwargs)
2148 graph_function = self._function_cache.primary.get(cache_key, None)
2149 if graph_function is None:
-> 2150 graph_function = self._create_graph_function(args, kwargs)
2151 self._function_cache.primary[cache_key] = graph_function
2152 return graph_function, args, kwargs
/anaconda3/envs/Tensor_env/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
2039 arg_names=arg_names,
2040 override_flat_arg_shapes=override_flat_arg_shapes,
-> 2041 capture_by_value=self._capture_by_value),
2042 self._function_attributes,
2043 # Tell the ConcreteFunction to clean up its graph once it goes out of
/anaconda3/envs/Tensor_env/lib/python3.7/site-packages/tensorflow_core/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes)
913 converted_func)
914
--> 915 func_outputs = python_func(*func_args, **func_kwargs)
916
917 # invariant: `func_outputs` contains only Tensors, CompositeTensors,
/anaconda3/envs/Tensor_env/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py in wrapped_fn(*args, **kwds)
356 # __wrapped__ allows AutoGraph to swap in a converted function. We give
357 # the function a weak reference to itself to avoid a reference cycle.
--> 358 return weak_wrapped_fn().__wrapped__(*args, **kwds)
359 weak_wrapped_fn = weakref.ref(wrapped_fn)
360
/anaconda3/envs/Tensor_env/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2_utils.py in distributed_function(input_iterator)
71 strategy = distribution_strategy_context.get_strategy()
72 outputs = strategy.experimental_run_v2(
---> 73 per_replica_function, args=(model, x, y, sample_weights))
74 # Out of PerReplica outputs reduce or pick values to return.
75 all_outputs = dist_utils.unwrap_output_dict(
/anaconda3/envs/Tensor_env/lib/python3.7/site-packages/tensorflow_core/python/distribute/distribute_lib.py in experimental_run_v2(self, fn, args, kwargs)
758 fn = autograph.tf_convert(fn, ag_ctx.control_status_ctx(),
759 convert_by_default=False)
--> 760 return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
761
762 def reduce(self, reduce_op, value, axis):
/anaconda3/envs/Tensor_env/lib/python3.7/site-packages/tensorflow_core/python/distribute/distribute_lib.py in call_for_each_replica(self, fn, args, kwargs)
1785 kwargs = {}
1786 with self._container_strategy().scope():
-> 1787 return self._call_for_each_replica(fn, args, kwargs)
1788
1789 def _call_for_each_replica(self, fn, args, kwargs):
/anaconda3/envs/Tensor_env/lib/python3.7/site-packages/tensorflow_core/python/distribute/distribute_lib.py in _call_for_each_replica(self, fn, args, kwargs)
2130 self._container_strategy(),
2131 replica_id_in_sync_group=constant_op.constant(0, dtypes.int32)):
-> 2132 return fn(*args, **kwargs)
2133
2134 def _reduce_to(self, reduce_op, value, destinations):
/anaconda3/envs/Tensor_env/lib/python3.7/site-packages/tensorflow_core/python/autograph/impl/api.py in wrapper(*args, **kwargs)
290 def wrapper(*args, **kwargs):
291 with ag_ctx.ControlStatusCtx(status=ag_ctx.Status.DISABLED):
--> 292 return func(*args, **kwargs)
293
294 if inspect.isfunction(func) or inspect.ismethod(func):
/anaconda3/envs/Tensor_env/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2_utils.py in _predict_on_batch(***failed resolving arguments***)
160 def _predict_on_batch(model, x, y=None, sample_weights=None):
161 del y, sample_weights
--> 162 return predict_on_batch(model, x)
163
164 func = _predict_on_batch
/anaconda3/envs/Tensor_env/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2_utils.py in predict_on_batch(model, x)
368
369 with backend.eager_learning_phase_scope(0):
--> 370 return model(inputs) # pylint: disable=not-callable
/anaconda3/envs/Tensor_env/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py in __call__(self, inputs, *args, **kwargs)
845 outputs = base_layer_utils.mark_as_return(outputs, acd)
846 else:
--> 847 outputs = call_fn(cast_inputs, *args, **kwargs)
848
849 except errors.OperatorNotAllowedInGraphError as e:
/anaconda3/envs/Tensor_env/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/sequential.py in call(self, inputs, training, mask)
254 if not self.built:
255 self._init_graph_network(self.inputs, self.outputs, name=self.name)
--> 256 return super(Sequential, self).call(inputs, training=training, mask=mask)
257
258 outputs = inputs # handle the corner case where self.layers is empty
/anaconda3/envs/Tensor_env/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/network.py in call(self, inputs, training, mask)
706 return self._run_internal_graph(
707 inputs, training=training, mask=mask,
--> 708 convert_kwargs_to_constants=base_layer_utils.call_context().saving)
709
710 def compute_output_shape(self, input_shape):
/anaconda3/envs/Tensor_env/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/network.py in _run_internal_graph(self, inputs, training, mask, convert_kwargs_to_constants)
858
859 # Compute outputs.
--> 860 output_tensors = layer(computed_tensors, **kwargs)
861
862 # Update tensor_dict.
/anaconda3/envs/Tensor_env/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py in __call__(self, inputs, *args, **kwargs)
845 outputs = base_layer_utils.mark_as_return(outputs, acd)
846 else:
--> 847 outputs = call_fn(cast_inputs, *args, **kwargs)
848
849 except errors.OperatorNotAllowedInGraphError as e:
/anaconda3/envs/Tensor_env/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/saved_model/utils.py in return_outputs_and_add_losses(*args, **kwargs)
55 inputs = args[inputs_arg_index]
56 args = args[inputs_arg_index + 1:]
---> 57 outputs, losses = fn(inputs, *args, **kwargs)
58 layer.add_loss(losses, inputs)
59 return outputs
/anaconda3/envs/Tensor_env/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py in __call__(self, *args, **kwds)
455
456 tracing_count = self._get_tracing_count()
--> 457 result = self._call(*args, **kwds)
458 if tracing_count == self._get_tracing_count():
459 self._call_counter.called_without_tracing()
/anaconda3/envs/Tensor_env/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py in _call(self, *args, **kwds)
492 # In this case we have not created variables on the first call. So we can
493 # run the first trace but we should fail if variables are created.
--> 494 results = self._stateful_fn(*args, **kwds)
495 if self._created_variables:
496 raise ValueError("Creating variables on a non-first call to a function"
/anaconda3/envs/Tensor_env/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py in __call__(self, *args, **kwargs)
1820 def __call__(self, *args, **kwargs):
1821 """Calls a graph function specialized to the inputs."""
-> 1822 graph_function, args, kwargs = self._maybe_define_function(args, kwargs)
1823 return graph_function._filtered_call(args, kwargs) # pylint: disable=protected-access
1824
/anaconda3/envs/Tensor_env/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py in _maybe_define_function(self, args, kwargs)
2105 if self.input_signature is None or args is not None or kwargs is not None:
2106 args, kwargs = self._function_spec.canonicalize_function_inputs(
-> 2107 *args, **kwargs)
2108
2109 cache_key = self._cache_key(args, kwargs)
/anaconda3/envs/Tensor_env/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py in canonicalize_function_inputs(self, *args, **kwargs)
1648 inputs,
1649 self._input_signature,
-> 1650 self._flat_input_signature)
1651 return inputs, {}
1652
/anaconda3/envs/Tensor_env/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py in _convert_inputs_to_signature(inputs, input_signature, flat_input_signature)
1714 flatten_inputs)):
1715 raise ValueError("Python inputs incompatible with input_signature:\n%s" %
-> 1716 format_error_message(inputs, input_signature))
1717
1718 if need_packing:
ValueError: Python inputs incompatible with input_signature:
inputs: (
Tensor("IteratorGetNext:0", shape=(None, 224, 224, 3), dtype=uint8))
input_signature: (
TensorSpec(shape=(None, None, None, 3), dtype=tf.float32, name=None))
[1]: https://towardsdatascience.com/all-the-steps-to-build-your-first-image-classifier-with-code-cf244b015799
</code></pre> | <p>I found the problem. The training images X were divided by 255, but the new image was not. After dividing the image by 255 as well, the problem was solved.</p> | python|tensorflow|deep-learning|classification|image-recognition | 0
764 | 70,137,431 | How to apply one label to a NumPy dimension for a Keras Neural Network? | <p>I'm currently working on a simple neural network using Keras, and I'm running into a problem with my labels. The network is making a binary choice, and as such, my labels are all 1s and 0s. My data is composed of a 3d NumPy array, basically pixel data from a bunch of images. Its shape is (560, 560, 32086). However since the first two dimensions are only pixels, I shouldn't assign a label to each one so I tried to make the label array with the shape (1, 1, 32086) so that each image only has 1 label. However when I try to compile this with the following code:</p>
<pre><code>model = Sequential(
[
Rescaling(1.0 / 255),
Conv1D(32, 3, input_shape=datax.shape, activation="relu"),
Dense(750, activation='relu'),
Dense(2, activation='sigmoid')
]
)
model.compile(optimizer=SGD(learning_rate=0.1), loss="binary_crossentropy", metrics=['accuracy'])
model1 = model.fit(x=datax, y=datay, batch_size=1, epochs=15, shuffle=True, verbose=2)
</code></pre>
<p>I get this error "ValueError: Data cardinality is ambiguous:
x sizes: 560
y sizes: 1
Make sure all arrays contain the same number of samples." Which I assume means that the labels have to be the same size as the input data, but that doesn't make sense for each pixel to have an individual label.</p>
<p>The data is collected through a for loop looping through files in a directory and reading their pixel data. I then add this to the NumPy array and add their corresponding label to a label array. Any help with this problem would be greatly appreciated.</p> | <p>The <code>ValueError</code> occurs because the first dimension is supposed to be the number of samples and needs to be the same for <code>x</code> and <code>y</code>. In your example that is not the case. You would need <code>datax</code> to have shape <code>(32086, 560, 560)</code> and <code>datay</code> should be <code>(32086,)</code>.</p>
<p>Have a look at <a href="https://www.tensorflow.org/tutorials/keras/classification" rel="nofollow noreferrer">this example</a> and note how the 60000 training images have shape <code>(60000, 28, 28)</code>.</p>
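<p>One way to get there from the arrays you describe (a sketch; it assumes the last axis of <code>datax</code> and <code>datay</code> is the sample axis, as stated in the question):</p>
<pre><code>datax = np.moveaxis(datax, -1, 0)        # (560, 560, 32086) -> (32086, 560, 560)
datax = datax[..., np.newaxis]           # optional channel axis if you switch to Conv2D: (32086, 560, 560, 1)
datay = np.asarray(datay).reshape(-1)    # (1, 1, 32086) -> (32086,)
</code></pre>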
<p>Also I suspect a couple more mistakes have sneaked into your code:</p>
<ol>
<li>Are you sure you want a <code>Conv1D</code> layer and not <code>Conv2D</code>? Maybe <a href="https://www.tensorflow.org/tutorials/images/cnn" rel="nofollow noreferrer">this example</a> would be informative.</li>
<li>Since you are using binary crossentropy loss your last layer should only have one output instead of two.</li>
</ol> | python|numpy|tensorflow|keras|neural-network | 0 |
765 | 56,287,199 | Save the current state of program and resume again from last saved point | <p>I have a script to download images from a link. Suppose the script gets terminated for some reason; I then want to save the point up to which the images have been downloaded and resume again from that last saved point.</p>
<p>So far I have written the download script and tried saving the state of the program using pickle.</p>
<pre><code>import pandas as pd
import requests as rq
import os,time,random,pickle
import csv
data=pd.read_csv("consensus_data.csv",usecols=["CaptureEventID","Species"])
z=data.loc[ data.Species.isin(['buffalo']), :]
df1=pd.DataFrame(z)
data_2=pd.read_csv("all_images.csv")
df2=pd.DataFrame(data_2)
df3=pd.merge(df1,df2,on='CaptureEventID')
p=df3.to_csv('animal_img_list.csv',index=False)
# you need to change the location below
data_final = pd.read_csv("animal_img_list.csv")
output=("/home/avnika/data_serengeti/url_op")
mylist = []
for i in range(0,100):
x = random.randint(1,10)
mylist.append(x)
print(mylist)
for y in range(len(mylist)):
d=mylist[y]
print(d)
file_name = data_final.URL_Info
print(len(file_name))
for file in file_name:
image_url='https://snapshotserengeti.s3.msi.umn.edu/'+file
f_name=os.path.split(image_url)[-1]
print(f_name)
r=rq.get(image_url)
with open(output+"/"+f_name, 'wb') as f:
f.write(r.content)
time.sleep(d)
with open("/home/avnika/data_serengeti","wb") as fp:
pickle.dump(r,fp)
with open("/home/avnika/data_serengeti","rb") as fp:
pic_obj=pickle.load(fp)
</code></pre>
<p>Suppose I have to download 4000 images from a URL. I successfully downloaded 1000 images, but my script crashed due to a network issue. I want the script, when restarted, to start downloading from image number 1001. Currently it starts fresh from image number 1 again whenever it is restarted. How can I resume my loop after loading the pickle object?</p> | <p>There might be multiple solutions to this problem, but this is the first that comes to mind and it should help you solve it.</p>
<p><strong>Approach:</strong></p>
<p>The script starts downloading from the beginning every time because it cannot remember the index up to which it had already downloaded last time.</p>
<p>To solve this, we create a text file holding an integer, initially 0, that records up to which index the files have been downloaded. When the script runs, it reads that integer to recall the position, and the value in the file is incremented by 1 every time a file is downloaded successfully.</p>
<p><strong>Code</strong></p>
<p>An Example for understanding ::</p>
<p>Please See: I have manually created a text file with '0' in it earlier.</p>
<pre><code># Opening the text file
counter = open('counter.txt',"r")
# Getting the position from where to start.Intially it's 0 later it will be updated
start = counter.read()
print("--> ",start)
counter.close()
for x in range(int(start),1000):
print("Processing Done upto : ",x)
#For every iteration we are writing it in the file with the new position
writer = open('counter.txt',"w")
writer.write(str(x))
writer.close()
</code></pre>
<p>Fixing your code :</p>
<p>Note: Create a text file manually with the name 'counter.txt' and write '0' in it.</p>
<pre><code>import pandas as pd
import requests as rq
import os,time,random,pickle
import csv
data=pd.read_csv("consensus_data.csv",usecols=["CaptureEventID","Species"])
z=data.loc[ data.Species.isin(['buffalo']), :]
df1=pd.DataFrame(z)
data_2=pd.read_csv("all_images.csv")
df2=pd.DataFrame(data_2)
df3=pd.merge(df1,df2,on='CaptureEventID')
p=df3.to_csv('animal_img_list.csv',index=False)
# you need to change the location below
data_final = pd.read_csv("animal_img_list.csv")
output=("/home/avnika/data_serengeti/url_op")
mylist = []
for i in range(0,100):
x = random.randint(1,10)
mylist.append(x)
print(mylist)
for y in range(len(mylist)):
d=mylist[y]
print(d)
    # Opening the file you manually created with '0' present in it.
    counter = open('counter.txt',"r")
    start = int(counter.read())  # cast to int so it can be incremented and used for slicing
count = start
counter.close()
file_name = data_final.URL_Info
print(len(file_name))
# The starting position from the file is used to slice the file_name from 'start' value.
for file in file_name[start:]:
image_url='https://snapshotserengeti.s3.msi.umn.edu/'+file
f_name=os.path.split(image_url)[-1]
print(f_name)
r=rq.get(image_url)
with open(output+"/"+f_name, 'wb') as f:
f.write(r.content)
# File is downloaded and now, it's time to update the counter in the text file with new position.
count+=1
writer = open('counter.txt',"w")
writer.write(str(count))
writer.close()
time.sleep(d)
</code></pre>
<p>Hope this helps :)</p> | python|python-3.x|pandas|python-requests | 1 |
766 | 56,220,609 | How do I clip error bars in a pandas plot? | <p>I have an array of averages and an array of standard deviations in a pandas dataframe that I would like to plot. The averages correspond to timings (in seconds), and cannot be negative. How do I clip the standard errors in the plot to a minimum of zero?</p>
<pre><code>import numpy as np
import pandas as pd
avg_timings = np.array([0.00039999, 0.00045002, 0.00114999, 0.00155001, 0.00170001,
0.00545 , 0.00550001, 0.0046 , 0.00545 , 0.00685 ,
0.0079 , 0.00979999, 0.0171 , 0.04305001, 0.0204 ,
0.02276359, 0.02916633, 0.06865 , 0.06749998, 0.10619999])
std_dev_timings = array([0.0005831 , 0.00049751, 0.00079214, 0.00135927, 0.00045823,
0.01185953, 0.0083934 , 0.00066328, 0.0007399 , 0.00079214,
0.00083071, 0.00107694, 0.01023177, 0.11911653, 0.00874871,
0.00299976, 0.01048584, 0.01463652, 0.00785808, 0.09579386])
time_df = pd.DataFrame({'avg_time':avg_timings, 'std_dev':std_dev_timings, 'x':np.arange(len(avg_timings))})
ax = time_df.plot(x='x', y='avg_time', yerr='std_dev', figsize=(16,8), legend=False);
ax.set_ylabel('average timings (s)')
ax.set_xlim(-1, 20)
</code></pre>
<p><a href="https://i.stack.imgur.com/7lfMq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7lfMq.png" alt="enter image description here"></a></p>
<p>I want to clip the error bars at zero (highlighted in red), so the timing can never be negative. Is there a way to achieve this?</p> | <p>Try using <code>plt.errorbar</code> and pass in <code>yerr=[y_low, y_high]</code>:</p>
<pre><code>y_errors = time_df[['avg_time', 'std_dev']].min(axis=1)
fig, ax = plt.subplots(figsize=(16,8))
plt.errorbar(x=time_df['x'],
y=time_df['avg_time'],
yerr = [y_errors, time_df['std_dev']]
);
ax.set_ylabel('average timings (s)')
ax.set_xlim(-1, 20)
plt.show()
</code></pre>
<p>Output:</p>
<p><a href="https://i.stack.imgur.com/GFd1I.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GFd1I.png" alt="enter image description here"></a></p> | python|pandas|matplotlib | 2 |
767 | 56,094,714 | How can I call a custom layer in Keras with the functional API | <p>I have written a tiny implementation of a Keras custom layer where I have literally copied the class definition from <a href="https://keras.io/layers/writing-your-own-keras-layers/" rel="nofollow noreferrer">https://keras.io/layers/writing-your-own-keras-layers/</a></p>
<p>Yet when I try to call this custom layer as I would a standard Dense layer, I get the error "AssertionError" and my pycharm throws the warning that the object is not callable</p>
<p>I am missing something basic here but I cannot figure it out</p>
<p>If I switch the line</p>
<pre><code>model_out = MyLayer(2)(model_in)
</code></pre>
<p>to</p>
<pre><code>model_out = Dense(2)(model_in)
</code></pre>
<p>it works</p>
<p>Here's the code that doesn't run:</p>
<pre><code>from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.models import Model
import numpy as np
from tensorflow.keras.layers import Layer
from tensorflow.keras import backend as K
from tensorflow.keras import optimizers
class MyLayer(Layer):
def __init__(self, output_dim, **kwargs):
self.output_dim = output_dim
super(MyLayer, self).__init__(**kwargs)
def build(self, input_shape):
# Create a trainable weight variable for this layer.
self.kernel = self.add_weight(name='kernel',
shape=(input_shape[1], self.output_dim),
initializer='uniform',
trainable=True)
super(MyLayer, self).build(input_shape) # Be sure to call this at the end
def call(self, x):
return K.dot(x, self.kernel)
def compute_output_shape(self, input_shape):
return (input_shape[0], self.output_dim)
model_in = Input([4])
model_out = MyLayer(2)(model_in)
model = Model(inputs=model_in, outputs=model_out, name='my_model')
adamOpt = optimizers.Adam(lr=1e-4)
model.compile(optimizer=adamOpt, loss='mean_squared_error')
hist = model.fit(np.ones((10, 4)), np.ones((10, 2))+1, verbose=2, epochs=100, batch_size=np.power(2,2))
</code></pre>
<p>I expect that this should compile and run, as it does if I call Dense instead of MyLayer</p>
<p>Full error</p>
<pre><code>Traceback (most recent call last):
File "C:\CDocuments\python\venv\lib\site-packages\tensorflow\python\framework\tensor_util.py", line 527, in make_tensor_proto
str_values = [compat.as_bytes(x) for x in proto_values]
File "C:\CDocuments\python\venv\lib\site-packages\tensorflow\python\framework\tensor_util.py", line 527, in <listcomp>
str_values = [compat.as_bytes(x) for x in proto_values]
File "C:\CDocuments\python\venv\lib\site-packages\tensorflow\python\util\compat.py", line 61, in as_bytes
(bytes_or_text,))
TypeError: Expected binary or unicode string, got Dimension(4)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "G:/My Drive/python/wholebrain_worm/extra_classes.py", line 31, in <module>
model_out = MyLayer(2)(model_in)
File "C:\CDocuments\python\venv\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 746, in __call__
self.build(input_shapes)
File "G:/My Drive/python/wholebrain_worm/extra_classes.py", line 20, in build
trainable=True)
File "C:\CDocuments\python\venv\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 609, in add_weight
aggregation=aggregation)
File "C:\CDocuments\python\venv\lib\site-packages\tensorflow\python\training\checkpointable\base.py", line 639, in _add_variable_with_custom_getter
**kwargs_for_getter)
File "C:\CDocuments\python\venv\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 1977, in make_variable
aggregation=aggregation)
File "C:\CDocuments\python\venv\lib\site-packages\tensorflow\python\ops\variables.py", line 183, in __call__
return cls._variable_v1_call(*args, **kwargs)
File "C:\CDocuments\python\venv\lib\site-packages\tensorflow\python\ops\variables.py", line 146, in _variable_v1_call
aggregation=aggregation)
File "C:\CDocuments\python\venv\lib\site-packages\tensorflow\python\ops\variables.py", line 125, in <lambda>
previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs)
File "C:\CDocuments\python\venv\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 2437, in default_variable_creator
import_scope=import_scope)
File "C:\CDocuments\python\venv\lib\site-packages\tensorflow\python\ops\variables.py", line 187, in __call__
return super(VariableMetaclass, cls).__call__(*args, **kwargs)
File "C:\CDocuments\python\venv\lib\site-packages\tensorflow\python\ops\resource_variable_ops.py", line 297, in __init__
constraint=constraint)
File "C:\CDocuments\python\venv\lib\site-packages\tensorflow\python\ops\resource_variable_ops.py", line 409, in _init_from_args
initial_value() if init_from_fn else initial_value,
File "C:\CDocuments\python\venv\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 1959, in <lambda>
shape, dtype=dtype, partition_info=partition_info)
File "C:\CDocuments\python\venv\lib\site-packages\tensorflow\python\ops\init_ops.py", line 255, in __call__
shape, self.minval, self.maxval, dtype, seed=self.seed)
File "C:\CDocuments\python\venv\lib\site-packages\tensorflow\python\ops\random_ops.py", line 235, in random_uniform
shape = _ShapeTensor(shape)
File "C:\CDocuments\python\venv\lib\site-packages\tensorflow\python\ops\random_ops.py", line 44, in _ShapeTensor
return ops.convert_to_tensor(shape, dtype=dtype, name="shape")
File "C:\CDocuments\python\venv\lib\site-packages\tensorflow\python\framework\ops.py", line 1050, in convert_to_tensor
as_ref=False)
File "C:\CDocuments\python\venv\lib\site-packages\tensorflow\python\framework\ops.py", line 1146, in internal_convert_to_tensor
ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
File "C:\CDocuments\python\venv\lib\site-packages\tensorflow\python\framework\constant_op.py", line 229, in _constant_tensor_conversion_function
return constant(v, dtype=dtype, name=name)
File "C:\CDocuments\python\venv\lib\site-packages\tensorflow\python\framework\constant_op.py", line 208, in constant
value, dtype=dtype, shape=shape, verify_shape=verify_shape))
File "C:\CDocuments\python\venv\lib\site-packages\tensorflow\python\framework\tensor_util.py", line 531, in make_tensor_proto
"supported type." % (type(values), values))
TypeError: Failed to convert object of type <class 'tuple'> to Tensor. Contents: (Dimension(4), 2). Consider casting elements to a supported type.
</code></pre> | <p>I recognize that this is the Keras' example to create new layers, which you can find <a href="https://keras.io/layers/writing-your-own-keras-layers/" rel="nofollow noreferrer">here</a>.</p>
<p>One very important detail is that this is a <code>keras</code> example, but you are using it with <code>tf.keras</code>. I think there must be a bug in tensorflow, because the example works with <code>keras</code> but not with <code>tf.keras</code>.</p>
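<p>For example, a minimal sketch of that import swap (assuming the standalone <code>keras</code> package is installed; the rest of the posted code stays the same):</p>
<pre><code>from keras.layers import Dense, Input, Layer
from keras.models import Model
from keras import backend as K
from keras import optimizers
</code></pre>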
<p>In general you should not mix <code>keras</code> and <code>tf.keras</code>, as they have the same API but not the same implementation. If you change all <code>tf.keras</code> imports to plain <code>keras</code>, then the code works correctly.</p> | python|tensorflow|machine-learning|keras|deep-learning | 0 |
768 | 55,597,903 | Need to speed up very slow loop for image manipulation on Python | <p>I am currently completing a program in Pyhton (3.6) as per internal requirement. As part of it, I am having to loop through a colour image (3 bytes per pixel, R, G & B) and distort the image pixel by pixel.</p>
<p>I have the same code in other languages (C++, C#), and non-optimized code executes in about two seconds, while optimized code executes in less than a second. By non-optimized code I mean that the matrix multiplication is performed by a 10 line function I implemented. The optimized version just uses external libraries for multiplication.</p>
<p>In Python, this code takes close to 300 seconds. I can't think of a way to vectorize this logic or speed it up, as there are a couple of "if"s inside the nested loop. Any help would be greatly appreciated.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
#for test purposes:
#roi = rect.rect(0, 0, 1200, 1200)
#input = DCImage.DCImage(1200, 1200, 3)
#correctionImage = DCImage.DCImage(1200,1200,3)
#siteToImage= np.zeros((3,3), np.float32)
#worldToSite= np.zeros ((4, 4))
#r11 = r12 = r13 = r21 = r22 = r23 = r31 = r32 = r33 = 0.0
#xMean = yMean = zMean = 0
#tx = ty = tz = 0
#epsilon = np.finfo(float).eps
#fx = fy = cx = cy = k1 = k2 = p1 = p2 = 0
for i in range (roi.x, roi.x + roi.width):
for j in range (roi.y , roi.y + roi.height):
if ( (input.pixels [i] [j] == [255, 0, 0]).all()):
#Coordinates conversion
siteMat = np.matmul(siteToImage, [i, j, 1])
world =np.matmul(worldToSite, [siteMat[0], siteMat[1], 0.0, 1.0])
xLocal = world[0] - xMean
yLocal = world[1] - yMean
zLocal = z_ortho - zMean
#From World to camera
xCam = r11*xLocal + r12*yLocal + r13*zLocal + tx
yCam = r21*xLocal + r22*yLocal + r23*zLocal + ty
zCam = r31*xLocal + r32*yLocal + r33*zLocal + tz
if (zCam > epsilon or zCam < -epsilon):
xCam = xCam / zCam
yCam = yCam / zCam
#// DISTORTIONS
r2 = xCam*xCam + yCam*yCam
a1 = 2*xCam*yCam
a2 = r2 + 2*xCam*xCam
a3 = r2 + 2*yCam*yCam
cdist = 1 + k1*r2 + k2*r2*r2
u = int((xCam * cdist + p1 * a1 + p2 * a2) * fx + cx + 0.5)
v = int((yCam * cdist + p1 * a3 + p2 * a1) * fy + cy + 0.5)
if (u>=0 and u<correctionImage.width and v>=0 and v < correctionImage.height):
input.pixels [i] [j] = correctionImage.pixels [u][v]
</code></pre> | <p>You normally vectorize this kind of thing by making a displacement map. </p>
<p>Make a complex image where each pixel has the value of its own coordinate, apply the usual math operations to compute whatever transform you want, then apply the map to your source image.</p>
<p>For example, in <a href="https://pypi.org/project/pyvips/" rel="nofollow noreferrer">pyvips</a> you might write:</p>
<pre><code>import sys
import pyvips
image = pyvips.Image.new_from_file(sys.argv[1])
# this makes an image where pixel (0, 0) (at the top-left) has value [0, 0],
# and pixel (image.width, image.height) at the bottom-right has value
# [image.width, image.height]
index = pyvips.Image.xyz(image.width, image.height)
# make a version with (0, 0) at the centre, negative values up and left,
# positive down and right
centre = index - [image.width / 2, image.height / 2]
# to polar space, so each pixel is now distance and angle in degrees
polar = centre.polar()
# scale sin(distance) by 1/distance to make a wavey pattern
d = 10000 * (polar[0] * 3).sin() / (1 + polar[0])
# and back to rectangular coordinates again to make a set of vectors we can
# apply to the original index image
distort = index + d.bandjoin(polar[1]).rect()
# distort the image
distorted = image.mapim(distort)
# pick pixels from either the distorted image or the original, depending on some
# condition
result = ((d.abs() > 10) | (image[2] > 100)).ifthenelse(distorted, image)
result.write_to_file(sys.argv[2])
</code></pre>
<p>That's just a silly wobble pattern, but you can swap it for any distortion you want. Then run as:</p>
<pre><code>$ /usr/bin/time -f %M:%e ./wobble.py ~/pics/horse1920x1080.jpg x.jpg
54572:0.31
</code></pre>
<p>300ms and 55MB of memory on this two-core, 2015 laptop to make:</p>
<p><a href="https://i.stack.imgur.com/PWEIj.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PWEIj.jpg" alt="sample output"></a></p> | python|python-3.x|numpy | 1 |
769 | 64,666,455 | Non-OK-status: GpuLaunchKernel(...) Internal: invalid configuration argument | <p>I run my code on tensorflow 2.3.0 Anaconda with CUDA Toolkit 10.1 CUDNN 7.5.0 (Windows 10) and it returns a issue</p>
<pre><code>F .\tensorflow/core/kernels/random_op_gpu.h:246] Non-OK-status: GpuLaunchKernel(FillPhiloxRandomKernelLaunch<Distribution>, num_blocks, block_size, 0, d.stream(), key, counter, gen, data, size, dist) status: Internal: invalid configuration argument
</code></pre>
<p>My GPU is RTX2070 and the code I am testing is</p>
<pre><code>import numpy as np
import os
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.models import Sequential
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
</code></pre>
<p>The full results are here</p>
<pre><code>2020-11-03 09:59:18.494825: I tensorflow/stream_executor/platform/default/dso_loader.cc:49]
Successfully opened dynamic library cudart64_110.dll
2020-11-03 09:59:20.388914: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2020-11-03 09:59:20.389652: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library nvcuda.dll
2020-11-03 09:59:20.426874: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1724] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: GeForce RTX 2070 computeCapability: 7.5
coreClock: 1.62GHz coreCount: 36 deviceMemorySize: 8.00GiB deviceMemoryBandwidth: 417.29GiB/s
2020-11-03 09:59:20.427039: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudart64_110.dll
2020-11-03 09:59:20.435227: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cublas64_11.dll
2020-11-03 09:59:20.437546: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cublasLt64_11.dll
2020-11-03 09:59:20.448543: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cufft64_10.dll
2020-11-03 09:59:20.451378: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library curand64_10.dll
2020-11-03 09:59:20.464548: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cusolver64_10.dll
2020-11-03 09:59:20.472311: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cusparse64_11.dll
2020-11-03 09:59:20.506843: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudnn64_8.dll
2020-11-03 09:59:20.507014: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1866] Adding visible gpu devices: 0
2020-11-03 09:59:20.507910: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2020-11-03 09:59:20.508416: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1724] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: GeForce RTX 2070 computeCapability: 7.5
coreClock: 1.62GHz coreCount: 36 deviceMemorySize: 8.00GiB deviceMemoryBandwidth: 417.29GiB/s
2020-11-03 09:59:20.508536: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudart64_110.dll
2020-11-03 09:59:20.508777: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cublas64_11.dll
2020-11-03 09:59:20.509056: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cublasLt64_11.dll
2020-11-03 09:59:20.509324: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cufft64_10.dll
2020-11-03 09:59:20.509572: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library curand64_10.dll
2020-11-03 09:59:20.509811: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cusolver64_10.dll
2020-11-03 09:59:20.510030: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cusparse64_11.dll
2020-11-03 09:59:20.510102: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudnn64_8.dll
2020-11-03 09:59:20.510384: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1866] Adding visible gpu devices: 0
2020-11-03 09:59:20.952560: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1265] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-11-03 09:59:20.952716: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1271] 0
2020-11-03 09:59:20.952746: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1284] 0: N
2020-11-03 09:59:20.953709: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1410] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6637 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2070, pci bus id: 0000:01:00.0, compute capability: 7.5)
2020-11-03 09:59:20.954420: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2020-11-03 09:59:21.179776: F .\tensorflow/core/kernels/random_op_gpu.h:246] Non-OK-status: GpuLaunchKernel(FillPhiloxRandomKernelLaunch<Distribution>, num_blocks, block_size, 0, d.stream(), key, counter, gen, data, size, dist) status: Internal: invalid configuration argument
</code></pre>
<p><strong>Does any one meet this before? What are the issues and how to fix it?</strong></p>
<p>I also tested tensorflow 2.1.0, 2.2.0. Same issue happened... Thanks!</p> | <p>Figured things out myself. This comes when you forget to initialize GPU.</p>
<p>Adding the following codes solve the problem</p>
<pre><code>import tensorflow as tf
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
try:
# Currently, memory growth needs to be the same across GPUs
for gpu in gpus:
tf.config.experimental.set_memory_growth(gpu, True)
logical_gpus = tf.config.experimental.list_logical_devices('GPU')
print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
except RuntimeError as e:
# Memory growth must be set before GPUs have been initialized
print(e)
</code></pre> | tensorflow | 1 |
770 | 64,636,448 | Merging two datasets on a column in common | <p>I am trying to use a dataset which includes some useful information to create a new column including those information, whether they were included.</p>
<p>df1</p>
<pre><code>Info User Year
24 user1 2012
0 user2 2012
12 user3 2010
24.5 user4 2011
24 user5 2012
</code></pre>
<p>Users here are unique (no duplicates)</p>
<p>df2</p>
<pre><code>Date User Year Note
2012-02-13 user1 2012 NA
2012-01-11 user4 2011 Paid
2012-02-13 user1 2012 Need review
2012-02-14 user3 2010 NA
2012-02-13 user2 2012 NA
2012-02-11 user2 2012 Need review
</code></pre>
<p>Since there is a column in common, I am considering to join these tables on User. I tried as follows:</p>
<pre><code>result = pd.merge(df1, df2, on=['User'])
</code></pre>
<p>but the output is different from that I would expect, i.e.</p>
<p>(expected output)</p>
<pre><code>Date User Year Note Info
2012-02-13 user1 2012 NA 24
2012-01-11 user4 2011 Paid 24.5
2012-02-13 user1 2012 Need review 24
2012-02-14 user3 2010 NA 12
2012-02-13 user2 2012 NA 0
2012-02-11 user2 2012 Need review 0
</code></pre>
<p>Can you please tell me what is wrong in my code above?</p> | <p>You should do two things: 1) Specify the minimum columns required (<code>[['Info', 'User']]</code>) and <code>how='left'</code>, so you don't merge another <code>Year</code> column in. You had the dataframes flipped around in your merge:</p>
<pre><code>pd.merge(df2, df1[['Info', 'User']], on=['User'], how='left')
Out[1]:
Date User Year Note Info
0 2012-02-13 user1 2012 NaN 24.0
1 2012-01-11 user4 2011 Paid 24.5
2 2012-02-13 user1 2012 Need review 24.0
3 2012-02-14 user3 2010 NaN 12.0
4 2012-02-13 user2 2012 NaN 0.0
5 2012-02-11 user2 2012 Need review 0.0
</code></pre> | python|pandas | 3 |
771 | 65,055,727 | Passing file from filedialogue to another function tkinter | <p>I am creating a program that let me visualise my csv file with tkinter.
However, I am not able to read the file I grab with the <code>filedialogue</code>.
I also have tried to pass filename as argument to the <code>file_loader</code> that as well does not work.
I still found myself with a <code>FileNotFoundError: [Errno 2] No such file or directory: ''</code> error.</p>
<pre><code>root =Tk()
root.geometry("500x500")
root.pack_propagate(False) # tells the root to not let the widgets inside it determine its size.
root.resizable(False, False)
frame = tk.LabelFrame(root, text="CSV/Excel")
frame.place(height=250, width=500)
# Frame for open file dialog
file_frame = tk.LabelFrame(root, text="Opened file")
file_frame.place(height=100, width=400, rely=0.65, relx=0.1)
# Buttons
browse_butt = tk.Button(file_frame, text="Browse A File", command=lambda: file_picker())
browse_butt.place(rely=0.65, relx=0.50)
load_butt = tk.Button(file_frame, text="Load File", command=lambda: file_loader())
load_butt.place(rely=0.65, relx=0.30)
# The file/file path text
label_file = ttk.Label(file_frame, text="Nothing Selected")
label_file.place(x=0, y=0)
# initialising the treeView
trv = ttk.Treeview(frame)
trv.place(relheight=1, relwidth=1)
#============================= Functions under buttons ================================
def file_picker():
root.filename = filedialog.askopenfilename()
label_file["text"] = root.filename
return None
def file_loader():
Label_path=label_file["text"]
try:
csv_file= r''.format(Label_path)
df= pd.read_csv(csv_file)
except ValueError:
messagebox.showerror('File Format Error', 'This program can only read a csv')
return None
clear_data()
trv['column']=list(df.columns)
trv['display']="headings"
for column in trv['column']:
trv.heading(column, text=column)
df_rows=df.to_numpy().to_list()
for row in df_rows:
trv.insert('', 'end', values=row)
def clear_data():
trv.delete(*trv.get_children())
return None
</code></pre> | <p>I see what you were trying to do, but the 'r' is only needed for filenames that you directly enter in the source code (aka "hardcoded" or "string literals"). Here you can use the file path directly from the label.</p>
<pre><code>def file_loader():
try:
csv_file= label_file["text"]
df= pd.read_csv(csv_file)
except ValueError:
messagebox.showerror('File Format Error', 'This program can only read a csv')
</code></pre> | python|pandas|dataframe|csv|tkinter | 1 |
772 | 39,836,318 | Comparing Arrays for Accuracy | <p>I've a 2 arrays:</p>
<pre><code>np.array(y_pred_list).shape
# returns (5, 47151, 10)
np.array(y_val_lst).shape
# returns (5, 47151, 10)
np.array(y_pred_list)[:, 2, :]
# returns
array([[ 0., 1., 0., 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.],
[ 0., 0., 0., 0., 0., 0., 0., 0., 1., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]])
np.array(y_val_lst)[:, 2, :]
# returns
array([[ 0., 1., 0., 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.],
[ 0., 0., 0., 0., 0., 0., 0., 0., 1., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]], dtype=float32)
</code></pre>
<p>I would like to go through all 47151 examples, and calculate the "accuracy". Meaning the sum of those in y_pred_list that matches y_val_lst over 47151. What's the comparison function for this? </p> | <p>You can find a lot of useful classification scores in <code>sklearn.metrics</code>, particularly <code>accuracy_score()</code>. See the <a href="http://scikit-learn.org/stable/modules/generated/sklearn.metrics.accuracy_score.html" rel="noreferrer">doc here</a>, you would use it as:</p>
<pre><code>import numpy as np
import sklearn.metrics

acc = sklearn.metrics.accuracy_score(np.array(y_val_lst)[:, 2, :],
                                     np.array(y_pred_list)[:, 2, :])
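
# (an extra sketch, assuming "accuracy" means exact row matches) the same idea in
# pure NumPy over every one of the 47151 examples, not just the [:, 2, :] slice:
acc_all = (np.array(y_val_lst) == np.array(y_pred_list)).all(axis=-1).mean()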
</code></pre> | python|arrays|numpy | 6 |
773 | 39,834,999 | Adding row to dataframe in pandas | <p>Suppose I am trying add rows to a dataframe with 40 columns. Ordinarily it would be something like this:</p>
<pre><code>df = pandas.DataFrame(columns=('row1', 'row2', ... , 'row40'))
df.loc[0] = [value1, value2, ..., value40]
</code></pre>
<p>(don't take the dots literally)</p>
<p>However let's say value1 to value10 are in list1 (i.e. list1 = [value1, value2, ..., value10]), and all the remaining values are individual elements.</p>
<p>I tried: </p>
<pre><code>df.loc[0] = [list1, value11, value12, ..., value40]
</code></pre>
<p>but it doesn't work, because Python interprets the entire list as one element. So it's the wrong dimension. How can I rewrite the code to make it work?</p> | <p>I think you need:</p>
<pre><code>df.loc[0] = list1 + [value11, value12, ..., value40]
</code></pre>
<p>Sample:</p>
<pre><code>df = pd.DataFrame(columns = ['row1', 'row2','row3', 'row4','row5', 'row6','row40'])
list1 = ['a','b','c']
df.loc[0] = list1 + ['d','e','f', 'g']
print (df)
row1 row2 row3 row4 row5 row6 row40
0 a b c d e f g
</code></pre> | python|list|pandas|append|multiple-columns | 2 |
774 | 44,366,171 | Label Encoding of multiple columns without using pandas | <p>Is there a simple way to label encode without using pandas for multiple columns?</p>
<p>Like using only numpy and sklearn's <code>preprocessing.LabelEncoder()</code></p> | <p>One solution would be to loop through the columns, converting them to numeric values, using <code>LabelEncoder</code>:</p>
<pre><code>from sklearn.preprocessing import LabelEncoder

le = LabelEncoder()
cols_2_encode = [1,3,5]

for col in cols_2_encode:
    X[:, col] = le.fit_transform(X[:, col])
</code></pre> | python|python-3.x|numpy|machine-learning|scikit-learn | 1 |
775 | 44,170,709 | numpy.savez strips leading slash from keys | <p>I'm trying to save a bunch of numpy arrays keyed by the absolute file path that the data came from using savez. However, when I use load to retrieve that data the leading slashes have been removed from the keys.</p>
<pre><code>>>> import numpy as np
>>> data = {}
>>> data['/foo/bar'] = np.array([1, 2, 3])
>>> data.keys()
['/foo/bar']
>>> np.savez('/tmp/test', **data)
>>> data2 = np.load('/tmp/test.npz')
>>> data2.keys()
['foo/bar']
</code></pre>
<p>Is this behavior expected from numpy.savez? Is there a workaround or am I doing something wrong?</p> | <p>Looks like the stripping is done by the Python <code>zipfile</code> module, possibly on extract rather than on writing:</p>
<p><a href="https://docs.python.org/2/library/zipfile.html" rel="nofollow noreferrer">https://docs.python.org/2/library/zipfile.html</a></p>
<blockquote>
<p>Note If a member filename is an absolute path, a drive/UNC sharepoint and leading (back)slashes will be stripped, e.g.: ///foo/bar becomes foo/bar on Unix, and C:\foo\bar becomes foo\bar on Windows. And all ".." components in a member filename will be removed, e.g.: ../../foo../../ba..r becomes foo../ba..r. On Windows illegal characters (:, <, >, |, ", ?, and *) replaced by underscore (_).</p>
</blockquote>
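<p>If every key you saved really did start with a single leading slash, one possible workaround (just a sketch, and it only works under that assumption, since the archive no longer stores the slash) is to add it back after loading:</p>
<pre><code>import numpy as np

data2 = np.load('/tmp/test.npz')
restored = {'/' + key: data2[key] for key in data2.files}
</code></pre>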
<p>Writing is done in <code>np.lib.npyio._savez</code>, first to a <code>tmpfile</code> and then to the archive with <code>zipf.write(tmpfile, arcname=fname)</code>.</p>
<pre><code>In [98]: np.savez('test.npz',**{'/foo/bar':arr})
In [99]: !unzip -lv test.npz
Archive: test.npz
Length Method Size Cmpr Date Time CRC-32 Name
-------- ------ ------- ---- ---------- ----- -------- ----
152 Stored 152 0% 2017-05-24 19:58 ef792502 foo/bar.npy
-------- ------- --- -------
152 152 0% 1 file
</code></pre> | numpy | 1 |
776 | 69,507,636 | ValueError: not enough values to unpack (expected 2, got 1) when trying to access dataset | <p>test_ds is a dataset of shape</p>
<pre><code><PrefetchDataset shapes: ((None, 256, 256, 3), (None,)), types: (tf.float32, tf.int32)>.
</code></pre>
<p>When I try to fetch data using a for loop, it works.</p>
<pre><code>for image_batch,label_batch in test_ds.take(1):
</code></pre>
<p>but when I try to fetch it using the following line of code, it throws an error:</p>
<pre><code>image_batch,label_batch=test_ds.take(1)
ValueError: not enough values to unpack (expected 2, got 1)
</code></pre>
<p>Could someone let me know the issue here.</p> | <p>A <code>tf.data.Dataset</code> is an iterator. You need to iterate over it in order to access its elements. Try:</p>
<pre><code>image_batch, label_batch = next(iter(test_ds.take(1)))
</code></pre> | python|tensorflow|deep-learning | 1 |
777 | 40,893,602 | How to install NumPy for Python 3.6 | <p>I am using Python 3.6b3 for a long running project, developing on Windows.
For this project I also need NumPy.
I've tried Python36 -m pip install numpy, but it seems that pip is not yet in the beta.
What's the best way to install NumPy for Python 3.6b3?</p>
<p>[EDIT: Added installation log, after using ensurepip]</p>
<pre><code>D:\aaa\numpy-1.12.0b1>call C:\Python36\python.exe -m pip install numpy
Collecting numpy
Using cached numpy-1.11.2.tar.gz
Installing collected packages: numpy
Running setup.py install for numpy: started
Running setup.py install for numpy: finished with status 'error'
Complete output from command C:\Python36\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\info_000\\AppData\\Local\\Temp\\pip-build-ueljt0po\\numpy\\setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record C:\Users\info_000\AppData\Local\Temp\pip-nmezr3c7-record\install-record.txt --single-version-externally-managed --compile:
Running from numpy source directory.
Note: if you need reliable uninstall behavior, then install
with pip instead of using `setup.py install`:
- `pip install .` (from a git repo or downloaded source
release)
- `pip install numpy` (last Numpy release on PyPi)
blas_opt_info:
blas_mkl_info:
libraries mkl_rt not found in ['C:\\Python36\\lib', 'C:\\', 'C:\\Python36\\libs']
NOT AVAILABLE
openblas_info:
libraries openblas not found in ['C:\\Python36\\lib', 'C:\\', 'C:\\Python36\\libs']
NOT AVAILABLE
atlas_3_10_blas_threads_info:
Setting PTATLAS=ATLAS
libraries tatlas not found in ['C:\\Python36\\lib', 'C:\\', 'C:\\Python36\\libs']
NOT AVAILABLE
atlas_3_10_blas_info:
libraries satlas not found in ['C:\\Python36\\lib', 'C:\\', 'C:\\Python36\\libs']
NOT AVAILABLE
atlas_blas_threads_info:
Setting PTATLAS=ATLAS
libraries ptf77blas,ptcblas,atlas not found in ['C:\\Python36\\lib', 'C:\\', 'C:\\Python36\\libs']
NOT AVAILABLE
atlas_blas_info:
libraries f77blas,cblas,atlas not found in ['C:\\Python36\\lib', 'C:\\', 'C:\\Python36\\libs']
NOT AVAILABLE
C:\Users\info_000\AppData\Local\Temp\pip-build-ueljt0po\numpy\numpy\distutils\system_info.py:1630: UserWarning:
Atlas (http://math-atlas.sourceforge.net/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [atlas]) or by setting
the ATLAS environment variable.
warnings.warn(AtlasNotFoundError.__doc__)
blas_info:
libraries blas not found in ['C:\\Python36\\lib', 'C:\\', 'C:\\Python36\\libs']
NOT AVAILABLE
C:\Users\info_000\AppData\Local\Temp\pip-build-ueljt0po\numpy\numpy\distutils\system_info.py:1639: UserWarning:
Blas (http://www.netlib.org/blas/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [blas]) or by setting
the BLAS environment variable.
warnings.warn(BlasNotFoundError.__doc__)
blas_src_info:
NOT AVAILABLE
C:\Users\info_000\AppData\Local\Temp\pip-build-ueljt0po\numpy\numpy\distutils\system_info.py:1642: UserWarning:
Blas (http://www.netlib.org/blas/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [blas_src]) or by setting
the BLAS_SRC environment variable.
warnings.warn(BlasSrcNotFoundError.__doc__)
NOT AVAILABLE
non-existing path in 'numpy\\distutils': 'site.cfg'
F2PY Version 2
lapack_opt_info:
openblas_lapack_info:
libraries openblas not found in ['C:\\Python36\\lib', 'C:\\', 'C:\\Python36\\libs']
NOT AVAILABLE
lapack_mkl_info:
libraries mkl_rt not found in ['C:\\Python36\\lib', 'C:\\', 'C:\\Python36\\libs']
NOT AVAILABLE
atlas_3_10_threads_info:
Setting PTATLAS=ATLAS
libraries tatlas,tatlas not found in C:\Python36\lib
libraries lapack_atlas not found in C:\Python36\lib
libraries tatlas,tatlas not found in C:\
libraries lapack_atlas not found in C:\
libraries tatlas,tatlas not found in C:\Python36\libs
libraries lapack_atlas not found in C:\Python36\libs
<class 'numpy.distutils.system_info.atlas_3_10_threads_info'>
NOT AVAILABLE
atlas_3_10_info:
libraries satlas,satlas not found in C:\Python36\lib
libraries lapack_atlas not found in C:\Python36\lib
libraries satlas,satlas not found in C:\
libraries lapack_atlas not found in C:\
libraries satlas,satlas not found in C:\Python36\libs
libraries lapack_atlas not found in C:\Python36\libs
<class 'numpy.distutils.system_info.atlas_3_10_info'>
NOT AVAILABLE
atlas_threads_info:
Setting PTATLAS=ATLAS
libraries ptf77blas,ptcblas,atlas not found in C:\Python36\lib
libraries lapack_atlas not found in C:\Python36\lib
libraries ptf77blas,ptcblas,atlas not found in C:\
libraries lapack_atlas not found in C:\
libraries ptf77blas,ptcblas,atlas not found in C:\Python36\libs
libraries lapack_atlas not found in C:\Python36\libs
<class 'numpy.distutils.system_info.atlas_threads_info'>
NOT AVAILABLE
atlas_info:
libraries f77blas,cblas,atlas not found in C:\Python36\lib
libraries lapack_atlas not found in C:\Python36\lib
libraries f77blas,cblas,atlas not found in C:\
libraries lapack_atlas not found in C:\
libraries f77blas,cblas,atlas not found in C:\Python36\libs
libraries lapack_atlas not found in C:\Python36\libs
<class 'numpy.distutils.system_info.atlas_info'>
NOT AVAILABLE
C:\Users\info_000\AppData\Local\Temp\pip-build-ueljt0po\numpy\numpy\distutils\system_info.py:1532: UserWarning:
Atlas (http://math-atlas.sourceforge.net/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [atlas]) or by setting
the ATLAS environment variable.
warnings.warn(AtlasNotFoundError.__doc__)
lapack_info:
libraries lapack not found in ['C:\\Python36\\lib', 'C:\\', 'C:\\Python36\\libs']
NOT AVAILABLE
C:\Users\info_000\AppData\Local\Temp\pip-build-ueljt0po\numpy\numpy\distutils\system_info.py:1543: UserWarning:
Lapack (http://www.netlib.org/lapack/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [lapack]) or by setting
the LAPACK environment variable.
warnings.warn(LapackNotFoundError.__doc__)
lapack_src_info:
NOT AVAILABLE
C:\Users\info_000\AppData\Local\Temp\pip-build-ueljt0po\numpy\numpy\distutils\system_info.py:1546: UserWarning:
Lapack (http://www.netlib.org/lapack/) sources not found.
Directories to search for the sources can be specified in the
numpy/distutils/site.cfg file (section [lapack_src]) or by setting
the LAPACK_SRC environment variable.
warnings.warn(LapackSrcNotFoundError.__doc__)
NOT AVAILABLE
C:\Python36\lib\distutils\dist.py:261: UserWarning: Unknown distribution option: 'define_macros'
warnings.warn(msg)
running install
running build
running config_cc
unifing config_cc, config, build_clib, build_ext, build commands --compiler options
running config_fc
unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options
running build_src
build_src
building py_modules sources
creating build
creating build\src.win-amd64-3.6
creating build\src.win-amd64-3.6\numpy
creating build\src.win-amd64-3.6\numpy\distutils
building library "npymath" sources
No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils
customize GnuFCompiler
Could not locate executable g77
Could not locate executable f77
customize IntelVisualFCompiler
Could not locate executable ifort
Could not locate executable ifl
customize AbsoftFCompiler
Could not locate executable f90
customize CompaqVisualFCompiler
Could not locate executable DF
customize IntelItaniumVisualFCompiler
Could not locate executable efl
customize Gnu95FCompiler
Could not locate executable gfortran
Could not locate executable f95
customize G95FCompiler
Could not locate executable g95
customize IntelEM64VisualFCompiler
customize IntelEM64TFCompiler
Could not locate executable efort
Could not locate executable efc
don't know how to compile Fortran code on platform 'nt'
cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -Inumpy\core\src\private -Inumpy\core\src -Inumpy\core -Inumpy\core\src\npymath -Inumpy\core\src\multiarray -Inumpy\core\src\umath -Inumpy\core\src\npysort -IC:\Python36\include -IC:\Python36\include /Tc_configtest.c /Fo_configtest.obj
Could not locate executable cl.exe
Executable cl.exe does not exist
failure.
removing: _configtest.c _configtest.obj
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\info_000\AppData\Local\Temp\pip-build-ueljt0po\numpy\setup.py", line 386, in <module>
setup_package()
File "C:\Users\info_000\AppData\Local\Temp\pip-build-ueljt0po\numpy\setup.py", line 378, in setup_package
setup(**metadata)
File "C:\Users\info_000\AppData\Local\Temp\pip-build-ueljt0po\numpy\numpy\distutils\core.py", line 169, in setup
return old_setup(**new_attr)
File "C:\Python36\lib\distutils\core.py", line 148, in setup
dist.run_commands()
File "C:\Python36\lib\distutils\dist.py", line 955, in run_commands
self.run_command(cmd)
File "C:\Python36\lib\distutils\dist.py", line 974, in run_command
cmd_obj.run()
File "C:\Users\info_000\AppData\Local\Temp\pip-build-ueljt0po\numpy\numpy\distutils\command\install.py", line 62, in run
r = self.setuptools_run()
File "C:\Users\info_000\AppData\Local\Temp\pip-build-ueljt0po\numpy\numpy\distutils\command\install.py", line 36, in setuptools_run
return distutils_install.run(self)
File "C:\Python36\lib\distutils\command\install.py", line 545, in run
self.run_command('build')
File "C:\Python36\lib\distutils\cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "C:\Python36\lib\distutils\dist.py", line 974, in run_command
cmd_obj.run()
File "C:\Users\info_000\AppData\Local\Temp\pip-build-ueljt0po\numpy\numpy\distutils\command\build.py", line 47, in run
old_build.run(self)
File "C:\Python36\lib\distutils\command\build.py", line 135, in run
self.run_command(cmd_name)
File "C:\Python36\lib\distutils\cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "C:\Python36\lib\distutils\dist.py", line 974, in run_command
cmd_obj.run()
File "C:\Users\info_000\AppData\Local\Temp\pip-build-ueljt0po\numpy\numpy\distutils\command\build_src.py", line 147, in run
self.build_sources()
File "C:\Users\info_000\AppData\Local\Temp\pip-build-ueljt0po\numpy\numpy\distutils\command\build_src.py", line 158, in build_sources
self.build_library_sources(*libname_info)
File "C:\Users\info_000\AppData\Local\Temp\pip-build-ueljt0po\numpy\numpy\distutils\command\build_src.py", line 293, in build_library_sources
sources = self.generate_sources(sources, (lib_name, build_info))
File "C:\Users\info_000\AppData\Local\Temp\pip-build-ueljt0po\numpy\numpy\distutils\command\build_src.py", line 376, in generate_sources
source = func(extension, build_dir)
File "numpy\core\setup.py", line 653, in get_mathlib_info
raise RuntimeError("Broken toolchain: cannot link a simple C program")
RuntimeError: Broken toolchain: cannot link a simple C program
----------------------------------------
</code></pre> | <p>As long as binary packages (so-called 'wheels') for 3.6 have not been released to PyPi yet, you can resort to unofficial (but working) ones available at <a href="http://www.lfd.uci.edu/~gohlke/pythonlibs/" rel="noreferrer">http://www.lfd.uci.edu/~gohlke/pythonlibs/</a>. Download the file and install it like this:</p>
<pre><code>pip install C:\path\to\numpy-1.11.3+mkl-cp36-cp36m-win_amd64.whl
</code></pre> | python|python-3.x|numpy|pip | 8 |
778 | 53,882,747 | Python 2.7 - merge two CSV files without headers and with two delimiters in the first file | <p>I have one csv test1.csv (I do not have headers in it!!!). I also have as you can see delimiter with pipe but also with exactly one tab after the eight column.</p>
<pre><code>ug|s|b|city|bg|1|94|ON-05-0216 9.72|28|288
ug|s|b|city|bg|1|94|ON-05-0217 9.72|28|288
</code></pre>
<p>I have second file test2.csv with only delimiter pipe</p>
<pre><code>ON-05-0216|100|50
ON-05-0180|244|152
ON-05-0219|269|146
</code></pre>
<p>So because only one value (<strong><code>ON-05-0216</code></strong>) is being matched from the eight column from the first file and first column from the second file it means that I should have only one value in output file, but with addition of SUM column from the second and third column from second file (100+50).</p>
<p>So the final result is the following:</p>
<pre><code>ug|s|b|city|bg|1|94|ON-05-0216 Total=150|9.72|28|288
</code></pre>
<p>or </p>
<pre><code>ug|s|b|city|bg|1|94|ON-05-0216|Total=150 9.72|28|288
</code></pre>
<p>whatever is easier.</p>
<p>I thought that the best way to do this is with pandas. But I am stuck on handling the multiple delimiters in the first file and on how to match columns without column names, so I am not sure how to continue further.</p>
<pre><code>import pandas as pd
a = pd.read_csv("test1.csv", header=None)
b = pd.read_csv("test2.csv", header=None)
merged = a.merge(b,)
merged.to_csv("output.csv", index=False)
</code></pre>
<p>Thank you in advance</p> | <p>Use:</p>
<pre><code>import numpy as np
import pandas as pd

# Reading files
df1 = pd.read_csv('file1.csv', header=None, sep='|')
df2 = pd.read_csv('file2.csv', header=None, sep='|')
# splitting file on tab and concatenating with rest
ndf = pd.concat([df1.iloc[:,:7], df1[7].str.split('\t', expand=True), df1.iloc[:,8:]], axis=1)
ndf.columns = np.arange(11)
# adding values from df2 and bringing in format Total=sum
df2.columns = ['c1', 'c2', 'c3']
tot = df2.eval('c2+c3').apply(lambda x: 'Total='+str(x))
# Finding which rows needs to be retained
idx_1 = ndf.iloc[:,7].str.split('-',expand=True).iloc[:,2]
idx_2 = df2.c1.str.split('-',expand=True).iloc[:,2]
idx = idx_1.isin(idx_2) # Updated
ndf = ndf[idx].reset_index(drop=True)
tot = tot[idx].reset_index(drop=True)
# concatenating both CSV together and writing output csv
ndf.iloc[:,7] = ndf.iloc[:,7].map(str) + chr(9) + tot
pd.concat([ndf.iloc[:,:8],ndf.iloc[:,8:]], axis=1).to_csv('out.csv', sep='|', header=None, index=None)
# OUTPUT
# ug|s|b|city|bg|1|94|ON-05-0216 Total=150|9.72|28|288
</code></pre> | python|pandas|python-2.7|join|inner-join | 1 |
779 | 54,120,265 | Receiving error (NoneType 'to_csv') while attempting to implement script into a GUI | <p>I'm trying to build a small program that combines csv files. I've made a GUI where a user selects directories of the location of the csv files and where they want the final combined csv file to be outputted. I'm using this script to merge csv files at the moment.</p>
<pre><code>from pathlib import Path
import pandas as pd
def add_dataset(old, new, **kwargs):
if old is None:
return new
else:
return pd.merge(old, new, **kwargs)
combined_csv = None
for csv in Path(r'C:\Users\Personal\Python\Combine').glob('*.csv'):
dataset = pd.read_csv(csv, index_col=0, parse_dates=[0])
combined_csv = add_dataset(combined_csv, dataset, on='DateTime', how='outer')
combined_csv.to_csv(r'C:\Users\Personal\Python\Combine\combined.csv')
</code></pre>
<p>The script that I've built for the GUI is this:</p>
<pre><code>from tkinter import *
from tkinter import filedialog
from pathlib import Path
import pandas as pd
import os

root = Tk()
root.geometry("400x200")

# Setting up the 'Browse Directory' dialogs
def selectDirectory():
global dirname
global folder_path
dirname = filedialog.askdirectory(parent=root,initialdir="/",title='Please select a directory')
folder_path.set(dirname)
print(dirname)
def selectOutputDirectory():
global dirname_combine
global folder_pathcombine
dirname_combine = filedialog.askdirectory(parent=root,initialdir="/",title='Please select a directory')
folder_pathcombine.set(dirname_combine)
print(dirname_combine)
# Printing the locations out as a label
folder_path = StringVar()
lbl1 = Label(master=root, textvariable = folder_path)
lbl1.grid(row=0,column=2)
folder_pathcombine = StringVar()
lbl2 = Label(master=root, textvariable = folder_pathcombine)
lbl2.grid(row=1,column=2)
def add_dataset(old, new, **kwargs):
if old is None:
return new
else:
return pd.merge(old, new, **kwargs)
def runscript():
combined_csv = None
path = r'%s' % folder_path
combine = r'%s' % folder_pathcombine
for csv in Path(r'%s' % path).glob('*.csv'):
dataset = pd.read_csv(csv, index_col = 0, parse_dates=[0], delimiter = ',')
combined_csv = add_dataset(combined_csv, dataset, on='DateTime', how='inner')
combined_csv.to_csv(r'%s\combined.csv' % combine)
# Assigning commands to buttons to select folders
selectFolder = Button(root, text = "Select directory", command = selectDirectory)
selectFolder.grid(row=0,column=0)
selectcombine = Button(root, text = "Select output directory", command = selectOutputDirectory)
selectcombine.grid(row=1, column=0)
run = Button(root, text = "Run script", command = runscript)
run.grid(row=3, column=0)
root.mainloop()
</code></pre>
<p>The problem I'm having is correctly implementing the script for the merging in the GUI script. The merge script works fine by itself, but when I implemented it into the GUI script I get the error "AttributeError: 'NoneType' object has no attribute 'to_csv'". I think my function is set up incorrectly in the GUI, so I was reading the following documentation: <a href="https://anh.cs.luc.edu/python/hands-on/3.1/handsonHtml/functions.html" rel="nofollow noreferrer">https://anh.cs.luc.edu/python/hands-on/3.1/handsonHtml/functions.html</a></p>
<p>I read that the "None" error occurs when there is no value being returned. So in this case I think it's not writing the variable "combined_csv" to a csv because nothing exists in it.</p>
<p>The complete error message is this:</p>
<pre><code>runfile('C:/Users/Personal/Python//test.py', wdir='C:/Users/Personal/Python/Combine')
C:/Users/Personal/Python/Combine
C:/Users/Personal/Python/Combine
Exception in Tkinter callback
Traceback (most recent call last):
File "C:\ProgramData\Anaconda3\lib\tkinter\__init__.py", line 1702, in __call__
return self.func(*args)
File "C:/Users/Personal/Python/Combine/gui.py", line 54, in runscript
combined_csv.to_csv(r'%s\combined.csv' % combine)
AttributeError: 'NoneType' object has no attribute 'to_csv'
</code></pre>
<p>Any help fixing the error would be greatly appreciated, as well as any advice on how to improve my code. I'm relatively new to Python and looking to improve. Thanks!</p> | <p>The problem is that you are using <code>StringVar</code> in the following statements inside <code>runscript()</code>:</p>
<pre><code>path = r'%s' % folder_path
combine = r'%s' % folder_pathcombine
</code></pre>
<p>Therefore no file will be found in the for loop below the above statements and <code>combined_csv</code> is not updated.</p>
<p>You should use <code>.get()</code> on the <code>StringVar</code> as below:</p>
<pre><code>path = r'%s' % folder_path.get()
combine = r'%s' % folder_pathcombine.get()
</code></pre> | python|pandas|tkinter | 2 |
780 | 38,314,674 | stack all levels of a MultiIndex | <p>I have a dataframe:</p>
<pre><code>index = pd.MultiIndex.from_product([['a', 'b'], ['A', 'B'], ['One', 'Two']])
df = pd.DataFrame(np.arange(16).reshape(2, 8), columns=index)
df
</code></pre>
<p><a href="https://i.stack.imgur.com/MHPpX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MHPpX.png" alt="enter image description here"></a></p>
<p>How do I stack all levels of the <code>MultiIndex</code> without knowing how many levels columns has.</p>
<p>I expect the results to look like this:</p>
<pre><code>0 a A One 0
Two 1
B One 2
Two 3
b A One 4
Two 5
B One 6
Two 7
1 a A One 8
Two 9
B One 10
Two 11
b A One 12
Two 13
B One 14
Two 15
dtype: int64
</code></pre> | <p>You can first find <code>len</code> of levels, get <code>range</code> and pass it to <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.stack.html" rel="noreferrer"><code>stack</code></a>:</p>
<pre><code>print (df.columns.nlevels)
3
print (list(range(df.columns.nlevels)))
[0, 1, 2]
print (df.stack(list(range(df.columns.nlevels))))
0 a A One 0
Two 1
B One 2
Two 3
b A One 4
Two 5
B One 6
Two 7
1 a A One 8
Two 9
B One 10
Two 11
b A One 12
Two 13
B One 14
Two 15
dtype: int32
</code></pre> | python|pandas|multi-index | 11 |
781 | 38,445,715 | Pandas Seaborn Heatmap Error | <p>I have a DataFrame that looks like this when unstacked.</p>
<pre><code>Start Date 2016-07-11 2016-07-12 2016-07-13
Period
0 1.000000 1.000000 1.0
1 0.684211 0.738095 NaN
2 0.592105 NaN NaN
</code></pre>
<p>I'm trying to plot it in Seaborn as a heatmap but it's giving me unintended results.</p>
<p><a href="https://i.stack.imgur.com/aMWJA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aMWJA.png" alt="enter image description here"></a></p>
<p>Here's my code:</p>
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
df = pd.DataFrame(np.array(data), columns=['Start Date', 'Period', 'Users'])
df = df.fillna(0)
df = df.set_index(['Start Date', 'Period'])
sizes = df['Users'].groupby(level=0).first()
df = df['Users'].unstack(0).divide(sizes, axis=1)
plt.title("Test")
sns.heatmap(df.T, mask=df.T.isnull(), annot=True, fmt='.0%')
plt.tight_layout()
plt.savefig(table._v_name + "fig.png")
</code></pre>
<p>I want it so that text doesn't overlap and there aren't 6 heat legends on the side. Also if possible, how do I fix the date so that it only displays %Y-%m-%d?</p> | <p>While exact reproducible data is not available, consider below using posted snippet data. This example runs a <code>pivot_table()</code> to achieve the structure as posted with StartDates across columns. Overall, your heatmap possibly outputs the multiple color bars and overlapping figures due to the <code>unstack()</code> processing where you seem to be dividing by users (look into <a href="https://stanford.edu/~mwaskom/software/seaborn/generated/seaborn.FacetGrid.html" rel="nofollow noreferrer">seaborn.FacetGrid</a> to split). So below runs the df as is through heatmap. Also, an <code>apply()</code> re-formats datetime to specified need.</p>
<pre><code>from io import StringIO
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
data = '''Period,StartDate,Value
0,2016-07-11,1.000000
0,2016-07-12,1.000000
0,2016-07-13,1.0
1,2016-07-11,0.684211
1,2016-07-12,0.738095
1,2016-07-13
2,2016-07-11,0.592105
2,2016-07-12
2,2016-07-13'''
df = pd.read_csv(StringIO(data))
df['StartDate'] = pd.to_datetime(df['StartDate'])
df['StartDate'] = df['StartDate'].apply(lambda x: x.strftime('%Y-%m-%d'))
pvtdf = df.pivot_table(values='Value', index=['Period'],
columns='StartDate', aggfunc=sum)
print(pvtdf)
# StartDate 2016-07-11 2016-07-12 2016-07-13
# Period
# 0 1.000000 1.000000 1.0
# 1 0.684211 0.738095 NaN
# 2 0.592105 NaN NaN
sns.set()
plt.title("Test")
ax = sns.heatmap(pvtdf.T, mask=pvtdf.T.isnull(), annot=True, fmt='.0%')
plt.tight_layout()
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/JP5uA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JP5uA.png" alt="Heat Map Output"></a></p> | python|pandas|matplotlib|heatmap|seaborn | 1 |
782 | 66,133,725 | Error while converting pytorch model to Coreml. Layer has 1 inputs but expects at least 2 | <p>My goal is to convert my Pytorch model to Coreml. I have no problem with doing an inference with pytorch. However, after I trace my model and try to convert it</p>
<pre><code>trace = torch.jit.trace(traceable_model, data)
mlmodel = ct.convert(
trace,
inputs=[ct.TensorType(name="Image", shape=data.shape)])
</code></pre>
<p>I get the following error <code>Error compiling model: "Error reading protobuf spec. validator error: Layer 'cur_layer_input.1' of type 925 has 1 inputs but expects at least 2."</code>
I have a ConvLSTM layer in my model with a <code>cur_layer</code>. And here is whats inside.</p>
<pre><code>class ConvLSTM(nn.Module):
def __init__(self, input_size, input_dim, hidden_dim, kernel_size, num_layers,
#I cut out some of the init
for i in range(0, self.num_layers):
cur_input_dim = self.input_dim if i == 0 else self.hidden_dim[i - 1]
cell_list.append(ConvLSTMCell(input_size=(self.height, self.width),
input_dim=cur_input_dim,
hidden_dim=self.hidden_dim[i],
kernel_size=self.kernel_size[i],
bias=self.bias))
self.cell_list = nn.ModuleList(cell_list)
def forward(self, input_tensor, hidden_state=None):
if not self.batch_first:
# (t, b, c, h, w) -> (b, t, c, h, w)
input_tensor=input_tensor.permute(1, 0, 2, 3, 4)
# Implement stateful ConvLSTM
if hidden_state is not None:
raise NotImplementedError()
else:
hidden_state = self._init_hidden(batch_size=input_tensor.size(0))
layer_output_list = []
last_state_list = []
seq_len = input_tensor.size(1)
cur_layer_input = input_tensor
for layer_idx in range(self.num_layers):
h, c = hidden_state[layer_idx]
output_inner = []
for t in range(seq_len):
h, c = self.cell_list[layer_idx](input_tensor=cur_layer_input[:, t, :, :, :],
cur_state=[h, c])
output_inner.append(h)
layer_output = torch.stack(output_inner, dim=1)
cur_layer_input = layer_output
layer_output = layer_output.permute(1, 0, 2, 3, 4)
layer_output_list.append(layer_output)
last_state_list.append([h, c])
if not self.return_all_layers:
layer_output_list = layer_output_list[-1:]
last_state_list = last_state_list[-1:]
return layer_output_list, last_state_list
</code></pre>
<p>I do not quite understand where is the input that needs to be 2.</p> | <p>Often the coremltools converters will ignore parts of the model that they don't understand. This results in a conversion that is apparently successful, but actually misses portions of the model.</p> | python|pytorch|coreml|coremltools | 0 |
783 | 66,108,871 | How to create a pie chart from a text column data only using pandas dataframe directly? | <p>I have a csv file which I read using</p>
<pre><code>pd.read_csv()
</code></pre>
<p>In the file name I have a column with <em>Female</em> or <em>Male</em> and I would like to create a pie visualization <em>Female</em> or <em>Male</em>. This means, my legend will contain the color and type ( <em>Female</em> or <em>Male</em>) and in the pie chart, there will be the Prozent of each gender.</p>
<p>An example of a data set will be:</p>
<pre><code>['Female', 'Female', 'Female', 'Male', 'Male', 'Female']
</code></pre>
<p>A possible solution is to count the number of <em>Female</em> and the number of <em>Male</em> and then to generate the ṕlot.</p>
<p>Is there a simpler way using the <em>dataframe</em> data directly to generate the pie chart?</p> | <pre><code>s = pd.Series(['Female', 'Female', 'Female', 'Male', 'Male', 'Female'])
s.value_counts(normalize=True).plot.pie(autopct='%.1f %%', ylabel='', legend=True)
</code></pre>
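<p>Reading straight from your csv works the same way. A minimal sketch, assuming the column holding the values is called <code>gender</code> (adjust the name to your file):</p>
<pre><code>import pandas as pd

df = pd.read_csv('file.csv')
df['gender'].value_counts(normalize=True).plot.pie(autopct='%.1f %%', ylabel='', legend=True)
</code></pre>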
<p><a href="https://i.stack.imgur.com/7bQVl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7bQVl.png" alt="enter image description here" /></a></p> | pandas|dataframe | 3 |
784 | 66,276,156 | calculate std() in python | <p>i have a dataframe as below</p>
<pre><code>Res_id Mean_per_year
a 10.4
a 12.4
b 4.4
b 4.5
c 17
d 9
</code></pre>
<p>I would like to calculate the std() using pandas on "mean_per_year" for the same res_id.
I did an aggregate and got</p>
<pre><code>Res_id Mean_per_year_agg
a 10.4,12.4
b 4.4,4.5
c 17
d 9
</code></pre>
<p>but when I applied std() on mean_per_year_agg it does not work.
My code:</p>
<pre><code>data = (data.groupby(['res_id'])
.agg({'mean_per_year': lambda x: x.tolist()})
.rename({'res_id' : 'mean_per_year'},axis=1)
.reset_index())
data['std'] = data.groupby('res_id')['mean_per_year_agg'].std()
</code></pre>
<p><strong>Error</strong></p>
<pre><code>/usr/local/lib/python3.6/dist-packages/pandas/core/internals/blocks.py in __init__(self, values, placement, ndim)
129 if self._validate_ndim and self.ndim and len(self.mgr_locs) != len(self.values):
130 raise ValueError(
--> 131 f"Wrong number of items passed {len(self.values)}, "
132 f"placement implies {len(self.mgr_locs)}"
133 )
ValueError: Wrong number of items passed 0, placement implies 1
</code></pre>
<p>Thanks for any help</p> | <p>You simply want <code>data.groupby('res_id')["Mean_per_year"].std()</code> on your original dataframe (remove the whole <code>aggregate</code> business).</p> | python|pandas|std | 1 |
785 | 66,243,395 | Converting values of old columns into new columns, and using old values of paired columns as values in the new columns | <p>I've been tasked with cleaning data from a mobile application designed by a charity</p>
<p>In one section, a user's Q/A app session is represented by a row. This section consists of repeated question/answer field pairs, where one field represents the question asked and the field next to it represents the corresponding answer. Together, each question/field and answer column pair represents a unique question with its answer.</p>
<p>Starting data</p>
<pre><code>_id answers.0.answer answers.0.fieldName answers.1.answer answers.1.fieldName
5f27f29c362a380d3f9a9e46 1.0 enjoyment 20.0 affectedkneeside
5f27f2ac362a380d3f9a9e4b 0.0 fatigue2weeks 40.0 avoidexercise
5f27f4d4362a380d3f9a9e52 1.0 height 50.0 painlocationknee
</code></pre>
<p><a href="https://i.stack.imgur.com/gCyzI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gCyzI.png" alt="enter image description here" /></a></p>
<p>I have been asked to reformat the section so each question forms a column, the corresponding answer a field in that column</p>
<p>The ideal output</p>
<pre><code>_id avoidexercise enjoyment fatigue2weeks height
5f27f29c362a380d3f9a9e46 1.0 yes 20.0 120.0
5f27f2ac362a380d3f9a9e4b 0.0 no 40.0 180.0
5f27f4d4362a380d3f9a9e52 1.0 yes 50.0 150.0
</code></pre>
<p>My plan is to create many pivot tables from each of the Q/A pairs of columns, then concatenate them (outer join), then inner join to remove duplications</p>
<p>However, the original dataframe contains a mixture of numeric and object datatypes</p>
<p>Therefore, only some question/answer column pairs appear to be converting to pivot tables. I have tried using various aggregate functions</p>
<pre><code>p1 = ur.pivot_table(index=['_id'],columns= ['answers.0.fieldName'],values=['answers.0.answer'],aggfunc=lambda x: ' '.join(x))
p2 = ur.pivot_table(index=['_id'],columns= ['answers.1.fieldName'],values=['answers.1.answer'],aggfunc=lambda x: ' '.join(x))
p3 = ur.pivot_table(index=['_id'],columns= ['answers.2.fieldName'],values=['answers.2.answer'],aggfunc=lambda x: ' '.join(x))
</code></pre>
<p>I have also tried another lambda function</p>
<pre><code>p1 = ur.pivot_table(index=['_id'],columns= ['answers.0.fieldName'],values=['answers.0.answer'],aggfunc=lambda x: ' '.join(str(v) for v in x)
</code></pre>
<p>The furthest I have got so far is to run pivots with standard mean aggfunc</p>
<pre><code>p1 = ur.pivot_table(index=['_id'],columns=['answers.0.fieldName'],values=['answers.0.answer'])
ps = [p1,p2,p3]
c = pd.concat(ps)
</code></pre>
<p>Then attempting to remove merge rows and columns</p>
<pre><code>df = c.sum(axis=1, level=1, skipna=False)
g = df.groupby('_id').agg(np.sum)
</code></pre>
<p>This returns a dataframe with the right shape</p>
<p>However, it loses the values in the object columns, and I'm not sure how accurate all the numeric columns are</p>
<p>To overcome this problem, I was considering converting as much data as I can into numeric</p>
<pre><code>c4 = c.apply(pd.to_numeric, errors='ignore').info()
</code></pre>
<p>Then splitting the combined pivot table dataframe into numeric and object type</p>
<pre><code>nu = ['int16', 'int32', 'int64', 'float16', 'float32', 'float64']
cndf = c4.select_dtypes(include=nu)
o = ['object', 'bool', 'datetime64', 'category']
codf = c4.select_dtypes(include=o)
</code></pre>
<p>And running through the same .sum and groupby operations as above on the numeric dataframe</p>
<pre><code>n1 = cndf.sum(axis=1, level=1, skipna=False)
n2 = n1.groupby('_id').agg(np.sum)
</code></pre>
<p>However, this still leaves the challenge of dealing with the object columns</p> | <p>This is another, more elegant solution, but again it does not deal with object columns.</p>
<p>First define the number of question-answer pairs:</p>
<pre><code>num_answers = 2 #Following your 'Starting data' in the question
</code></pre>
<p>Then use the following couple of lines to obtain a dataframe as required:</p>
<pre><code>import pandas as pd
df2 = pd.concat([pd.pivot_table(df1, index=['_id'], columns= ['answers.{}.fieldName'.format(i)], values=['answers.{}.answer'.format(i)]) for i in range(num_answers)], axis = 1).fillna('N/A')
df2.columns = [col[1] for col in df2.columns]
</code></pre>
<p>Here df1 is assumed to be the dataframe with the starting data.</p>
<p>As you might have noticed, 'N/A' is present in cells where the particular id has no recorded answer for that particular field.</p>
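<p>If the object-column limitation matters for your data, one possible tweak (a sketch, untested against the real frame) is to pass <code>aggfunc='first'</code> so non-numeric answers survive the pivot instead of being dropped by the default mean aggregation; the output shown below is still from the version above:</p>
<pre><code>df2 = pd.concat([pd.pivot_table(df1, index=['_id'],
                                columns=['answers.{}.fieldName'.format(i)],
                                values=['answers.{}.answer'.format(i)],
                                aggfunc='first')
                 for i in range(num_answers)], axis=1).fillna('N/A')
df2.columns = [col[1] for col in df2.columns]
</code></pre>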
<p>Assuming IDs of [0, 1, 2] for the three rows respectively, the output df2 for your 'Starting data' would look like this:</p>
<pre><code> affectedkneeside avoidexercise height painlocationknee vomitflag weight
_id
0 N/A 0 N/A N/A 0 N/A
1 N/A N/A 156 N/A N/A 54
2 1 N/A N/A 3 N/A N/A
</code></pre> | python|pandas|format|schema|data-cleaning | 0 |
786 | 66,244,145 | why does same element of an ndarray have different id in numpy? | <p>As the code below shows, the 3-dimensional ndarray b is a view of the one-dimensional a.<br />
Per my understanding, b[1,0,3] and a[11] should refer to the same object with value 11.<br />
But from the print result, id(a[11]) and id(b[1,0,3]) are different.</p>
<p>Doesn't id represent the memory address of an object?<br />
If yes, why are the memory addresses different for the same object?</p>
<pre><code>import numpy as np
a = np.arange(16)
b = a.reshape(2,2,4)
print(a)
print(b)
print(a[11])
print(b[1,0,3])
[ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15]
[[[ 0 1 2 3]
[ 4 5 6 7]]
[[ 8 9 10 11]
[12 13 14 15]]]
11
11
print(hex(id(a[11])))
print(hex(id(b[1,0,3])))
0x23d456cecf0
0x23d456ce950
</code></pre> | <p>When you apply <code>reshape</code> it doesn't necessarily store <code>b</code> in the same memory location. Refer to the <a href="https://numpy.org/doc/stable/reference/generated/numpy.reshape.html" rel="nofollow noreferrer">documentation</a>, which says:</p>
<blockquote>
<p>Returns:
reshaped_array : ndarray</p>
</blockquote>
<blockquote>
<p>This will be a new view object if possible; otherwise, it will be a copy. Note there is no guarantee of the memory layout (C- or Fortran- contiguous) of the returned array.</p>
</blockquote>
<p>Hence, even though both of them have the same value (i.e 11), they are stored in different memory locations.</p> | python|numpy|multidimensional-array | 1 |
787 | 66,186,839 | Find index of most recent DateTime in Pandas dataframe | <p>I have a dataframe that includes a column of datetimes, past and future. Is there a way to find the index of the most recent datetime?</p>
<p>I can <em>not</em> assume that each datetime is unique, nor that they are in order.</p>
<p>In the event that the most recent datetime is not unique, all the relevant indices should be returned.</p>
<pre><code>import pandas as pd
from datetime import datetime as dt
df = pd.read_excel('file.xlsx')
# Create datetime column
df['datetime'] = pd.to_datetime(df['Date'].astype(str) + ' ' + df['Time'].astype(str))
print(df['datetime'])
Out[1]:
0 2021-02-13 09:00:00
1 2021-02-13 11:00:00
2 2021-02-13 12:00:00
3 2021-02-13 15:00:00
4 2021-02-13 18:00:00
5 2021-02-13 16:45:00
6 2021-02-13 19:00:00
7 2021-02-13 19:00:00
8 2021-02-13 20:30:00
9 2021-02-14 01:30:00
Name: datetime, dtype: datetime64[ns]
</code></pre> | <p><strong>unique datetimes...</strong></p>
<p>a convenient option would be to use <a href="https://pandas.pydata.org/docs/reference/api/pandas.Index.get_loc.html" rel="nofollow noreferrer"><code>get_loc</code></a> method of <a href="https://pandas.pydata.org/docs/reference/api/pandas.DatetimeIndex.html" rel="nofollow noreferrer"><code>DatetimeIndex</code></a>. Ex:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'datetime': pd.to_datetime(['2021-01-01', '2021-02-01', '2021-02-14'])})
# today is 2021-2-13, so most recent would be 2021-2-1:
pd.DatetimeIndex(df['datetime']).get_loc(pd.Timestamp('now'), method='pad')
# 1
</code></pre>
<p>You could also set the datetime column as index, simplifying the above to <code>df.index.get_loc(pd.Timestamp('now'), method='pad')</code></p>
<hr />
<p><strong>duplicates in datetime column...</strong></p>
<p>The datetime index method shown above won't work here. Instead, you can obtain the value first and <em>then</em> get the indices:</p>
<pre><code>df = pd.DataFrame({'datetime': pd.to_datetime(['2021-02-01', '2021-01-01', '2021-02-01', '2021-02-14'])})
# most recent datetime would be 2021-2-1 at indices 0 and 2
mr_date = df['datetime'].loc[(df['datetime'] - pd.Timestamp('now') <= pd.Timedelta(0))].max()
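# (equivalent and arguably clearer: mr_date = df['datetime'][df['datetime'] <= pd.Timestamp('now')].max())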
mr_idx = df.index[df['datetime'] == mr_date]
mr_idx
# Int64Index([0, 2], dtype='int64')
</code></pre> | python|pandas|datetime | 3 |
788 | 52,452,491 | Stop Crawling urls if class does not exist in table beautifulsoup and pandas | <p>I'm using a list of URLs in a CSV file to crawl and extract data from an HTML table. I want to stop going through the URLs when 'style3' is not present in the table.
I've created a function that will return False if it's not there, but I'm confused as to how to actually implement it.</p>
<p>Any suggestions for a solution or directions to literature will help greatly, as I've not been able to find anything on here to help me figure it out.</p>
<p>I've included one URL with 'style3' and one without. Thanks for any and all help.</p>
<p><a href="http://www.wvlabor.com/new_searches/contractor_RESULTS.cfm?wvnumber=WV057808&contractor_name=&dba=&city_name=&County=&Submit3=Search+Contractors" rel="nofollow noreferrer">http://www.wvlabor.com/new_searches/contractor_RESULTS.cfm?wvnumber=WV057808&contractor_name=&dba=&city_name=&County=&Submit3=Search+Contractors</a>
<a href="http://www.wvlabor.com/new_searches/contractor_RESULTS.cfm?wvnumber=WV057924&contractor_name=&dba=&city_name=&County=&Submit3=Search+Contractors" rel="nofollow noreferrer">http://www.wvlabor.com/new_searches/contractor_RESULTS.cfm?wvnumber=WV057924&contractor_name=&dba=&city_name=&County=&Submit3=Search+Contractors</a></p>
<pre><code>import csv
from urllib.request import urlopen
import pandas as pd
from bs4 import BeautifulSoup as BS
def license_exists(soup):
contents = []
with open('WV_urls.csv','r') as csvf:
urls = csv.reader(csvf)
for url in urls:
if soup(class_='style3'):
return True
else:
return False
contents = []
more = True
while more:
df = pd.DataFrame(columns=['WV Number', 'Company', 'DBA', 'Address', 'City', 'State', 'Zip','County', 'Phone', 'Classification*', 'Expires']) #initialize the data frame with columns
with open('WV_urls.csv','r') as csvf: # Open file in read mode
urls = csv.reader(csvf)
for url in urls:
contents.append(url) # Add each url to list contents
for url in contents: # Parse through each url in the list.
page = urlopen(url[0]).read()
df1, header = pd.read_html(page,header=0)#reading with header
more = license_exists(?????)
df=df.append(df1) # append to dataframe
df.to_csv('WV_Licenses_Daily.csv', index=False)
</code></pre> | <p>You can do this with a single for-loop and break (there's no need for the <code>while more</code>):</p>
<pre><code>lst = []
with open('WV_urls.csv','r') as csvf: # Open file in read mode
urls = csv.reader(csvf)
for url in urls:
page = urlopen(url[0]).read()
df1, header = pd.read_html(page, header=0)
if license_exists(BS(page, 'html.parser')):
# if the license is present we don't want to parse any more urls.
# Note: we don't append this last result (should we?)
break
lst.append(df1)
df = pd.concat(lst)
df.to_csv('WV_Licenses_Daily.csv', index=False)
</code></pre>
<p><em>Note: this creates the final DataFrame from a list of the DataFrames; this is more efficient than appending each time.</em></p> | python|pandas|dataframe|beautifulsoup | 1
789 | 52,730,645 | Keras: Callbacks Requiring Validation Split? | <p>I'm working on a multi-class classification problem with Keras 2.1.3 and a Tensorflow backend. I have two numpy arrays, <code>x</code> and <code>y</code> and I'm using <code>tf.data.Dataset</code> like this:</p>
<pre><code>dataset = tf.data.Dataset.from_tensor_slices(({"sequence": x}, y))
dataset = dataset.apply(tf.contrib.data.batch_and_drop_remainder(self.batch_size))
dataset = dataset.repeat()
xt, yt = dataset.make_one_shot_iterator().get_next()
</code></pre>
<p>Then I make my Keras model (omitted for brevity), compile, and fit:</p>
<pre><code>model.compile(
loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'],
)
model.fit(xt, yt, steps_per_epoch=100, epochs=10)
</code></pre>
<p>This works perfectly well. But when I add in callbacks, I run into issues. Specifically, if I do this:</p>
<pre><code>callbacks = [
tf.keras.callbacks.ModelCheckpoint("model_{epoch:04d}_{val_acc:.4f}.h5",
monitor='val_acc',
verbose=1,
save_best_only=True,
mode='max'),
tf.keras.callbacks.TensorBoard(os.path.join('.', 'logs')),
tf.keras.callbacks.EarlyStopping(monitor='val_acc', patience=5, min_delta=0, mode='max')
]
model.fit(xt, yt, steps_per_epoch=10, epochs=100, callbacks=callbacks)
</code></pre>
<p>I get:</p>
<pre><code>KeyError: 'val_acc'
</code></pre>
<p>Also, if I include <code>validation_split=0.1</code> in my <code>model.fit(...)</code> call, I'm told:</p>
<p><code>ValueError: If your data is in the form of symbolic tensors, you cannot use validation_split</code>.</p>
<p>What is the normal way to use callbacks and validation splits with <code>tf.data.Dataset</code> (tensors)?</p>
<p>Thanks!</p> | <p>Using the tensorflow keras API, you can provide a <code>Dataset</code> for training and another for validation.</p>
<p>First some imports</p>
<pre><code>import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Dense
import numpy as np
</code></pre>
<p>define the function which will split the numpy arrays into training/val</p>
<pre><code>def split(x, y, val_size=50):
idx = np.random.choice(x.shape[0], size=val_size, replace=False)
not_idx = list(set(range(x.shape[0])).difference(set(idx)))
x_val = x[idx]
y_val = y[idx]
x_train = x[not_idx]
y_train = y[not_idx]
return x_train, y_train, x_val, y_val
</code></pre>
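<p>(If an extra dependency is acceptable, an equivalent split could also be obtained from scikit-learn; a quick sketch:)</p>
<pre><code>from sklearn.model_selection import train_test_split
x_train, x_val, y_train, y_val = train_test_split(x, y, test_size=50)
</code></pre>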
<p>define numpy arrays and the train/val tensorflow <code>Datasets</code></p>
<pre><code>x = np.random.randn(150, 9)
y = np.random.randint(0, 10, 150)
x_train, y_train, x_val, y_val = split(x, y)
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, tf.one_hot(y_train, depth=10)))
train_dataset = train_dataset.batch(32).repeat()
val_dataset = tf.data.Dataset.from_tensor_slices((x_val, tf.one_hot(y_val, depth=10)))
val_dataset = val_dataset.batch(32).repeat()
</code></pre>
<p>Make the model (notice we are using the tensorflow keras API)</p>
<pre><code>model = keras.models.Sequential([Dense(64, input_shape=(9,), activation='relu'),
Dense(64, activation='relu'),
Dense(10, activation='softmax')
])
model.compile(optimizer='Adam', loss='categorical_crossentropy',
metrics=['accuracy'])
model.summary()
model.fit(train_dataset,
epochs=10,
steps_per_epoch=int(100/32)+1,
validation_data=val_dataset,
validation_steps=2)
</code></pre>
<p>and the model trains, kind of (output):</p>
<pre><code>_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense (Dense) (None, 64) 640
_________________________________________________________________
dense_1 (Dense) (None, 64) 4160
_________________________________________________________________
dense_2 (Dense) (None, 10) 650
=================================================================
Total params: 5,450
Trainable params: 5,450
Non-trainable params: 0
_________________________________________________________________
Epoch 1/10
4/4 [==============================] - 0s 69ms/step - loss: 2.3170 - acc: 0.1328 - val_loss: 2.3877 - val_acc: 0.0712
Epoch 2/10
4/4 [==============================] - 0s 2ms/step - loss: 2.2628 - acc: 0.2500 - val_loss: 2.3850 - val_acc: 0.0712
Epoch 3/10
4/4 [==============================] - 0s 2ms/step - loss: 2.2169 - acc: 0.2656 - val_loss: 2.3838 - val_acc: 0.0712
Epoch 4/10
4/4 [==============================] - 0s 2ms/step - loss: 2.1743 - acc: 0.3359 - val_loss: 2.3830 - val_acc: 0.0590
Epoch 5/10
4/4 [==============================] - 0s 2ms/step - loss: 2.1343 - acc: 0.3594 - val_loss: 2.3838 - val_acc: 0.0590
Epoch 6/10
4/4 [==============================] - 0s 2ms/step - loss: 2.0959 - acc: 0.3516 - val_loss: 2.3858 - val_acc: 0.0590
Epoch 7/10
4/4 [==============================] - 0s 4ms/step - loss: 2.0583 - acc: 0.3750 - val_loss: 2.3887 - val_acc: 0.0590
Epoch 8/10
4/4 [==============================] - 0s 2ms/step - loss: 2.0223 - acc: 0.4453 - val_loss: 2.3918 - val_acc: 0.0747
Epoch 9/10
4/4 [==============================] - 0s 2ms/step - loss: 1.9870 - acc: 0.4609 - val_loss: 2.3954 - val_acc: 0.1059
Epoch 10/10
4/4 [==============================] - 0s 2ms/step - loss: 1.9523 - acc: 0.4609 - val_loss: 2.3995 - val_acc: 0.1059
</code></pre>
<h2>Callbacks</h2>
<p>Adding callbacks also works, </p>
<pre><code>callbacks = [ tf.keras.callbacks.ModelCheckpoint("model_{epoch:04d}_{val_acc:.4f}.h5",
monitor='val_acc',
verbose=1,
save_best_only=True,
mode='max'),
tf.keras.callbacks.TensorBoard('./logs'),
tf.keras.callbacks.EarlyStopping(monitor='val_acc', patience=5, min_delta=0, mode='max')
]
model.fit(train_dataset, epochs=10, steps_per_epoch=int(100/32)+1, validation_data=val_dataset,
validation_steps=2, callbacks=callbacks)
</code></pre>
<p>Output:</p>
<pre><code>Epoch 1/10
4/4
[==============================] - 0s 59ms/step - loss: 2.3274 - acc: 0.1094 - val_loss: 2.3143 - val_acc: 0.0833
Epoch 00001: val_acc improved from -inf to 0.08333, saving model to model_0001_0.0833.h5
Epoch 2/10
4/4 [==============================] - 0s 2ms/step - loss: 2.2655 - acc: 0.1094 - val_loss: 2.3204 - val_acc: 0.1389
Epoch 00002: val_acc improved from 0.08333 to 0.13889, saving model to model_0002_0.1389.h5
Epoch 3/10
4/4 [==============================] - 0s 5ms/step - loss: 2.2122 - acc: 0.1250 - val_loss: 2.3289 - val_acc: 0.1111
Epoch 00003: val_acc did not improve from 0.13889
Epoch 4/10
4/4 [==============================] - 0s 2ms/step - loss: 2.1644 - acc: 0.1953 - val_loss: 2.3388 - val_acc: 0.0556
Epoch 00004: val_acc did not improve from 0.13889
Epoch 5/10
4/4 [==============================] - 0s 2ms/step - loss: 2.1211 - acc: 0.2734 - val_loss: 2.3495 - val_acc: 0.0556
Epoch 00005: val_acc did not improve from 0.13889
Epoch 6/10
4/4 [==============================] - 0s 4ms/step - loss: 2.0808 - acc: 0.2969 - val_loss: 2.3616 - val_acc: 0.0556
Epoch 00006: val_acc did not improve from 0.13889
Epoch 7/10
4/4 [==============================] - 0s 2ms/step - loss: 2.0431 - acc: 0.2969 - val_loss: 2.3749 - val_acc: 0.0712
Epoch 00007: val_acc did not improve from 0.13889
</code></pre> | python|tensorflow|keras | 2 |
790 | 46,421,521 | Passing operators as functions to use with Pandas data frames | <p>I am selecting data from a series on the basis of a threshold.</p>
<pre><code>>>> s = pd.Series(np.random.randn(5))
>>> s
0 -0.308855
1 -0.031073
2 0.872700
3 -0.547615
4 0.633501
dtype: float64
>>> cfg = {'threshold' : 0 , 'op' : 'less' }
>>> ops = {'less' : '<', 'more': '>' , 'equal': '==' , 'not equal' : '!='}
>>> ops[cfg['op']]
'<'
>>> s[s < cfg['threshold']]
0 -0.308855
1 -0.031073
3 -0.547615
dtype: float64
</code></pre>
<p>I want to use ops[cfg['op']] in the last line of code, instead of '<'. I am willing to change the keys or values of the ops dict if required (like -lt instead of <). How can this be done?</p> | <p>I'm all about @cᴏʟᴅsᴘᴇᴇᴅ's answer and @Zero's linked Q&A...<br>
But here is an alternative with <code>numexpr</code> </p>
<pre><code>import numexpr as ne
s[ne.evaluate('s {} {}'.format(ops[cfg['op']], cfg['threshold']))]
0 -0.308855
1 -0.031073
3 -0.547615
Name: A, dtype: float64
</code></pre>
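<p>For completeness, since the question allows changing the dict values, here is a small sketch that maps the keys straight to callables from the <code>operator</code> module; these work directly on the Series with no string evaluation:</p>
<pre><code>import operator

ops = {'less': operator.lt, 'more': operator.gt,
       'equal': operator.eq, 'not equal': operator.ne}

s[ops[cfg['op']](s, cfg['threshold'])]
</code></pre>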
<hr>
<p>I reopened this question after having been closed as a dup of <a href="https://stackoverflow.com/q/18591778/2336654"><strong>How to pass an operator to a python function?</strong></a> </p>
<p>The question and answers are great and I showed my appreciation with up votes. </p>
<p>Asking in the context of a <code>pandas.Series</code> opens it up to using answers that include <code>numpy</code> and <code>numexpr</code>. Whereas trying to answer the dup target with this answer would be pure nonsense.</p> | python|pandas|conditional|series|dynamic-execution | 4 |
791 | 58,546,864 | How to use dropna to drop columns on a subset of columns in Pandas | <p>I want to use Pandas' <code>dropna</code> function on <code>axis=1</code> to drop columns, but only on a subset of columns with some <code>thresh</code> set. More specifically, I want to pass an argument on which columns to ignore in the <code>dropna</code> operation. How can I do this? Below is an example of what I've tried. </p>
<pre><code>import pandas as pd
df = pd.DataFrame({
'building': ['bul2', 'bul2', 'cap1', 'cap1'],
'date': ['2019-01-01', '2019-02-01', '2019-01-01', '2019-02-01'],
'rate1': [301, np.nan, 250, 276],
'rate2': [250, 300, np.nan, np.nan],
'rate3': [230, np.nan, np.nan, np.nan],
'rate4': [230, np.nan, 245, np.nan],
})
# Only retain columns with more than 3 non-missing values
df.dropna(1, thresh=3)
building date rate1
0 bul2 2019-01-01 301.0
1 bul2 2019-02-01 NaN
2 cap1 2019-01-01 250.0
3 cap1 2019-02-01 276.0
# Try to do the same but only apply dropna to the subset of [building, date, rate1, and rate2],
# (meaning do NOT drop rate3 and rate4)
df.dropna(1, thresh=3, subset=['building', 'date', 'rate1', 'rate2'])
KeyError: ['building', 'date', 'rate1', 'rate2']
</code></pre> | <pre><code># Desired subset of columns against which to apply `dropna`.
cols = ['building', 'date', 'rate1', 'rate2']
# Apply `dropna` and see which columns remain.
filtered_cols = df.loc[:, cols].dropna(axis=1, thresh=3).columns
# Use a conditional list comprehension to determine which columns were dropped.
dropped_cols = [col for col in cols if col not in filtered_cols]
# Use a conditional list comprehension to display all columns other than those that were dropped.
new_cols = [col for col in df if col not in dropped_cols]
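# (equivalently, drop them in one step: df.drop(columns=dropped_cols))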
>>> df[new_cols]
building date rate1 rate3 rate4
0 bul2 2019-01-01 301.0 230.0 230.0
1 bul2 2019-02-01 NaN NaN NaN
2 cap1 2019-01-01 250.0 NaN 245.0
3 cap1 2019-02-01 276.0 NaN NaN
</code></pre> | python|pandas | 4 |
792 | 58,331,506 | How to create a column that groups all values from column into list that fall between values of different columns in pandas | <p>I have two data frames that look like this:</p>
<pre><code>unit start stop
A 0.0 8.15
B 9.18 11.98
A 13.07 13.80
B 13.82 15.00
A 16.46 17.58
</code></pre>
<p>df_2</p>
<pre><code>time other_data
1 5
2 5
3 6
4 10
5 5
6 2
7 1
8 5
9 5
10 7
11 5
12 5
13 5
14 10
15 5
16 4
17 4
18 4
</code></pre>
<p>How do I append all values from df_2.other_data where df_2.time falls in between df_1.start and df_1.stop into a list (or array)? For example, all the values of df_2.other_data where df_2.time falls between df_1.start and df_1.stop for row 1 would be [5, 5, 6, 10, 5, 2, 1, 5].</p>
<p>The desired df will look as below.</p>
<pre><code>unit start stop other_data_list
A 0.0 8.15 [5,5,6,10,5,2,1,5]
B 9.18 11.98 [5,7,5]
A 13.07 13.80 [5]
B 13.82 15.00 [5,10,5]
A 16.46 17.58 [4,4]
</code></pre> | <p>Use the following:</p>
<pre><code>df1['other'] = df1.apply(lambda row : df2['other_data'].loc[(df2['time'] > row['start']) & (df2['time'] < row['stop'])].tolist(), axis=1)
</code></pre>
<p>Output is, using your sample dataframes:</p>
<pre><code> unit start stop other
0 A 0.00 8.15 [5, 5, 6, 10, 5, 2, 1, 5]
1 B 9.18 11.98 [7, 5]
2 A 13.07 13.80 []
3 B 13.82 15.00 [10]
4 A 16.46 17.58 [4]
</code></pre>
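<p>A slightly different way to express the same row-wise selection, using <code>Series.between</code> (which is inclusive on both ends by default, in case the boundary values should be kept):</p>
<pre><code>df1['other'] = df1.apply(
    lambda row: df2.loc[df2['time'].between(row['start'], row['stop']), 'other_data'].tolist(),
    axis=1)
</code></pre>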
<p>For each row of <code>df1</code>, with <code>apply</code> you can select the desired values in <code>df2</code>. Convert the selection to a <code>list</code> using the <code>tolist()</code> method of pandas.Series, otherwise you will get a <code>ValueError: Wrong number of items passed</code>.</p> | python|pandas | 1 |
793 | 58,513,256 | Calculating a daily average in Python (One day has multiple values for one variable) | <p>I have one CSV dataset that has several variables as a daily time series, but there are multiple values for one day. I need to calculate daily averages of temperatures from these multiple values for the entire period.</p>
<p>CSV file is stored here: <a href="https://drive.google.com/file/d/1zbojEilckwg5rzNfWtHVF-wu1f8d9m9J/view?usp=sharing" rel="nofollow noreferrer">https://drive.google.com/file/d/1zbojEilckwg5rzNfWtHVF-wu1f8d9m9J/view?usp=sharing</a></p>
<p>When you filter daily, you can see 27 different values for each day.</p>
<p>I can filter for each day and take averages like:</p>
<pre><code>inpcsvFile = 'C:/.../daily average - one day has multiple values.csv'
df = pd.read_csv(inpcsvFile)
df2=df[df['Dates']=='1/1/1971 0:00']
df3=df2.append(df2.agg(['mean']))
</code></pre>
<p>But how can I take daily averages for the entire period?</p> | <p>Here is the solution, thanks to <a href="https://stackoverflow.com/questions/24082784/pandas-dataframe-groupby-datetime-month">pandas dataframe groupby datetime month</a>.
Here I used "D" instead of "M". </p>
<pre><code>import pandas as pd
inpcsvFile = 'C:/.../daily average - one day has multiple values.csv'
df = pd.read_csv(inpcsvFile)
df['Dates'] = df['Dates'].astype(str) #convert entire "Dates" Column to string
df['Dates']=pd.to_datetime(df['Dates']) #convert entire "Dates" Column to datetime format this time
df.index=df['Dates'] #replace index with entire "Dates" Column to work with groupby function
df3=df.groupby(pd.TimeGrouper(freq='D')).mean() #take daily average of multiple values
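#note: pd.TimeGrouper was removed in later pandas versions; there the equivalent is
#df3=df.groupby(pd.Grouper(freq='D')).mean() or simply df3=df.resample('D').mean()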
</code></pre> | python|pandas|datetime | 0 |
794 | 58,230,879 | Get the last value in multiple columns Pandas | <p>I have a dataset where I want to split a column and, for each row, keep only the last non-empty string (from the multiple resulting columns). My initial table looks like this.</p>
<p><a href="https://i.stack.imgur.com/PXWqx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PXWqx.png" alt="Initial dataframe"></a></p>
<p>Now I split like this to have multiple columns.</p>
<pre><code>df['name'].str.split(" ", expand = True)
</code></pre>
<p>And the result is the following.
<a href="https://i.stack.imgur.com/6Bhoj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6Bhoj.png" alt="splitted data"></a></p>
<p>I would like to get the last non-None value. Here is the output I would like to have.</p>
<p><a href="https://i.stack.imgur.com/ImF8Q.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ImF8Q.png" alt="enter image description here"></a></p> | <p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.split.html" rel="nofollow noreferrer"><code>Series.str.split</code></a> by default arbitrary whitespace, so no parameter and then select last value of lists by slicing <code>[-1]</code>:</p>
<pre><code>df['last'] = df['name'].str.split().str[-1]
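# e.g. "John Michael Smith" -> "Smith"; a single-word name is returned unchanged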
</code></pre> | python|pandas|split | 1 |
795 | 44,723,716 | plot rectangular wave python | <p>I have a dataframe that looks like this:</p>
<p>df:</p>
<pre><code> Start End Value
0 0 98999 0
1 99000 101999 1
2 102000 155999 0
3 156000 161999 1
4 162000 179999 0
</code></pre>
<p>I would like to plot a rectangular wave that goes from "Start" to "End" and that has the value in "Value".
Could you please help me? Thanks!</p>
<p>My attempt:</p>
<pre><code> for j in range(0,df.shape[0]):
plt.figure()
plt.plot(range(df['Start'].iloc[j],df['End'].iloc[j]), np.ones(df['End'].iloc[j]-df['Start'].iloc[j])*df['Value'].iloc[j])
</code></pre>
<p>but it plots in different figures...</p> | <p>Maybe you can use <a href="https://matplotlib.org/devdocs/api/_as_gen/matplotlib.pyplot.step.html" rel="nofollow noreferrer"><code>plt.step</code></a> to plot a stepwise function:</p>
<pre><code>import pandas as pd
from matplotlib import pyplot as plt
df = pd.DataFrame({'End': [98999, 101999, 155999, 161999, 179999],
'Start': [0, 99000, 102000, 156000, 162000],
'Value': [0, 1, 0, 1, 0]})
plt.step(df.Start, df.Value, where='post')
</code></pre>
<p>This will plot every <code>Value</code> from its corresponding <code>Start</code> value to the next <code>Start</code>.</p>
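<p>Note that <code>where='post'</code> draws each value only up to the next <code>Start</code>, so the plot stops at the final <code>Start</code>. If the wave should also run out to the last <code>End</code>, one way (a small sketch) is to append that endpoint before plotting:</p>
<pre><code>x = list(df.Start) + [df.End.iloc[-1]]
y = list(df.Value) + [df.Value.iloc[-1]]
plt.step(x, y, where='post')
</code></pre>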
<p><a href="https://i.stack.imgur.com/XchOj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XchOj.png" alt="step plot"></a></p> | python|pandas|dataframe | 1 |
796 | 44,637,952 | How can I convert pandas date time xticks to readable format? | <p>I am plotting a time series with a date time index. The plot needs to be a particular size for the journal format. Consequently, the xticks are not readable since they span many years.</p>
<p>Here is a data sample</p>
<pre><code>2013-02-10 0.7714492098202259
2013-02-11 0.7709101833765016
2013-02-12 0.7704911332770049
2013-02-13 0.7694975914173087
2013-02-14 0.7692108921323576
</code></pre>
<p>The data is a series with a datetime index and spans from 2013 to 2016. I use</p>
<pre><code>data.plot(ax = ax)
</code></pre>
<p>to plot the data.</p>
<p>How can I format my xticks to read like <code>'13</code> instead of <code>2013</code>?</p> | <p>It seems there is some incompatibility between pandas and matplotlib formatters/locators when it comes to dates. See e.g. those questions:</p>
<ul>
<li><p><a href="https://stackoverflow.com/questions/42880333/pandas-plot-modify-major-and-minor-xticks-for-dates">Pandas plot - modify major and minor xticks for dates</a></p></li>
<li><p><a href="https://stackoverflow.com/questions/44213781/pandas-dataframe-line-plot-display-date-on-xaxis">Pandas Dataframe line plot display date on xaxis</a></p></li>
</ul>
<p>I'm not entirely sure why it still works in some cases to use matplotlib formatters and not in others. However, because of those issues, the bullet-proof solution is to use matplotlib to plot the graph instead of the pandas plotting function.
This allows you to use locators and formatters just as seen in the <a href="https://matplotlib.org/examples/api/date_demo.html" rel="nofollow noreferrer">matplotlib example</a>.</p>
<p>Here the solution to the question would look as follows:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
import numpy as np
dates = pd.date_range("2013-01-01", "2017-06-20" )
y = np.cumsum(np.random.normal(size=len(dates)))
s = pd.Series(y, index=dates)
fig, ax = plt.subplots()
ax.plot(s.index, s.values)
ax.xaxis.set_major_locator(mdates.YearLocator())
ax.xaxis.set_minor_locator(mdates.MonthLocator())
yearFmt = mdates.DateFormatter("'%y")
ax.xaxis.set_major_formatter(yearFmt)
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/njIDu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/njIDu.png" alt="enter image description here"></a></p> | pandas|matplotlib | 1 |
797 | 60,808,630 | How to remove specific character from a string in pyspark? | <p>I am trying to remove a specific character from a string but am not able to find a proper solution.
Could you please help me with how to do this?</p>
<p>I am loading the data into a dataframe using pyspark. One of the columns has an extra character which I want to remove.</p>
<p>Example:</p>
<pre><code>|"\""warfarin was discontinued 3 days ago and xarelto was started when the INR was 2.7, and now the INR is 5.8, should Xarelto be continued or stopped?"|
</code></pre>
<p>But in the result I want only:</p>
<pre><code>|"warfarin was discontinued 3 days ago and xarelto was started when the INR was 2.7, and now the INR is 5.8, should Xarelto be continued or stopped?"|
</code></pre>
<p>I am using below code to write dataframe into file:</p>
<pre><code>df.repartition(1).write.format('com.databricks.spark.csv').mode('overwrite').save(output_path, escape='\"', sep='|',header='True',nullValue=None)
</code></pre> | <p>You can use some other escape character instead of '\'; you can change this to anything else. If you have the option to save the file in any other format, prefer parquet (or ORC) over CSV.</p> | python|pandas|dataframe|pyspark | 0
798 | 60,996,336 | how distinguish equivalent graphs in networkx | <p>I have a question regarding graph equivalency.</p>
<p>Suppose that:</p>
<pre><code>import networkx as nx
import numpy as np
def is_isomorphic(graph1, graph2):
G1 = nx.from_numpy_matrix(graph1)
G2 = nx.from_numpy_matrix(graph2)
isomorphic = nx.is_isomorphic(G1,G2, edge_match=lambda x, y: x==y)
return isomorphic
graph1 = np.array([[1, 1, 0],
[0, 2, 1],
[0, 0, 3]])
graph2 = np.array([[1, 0, 1],
[0, 2, 1],
[0, 0, 3]])
graph3 = np.array([[1, 0, 1],
[0, 1, 1],
[0, 0, 2]])
graph4 = np.array([[1, 1, 1],
[0, 1, 0],
[0, 0, 2]])
print(is_isomorphic(graph1,graph2))
# should return True
print(is_isomorphic(graph3,graph4))
# should return False
</code></pre>
<p>The first <code>is_isomorphic(graph1,graph2)</code> should return True since the vertex labels are nothing but dummy variables to me. In the first case, vertex 2 is bonded to 2 different vertices; in the second case, vertex 3 is bonded to 2 different vertices.</p>
<p>The second <code>is_isomorphic(graph3,graph4)</code> should return False since in <code>graph3</code>, vertex 2 is bonded to the same 2 vertices; and in <code>graph4</code>, vertex 1 is bonded to 2 different kinds of vertices.</p>
<p>Is there a pythonic way to solve this problem? The package <code>networkx</code> could be omitted if that makes calculations faster, since I am only interested in the adjacency matrices.
Note: this problem must be scalable to bigger adjacency matrices too.</p> | <p>The following works for your given examples (and hopefully does generally the thing you want):</p>
<pre class="lang-py prettyprint-override"><code>import networkx as nx
import numpy as np
def is_isomorphic(graph1, graph2):
G1 = nx.from_numpy_matrix(graph1)
G2 = nx.from_numpy_matrix(graph2)
# remove selfloops (created by the calls above)
# and add "shadow nodes" for the node types
for node in list(G1.nodes):
G1.remove_edge(node, node)
G1.add_edge(node, "type" + str(graph1[node, node]))
for node in list(G2.nodes):
G2.remove_edge(node, node)
G2.add_edge(node, "type" + str(graph2[node, node]))
isomorphic = nx.is_isomorphic(G1, G2, edge_match=lambda x, y: x == y,
)
return isomorphic
graph1 = np.array([[1, 1, 0],
[0, 2, 1],
[0, 0, 3]])
graph2 = np.array([[1, 0, 1],
[0, 2, 1],
[0, 0, 3]])
graph3 = np.array([[1, 0, 1],
[0, 1, 1],
[0, 0, 2]])
graph4 = np.array([[1, 1, 1],
[0, 1, 0],
[0, 0, 2]])
print(is_isomorphic(graph1, graph2))
# True
print(is_isomorphic(graph3, graph4))
# False
</code></pre> | python|numpy|graph|networkx | 0 |
799 | 71,658,818 | Import multiple CSV files into pandas and merge those based on column values | <p>I have 4 dataframes:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df_inventory_parts = pd.read_csv('inventory_parts.csv')
df_colors = pd.read_csv('colors.csv')
df_part_categories = pd.read_csv('part_categories.csv')
df_parts = pd.read_csv('parts.csv')
</code></pre>
<p>Now I have merged them into 1 new dataframe like:</p>
<pre class="lang-py prettyprint-override"><code>merged = pd.merge(
left=df_inventory_parts,
right=df_colors,
how='left',
left_on='color_id',
right_on='id')
merged = pd.merge(
left=merged,
right=df_parts,
how='left',
left_on='part_num',
right_on='part_num')
merged = pd.merge(
left=merged,
right=df_part_categories,
how='left',
left_on='part_cat_id',
right_on='id')
merged.head(20)
</code></pre>
<p>This gives the correct dataset that I'm looking for. However, I was wondering if there's a shorter or faster way of writing this. Using <code>pd.merge</code> 3 times seems a bit excessive.</p> | <p>You have a pretty clear section of code that does exactly what you want. You want to do three merges, so using merge() three times is adequate rather than excessive.</p>
<p>You can make your code a bit shorter by using the fact that DataFrames have a merge method, so you don't need the left argument. You can also chain the calls, but I would point out that my example does not look as neat and readable as your longer-form code.</p>
<pre><code>merged = df_inventory_parts.merge(
right=df_colors,
how='left',
left_on='color_id',
right_on='id').merge(
right=df_parts,
how='left',
left_on='part_num',
right_on='part_num').merge(
right=df_part_categories,
how='left',
left_on='part_cat_id',
right_on='id')
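# (If more joins are added later, the same pattern generalizes to a loop, e.g.:
#  for right, lk, rk in joins: merged = merged.merge(right, how='left', left_on=lk, right_on=rk),
#  where joins is a list of (frame, left_on, right_on) tuples.)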
</code></pre> | python|pandas|dataframe|csv | 0 |