Unnamed: 0 | id | title | question | answer | tags | score
---|---|---|---|---|---|---|
500 | 63,629,719 | Creating new column by column names | <p>I need to create a new column by using existing ones. I want to assign the column name as the value of the corresponding row if that column has a value (i.e. not <code>NaN</code>). I also need to exclude some columns; for example, in the sample dataframe <code>Column5</code> should be excluded. I couldn't come up with a solution. How can I do it? <strong>Note:</strong> I know there aren't any overlapping values.</p>
<p><strong>Sample DataFrame:</strong></p>
<pre><code> Column1 Column2 Column3 Column4 Column5
0 NaN 1.0 NaN NaN f
1 NaN NaN NaN 5.0 NaN
2 NaN NaN 2.0 NaN c
3 c NaN NaN NaN y
4 NaN NaN NaN NaN x
</code></pre>
<p><strong>Expected DataFrame:</strong></p>
<pre><code> Column1 Column2 Column3 Column4 Column5 newcol
0 NaN 1.0 NaN NaN f Column2
1 NaN NaN NaN 5.0 NaN Column4
2 NaN NaN 2.0 NaN c Column3
3 c NaN NaN NaN y Column1
4 NaN NaN NaN NaN x NaN
</code></pre> | <p>Use <code>DataFrame.notna</code> + <code>DataFrame.dot</code>:</p>
<pre><code>c = df.columns.difference(['Column5'])
df['newcol'] = df[c].notna().dot(c).replace('', np.nan)
</code></pre>
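<p>A self-contained sketch of why this works, on a smaller frame (column names assumed to mirror the sample): <code>notna()</code> yields booleans, and <code>dot()</code> with the column-name index concatenates each name wherever the boolean is True.</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'Column1': [np.nan, np.nan, 'c'],
    'Column2': [1.0, np.nan, np.nan],
    'Column5': ['f', np.nan, 'y'],   # the excluded column
})
c = df.columns.difference(['Column5'])
# True * 'Column2' -> 'Column2', False * 'Column1' -> '', and the
# row-wise sum of strings concatenates, so each row ends up with the
# names of its non-NaN columns (empty strings become NaN afterwards).
df['newcol'] = df[c].notna().dot(c).replace('', np.nan)
print(df['newcol'].tolist())
```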
<p>Result, applied to the question's DataFrame:</p>
<pre><code> Column1 Column2 Column3 Column4 Column5 newcol
0 NaN 1.0 NaN NaN f Column2
1 NaN NaN NaN 5.0 NaN Column4
2 NaN NaN 2.0 NaN c Column3
3 c NaN NaN NaN y Column1
4 NaN NaN NaN NaN x NaN
</code></pre> | python|pandas|dataframe | 4 |
501 | 63,363,642 | python sum all next n values in array at each index | <p>I have an array:</p>
<p>my_array = [1, 13, 6, 100, 12,23,45]
and would like to create a new array where each index holds the sum of the next 3 values in my_array
<p>summed_array = [119, 118, 135, 80, 68,45,0]
I tried something like np.cumsum, but that gives cumulative values rather than the windowed sums I need.
<pre><code>import numpy as np
sum_value = 0
my_array = [1, 13, 6, 100, 12,23,45]
summed_array = [0, 0, 0, 0, 0,0,0]
print(len(my_array))
for ind,i in enumerate(my_array):
if ind+3< len(my_array):
summed_array[ind] =my_array[ind+1]+my_array[ind+2]+my_array[ind+3]
elif ind+2 < len(my_array):
summed_array[ind] =my_array[ind+1]+my_array[ind+2]
elif ind+1 < len(my_array):
summed_array[ind]=my_array[ind+1]
else:
summed_array[ind] = 0
print(summed_array)
</code></pre> | <p>This should do the trick using <a href="https://www.w3schools.com/python/numpy_array_slicing.asp#:%7E:text=Slicing%20arrays,start%3Aend%3Astep%5D%20." rel="nofollow noreferrer">slices</a>.</p>
<pre><code>import numpy as np
sum_value = 0
my_array = [1, 13, 6, 100, 12,23,45]
summed_array = [0, 0, 0, 0, 0,0,0]
n = 3
print(len(my_array))
for i in range(len(summed_array)):
summed_array[i] = sum(my_array[i+1:i+1+n])
print(summed_array)
</code></pre> | python|arrays|numpy|sum|cumsum | 1 |
502 | 63,607,212 | Pandas dataframe plot(): x-axis date labels display but not data | <p>I am trying to plot data as a function of time (years) from a pandas data frame. A summary of the data is shown here:</p>
<pre><code> DATE WALCL
0 2010-08-18 2313662
1 2010-08-25 2301015
2 2010-09-01 2301996
3 2010-09-08 2305802
4 2010-09-15 2296079
517 2020-07-15 6958604
518 2020-07-22 6964755
519 2020-07-29 6949032
520 2020-08-05 6945237
521 2020-08-12 6957277
</code></pre>
<p>I try to plot the data using the following code:</p>
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
years = mdates.YearLocator() # every year
months = mdates.MonthLocator() # every month
years_fmt = mdates.DateFormatter('%Y')
dfData = pd.read_csv(sPathIn+sFname, skiprows = 0)
ax = dfData.plot()
ax.xaxis.set_major_locator(years)
ax.xaxis.set_major_formatter(years_fmt)
ax.xaxis.set_minor_locator(months)
datemin = np.datetime64(dfData['DATE'][0], 'Y')
datemax = np.datetime64(dfData['DATE'].iloc[-1], 'Y') + np.timedelta64(1, 'Y')
ax.set_xlim( datemin, datemax)
plt.show()
</code></pre>
<p>When I run this code, the plot axes are displayed correctly but the time series data (WALCL) does not appear.</p>
<p><a href="https://i.stack.imgur.com/mxp84.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mxp84.png" alt="enter image description here" /></a></p>
<p>If I omit <code>ax.set_xlim( datemin, datemax)</code>, the time series data are shown, but the x-axis is no longer formatted correctly (starts at 1970 and runs until 1971).</p>
<p><a href="https://i.stack.imgur.com/IP5ir.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IP5ir.png" alt="enter image description here" /></a></p>
<p>Here is a modified code example:</p>
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
years = mdates.YearLocator() # every year
months = mdates.MonthLocator() # every month
years_fmt = mdates.DateFormatter('%Y')
sPathIn = "C:\\Users\\reg\\projects\\notes\\Political_Economy\\S&P+Fed-Assets\\"
sFname = "WALCL.csv"
</code></pre>
<p>and here is the traceback:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\reg\projects\Notes\Political_Economy\S&P+Fed-Assets\Python\s&p-fed-assets-v0.2.3.py", line 25, in <module>
dfData.set_index('DATE', inplace=True)
File "C:\Users\reg\Anaconda3\lib\site-packages\pandas\core\frame.py", line 4545, in set_index
raise KeyError(f"None of {missing} are in the columns")
KeyError: "None of ['DATE'] are in the columns"
</code></pre>
<pre class="lang-py prettyprint-override"><code> # load data
dfData = pd.read_csv(sPathIn+sFname, skiprows = 0, parse_dates=['DATE'], index_col='DATE')
#set up plot fxn
dfData.set_index('DATE', inplace=True)
ax = dfData.plot('DATE', 'WALCL')
# format the ticks
ax.xaxis.set_major_locator(years)
ax.xaxis.set_major_formatter(years_fmt)
ax.xaxis.set_minor_locator(months)
datemin = np.datetime64(dfData['DATE'][0], 'Y')
datemax = np.datetime64(dfData['DATE'].iloc[-1], 'Y') + np.timedelta64(1, 'Y')
ax.set_xlim( datemin, datemax)
plt.show()
</code></pre> | <ul>
<li>Dataset is at <a href="https://fred.stlouisfed.org/series/WALCL" rel="nofollow noreferrer">Assets: Total Assets: Total Assets (Less Eliminations from Consolidation): Wednesday Level (WALCL)</a></li>
<li>Verify the <code>DATE</code> column is in a datetime format by using <code>parse_dates</code> with <code>.read_csv</code>.</li>
</ul>
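<p>The <code>parse_dates</code> point can be checked with an in-memory sample (here <code>io.StringIO</code> stands in for <code>WALCL.csv</code>):</p>

```python
import io

import pandas as pd

csv = io.StringIO('DATE,WALCL\n2010-08-18,2313662\n2010-08-25,2301015\n')
dfData = pd.read_csv(csv, parse_dates=['DATE'], index_col='DATE')
# the index is real datetime64 data, so matplotlib can place it on a date axis
print(dfData.index.dtype)
```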
<h2>Set <code>DATE</code> as the index</h2>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import numpy as np
# verify the DATE column is in a datetime format and set it as the index
dfData = pd.read_csv('WALCL.csv', skiprows=0, parse_dates=['DATE'], index_col='DATE')
# plot the data
ax = dfData.plot(figsize=(20, 8))
datemin = np.datetime64(dfData.index.min(), 'Y')
datemax = np.datetime64(dfData.index.max(), 'Y') + np.timedelta64(1, 'Y')
ax.set_xlim(datemin, datemax)
</code></pre>
<p><a href="https://i.stack.imgur.com/SLOo4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SLOo4.png" alt="enter image description here" /></a></p>
<h2>Leave <code>DATE</code> as a column</h2>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
# read file
dfData = pd.read_csv('WALCL.csv', skiprows=0, parse_dates=['DATE'])
# plot data
ax = dfData.plot('DATE', 'WALCL', figsize=(20, 8))
</code></pre>
<p><a href="https://i.stack.imgur.com/9JrtV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9JrtV.png" alt="enter image description here" /></a></p> | python|pandas|matplotlib|datetime64 | 3 |
503 | 63,525,673 | How to split time series index based on continuity | <p>So I have a series of dates and I want to split it into chunks based on continuity. The series looks like the following:</p>
<pre><code>2019-01-01 36.581647
2019-01-02 35.988585
2019-01-03 35.781111
2019-01-04 35.126273
2019-01-05 34.401451
2019-01-06 34.351714
2019-01-07 34.175517
2019-01-08 33.622116
2019-01-09 32.861861
2019-01-10 32.915251
2019-01-11 32.866832
2019-01-12 32.214259
2019-01-13 31.707626
2019-01-14 32.556175
2019-01-15 32.674965
2019-01-16 32.391766
2019-01-17 32.463836
2019-01-18 32.151290
2019-01-19 31.952946
2019-01-20 31.739855
2019-01-21 31.355354
2019-01-22 31.271243
2019-01-23 31.273255
2019-01-24 31.442803
2019-01-25 32.034161
2019-01-26 31.455956
2019-01-27 31.408881
2019-01-28 31.066477
2019-01-29 30.489070
2019-01-30 30.356210
2019-01-31 30.470496
2019-02-01 29.949312
2019-02-02 29.916971
2019-02-03 29.865447
2019-02-04 29.512595
2019-02-05 29.297967
2019-02-06 28.743329
2019-02-07 28.509800
2019-02-08 27.681294
2019-02-10 26.441899
2019-02-11 26.787360
2019-02-12 27.368621
2019-02-13 27.085167
2019-02-14 26.856398
2019-02-15 26.793370
2019-02-16 26.334788
2019-02-17 25.906381
2019-02-18 25.367705
2019-02-19 24.939880
2019-02-20 25.021575
2019-02-21 25.006527
2019-02-22 24.984512
2019-02-23 24.372664
2019-02-24 24.183728
2019-10-10 23.970567
2019-10-11 24.755944
2019-10-12 25.155136
2019-10-13 25.273033
2019-10-14 25.490775
2019-10-15 25.864637
2019-10-16 26.168158
2019-10-17 26.600422
2019-10-18 26.959990
2019-10-19 26.965104
2019-10-20 27.128877
2019-10-21 26.908657
2019-10-22 26.979930
2019-10-23 26.816817
2019-10-24 27.058753
2019-10-25 27.453882
2019-10-26 27.358057
2019-10-27 27.374445
2019-10-28 27.418648
2019-10-29 27.458521
2019-10-30 27.859687
2019-10-31 28.093942
2019-11-01 28.494706
2019-11-02 28.517255
2019-11-03 28.492476
2019-11-04 28.723757
2019-11-05 28.835151
2019-11-06 29.367227
2019-11-07 29.920598
2019-11-08 29.746370
2019-11-09 29.498023
2019-11-10 29.745044
2019-11-11 30.935084
2019-11-12 31.710737
2019-11-13 32.890792
2019-11-14 33.011911
2019-11-15 33.121803
2019-11-16 32.805403
2019-11-17 32.887447
2019-11-18 33.350492
2019-11-19 33.525344
2019-11-20 33.791458
2019-11-21 33.674697
2019-11-22 33.642584
2019-11-23 33.704386
2019-11-24 33.472346
2019-11-25 33.317035
2019-11-26 32.934307
2019-11-27 33.573193
2019-11-28 32.840514
2019-11-29 33.085686
2019-11-30 33.138131
2019-12-01 33.344264
2019-12-02 33.524948
2019-12-03 33.694687
2019-12-04 33.836534
2019-12-05 34.343416
2019-12-06 34.321793
2019-12-07 34.156796
2019-12-08 34.399591
2019-12-09 34.931185
2019-12-10 35.294034
2019-12-11 35.021331
2019-12-12 34.292483
2019-12-13 34.330898
2019-12-14 34.354278
2019-12-15 34.436500
2019-12-16 34.869841
2019-12-17 34.932567
2019-12-18 34.855816
2019-12-19 35.226241
2019-12-20 35.184222
2019-12-21 35.456716
2019-12-22 35.730350
2019-12-23 35.739911
2019-12-24 35.800030
2019-12-25 35.896615
2019-12-26 35.871280
2019-12-27 35.509646
2019-12-28 35.235416
2019-12-29 34.848605
2019-12-30 34.926700
2019-12-31 34.787211
</code></pre>
<p>And I want to split it like:</p>
<pre><code>chunk,start,end,value
0,2019-01-01,2019-02-24,35.235416
1,2019-10-10,2019-12-31,34.787211
</code></pre>
<p>The values are random and can come from any aggregation function; I don't care about that part. The important thing is the chunks, but I still cannot find a way to produce them.</p> | <p>I assume that your DataFrame:</p>
<ul>
<li>has columns named <em>Date</em> and <em>Amount</em>,</li>
<li><em>Date</em> column is of <em>datetime</em> type (not <em>string</em>).</li>
</ul>
<p>To generate your result, define the following function, to be applied
to each group of rows:</p>
<pre><code>def grpRes(grp):
return pd.Series([grp.Date.min(), grp.Date.max(), grp.Amount.mean()],
index=['start', 'end', 'value'])
</code></pre>
<p>Then apply it to each group and rename the index:</p>
<pre><code>res = df.groupby(df.Date.diff().dt.days.fillna(1, downcast='infer')
.gt(1).cumsum()).apply(grpRes)
res.index.name = 'chunk'
</code></pre>
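<p>To see what the grouping key looks like, here is a minimal sketch (using the same assumed <em>Date</em>/<em>Amount</em> columns) with one gap larger than a day; the <code>fillna</code> is optional here because <code>NaN &gt; 1</code> evaluates to False:</p>

```python
import pandas as pd

df = pd.DataFrame({
    'Date': pd.to_datetime(['2019-01-01', '2019-01-02', '2019-10-10', '2019-10-11']),
    'Amount': [1.0, 2.0, 3.0, 4.0],
})
# a day gap greater than 1 starts a new chunk; cumsum() turns the
# chunk boundaries into consecutive chunk ids
key = df.Date.diff().dt.days.gt(1).cumsum()
print(key.tolist())  # [0, 0, 1, 1]
```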
<p>I noticed that your data sample has no row for <em>2019-02-09</em>, but you
don't treat such a <strong>single</strong> missing day as a violation of the
"continuity rule".</p>
<p>If you really want such behaviour, change <em>gt(1)</em> to e.g. <em>gt(2)</em>.</p> | python|pandas | 0 |
504 | 21,559,866 | Pandas align multiindex dataframe with other with regular index | <p>I have one dataframe, let's call it <code>df1</code>, with a a MultiIndex (just a snippet, there are many more columns and rows)</p>
<pre><code> M1_01 M1_02 M1_03 M1_04 M1_05
Eventloc Exonloc
chr10:52619746-52623793|- 52622648-52622741 0 0 0 0 0
chr19:58859211-58865080|+ 58864686-58864827 0 0 0 0 0
58864686-58864840 0 0 0 0 0
58864744-58864840 0 0 0 0 0
chr19:58863054-58863649|- 58863463-58863550 0 0 0 0 0
</code></pre>
<p>And another dataframe, let's go with the creative name <code>df2</code>, like this (these are the results of different algorithms, which is why they have different indices). The columns are the same, though in the first df they are not sorted.</p>
<pre><code> M1_01 M1_02 M1_03 M1_04 M1_05
chr3:53274267:53274364:-@chr3:53271813:53271836:-@chr3:53268999:53269190:- 0.02 NaN NaN NaN NaN
chr2:9002720:9002852:-@chr2:9002401:9002452:-@chr2:9000743:9000894:- 0.04 NaN NaN NaN NaN
chr1:160192441:160192571:-@chr1:160190249:160190481:-@chr1:160188639:160188758:- NaN NaN NaN NaN NaN
chr7:100473194:100473333:+@chr7:100478317:100478390:+@chr7:100478906:100479034:+ NaN NaN NaN NaN NaN
chr11:57182088:57182204:-@chr11:57177408:57177594:-@chr11:57176648:57176771:- NaN NaN NaN NaN NaN
</code></pre>
<p>And I have this dataframe, again let's be creative and call it <code>df3</code>, which unifies the indices of <code>df1</code> and <code>df2</code>:</p>
<pre><code> Eventloc Exonloc
event_id
chr3:53274267:53274364:-@chr3:53271813:53271836:-@chr3:53268999:53269190:- chr3:53269191-53274267|- 53271812-53271836
chr2:9002720:9002852:-@chr2:9002401:9002452:-@chr2:9000743:9000894:- chr2:9000895-9002720|- 9002400-9002452
chr1:160192441:160192571:-@chr1:160190249:160190481:-@chr1:160188639:160188758:- chr1:160188759-160192441|- 160190248-160190481
chr7:100473194:100473333:+@chr7:100478317:100478390:+@chr7:100478906:100479034:+ chr7:100473334-100478906|+ 100478316-100478390
chr4:55124924:55124984:+@chr4:55127262:55127579:+@chr4:55129834:55130094:+ chr4:55124985-55129834|+ 55127261-55127579
</code></pre>
<p>I need to do a 1:1 comparison of these results, so I tried doing both</p>
<pre><code>df1.ix[df3.head().values]
</code></pre>
<p>and</p>
<pre><code>df1.ix[pd.MultiIndex.from_tuples(df3.head().values.tolist(), names=['Eventloc', 'Exonloc'])]
</code></pre>
<p>But they both give me dataframes of NAs. The only thing that works is:</p>
<pre><code>event_id = df2.index[0]
df1.ix[df3.ix[event_id]]
</code></pre>
<p>But this is obviously suboptimal, as it is not vectorized and is very slow. I think I'm missing some critical concept of MultiIndexes.</p>
<p>Thanks,
Olga</p> | <p>If I understand what you are doing, you need to either explicity construct the tuples (they must be fully qualifiied tuples though, e.g. have a value for EACH level), or easier, construct a boolean indexer)</p>
<pre><code>In [7]: df1 = DataFrame(0,index=MultiIndex.from_product([list('abc'),[range(2)]]),columns=['A'])
In [8]: df1
Out[8]:
A
a 0 0
b 1 0
c 0 0
[3 rows x 1 columns]
In [9]: df1 = DataFrame(0,index=MultiIndex.from_product([list('abc'),list(range(2))]),columns=['A'])
In [10]: df1
Out[10]:
A
a 0 0
1 0
b 0 0
1 0
c 0 0
1 0
[6 rows x 1 columns]
In [11]: df3 = DataFrame(0,index=['a','b'],columns=['A'])
In [12]: df3
Out[12]:
A
a 0
b 0
[2 rows x 1 columns]
</code></pre>
<p>These are all the values of level 0 in the first frame</p>
<pre><code>In [13]: df1.index.get_level_values(level=0)
Out[13]: Index([u'a', u'a', u'b', u'b', u'c', u'c'], dtype='object')
</code></pre>
<p>Construct a boolean indexer of the result</p>
<pre><code>In [14]: df1.index.get_level_values(level=0).isin(df3.index)
Out[14]: array([ True, True, True, True, False, False], dtype=bool)
In [15]: df1.loc[df1.index.get_level_values(level=0).isin(df3.index)]
Out[15]:
A
a 0 0
1 0
b 0 0
1 0
[4 rows x 1 columns]
</code></pre> | python|indexing|pandas|alignment|dataframe | 2 |
505 | 24,799,257 | Saving numpy arrays as part of a larger text file | <p>How can I save NumPy arrays as part of a larger text file? I can write the arrays to a temp file using <code>savetxt</code>, and then read them back into a string, but this seems like redundant and inefficient coding (some of the arrays will be large). For example:</p>
<pre><code>from numpy import *
a=reshape(arange(12),(3,4))
b=reshape(arange(30),(6,5))
with open ('d.txt','w') as fh:
fh.write('Some text\n')
savetxt('tmp.txt', a, delimiter=',')
with open ("tmp.txt", "r") as th:
str=th.read()
fh.write(str)
fh.write('Some other text\n')
savetxt('tmp.txt', b, delimiter=',')
with open ("tmp.txt", "r") as th:
str=th.read()
fh.write(str)
</code></pre> | <p>First parameter of <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.savetxt.html" rel="nofollow">savetxt</a></p>
<blockquote>
<p><strong>fname</strong> : filename or <em><strong>file handle</strong></em></p>
</blockquote>
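<p>Since any writable handle works, the interleaving can be shown entirely in memory (<code>io.StringIO</code> standing in for a real file):</p>

```python
import io

import numpy as np

fh = io.StringIO()
fh.write('Some text\n')
np.savetxt(fh, np.arange(12).reshape(3, 4), delimiter=',', fmt='%d')
fh.write('Some other text\n')
text = fh.getvalue()
print(text)
```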
<p>So, you can open the file in <a href="https://stackoverflow.com/questions/4706499/how-do-you-append-to-file-in-python">append mode</a> and write to it:</p>
<pre><code>with open ('d.txt','a') as fh:
fh.write('Some text\n')
savetxt(fh, a, delimiter=',')
fh.write('Some other text\n')
savetxt(fh, b, delimiter=',')
</code></pre> | python|numpy | 2 |
506 | 30,249,337 | `numpy.tile()` sorts automatically - is there an alternative? | <p>I'd like to initialize a <code>pandas</code> DataFrame so that I can populate it with multiple time series. </p>
<pre><code>import pandas as pd
import numpy as np
from string import ascii_uppercase
dt_rng = pd.date_range(start = pd.tseries.tools.to_datetime('2012-12-31'),
end = pd.tseries.tools.to_datetime('2014-12-28'),
freq = 'D')
df = pd.DataFrame(index = xrange(len(dt_rng) * 10),
columns = ['product', 'dt', 'unit_sales'])
df.product = sorted(np.tile([chr for chr in ascii_uppercase[:10]], len(dt_rng)))
df.dt = np.tile(dt_rng, 10)
df.unit_sales = np.random.random_integers(0, 25, len(dt_rng) * 10)
</code></pre>
<p>However, when I check the first few values of <code>df.dt</code>, I see that all values in the field have already been sorted, e.g.
<code>df.dt[:10]</code> yields <code>2012-12-31</code> ten times.
I'd like to have this output to be <code>2012-12-31</code>, <code>2013-01-01</code>, ..., <code>2013-01-08</code>, <code>2013-01-09</code> (first ten values).</p>
<p>In general, I'm looking for behavior similar to <code>R</code>'s "recycling".</p> | <p>A combination of <code>reduce()</code> and the <code>append()</code> method of a <code>pandas.tseries.index.DatetimeIndex</code>object did the trick.</p>
<pre><code>import pandas as pd
import numpy as np
from string import ascii_uppercase
dt_rng = pd.date_range(start = pd.tseries.tools.to_datetime('2012-12-31'),
end = pd.tseries.tools.to_datetime('2014-12-28'),
freq = 'D')
df = pd.DataFrame(index = xrange(len(dt_rng) * 10),
columns = ['product', 'dt', 'unit_sales'])
df.product = sorted(np.tile([chr for chr in ascii_uppercase[:10]], len(dt_rng)))
df.dt = reduce(lambda x, y: x.append(y), [dt_rng] * 10)
df.unit_sales = np.random.random_integers(0, 25, len(dt_rng) * 10)
</code></pre> | python|numpy|pandas | 0 |
507 | 53,543,705 | pandas: How to search by a list of values and return in the same order? | <p>Forgive me if this is a dupe, I've searched all morning and only found pieces of the puzzles and couldn't quite fit it all together.</p>
<h1>My Quest:</h1>
<p>I have a simple <code>DataFrame</code> where I want to extract a view by the a search <code>list</code> <code>searches</code> in the same order of said <code>list</code>. Example:</p>
<pre><code>import numpy as np
import pandas as pd
data = {k: [v+str(i) for i in range(10)] for k, v in zip(('OrderNo','Name', 'Useless','Description'),('1000','Product ', 'Junk ','Short Desc '))}
df = pd.DataFrame(data)
df.loc[2:6, 'Useless'] = np.nan  # mock some NaN data, as in my real one
</code></pre>
<hr>
<p>Resulting <code>df</code>:</p>
<pre><code> OrderNo Name Useless Description
0 10000 Product 0 Junk 0 Short Desc 0
1 10001 Product 1 Junk 1 Short Desc 1
2 10002 Product 2 NaN Short Desc 2
3 10003 Product 3 NaN Short Desc 3
4 10004 Product 4 NaN Short Desc 4
5 10005 Product 5 NaN Short Desc 5
6 10006 Product 6 NaN Short Desc 6
7 10007 Product 7 Junk 7 Short Desc 7
8 10008 Product 8 Junk 8 Short Desc 8
9 10009 Product 9 Junk 9 Short Desc 9
</code></pre>
<p>Now I want to search by a <code>list</code> of the <code>OrderNos</code> like so:</p>
<pre><code>searches = ['10005','10009','10003','10000']
</code></pre>
<p>I'm trying to get to a view like this:</p>
<pre><code> OrderNo Name Useless Description
5 10005 Product 5 NaN Short Desc 5
9 10009 Product 9 Junk 9 Short Desc 9
3 10003 Product 3 NaN Short Desc 3
0 10000 Product 0 Junk 0 Short Desc 0
</code></pre>
<p>So I can finally transpose the view into this (notice I dropped some useless column):</p>
<pre><code> 0 1 2 3
OrderNo 10005 10009 10003 10000
Name Product 5 Product 9 Product 3 Product 0
Description Short Desc 5 Short Desc 9 Short Desc 3 Short Desc 0
</code></pre>
<h1>What I've tried:</h1>
<p><a href="https://stackoverflow.com/questions/17071871/select-rows-from-a-dataframe-based-on-values-in-a-column-in-pandas">This great question/answer</a> helped me do a search by the <code>searches</code>, but the returned view is not in my order:</p>
<pre><code>found = df.loc[df['OrderNo'].isin(searches)]
OrderNo Name Useless Description
0 10000 Product 0 Junk 0 Short Desc 0
3 10003 Product 3 NaN Short Desc 3
5 10005 Product 5 NaN Short Desc 5
9 10009 Product 9 Junk 9 Short Desc 9
</code></pre>
<p>I tried adding a column <code>['my_sort']</code> to <code>found</code> so I can reorder based on the list:</p>
<pre><code>found['my_sort'] = found['OrderNo'].apply(lambda x: searches.index(x))
found.sort_values(by='my_sort', inplace=True)
# For now assume index will always be matched and ValueError will be handled.
# This detail is not critical
</code></pre>
<p>While this <em>kinda</em> works, <code>pandas</code> is throwing <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy" rel="nofollow noreferrer"><code>SettingWithCopyWarning</code></a> all over the place, telling me to use <code>.loc[row_indexer,col_indexer] = ...</code> instead. I tried that too and it's <em>still</em> throwing me the same warning. In fact it seems anything I try to assign under <code>found</code> throws the same, so I suspected the problem came from the search. I ended up wrapping it as a new <code>DataFrame</code> to not see the warning anymore:</p>
<pre><code>found = pd.DataFrame(df.loc[df['OrderNo'].isin(searches)])
found['my_sort'] = found['OrderNo'].apply(lambda x: searches.index(x))
found = found[columns].T
</code></pre>
<p>While this works, I can't help but feel this is very convoluted and not very efficient as I had to introduce a new column just to sort and then drop again. I looked into a few relevant functions like <code>reindex</code> or combo of <code>where</code> and <code>dropna</code> (doesn't work because there are other <code>nan</code> objects in my real data) but none of them seem to work towards my goal.</p>
<p><strong>Is there a better way to approach this?</strong></p> | <h3><a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.set_index.html" rel="nofollow noreferrer"><code>set_index</code></a> + <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.loc.html" rel="nofollow noreferrer"><code>loc</code></a> + <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.T.html" rel="nofollow noreferrer"><code>T</code></a></h3>
<p>You can utilize Pandas indexing capabilities:</p>
<pre><code>df = df.set_index('OrderNo')
searches = ['10005','10009','10003','10000']
df_search = df.loc[searches]
print(df_search)
Description Name Useless
OrderNo
10005 Short Desc 5 Product 5 NaN
10009 Short Desc 9 Product 9 Junk 9
10003 Short Desc 3 Product 3 NaN
10000 Short Desc 0 Product 0 Junk 0
res = df_search.T
print(res)
OrderNo 10005 10009 10003 10000
Description Short Desc 5 Short Desc 9 Short Desc 3 Short Desc 0
Name Product 5 Product 9 Product 3 Product 0
Useless NaN Junk 9 NaN Junk 0
</code></pre>
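<p>One caveat worth noting: <code>.loc</code> with a list preserves the list's order, but in recent pandas versions it raises <code>KeyError</code> if any label is missing; <code>reindex</code> is the lenient variant (unknown keys become NaN rows). A small sketch with assumed order numbers:</p>

```python
import pandas as pd

df = pd.DataFrame({'Name': ['Product 0', 'Product 5']},
                  index=pd.Index(['10000', '10005'], name='OrderNo'))

picked = df.loc[['10005', '10000']]        # order follows the search list
lenient = df.reindex(['10005', '10042'])   # unknown OrderNo -> NaN row
print(picked.index.tolist())
```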
<p>If you require numbered column labels:</p>
<pre><code>print(df_search.reset_index().T)
0 1 2 3
OrderNo 10005 10009 10003 10000
Description Short Desc 5 Short Desc 9 Short Desc 3 Short Desc 0
Name Product 5 Product 9 Product 3 Product 0
Useless NaN Junk 9 NaN Junk 0
</code></pre> | python|pandas | 3 |
508 | 53,549,167 | Have pandas produce error when multiplying with nan | <p>I'd like to have pandas produce an error when trying to do arithmetic involving cells with nan values. So if I create a dummy DataFrame:</p>
<pre><code>test_input = pd.DataFrame(columns=['a','b','c'],
index=[1,2],
data=[[np.nan, np.nan, 2.0],[np.nan, 1.0, 3.0]])
</code></pre>
<p>which looks like this:</p>
<p><a href="https://i.stack.imgur.com/z0MWx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/z0MWx.png" alt="enter image description here"></a></p>
<p>If I then multiply this by some other set of values, it multiplies the valid entries in the DataFrame and just leaves the NaNs as they are:</p>
<pre><code>test_input * np.array([2,2,2])
</code></pre>
<p><a href="https://i.stack.imgur.com/55pzC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/55pzC.png" alt="enter image description here"></a></p>
<p>Whereas I'd like it to produce an error whenever it tries to do arithmetic on a cell containing an NaN.</p>
<p>I've tried using .fillna to replace the NaNs with <code>None</code> (so far as I can see, can't be done because fillna thinks you've entered no value) and replacing the NaNs with strings (which produces an error if you try to multiply by a float, but not an int), but I was wondering if there was some more obvious method that I'm missing?</p>
<p>Thanks in advance!</p> | <p><code>NaN</code> values are of type <code>float</code>. As such, they work fine with arithmetic operations in Pandas / NumPy. You would have to override Pandas / NumPy methods to achieve your goal. <strong>This is not recommended.</strong></p>
<p>Instead, just perform an explicit check before your computation:</p>
<pre><code>assert test_input.notnull().values.all() # AssertionError if null value exists
</code></pre> | python|python-3.x|pandas|numpy | 2 |
509 | 20,292,995 | Numpy: Comparing two data sets for fitness | <p>I'm drawing a blank on this.</p>
<p>I have two data sets: </p>
<pre><code>d1 = [(x1,y1), (x2,y2)...]
d2 = [(x1,y1), (x2,y2)...]
</code></pre>
<p>I would like to get some type of statistical value, maybe something like an r-value, that tells me how well <code>d2</code> fits to <code>d1</code>.</p> | <p>It dependents on what are those two vectors. you may want to be more specific.</p>
<p>If they are something like X-Y coordinates in Cartesian system, distance correlation is probably the most appropriate (<a href="http://en.wikipedia.org/wiki/Distance_correlation#Alternative_formulation:_Brownian_covariance" rel="nofollow">http://en.wikipedia.org/wiki/Distance_correlation#Alternative_formulation:_Brownian_covariance</a>). </p>
<p>If the <code>x</code> values are the same and <code>d1</code> has the expected <code>y</code> under each <code>x</code> value based on a certain model (e.g. a linear model) and <code>d2</code> has the observed <code>y</code> values, then Pearson's r may be a good choice: <code>scipy.stats.pearsonr</code> (<a href="http://en.wikipedia.org/wiki/Pearson_product-moment_correlation_coefficient" rel="nofollow">http://en.wikipedia.org/wiki/Pearson_product-moment_correlation_coefficient</a>). </p>
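<p>As a quick sketch of that second case (same <code>x</code> values, expected vs. observed <code>y</code>): <code>scipy.stats.pearsonr</code> gives r plus a p-value, and plain NumPy suffices for r alone:</p>

```python
import numpy as np

y_expected = np.array([1.0, 2.0, 3.0, 4.0])  # model's y for each x
y_observed = np.array([1.1, 1.9, 3.2, 3.9])  # observed y for the same x
r = np.corrcoef(y_expected, y_observed)[0, 1]
# r near 1 means d2 tracks d1 well under a linear relationship
print(round(r, 3))
```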
<p>If both <code>d1</code> and <code>d2</code> are relative frequency data (observed <code>y</code> count of events of value <code>x</code>), then some type of goodness of fit test may be the right direction to go. <code>scipy.stats.chisquare</code>, <code>scipy.stats.chi2_contingency</code>, <code>scipy.stats.ks_2samp</code>, to name a few.</p> | python|numpy|data-fitting | 2 |
510 | 71,965,040 | Check a given word in each cell and extract it to put in an array | <p>I have a multiple column and in one of a column There is a paragraph written along with a keyword. I need to extract that keyword and put it in an array.</p>
<p>EX: <a href="https://i.stack.imgur.com/rtfKn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rtfKn.png" alt="Example" /></a></p>
<p>Now I have to go through every row of column 3, find the keyword "TEST", and extract the word after the colon (like ENGLISH, MATH, PSYCHOLOGY, etc.) into an array.
Afterwards I'll create a text file and make a sentence using these extracted words. I am not able to find the exact logic to extract these words.</p> | <p>You can use <code>pandas.Series.str.extract</code>.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.DataFrame({
'col1': ['a', 'b', 'c'],
'col2': ['a', 'b', 'c'],
'col3': ['a\n\nTEST: MATH', 'b\nTEST: ENG', 'c\n\nTEST: PSY']
})
df['col3'].str.extract(r'TEST:\s*(.*)$')  # \s* drops the space after the colon
</code></pre> | python|python-3.x|excel|pandas|dataframe | 1 |
511 | 72,015,289 | Read .xls file with Python pandas read_excel not working, says it is a .xlsb file | <p>I'm trying to read several .xls files, saved on a NAS folder, with Apache Airflow, using the read_excel python pandas function.</p>
<p>This is the code I'm using:</p>
<pre><code>df = pd.read_excel('folder/sub_folder_1/sub_folder_2/file_name.xls', sheet_name=April, usecols=[0,1,2,3], dtype=str, engine='xlrd')
</code></pre>
<p>This worked for a time, but recently I have been getting this error for several of those files:</p>
<blockquote>
<p>Excel 2007 xlsb file; not supported</p>
<p>[...]</p>
<p>xlrd.biffh.XLRDError: Excel 2007 xlsb file; not supported</p>
</blockquote>
<p>These files are clearly .xls files, yet my code seems to detect them as .xlsb files, which are not supported. I would prefer a way to specify they are .xls file, or alternatively, a way to read xlsb files.</p>
<p>Not sure if this is relevant, but these files are updated by an external team, who may have modified some parameter of these files without me knowing so, but I think that if this was the case, I would be getting a different error.</p> | <p>Try:</p>
<pre><code>import openpyxl
xls = pd.ExcelFile('data.xls', engine='openpyxl')
df = pd.read_excel(xls)
</code></pre>
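<p>If some of the files are genuinely binary <code>.xlsb</code>, pandas can read those with <code>engine='pyxlsb'</code> (the <code>pyxlsb</code> package must be installed); note that <code>openpyxl</code> itself handles <code>.xlsx</code>/<code>.xlsm</code>, not legacy <code>.xls</code>. A hypothetical helper that dispatches on the extension:</p>

```python
from pathlib import Path

import pandas as pd

# assumed extension-to-engine mapping; each engine is a separate install
ENGINES = {'.xls': 'xlrd', '.xlsx': 'openpyxl', '.xlsm': 'openpyxl', '.xlsb': 'pyxlsb'}

def read_any_excel(path, **kwargs):
    """Read an Excel file with the pandas engine matching its extension."""
    engine = ENGINES[Path(path).suffix.lower()]
    return pd.read_excel(path, engine=engine, **kwargs)
```

<p>Usage would then look like <code>read_any_excel('folder/sub_folder_1/sub_folder_2/file_name.xls', sheet_name=April, usecols=[0,1,2,3], dtype=str)</code>, a sketch under the assumed mapping above.</p>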
<p>xlrd has recently removed the ability to read some Excel formats, such as .xlsx.</p> | python|pandas|excel-2007|xls|xlsb | 0 |
512 | 72,103,054 | Create a column in a dataframe based on probabilities stored in another dataframe | <p>I have a python dataframe, pop, which has a few hundred thousand rows, the first of which are presented here:</p>
<p><strong>pop:</strong></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: center;">Index</th>
<th>Unique_ID</th>
<th style="text-align: center;">Code</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center;">0</td>
<td>5426845315</td>
<td style="text-align: center;">1</td>
</tr>
<tr>
<td style="text-align: center;">1</td>
<td>5426848464</td>
<td style="text-align: center;">6</td>
</tr>
<tr>
<td style="text-align: center;">2</td>
<td>5484651315</td>
<td style="text-align: center;">1</td>
</tr>
<tr>
<td style="text-align: center;">3</td>
<td>5426808654</td>
<td style="text-align: center;">39</td>
</tr>
</tbody>
</table>
</div>
<p>...</p>
<p>I want to create another column in this dataframe based on the value in the "code" and another dataframe, prob, which contains the probability of a value of 1, 2, or 3 appearing for each code.</p>
<p><strong>prob:</strong></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Index</th>
<th>Code</th>
<th>Prob_1</th>
<th>Prob_2</th>
<th>Prob_3</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>1</td>
<td>0.50</td>
<td>0.25</td>
<td>0.25</td>
</tr>
<tr>
<td>1</td>
<td>2</td>
<td>0.80</td>
<td>0.10</td>
<td>0.10</td>
</tr>
</tbody>
</table>
</div>
<p>...</p>
<p>I'm trying to apply the values "1", "2", or "3" to each row in the pop dataframe depending on the probabilities associated with the value in the "Code" row in the prob dataframe.</p>
<p>So far, I've tried this, but it didn't work, seems as though I can't use iloc when listing probabilities.</p>
<pre><code>response = [1,2,3]
def categorise_response(row):
if row['Code'] == 1:
return random.choice(response, 1, p=[prob.iloc[0][Prob_1], prob.iloc[0][Prob_2], prob.iloc[0][Prob_3]])
</code></pre>
<p>I get the error:</p>
<blockquote>
<p>TypeError: choice() got an unexpected keyword argument 'p'</p>
</blockquote>
<p>There's probably a much better way to do this, I'm a bit of a python novice, so any help is appreciated.</p>
<p>Edited to add:</p>
<p>I'm hoping to get an output like this, where a column "response" is added to the pop dataframe that includes a value based on the probabilities in the table prob:</p>
<p><strong>New pop:</strong></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: center;">Index</th>
<th>Unique_ID</th>
<th style="text-align: center;">Code</th>
<th style="text-align: center;">Response</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center;">0</td>
<td>5426845315</td>
<td style="text-align: center;">1</td>
<td style="text-align: center;">2</td>
</tr>
<tr>
<td style="text-align: center;">1</td>
<td>5426848464</td>
<td style="text-align: center;">6</td>
<td style="text-align: center;">3</td>
</tr>
<tr>
<td style="text-align: center;">2</td>
<td>5484651315</td>
<td style="text-align: center;">1</td>
<td style="text-align: center;">1</td>
</tr>
<tr>
<td style="text-align: center;">3</td>
<td>5426808654</td>
<td style="text-align: center;">39</td>
<td style="text-align: center;">1</td>
</tr>
</tbody>
</table>
</div>
<p>...</p> | <p><a href="https://docs.python.org/3/library/random.html#random.choices" rel="nofollow noreferrer"><code>random.choices()</code></a> allows you to use a <code>weights</code> sequence. So, if you want for each row of <code>pop</code> a <code>response</code> based on the distributions in <code>prob</code>, then you could try:</p>
<pre><code>from random import choices
response = [1,2,3]
def categorise_response(row):
return choices(response, weights=row[["Prob_1", "Prob_2", "Prob_3"]], k=1)[0]
pop["Response"] = (
pop.merge(prob, on="Code", how="left").apply(categorise_response, axis=1)
)
</code></pre>
<p>To build the new <code>Response</code> column:</p>
<ul>
<li><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.merge.html" rel="nofollow noreferrer"><code>merge</code></a> <code>prob</code> on <code>pop</code> along the <code>Code</code> column.</li>
<li>Then <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.apply.html" rel="nofollow noreferrer"><code>apply</code></a> the <code>categorise_response</code> function on each row: The function uses <code>choices</code> on <code>response</code> with the <code>weights</code> from the <code>Prob_...</code> columns.</li>
</ul>
<p>If you don't want individual draws for each row of <code>pop</code>, then you could try:</p>
<pre><code>pop["Response"] = pop[["Code"]].merge(
prob.assign(Response=prob.apply(categorise_response, axis=1)),
on="Code", how="left"
)["Response"]
</code></pre>
<p>(But your example indicates that this is not the case.)</p> | python|pandas|dataframe|probability|data-wrangling | 0 |
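<p>For context, the <code>TypeError</code> in the question comes from the standard library's <code>random.choice</code>, which takes no <code>p</code> keyword - that signature belongs to NumPy's choice. A sketch of the same idea with NumPy (the tiny frames are stand-ins for the real data; codes absent from <code>prob</code> would yield NaN weights and need extra handling):</p>

```python
import numpy as np
import pandas as pd

pop = pd.DataFrame({"Unique_ID": [101, 102, 103], "Code": [1, 2, 1]})
prob = pd.DataFrame({"Code": [1, 2],
                     "Prob_1": [0.50, 0.80],
                     "Prob_2": [0.25, 0.10],
                     "Prob_3": [0.25, 0.10]})

# bring each row's probability vector alongside it, then draw per row
merged = pop.merge(prob, on="Code", how="left")
weights = merged[["Prob_1", "Prob_2", "Prob_3"]].to_numpy()

rng = np.random.default_rng(0)
pop["Response"] = [rng.choice([1, 2, 3], p=w) for w in weights]
```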
513 | 72,047,392 | Python SUMIF with one condition across dataframes | <p>I'm working with two dataframes</p>
<ul>
<li>MRP:</li>
</ul>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Material</th>
<th>Description</th>
<th>Septiembre</th>
</tr>
</thead>
<tbody>
<tr>
<td>1208181</td>
<td>ADSV,NA,MX,ADH HOTMET 814433PM</td>
<td>630.2888856</td>
</tr>
<tr>
<td>1206500</td>
<td>SHWP,NA,MX,WRAP M-WRAP 18'</td>
<td>459.4193011</td>
</tr>
<tr>
<td>3049172</td>
<td>INSR,LUFTL,BR,LUFTAL</td>
<td>0</td>
</tr>
<tr>
<td>3049173</td>
<td>CLOS,LUFTL,BR,BERRY</td>
<td>0</td>
</tr>
<tr>
<td>3060614</td>
<td>BOTL,LUFTL,BR,LDPE/HDPE 15 ML</td>
<td>0</td>
</tr>
</tbody>
</table>
</div>
<ul>
<li>SMCalc:</li>
</ul>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Material</th>
<th>Description</th>
<th>Brand</th>
<th>Available Qty</th>
<th>Avail. - 6mth.</th>
<th>sep</th>
</tr>
</thead>
<tbody>
<tr>
<td>0324583</td>
<td>MEAS,MUCNX,US,BLUE,20ML</td>
<td>MCN</td>
<td>921888</td>
<td></td>
<td>980554.96</td>
</tr>
<tr>
<td>0327757</td>
<td>CLOS,MUCNX,US,CR24MM</td>
<td>MCN</td>
<td>9509400</td>
<td>6219256.172</td>
<td>975724.64</td>
</tr>
<tr>
<td>1019906</td>
<td>ACETAMINOPHEN DC 90 COARSE L</td>
<td>TEM</td>
<td>43900</td>
<td>-4443.531438</td>
<td>7372.2407</td>
</tr>
<tr>
<td>1020442</td>
<td>ACETAMINOPHEN POWDER</td>
<td>NA</td>
<td>64203.289</td>
<td>38020.3542</td>
<td>6784.4993</td>
</tr>
<tr>
<td>1120252</td>
<td>TARTARIC ACID</td>
<td>PIC</td>
<td>43217.08</td>
<td></td>
<td>9370.0843</td>
</tr>
</tbody>
</table>
</div>
<p>And I'm using this formula in excel: <code>=+SUMIF(MRP!$A:$A,$A2,MRP!C:C)</code> where:</p>
<ul>
<li><strong>Range</strong> is MRP!A:A (Material)</li>
<li><strong>Criteria</strong> is SMCalc $A2 (Material)</li>
<li><strong>Sum range</strong> is MRP!C:C (Septiembre)</li>
</ul>
<p>The output I'm looking for is the column F in <code>SMCalc</code>.</p> | <p>If I'm not wrong, that excel formula calculates the sum of 'Septiembre' in column C of MRP when 'Material' in SMCalc matches 'Material' in MRP...</p>
<p>Assuming you have both excel sheets as pandas dataframes, I would then do:</p>
<pre><code>mrp.groupby('Material')['Septiembre'].sum().reset_index()
</code></pre>
<p>This finds the sum of 'Septiembre' per material in mrp. Then merge that with the other dataframe:</p>
<pre><code>smcalc.merge(mrp.groupby('Material')['Septiembre'].sum().reset_index(),how='left')
</code></pre>
<p>This brings those values back into smcalc, where we want them.</p> | python|pandas|sumifs | 0 |
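<p>The same SUMIF lookup can also be written without a merge, by mapping the per-material sums onto the other frame. A sketch with tiny stand-in frames (column names assumed from the question):</p>

```python
import pandas as pd

mrp = pd.DataFrame({"Material": ["1208181", "1206500", "1208181"],
                    "Septiembre": [630, 459, 100]})
smcalc = pd.DataFrame({"Material": ["1208181", "1206500", "9999999"]})

# per-material totals, then a dictionary-style lookup per row
sums = mrp.groupby("Material")["Septiembre"].sum()
smcalc["sep"] = smcalc["Material"].map(sums).fillna(0)  # 0 when no match, like SUMIF
```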
514 | 22,043,601 | Count NaNs when unicode values present | <p>Good morning all, </p>
<p>I have a <code>pandas</code> dataframe containing multiple series. For a given series within the dataframe, the datatypes are unicode, NaN, and int/float. I want to determine the number of NaNs in the series but cannot use the built in <code>numpy.isnan</code> method because it cannot safely cast unicode data into a format it can interpret. I have proposed a work around, but I'm wondering if there is a better/more Pythonic way of accomplishing this task. </p>
<p>Thanks in advance,
Myles</p>
<pre><code>import pandas as pd
import numpy as np
test = pd.Series(data = [NaN, 2, u'string'])
np.isnan(test).sum()
#Error
#Work around
test2 = [x for x in test if not(isinstance(x, unicode))]
numNaNs = np.isnan(test2).sum()
</code></pre> | <p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.common.isnull.html" rel="noreferrer">pandas.isnull</a>:</p>
<pre><code>In [24]: test = pd.Series(data = [NaN, 2, u'string'])
In [25]: pd.isnull(test)
Out[25]:
0 True
1 False
2 False
dtype: bool
</code></pre>
<p>Note however, that <code>pd.isnull</code> also regards <code>None</code> as <code>True</code>:</p>
<pre><code>In [28]: pd.isnull([NaN, 2, u'string', None])
Out[28]: array([ True, False, False, True], dtype=bool)
</code></pre> | python|numpy|pandas|nan|python-unicode | 7 |
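<p>To get the actual count rather than the boolean mask, sum the mask - a quick sketch, which also shows the equivalent method form:</p>

```python
import numpy as np
import pandas as pd

test = pd.Series([np.nan, 2, u'string'])

# pd.isnull handles mixed unicode/numeric data, unlike np.isnan
num_nans = pd.isnull(test).sum()
num_nans_method = test.isnull().sum()  # same result via the Series method
```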
515 | 22,391,462 | converting a list of strings into integers in python, skipping masked terms | <p>Let's say I have a list of strings:</p>
<pre><code>fnew:
masked_array(data = [-- -- '56527.9529' '56527.9544' '109.7147' '0.0089' '14.3638' '0.0779'
'14.3136' '0.0775' '14.3305' '0.1049' '14.3628' '0.0837' '14.3628'
'0.0837' '70.9990' '40.0050' '173.046' '-30.328' '73' '-99.175' '0.000'
'0.000' '59.8' '0.0' '1.0'],
mask = [ True True False False False False False False False False False False
False False False False False False False False False False False False
False False False],
fill_value = N/A)
</code></pre>
<p>How do I get rid of the quotes from the other elements - that is, convert those numbers into numeric values so that I can do calculations with them?</p> | <p>Something like this:</p>
<pre><code>>>> import numpy as np
>>> a = ['Foo', '59.8', 'bar', 'spam']
>>> arr = np.ma.array(a, mask=[True, False, True, True])
>>> arr.compressed().astype(float)
array([ 59.8])
>>> arr[arr.mask].data
array(['Foo', 'bar', 'spam'],
dtype='|S4')
</code></pre> | python|numpy|integer|masking | 2 |
516 | 17,698,975 | Pandas: Convert DataFrame Column Values Into New Dataframe Indices and Columns | <p>I have a dataframe that looks like this:</p>
<pre><code>a b c
0 1 10
1 2 10
2 2 20
3 3 30
4 1 40
4 3 10
</code></pre>
<p>The dataframe above as default (0,1,2,3,4...) indices. I would like to convert it into a dataframe that looks like this:</p>
<pre><code> 1 2 3
0 10 0 0
1 0 10 0
2 0 20 0
3 0 0 30
4 40 0 10
</code></pre>
<p>Where column 'a' in the first dataframe becomes the index in the second dataframe, the values of 'b' become the column names and the values of c are copied over, with 0 or NaN filling missing values. The original dataset is large and will result in a very sparse second dataframe. I then intend to add this dataframe to a much larger one, which is straightforward.</p>
<p>Can anyone advise the best way to achieve this please?</p> | <p>You can use the <code>pivot</code> method for this.</p>
<p>See the docs: <a href="http://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-by-pivoting-dataframe-objects">http://pandas.pydata.org/pandas-docs/stable/reshaping.html#reshaping-by-pivoting-dataframe-objects</a></p>
<p>An example:</p>
<pre><code>In [1]: import pandas as pd
In [2]: df = pd.DataFrame({'a':[0,1,2,3,4,4], 'b':[1,2,2,3,1,3], 'c':[10,10,20,3
0,40,10]})
In [3]: df
Out[3]:
a b c
0 0 1 10
1 1 2 10
2 2 2 20
3 3 3 30
4 4 1 40
5 4 3 10
In [4]: df.pivot(index='a', columns='b', values='c')
Out[4]:
b 1 2 3
a
0 10 NaN NaN
1 NaN 10 NaN
2 NaN 20 NaN
3 NaN NaN 30
4 40 NaN 10
</code></pre>
<p>If you want zeros instead of NaN's as in your example, you can use <code>fillna</code>:</p>
<pre><code>In [5]: df.pivot(index='a', columns='b', values='c').fillna(0)
Out[5]:
b 1 2 3
a
0 10 0 0
1 0 10 0
2 0 20 0
3 0 0 30
4 40 0 10
</code></pre> | python|pandas | 12 |
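<p>One caveat worth knowing: <code>pivot</code> raises an error if an (a, b) pair occurs more than once. In that case <code>pivot_table</code> with an explicit <code>aggfunc</code> is the usual fallback - a sketch:</p>

```python
import pandas as pd

df = pd.DataFrame({'a': [0, 1, 1], 'b': [1, 2, 2], 'c': [10, 10, 20]})

# the duplicate (a=1, b=2) pair would make pivot() fail;
# pivot_table aggregates the duplicates instead
out = df.pivot_table(index='a', columns='b', values='c',
                     aggfunc='sum', fill_value=0)
```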
517 | 8,822,370 | Plot line graph from histogram data in matplotlib | <p>I have a numpy array of ints representing time periods, which I'm currently plotting in a histogram to get a nice distribution graph, using the following code:</p>
<pre><code>ax.hist(data,bins=100,range=(minimum,maximum),facecolor="r")
</code></pre>
<p>However I'm trying to modify this graph to represent the exact same data using a line instead of bars, so I can overlay more samples to the same plot and have them be clear (otherwise the bars overlap each other). What I've tried so far is to collate the data array into an array of tuples containing (time, count), and then plot it using</p>
<pre><code>ax.plot(data[:,0],data[:,1],color="red",lw=2)
</code></pre>
<p>However that's not giving me anything close, as I can't accurately simulate the bins option of the histogram in my plot. Is there a better way to do this?</p> | <p>I am very late to the party - but maybe this will be useful to someone else. I think what you need to do is set the histtype parameter to 'step', i.e.</p>
<pre><code>ax.hist(data,bins=100,range=(minimum,maximum),facecolor="r", histtype = 'step')
</code></pre>
<p>See also <a href="http://matplotlib.sourceforge.net/examples/pylab_examples/histogram_demo_extended.html" rel="noreferrer">http://matplotlib.sourceforge.net/examples/pylab_examples/histogram_demo_extended.html</a></p> | python|numpy|matplotlib | 52 |
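<p>A minimal sketch of overlaying two samples with <code>histtype='step'</code> (the data are made up, and the non-interactive Agg backend is used so it runs headless):</p>

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
sample_a = rng.normal(0, 1, 1000)
sample_b = rng.normal(1, 2, 1000)

fig, ax = plt.subplots()
# step outlines stay readable even when the distributions overlap
counts_a, _, _ = ax.hist(sample_a, bins=100, range=(-12, 12),
                         histtype='step', label='A')
counts_b, _, _ = ax.hist(sample_b, bins=100, range=(-12, 12),
                         histtype='step', label='B')
ax.legend()
fig.savefig("overlaid_histograms.png")
```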
518 | 55,527,383 | Cannot import tensorflow in python3 and ImportError: This package should not be accessible on Python 3 | <p>I am trying to use tensorflow for research in my macbook. I use pip3 to install tensorflow in the system (not in virtual environment). </p>
<p>At first, I just want to verify tensorflow can be correctly imported via python3 in terminal. However, sometimes, I got the following problem when importing. </p>
<pre><code>>>>import tensorflow as tf
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/cyan/Library/Python/3.5/lib/python/site-packages/tensorflow/__init__.py", line 24, in <module>
from tensorflow.python import pywrap_tensorflow # pylint: disable=unused-import
File "/Users/cyan/Library/Python/3.5/lib/python/site-packages/tensorflow/python/__init__.py", line 47, in <module>
import numpy as np
File "/Library/Python/2.7/site-packages/numpy/__init__.py", line 142, in <module>
from . import add_newdocs
File "/Library/Python/2.7/site-packages/numpy/add_newdocs.py", line 13, in <module>
from numpy.lib import add_newdoc
File "/Library/Python/2.7/site-packages/numpy/lib/__init__.py", line 8, in <module>
from .type_check import *
File "/Library/Python/2.7/site-packages/numpy/lib/type_check.py", line 11, in <module>
import numpy.core.numeric as _nx
File "/Library/Python/2.7/site-packages/numpy/core/__init__.py", line 14, in <module>
from . import multiarray
ImportError: dlopen(/Library/Python/2.7/site-packages/numpy/core/multiarray.so, 2): Symbol not found: _PyBuffer_Type
Referenced from: /Library/Python/2.7/site-packages/numpy/core/multiarray.so
Expected in: flat namespace in /Library/Python/2.7/site-packages/numpy/core/multiarray.so
</code></pre>
<p>This error could only be solved if I ran the following code firstly before python3 execution</p>
<pre><code>unset PYTHONPATH
</code></pre>
<p>If I didn't unset PYTHONPATH, I also found errors when checking the version of pip3 using </p>
<pre><code>pip3 --version
</code></pre>
<p>The errors are shown as follows. </p>
<pre><code>>> pip3 --version
Traceback (most recent call last):
File "/usr/local/bin/pip3", line 6, in <module>
from pip._internal import main
File "/Library/Python/2.7/site-packages/pip/_internal/__init__.py", line 19, in <module>
from pip._vendor.urllib3.exceptions import DependencyWarning
File "/Library/Python/2.7/site-packages/pip/_vendor/urllib3/__init__.py", line 8, in <module>
from .connectionpool import (
File "/Library/Python/2.7/site-packages/pip/_vendor/urllib3/connectionpool.py", line 11, in <module>
from .exceptions import (
File "/Library/Python/2.7/site-packages/pip/_vendor/urllib3/exceptions.py", line 2, in <module>
from .packages.six.moves.http_client import (
File "/Library/Python/2.7/site-packages/pip/_vendor/urllib3/packages/six.py", line 203, in load_module
mod = mod._resolve()
File "/Library/Python/2.7/site-packages/pip/_vendor/urllib3/packages/six.py", line 115, in _resolve
return _import_module(self.mod)
File "/Library/Python/2.7/site-packages/pip/_vendor/urllib3/packages/six.py", line 82, in _import_module
__import__(name)
File "/Library/Python/2.7/site-packages/http/__init__.py", line 7, in <module>
raise ImportError('This package should not be accessible on Python 3. '
ImportError: This package should not be accessible on Python 3. Either you are trying to run from the python-future src folder or your installation of python-future is corrupted.
</code></pre>
<p>I thought it was so inconvenient to unset PYTHONPATH every time - are there any solutions for this problem? I also want to import tensorflow in other text editors, such as Sublime and PyCharm, so I was really not sure what to do next.</p> | <p>I tried the same scenario and it works fine for me. The first error suggests your Python installation is mixed up: if you are running python3 in the terminal, it should not be referring to 2.7 libraries. </p>
<p>Also, I don't think you should need to unset PYTHONPATH every time - in fact you don't need to set PYTHONPATH at all here. It seems the installation itself has an issue. </p>
<p>Are you using Homebrew on your Mac to install packages? If not, I would suggest using Homebrew - it works like a charm, as it handles dependencies properly. </p>
<p>Thanks,
Ashish </p> | python|python-3.x|macos|tensorflow | 0 |
519 | 55,556,315 | TensorFlow: slice Tensor and keep original shape | <p>I have a Tensor <code>tensor</code> of shape <code>(?, 1082)</code> and I want to slice this Tensor into <code>n</code> subparts in a for-loop but I want to keep the original shape, including the unknown dimension <code>?</code>.</p>
<p>Example:</p>
<pre><code>lst = []
for n in range(15):
sub_tensor = tensor[n] # this will reduce the first dimension
print(sub_tensor.get_shape())
</code></pre>
<p>Print output I'm looking for:</p>
<pre><code>(?, 1082)
(?, 1082)
</code></pre>
<p>etc.</p>
<p>How can this be achieved in TensorFlow?</p> | <p>Considering that your problem can have many constraints, I can think of at least 3 solutions.
You can use <code>tf.split</code>. I'll use tf.placeholder, but it's applicable to tensors and variables as well.</p>
<pre><code>p = tf.placeholder(shape=[None,10], dtype=tf.int32)
s1, s2 = tf.split(value=p, num_or_size_splits=2, axis=1)
</code></pre>
<p>However, this approach can become unfeasible if the number of splits required is large. Note that it can split the <code>None</code> axis as well.</p>
<pre><code>s = tf.slice(p, [0,2], [-1, 2])
</code></pre>
<p>Slice can be used for multidimensional tensors, but it's pretty tricky to use. And you can use the <code>tf.Tensor.__getitem__</code> method, almost as you described in your question. It acts similarly to <code>NumPy</code>. So this should do the job:</p>
<pre><code>for n in range(10):
print(p[n, :])
</code></pre>
<p>However, usage of these methods depends heavily on your particular application. Hope this helps.</p> | tensorflow|slice | 1 |
520 | 55,523,104 | Impute value based on other columns value | <p>There is a dataframe (df) in below format:</p>
<pre><code>Name, Col-1, Col-2, Col-3, Col-4
abc, 0, 1, 0, 0
cba, 1, 0, 0, 0
bns 1, 0, 0, 0
abd 0 0, 0, 1
</code></pre>
<p>Now i am trying to add new column to this dataframe like below:</p>
<pre><code>Name, Col-1, Col-2, Col-3, Col-4, Type
abc, 0, 1, 0, 0, Col-2
cba, 1, 0, 0, 0, Col-1
bns 1, 0, 0, 0, Col-1
abd 0 0, 0, 1, Col-4
</code></pre>
<p>Please suggest how to get it done; I tried the below but it throws an error.</p>
<pre><code>df['Type'] = [lambda x: x if x == 1 for x in df.columns]
</code></pre> | <p>You can use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.idxmax.html" rel="nofollow noreferrer">idxmax</a>:</p>
<pre><code>In [11]: df
Out[11]:
Name Col-1 Col-2 Col-3 Col-4
0 abc 0 1 0 0
1 cba 1 0 0 0
2 bns 1 0 0 0
3 abd 0 0 0 1
In [12]: df.iloc[:, 1:]
Out[12]:
Col-1 Col-2 Col-3 Col-4
0 0 1 0 0
1 1 0 0 0
2 1 0 0 0
3 0 0 0 1
In [13]: df.iloc[:, 1:].idxmax(axis=1)
Out[13]:
0 Col-2
1 Col-1
2 Col-1
3 Col-4
dtype: object
In [14]: df["Type"] = df.iloc[:, 1:].idxmax(axis=1)
In [15]: df
Out[15]:
Name Col-1 Col-2 Col-3 Col-4 Type
0 abc 0 1 0 0 Col-2
1 cba 1 0 0 0 Col-1
2 bns 1 0 0 0 Col-1
3 abd 0 0 0 1 Col-4
</code></pre> | pandas | 2 |
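<p>One edge case: <code>idxmax</code> returns the first column label for an all-zero row, which can be misleading. A hedged sketch (toy frame) that blanks such rows instead:</p>

```python
import pandas as pd

df = pd.DataFrame({"Name": ["abc", "cba", "xyz"],
                   "Col-1": [0, 1, 0],
                   "Col-2": [1, 0, 0]})

onehot = df.drop(columns="Name")
# keep the idxmax label only where the row actually has a 1 somewhere
df["Type"] = onehot.idxmax(axis=1).where(onehot.sum(axis=1) > 0)
```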
521 | 55,353,703 | How to calculate all combinations of difference between array elements in 2d? | <p>Given an array <code>arr = [10, 11, 12]</code> I want to calculate all the ways that one element can be subtracted from another. For a <code>1xN</code> array the desired output is a NxN array where <code>output[i, j] = arr[i] - arr[j]</code>. My approach was to generate all the possible pairings of two numbers, subtract, and reshape. As follows </p>
<pre><code>opts = np.array(list(product(arr, arr)))
[[10 10]
[10 11]
[10 12]
[11 10]
[11 11]
[11 12]
[12 10]
[12 11]
[12 12]]
diffs = (opts[:, 0] - opts[:, 1]).reshape(len(arr), -1)
[[ 0 -1 -2]
[ 1 0 -1]
[ 2 1 0]]
</code></pre>
<p>This works quite nicely, what I would like to do next is to generalize this to a 2d input. Essentially what I would like to accomplish is given an <code>MxN</code> array to output an <code>MxNxN</code> array, and for each layer (depth-wise) perform the above functionality for each row. </p>
<p>I attempted to reshape the <code>MxN</code> input array to be <code>MxNx1</code> and then calculate the product as before. My assumption was that it would behave element-wise the same as before, unfortunately not. </p>
<p>My first thought is to initialize an output of the appropriate shape and loop over the rows and set the values "manually" but I was hoping for a vectorized approach. Does anyone know how I can accomplish this in 2 dimensions without looping over thousands of rows?</p> | <p>Here's a generic vectorized way to cover both 1D and 2D cases leveraging <a href="https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html" rel="nofollow noreferrer"><code>broadcasting</code></a> after reshaping the input array to broadcastable shpaes against each other -</p>
<pre><code>def permute_axes_subtract(arr, axis):
# Get array shape
s = arr.shape
# Get broadcastable shapes by introducing singleton dimensions
s1 = np.insert(s,axis,1)
s2 = np.insert(s,axis+1,1)
# Perform subtraction after reshaping input array to
# broadcastable ones against each other
return arr.reshape(s1) - arr.reshape(s2)
</code></pre>
<p>To perform any other elementwise <a href="https://docs.scipy.org/doc/numpy/reference/ufuncs.html" rel="nofollow noreferrer"><code>ufunc</code></a> operation, simply replace the subtraction operation with it.</p>
<p>Sample run -</p>
<pre><code>In [184]: arr = np.random.rand(3)
In [185]: permute_axes_subtract(arr, axis=0).shape
Out[185]: (3, 3)
In [186]: arr = np.random.rand(3,4)
In [187]: permute_axes_subtract(arr, axis=0).shape
Out[187]: (3, 3, 4)
In [188]: permute_axes_subtract(arr, axis=1).shape
Out[188]: (3, 4, 4)
</code></pre>
<p>Timings on <a href="https://stackoverflow.com/a/55353909/">@ClimbingTheCurve's posted solution func - <code>permute_difference</code></a> and the one posted in this one on large <code>2D</code> arrays -</p>
<pre><code>In [189]: arr = np.random.rand(100,100)
In [190]: %timeit permute_difference(arr, axis=0)
...: %timeit permute_axes_subtract(arr, axis=0)
1 loop, best of 3: 295 ms per loop
1000 loops, best of 3: 1.17 ms per loop
In [191]: %timeit permute_difference(arr, axis=1)
...: %timeit permute_axes_subtract(arr, axis=1)
1 loop, best of 3: 303 ms per loop
1000 loops, best of 3: 1.12 ms per loop
</code></pre> | python|numpy|itertools | 2 |
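<p>For reference, the same result can be spelled directly with <code>None</code>-indexing, which is equivalent to the reshape trick for the common 1D and 2D cases:</p>

```python
import numpy as np

arr = np.array([10, 11, 12])
diffs = arr[:, None] - arr[None, :]      # diffs[i, j] == arr[i] - arr[j]

A = np.arange(8).reshape(2, 4)
diffs2d = A[:, :, None] - A[:, None, :]  # per-row pairwise differences, shape (2, 4, 4)
```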
522 | 56,697,419 | Slicing on Pandas Series Object with Index List Having Multiple Datatypes | <p>I just started learning Pandas and I don't understand how slicing works when the index list contains objects of multiple types. </p>
<pre><code>import pandas as pd
arr = pd.Series([10, 20, 30, 40], index = [2, 3, 'six', 'eight'])
arr[2:3] #Output -- 30
arr[3:'six'] #TypeError: cannot do slice indexing on <class 'pandas.core.indexes.base.Index'> with these indexers [3] of <class 'int'>
arr['six':'eight'] #Output -- 30, 40
</code></pre>
<p>Isn't arr[2:3] supposed to be 20 and isn't arr['six':'eight'] supposed to be just 30?</p> | <p>Pandas works best when the index does not mix value types.</p>
<p>A general solution that works here is to get the position of each label with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Index.get_loc.html" rel="nofollow noreferrer"><code>Index.get_loc</code></a> and then select by position with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.iloc.html" rel="nofollow noreferrer"><code>Series.iloc</code></a>:</p>
<pre><code>arr = pd.Series([10, 20, 30, 40], index = [2, 3, 'six', 'eight'])
print (arr.iloc[arr.index.get_loc(2):arr.index.get_loc(3)])
2 10
dtype: int64
print (arr.iloc[arr.index.get_loc(3):arr.index.get_loc('six')])
3 20
dtype: int64
print (arr.iloc[arr.index.get_loc('six'):arr.index.get_loc('eight')])
six 30
dtype: int64
</code></pre>
<p>Your approach partly works:</p>
<p>First, if you slice with two integers, pandas indexes by position (like <code>iloc</code>):</p>
<pre><code>print (arr[2:3])
six 30
dtype: int64
print (arr.iloc[2:3])
six 30
dtype: int64
</code></pre>
<p>And if you slice with two labels, pandas selects by label (like <code>loc</code>):</p>
<pre><code>print (arr['six':'eight'])
six 30
eight 40
dtype: int64
print (arr.loc['six':'eight'])
six 30
eight 40
dtype: int64
</code></pre>
<p>Selecting with mixed values is not implemented, so an error is raised - pandas tries to select by label (like <code>loc</code>) but finds an integer, which is what selecting by position uses.</p> | python|pandas|slice | 2 |
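<p>To sidestep the positional-vs-label ambiguity entirely, it is usually clearer to be explicit with <code>iloc</code>/<code>loc</code> - a small sketch on the same series:</p>

```python
import pandas as pd

arr = pd.Series([10, 20, 30, 40], index=[2, 3, 'six', 'eight'])

by_position = arr.iloc[2:3]           # positional slicing, end-exclusive
by_label = arr.loc['six':'eight']     # label slicing, end-inclusive
```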
523 | 56,497,242 | Grouping rows in list with multiple columns | <p>I have this in <code>pandas</code>:</p>
<pre><code> a b c
0 A 1 6
1 A 2 5
2 B 3 4
3 B 4 3
4 B 5 2
5 C 6 1
</code></pre>
<p>and I want to tranform it to this:</p>
<pre><code> a b c
0 A [1, 2] [6, 5]
1 B [3, 4, 5] [4, 3, 3]
2 C [6] [1]
</code></pre>
<p>What is the most efficient way to do this?</p> | <p>Ok so it is:</p>
<pre><code>df = df.groupby('a').agg({'b': list, 'c':list}).reset_index()
</code></pre>
<p>If there is anything better, you can let me know.</p> | python|python-3.x|pandas | 0 |
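<p>If every non-key column should become a list, the per-column dict can be replaced by a single callable - a sketch on the same data:</p>

```python
import pandas as pd

df = pd.DataFrame({'a': list('AABBBC'),
                   'b': [1, 2, 3, 4, 5, 6],
                   'c': [6, 5, 4, 3, 2, 1]})

# `list` is applied to every remaining column of each group
out = df.groupby('a', as_index=False).agg(list)
```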
524 | 25,500,217 | Groupby Sum ignoring few columns | <p>In this DataFrame I would like to groupby 'Location' and get the sum of 'Score' but I wouldn't want 'Lat','Long' & 'Year' to be affected in the process;</p>
<pre><code>sample = pd.DataFrame({'Location':['A','B','C','A','B','C'],
'Year':[2001,2002,2003,2001,2002,2003],
'Lat':[24,32,14,24,32,14],
'Long':[81,85,79,81,85,79],
'Score':[123,234,10,25,46,11]})
grouped = sample.groupby(['Location']).sum().reset_index()
</code></pre>
<p><code>grouped</code> gives me this;</p>
<pre><code> Location Lat Long Score Year
0 A 48 162 148 4002
1 B 64 170 280 4004
2 C 28 158 21 4006
</code></pre>
<p>But I'm looking for this result;</p>
<pre><code> Location Lat Long Score Year
0 A 24 81 148 2001
1 B 32 85 280 2002
2 C 12 79 21 2003
</code></pre> | <p>You have to provide some form of aggregation method for the other columns. But you can use <code>mean</code>, <code>first</code> or <code>last</code> in this case, which would all work.</p>
<pre><code>grouped = sample.groupby(['Location']).agg({'Lat': 'first',
'Long': 'first',
'Score': 'sum',
'Year': 'first'}).reset_index()
</code></pre>
<p>Gives:</p>
<pre><code> Location Score Lat Long Year
0 A 148 24 81 2001
1 B 280 32 85 2002
2 C 21 14 79 2003
</code></pre>
<p>Note that you can also provide your own function instead of the build-in functions in Pandas which can be identified with a string. </p>
<p>It messes up the order of columns, if you care about that simply index with:</p>
<pre><code>grouped[['Location', 'Lat', 'Long', 'Score', 'Year']]
</code></pre> | python|pandas | 6 |
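<p>On reasonably recent pandas (0.25+), named aggregation expresses the same thing and also keeps the column order fixed, so no re-indexing is needed - a sketch:</p>

```python
import pandas as pd

sample = pd.DataFrame({'Location': ['A', 'B', 'C', 'A', 'B', 'C'],
                       'Year': [2001, 2002, 2003, 2001, 2002, 2003],
                       'Lat': [24, 32, 14, 24, 32, 14],
                       'Long': [81, 85, 79, 81, 85, 79],
                       'Score': [123, 234, 10, 25, 46, 11]})

# output column name = (input column, aggregation), in the order written
grouped = sample.groupby('Location', as_index=False).agg(
    Lat=('Lat', 'first'),
    Long=('Long', 'first'),
    Score=('Score', 'sum'),
    Year=('Year', 'first'),
)
```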
525 | 25,675,301 | Using rolling_apply with a function that requires 2 arguments in Pandas | <p>I'm trying to use rollapply with a formula that requires 2 arguments. To my knowledge the only way (unless you create the formula from scratch) to calculate kendall tau correlation, with standard tie correction included is:</p>
<pre><code>>>> import scipy
>>> x = [5.05, 6.75, 3.21, 2.66]
>>> y = [1.65, 26.5, -5.93, 7.96]
>>> z = [1.65, 2.64, 2.64, 6.95]
>>> print scipy.stats.stats.kendalltau(x, y)[0]
0.333333333333
</code></pre>
<p>I'm also aware of the problem with rollapply and taking two arguments, as documented here:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/21025821/python-custom-function-using-rolling-apply-for-pandas">Related Question 1</a></li>
<li><a href="https://github.com/pydata/pandas/issues/5071" rel="nofollow">Github Issue</a></li>
<li><a href="https://stackoverflow.com/questions/17998284/how-to-compute-rolling-rank-correlation-using-pandas">Related Question 2</a></li>
</ul>
<p>Still, I'm struggling to find a way to do the kendalltau calculation on a dataframe with multiple columns on a rolling basis.</p>
<p>My dataframe is something like this</p>
<pre><code>A = pd.DataFrame([[1, 5, 2], [2, 4, 4], [3, 3, 1], [4, 2, 2], [5, 1, 4]],
columns=['A', 'B', 'C'], index = [1, 2, 3, 4, 5])
</code></pre>
<p>Trying to create a function that does this</p>
<pre><code>In [1]:function(A, 3) # A is df, 3 is the rolling window
Out[2]:
A B C AB AC BC
1 1 5 2 NaN NaN NaN
2 2 4 4 NaN NaN NaN
3 3 3 1 -0.99 -0.33 0.33
4 4 2 2 -0.99 -0.33 0.33
5 5 1 4 -0.99 0.99 -0.99
</code></pre>
<p>In a very preliminary approach I entertained the idea of defining the function like this:</p>
<pre><code>def tau1(x):
y = np.array(A['A']) # keep one column fix and run it in the other two
tau, p_value = sp.stats.kendalltau(x, y)
return tau
A['AB'] = pd.rolling_apply(A['B'], 3, lambda x: tau1(x))
</code></pre>
<p>Of course it didn't work. I got:</p>
<pre><code>ValueError: all keys need to be the same shape
</code></pre>
<p>I understand it's not a trivial problem. I appreciate any input.</p> | <p><a href="https://github.com/pydata/pandas/issues/5071#ref-pullrequest-27077199" rel="noreferrer">As of Pandas 0.14</a>, <code>rolling_apply</code> only passes NumPy arrays to the function. A possible workaround is to pass <code>np.arange(len(A))</code> as the first argument to <code>rolling_apply</code>, so that the <code>tau</code> function receives <em>the index</em> of the rows you wish to use. Then within the <code>tau</code> function,</p>
<pre><code>B = A[[col1, col2]].iloc[idx]
</code></pre>
<p>returns a DataFrame containing all the rows required.</p>
<hr>
<pre><code>import numpy as np
import pandas as pd
import scipy.stats as stats
import itertools as IT
A = pd.DataFrame([[1, 5, 2], [2, 4, 4], [3, 3, 1], [4, 2, 2], [5, 1, 4]],
columns=['A', 'B', 'C'], index = [1, 2, 3, 4, 5])
for col1, col2 in IT.combinations(A.columns, 2):
def tau(idx):
B = A[[col1, col2]].iloc[idx]
return stats.kendalltau(B[col1], B[col2])[0]
A[col1+col2] = pd.rolling_apply(np.arange(len(A)), 3, tau)
print(A)
</code></pre>
<p>yields</p>
<pre><code> A B C AB AC BC
1 1 5 2 NaN NaN NaN
2 2 4 4 NaN NaN NaN
3 3 3 1 -1 -0.333333 0.333333
4 4 2 2 -1 -0.333333 0.333333
5 5 1 4 -1 1.000000 -1.000000
</code></pre> | python|numpy|pandas|scipy|dataframe | 6 |
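<p>Note that <code>pd.rolling_apply</code> was removed in later pandas versions, and the <code>Rolling.apply</code> method that replaced it still only sees one column at a time. A plain loop over windows is a version-proof, if slower, sketch of the same computation:</p>

```python
import numpy as np
import pandas as pd
import scipy.stats as stats

A = pd.DataFrame({'A': [1, 2, 3, 4, 5], 'B': [5, 4, 3, 2, 1]})

def rolling_tau(df, col1, col2, window):
    # NaN for the first window-1 rows, then one tau per trailing window
    out = [np.nan] * (window - 1)
    for end in range(window, len(df) + 1):
        chunk = df.iloc[end - window:end]
        out.append(stats.kendalltau(chunk[col1], chunk[col2])[0])
    return pd.Series(out, index=df.index)

A['AB'] = rolling_tau(A, 'A', 'B', 3)
```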
526 | 25,528,320 | MS SQL Server Management Studio export to CSV introduces extra character when reading from pandas | <p>I'm using MS SQL Server Management Studio and I have a simple table with the following data:</p>
<pre><code>CountryId CommonName FormalName
--------- ---------- ----------
1 Afghanistan Islamic State of Afghanistan
2 Albania Republic of Albania
3 Algeria People's Democratic Republic of Algeria
4 Andorra Principality of Andorra
</code></pre>
<p>I use "Save Results As" to save this data into <code>countries.csv</code> using the default UTF8 encoding. Then I go into iPython and read it into a data frame using pandas:</p>
<pre><code>df = pd.read_csv("countries.csv")
</code></pre>
<p>If I do</p>
<pre><code>df.columns
</code></pre>
<p>I get:</p>
<pre><code>Index([u'CountryId', u'CommonName', u'FormalName'], dtype='object')
</code></pre>
<p>The weird thing is that when I copy the column names, paste it into a new cell, and press Enter, I get:</p>
<pre><code>u'\ufeffCountryId', u'CommonName', u'FormalName'
</code></pre>
<p>An unicode character <code>\ufeff</code> shows up in the beginning of the first column name.</p>
<p>I tried the procedure with different tables and every time I got the extra character. And it happens to the first column name only.</p>
<p>Can anyone explain to me why the extra unicode character showed up? </p> | <p>Try using the <code>encoding = "utf-8-sig"</code> option with <code>read_csv</code>. For example:</p>
<pre><code>df = pd.read_csv("countries.csv", encoding = "utf-8-sig")
</code></pre>
<p>That should get it to ignore the Unicode Byte Order Mark (BOM) at the start of the CSV file. The use of a BOM is unnecessary here, as UTF-8 files don't have a byte order, but Microsoft tools like to use it as a magic number to identify UTF-8 encoded text files.</p> | python|pandas|ssms | 3 |
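<p>A self-contained demonstration: the bytes below mimic what SSMS writes (a UTF-8 BOM followed by the header row), and the temporary path is just for the example:</p>

```python
import codecs
import tempfile
import pandas as pd

# build a small CSV file that starts with the UTF-8 BOM, like SSMS output
raw = codecs.BOM_UTF8 + b"CountryId,CommonName\r\n1,Afghanistan\r\n"
with tempfile.NamedTemporaryFile(suffix=".csv", delete=False) as f:
    f.write(raw)
    path = f.name

plain = pd.read_csv(path, encoding="utf-8")      # BOM typically leaks into the header
fixed = pd.read_csv(path, encoding="utf-8-sig")  # BOM stripped cleanly
```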
527 | 26,325,652 | Fastest way to keep one element over two from long numpy arrays? | <p>Let's consider a numpy array </p>
<pre><code>a = array([1,2,25,13,10,9,4,5])
</code></pre>
<p>containing an even number of elements.
I need to keep only one element of the array every two at random:
either the first or the second, then either the third or the fourth, etc.
For example, using a, it should result in:</p>
<pre><code>c = array([1,13,9,5])
d = array([2,13,10,4])
e = array([2,25,10,5])
</code></pre>
<p>I have to do this on long arrays of hundreds of elements and on thousands of arrays inside huge loops. Is there a faster algorithm than iterating over each pair and keeping or deleting one of the two using <code>pair_index+random.randint(0,1)</code>?
A generalised method that keeps one element every three, four, etc. would be nice ;-)
Thanks</p>
<p>results:</p>
<pre><code>import timeit
import numpy
def soluce1():
k=2
a = numpy.array([1,2,25,13,10,9,4,5])
aa = a.reshape(-1, k)
i = numpy.random.randint(k, size = aa.shape[0])
return numpy.choose(i, aa.T)
def soluce2():
k=2
a = numpy.array([1,2,25,13,10,9,4,5])
w = len(a) // k
i = numpy.random.randint(0, 2, w) + numpy.arange(0, 2 * w, 2)
return a[i]
def random_skip():
a= numpy.array([1,2,25,13,10,9,4,5])
k=2
idx = numpy.arange(0, len(a), k)
idx += numpy.random.randint(k, size=len(idx))
idx = numpy.clip(idx, 0, len(a)-1)
return a[idx]
> ts1=timeit.timeit(stmt='soluce1()',setup='from __main__ import soluce1',number=10000)
> --> 161 µs
> ts2=timeit.timeit(stmt='soluce2()',setup='from __main__ import soluce2',number=10000)
> --> 159 µs
> ts3=timeit.timeit(stmt='random_skip()',setup='from __main__ import random_skip',number=10000)
> --> 166 µs
</code></pre>
<p>Seem to be equivalent proposals. Thanks again all.</p> | <p>You can select the elements using fancy indexing, <code>a[idx]</code>:</p>
<pre><code>def random_skip(a, skipsize=2):
idx = np.arange(0, len(a), skipsize)
idx += np.random.randint(skipsize, size=len(idx))
idx = np.clip(idx, 0, len(a)-1)
return a[idx]
In [141]: a = array([1,2,25,13,10,9,4,5])
In [142]: random_skip(a)
Out[142]: array([ 1, 13, 9, 4])
In [143]: random_skip(a, skipsize=3)
Out[143]: array([1, 9, 5])
In [144]: random_skip(a, skipsize=4)
Out[144]: array([ 1, 10])
</code></pre>
<p><code>idx = np.arange(0, len(a), skipsize)</code> selects the first item in each group.</p>
<p><code>idx += np.random.randint(skipsize, size=len(idx))</code>
randomizes the index to some item in each group.</p>
<p><code>idx = np.clip(idx, 0, len(a)-1)</code> protects the index from going out of bounds in case the skipsize is not a multiple of the length of <code>a</code>.</p> | python|arrays|numpy | 4 |
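<p>As a quick sanity check (a sketch using the question's sample array), every element of the output should come from its own group of <code>skipsize</code> consecutive elements:</p>

```python
import numpy as np

def random_skip(a, skipsize=2):
    idx = np.arange(0, len(a), skipsize)
    idx += np.random.randint(skipsize, size=len(idx))
    idx = np.clip(idx, 0, len(a) - 1)
    return a[idx]

a = np.array([1, 2, 25, 13, 10, 9, 4, 5])
for _ in range(100):                     # repeat: the choice is random
    out = random_skip(a)
    for i, v in enumerate(out):
        assert v in a[2 * i:2 * i + 2]   # element i came from pair i
```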
528 | 66,829,925 | Reading nested JSON file in Pandas dataframe | <p>I have a JSON file with the following structure (it's not the complete json file, but the structure is the same):</p>
<pre><code>{"data":[{"referenced_tweets":[{"type":"retweeted","id":"xxxxxxx"}],"text":"abcdefghijkl","created_at":"2020-03-09T00:11:41.000Z","author_id":"xxxxx","id":"xxxxxxxxxxx"},{"referenced_tweets":[{"type":"retweeted","id":"xxxxxxxxxxxx"}],"text":"abcdefghijkl","created_at":"2020-03-09T00:11:41.000Z","author_id":"xxxxxxxx","id":"xxxxxxxxxxx"}]}
.....
//The rest of json continues with the same structure, but referenced_tweets is not always present
</code></pre>
<p><strong>My question:</strong> How can I load this data into a dataframe with these columns: <code>type</code>, <code>id(referenced_tweet id)</code>, <code>text</code>, <code>created_at</code>, <code>author_id</code>, and <code>id (tweet id)</code>?</p>
<p><strong>What I could do so far:</strong> I could get the following columns:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: center;">referenced_tweets</th>
<th style="text-align: center;">text</th>
<th style="text-align: center;">cerated_at</th>
<th style="text-align: center;">author_id</th>
<th style="text-align: center;">id (tweet id)</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center;">[{'type': 'xx', 'id': 'xxx'}]</td>
<td style="text-align: center;">xxx</td>
<td style="text-align: center;">xxxx</td>
<td style="text-align: center;">xxxxx</td>
<td style="text-align: center;">xxxxxxxxxxxx</td>
</tr>
</tbody>
</table>
</div>
<p>Here is the code to get the above table:</p>
<pre><code>with open('Test_SampleRetweets.json') as json_file:
data_list = json.load(json_file)
df1 = json_normalize(data_list, 'data')
df1.head()
</code></pre>
<p>However, I'd like to get the <code>type</code> and <code>id</code> (in referenced_tweets) in separate columns and I could get the following so far:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: center;">type</th>
<th style="text-align: center;">id (referenced_tweet id)</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center;">xxxx</td>
<td style="text-align: center;">xxxxxxxxxxxxxxxxxxxxxxx</td>
</tr>
</tbody>
</table>
</div>
<p>and here is the code to get the above table:</p>
<pre><code>df2 = json_normalize(data_list, record_path=['data','referenced_tweets'], errors='ignore')
df2.head()
</code></pre>
<p><strong>What is the problem?</strong> I'd like to get everything in one table, i.e., a table similar to the first one here but with <code>type</code> and <code>id</code> in separate columns (like the 2nd table). So, the final columns should be : <code>type</code>, <code>id (referenced_tweet id)</code>, <code>text</code>, <code>created_at</code>, <code>author_id</code>, and <code>id (tweet id)</code>.</p> | <pre><code>import pandas as pd
with open('Test_SampleRetweets.json') as json_file:
raw_data = json.load(json_file)
data = []
for item in raw_data["data"]:
item["tweet_id"] = item["id"]
item.update(item["referenced_tweets"][0])
del item["referenced_tweets"]
data.append(item)
df1 = pd.DataFrame(data)
print(df1.head())
</code></pre> | python|json|pandas|nested|tweets | 1 |
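<p>Alternatively, <code>pd.json_normalize</code> can produce the same flat table directly via its <code>record_path</code> and <code>meta</code> parameters. A sketch with made-up values, assuming every record contains <code>referenced_tweets</code> (records missing that key would need to be filtered out first):</p>

```python
import pandas as pd

raw_data = {"data": [
    {"referenced_tweets": [{"type": "retweeted", "id": "r1"}],
     "text": "abc", "created_at": "2020-03-09T00:11:41.000Z",
     "author_id": "a1", "id": "t1"},
]}

df1 = pd.json_normalize(
    raw_data["data"],
    record_path="referenced_tweets",                 # expands type / referenced id
    meta=["text", "created_at", "author_id", "id"],  # kept from the parent tweet
    meta_prefix="tweet_",                            # avoids two clashing 'id' columns
)
print(sorted(df1.columns))
# ['id', 'tweet_author_id', 'tweet_created_at', 'tweet_id', 'tweet_text', 'type']
```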
529 | 66,877,578 | How to pass multiple values from a panda df to pyspark SQL using IN OPERATOR | <p>The output of my df returns three distinct values as below</p>
<pre><code>print(df["ID"])
</code></pre>
<p>returns three ID's <code>1</code>,<code>2</code> and <code>3</code>.</p>
<p>I want to pass these values within a pyspark SQL</p>
<pre><code> Query = 'select col 1 from temptable where ID IN (*need to pass the ID's here*)
</code></pre>
<p>Any ideas on how this can be achieved?</p> | <p>If <code>Query</code> is a string then convert <code>df["ID"]</code> to string</p>
<p>For example</p>
<pre><code>','.join( df['ID'].astype(str).to_list() )
</code></pre>
<p>gives string</p>
<pre><code>'1,2,3'
</code></pre>
<p>And then you can use it in string with query using ie. f-string.</p>
<hr />
<p>Minimal working example</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'ID':[1,2,3]})
text = ','.join( df['ID'].astype(str).to_list() )
Query = f'SELECT col 1 FROM temptable WHERE ID IN ({text})'
print(Query)
</code></pre>
<p>Result:</p>
<pre><code>'SELECT col 1 FROM temptable WHERE ID IN (1,2,3)'
</code></pre>
<hr />
<p><strong>EDIT:</strong></p>
<p>This also works for me</p>
<pre><code>text = ','.join( df['ID'].astype(str) )
</code></pre>
<p>or</p>
<pre><code>text = df['ID'].astype(str).str.cat(sep=',')
</code></pre>
<p>or</p>
<pre><code>item = df[ ['ID'] ].astype(str).agg(','.join) #, axis=0)
text = item['ID']
</code></pre>
<p>or</p>
<pre><code>item = df[ ['ID'] ].astype(str).apply(lambda column: ','.join(column)) #, axis=0)
text = item['ID']
</code></pre> | python|pandas|pyspark|apache-spark-sql | 1 |
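<p>One caveat worth noting (a hedged sketch, not part of the original answer): if the IDs are strings rather than numbers, each value needs to be quoted for the SQL to stay valid:</p>

```python
import pandas as pd

df = pd.DataFrame({"ID": ["a1", "b2", "c3"]})
# wrap each value in single quotes before joining
text = ",".join(f"'{v}'" for v in df["ID"].astype(str))
query = f"SELECT col1 FROM temptable WHERE ID IN ({text})"
print(query)
# SELECT col1 FROM temptable WHERE ID IN ('a1','b2','c3')
```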
530 | 67,173,745 | Group by on pandas datetime with null values | <p>So im looking at manipulating data to perform autoregression forecasting on the data. I have managed to group by the week without any issues, however the weeks that do not have any flagged values is left out of the created dataframe. The shape of the data frame is (28, 141), meaning only 28 weeks are grouped, the missing weeks of the year need to show null so the shape is (52, 141)</p>
<p>Thanks in advance.</p>
<p><a href="https://i.stack.imgur.com/XYOYS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XYOYS.png" alt="enter image description here" /></a></p> | <p>You can do a <strong>right</strong> <strong>join</strong> to set of dates you want.</p>
<pre><code>import pandas as pd
df = pd.DataFrame({"date":pd.date_range("1-feb-2020", freq="4d", periods=28)})
(df.groupby(df.date.dt.to_period("w")).count()
.join(pd.DataFrame(index=pd.date_range("1-jan-2020", "31-dec-2020", freq="3d").to_period("w").unique())
,how="right")
)
</code></pre> | pandas|datetime|jupyter-notebook|grouping | 0 |
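<p>An equivalent idiom (a sketch with illustrative dates, not from the original answer) is to <code>reindex</code> the grouped result against the full set of weekly periods, which makes the missing weeks explicit as <code>NaN</code>:</p>

```python
import pandas as pd

df = pd.DataFrame({"date": pd.date_range("2020-02-01", freq="4D", periods=28)})
weekly = df.groupby(df["date"].dt.to_period("W")).size()

all_weeks = pd.period_range("2020-01-01", "2020-12-31", freq="W")
full = weekly.reindex(all_weeks)   # weeks with no rows become NaN
print(len(full), int(full.isna().sum()))
```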
531 | 67,103,144 | How to convert index and values to a proper dataframe with callable column names? Python Pandas | <p>I am working on this dataset where I have used the <code>sort_values()</code> function to get two distinct columns: the <code>index</code> and the <code>values</code>. I can even rename the index and the values columns. However, if I rename the dataset columns and assign everything to a new dataframe, I am not able to call the index column with the name that I assigned to it earlier.</p>
<pre><code>pm_freq = df["payment_method"].value_counts()
pm_freq = pm_freq.head().to_frame()
pm_freq.index.name="Method"
pm_freq.rename(columns={"payment_method":"Frequency"},inplace=True)
pm_freq
</code></pre>
<p>Now I want to call it like this:</p>
<pre><code>pm_freq["Method"]
</code></pre>
<p>But there's an error which states:</p>
<blockquote>
<p>" KeyError: 'Method'"</p>
</blockquote>
<p>Can someone please help me out with this?</p> | <p>Check out the comment here, not sure if still correct:
<a href="https://stackoverflow.com/a/18023468/15600610">https://stackoverflow.com/a/18023468/15600610</a></p> | python|python-3.x|pandas|dataframe | 0 |
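<p>In short (a hedged sketch with made-up data): after <code>pm_freq.index.name = "Method"</code>, the name <code>Method</code> labels the <em>index</em>, not a column, so <code>pm_freq["Method"]</code> raises <code>KeyError</code>. Calling <code>reset_index()</code> promotes it to a regular column:</p>

```python
import pandas as pd

s = pd.Series(["cash", "card", "card", "cash", "card"])
pm_freq = s.value_counts().to_frame(name="Frequency")
pm_freq.index.name = "Method"

# pm_freq["Method"] would raise KeyError here: 'Method' names the index
pm_freq = pm_freq.reset_index()    # index becomes an ordinary column
print(pm_freq["Method"].tolist())  # ['card', 'cash']
```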
532 | 66,796,929 | find all - to find all occurrence of matching pattern one column of a data frame to other and get the corresponding value | <p>I am working on a requirement, there are 2 CSV as below -</p>
<p>CSV.csv</p>
<pre><code> Short Description Category
Device is DOWN! Server Down
CPU Warning Monitoron XSSXSXSXSXSX.com CPU Utilization
CPU Warning Monitoron XSSXSXSXSXSX.com CPU Utilization
CPU Warning Monitoron XSSXSXSXSXSX.com CPU Utilization
CPU Warning Monitoron XSSXSXSXSXSX.com CPU Utilization
Device Performance Alerts was triggered on Physical memory Memory Utilization
Device Performance Alerts was triggered on Physical memory Memory Utilization
Device Performance Alerts was triggered on Physical memory Memory Utilization
Disk Space Is Lowon ;E: Disk Space Utilization
Disk Space Is Lowon;C: Disk Space Utilization
Network Interface Down Interface Down
and reference.csv
Category Complexity
Server Down Simple
Network Interface down Complex
Drive Cleanup Windows Medium
CPU Utilization Medium
Memory Utilization Medium
Disk Space Utilization Unix Simple
Windows Service Restart Medium
UNIX Service Restart Medium
Web Tomcat Instance Restart Simple
Expected Output
Short Description Category Complexity
Device is DOWN! Server Down Simple
CPU Warning Monitoron XSSXSXSXSXSX.com CPU Utilization Medium
CPU Warning Monitoron XSSXSXSXSXSX.com CPU Utilization Medium
CPU Warning Monitoron XSSXSXSXSXSX.com CPU Utilization Medium
CPU Warning Monitoron XSSXSXSXSXSX.com CPU Utilization Medium
Device Performance Alerts was triggered on Physical memory Memory Utilization Medium
Device Performance Alerts was triggered on Physical memory Memory Utilization Medium
Device Performance Alerts was triggered on Physical memory Memory Utilization Medium
Disk Space Is Lowon ;E: Disk Space Utilization Medium
Disk Space Is Lowon;C: Disk Space Utilization Medium
Network Interface Down Interface Down Complex
</code></pre>
<p>Now, I need query <code>CSV1.csv</code> and pick values of <code>'Category'</code> and find for all possible match in <code>Category</code> column of <code>reference.csv</code> and get the corresponding <code>'Complexity'</code> from <code>reference.csv</code> and put the data against each category of <code>CSV1.csv</code>.</p>
<p>I am using <code>findall</code> to achieve this, but I am unable to get the expected result. Is there any better way to achieve the same?</p>
<p>I tried using <code>dict</code> functions, but that did not give the expected result either.</p>
<pre class="lang-py prettyprint-override"><code>my_dict = dict(zip(reference_df['Category'].values, reference_df['Complexity'].values))
def match_key(key, default_value):
for d_key in my_dict.keys():
if key in d_key or d_key in key:
return my_dict[d_key]
return default_value
CSV1_df['Complexity'] = CSV1_df['Category'].apply(lambda x: match_key(x, 'default'))
</code></pre>
<p><strong>Explanation:</strong></p>
<ol>
<li>Build a <code>dict</code> by zipping <em>Category</em> and <em>Complexity</em> columns in reference <code>Dataframe</code>, i.e. <code>{'Server Down': 'Simple', 'Network Interface down': 'Complex'...}</code></li>
<li>Use <code>apply</code> and a <code>lambda</code> function to get corresponding <em>Complexity</em> values from the dictionary using each <em>Category</em> value in CSV1 <code>Dataframe</code> as key</li>
<li>We define a function to find if <em>Category</em> value in CSV1 <code>Dataframe</code> is a substring of any key in the dictionary or wise-versa and use it in <code>apply</code></li>
<li>Save it to new column in <em>CSV1</em> <code>Dataframe</code></li>
</ol> | python|python-3.x|pandas|dataframe|findall | 1 |
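<p>As a design note: if the categories matched <em>exactly</em> (no substring logic needed), a plain left merge would replace the lookup loop entirely. A minimal sketch with a subset of the sample rows:</p>

```python
import pandas as pd

csv1 = pd.DataFrame({
    "Short Description": ["Device is DOWN!", "CPU Warning Monitoron XSSXSXSXSXSX.com"],
    "Category": ["Server Down", "CPU Utilization"],
})
ref = pd.DataFrame({
    "Category": ["Server Down", "CPU Utilization", "Memory Utilization"],
    "Complexity": ["Simple", "Medium", "Medium"],
})

# exact-match join on Category; unmatched rows would get NaN Complexity
out = csv1.merge(ref, on="Category", how="left")
print(out["Complexity"].tolist())  # ['Simple', 'Medium']
```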
533 | 47,490,576 | How to do a rolling week-to-date sum in python's Pandas | <p>Suppose I have the following code:</p>
<pre><code>import pandas as pd
frame = pd.DataFrame(
{
"Date":["11/1/2017","11/2/2017","11/3/2017","11/4/2017",
"11/5/2017","11/6/2017","11/7/2017","11/8/2017","11/9/2017","11/10/2017","11/11/2017",
"11/12/2017"],
"Day":["Wed","Thr","Fri","Sat",
"Sun","Mon","Tue","Wed","Thr","Fri","Sat",
"Sun"],
"Sales":[5,5,10,5,
5,10,5,5,15,5,0,
5]
})
frame.reset_index()
print(frame)
print()
frame['RollingSum']=frame["Sales"].rolling(window=7, min_periods = 1, win_type='boxcar').sum()
print(frame)
</code></pre>
<p>Which prints the following output:</p>
<pre><code> Date Day Sales
0 11/1/2017 Wed 5
1 11/2/2017 Thr 5
2 11/3/2017 Fri 10
3 11/4/2017 Sat 5
4 11/5/2017 Sun 5
5 11/6/2017 Mon 10
6 11/7/2017 Tue 5
7 11/8/2017 Wed 5
8 11/9/2017 Thr 15
9 11/10/2017 Fri 5
10 11/11/2017 Sat 0
11 11/12/2017 Sun 5
Date Day Sales RollingSum
0 11/1/2017 Wed 5 5.0
1 11/2/2017 Thr 5 10.0
2 11/3/2017 Fri 10 20.0
3 11/4/2017 Sat 5 25.0
4 11/5/2017 Sun 5 30.0
5 11/6/2017 Mon 10 40.0
6 11/7/2017 Tue 5 45.0
7 11/8/2017 Wed 5 45.0
8 11/9/2017 Thr 15 55.0
9 11/10/2017 Fri 5 50.0
10 11/11/2017 Sat 0 45.0
11 11/12/2017 Sun 5 45.0
</code></pre>
<p>Now suppose that rather than a flat 7 day window, I want the left endpoint to be the most recent Sunday (or first day in the data set)... how could I achieve that? Desired output:</p>
<pre><code> Date Day Sales WTDSum
0 11/1/2017 Wed 5 5.0
1 11/2/2017 Thr 5 10.0
2 11/3/2017 Fri 10 20.0
3 11/4/2017 Sat 5 25.0
4 11/5/2017 Sun 5 5.0
5 11/6/2017 Mon 10 15.0
6 11/7/2017 Tue 5 20.0
7 11/8/2017 Wed 5 25.0
8 11/9/2017 Thr 15 40.0
9 11/10/2017 Fri 5 45.0
10 11/11/2017 Sat 0 45.0
11 11/12/2017 Sun 5 5.0
</code></pre> | <p>We using <code>cumsum</code> twice in group and group calculation</p>
<pre><code>df['WTDSum']=df.groupby(df.Day.eq('Sun').cumsum()).Sales.cumsum()
df
Out[520]:
Date Day Sales WTDSum
0 11/1/2017 Wed 5 5
1 11/2/2017 Thr 5 10
2 11/3/2017 Fri 10 20
3 11/4/2017 Sat 5 25
4 11/5/2017 Sun 5 5
5 11/6/2017 Mon 10 15
6 11/7/2017 Tue 5 20
7 11/8/2017 Wed 5 25
8 11/9/2017 Thr 15 40
9 11/10/2017 Fri 5 45
10 11/11/2017 Sat 0 45
11 11/12/2017 Sun 5 5
</code></pre> | python|pandas | 4 |
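<p>If the <code>Day</code> column weren't available, the same grouping key can be derived from the date itself (a sketch assuming <code>Date</code> is parsed as datetimes; in pandas, <code>dayofweek == 6</code> is Sunday):</p>

```python
import pandas as pd

frame = pd.DataFrame({
    "Date": pd.date_range("2017-11-01", periods=12),
    "Sales": [5, 5, 10, 5, 5, 10, 5, 5, 15, 5, 0, 5],
})
# a new group starts on each Sunday, taken from the datetime rather than a label
week_id = frame["Date"].dt.dayofweek.eq(6).cumsum()
frame["WTDSum"] = frame.groupby(week_id)["Sales"].cumsum()
print(frame["WTDSum"].tolist())
# [5, 10, 20, 25, 5, 15, 20, 25, 40, 45, 45, 5]
```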
534 | 47,475,611 | Fastest coincidence matrix | <p>I have two arrays and I want to compute a list/array of coincidences. That is, a list of all the indices i, j so that a[i] == b[j]. This is my code now:</p>
<pre><code>b = np.array([3, 5, 6, 4])
a = np.array([1, 2, 3, 4])
np.array([[i, j] for i in range(a.size) for j in range(b.size) if a[i] == b[j]])
</code></pre>
<p>Is there a faster (maybe numpy-powered) way to do this?</p> | <h3>Approach #1</h3>
<p>One approach would be using <code>np.in1d</code> -</p>
<pre><code>m_a = np.in1d(a,b)
I = np.flatnonzero(m_a)
J = np.flatnonzero(np.in1d(b, a[m_a]))
</code></pre>
<p>Sample input, output -</p>
<pre><code>In [367]: a
Out[367]: array([1, 2, 3, 4])
In [368]: b
Out[368]: array([3, 5, 6, 4])
In [370]: I
Out[370]: array([2, 3])
In [371]: J
Out[371]: array([0, 3])
</code></pre>
<h3>Approach #2</h3>
<p>Another straight-forward but memory heavy way would be with <code>broadcasting</code> -</p>
<pre><code>I,J = np.nonzero(a[:,None] == b)
</code></pre>
<h3>Approach #3</h3>
<p>For the case where we have no duplicates within the input arrays, we could use <code>np.searchsorted</code>. There are two variants here - One for sorted <code>a</code>, and another for a generic <code>a</code>.</p>
<p><strong>Variant #1 :</strong> For sorted <code>a</code> -</p>
<pre><code>idx = np.searchsorted(a, b)
idx[idx==a.size] = 0
mask = a[idx] == b
I = np.searchsorted(a,b[mask])
J = np.flatnonzero(mask)
</code></pre>
<p><strong>Variant #2 :</strong> For this generic variant case, we need to use argsort indices of <code>a</code> -</p>
<pre><code>sidx = a.argsort()
a_sort = a[sidx]
idx = np.searchsorted(a_sort, b)
idx[idx==a.size] = 0
mask = a_sort[idx] == b
I = sidx[np.searchsorted(a_sort,b[mask])]
J = np.flatnonzero(mask)
</code></pre> | python|numpy | 2 |
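<p>A quick check (a sketch using the question's arrays) that the broadcasting approach reproduces the list-comprehension baseline from the question:</p>

```python
import numpy as np

a = np.array([1, 2, 3, 4])
b = np.array([3, 5, 6, 4])

I, J = np.nonzero(a[:, None] == b)
baseline = [(i, j) for i in range(a.size) for j in range(b.size) if a[i] == b[j]]

assert list(zip(I.tolist(), J.tolist())) == baseline
print(list(zip(I.tolist(), J.tolist())))  # [(2, 0), (3, 3)]
```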
535 | 68,048,889 | Why do I have to write both 'read' and 'r' to write a file by using pandas | <pre><code>import pandas as pd
import numpy as np
df = pd.read_csv(r'C:\Users\OneDrive\Desktop\Python\Python_coursera\Course 1 - Notebook Resources\resources\week-2\datasets\census.csv')
</code></pre>
<p>If I omit the 'r', I cannot read the csv file. Is it normal to write both 'read' and 'r'? Because the course tutorial I followed did not introduce it.</p>
<p>If I just do not code 'r', this message comes out</p>
<pre><code>SyntaxError: (unicode error) 'unicodeescape' codec can't decode bytes in position 2-3: truncated \UXXXXXXXX escape
</code></pre>
<p>Also, if I move this file to my desktop and try to read it, I cannot read it at all. Do I need other code for reading?</p>
<pre><code>>>> df = pd.read_csv('Desktop\census.csv')
FileNotFoundError: [Errno 2] No such file or directory: 'census.csv'
</code></pre> | <p>The <code>r</code> preceding a string literal in Python indicates that it's a <a href="https://docs.python.org/3/reference/lexical_analysis.html#string-and-bytes-literals" rel="nofollow noreferrer">raw string</a>. This allows it to treat the backslashes (<code>\</code>) as literal backslashes, instead of Unicode escapes. This feature is commonly used to write regex patterns. It has nothing to do with reading. Very unfortunately, Windows chose to use <code>\</code> as their path separator. To write Windows literal paths in Python you can use a raw string as you've done. Alteratively, just write it using the regular <code>/</code> path separator (i.e. <code>'C:/Users/OneDrive/Desktop/Python/Python_coursera/Course 1 - Notebook Resources/resources/week-2/datasets/census.csv'</code>). This won't work for low-level system APIs, but Python will generally handle converting the path separator to a system-appropriate one for you.</p>
<p>As for why <code>'Desktop\census.csv'</code> didn't work:</p>
<ol>
<li>It should be either <code>r'Desktop\census.csv'</code> or <code>'Desktop/census.csv'</code></li>
<li>Your program's current directory is not the directory containing your <code>Desktop</code> directory, so the relative path is incorrect. To check what directory your Python program is running in, you can use the following code: <code>import os; os.getcwd()</code></li>
</ol> | python|pandas|dataframe|csv | 2 |
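<p>To make the raw-string point concrete (a small sketch; the paths are illustrative):</p>

```python
# '\t' is an escape for one tab character; r'\t' is two literal characters
print(len("\t"), len(r"\t"))  # 1 2

# in a non-raw Windows path, '\U' starts a \UXXXXXXXX Unicode escape,
# which is exactly what triggers the 'unicodeescape' SyntaxError
path_raw = r"C:\Users\OneDrive\Desktop\census.csv"  # literal backslashes
path_fwd = "C:/Users/OneDrive/Desktop/census.csv"   # forward slashes also fine
print(path_raw == "C:\\Users\\OneDrive\\Desktop\\census.csv")  # True
```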
536 | 68,088,504 | My accuracy stays around the same, what can I do to fix this | <p>After finally fixing all the errors this code gave me, I have stumbled upon a new problem. This time it is the working model that supplies me with it. Here is the code I have created, this is now my third Deep Learning code I made and I am having a lot of fun making it, however, because I am a beginner in Python in general, some ideas are hard to grasp.</p>
<pre><code>import pandas as pd
from sklearn.model_selection import train_test_split
import keras as kr
from keras import layers
from keras import Sequential
from keras.layers import Dense
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
import tensorflow as tf
from sklearn.preprocessing import LabelEncoder
from pandas import DataFrame
config = tf.compat.v1.ConfigProto()
config.gpu_options.allow_growth = True
session = tf.compat.v1.InteractiveSession(config=config)
pd.set_option('display.max_columns', None)
headers = ['id', 'rated', 'created_at', 'last_move_at', 'turns', 'victory_status', 'winner', 'increment_code',
'white_id', 'white_rating', 'black_id', 'black_rating', 'moves', 'opening_eco', 'opening_name',
'opening_ply']
data = pd.read_csv(r'C:\games.csv', header=None, names=headers)
dataset = DataFrame(data)
dd = dataset.drop([0])
df = dd.drop(columns=['id', 'rated', 'opening_name', 'created_at', 'last_move_at', 'increment_code', 'white_id',
'black_id', 'opening_ply', 'opening_name', 'turns', 'victory_status', 'moves', 'opening_eco'],
axis=1)
df['winner'] = df['winner'].map({'black': 0, 'white': 1})
y = df['winner']
encoder = LabelEncoder()
encoder.fit(y)
encoded_y = encoder.transform(y)
X = df.drop('winner', axis=1)
X = X.astype("float32")
X_train, X_test, y_train, y_test = train_test_split(X, encoded_y, test_size=0.2)
sc = MinMaxScaler()
scaled_X_train = sc.fit_transform(X_train)
scaled_X_test = sc.fit_transform(X_test)
model = Sequential()
model.add(Dense(2, input_dim=2, activation='relu'))
model.add(tf.keras.Input(shape=(12, 2)))
model.add(Dense(4, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(16, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss=kr.losses.binary_crossentropy, optimizer='adam',
metrics=['accuracy'])
history = model.fit(scaled_X_train, y_train, batch_size=50, epochs=100, verbose=1, validation_data=(scaled_X_test,
y_test))
print(history.history)
score = model.evaluate(scaled_X_train, y_train, verbose=1)
</code></pre>
<p>My code seems to work fine with the first few epochs an increase in the accuracy. After that however, the accuracy doesn't seem to be making any progress anymore and lands on a modest accuracy of around 0.610, or specifically as seen below. With no idea on how to get this to be higher, I have come to you to ask you the question: 'How do I fix this?'</p>
<pre><code>Epoch 1/100
321/321 [==============================] - 0s 2ms/step - loss: 0.6386 - accuracy: 0.5463 - val_loss: 0.6208 - val_accuracy: 0.5783
Epoch 2/100
321/321 [==============================] - 0s 925us/step - loss: 0.6098 - accuracy: 0.6091 - val_loss: 0.6078 - val_accuracy: 0.5960
Epoch 3/100
321/321 [==============================] - 0s 973us/step - loss: 0.6055 - accuracy: 0.6102 - val_loss: 0.6177 - val_accuracy: 0.5833
Epoch 4/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6042 - accuracy: 0.6129 - val_loss: 0.6138 - val_accuracy: 0.5850
Epoch 5/100
321/321 [==============================] - 0s 973us/step - loss: 0.6041 - accuracy: 0.6106 - val_loss: 0.6233 - val_accuracy: 0.5763
Epoch 6/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6046 - accuracy: 0.6097 - val_loss: 0.6276 - val_accuracy: 0.5733
Epoch 7/100
321/321 [==============================] - 0s 973us/step - loss: 0.6033 - accuracy: 0.6086 - val_loss: 0.6238 - val_accuracy: 0.5733
Epoch 8/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6023 - accuracy: 0.6116 - val_loss: 0.6202 - val_accuracy: 0.5770
Epoch 9/100
321/321 [==============================] - 0s 973us/step - loss: 0.6030 - accuracy: 0.6091 - val_loss: 0.6210 - val_accuracy: 0.5738
Epoch 10/100
321/321 [==============================] - 0s 973us/step - loss: 0.6028 - accuracy: 0.6098 - val_loss: 0.6033 - val_accuracy: 0.5932
Epoch 11/100
321/321 [==============================] - 0s 973us/step - loss: 0.6022 - accuracy: 0.6094 - val_loss: 0.6166 - val_accuracy: 0.5780
Epoch 12/100
321/321 [==============================] - 0s 925us/step - loss: 0.6025 - accuracy: 0.6104 - val_loss: 0.6026 - val_accuracy: 0.5947
Epoch 13/100
321/321 [==============================] - 0s 925us/step - loss: 0.6021 - accuracy: 0.6099 - val_loss: 0.6243 - val_accuracy: 0.5733
Epoch 14/100
321/321 [==============================] - 0s 876us/step - loss: 0.6027 - accuracy: 0.6098 - val_loss: 0.6176 - val_accuracy: 0.5775
Epoch 15/100
321/321 [==============================] - 0s 925us/step - loss: 0.6029 - accuracy: 0.6091 - val_loss: 0.6286 - val_accuracy: 0.5690
Epoch 16/100
321/321 [==============================] - 0s 876us/step - loss: 0.6025 - accuracy: 0.6083 - val_loss: 0.6104 - val_accuracy: 0.5840
Epoch 17/100
321/321 [==============================] - 0s 876us/step - loss: 0.6021 - accuracy: 0.6102 - val_loss: 0.6039 - val_accuracy: 0.5897
Epoch 18/100
321/321 [==============================] - 0s 973us/step - loss: 0.6021 - accuracy: 0.6113 - val_loss: 0.6046 - val_accuracy: 0.5887
Epoch 19/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6019 - accuracy: 0.6083 - val_loss: 0.6074 - val_accuracy: 0.5860
Epoch 20/100
321/321 [==============================] - 0s 971us/step - loss: 0.6021 - accuracy: 0.6089 - val_loss: 0.6194 - val_accuracy: 0.5738
Epoch 21/100
321/321 [==============================] - 0s 876us/step - loss: 0.6025 - accuracy: 0.6099 - val_loss: 0.6093 - val_accuracy: 0.5857
Epoch 22/100
321/321 [==============================] - 0s 925us/step - loss: 0.6020 - accuracy: 0.6097 - val_loss: 0.6154 - val_accuracy: 0.5773
Epoch 23/100
321/321 [==============================] - 0s 973us/step - loss: 0.6027 - accuracy: 0.6104 - val_loss: 0.6044 - val_accuracy: 0.5895
Epoch 24/100
321/321 [==============================] - 0s 973us/step - loss: 0.6015 - accuracy: 0.6112 - val_loss: 0.6305 - val_accuracy: 0.5710
Epoch 25/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6016 - accuracy: 0.6114 - val_loss: 0.6067 - val_accuracy: 0.5867
Epoch 26/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6017 - accuracy: 0.6102 - val_loss: 0.6140 - val_accuracy: 0.5800
Epoch 27/100
321/321 [==============================] - 0s 973us/step - loss: 0.6025 - accuracy: 0.6075 - val_loss: 0.6190 - val_accuracy: 0.5755
Epoch 28/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6029 - accuracy: 0.6087 - val_loss: 0.6337 - val_accuracy: 0.5666
Epoch 29/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6021 - accuracy: 0.6095 - val_loss: 0.6089 - val_accuracy: 0.5840
Epoch 30/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6026 - accuracy: 0.6106 - val_loss: 0.6273 - val_accuracy: 0.5690
Epoch 31/100
321/321 [==============================] - 0s 925us/step - loss: 0.6020 - accuracy: 0.6083 - val_loss: 0.6146 - val_accuracy: 0.5785
Epoch 32/100
321/321 [==============================] - 0s 973us/step - loss: 0.6017 - accuracy: 0.6116 - val_loss: 0.6093 - val_accuracy: 0.5837
Epoch 33/100
321/321 [==============================] - 0s 973us/step - loss: 0.6025 - accuracy: 0.6096 - val_loss: 0.6139 - val_accuracy: 0.5780
Epoch 34/100
321/321 [==============================] - 0s 973us/step - loss: 0.6022 - accuracy: 0.6087 - val_loss: 0.6090 - val_accuracy: 0.5850
Epoch 35/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6018 - accuracy: 0.6096 - val_loss: 0.6127 - val_accuracy: 0.5810
Epoch 36/100
321/321 [==============================] - 0s 876us/step - loss: 0.6024 - accuracy: 0.6091 - val_loss: 0.6001 - val_accuracy: 0.5975
Epoch 37/100
321/321 [==============================] - 0s 973us/step - loss: 0.6027 - accuracy: 0.6104 - val_loss: 0.6083 - val_accuracy: 0.5862
Epoch 38/100
321/321 [==============================] - 0s 973us/step - loss: 0.6020 - accuracy: 0.6090 - val_loss: 0.6073 - val_accuracy: 0.5875
Epoch 39/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6023 - accuracy: 0.6109 - val_loss: 0.6149 - val_accuracy: 0.5785
Epoch 40/100
321/321 [==============================] - 0s 973us/step - loss: 0.6022 - accuracy: 0.6085 - val_loss: 0.6175 - val_accuracy: 0.5758
Epoch 41/100
321/321 [==============================] - 0s 973us/step - loss: 0.6017 - accuracy: 0.6079 - val_loss: 0.6062 - val_accuracy: 0.5865
Epoch 42/100
321/321 [==============================] - 0s 973us/step - loss: 0.6018 - accuracy: 0.6097 - val_loss: 0.6060 - val_accuracy: 0.5867
Epoch 43/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6018 - accuracy: 0.6082 - val_loss: 0.6074 - val_accuracy: 0.5862
Epoch 44/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6019 - accuracy: 0.6096 - val_loss: 0.6150 - val_accuracy: 0.5785
Epoch 45/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6014 - accuracy: 0.6112 - val_loss: 0.6241 - val_accuracy: 0.5740
Epoch 46/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6023 - accuracy: 0.6111 - val_loss: 0.6118 - val_accuracy: 0.5815
Epoch 47/100
321/321 [==============================] - 0s 973us/step - loss: 0.6017 - accuracy: 0.6073 - val_loss: 0.6110 - val_accuracy: 0.5835
Epoch 48/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6021 - accuracy: 0.6074 - val_loss: 0.6107 - val_accuracy: 0.5835
Epoch 49/100
321/321 [==============================] - 0s 973us/step - loss: 0.6020 - accuracy: 0.6097 - val_loss: 0.6081 - val_accuracy: 0.5862
Epoch 50/100
321/321 [==============================] - 0s 973us/step - loss: 0.6014 - accuracy: 0.6078 - val_loss: 0.6214 - val_accuracy: 0.5770
Epoch 51/100
321/321 [==============================] - 0s 973us/step - loss: 0.6023 - accuracy: 0.6093 - val_loss: 0.6011 - val_accuracy: 0.5952
Epoch 52/100
321/321 [==============================] - 0s 973us/step - loss: 0.6028 - accuracy: 0.6094 - val_loss: 0.6013 - val_accuracy: 0.5950
Epoch 53/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6022 - accuracy: 0.6079 - val_loss: 0.6158 - val_accuracy: 0.5770
Epoch 54/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6019 - accuracy: 0.6103 - val_loss: 0.6080 - val_accuracy: 0.5862
Epoch 55/100
321/321 [==============================] - 0s 973us/step - loss: 0.6020 - accuracy: 0.6095 - val_loss: 0.6180 - val_accuracy: 0.5775
Epoch 56/100
321/321 [==============================] - 0s 973us/step - loss: 0.6018 - accuracy: 0.6099 - val_loss: 0.6106 - val_accuracy: 0.5842
Epoch 57/100
321/321 [==============================] - 0s 973us/step - loss: 0.6022 - accuracy: 0.6078 - val_loss: 0.6232 - val_accuracy: 0.5740
Epoch 58/100
321/321 [==============================] - 0s 973us/step - loss: 0.6017 - accuracy: 0.6099 - val_loss: 0.6155 - val_accuracy: 0.5788
Epoch 59/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6026 - accuracy: 0.6119 - val_loss: 0.6150 - val_accuracy: 0.5775
Epoch 60/100
321/321 [==============================] - 0s 973us/step - loss: 0.6014 - accuracy: 0.6092 - val_loss: 0.5982 - val_accuracy: 0.6012
Epoch 61/100
321/321 [==============================] - 0s 973us/step - loss: 0.6025 - accuracy: 0.6087 - val_loss: 0.6022 - val_accuracy: 0.5947
Epoch 62/100
321/321 [==============================] - 0s 973us/step - loss: 0.6017 - accuracy: 0.6099 - val_loss: 0.6265 - val_accuracy: 0.5735
Epoch 63/100
321/321 [==============================] - 0s 899us/step - loss: 0.6019 - accuracy: 0.6099 - val_loss: 0.6172 - val_accuracy: 0.5775
Epoch 64/100
321/321 [==============================] - 0s 982us/step - loss: 0.6018 - accuracy: 0.6099 - val_loss: 0.6116 - val_accuracy: 0.5815
Epoch 65/100
321/321 [==============================] - 0s 969us/step - loss: 0.6015 - accuracy: 0.6099 - val_loss: 0.6230 - val_accuracy: 0.5738
Epoch 66/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6019 - accuracy: 0.6094 - val_loss: 0.6058 - val_accuracy: 0.5870
Epoch 67/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6019 - accuracy: 0.6103 - val_loss: 0.6250 - val_accuracy: 0.5723
Epoch 68/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6015 - accuracy: 0.6109 - val_loss: 0.6129 - val_accuracy: 0.5790
Epoch 69/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6016 - accuracy: 0.6099 - val_loss: 0.6061 - val_accuracy: 0.5867
Epoch 70/100
321/321 [==============================] - 0s 2ms/step - loss: 0.6031 - accuracy: 0.6084 - val_loss: 0.5999 - val_accuracy: 0.5980
Epoch 71/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6020 - accuracy: 0.6080 - val_loss: 0.6065 - val_accuracy: 0.5862
Epoch 72/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6015 - accuracy: 0.6097 - val_loss: 0.6193 - val_accuracy: 0.5745
Epoch 73/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6024 - accuracy: 0.6081 - val_loss: 0.6183 - val_accuracy: 0.5753
Epoch 74/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6017 - accuracy: 0.6094 - val_loss: 0.6165 - val_accuracy: 0.5778
Epoch 75/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6016 - accuracy: 0.6091 - val_loss: 0.6008 - val_accuracy: 0.5955
Epoch 76/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6021 - accuracy: 0.6094 - val_loss: 0.6235 - val_accuracy: 0.5733
Epoch 77/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6020 - accuracy: 0.6083 - val_loss: 0.6178 - val_accuracy: 0.5773
Epoch 78/100
321/321 [==============================] - 0s 973us/step - loss: 0.6016 - accuracy: 0.6099 - val_loss: 0.6232 - val_accuracy: 0.5715
Epoch 79/100
321/321 [==============================] - 0s 973us/step - loss: 0.6024 - accuracy: 0.6052 - val_loss: 0.6262 - val_accuracy: 0.5705
Epoch 80/100
321/321 [==============================] - 0s 973us/step - loss: 0.6022 - accuracy: 0.6050 - val_loss: 0.6150 - val_accuracy: 0.5785
Epoch 81/100
321/321 [==============================] - 0s 973us/step - loss: 0.6011 - accuracy: 0.6111 - val_loss: 0.6177 - val_accuracy: 0.5755
Epoch 82/100
321/321 [==============================] - 0s 973us/step - loss: 0.6025 - accuracy: 0.6087 - val_loss: 0.6124 - val_accuracy: 0.5783
Epoch 83/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6018 - accuracy: 0.6090 - val_loss: 0.6107 - val_accuracy: 0.5833
Epoch 84/100
321/321 [==============================] - 0s 973us/step - loss: 0.6025 - accuracy: 0.6102 - val_loss: 0.6110 - val_accuracy: 0.5800
Epoch 85/100
321/321 [==============================] - 0s 973us/step - loss: 0.6018 - accuracy: 0.6094 - val_loss: 0.6077 - val_accuracy: 0.5845
Epoch 86/100
321/321 [==============================] - 0s 973us/step - loss: 0.6016 - accuracy: 0.6069 - val_loss: 0.6109 - val_accuracy: 0.5798
Epoch 87/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6020 - accuracy: 0.6092 - val_loss: 0.6117 - val_accuracy: 0.5798
Epoch 88/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6021 - accuracy: 0.6089 - val_loss: 0.6105 - val_accuracy: 0.5808
Epoch 89/100
321/321 [==============================] - 0s 973us/step - loss: 0.6020 - accuracy: 0.6063 - val_loss: 0.6190 - val_accuracy: 0.5753
Epoch 90/100
321/321 [==============================] - 0s 973us/step - loss: 0.6022 - accuracy: 0.6083 - val_loss: 0.6211 - val_accuracy: 0.5740
Epoch 91/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6023 - accuracy: 0.6058 - val_loss: 0.6117 - val_accuracy: 0.5785
Epoch 92/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6019 - accuracy: 0.6077 - val_loss: 0.6200 - val_accuracy: 0.5740
Epoch 93/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6014 - accuracy: 0.6078 - val_loss: 0.6230 - val_accuracy: 0.5735
Epoch 94/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6018 - accuracy: 0.6087 - val_loss: 0.6113 - val_accuracy: 0.5810
Epoch 95/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6019 - accuracy: 0.6086 - val_loss: 0.6203 - val_accuracy: 0.5755
Epoch 96/100
321/321 [==============================] - 0s 1ms/step - loss: 0.6013 - accuracy: 0.6088 - val_loss: 0.6273 - val_accuracy: 0.5693
Epoch 97/100
321/321 [==============================] - 0s 925us/step - loss: 0.6019 - accuracy: 0.6071 - val_loss: 0.6023 - val_accuracy: 0.5927
Epoch 98/100
321/321 [==============================] - 0s 973us/step - loss: 0.6023 - accuracy: 0.6072 - val_loss: 0.6093 - val_accuracy: 0.5810
Epoch 99/100
321/321 [==============================] - 0s 925us/step - loss: 0.6012 - accuracy: 0.6091 - val_loss: 0.6018 - val_accuracy: 0.5937
Epoch 100/100
321/321 [==============================] - 0s 973us/step - loss: 0.6015 - accuracy: 0.6092 - val_loss: 0.6255 - val_accuracy: 0.5710
</code></pre> | <p>Your model is not big enough to handle the data.<br />
So, try increasing your model size.<br />
However, a bigger model is more vulnerable to overfitting; adding some <a href="https://keras.io/api/layers/regularization_layers/dropout/" rel="nofollow noreferrer">Dropout</a> layers helps counter that.</p>
<pre><code>model = Sequential()
model.add(Input(shape=(12, 2)))
model.add(Dense(48, activation='relu'))
model.add(Dropout(0.20))
model.add(Dense(32, activation='relu'))
model.add(Dense(24, activation='relu'))
model.add(Dropout(0.15))
model.add(Dense(16, activation='relu'))
model.add(Dropout(0.1))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
</code></pre>
<p>Furthermore, a lower learning rate lets the model reach better accuracy and loss.<br />
You can tweak the learning rate of the Adam optimizer when compiling:</p>
<pre><code>model.compile(keras.optimizers.Adam(lr=0.0002), loss=keras.losses.binary_crossentropy, metrics=['accuracy'])
</code></pre>
<p>The default learning rate of the Adam optimizer is 0.001.</p> | python|tensorflow|machine-learning|keras|neural-network | 0
537 | 68,238,839 | replace/remove everything before a specified string | <p>I have a string in a pandas dataframe.</p>
<p><code>This is target 1</code></p>
<p><code>We also have target 2</code></p>
<p>I want to remove all text before the <code>target</code> string so it turns into:</p>
<p><code>target 1</code></p>
<p><code>target 2</code></p>
<p>I've been looking through regex patterns, but I'm struggling to find a solution. I've looked at lstrip() but that's for whitespace. It won't always be two words to be removed, so I can't split by spaces and use a tail select [-2].</p>
<p>This is as far as I have got so far, but I'm at a loss of the best way to replace everything before <code>target</code> and retain <code>target</code>.</p>
<p><code>df['point_id'].str.replace()</code></p> | <p>To remove everything before the first <code>target</code>, you can use <code>^.*?(?=target)</code>, where:</p>
<ol>
<li><code>^</code> matches the beginning of string;</li>
<li><code>.*?</code> matches everything non-greedily;</li>
<li><code>(?=target)</code> is a lookahead asserting that the next characters are <code>target</code>, without consuming them.</li>
</ol>
<hr />
<pre><code>df = pd.DataFrame({'col': ['This is target 1', 'we also have target 2', 'With target 2 and target 3']})
df.col.str.replace('^.*?(?=target)', '')
0 target 1
1 target 2
2 target 2 and target 3
Name: col, dtype: object
</code></pre> | pandas|replace | 1 |
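The same lookahead also works with plain `re` on ordinary strings, which can help when debugging the pattern outside pandas — a small sketch:

```python
import re

# ^.*?(?=target): non-greedily consume characters from the start of the
# string until the lookahead sees the literal "target"; the lookahead
# itself consumes nothing, so "target" survives the substitution.
pattern = re.compile(r'^.*?(?=target)')

print(pattern.sub('', 'This is target 1'))        # target 1
print(pattern.sub('', 'We also have target 2'))   # target 2
print(pattern.sub('', 'no match here'))           # unchanged: no "target"
```

Note that recent pandas versions default `Series.str.replace` to literal matching, so pass `regex=True` there for a pattern like this.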
538 | 68,301,270 | How do I vstack or concatenate matricies of different shapes? | <p>In a situation like the one below, how do I vstack the two matrices?</p>
<pre><code>import numpy as np
a = np.array([[3,3,3],[3,3,3],[3,3,3]])
b = np.array([[2,2],[2,2],[2,2]])
a = np.vstack([a, b])
Output:
ValueError: all the input array dimensions for the concatenation axis must match exactly, but along dimension 1, the array at index 0 has size 3 and the array at index 1 has size 2
</code></pre>
<p>The output I would like would look like this:</p>
<pre><code>a = array([[[3, 3, 3],
[3, 3, 3],
[3, 3, 3]],
[[2, 2],
[2, 2],
[2, 2]]])
</code></pre>
<p>My goal is to then to loop over the content of the stacked matrices, index each matrix and call a function on a specific row.</p>
<pre><code>for matrix in a:
row = matrix[1]
print(row)
Output:
[3, 3, 3]
[2, 2]
</code></pre> | <p>Be careful with those "Numpy is faster" claims. If you already have arrays, and make full use of array methods, <code>numpy</code> is indeed faster. But if you start with lists, or have to use Python level iteration (as you do in <code>Pack...</code>), the <code>numpy</code> version might well be slower.</p>
<p>Just doing a time test on the <code>Pack</code> step:</p>
<pre><code>In [12]: timeit Pack_Matrices_with_NaN([a,b,c],5)
221 µs ± 9.02 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
</code></pre>
<p>Compare that with fetching the first row of each array with a simple list comprehension:</p>
<pre><code>In [13]: [row[1] for row in [a,b,c]]
Out[13]: [array([3., 3., 3.]), array([2., 2.]), array([4., 4., 4., 4.])]
In [14]: timeit [row[1] for row in [a,b,c]]
808 ns ± 2.17 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
</code></pre>
<p>200 µs compared to less than 1 µs!</p>
<p>And timing your <code>Unpack</code>:</p>
<pre><code>In [21]: [Unpack_Matrix_with_NaN(packed_matrices.reshape(3,3,5),i)[1,:] for i in range(3)]
...:
Out[21]: [array([3., 3., 3.]), array([2., 2.]), array([4., 4., 4., 4.])]
In [22]: timeit [Unpack_Matrix_with_NaN(packed_matrices.reshape(3,3,5),i)[1,:] for i in ra
...: nge(3)]
199 µs ± 10.8 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
</code></pre> | python|arrays|numpy|concatenation|vstack | 1 |
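The `Pack`/`Unpack` helpers timed above are not shown in the question as excerpted here; the following is a minimal NaN-padding sketch of the same idea — the function names and the fixed width 5 are illustrative, not the asker's exact code:

```python
import numpy as np

def pack_with_nan(matrices, width):
    # Stack 2-D arrays with equal row counts but ragged column counts
    # into one 3-D float array, right-padding each row with NaN.
    packed = np.full((len(matrices), matrices[0].shape[0], width), np.nan)
    for i, m in enumerate(matrices):
        packed[i, :, :m.shape[1]] = m
    return packed

def unpack_row(packed, i, row):
    r = packed[i, row]
    return r[~np.isnan(r)]   # drop the NaN padding again

a = np.full((3, 3), 3.0)
b = np.full((3, 2), 2.0)
c = np.full((3, 4), 4.0)
packed = pack_with_nan([a, b, c], 5)
print([unpack_row(packed, i, 1) for i in range(3)])
# [array([3., 3., 3.]), array([2., 2.]), array([4., 4., 4., 4.])]
```

This reproduces the values the answer's `Out[21]` shows, while keeping everything in one rectangular array.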
539 | 68,440,087 | Is there a way to show activation function in model plots tensorflow ( tf.keras.utils.plot_model() )? | <p>The model plot in TensorFlow shows the shape of input, dtype and layer name. Is there some way to show the type of activation function as well?
If there is some other better way of showing/plotting the neural networks, please tell.</p> | <p>Updating the tensorflow version solved the <em>'Error: bad label' issue</em> when setting both <code>show_shapes</code> and <code>show_layer_activations</code> to True for me (TF version 2.8.0).
Update the tensorflow version with</p>
<pre><code>pip install -U tensorflow
</code></pre>
<p>plot model call</p>
<pre><code>tf.keras.utils.plot_model(model, show_shapes=True, show_layer_activations=True)
</code></pre> | python|tensorflow|neural-network | 2 |
540 | 59,100,865 | trying to append a dense layer to vgg19 network | <p>I am trying to append a dense layer to vgg19 network, but it gives me the below error. Can anyone help me with this?</p>
<pre><code>import tensorflow
from tensorflow.keras.applications.vgg19 import VGG19
model = VGG19()
x = tensorflow.keras.layers.Dense(10,
activation="relu",name="",trainable=True)(model.layers[-1])
model = tensorflow.keras.Model(inputs = model.layers[0], outputs = x)
</code></pre>
<blockquote>
<p>Python 3.7.0 (default, Jun 28 2018, 07:39:16) Type "copyright",
"credits" or "license" for more information.</p>
<p>IPython 7.8.0 -- An enhanced Interactive Python.</p>
<p>runfile('/Users/sadegh/Dropbox/Moosavi Khorzooghi-04/test',
wdir='/Users/sadegh/Dropbox/Moosavi Khorzooghi-04') 2019-11-29
01:51:22.516366: I tensorflow/core/platform/cpu_feature_guard.cc:142]
Your CPU supports instructions that this TensorFlow binary was not
compiled to use: AVX2 FMA 2019-11-29 01:51:22.526913: I
tensorflow/compiler/xla/service/service.cc:168] XLA service
0x7fc84c7a2700 executing computations on platform Host. Devices:
2019-11-29 01:51:22.526926: I
tensorflow/compiler/xla/service/service.cc:175] StreamExecutor
device (0): Host, Default Version Traceback (most recent call last):</p>
<p>File "", line 1, in
runfile('/Users/sadegh/Dropbox/Moosavi Khorzooghi-04/test', wdir='/Users/sadegh/Dropbox/Moosavi Khorzooghi-04')</p>
<p>File
"/opt/anaconda3/lib/python3.7/site-packages/spyder_kernels/customize/spydercustomize.py",
line 827, in runfile
execfile(filename, namespace)</p>
<p>File
"/opt/anaconda3/lib/python3.7/site-packages/spyder_kernels/customize/spydercustomize.py",
line 110, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)</p>
<p>File "/Users/sadegh/Dropbox/Moosavi Khorzooghi-04/test", line 11, in
x = tensorflow.keras.layers.Dense(10, activation="relu",name="",trainable=True)(model.layers[-1])</p>
<p>File
"/opt/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py",
line 887, in <strong>call</strong>
self._maybe_build(inputs)</p>
<p>File
"/opt/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py",
line 2122, in _maybe_build
self.input_spec, inputs, self.name)</p>
<p>File
"/opt/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/input_spec.py",
line 163, in assert_input_compatibility
if x.shape.ndims is None:</p>
<p>AttributeError: 'Dense' object has no attribute 'shape'</p>
</blockquote> | <p>Assuming it's a classification problem, you should use <code>softmax</code> activation instead of <code>relu</code> in the output layer. Also, you can access the input and output of the backbone <code>VGG19</code> model. You would have to manually pool or flatten the output of the base model if you instantiated it with the default settings; instead, you can set <code>pooling="avg"</code> or <code>pooling="max"</code> for global average or max pooling respectively. Moreover, you can use something like the following:</p>
<pre><code>base_model = VGG19(input_shape=(224, 224, 3), weights='imagenet', pooling="avg", include_top=False)
x = Dense(10, activation="softmax", name="output")(base_model.output)
model = Model(inputs=base_model.input, outputs=x)
print(model.summary())
</code></pre> | python|tensorflow|keras | 0 |
541 | 59,337,848 | Why is this simple reindex Panda df function not working? | <p>I'm trying to reindex dataframes within a function but it's not working. It works outside of the function so I'm totally lost. Here's what I'm doing:</p>
<p>Reindexing df2 based on index from df1</p>
<p>Outside of function:</p>
<pre class="lang-py prettyprint-override"><code>df2 = df2.reindex(df1.index)
</code></pre>
<p>This result is what I want, and works. However, within this function:</p>
<pre><code>def reindex_df(a,b):
a = a.reindex(b.index)
</code></pre>
<p>where <code>a = df2</code> and <code>b = df1</code>. </p>
<p>What's going on here? I've researched and thought it was something to do with local vs. global variables, but I tweaked the code (to this) and it's still not working. What am I missing?</p> | <p>Compare the 2 following examples:</p>
<ol>
<li><p>A function <strong>assigning</strong> a new value to a parameter:</p>
<pre><code>def f1(a):
a = a + 1
a = 10
print(f'Before: {a}')
f1(a)
print(f'After: {a}')
</code></pre>
<p>The result is:</p>
<pre><code>Before: 10
After: 10
</code></pre>
<p>so that the assignment in <em>f1</em> is <strong>not</strong> visible outside the function.</p></li>
<li><p>A function <strong>returning</strong> the new value:</p>
<pre><code>def f2(a):
return a + 1
a = 10
print(f'Before: {a}')
a = f2(a)
print(f'After: {a}')
</code></pre>
<p>This time the result is:</p>
<pre><code>Before: 10
After: 11
</code></pre></li>
</ol>
<p>So change your function the same way. It should <strong>return</strong> the new
(reindexed) DataFrame, and when you call it, assign the result
back to the same variable.</p> | python|pandas|python-3.6 | 1
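Applied to the question's function, a short sketch (the column names and indexes are invented for illustration):

```python
import pandas as pd

def reindex_df(a, b):
    # Return the reindexed frame instead of rebinding the parameter:
    # rebinding `a` inside the function is invisible to the caller.
    return a.reindex(b.index)

df1 = pd.DataFrame({'x': [1, 2]}, index=[10, 20])
df2 = pd.DataFrame({'y': [3, 4]}, index=[20, 30])
df2 = reindex_df(df2, df1)   # assign the result back

print(df2)   # index is now [10, 20]; row 10 has NaN, row 20 keeps y=3.0
```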
542 | 59,423,859 | Swapping two numpy arrays element-wise | <p>I have two <code>NumPy</code> arrays <code>l</code> and <code>g</code>, and I want to swap the elements of <code>l</code> that are greater than the corresponding elements in <code>g</code>.</p>
<p>for example:</p>
<pre><code>l = [0,19,1]
g = [1,17,2]
</code></pre>
<p>after the operation</p>
<pre><code>l = [0,17,1]
g = [1,19,2]
</code></pre>
<p>The arrays could be multi-dimensional. How do I do this efficiently in <code>NumPy</code>?</p> | <p>Just use <code>np.minimum</code> and <code>np.maximum</code>:</p>
<pre><code>l = np.array([0,19,1])
g = np.array([1,17,2])
l, g = np.minimum(l, g), np.maximum(l, g)
</code></pre> | python|numpy | 2 |
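An equivalent mask-based variant with `np.where`, shown here to make the element-wise swap explicit — it works unchanged on multi-dimensional arrays too:

```python
import numpy as np

l = np.array([0, 19, 1])
g = np.array([1, 17, 2])

mask = l > g   # True where the pair must be swapped
l, g = np.where(mask, g, l), np.where(mask, l, g)

print(l)   # [ 0 17  1]
print(g)   # [ 1 19  2]
```

`np.minimum`/`np.maximum` is the more direct spelling when, as here, the goal is simply "smaller values in `l`, larger in `g`".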
543 | 59,349,536 | A target array with shape (15000, 250) was passed for an output of shape (None, 1) while using as loss `binary_crossentropy`. What do I do? | <p>I have created a model but I can't run it because of the target array shape and output shape. I am trying to just train it but not sure what to make out of the error.</p>
<p>Error:</p>
<pre><code>---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-9-4c7f3cb9ee70> in <module>()
44 y_train = train_data[10000:]
45
---> 46 fitModel = model.fit(x_train, y_train, epochs=5, batch_size=512, validation_data=(x_val, y_val), verbose=1)
47
48 result = model.evaluate(test_data, test_labels)
3 frames
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training_utils.py in check_loss_and_target_compatibility(targets, loss_fns, output_shapes)
739 raise ValueError('A target array with shape ' + str(y.shape) +
740 ' was passed for an output of shape ' + str(shape) +
--> 741 ' while using as loss `' + loss_name + '`. '
742 'This loss expects targets to have the same shape '
743 'as the output.')
ValueError: A target array with shape (15000, 250) was passed for an output of shape (None, 1) while using as loss `binary_crossentropy`. This loss expects targets to have the same shape as the output.
</code></pre>
<p>I am supposed to get an output of accuracy and execution time. I tried changing values for the output layer but it did not work for me at all.</p>
<p>My code:</p>
<pre><code>import tensorflow as tf
from tensorflow import keras
import numpy as np
import time
start_time = time.time()
data = tf.keras.datasets.imdb
(train_data, train_labels), (test_data, test_labels) = data.load_data(num_words=7500)
word_index = data.get_word_index()
word_index = {k:(v+3) for k, v in word_index.items()}
word_index["<PAD>"] = 0
word_index["<START>"] = 1
word_index["<UNKNOWN>"] = 2
word_index["<UNUSED>"] = 3
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
train_data = keras.preprocessing.sequence.pad_sequences(train_data, value= word_index["<PAD>"], padding="post", maxlen=250)
test_data = keras.preprocessing.sequence.pad_sequences(train_data, value= word_index["<PAD>"], padding="post", maxlen=250)
def decode_review(text):
return " ".join([reverse_word_index.get(i, "?") for i in text])
model = keras.Sequential()
model.add(keras.layers.Embedding(10000, 16))
model.add(keras.layers.GlobalAveragePooling1D())
model.add(keras.layers.Dense(16, activation="relu"))
model.add(keras.layers.Dense(1, activation="sigmoid"))
model.summary()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
x_val = train_data[:10000]
x_train = train_data[10000:]
y_val = train_data[:10000]
y_train = train_data[10000:]
fitModel = model.fit(x_train, y_train, epochs=5, batch_size=512, validation_data=(x_val, y_val), verbose=1)
result = model.evaluate(test_data, test_labels)
print(results)
time1 = time.time() - start_time
start_time = time.time()
print(float(test_acc1) / 1)
print(float(time1) / 1)
</code></pre> | <p>Change </p>
<pre><code>y_val = train_data[:10000]
y_train = train_data[10000:]
</code></pre>
<p>to</p>
<pre><code>y_val = train_labels[:10000]
y_train = train_labels[10000:]
</code></pre> | python|tensorflow|machine-learning|keras|deep-learning | 1 |
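The numbers in the error message line up exactly with slicing the reviews instead of the labels — a quick sanity check with synthetic arrays standing in for the padded IMDB data:

```python
import numpy as np

train_data = np.zeros((25000, 250))   # 25000 padded reviews, 250 tokens each
train_labels = np.zeros(25000)        # one binary label per review

y_wrong = train_data[10000:]          # (15000, 250): the shape in the error
y_right = train_labels[10000:]        # (15000,): matches the Dense(1) output

print(y_wrong.shape, y_right.shape)   # (15000, 250) (15000,)
```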
544 | 44,855,380 | Tensorflow: run error with reader_test.py in models.tutorials.rnn | <p>I use anaconda2 with a python 3.5 based tensorflow-gpu environment on Windows 10. I test the installation of tensorflow (v1.2) by running:</p>
<pre><code>import tensorflow as tf
hello = tf.constant('Hello, TensorFlow!')
sess = tf.Session()
print(sess.run(hello))
</code></pre>
<p>There is no problem with the installation. </p>
<p>Then I further test it by running two of the provided examples: </p>
<pre><code>reader_test.py
ptb_word_lm.py #this is to use LSTM to model penntree bank data
</code></pre>
<p>But the two programs cannot be run successfully:</p>
<p>For the first case:
<a href="https://i.stack.imgur.com/js1zG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/js1zG.png" alt="enter image description here"></a></p>
<p>For the second case:</p>
<pre><code>#implementation in anaconda prompt
(tensorflow-gpu) D:\Research\Manuscript\Simplified LSTM\models-master\models-master\tutorials\rnn\ptb>python ptb_word_lm.py --data_path=D:\simple-examples\data
</code></pre>
<p>Resultant error messages:</p>
<pre><code>2017-06-30 18:06:05.819002: W c:\tf_jenkins\home\workspace\release-win\m\windows-gpu\py\35\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE instructions, but these are available on your machine and could speed up CPU computations.
2017-06-30 18:06:05.819089: W c:\tf_jenkins\home\workspace\release-win\m\windows-gpu\py\35\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE2 instructions, but these are available on your machine and could speed up CPU computations.
2017-06-30 18:06:05.819770: W c:\tf_jenkins\home\workspace\release-win\m\windows-gpu\py\35\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE3 instructions, but these are available on your machine and could speed up CPU computations.
2017-06-30 18:06:05.819816: W c:\tf_jenkins\home\workspace\release-win\m\windows-gpu\py\35\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-06-30 18:06:05.819843: W c:\tf_jenkins\home\workspace\release-win\m\windows-gpu\py\35\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-06-30 18:06:05.819866: W c:\tf_jenkins\home\workspace\release-win\m\windows-gpu\py\35\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-06-30 18:06:05.819889: W c:\tf_jenkins\home\workspace\release-win\m\windows-gpu\py\35\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2017-06-30 18:06:05.819911: W c:\tf_jenkins\home\workspace\release-win\m\windows-gpu\py\35\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
2017-06-30 18:06:06.317871: I c:\tf_jenkins\home\workspace\release-win\m\windows-gpu\py\35\tensorflow\core\common_runtime\gpu\gpu_device.cc:940] Found device 0 with properties:
name: GeForce 940M
major: 5 minor: 0 memoryClockRate (GHz) 1.176
pciBusID 0000:01:00.0
Total memory: 2.00GiB
Free memory: 1.66GiB
2017-06-30 18:06:06.317961: I c:\tf_jenkins\home\workspace\release-win\m\windows-gpu\py\35\tensorflow\core\common_runtime\gpu\gpu_device.cc:961] DMA: 0
2017-06-30 18:06:06.321380: I c:\tf_jenkins\home\workspace\release-win\m\windows-gpu\py\35\tensorflow\core\common_runtime\gpu\gpu_device.cc:971] 0: Y
2017-06-30 18:06:06.322688: I c:\tf_jenkins\home\workspace\release-win\m\windows-gpu\py\35\tensorflow\core\common_runtime\gpu\gpu_device.cc:1030] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce 940M, pci bus id: 0000:01:00.0)
WARNING:tensorflow:Standard services need a 'logdir' passed to the SessionManager
Epoch: 1 Learning rate: 1.000
2017-06-30 18:06:11.106452: E c:\tf_jenkins\home\workspace\release-win\m\windows-gpu\py\35\tensorflow\stream_executor\cuda\cuda_blas.cc:365] failed to create cublas handle: CUBLAS_STATUS_ALLOC_FAILED
2017-06-30 18:06:11.106573: W c:\tf_jenkins\home\workspace\release-win\m\windows-gpu\py\35\tensorflow\stream_executor\stream.cc:1601] attempting to perform BLAS operation using StreamExecutor without BLAS support
Traceback (most recent call last):
File "C:\Users\Y L\Anaconda2\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\client\session.py", line 1139, in _do_call
return fn(*args)
File "C:\Users\Y L\Anaconda2\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\client\session.py", line 1121, in _run_fn
status, run_metadata)
File "C:\Users\Y L\Anaconda2\envs\tensorflow-gpu\lib\contextlib.py", line 66, in __exit__
next(self.gen)
File "C:\Users\Y L\Anaconda2\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 466, in raise_exception_on_not_ok_status
pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors_impl.InternalError: Blas GEMM launch failed : a.shape=(20, 400), b.shape=(400, 800), m=20, n=800, k=400
[[Node: Train/Model/RNN/RNN/multi_rnn_cell/cell_0/cell_0/basic_lstm_cell/basic_lstm_cell/MatMul = MatMul[T=DT_FLOAT, transpose_a=false, transpose_b=false, _device="/job:localhost/replica:0/task:0/gpu:0"](Train/Model/RNN/RNN/multi_rnn_cell/cell_0/cell_0/basic_lstm_cell/basic_lstm_cell/concat, Model/RNN/multi_rnn_cell/cell_0/basic_lstm_cell/kernel/read)]]
[[Node: Train/Model/RNN/RNN/multi_rnn_cell/cell_1/cell_1/basic_lstm_cell/add_39/_123 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_6049_Train/Model/RNN/RNN/multi_rnn_cell/cell_1/cell_1/basic_lstm_cell/add_39", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "ptb_word_lm.py", line 395, in <module>
tf.app.run()
File "C:\Users\Y L\Anaconda2\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\platform\app.py", line 48, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "ptb_word_lm.py", line 381, in main
verbose=True)
File "ptb_word_lm.py", line 310, in run_epoch
vals = session.run(fetches, feed_dict)
File "C:\Users\Y L\Anaconda2\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\client\session.py", line 789, in run
run_metadata_ptr)
File "C:\Users\Y L\Anaconda2\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\client\session.py", line 997, in _run
feed_dict_string, options, run_metadata)
File "C:\Users\Y L\Anaconda2\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\client\session.py", line 1132, in _do_run
target_list, options, run_metadata)
File "C:\Users\Y L\Anaconda2\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\client\session.py", line 1152, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InternalError: Blas GEMM launch failed : a.shape=(20, 400), b.shape=(400, 800), m=20, n=800, k=400
[[Node: Train/Model/RNN/RNN/multi_rnn_cell/cell_0/cell_0/basic_lstm_cell/basic_lstm_cell/MatMul = MatMul[T=DT_FLOAT, transpose_a=false, transpose_b=false, _device="/job:localhost/replica:0/task:0/gpu:0"](Train/Model/RNN/RNN/multi_rnn_cell/cell_0/cell_0/basic_lstm_cell/basic_lstm_cell/concat, Model/RNN/multi_rnn_cell/cell_0/basic_lstm_cell/kernel/read)]]
[[Node: Train/Model/RNN/RNN/multi_rnn_cell/cell_1/cell_1/basic_lstm_cell/add_39/_123 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_6049_Train/Model/RNN/RNN/multi_rnn_cell/cell_1/cell_1/basic_lstm_cell/add_39", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
Caused by op 'Train/Model/RNN/RNN/multi_rnn_cell/cell_0/cell_0/basic_lstm_cell/basic_lstm_cell/MatMul', defined at:
File "ptb_word_lm.py", line 395, in <module>
tf.app.run()
File "C:\Users\Y L\Anaconda2\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\platform\app.py", line 48, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "ptb_word_lm.py", line 357, in main
m = PTBModel(is_training=True, config=config, input_=train_input)
File "ptb_word_lm.py", line 157, in __init__
(cell_output, state) = cell(inputs[:, time_step, :], state)
File "C:\Users\Y L\Anaconda2\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\ops\rnn_cell_impl.py", line 180, in __call__
return super(RNNCell, self).__call__(inputs, state)
File "C:\Users\Y L\Anaconda2\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\layers\base.py", line 441, in __call__
outputs = self.call(inputs, *args, **kwargs)
File "C:\Users\Y L\Anaconda2\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\ops\rnn_cell_impl.py", line 916, in call
cur_inp, new_state = cell(cur_inp, cur_state)
File "C:\Users\Y L\Anaconda2\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\ops\rnn_cell_impl.py", line 180, in __call__
return super(RNNCell, self).__call__(inputs, state)
File "C:\Users\Y L\Anaconda2\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\layers\base.py", line 441, in __call__
outputs = self.call(inputs, *args, **kwargs)
File "C:\Users\Y L\Anaconda2\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\ops\rnn_cell_impl.py", line 383, in call
concat = _linear([inputs, h], 4 * self._num_units, True)
File "C:\Users\Y L\Anaconda2\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\ops\rnn_cell_impl.py", line 1021, in _linear
res = math_ops.matmul(array_ops.concat(args, 1), weights)
File "C:\Users\Y L\Anaconda2\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\ops\math_ops.py", line 1816, in matmul
a, b, transpose_a=transpose_a, transpose_b=transpose_b, name=name)
File "C:\Users\Y L\Anaconda2\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 1217, in _mat_mul
transpose_b=transpose_b, name=name)
File "C:\Users\Y L\Anaconda2\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 767, in apply_op
op_def=op_def)
File "C:\Users\Y L\Anaconda2\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\framework\ops.py", line 2506, in create_op
original_op=self._default_original_op, op_def=op_def)
File "C:\Users\Y L\Anaconda2\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\framework\ops.py", line 1269, in __init__
self._traceback = _extract_stack()
InternalError (see above for traceback): Blas GEMM launch failed : a.shape=(20, 400), b.shape=(400, 800), m=20, n=800, k=400
[[Node: Train/Model/RNN/RNN/multi_rnn_cell/cell_0/cell_0/basic_lstm_cell/basic_lstm_cell/MatMul = MatMul[T=DT_FLOAT, transpose_a=false, transpose_b=false, _device="/job:localhost/replica:0/task:0/gpu:0"](Train/Model/RNN/RNN/multi_rnn_cell/cell_0/cell_0/basic_lstm_cell/basic_lstm_cell/concat, Model/RNN/multi_rnn_cell/cell_0/basic_lstm_cell/kernel/read)]]
[[Node: Train/Model/RNN/RNN/multi_rnn_cell/cell_1/cell_1/basic_lstm_cell/add_39/_123 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_6049_Train/Model/RNN/RNN/multi_rnn_cell/cell_1/cell_1/basic_lstm_cell/add_39", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
</code></pre> | <p>I solved the problem by updating anaconda (<code>conda update --all</code>) and then restarting the PC.</p> | tensorflow|lstm | 1
545 | 45,199,910 | Dates from 1900-01-01 are added to my 'Time' after using df['Time'] = pd.to_datetime(phData['Time'], format='%H:%M:%S') | <p>I am a self-taught coder (for around a year, so new). Here is my data:</p>
<pre><code>phData = pd.read_excel('phone call log & duration.xlsx')
called from called to Date Time Duration in (sec)
0 7722078014 7722012013 2017-07-01 10:00:00 303
1 7722078014 7722052018 2017-07-01 10:21:00 502
2 7722078014 7450120521 2017-07-01 10:23:00 56
The dtypes are:
called from int64
called to int64
Date datetime64[ns]
Time object
Duration in (sec) int64
dtype: object
phData['Time'] = pd.to_datetime(phData['Time'], format='%H:%M:%S')
phData.head(2)
called from called to Date Time Duration in (sec)
0 7722078014 7722012013 2017-07-01 1900-01-01 10:00:00 303
1 7722078014 7722052018 2017-07-01 1900-01-01 10:21:00 502
</code></pre>
<p>I've managed to change the 'Time' to datetime64[ns], but somehow dates have been added; from where, I have no idea. I want to be able to analyse the <code>Date</code> and <code>Time</code> using Pandas, which I'm happy to do: to explore calls made between dates and times, frequency etc. I think I will also be able to save it so it will work in Orange3, but Orange3 won't recognise the Time as a time format. I've tried stripping out the <code>1900-01-01</code> but get an error saying it can only be done if it is an object. I think the Time isn't a <code>datetime</code> but a <code>datetime.time</code>, and I'm not sure why this matters or how to simply have 2 columns, one <code>Date</code> and another <code>Time</code>, that Pandas will recognise for me to mine. I have looked at countless posts and that's where I found how to use <code>pd.to_datetime</code> and that my issue might be <code>datetime.time</code>, but I'm stuck after this.</p> | <p>Pandas doesn't have a Time dtype. You can have either <code>datetime</code> or <code>timedelta</code> dtype.</p>
<p><strong>Option 1</strong>: combine Date and Time into single column:</p>
<pre><code>In [23]: df['TimeStamp'] = pd.to_datetime(df.pop('Date') + ' ' + df.pop('Time'))
In [24]: df
Out[24]:
called from called to Duration in (sec) TimeStamp
0 7722078014 7722012013 303 2017-07-01 10:00:00
1 7722078014 7722052018 502 2017-07-01 10:21:00
2 7722078014 7450120521 56 2017-07-01 10:23:00
</code></pre>
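<p>With everything in one <code>TimeStamp</code> column, the kind of mining described in the question becomes one-liners through the <code>.dt</code> accessor. A sketch on the sample rows (column names assumed from the question, with Date read in as strings):</p>

```python
import pandas as pd

df = pd.DataFrame({'Date': ['2017-07-01', '2017-07-01', '2017-07-01'],
                   'Time': ['10:00:00', '10:21:00', '10:23:00'],
                   'Duration in (sec)': [303, 502, 56]})
df['TimeStamp'] = pd.to_datetime(df.pop('Date') + ' ' + df.pop('Time'))

# e.g. filter calls made in the 10 o'clock hour, or count calls per hour
morning = df[df['TimeStamp'].dt.hour == 10]
per_hour = df.groupby(df['TimeStamp'].dt.hour).size()
```

<p>Orange3 should also accept the combined column, since it is a plain <code>datetime64[ns]</code>.</p>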
<p><strong>Option 2</strong>: convert <code>Date</code> to <code>datetime</code> and <code>Time</code> to <code>timedelta</code> dtype:</p>
<pre><code>In [27]: df.Date = pd.to_datetime(df.Date)
In [28]: df.Time = pd.to_timedelta(df.Time)
In [29]: df
Out[29]:
called from called to Date Time Duration in (sec)
0 7722078014 7722012013 2017-07-01 10:00:00 303
1 7722078014 7722052018 2017-07-01 10:21:00 502
2 7722078014 7450120521 2017-07-01 10:23:00 56
In [30]: df.dtypes
Out[30]:
called from int64
called to int64
Date datetime64[ns]
Time timedelta64[ns]
Duration in (sec) int64
dtype: object
</code></pre> | python|pandas|datetime | 1 |
546 | 44,889,692 | Zscore Normalize column in Dataframe using groupby | <p>I have a dataframe representing customer orders with many columns, two of those being 'user_id' and 'dollar'. </p>
<p>For example:</p>
<pre><code> user_id dollar
0 1 0.34592 5
1 1 0.02857 7
2 1 0.26672 6
3 1 0.34592 5
4 1 0.02857 9
5 1 0.26672 10
6 1 0.34592 6
[...]
7 40 0.02857 20
8 40 0.26672 19
9 40 0.34592 8
10 40 0.02857 18
11 40 0.26672 26
</code></pre>
<p>I want to normalize the value of dollar with respect to the other values in each user's rows. I want the following result for the previous example:</p>
<pre><code> user_id dollar norm_dollar
0 1 0.34592 5 -1.02774024
1 1 0.02857 7 0.07905694
2 1 0.26672 6 -0.47434165
3 1 0.34592 5 -1.02774024
4 1 0.02857 9 1.18585412
5 1 0.26672 10 1.73925271
6 1 0.34592 6 -0.47434165
[...]
7 40 0.02857 20 0.7787612
8 40 0.26672 19 0.57109154
9 40 0.34592 8 -1.71327463
10 40 0.02857 18 0.36342189
</code></pre>
<p>EDIT: </p>
<p>I would like each result to be normalized with respect to each user individually and not the values of the whole column. For example, with user 2, [20, 19, 8, 18] should be normalized as if the mean is the mean of user 2's orders (here 16.25), not the mean of the whole dataframe column.</p>
<p>I know how to do it with one user: </p>
<pre><code>user1 = data.loc[data['user_id']==1]
data.loc[data['user_id']==1]['norm_dollar'] = sp.stats.mstats.zscore(user1['dollar'])
</code></pre>
<p>I tried to do it this way for all the users:</p>
<pre><code>data.dollar.div(sp.stats.mstats.zscore(data.groupby('user_id').dollar))
</code></pre>
<p>But I got an error, do you have any idea on how to proceed?</p>
<p>Thank you</p> | <p>There are different ways to do this (like joining the groupby dataframe back to the original), but I'm starting to like the use of <code>transform</code> for stuff like this.</p>
<p>The syntax is still verbose, but I think it's more readable than the join method.</p>
<pre><code>df['norm_dollar'] = (df['dollar']
- df.groupby('user_id')['dollar'].transform(np.mean)) \
/ df.groupby('user_id')['dollar'].transform(np.std)
</code></pre>
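<p>Since <code>scipy.stats.mstats.zscore</code> defaults to <code>ddof=0</code>, an equivalent single-<code>transform</code> variant looks like this (a sketch on the sample values; the question's unnamed middle column is omitted):</p>

```python
import pandas as pd

df = pd.DataFrame({'user_id': [1, 1, 1, 1, 1, 1, 1, 40, 40, 40, 40, 40],
                   'dollar': [5, 7, 6, 5, 9, 10, 6, 20, 19, 8, 18, 26]})
# z-score within each user, using the population std (ddof=0) to match zscore
df['norm_dollar'] = df.groupby('user_id')['dollar'].transform(
    lambda x: (x - x.mean()) / x.std(ddof=0))
```

<p>This reproduces the expected output above, e.g. <code>-1.02774</code> for the first row.</p>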
<p>If you need to specify degrees of freedom on <code>np.std</code>, you can turn that into</p>
<pre><code>lambda x: np.std(x, ddof=n)
</code></pre> | python|pandas|numpy|scipy | 1 |
547 | 57,243,727 | Gradients in Keras loss function with RNNs | <p>I have a simple test LSTM model:</p>
<pre><code>inputs = Input(shape=(k, m))
layer1 = LSTM(128, activation='relu', return_sequences=True)(inputs)
layer2 = LSTM(128, activation='relu')(layer1)
predictions = Dense(1, activation='linear')(layer2)
model = Model(inputs=inputs, outputs=predictions)
</code></pre>
<p>and a custom loss function that uses output gradients wrt inputs:</p>
<pre><code>def custom_loss(model, input_tensor):
def loss(y_true, y_pred):
grads = K.gradients(model.output, model.input)[0]
loss_f = losses.mean_squared_error(y_true, y_pred) + K.exp(-K.sum(grads))
return loss_f
return loss
</code></pre>
<p>Model training fails with error "Second-order gradient for while loops not supported":</p>
<pre><code>model.compile(optimizer='adam', loss=custom_loss(model_reg, inputs_reg), metrics=['mean_absolute_error'])
model_reg.fit(x_train, y_train, batch_size=32, epochs=20, verbose=1, validation_data=(x_val, y_val))
-----
....
159
160 if op_ctxt.grad_state:
--> 161 raise TypeError("Second-order gradient for while loops not supported.")
162
163 if isinstance(grad, ops.Tensor):
TypeError: Second-order gradient for while loops not supported.
</code></pre>
<p>Why does TF try to compute second-order gradients here? It should be just first order. </p>
<p>The same loss function works well for non-RNN models. </p> | <p>Setting the <code>unroll</code> property helped to resolve the issue:</p>
<pre><code>layer1 = LSTM(128, activation='relu', return_sequences=True, unroll=True)(inputs)
layer2 = LSTM(128, activation='relu', unroll=True)(layer1)
</code></pre> | tensorflow|keras|loss-function | 1 |
548 | 57,283,440 | Compute number of occurrences of each value and Sum another column in Pandas | <p>I have a pandas dataframe with some columns in it. The column I am interested in is something like this,</p>
<pre><code>df['col'] = ['A', 'A', 'B', 'C', 'B', 'A']
</code></pre>
<p>I want to make another column, say <code>col_count</code>, such that it shows the count of that value in <code>col</code> from that index to the end of the column.</p>
<p>The first <code>A</code> in the column should have the value 3 because there are 3 occurrences of <code>A</code> in the column from that index. The second <code>A</code> will have value <code>2</code> and so on. </p>
<p>Finally, I want to get the following result,</p>
<pre><code> col col_count
0 A 3
1 A 2
2 B 2
3 C 1
4 B 1
5 A 1
</code></pre>
<p>How can I do this effectively in pandas? I was able to do this by looping through the dataframe and taking a unique count of that value for a sliced dataframe.</p>
<p>Is there an efficient method to do this? Something without loops, preferably.</p>
<p>Another part of the question is, I have another column like this along with <code>col</code>,</p>
<pre><code>df['X'] = [10, 40, 10, 50, 30, 20]
</code></pre>
<p>I want to sum up this column in the same fashion I wanted to count the column <code>col</code>.</p>
<p>For instance, at index 0, I will have 10 + 40 + 20 as the sum. At index 1, the sum will be 40 + 20. In short, instead of counting, I want to sum up another column.</p>
<p>The result will be like this,</p>
<pre><code> col col_count X X_sum
0 A 3 10 70
1 A 2 40 60
2 B 2 10 40
3 C 1 50 50
4 B 1 30 30
5 A 1 20 20
</code></pre> | <p>Use <code>pandas.Series.groupby</code> with <code>cumcount</code> and <code>cumsum</code>.</p>
<pre><code>g = df[::-1].groupby('col')
df['col_count'] = g.cumcount().add(1)
df['X_sum'] = g['X'].cumsum()
print(df)
</code></pre>
<p>Output:</p>
<pre><code> col X col_count X_sum
0 A 10 3 70
1 A 40 2 60
2 B 10 2 40
3 C 50 1 50
4 B 30 1 30
5 A 20 1 20
</code></pre> | python|pandas | 2 |
549 | 57,195,650 | Non-reproducible results in pytorch after saving and loading the model | <p>I am unable to reproduce my results in PyTorch after saving and loading the model, whereas the in-memory model works as expected. Just for context, I am seeding my libraries and using model.eval to turn off the dropouts, but still the results are not reproducible. Any suggestions if I am missing something? Thanks in advance.</p>
<p>Libraries that I am using:</p>
<pre><code>import matplotlib.pyplot as plt
import torch
from torch import nn
from torch import optim
import torch.nn.functional as F
from torchvision import datasets, transforms
import numpy as np
import random
</code></pre>
<p>Libraries that I am seeding:</p>
<pre><code>manualSeed = 1
np.random.seed(manualSeed)
random.seed(manualSeed)
torch.manual_seed(manualSeed)
random.seed(manualSeed)
</code></pre>
<p>Below are the results for the in-memory and loaded models. </p>
<pre><code>In Memory model Loss : 1.596395881312668, In Memory model Accuracy : tensor(0.3989)
Loaded model Loss : 1.597083057567572, Loaded model Accuracy : tensor(0.3983)
</code></pre> | <p>Since the date that Szymon Maszke posted his response above (2019), a new API has been added, <a href="https://pytorch.org/docs/stable/generated/torch.use_deterministic_algorithms.html" rel="nofollow noreferrer">torch.use_deterministic_algorithms()</a>.</p>
<p>This new function does everything that <code>torch.backends.cudnn.deterministic</code> did (namely, makes CuDNN convolution operations deterministic), plus much more (makes every known normally-nondeterministic function either deterministic or throw an error if a deterministic implementation is not available). CuDNN convolution is only one of the many possible sources of nondeterminism in PyTorch, so <code>torch.use_deterministic_algorithms()</code> should now be used instead of the old <code>torch.backends.cudnn.deterministic</code>.</p>
<p>The link to the <a href="https://pytorch.org/docs/stable/notes/randomness.html" rel="nofollow noreferrer">reproducibility documentation</a> is still relevant. However, note that this page has been changed a fair bit since 2019.</p> | deep-learning|pytorch | 2 |
550 | 23,013,440 | how to read "\n\n" in python module pandas? | <p>There is a data file which has <code>\n\n</code> at the end of every line.<br>
<a href="http://pan.baidu.com/s/1o6jq5q6" rel="nofollow noreferrer">http://pan.baidu.com/s/1o6jq5q6</a><br>
My system:win7+python3.3+R-3.0.3<br>
In R </p>
<pre><code>sessionInfo()
[1] LC_COLLATE=Chinese (Simplified)_People's Republic of China.936
[2] LC_CTYPE=Chinese (Simplified)_People's Republic of China.936
[3] LC_MONETARY=Chinese (Simplified)_People's Republic of China.936
[4] LC_NUMERIC=C
[5] LC_TIME=Chinese (Simplified)_People's Republic of China.936
</code></pre>
<p>In python: chcp 936</p>
<p>I can read it in R. </p>
<pre><code>read.table("test.pandas",sep=",",header=TRUE)
</code></pre>
<p>It is so simple.</p>
<p>And I can read it in python to get almost the same output. </p>
<pre><code>fr=open("g:\\test.pandas","r",encoding="gbk").read()
data=[x for x in fr.splitlines() if x.strip() !=""]
for id,char in enumerate(data):
print(str(id)+","+char)
</code></pre>
<p>When I read it with the pandas module in python, </p>
<pre><code>import pandas as pd
pd.read_csv("test.pandas",sep=",",encoding="gbk")
</code></pre>
<p>I found two problems in the output:<br>
1) how to get right alignment (the problem I have asked about in another post)<br>
<a href="https://stackoverflow.com/questions/23008636/how-to-set-alignment-in-pandas-in-python">how to set alignment in pandas in python with non-ANSI characters</a><br>
2) there is a NaN line between every real data row. </p>
<p>Can I improve my pandas code to get a better display in the console?</p>
<p><img src="https://i.stack.imgur.com/tRQJZ.jpg" alt="enter image description here"><br>
<img src="https://i.stack.imgur.com/iiMFh.jpg" alt="enter image description here"><br>
<img src="https://i.stack.imgur.com/UahIb.jpg" alt="enter image description here"> </p> | <p>Your file when read with <code>open('test.pandas', 'rb')</code> seems to contain '\r\r\n' as its line terminators. Python 3.3 does seem to convert this to '\n\n' while Python 2.7 converts it to '\r\n' when read with <code>open('test.pandas', 'r', encoding='gbk')</code>.</p>
<p><a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html?highlight=lineterminator" rel="nofollow noreferrer">pandas.read_csv</a> does have a lineterminator parameter but it only accepts single character terminators.</p>
<p>What you can do is process the file a bit before passing it to <code>pandas.read_csv()</code>, and you can use <a href="https://docs.python.org/3.3/library/io.html?#io.StringIO" rel="nofollow noreferrer">StringIO</a> which will wrap a string buffer in a file interface so that you don't need to write out a temporary file first.</p>
<pre><code>import pandas as pd
from io import StringIO
with open('test.pandas', 'r', encoding='gbk') as in_file:
contents = in_file.read().replace('\n\n', '\n')
df = pd.read_csv(StringIO(contents))
</code></pre>
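<p>The same trick is easy to sanity-check without the original file, by fabricating the doubled line endings in memory (a sketch):</p>

```python
import pandas as pd
from io import StringIO

raw = "a,b\n\n1,2\n\n3,4\n\n"           # every data line followed by a blank line
df = pd.read_csv(StringIO(raw.replace('\n\n', '\n')))
```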
<p>(I don't have the GBK charset for the output below.)
</p>
<pre><code>>>> df[0:10]
??????? ??? ????????
0 HuangTianhui ?? 1948/05/28
1 ?????? ? 1952/03/27
2 ??? ? 1994/12/09
3 LuiChing ? 1969/08/02
4 ???? ?? 1982/03/01
5 ???? ?? 1983/08/03
6 YangJiabao ? 1988/08/25
7 ?????????????? ?? 1979/07/10
8 ?????? ? 1949/10/20
9 ???»? ? 1951/10/21
</code></pre>
<p>In Python 2.7 <code>StringIO()</code> was in module <code>StringIO</code> instead of <code>io</code>.</p> | python-3.x|pandas | 2 |
556 | 23,393,397 | Code / Loop optimization with pandas for creating two matrixes | <p>I need to optimize this loop, which takes 2.5 seconds. The need is that I call it more than 3000 times in my script.
The aim of this code is to create two matrices which are used afterwards in a linear system.</p>
<p>Has someone any idea in Python or Cython?</p>
<pre><code> ## df is only here for illustration and date_indicatrice changes upon function call
df = pd.DataFrame(0, columns=range(6),
index=pd.date_range(start = pd.datetime(2010,1,1),
end = pd.datetime(2020,1,1), freq="H"))
mat = pd.DataFrame(0,index=df.index,columns=range(6))
mat_bp = pd.DataFrame(0,index=df.index,columns=range(6*2))
date_indicatrice = [(pd.datetime(2010,1,1), pd.datetime(2010,4,1)),
(pd.datetime(2012,5,1), pd.datetime(2019,4,1)),
(pd.datetime(2013,4,1), pd.datetime(2019,4,1)),
(pd.datetime(2014,3,1), pd.datetime(2019,4,1)),
(pd.datetime(2015,1,1), pd.datetime(2015,4,1)),
(pd.datetime(2013,6,1), pd.datetime(2018,4,1))]
timer = time.time()
for j, (d1,d2) in enumerate(date_indicatrice):
result = df[(mat.index>=d1)&(mat.index<=d2)]
result2 = df[(mat.index>=d1)&(mat.index<=d2)&(mat.index.hour>=8)]
mat.loc[result.index,j] = 1.
mat_bp.loc[result2.index,j*2] = 1.
mat_bp[j*2+1] = (1 - mat_bp[j*2]) * mat[j]
print time.time()-timer
</code></pre> | <p>Here you go. I tested the following and I get the same resultant matrices in mat and mat_bp as in your original code, but in 0.07 seconds vs. 1.4 seconds for the original code on my machine.</p>
<p>The real slowdown was due to using result.index and result2.index. Looking up by a datetime is much slower than looking up using an index. I used binary searches where possible to find the right indices.</p>
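<p>The binary-search lookup in isolation, on a sorted <code>DatetimeIndex</code> (a sketch):</p>

```python
import bisect
import pandas as pd

idx = pd.date_range('2010-01-01', '2010-01-02', freq='h')
d1, d2 = pd.Timestamp('2010-01-01 05:30'), pd.Timestamp('2010-01-01 09:00')
start = bisect.bisect_left(idx, d1)   # first position with idx[pos] >= d1
end = bisect.bisect_right(idx, d2)    # one past the last position with idx[pos] <= d2
window = idx[start:end]               # 06:00 through 09:00
```

<p>pandas indexes also expose <code>searchsorted</code>, which does the same lookup natively.</p>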
<pre><code>import pandas as pd
import numpy as np
import time
import bisect
## df is only here for illustration and date_indicatrice changes upon function call
df = pd.DataFrame(0, columns=range(6),
index=pd.date_range(start = pd.datetime(2010,1,1),
end = pd.datetime(2020,1,1), freq="H"))
mat = pd.DataFrame(0,index=df.index,columns=range(6))
mat_bp = pd.DataFrame(0,index=df.index,columns=range(6*2))
date_indicatrice = [(pd.datetime(2010,1,1), pd.datetime(2010,4,1)),
(pd.datetime(2012,5,1), pd.datetime(2019,4,1)),
(pd.datetime(2013,4,1), pd.datetime(2019,4,1)),
(pd.datetime(2014,3,1), pd.datetime(2019,4,1)),
(pd.datetime(2015,1,1), pd.datetime(2015,4,1)),
(pd.datetime(2013,6,1), pd.datetime(2018,4,1))]
timer = time.time()
for j, (d1,d2) in enumerate(date_indicatrice):
ind_start = bisect.bisect_left(mat.index, d1)
ind_end = bisect.bisect_right(mat.index, d2)
inds = np.array(xrange(ind_start, ind_end))
valid_inds = inds[mat.index[ind_start:ind_end].hour >= 8]
mat.loc[ind_start:ind_end,j] = 1.
mat_bp.loc[valid_inds,j*2] = 1.
mat_bp[j*2+1] = (1 - mat_bp[j*2]) * mat[j]
print time.time()-timer
</code></pre> | python|performance|optimization|pandas|cython | 1 |
552 | 23,447,023 | How to create an array of array indexes? | <p>I'm trying to create an array of the indexes of the elements I have in another array, but appending to the array in a loop gives me too many numbers. </p>
<pre><code> xaxis = np.zeros(len(newdates))
for i in newdates:
xaxis = np.append(xaxis, i)
</code></pre>
<p>Instead of [1,2,3,4,.....] like I want, it's giving me an array of [1,1,2,1,2,3,1,2,3,4,.....].</p>
<p>This seems like an easy question, but it's hanging me up for some reason. </p> | <p>You can avoid the loop entirely with something like (assuming len(newdates) is 3):</p>
<pre><code>>>> np.array(range(1, len(newdates)+1))
array([1, 2, 3])
</code></pre> | python|arrays|numpy|indexing | 1 |
553 | 35,371,499 | Vectorize repetitive math function in Python | <p>I have a mathematical function of this form <code>$f(x)=\sum_{j=0}^N x^j * \sin(j*x)$</code> that I would like to compute efficiently in <code>Python</code>. N is of order ~100. This function f is evaluated thousands of times for all entries x of a huge matrix, and therefore I would like to improve the performance (profiler indicates that calculation of f takes up most of the time). In order to avoid the loop in the definition of the function f I wrote:</p>
<pre><code>def f(x):
J=np.arange(0,N+1)
    return sum(x**J*np.sin(J*x))
</code></pre>
<p>The issue is that if I want to evaluate this function for all entries of a matrix, I would need to use <code>numpy.vectorize</code> first, but as far as I know this is not necessarily faster than a for loop. </p>
<p>Is there an efficient way to perform a calculation of this type?</p> | <p>Welcome to Stack Overflow! ^^</p>
<p>Well, calculating <code>something ** 100</code> is some serious thing. But notice how, when you declare your array <code>J</code>, you are forcing your function to calculate <code>x, x^2, x^3, x^4, ...</code> (and so on) independently.</p>
<p>Let us take for example this function (which is what you are using):</p>
<pre><code>def powervector(x, n):
return x ** np.arange(0, n)
</code></pre>
<p>And now this other function, which does not even use NumPy:</p>
<pre><code>def power(x, n):
result = [1., x]
aux = x
for i in range(2, n):
aux *= x
result.append(aux)
return result
</code></pre>
<p>Now, let us verify that they both calculate the same thing:</p>
<pre><code>In []: sum(powervector(1.1, 10))
Out[]: 15.937424601000005
In []: sum(power(1.1, 10))
Out[]: 15.937424601000009
</code></pre>
<p>Cool, now let us compare the performance of both (in iPython):</p>
<pre><code>In [36]: %timeit sum(powervector(1.1, 10))
The slowest run took 20.42 times longer than the fastest. This could mean that an intermediate result is being cached
100000 loops, best of 3: 3.52 µs per loop
In [37]: %timeit sum(power(1.1, 10))
The slowest run took 5.28 times longer than the fastest. This could mean that an intermediate result is being cached
1000000 loops, best of 3: 1.13 µs per loop
</code></pre>
<p>It is faster, as you are not calculating all the powers of <code>x</code>, because you know that <code>x ^ N == (x ^ N - 1) * x</code> and you take advantage of it.</p>
<p>You could use this to see if your performance improves. Of course you can change <code>power()</code> to use NumPy vectors as output. You can also have a look at <a href="http://numba.pydata.org/" rel="noreferrer">Numba</a>, which is easy to try and may improve performance a bit as well.</p>
<p>As you see, this is only a hint on how to improve some part of your problem. I bet there are a couple of other ways to further improve your code! :-)</p>
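<p>For the original matrix use case, broadcasting also sidesteps <code>numpy.vectorize</code> entirely. A sketch (note that memory use grows with N, since it materialises an extra axis of length N+1):</p>

```python
import numpy as np

def f_all(X, N):
    """Evaluate f(x) = sum_{j=0}^{N} x**j * sin(j*x) for every entry of X at once."""
    j = np.arange(N + 1)                              # shape (N+1,)
    Xe = np.asarray(X, dtype=float)[..., np.newaxis]  # broadcast j along a new last axis
    return (Xe ** j * np.sin(j * Xe)).sum(axis=-1)
```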
<h2>Edit</h2>
<p>It seems that Numba might not be a bad idea... Simply adding <code>@numba.jit</code> decorator:</p>
<pre><code>@numba.jit
def powernumba(x, n):
result = [1., x]
aux = x
for i in range(2, n):
aux *= x
result.append(aux)
return result
</code></pre>
<p>Then:</p>
<pre><code>In [52]: %timeit sum(power(1.1, 100))
100000 loops, best of 3: 7.67 µs per loop
In [51]: %timeit sum(powernumba(1.1, 100))
The slowest run took 5.64 times longer than the fastest. This could mean that an intermediate result is being cached
100000 loops, best of 3: 2.64 µs per loop
</code></pre>
<p>It seems Numba can do some magic there. ;-)</p> | python|numpy|vectorization|mathematical-expressions | 5 |
554 | 35,697,404 | Why it's a convention to import pandas as pd? | <p>I see that in the pandas documentation they recommend importing pandas as:</p>
<pre><code>import pandas as pd
</code></pre>
<p>I can see some sense in doing that for when you are using pandas in an interactive context (as with an ipython/jupyter notebook), but I've seen it in production code and in widespread libraries (like Bokeh: <a href="https://github.com/bokeh/bokeh/search?p=2&q=pd&type=Code&utf8=%E2%9C%93" rel="noreferrer">https://github.com/bokeh/bokeh/search?p=2&q=pd&type=Code&utf8=%E2%9C%93</a>). Is there a reason for that apart from convention? </p> | <p>Because there are built-in functions in Python whose names overlap with pandas methods, like <code>map(), all(), any(), filter(), max(), min()</code> and many others. To avoid confusion over whether a given name refers to pandas or a built-in, it is better to import pandas as <code>import pandas as pd</code> and call the pandas methods using the <code>pd</code> prefix.</p>
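<p>The overlap is concrete: pandas ships a top-level <code>eval</code>, for example, which a star import would silently shadow. The alias keeps both callable (a sketch):</p>

```python
import pandas as pd

# pd.eval is pandas' expression evaluator; eval stays the untouched Python builtin
assert pd.eval("2 + 3") == 5
assert eval("2 + 3") == 5
```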
<p>There might also be other libraries with the same method names, so to avoid overriding them we use the prefix.</p> | python|pandas | 8 |
555 | 50,914,362 | Pandas: concatenate strings in column based on a flag in another column until flag changes | <p>I'm trying to concatenate strings in a column based on the values in another column. While this inherently isn't difficult, here the order of the flags matters, so I can't think of a pythonic method to accomplish this task (currently trying multiple counters/loops).</p>
<p>Example table:</p>
<pre><code>text flag
a 0
b 0
c 1
d 0
e 1
f 1
g 1
</code></pre>
<p>Example output:</p>
<pre><code>text flag
ab 0
c 1
d 0
efg 1
</code></pre>
<p>I want to <code>''.join</code> every string for consecutive flags until the next flag is hit. The only flags are 1 and 0. Any ideas?</p>
<p>Here's a quick way to generate the example data so you don't have to do it yourself:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'text':['a','b','c','d','e','f','g'], 'flag':[0,0,1,0,1,1,1]})
</code></pre> | <p>I'd do it this way:</p>
<pre><code>In [6]: (df.groupby(df.flag.diff().ne(0).cumsum(), as_index=False)
.agg({'text':'sum','flag':'first'}))
Out[6]:
text flag
0 ab 0
1 c 1
2 d 0
3 efg 1
</code></pre> | python|pandas | 3 |
556 | 50,844,505 | Using Pandas Getdummies or isin to create Bool Features from feature that contains lists | <p>I have a pandas dataframe with one column containing a list of unique strings for each instance:</p>
<pre><code>obj_movies['unique_genres'].head()
0 [Action, Fantasy, Adventure, Science Fiction]
1 [Action, Fantasy, Adventure]
2 [Action, Adventure, Crime]
3 [Action, Drama, Thriller, Crime]
4 [Action, Science Fiction, Adventure]
Name: unique_genres, dtype: object
</code></pre>
<p>I would like to use pandas get_dummies() to create boolean features (to add on to the same dataframe) based on values in the list. For example, feature 'Action_Movie' would be True (or have value 1) for all first five instances.</p>
<p>To complete this task, I created a set of unique values from all the lists contained in the feature. With a for loop, for each movie tag feature (i.e. unique value in the set) I then used a boolean conversion method I created separately to create a list of 1's or a 0's based on the method outcomes. Finally, I simply appended as new pandas series.</p>
<p>However, I am thinking there must be a faster way! What about the pandas df.isin() method, for example? I also looked into that, but it doesn't seem to work when you pass it a series of lists.</p>
<p>What would be the best way to go about doing this? Can anyone recommend a good advanced pandas data manipulation tutorial online?</p> | <p>So if your column is composed of lists, you can indeed use <code>get_dummies</code> on your column with a bit of transformation (<code>apply(pd.Series)</code>, <code>stack</code> and then <code>groupby</code>):</p>
<pre><code>df_dummies = pd.get_dummies(obj_movies['unique_genres']
.apply(pd.Series).stack()).groupby(level=0).sum()
</code></pre>
<p>Then, to add the columns to your previous dataframe, use join:</p>
<pre><code>obj_movies = obj_movies.join(df_dummies)
</code></pre>
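<p>An equivalent shortcut, if the lists hold plain strings, is to join each list into one delimited string and use <code>Series.str.get_dummies</code> (a sketch on made-up rows):</p>

```python
import pandas as pd

obj_movies = pd.DataFrame({'unique_genres': [['Action', 'Fantasy'],
                                             ['Action', 'Crime']]})
dummies = obj_movies['unique_genres'].str.join('|').str.get_dummies()
obj_movies = obj_movies.join(dummies)
```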
<p>You should get your expected output.</p> | python|pandas | 1 |
557 | 33,160,949 | cosine distance between two matrices | <p>Take two matrices, arr1 and arr2, of size mxn and pxn respectively. I'm trying to find the cosine distance of their respective rows as an mxp matrix. Essentially I want to take the pairwise dot product of the rows, then divide by the outer product of the norms of each row.</p>
<pre><code>import numpy as np
def cosine_distance(arr1, arr2):
numerator = np.dot(arr1, arr2.T)
denominator = np.outer(
np.sqrt(np.square(arr1).sum(1)),
np.sqrt(np.square(arr2).sum(1)))
return np.nan_to_num(np.divide(numerator, denominator))
</code></pre>
<p>I think this should be returning an mxp matrix with entries in [-1.0, 1.0], but for some reason I'm getting values out of that interval. I'm thinking that one of these numpy functions is doing something other than what I think it does.</p> | <p>It sounds like you need to divide by the outer product of the L2 norms of your arrays of vectors:</p>
<pre><code>arr1.dot(arr2.T) / np.outer(np.linalg.norm(arr1, axis=1),
np.linalg.norm(arr2, axis=1))
</code></pre>
<p>e.g.</p>
<pre><code>In [4]: arr1 = np.array([[1., -2., 3.],
[0., 0.5, 2.],
[-1., 1.5, 1.5],
[2., -0.5, 0.]])
In [5]: arr2 = np.array([[0., -3., 1.],
[1.5, 0.25, 1.]])
In [6]: arr1.dot(arr2.T)/np.outer(np.linalg.norm(arr1, axis=1),
np.linalg.norm(arr2, axis=1))
Out[6]:
array([[ 0.76063883, 0.58737848],
[ 0.0766965 , 0.56635211],
[-0.40451992, 0.08785611],
[ 0.2300895 , 0.7662411 ]])
</code></pre> | python|numpy|cosine-similarity | 4 |
558 | 9,104,968 | Slow python image processing with PIL and numpy | <p>I'm trying to implement some image processing (finding regions of similar colour) in Python with PIL and Numpy. Can't figure out how to speed up this code. Could you help?</p>
<pre><code>def findRegions(self, data):
#data is numpy.array
ret = [[False for _ in range(self.width)] for _ in range(self.heigth)]
for i in range(self.heigth):
for j in range(self.width):
k = 0
acc = 0
for x,y in [(-1,0),(0,-1),(0,1),(1,0)]:
if (self.heigth>i+x>=0 and self.width>j+y>=0):
k = k+1
acc += math.sqrt(sum((data[i][j][c]-data[i+x][j+y][c])**2 for c in range(3)))
if (acc/k<self.threshold):
ret[i][j]= True
return ret
</code></pre>
<p>PIL and other image libraries have got many filtering and processing functions which are really quick. But what is the best way to implement own image processing functions?</p> | <p>Rather than looping over each row and column you can shift the array left, right, up, and down for the appropriate number of elements. On each shift you accumulate your values in a base array. After the shifting and accumulating you compute your average and apply your threshold to return a mask. See this <a href="https://stackoverflow.com/a/4937418/673590">post</a> which has a general discussion on the topic. The idea is take advantage of numpy's broadcasting, which will apply a function or operator to all elements of an array in C rather than Python.</p>
<p>I've adapted the code from the linked post to fit what I believe you are trying to accomplish. In any case the general pattern should speed things up. You have to work out what to do with the edges in the return mask. Here I've simply set the return mask to False, but you could also eliminate the edges by expanding the input data by one pixel in each direction and filling with the nearest pixel, zeros, gray, etc.</p>
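<p>The shift-and-accumulate pattern is easiest to see in isolation on a tiny grayscale array (a sketch; note the <code>or None</code> turns a stop of 0 into "slice to the end"):</p>

```python
import numpy as np

img = np.arange(25, dtype=float).reshape(5, 5)
acc = np.zeros((3, 3))                    # interior only: one-pixel border trimmed
for dx, dy in [(-1, 0), (0, -1), (0, 1), (1, 0)]:
    xstop = -1 + dx or None
    ystop = -1 + dy or None
    acc += np.abs(img[1:-1, 1:-1] - img[1 + dx:xstop, 1 + dy:ystop])
# every interior pixel differs by 5 from its vertical neighbours and by 1
# from its horizontal neighbours, so each accumulated value is 5+5+1+1 = 12
```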
<pre><code>def findRegions(self,data):
#define the shifts for the kernel window
shifts = [(-1,0),(0,-1),(0,1),(1,0)]
#make the base array of zeros
# array size by 2 in both dimensions
acc = numpy.zeros(data.shape[:2])
#compute the square root of the sum of squared color
# differences between a pixel and it's
# four cardinal neighbors
for dx,dy in shifts:
xstop = -1+dx or None
ystop = -1+dy or None
#per @Bago's comment, use the sum method to add up the color dimension
# instead of the list comprehension
acc += ((data[1:-1,1:-1] - data[1+dx:xstop, 1+dy:ystop])**2).sum(-1)**.5
#compute the average
acc /= (len(shifts) + 1)
#build a mask array the same size as the original
ret = numpy.zeros(data.shape[:2],dtype=numpy.bool)
#apply the threshold
# note that the edges will be False
    ret[1:-1,1:-1] = acc < self.threshold
return ret
</code></pre> | python|image-processing|numpy|python-imaging-library | 4 |
559 | 6,267,008 | Modifying axes on matplotlib colorbar plot of 2D array | <p>I have a 2D numpy array that I want to plot with a colorbar. I am having trouble changing the axes so that they display my dataset. The vertical axis goes 'down' from 0 to 100, whereas I want it to go 'up' from 0.0 to 0.1. So I need to do two things:</p>
<ul>
<li>Flip the array using np.flipud() and then 'flip' the axis as well</li>
<li>Change the labels to go from 0.0 to 0.1, instead of 0 to 100</li>
</ul>
<p>Here is an example of what my colorbar plot currently looks like:
<img src="https://i.stack.imgur.com/3YQvl.png" alt="Example of Colorbar plot"></p>
<p>And here is the code:</p>
<pre><code>data = np.load('scorr.npy')
(x,y) = np.unravel_index(data.argmax(), data.shape)
max=data[x][y]
fig = plt.figure()
ax = fig.add_subplot(111)
cax = ax.imshow(data, interpolation='nearest')
cbar = fig.colorbar(cax, ticks=[-max, 0, max])
cbar.ax.set_yticklabels([str(-max), '0', str(max)])
plt.show()
</code></pre>
<p>Does anybody have any suggestions? Thanks in advance!</p> | <p>You want to look at the imshow options "origin" and "extent", I think. </p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
x,y = np.mgrid[-2:2:0.1, -2:2:0.1]
data = np.sin(x)*(y+1.05**(x*np.floor(y))) + 1/(abs(x-y)+0.01)*0.03
fig = plt.figure()
ax = fig.add_subplot(111)
ticks_at = [-abs(data).max(), 0, abs(data).max()]
cax = ax.imshow(data, interpolation='nearest',
origin='lower', extent=[0.0, 0.1, 0.0, 0.1],
vmin=ticks_at[0], vmax=ticks_at[-1])
cbar = fig.colorbar(cax,ticks=ticks_at,format='%1.2g')
fig.savefig('out.png')
</code></pre>
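<p>The two options map directly onto the asker's two bullet points: <code>origin='lower'</code> removes the need for <code>np.flipud()</code>, and <code>extent</code> relabels the axes to run from 0.0 to 0.1. A minimal headless sketch:</p>

```python
import matplotlib
matplotlib.use('Agg')                     # render off-screen, no display needed
import matplotlib.pyplot as plt
import numpy as np

data = np.random.rand(100, 100)
fig, ax = plt.subplots()
im = ax.imshow(data, origin='lower', extent=[0.0, 0.1, 0.0, 0.1])
```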
<p><img src="https://i.stack.imgur.com/w6WAa.png" alt="extent and origin"></p> | python|numpy|matplotlib|colorbar | 9 |
560 | 66,377,925 | Python: Running Code using Jupyter Notebook (Online) | <p>I am new to the world of Python. I am using a computer with very little space left, so I decided to try to use the online version of Python without explicitly installing Anaconda or Python.</p>
<p>I used this link over here: <a href="https://notebooks.gesis.org/binder/jupyter/user/ipython-ipython-in-depth-eer5tgdf/notebooks/binder/Index.ipynb#" rel="nofollow noreferrer">https://notebooks.gesis.org/binder/jupyter/user/ipython-ipython-in-depth-eer5tgdf/notebooks/binder/Index.ipynb#</a> , and then I opened a new file. I am trying to re-run the code from this github repository : <a href="https://github.com/brohrer/byo_decision_tree/blob/main/decision_tree.py" rel="nofollow noreferrer">https://github.com/brohrer/byo_decision_tree/blob/main/decision_tree.py</a></p>
<p>I tried running the following code and got this error:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from tree_node import TreeNode
import numpy as np
import matplotlib.pyplot as plt
from tree_node import TreeNode
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-1-984dee29eb66> in <module>
2 import matplotlib.pyplot as plt
3
----> 4 from tree_node import TreeNode
ModuleNotFoundError: No module named 'tree_node'
</code></pre>
<p>This line is preventing me from running the rest of the code. Can someone please show me what am I doing wrong?</p>
<ol>
<li><p>Is it simply not possible to run python code online without having downloaded anaconda?</p>
</li>
<li><p>Or am I approaching this problem the wrong way? Perhaps the version of python I am using is incorrect? Or there are some dependencies required that I have not yet installed.</p>
</li>
</ol>
<p>Can someone please show me how to resolve this problem?</p>
<p>Thanks</p>
<p><a href="https://i.stack.imgur.com/FuChk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FuChk.png" alt="enter image description here" /></a></p> | <p>To simplify development, different functionalities are allocated to different scripts/modules.</p>
<p>You are simply taking the main script (<code>decision_tree.py</code>) and trying to run it, but it has imports from other modules. For example, in the directory that holds <code>decision_tree.py</code> there is also <code>tree_node.py</code>, which defines the <code>TreeNode</code> class that <code>decision_tree.py</code> imports.</p>
<p>So, one option is to copy this <code>TreeNode</code> class into <code>decision_tree.py</code> and put it above the <code>DecisionTree</code> class.</p> | python|numpy|matplotlib|module|jupyter-notebook | 1 |
561 | 66,411,797 | Is there an alternative, more efficient way to unstack columns from a multiindex of a pandas dataframe? | <p>I have an object that I got from performing a groupby(["A", "B"]) combined with the .nlargest(3) function in pandas.</p>
<p>i.e:</p>
<pre><code>df.groupby(["A", "B"])["Column"].nlargest(3).reset_index().unstack()
</code></pre>
<p>Now I have 3 values per "A"/"B" pair.
I did an unstack and it works, but I hit the memory limit and it sometimes crashes.</p>
<p>I somewhat remember finding a (built-in) solution to this very problem long ago, but couldn't find it again. Apologies if this is a duplicate and thanks in advance!</p> | <p>As far as I understand <code>pivot_table</code> should help after some initial prep</p>
<p>create the data:</p>
<pre><code>import pandas as pd
import numpy as np
np.random.seed(2021)
df = pd.DataFrame({'A':np.random.randint(1,3,15), 'B':np.random.randint(1,3,15), 'C':np.random.normal(0,1,15)})
df
</code></pre>
<p>looks like this</p>
<pre><code> A B C
0 1 1 2.044890
1 2 1 1.075268
2 2 1 0.079020
3 1 1 0.493282
4 2 1 -0.791367
5 1 2 -2.130595
6 1 2 0.317206
7 1 2 -1.364617
8 2 2 0.358428
9 1 1 -1.305624
10 2 2 2.020718
11 2 1 -2.686804
12 2 2 0.557872
13 2 1 0.776176
14 1 1 0.202650
</code></pre>
<p>then we choose the 3 largest, <code>groupby</code> with <code>cumcount</code> to assign the rank, and pivot on the rank:</p>
<pre><code>df2 = df.groupby(["A", "B"])["C"].nlargest(3).reset_index()
df2['rank'] = df2.groupby(["A", "B"]).cumcount()
pd.pivot_table(df2, values = 'C', index = ['A','B'], columns = 'rank')
</code></pre>
<p>this produces</p>
<pre><code>
rank 0 1 2
A B
1 1 2.044890 0.493282 0.202650
2 0.317206 -1.364617 -2.130595
2 1 1.075268 0.776176 0.079020
2 2.020718 0.557872 0.358428
</code></pre>
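<p>If <code>pivot_table</code> itself is part of the memory problem, the same table can be produced without any aggregation step by moving the rank level into the columns with <code>unstack</code> (a sketch under the same setup; whether it is actually lighter on your data is an assumption to check):</p>

```python
import numpy as np
import pandas as pd

np.random.seed(2021)
df = pd.DataFrame({'A': np.random.randint(1, 3, 15),
                   'B': np.random.randint(1, 3, 15),
                   'C': np.random.normal(0, 1, 15)})

# same prep as above: 3 largest per group, plus a rank counter
df2 = df.groupby(["A", "B"])["C"].nlargest(3).reset_index()
df2['rank'] = df2.groupby(["A", "B"]).cumcount()

# pure reshape: the rank level becomes the columns, no aggregation involved
out = df2.set_index(['A', 'B', 'rank'])['C'].unstack('rank')
print(out)
```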
<p>Please let me know if this is what you are after and if it works memory-wise</p> | pandas|pandas-groupby | 3 |
562 | 16,339,704 | Converting a numpy string array to integers in base-16 | <p>I am looking for a way to convert an array of strings in numpy to the integers they represent in hexadecimal. So in other words, the array version of:</p>
<pre><code>int("f040", 16)
</code></pre>
<p>I can convert a string array to integers base-10 by calling arr.astype(numpy.int32), but I can't see any obvious way to convert them base-16. Does anyone know of a way to do this?</p> | <pre><code>ar = ['f040', 'deadbeaf']
int_array = [int(a, 16) for a in ar]
print int_array
</code></pre>
<p>output:</p>
<p>[61504, 3735928495L]</p> | python|numpy | 3 |
563 | 57,629,697 | How to Transform sklearn tfidf vector pandas output to a meaningful format | <p>I have used sklearn to obtain tfidf scores for my corpus but the output is not in the format I wanted. </p>
<p>Code:</p>
<pre><code>vect = TfidfVectorizer(ngram_range=(1,3))
tfidf_matrix = vect.fit_transform(df_doc_wholetext['csv_text'])
df = pd.DataFrame(tfidf_matrix.toarray(),columns=vect.get_feature_names())
df['filename'] = df.index
</code></pre>
<p><strong>What I have:</strong> </p>
<p><a href="https://i.stack.imgur.com/H6gsF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/H6gsF.png" alt="enter image description here"></a></p>
<p>word1, word2, word3 could be any words in the corpus; I use word1, word2, word3 just as examples.</p>
<p><strong>What I need:</strong></p>
<p><a href="https://i.stack.imgur.com/weJkk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/weJkk.png" alt="enter image description here"></a></p>
<p>I tried transforming it but it transforms all the columns to rows. Is there a way to achieve this?</p> | <pre><code>df1 = df.filter(like='word').stack().reset_index()
df1.columns = ['filename','word_name','score']
</code></pre>
<p>Output:</p>
<pre><code> filename word_name score
0 0 word1 0.01
1 0 word2 0.04
2 0 word3 0.05
3 1 word1 0.02
4 1 word2 0.99
5 1 word3 0.07
</code></pre>
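<p>For reference, the same long format can also be produced in one step with <code>melt</code> (a sketch on a small stand-in frame, since the real tf-idf matrix isn't shown; the row order differs from <code>stack</code> but the content is the same):</p>

```python
import pandas as pd

# stand-in for the tf-idf output: one column per term plus a filename column
df = pd.DataFrame({'word1': [0.01, 0.02],
                   'word2': [0.04, 0.99],
                   'word3': [0.05, 0.07]})
df['filename'] = df.index

df1 = df.melt(id_vars='filename', var_name='word_name', value_name='score')
print(df1)
```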
<p><strong>Update</strong> for general column headers:</p>
<pre><code>df1 = df.iloc[:,1:].stack().reset_index()
</code></pre> | python|pandas|scikit-learn|tf-idf|tfidfvectorizer | 2 |
564 | 57,374,748 | Pandas: How to convert string to DataFrame | <p>Hi I have the following data (string) and am struggling to convert it into a pandas dataframe.</p>
<p>Any help would be greatly appreciated!</p>
<p>pd.DataFrame with "," as the delim doesn't work given the commas elsewhere in the data.</p>
<pre><code>[["Time","Forecast"],["2019-07-08T23:00:00Z",20],["2019-07-08T23:30:00Z",26],["2019-07-09T00:00:00Z",24],["2019-07-09T00:30:00Z",26]]
</code></pre> | <p>IIUC, you can use <code>ast.literal_eval</code>:</p>
<pre><code>import ast
import pandas as pd

s='[["Time","Forecast"],["2019-07-08T23:00:00Z",20],["2019-07-08T23:30:00Z",26],["2019-07-09T00:00:00Z",24],["2019-07-09T00:30:00Z",26]]'
l=ast.literal_eval(s) #convert to actual list of list
df=pd.DataFrame(l[1:],columns=l[0])
</code></pre>
<hr>
<pre><code> Time Forecast
0 2019-07-08T23:00:00Z 20
1 2019-07-08T23:30:00Z 26
2 2019-07-09T00:00:00Z 24
3 2019-07-09T00:30:00Z 26
</code></pre> | python|string|pandas|dataframe | 3 |
565 | 57,677,197 | How to extract single word (not larger word containing it) in pandas dataframe? | <p>I would like to extract the word like this:</p>
<pre><code>a dog ==> dog
some dogs ==> dog
dogmatic ==> None
</code></pre>
<p>There is a similar link:
<a href="https://stackoverflow.com/questions/46921465/extract-substring-from-text-in-a-pandas-dataframe-as-new-column">Extract substring from text in a pandas DataFrame as new column</a></p>
<p>But it does not fulfill my requirements.</p>
<p>From this dataframe:</p>
<pre><code>df = pd.DataFrame({'comment': ['A likes cat', 'B likes Cats',
'C likes cats.', 'D likes cat!',
'E is educated',
'F is catholic',
'G likes cat, he has three of them.',
'H likes cat; he has four of them.',
'I adore !!cats!!',
'x is dogmatic',
'x is eating hotdogs.',
'x likes dogs, he has three of them.',
'x likes dogs; he has four of them.',
'x adores **dogs**'
]})
</code></pre>
<p>How do I get the correct output?</p>
<pre><code> comment label EXTRACT
0 A likes cat cat cat
1 B likes Cats cat cat
2 C likes cats. cat cat
3 D likes cat! cat cat
4 E is educated None cat
5 F is catholic None cat
6 G likes cat, he has three of them. cat cat
7 H likes cat; he has four of them. cat cat
8 I adore !!cats!! cat cat
9 x is dogmatic None dog
10 x is eating hotdogs. None dog
11 x likes dogs, he has three of them. dog dog
12 x likes dogs; he has four of them. dog dog
13 x adores **dogs** dog dog
</code></pre>
<h1>NOTE: The EXTRACT column gives the wrong answer; I need output like the label column.</h1>
<p><a href="https://i.stack.imgur.com/rmwex.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rmwex.png" alt="enter image description here"></a></p> | <p>We can use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.extract.html" rel="nofollow noreferrer"><code>str.extract</code></a> with a <code>negative lookahead</code>, <code>?!</code>: we check that the match is not followed by two or more letters, which rules out words like <code>dogmatic</code>:</p>
<p>After that we use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html" rel="nofollow noreferrer"><code>np.where</code></a> with a <code>positive lookbehind</code>. The pseudo-logic is as follows:</p>
<blockquote>
<p>All the rows which have "dog" or "cat" with an alphabetic character in front of them will be replaced by NaN</p>
</blockquote>
<pre><code>import numpy as np

words = ['cat', 'dog']
df['label'] = df['comment'].str.extract('(?i)'+'('+'|'.join(words)+')(?![A-Za-z]{2,})')
df['label'] = np.where(df['comment'].str.contains('(?<=\wdog)|(?<=\wcat)'), np.NaN, df['label'])
</code></pre>
<p><strong>Output</strong></p>
<pre><code> comment label
0 A likes cat cat
1 B likes Cats Cat
2 C likes cats. cat
3 D likes cat! cat
4 E is educated NaN
5 F is catholic NaN
6 G likes cat, he has three of them. cat
7 H likes cat; he has four of them. cat
8 I adore !!cats!! cat
9 x is dogmatic NaN
10 x is eating hotdogs. NaN
11 x likes dogs, he has three of them. dog
12 x likes dogs; he has four of them. dog
13 x adores **dogs** dog
</code></pre> | python|regex|pandas | 4 |
566 | 57,461,352 | Python n-dimensional array combinations | <p>Suppose an arbitrary number of arrays of arbitrary length. I would like to construct the n-dimensional array of all the combinations from the values in the arrays. Or even better, a list of all the combinations.</p>
<p>However, I would also like the previous "diagonal" element along each combination, except when such an element does not exist, in which case the values which do not exist are set to say -inf.</p>
<p>Take for ex. the following simple 2-D case:</p>
<pre><code>v1=[-2,2]
v2=[-3,3]
</code></pre>
<p>From which I would get all the combinations</p>
<pre><code>[[-2,-3],
[-2,3],
[2,-3],
[2,3]]
</code></pre>
<p>Or in 2D array / matrix form</p>
<pre><code> -3 3
-2 -2,-3 -2,3
2 2,-3 2,3
</code></pre>
<p>Now I would also like a new column with the previous "diagonal" elements (in this case there is only 1 real such case) for each element. By previous "diagonal" element I mean the element at index i-1, j-1, k-1, ..., n-1. On the margins we take all the previous values that are possible.</p>
<pre><code> 1 2
-2,-3 -inf,-inf
-2, 3 -inf,-3
2,-3 -2,-inf
2, 3 -2,-3
</code></pre>
<p><strong>Edit:</strong> here is the code for the 2D case, which is not much use for the general n-case.</p>
<pre><code>import math
v1=[-3,-1,2,4]
v2=[-2,0,2]
tmp=[]
tmp2=[]
for i in range(0,len(v1)):
for j in range(0,len(v2)):
tmp.append([v1[i],v2[j]])
if i==0 and j==0:
tmp2.append([-math.inf,-math.inf])
elif i==0:
tmp2.append([-math.inf,v2[j-1]])
elif j==0:
tmp2.append([v1[i-1],-math.inf])
else:
tmp2.append([v1[i-1],v2[j-1]])
</code></pre>
<p>And so</p>
<pre><code>tmp
[[-3, -2],
[-3, 0],
[-3, 2],
[-1, -2],
[-1, 0],
[-1, 2],
[2, -2],
[2, 0],
[2, 2],
[4, -2],
[4, 0],
[4, 2]]
</code></pre>
<p>and</p>
<pre><code>tmp2
[[-inf, -inf],
[-inf, -2],
[-inf, 0],
[-3, -inf],
[-3, -2],
[-3, 0],
[-1, -inf],
[-1, -2],
[-1, 0],
[2, -inf],
[2, -2],
[2, 0]]
</code></pre> | <p>Take a look at <a href="https://docs.python.org/3.7/library/itertools.html#itertools.product" rel="nofollow noreferrer">itertools.product()</a>.</p>
<p>To get the "diagonals" you could take the product of the vectors' indices instead of the vectors themselves. That way you can access the values of each combination as well as the previous values of the combination.</p>
<p>Example:</p>
<pre><code>import itertools
import numpy as np
v1=[-2,2]
v2=[-3,3]
vectors = [v1, v2]
combs = list(itertools.product(*[range(len(v)) for v in vectors]))
print(combs)
</code></pre>
<p>[(0, 0), (0, 1), (1, 0), (1, 1)]</p>
<pre><code>print([[vectors[vi][ci] for vi, ci in enumerate(comb)] for comb in combs])
</code></pre>
<p>[[-2, -3], [-2, 3], [2, -3], [2, 3]]</p>
<pre><code>print([[(vectors[vi][ci-1] if ci > 0 else -np.inf) for vi, ci in enumerate(comb)] for comb in combs])
</code></pre>
<p>[[-inf, -inf], [-inf, -3], [-2, -inf], [-2, -3]]</p> | python|pandas|numpy|itertools|n-dimensional | 1 |
567 | 57,646,858 | Python3: fast decode bytes to signed integer, special encoding | <p>I have a connection from my PC to a sensor thru an ethernet connection (UDP protocol) that sends me data.
The data is a long array of bytes, like this:</p>
<pre><code>data
Out[260]: b'03000248023003e802a003a002f8044003c80478038802f002d8024002b00258030003a80300035002a803c0031002e802c8030802e001f8029002a003c8045002d803f003100378038002a002d803700308029003e00260032002e0027002c0028002a802e80338036804c803300398'
</code></pre>
<p>This data is actually coded, and I want to generate out of it a numpy.ndarray made of signed integers. One data sample per ndarray element.
Each array element once decoded represents an ADC data sample: 12bit + sign integer.
The coding is the following: the bytes must be grouped by 4. Each byte actually represents a hex digit. Then the 4 hex digits are put together, divided by 8, and we take the 2's complement.</p>
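<p>For instance, the first group of four characters, <code>'0300'</code>, works out like this:</p>

```python
sample = int('0300', 16)    # join the 4 hex digits -> 0x0300 = 768
sample >>= 3                # divide by 8 -> 96
if sample > 2047:           # only taken for negative samples
    sample -= 2 * 4096 - 1  # 2's complement adjustment
print(sample)               # -> 96
```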
<p>The code I use so far works fine and is below:</p>
<pre><code> j = 0
while j < nb_sample: # loop on each adc data sample
adc_sample_str = chr(udp_frame[4*j]) + chr(udp_frame[4*j+1]) + chr(udp_frame[4*j+2]) + chr(udp_frame[4*j+3]) # create the sample by concatenating 4 hexadecimals characters (2 bytes)
adc_sample_int = int(adc_sample_str, 16) # convert into hexa number
adc_sample_int = (adc_sample_int >> 3) # right shift by 3 (divide by 8)
if adc_sample_int > 2047: # check if it is a negative value (2048 = 0b 0000 1000 0000 0000)
adc_sample_int = (adc_sample_int - (2*4096) + 1) # 2's complement
result[j] = adc_sample_int
j+=1
</code></pre>
<p>The big problem is that this loop is super slow.</p>
<p>So I am looking for another, more clever way that would be much faster (~10x).
I've tried a lot of things: converting to a string with .decode('UTF-8'), or using numpy.frombuffer with many different dtypes.
But I could not find a coding (dtype) that can read my weird format.</p>
<p>Does anybody know in which direction I should look?
Maybe I should write a custom-made encoding scheme for the .decode? But I don't know how to express my coding scheme.
Or shall I rather convert to a string? But then ...?
All I tried so far makes me run in circles.
Any hint would help me...
Thanks</p>
<p>The result of the loop code is the following:</p>
<pre><code>result[0:260]
Out[268]:
array([ 96, 73, 70, 125, 84, 116, 95, 136, 121, 143, 113, 94, 91,
72, 86, 75, 96, 117, 96, 106, 85, 120, 98, 93, 89, 97,
92, 63, 82, 84, 121, 138, 91, 126, 98, 111, 112, 84, 91,
110, 97, 82, 124, 76, 100, 92, 78, 88, 80, 85, 93, 103,
109, 153, 102, 115, 89, 134, 105, 108, 84, 100, 76, 101, 81,
96, 98, 106, 98, 116, 109, 98, 93, 118, 111, 94, 95, 98,
91, 141, 76, 97, 110, 92, 104, 103, 89, 86, 101, 85, 114,
82, 83, 104, 72, 103, 118, 92, 133, 111, 104, 85, 101, 92,
108, 108, 108, 100, 81, 102, 99, 102, 125, 121, 68, 75, 104,
85, 90, 96, 127, 102, 112, 118, 106, 92, 78, 98, 98, 96,
105, 77, 79, 107, 100, 88, 89, 115, 86, 98, 106, 100, 105,
79, 121, 109, 115, 80, 113, 84, 131, 91, 114, 126, 93, 95,
119, 73, 100, 121, 102, 98, 100, 117, 111, 63, 99, 97, 108,
109, 95, 75, 102, 93, 127, 112, 91, 86, 79, 68, 104, 104,
84, 116, 85, 79, 120, 95, 91, 75, 135, 116, 115, 119, 102,
90, 131, 57, 102, 86, 104, 99, 106, 97, 95, 116, 116, 123,
99, 87, 61, 105, 81, 104, 91, 108, 114, 82, 122, 84, 108,
107, 93, 101, 95, 76, 84, 74, 104, 113, 110, 104, 123, 91,
99, 120, 92, 107, 120, 97, 119, 76, 87, 118, 73, 85, 113,
104, 123, 99, 94, 101, 97, 103, 65, 103])
</code></pre> | <p>you didn't provide a complete minimal reproducible question, so i improvised somewhat to make your provided code work.</p>
<pre class="lang-py prettyprint-override"><code>data = b"03000248023003e802a003a002f8044003c80478038802f002d8024002b00258030003a80300035002a803c0031002e802c8030802e001f8029002a003c8045002d803f003100378038002a002d803700308029003e00260032002e0027002c0028002a802e80338036804c803300398"
from numpy import array
import numpy as np
def func(data):
nb_sample = len(data) // 4
udp_frame = data
result = array(range(nb_sample))
j = 0
while j < nb_sample: # loop on each adc data sample
adc_sample_str = (
chr(udp_frame[4 * j])
+ chr(udp_frame[4 * j + 1])
+ chr(udp_frame[4 * j + 2])
+ chr(udp_frame[4 * j + 3])
) # create the sample by concatenating 4 hexadecimals characters (2 bytes)
adc_sample_int = int(adc_sample_str, 16) # convert into hexa number
adc_sample_int = adc_sample_int >> 3 # right shift by 3 (divide by 8)
if (
adc_sample_int > 2047
): # check if it is a negative value (2048 = 0b 0000 1000 0000 0000)
adc_sample_int = adc_sample_int - (2 * 4096) + 1 # 2's complement
result[j] = adc_sample_int
j += 1
return result
def func4(data):
for j in range(len(data) // 4):
adc_sample = int(data[4 * j : 4 * j + 4], 16) >> 3
if adc_sample > 2047:
            adc_sample -= 2 * 4096 - 1
yield adc_sample
def func5(data):
result = np.fromiter(func4(data), int)
return result
</code></pre>
<p>these are the results i got with the timeit module</p>
<pre class="lang-py prettyprint-override"><code>>>> import timeit
>>> from functools import partial
>>> def avg(seq): return sum(seq) / len(seq)
...
>>> avg(timeit.repeat(partial(func, data), number=10_000))
0.9369430501999887
>>> avg(timeit.repeat(partial(func5, data), number=10_000))
0.3753632108000602
</code></pre>
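<p>For a further speedup you could vectorise the whole decode in numpy (a sketch, not benchmarked here; it assumes the frame length is a multiple of 4 and lowercase hex digits, and the sign constant follows the original loop):</p>

```python
import numpy as np

def func_np(data):
    # one row of 4 ASCII codes per sample
    codes = np.frombuffer(data, dtype=np.uint8).astype(np.int32).reshape(-1, 4)
    # map '0'-'9' and 'a'-'f' to their hex values 0..15
    digits = np.where(codes >= ord('a'), codes - ord('a') + 10, codes - ord('0'))
    # assemble the 16-bit value and drop the 3 low bits (divide by 8)
    samples = (digits[:, 0] << 12 | digits[:, 1] << 8 |
               digits[:, 2] << 4 | digits[:, 3]) >> 3
    # same sign adjustment as in func
    return np.where(samples > 2047, samples - (2 * 4096 - 1), samples)

print(func_np(b'03000248023003e802a0'))  # first five samples: 96 73 70 125 84
```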
<h2>edit:</h2>
<p>So, I am somewhat new to numpy. In my original code there was a mistake:
to create a numpy array from a generator you should use <code>np.fromiter</code> instead of <code>np.array</code>.
I had a problem in my tests that resulted in me not spotting the invalid func5.</p>
<p>Fixing the code led to a drastic performance decrease:
<em>func5 is now only about twice as fast as func</em>, and I think that most of that remaining gain comes from a restructuring of the inner loop,
things like using data[j:j+4] instead of data[j] + data[j+1] + data[j+2] + data[j+3].</p>
<p>I was also mistaken about when to use generators:
generators are more about space performance than time performance.
It might still be a good idea for your application, but that is out of scope for this question.</p> | python|decode|numpy-ndarray | 0 |
568 | 24,166,112 | Pandas Dataframe: Adding the occurrence of values | <p>I have a dataframe that contains a list of integers that represent the occurrence of an event. I'm looking to add another column that numbers the events within each occurrence.</p>
<pre><code>d = {'Occurrence_col' : pd.Series([1., 1., 2., 2., 2.]),
'Values' : pd.Series([101, 102, 103, 104, 105])}
df = pd.DataFrame(d)
Occurrence_col Values
1 101
1 102
2 103
2 104
2 105
Occurrence_col Desired_Output Values
1 1 101
1 2 102
2 1 103
2 2 104
2 3 105
</code></pre>
<p>I know it's possible to do this through looping, but what is a more pandas-like solution?</p> | <p>You can use <code>groupby</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/groupby.html#enumerate-group-items" rel="nofollow"><code>cumcount</code></a> in pandas >= 0.13.0:</p>
<pre><code>>>> df["Desired_Output"] = df.groupby("Occurrence").cumcount() + 1
>>> df
Occurrence Values Desired_Output
0 1 101 1
1 1 102 2
2 2 103 1
3 2 104 2
4 2 105 3
</code></pre> | python|pandas|dataframe | 2 |
569 | 43,896,553 | Variable bidirectional_rnn/fw/lstm_cell/weights already exists, disallowed. Did you mean to set reuse=True in VarScope? Originally defined at: | <p>When I call the function tf.nn.bidirectional_dynamic_rnn, it raises the error:
Variable bidirectional_rnn/fw/lstm_cell/weights already exists, disallowed. Did you mean to set reuse=True in VarScope? Originally defined at:</p>
<p>My code:</p>
<pre><code>tf.reset_default_graph()
sess = tf.InteractiveSession()
PAD = 0
EOS = 1
sequence_size = 10 #vocal_size
input_embedding_size = 20 #length of sequence
encoder_hidden_units = 20
decoder_hidden_units = encoder_hidden_units * 2
encoder_inputs = tf.placeholder(shape=(None, None), dtype=tf.int32,
name='encoder_inputs')
#length of each sequence in the batch
encoder_inputs_length = tf.placeholder(shape=(None,), dtype=tf.int32,
name='encoder_inputs_length')
decoder_targets = tf.placeholder(shape=(None, None), dtype=tf.int32,
name='decoder_targets')
embeddings = tf.Variable(tf.random_uniform([sequence_size,
input_embedding_size], -1.0, 1), dtype=tf.float32 )
encoder_inputs_embedded = tf.nn.embedding_lookup(embeddings,
encoder_inputs)
encoder_cell = tf.contrib.rnn.LSTMCell(encoder_hidden_units)
( (encoder_fw_outputs,
encoder_bw_outputs),
(encoder_fw_final_state,
encoder_bw_final_state)) = (
tf.nn.bidirectional_dynamic_rnn(cell_fw=encoder_cell,
cell_bw=encoder_cell,
inputs=encoder_inputs_embedded,
sequence_length=encoder_inputs_length,
dtype=tf.float32, time_major=False)
)
</code></pre>
<p>I got this as error:</p>
<pre><code>ValueError Traceback (most recent call last)
<ipython-input-211-41a8d71d81df> in <module>()
     23                                        inputs=encoder_inputs_embedded,
     24                                        sequence_length=encoder_inputs_length,
---> 25                                        dtype=tf.float32, time_major=False)
     26     )
     27
/home/cesar/anaconda2/envs/tensorflow/lib/python2.7/site-
packages/tensorflow/python/ops/rnn.pyc in
bidirectional_dynamic_rnn(cell_fw, cell_bw, inputs, sequence_length,
initial_state_fw, initial_state_bw, dtype, parallel_iterations,
swap_memory, time_major, scope)
348 initial_state=initial_state_fw, dtype=dtype,
349 parallel_iterations=parallel_iterations,
swap_memory=swap_memory,
--> 350 time_major=time_major, scope=fw_scope)
351
352 # Backward direction
/home/cesar/anaconda2/envs/tensorflow/lib/python2.7/site-
packages/tensorflow/python/ops/rnn.pyc in dynamic_rnn(cell, inputs,
sequence_length, initial_state, dtype, parallel_iterations,
swap_memory, time_major, scope)
544 swap_memory=swap_memory,
545 sequence_length=sequence_length,
--> 546 dtype=dtype)
547
548 # Outputs of _dynamic_rnn_loop are always shaped [time, batch,
depth].
/home/cesar/anaconda2/envs/tensorflow/lib/python2.7/site-
packages/tensorflow/python/ops/rnn.pyc in _dynamic_rnn_loop(cell,
inputs, initial_state, parallel_iterations, swap_memory,
sequence_length, dtype)
711 loop_vars=(time, output_ta, state),
712 parallel_iterations=parallel_iterations,
--> 713 swap_memory=swap_memory)
714
715 # Unpack final output if not using output tuples.
/home/cesar/anaconda2/envs/tensorflow/lib/python2.7/site-
packages/tensorflow/python/ops/control_flow_ops.pyc in
while_loop(cond, body, loop_vars, shape_invariants,
parallel_iterations, back_prop, swap_memory, name)
2603 context = WhileContext(parallel_iterations, back_prop,
swap_memory, name)
2604 ops.add_to_collection(ops.GraphKeys.WHILE_CONTEXT, context)
-> 2605 result = context.BuildLoop(cond, body, loop_vars,
shape_invariants)
2606 return result
2607
/home/cesar/anaconda2/envs/tensorflow/lib/python2.7/site-
packages/tensorflow/python/ops/control_flow_ops.pyc in
BuildLoop(self, pred, body, loop_vars, shape_invariants)
2436 self.Enter()
2437 original_body_result, exit_vars = self._BuildLoop(
-> 2438 pred, body, original_loop_vars, loop_vars,
shape_invariants)
2439 finally:
2440 self.Exit()
/home/cesar/anaconda2/envs/tensorflow/lib/python2.7/site-
packages/tensorflow/python/ops/control_flow_ops.pyc in
_BuildLoop(self, pred, body, original_loop_vars, loop_vars,
shape_invariants)
2386 structure=original_loop_vars,
2387 flat_sequence=vars_for_body_with_tensor_arrays)
-> 2388 body_result = body(*packed_vars_for_body)
2389 if not nest.is_sequence(body_result):
2390 body_result = [body_result]
/home/cesar/anaconda2/envs/tensorflow/lib/python2.7/site-
packages/tensorflow/python/ops/rnn.pyc in _time_step(time,
output_ta_t, state)
694 call_cell=call_cell,
695 state_size=state_size,
--> 696 skip_conditionals=True)
697 else:
698 (output, new_state) = call_cell()
/home/cesar/anaconda2/envs/tensorflow/lib/python2.7/site-
packages/tensorflow/python/ops/rnn.pyc in _rnn_step(time,
sequence_length, min_sequence_length, max_sequence_length,
zero_output, state, call_cell, state_size, skip_conditionals)
175 # steps. This is faster when max_seq_len is equal to the
number of unrolls
176 # (which is typical for dynamic_rnn).
--> 177 new_output, new_state = call_cell()
178 nest.assert_same_structure(state, new_state)
179 new_state = nest.flatten(new_state)
/home/cesar/anaconda2/envs/tensorflow/lib/python2.7/site-
packages/tensorflow/python/ops/rnn.pyc in <lambda>()
682
683 input_t = nest.pack_sequence_as(structure=inputs,
flat_sequence=input_t)
--> 684 call_cell = lambda: cell(input_t, state)
685
686 if sequence_length is not None:
/home/cesar/anaconda2/envs/tensorflow/lib/python2.7/site-
packages/tensorflow/contrib/rnn/python/ops/core_rnn_cell_impl.pyc in
__call__(self, inputs, state, scope)
336 # i = input_gate, j = new_input, f = forget_gate, o =
output_gate
337 lstm_matrix = _linear([inputs, m_prev], 4 * self._num_units,
bias=True,
--> 338 scope=scope)
339 i, j, f, o = array_ops.split(
340 value=lstm_matrix, num_or_size_splits=4, axis=1)
/home/cesar/anaconda2/envs/tensorflow/lib/python2.7/site-
packages/tensorflow/contrib/rnn/python/ops/core_rnn_cell_impl.pyc in
_linear(args, output_size, bias, bias_start, scope)
745 with vs.variable_scope(scope) as outer_scope:
746 weights = vs.get_variable(
--> 747 "weights", [total_arg_size, output_size],
dtype=dtype)
748 if len(args) == 1:
749 res = math_ops.matmul(args[0], weights)
/home/cesar/anaconda2/envs/tensorflow/lib/python2.7/site-
packages/tensorflow/python/ops/variable_scope.pyc in
get_variable(name, shape, dtype, initializer, regularizer, trainable,
collections, caching_device, partitioner, validate_shape,
custom_getter)
986 collections=collections, caching_device=caching_device,
987 partitioner=partitioner, validate_shape=validate_shape,
--> 988 custom_getter=custom_getter)
989 get_variable_or_local_docstring = (
990 """%s
/home/cesar/anaconda2/envs/tensorflow/lib/python2.7/site-
packages/tensorflow/python/ops/variable_scope.pyc in
get_variable(self, var_store, name, shape, dtype, initializer,
regularizer, trainable, collections, caching_device, partitioner,
validate_shape, custom_getter)
888 collections=collections, caching_device=caching_device,
889 partitioner=partitioner, validate_shape=validate_shape,
--> 890 custom_getter=custom_getter)
891
892 def _get_partitioned_variable(self,
/home/cesar/anaconda2/envs/tensorflow/lib/python2.7/site-
packages/tensorflow/python/ops/variable_scope.pyc in
get_variable(self, name, shape, dtype, initializer, regularizer,
reuse, trainable, collections, caching_device, partitioner,
validate_shape, custom_getter)
346 reuse=reuse, trainable=trainable,
collections=collections,
347 caching_device=caching_device, partitioner=partitioner,
--> 348 validate_shape=validate_shape)
349
350 def _get_partitioned_variable(
/home/cesar/anaconda2/envs/tensorflow/lib/python2.7/site-
packages/tensorflow/python/ops/variable_scope.pyc in
_true_getter(name, shape, dtype, initializer, regularizer, reuse,
trainable, collections, caching_device, partitioner, validate_shape)
331 initializer=initializer, regularizer=regularizer,
reuse=reuse,
332 trainable=trainable, collections=collections,
--> 333 caching_device=caching_device,
validate_shape=validate_shape)
334
335 if custom_getter is not None:
/home/cesar/anaconda2/envs/tensorflow/lib/python2.7/site-
packages/tensorflow/python/ops/variable_scope.pyc in
_get_single_variable(self, name, shape, dtype, initializer,
regularizer, partition_info, reuse, trainable, collections,
caching_device, validate_shape)
637 " Did you mean to set reuse=True in
VarScope? "
638 "Originally defined at:\n\n%s" % (
--> 639 name,
"".join(traceback.format_list(tb))))
640 found_var = self._vars[name]
641 if not shape.is_compatible_with(found_var.get_shape()):
ValueError: Variable bidirectional_rnn/fw/lstm_cell/weights already exists, disallowed. Did you mean to set reuse=True in VarScope? Originally defined at:

  File "/home/cesar/anaconda2/envs/tensorflow/lib/python2.7/site-packages/tensorflow/contrib/rnn/python/ops/core_rnn_cell_impl.py", line 747, in _linear
    "weights", [total_arg_size, output_size], dtype=dtype)
  File "/home/cesar/anaconda2/envs/tensorflow/lib/python2.7/site-packages/tensorflow/contrib/rnn/python/ops/core_rnn_cell_impl.py", line 338, in __call__
    scope=scope)
  File "<ipython-input-23-f4f28501e56f>", line 24, in <module>
    time_major=True
</code></pre> | <p>I would try to define the encoder as follows:</p>
<pre><code>with tf.variable_scope('encoder_cell_fw'):
encoder_cell_fw = LSTMCell(encoder_hidden_units)
with tf.variable_scope('encoder_cell_bw'):
encoder_cell_bw = LSTMCell(encoder_hidden_units)
( (encoder_fw_outputs,
encoder_bw_outputs),
(encoder_fw_final_state,
encoder_bw_final_state)) = (
tf.nn.bidirectional_dynamic_rnn(cell_fw=encoder_cell_fw,
cell_bw=encoder_cell_bw,
inputs=encoder_inputs_embedded,
sequence_length=encoder_inputs_length,
dtype=tf.float32, time_major=False))
</code></pre>
<p>It worked for me. This happens because TF cannot use the same variable name twice, for the bw and fw cells. Another option would be defining the scope with reuse=True, but that didn't work.</p> | python|tensorflow | 0 |
570 | 73,137,848 | Fill larger dataframe from a smaller dataframe using identical column names | <p>I am trying to merge two dataframes, where one has duplicate column names and the other has unique names. I am trying to fill in the empty larger one with values from the smaller one based on the column names, but the merge and concat statements don't seem to work in this case.</p>
<pre><code>df = pd.DataFrame(data=([1,2,3],[6,7,8]),columns=['A','B','C'])
finaldf = pd.DataFrame(columns=['A','B','C','B'])
# try to copy all rows from df to finaldf based on the column names
finaldf=pd.merge(df,finaldf) #MergeError: Data columns not unique: Index(['A', 'B', 'B', 'C'], dtype='object')
finaldf=pd.concat([finaldf,df],axis=0) #ValueError: Plan shapes are not aligned
</code></pre> | <p>Would it help you to subset the smaller <code>df</code> with the column names of the empty wider <code>finaldf</code>?</p>
<pre class="lang-py prettyprint-override"><code>df[finaldf.columns]
A B C B
0 1 2 3 2
1 6 7 8 7
</code></pre> | python|pandas|merge|concatenation | 0 |
571 | 73,174,486 | How to get a correct group in my geopandas project? | <p>I'm making a project which finds the nearest linestring (simulating a river) to points or a single point; it looks like this:</p>
<pre><code>linestrings points
linestring1 point1
linestring2 point4
linestring1 point2
linestring2 point5
linestring1 point3
linestring2 point6
</code></pre>
<p>And it looks like this in IntelliJ IDEA:
<a href="https://i.stack.imgur.com/7CS20.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7CS20.png" alt="enter image description here" /></a></p>
<p>I want to group the dataframe by linestrings to insert points, like this:</p>
<pre><code>linestrings points
linestring1 point1
linestring1 point2
linestring1 point3
linestring2 point4
linestring2 point5
linestring2 point6
</code></pre>
<p>So that I can snap linestring1 to points 1, 2, 3 and so on.</p>
<p>Look at this picture: the same linestring should be snapped to 3 points:
<a href="https://i.stack.imgur.com/ntH9P.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ntH9P.png" alt="image of joined geodataframe" /></a></p>
<p>However when I launch my code,I can only see dtypes in form of DataFrame:</p>
<p><a href="https://i.stack.imgur.com/MAcCP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MAcCP.png" alt="enter image description here" /></a></p>
<p>And group look like this:</p>
<p><a href="https://i.stack.imgur.com/AbfQf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AbfQf.png" alt="enter image description here" /></a></p>
<p>It's obvious that my effort has failed, and in the pandas documentation, a correct group should look like this:
<a href="https://pandas.pydata.org/docs/user_guide/groupby.html#dataframe-column-selection-in-groupby" rel="nofollow noreferrer">https://pandas.pydata.org/docs/user_guide/groupby.html#dataframe-column-selection-in-groupby</a></p>
<p>So what's my problem, and how can I solve it?</p>
<p>This is a part of my code:</p>
<pre><code>list_point_line_tuple = []
for point in gpd_nodes_df.geometry:
list_point_line_tuple.append((point.to_wkt(), geopandas_min_dist(point, gpd_network_df, 200).geometry.to_wkt()))
graph_frame = gpd.GeoDataFrame(list_point_line_tuple, columns=['near_stations', 'nearest_line'])
grouped_graph_frame = graph_frame.groupby('nearest_line', as_index=False)
</code></pre>
<p>And all code is here: <a href="https://github.com/forestbat/stream-simulate-conda" rel="nofollow noreferrer">https://github.com/forestbat/stream-simulate-conda</a></p> | <p>you can't group on geometries at all. they're not hashable and can't be used as a pandas index. instead, use geopandas' spatial join tools, such as <a href="https://geopandas.org/en/stable/docs/reference/api/geopandas.sjoin_nearest.html" rel="nofollow noreferrer"><code>geopandas.sjoin_nearest</code></a>:</p>
<pre class="lang-py prettyprint-override"><code>merged = geopandas.sjoin_nearest(points_geodataframe, lines_geodataframe, how='left')
</code></pre>
<p>See the geopandas <a href="https://geopandas.org/en/stable/gallery/spatial_joins.html" rel="nofollow noreferrer">docs on spatial joins</a> for more info.</p> | python|pandas|group-by|geopandas | 1 |
572 | 10,422,504 | line, = plot(x,sin(x)) what does comma stand for? | <p>I'm trying to make an animated plot. Here is an example code:</p>
<pre><code>from pylab import *
import time
ion()
tstart = time.time() # for profiling
x = arange(0,2*pi,0.01) # x-array
line, = plot(x,sin(x))
for i in arange(1,200):
line.set_ydata(sin(x+i/10.0)) # update the data
draw() # redraw the canvas
print 'FPS:' , 200/(time.time()-tstart)
</code></pre>
<p>I don't understand the <code>line,</code>. Without the comma, the code doesn't work.</p> | <p>The comma is Python syntax that denotes a single-element tuple. E.g.,</p>
<pre><code>>>> tuple([1])
(1,)
</code></pre>
<p>In this case, it is used for sequence unpacking: <code>plot</code> returns a single-element list, which is unpacked into <code>line</code>:</p>
<pre><code>>>> x, y = [1, 2]
>>> x
1
>>> y
2
>>> z, = [3]
>>> z
3
</code></pre>
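<p>As a quick, self-contained sketch (using a hypothetical <code>fake_plot</code> stand-in, so no matplotlib is needed), this is exactly what happens with <code>plot</code>'s return value:</p>

```python
def fake_plot(x, y):
    # stands in for pylab.plot, which returns a LIST of line objects
    return ["<line object>"]

lines = fake_plot([1, 2], [3, 4])   # no comma: you keep the whole list
line, = fake_plot([1, 2], [3, 4])   # trailing comma: the single element is unpacked

print(type(lines).__name__)  # list
print(line)                  # <line object>
```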
<p>An alternative, perhaps more readable way of doing this is to use list-like syntax:</p>
<pre><code>>>> [z] = [4]
>>> z
4
</code></pre>
<p>though the <code>z, =</code> is more common in Python code.</p> | python|numpy|matplotlib | 17 |
573 | 3,910,301 | ndarray field names for both row and column? | <p>I'm a computer science teacher trying to create a little gradebook for myself using NumPy. But I think it would make my code easier to write if I could create an ndarray that uses field names for both the rows and columns. Here's what I've got so far:</p>
<pre><code>import numpy as np
num_stud = 23
num_assign = 2
grades = np.zeros(num_stud, dtype=[('assign 1','i2'), ('assign 2','i2')]) #etc
gv = grades.view(dtype='i2').reshape(num_stud,num_assign)
</code></pre>
<p>So, if my first student gets a 97 on 'assign 1', I can write either of:</p>
<pre><code>grades[0]['assign 1'] = 97
gv[0][0] = 97
</code></pre>
<p>Also, I can do the following:</p>
<pre><code>np.mean( grades['assign 1'] ) # class average for assignment 1
np.sum( gv[0] ) # total points for student 1
</code></pre>
<p>This all works. But what I <strong>can't</strong> figure out how to do is use a student id number to refer to a particular student (assume that two of my students have student ids as shown):</p>
<pre><code>grades['123456']['assign 2'] = 95
grades['314159']['assign 2'] = 83
</code></pre>
<p>...or maybe create a second view with the different field names?</p>
<pre><code>np.sum( gview2['314159'] ) # total points for the student with the given id
</code></pre>
<p>I know that I could create a dict mapping student ids to indices, but that seems fragile and crufty, and I'm hoping there's a better way than:</p>
<pre><code>id2i = { '123456': 0, '314159': 1 }
np.sum( gv[ id2i['314159'] ] )
</code></pre>
<p>I'm also willing to re-architect things if there's a cleaner design. I'm new to NumPy, and I haven't written much code yet, so starting over isn't out of the question if I'm Doing It Wrong.</p>
<p>I <em>am</em> going to be needing to sum all the assignment points for over a hundred students once a day, as well as run standard deviations and other stats. Plus, I'll be waiting on the results, so I'd like it to run in only a couple of seconds.</p>
<p>Thanks in advance for any suggestions.</p> | <p>From you description, you'd be better off using a different data structure than a standard numpy array. <code>ndarray</code>s aren't well suited to this... They're not spreadsheets. </p>
<p>However, there has been extensive recent work on a type of numpy array that <em>is</em> well suited to this use. <a href="http://projects.scipy.org/numpy/wiki/NdarrayWithNamedAxes" rel="noreferrer">Here's a description</a> of the recent work on DataArrays. It will be a while before this is fully incorporated into numpy, though...</p>
<p>One of the projects that the upcoming numpy DataArrays is (sort of) based on is <a href="http://github.com/kwgoodman/la" rel="noreferrer">"larry"</a> (Short for "Labeled Array"). This project sounds like exactly what you're wanting to do... (Have named rows and columns but otherwise act transparently as a numpy array.) It should be stable enough to use, (and from my limited playing around with it, it's pretty slick!) but keep in mind that it will probably be replaced by a built-in numpy class eventually.</p>
<p>Nonetheless, you can make good use of the fact that (simple) indexing of a numpy array returns a view into that array, and make a class that provides both interfaces...</p>
<p>Alternatively, @unutbu's suggestion above is another (more simple and direct) way of handling it, if you decide to roll your own.</p> | python|numpy | 11 |
574 | 70,430,166 | Is there any vectorized way of check string of column is substring in Pandas? | <p>I have a series of pandas, and I want filter it by checking if strings in columns are substring of another string.</p>
<p>For example,</p>
<pre class="lang-py prettyprint-override"><code>sentence = "hello world"
words = pd.Series(["hello", "wo", "d", "panda"])
</code></pre>
<p>And then,I want to get series (which is series of substring "hello world") as below.</p>
<pre><code>filtered_words = pd.Series(["hello", "wo", "d"])
</code></pre>
<p>Maybe there are some ways like "apply", or something, but it doesn't look like vectorized things.</p>
<p>How can I make it?</p> | <p>How about:</p>
<pre><code>out = words[words.apply(lambda x: x in sentence)]
</code></pre>
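<p>A minimal check of the boolean-mask idea, assuming the sample <code>sentence</code> and <code>words</code> from the question:</p>

```python
import pandas as pd

sentence = "hello world"
words = pd.Series(["hello", "wo", "d", "panda"])

# build a boolean mask, then filter the Series with it
mask = words.apply(lambda w: w in sentence)
filtered = words[mask]
print(list(filtered))  # ['hello', 'wo', 'd']
```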
<p>But list comprehension is still pretty fast:</p>
<pre><code>out = [w for w in words if w in sentence]
</code></pre> | python|pandas | 1 |
575 | 70,649,379 | attributeerror: 'dataframe' object has no attribute 'data_type' | <p>I am getting the following error: <code>attributeerror: 'dataframe' object has no attribute 'data_type'</code>. I am trying to recreate the code <a href="https://gist.github.com/susanli2016/92c41e7222a5d6d2db933e6d22294d7e" rel="nofollow noreferrer">from this link</a>, which is based on this <a href="https://towardsdatascience.com/multi-class-text-classification-with-deep-learning-using-bert-b59ca2f5c613" rel="nofollow noreferrer">article</a>, with my own dataset, which is similar to the article's.</p>
<pre class="lang-ipython prettyprint-override"><code>from sklearn.model_selection import train_test_split
X_train, X_val, y_train, y_val = train_test_split(df.index.values,
df.label.values,
test_size=0.15,
random_state=42,
stratify=df.label.values)
df['data_type'] = ['not_set']*df.shape[0]
df.loc[X_train, 'data_type'] = 'train'
df.loc[X_val, 'data_type'] = 'val'
df.groupby(['Conference', 'label', 'data_type']).count()
</code></pre>
<pre class="lang-ipython prettyprint-override"><code>tokenizer = BertTokenizer.from_pretrained('bert-base-uncased',
do_lower_case=True)
encoded_data_train = tokenizer.batch_encode_plus(
df[df.data_type=='train'].example.values,
add_special_tokens=True,
return_attention_mask=True,
pad_to_max_length=True,
max_length=256,
return_tensors='pt'
)
</code></pre>
<p>and this is the error I get:</p>
<pre><code>---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_24180/2662883887.py in <module>
3
4 encoded_data_train = tokenizer.batch_encode_plus(
----> 5 df[df.data_type=='train'].example.values,
6 add_special_tokens=True,
7 return_attention_mask=True,
C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\generic.py in __getattr__(self, name)
5485 ):
5486 return self[name]
-> 5487 return object.__getattribute__(self, name)
5488
5489 def __setattr__(self, name: str, value) -> None:
AttributeError: 'DataFrame' object has no attribute 'data_type'
</code></pre>
<p>I am using python: 3.9; pytorch :1.10.1; pandas: 1.3.5; transformers: 4.15.0</p> | <p>The error means you have no <code>data_type</code> column in your dataframe because you missed <a href="https://towardsdatascience.com/multi-class-text-classification-with-deep-learning-using-bert-b59ca2f5c613#5a37" rel="nofollow noreferrer">this step</a></p>
<pre><code>from sklearn.model_selection import train_test_split
X_train, X_val, y_train, y_val = train_test_split(df.index.values,
df.label.values,
test_size=0.15,
random_state=42,
stratify=df.label.values)
df['data_type'] = ['not_set']*df.shape[0] # <- HERE
df.loc[X_train, 'data_type'] = 'train' # <- HERE
df.loc[X_val, 'data_type'] = 'val' # <- HERE
df.groupby(['Conference', 'label', 'data_type']).count()
</code></pre>
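<p>A dependency-free sketch of the same labeling step (a hypothetical four-row frame, with the train/validation index lists hard-coded instead of coming from <code>train_test_split</code>):</p>

```python
import pandas as pd

df = pd.DataFrame({'label': [0, 1, 0, 1]})
X_train, X_val = [0, 1], [2, 3]  # index values, as train_test_split would return

df['data_type'] = ['not_set'] * df.shape[0]
df.loc[X_train, 'data_type'] = 'train'
df.loc[X_val, 'data_type'] = 'val'

print(list(df['data_type']))  # ['train', 'train', 'val', 'val']
```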
<p><strong>Demo</strong></p>
<ol>
<li>Setup</li>
</ol>
<pre><code>import pandas as pd
from sklearn.model_selection import train_test_split
# The Data
df = pd.read_csv('data/title_conference.csv')
df['label'] = pd.factorize(df['Conference'])[0]
# Train and Validation Split
X_train, X_val, y_train, y_val = train_test_split(df.index.values,
df.label.values,
test_size=0.15,
random_state=42,
stratify=df.label.values)
df['data_type'] = ['not_set']*df.shape[0]
df.loc[X_train, 'data_type'] = 'train'
df.loc[X_val, 'data_type'] = 'val'
</code></pre>
<ol start="2">
<li>Code</li>
</ol>
<pre><code>from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased',
do_lower_case=True)
encoded_data_train = tokenizer.batch_encode_plus(
df[df.data_type=='train'].Title.values,
add_special_tokens=True,
return_attention_mask=True,
pad_to_max_length=True,
max_length=256,
return_tensors='pt'
)
</code></pre>
<p>Output:</p>
<pre><code>>>> encoded_data_train
{'input_ids': tensor([[ 101, 8144, 1999, ..., 0, 0, 0],
[ 101, 2152, 2836, ..., 0, 0, 0],
[ 101, 22454, 25806, ..., 0, 0, 0],
...,
[ 101, 1037, 2047, ..., 0, 0, 0],
[ 101, 13229, 7375, ..., 0, 0, 0],
[ 101, 2006, 1996, ..., 0, 0, 0]]), 'token_type_ids': tensor([[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]]), 'attention_mask': tensor([[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0],
...,
[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0]])}
</code></pre> | python|pandas | 1 |
576 | 42,680,133 | Copy just two columns from one DataFrame to another in pandas | <p>I have a DataFrame with shape of (418, 13) and I want to just copy the two columns into a new DataFrame for outputting to a csv file. (I am writing a prediction)</p>
<pre class="lang-py prettyprint-override"><code>csv_pred = prediction[["PassengerId", "Survived"]].copy()
csv_pred.to_csv('n.csv')
</code></pre>
<p>However when I look into the outputted csv file I see this:</p>
<pre><code>,PassengerId,Survived
0,892,0
1,893,1
2,894,0
. . .
. . .
</code></pre>
<p>instead of what I expected which is:</p>
<pre><code>"PassengerId","Survived"
892,0
893,1
894,0
</code></pre>
<p>Does anyone know why my code doesn't work? Thanks in advance.</p> | <p>There is no need to create a new DF:</p>
<pre><code>prediction[["PassengerId", "Survived"]].to_csv('/path/to/file.csv', index=False)
</code></pre> | python|csv|pandas | 5 |
577 | 42,752,096 | How to get the average of a group every 9 years | <p>I have a data frame called EPI.
it looks like this:</p>
<p><a href="https://i.stack.imgur.com/OyzZz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OyzZz.png" alt="enter image description here"></a></p>
<p>It has 104 countries. Each country has values from 1991 till 2008 (18 years).
I want to have average every 9 years. So, each country will have 2 averages.</p>
<p>An edit:
This is the command I used to use it to get average. But it gives me one value (average) for each country. </p>
<pre><code>aver_economic_growth <- aggregate( HDI_growth_rate[,3], list(economic_growth$cname), mean, na.rm=TRUE)
</code></pre>
<p>But I need to get an average for each 9 years of a country. </p>
<p>Please note that I am a new user of R and I didn't find pandas in the package installer!</p> | <p>I think you can first convert the years to datetime and then <code>groupby</code> with <code>resample</code> and <code>mean</code>. Finally, convert back to <code>year</code>s.</p>
<pre><code>#sample data for testing
np.random.seed(100)
start = pd.to_datetime('1991-02-24')
rng = pd.date_range(start, periods=36, freq='A')
df = pd.DataFrame({'cname': ['Albania'] * 18 + ['Argentina'] * 18,
'year': rng.year,
'rgdpna.pop': np.random.choice([0,1,2], size=36)})
#print (df)
df.year = pd.to_datetime(df.year, format='%Y')
df1 = df.set_index('year').groupby('cname').resample('9A',closed='left').mean().reset_index()
df1.year = df1.year.dt.year
print (df1)
cname year rgdpna.pop
0 Albania 1999 1.000000
1 Albania 2008 1.000000
2 Argentina 2017 0.888889
3 Argentina 2026 0.888889
</code></pre> | pandas|average|pandas-groupby | 0 |
578 | 42,676,859 | numpy condition loop np.where | <p>I am trying to compare a sample column to two reference columns, D and R. If sample matches D or R it replaces that data with D or R; unless ./. is in the sample column then I want the call to be NR. I have added the LogicCALL column to demonstrate-- in my actual data dataframe those calls would replace (1,0, ./.)</p>
<pre><code> ReferenceD ReferenceR sample LogicCALL
0 1 0 1 D
1 1 1 ./. NC
2 1 0 0 R
Index(['ReferenceD', 'ReferenceR', 'sample', 'LogicCALL'], dtype='object')
</code></pre>
<p>To this point I have construct the loop below; where Alt is a list of samples. The loop works for calling D and R's but not NC's, instead the script returns "R". </p>
<pre><code>for sample in Alt:
gtdata[(sample)] = np.where((gtdata[(sample)] == gtdata['ReferenceD']) & (gtdata[sample] != gtdata['ReferenceR']), "D",
np.where((gtdata[(sample)] == "D") & (gtdata[(sample)] is not ('\./.')), "D",
np.where((gtdata[(sample)] == "D") & (gtdata[(sample)].str.contains('\./.')), "NC",
"R")))
</code></pre> | <p>It's not a functional syntax, but the most readable way to do this would be to procedurally make the assignments:</p>
<pre><code>df.loc[df['ReferenceD'].astype(str) == df['sample'], 'LogicCALL'] = 'D'
df.loc[df['ReferenceR'].astype(str) == df['sample'], 'LogicCALL'] = 'R'
df.loc[df['sample'] == './.', 'LogicCALL'] = 'NR'
</code></pre> | python|loops|numpy|conditional-statements | 0 |
579 | 43,023,665 | using pandas to create a multi-tile multi-series scatter chart | <p>Consider the following sample data frame:</p>
<pre><code>rng = pd.date_range('1/1/2011', periods=72, freq='H')
df = pd.DataFrame({
'cat': list('ABCD'*int(len(rng)/4)),
'D1': np.random.randn(72),
'D2': np.random.randn(72),
'D3': np.random.randn(72),
'D4': np.random.randn(72)
}, index=rng)
</code></pre>
<p>I'm looking for an idiomatic way to scatter-plot this as following:</p>
<ol>
<li>4 subplots (tiles), one for each category <code>(A, B, C, or D)</code></li>
<li>each D series plotted in its own color</li>
</ol>
<p>I can do this with a bunch of filtering and for-loops, but I'm looking for a more compact pandas-like way.</p> | <p>This is my guess at what you want.</p>
<pre><code>fig, axes = plt.subplots(2, 2, figsize=(8, 6), sharex=True, sharey=True)
for i, (cat, g) in enumerate(df.groupby('cat')):
ax = axes[i // 2, i % 2]
for j, c in g.filter(like='D').iteritems():
c.plot(ax=ax, title=cat, label=j, style='o')
ax.legend(loc='best', fontsize=8)
fig.tight_layout()
</code></pre>
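<p>The <code>groupby('cat')</code> / <code>filter(like='D')</code> machinery behind that loop can be exercised without matplotlib; a sketch with a hypothetical small frame:</p>

```python
import pandas as pd
import numpy as np

df = pd.DataFrame({'cat': list('ABCD' * 3),
                   'D1': np.arange(12.0),
                   'D2': np.arange(12.0) * 2})

# one sub-frame per category, keeping only the D* columns --
# each of these would feed one subplot in the loop above
groups = {cat: g.filter(like='D') for cat, g in df.groupby('cat')}
print(sorted(groups))     # ['A', 'B', 'C', 'D']
print(groups['A'].shape)  # (3, 2)
```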
<p><a href="https://i.stack.imgur.com/hsEYu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hsEYu.png" alt="enter image description here"></a></p> | python|pandas|plot|scatter|tiling | 1 |
580 | 42,784,768 | python: Multiply slice i of a matrix stack by column i of a matrix efficiently | <pre><code>import numpy as np
M,N,R = 2,3,4
# For a 3-dimensional array A:
A = np.reshape(np.r_[0:M*N*R], [M,N,R], order = 'C')
# and 2-dimensional array B:
B = np.reshape(np.r_[0:M*R], [R,M], order = 'C')
</code></pre>
<p>I would like the <code>N*M</code> matrix that results from multiplying slice <code>i</code> of <code>A</code> by column <code>i</code> of <code>B</code>. I have tried <code>np.dot</code>, and <code>np.einsum</code> and have been unable to obtain what I need. </p>
<p>Could anybody help, please? Thanks!</p> | <p>With <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.einsum.html" rel="nofollow noreferrer"><code>np.einsum</code></a>, we would have -</p>
<pre><code>np.einsum('ijk,ki->ji',A,B)
</code></pre>
<p>Let's verify the results using the given sample and using matrix-multiplication with <code>np.dot</code> -</p>
<pre><code>In [35]: A.shape
Out[35]: (2, 3, 4)
In [36]: B.shape
Out[36]: (4, 2)
In [37]: A[0].dot(B[:,0])
Out[37]: array([ 28, 76, 124])
In [38]: A[1].dot(B[:,1])
Out[38]: array([226, 290, 354])
In [39]: np.einsum('ijk,ki->ji',A,B)
Out[39]:
array([[ 28, 226],
[ 76, 290],
[124, 354]])
</code></pre>
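<p>The same check can be scripted end to end -- the looped <code>dot</code> calls, stacked as columns, should match the <code>einsum</code> call exactly:</p>

```python
import numpy as np

M, N, R = 2, 3, 4
A = np.arange(M * N * R).reshape(M, N, R)
B = np.arange(R * M).reshape(R, M)

# loop version: slice i of A times column i of B, stacked as columns
loop = np.stack([A[i].dot(B[:, i]) for i in range(M)], axis=1)
vec = np.einsum('ijk,ki->ji', A, B)

print(np.array_equal(loop, vec))  # True
print(vec.shape)                  # (3, 2)
```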
<p>For aspects related to when to use <code>einsum</code> over <code>dot-based</code> tools like <code>np.dot</code>/<code>np.tensordot</code>, here's a <a href="https://stackoverflow.com/questions/41443444/numpy-element-wise-dot-product"><code>related post</code></a>.</p> | python|arrays|numpy | 2 |
581 | 26,996,140 | Ploting the same numerical relationship in multiple units / representations | <p>Consider an <code>X</code> and <code>Y</code> relationship where, for a change in the <strong>unit system</strong> in <code>X</code> and <code>Y</code>, the numerical relationship does <strong>not</strong> change.</p>
<p>How can I plot this relationship in both unit systems on the <strong>same</strong> plot? (i.e. show <strong>two</strong> <code>X</code> axis and <strong>two</strong> <code>Y</code> axis for the same single plot, as opposed to showing this relationship in two separate plots)</p>
<p>In my particular case, I have the following two plots. The one on the right simply renders the same data normalized to represent <a href="https://stackoverflow.com/questions/26984789/sorted-cumulative-plots">percentages of the total quantities involved</a>.</p>
<p>Here is an example :</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
ar = np.array([80, 64, 82, 72, 9, 35, 94, 58, 19, 41, 42, 18, 29, 46, 60, 14, 38,
19, 20, 34, 59, 64, 46, 39, 24, 36, 86, 64, 39, 15, 76, 93, 54, 14,
52, 25, 14, 4, 51, 55, 16, 32, 14, 46, 41, 40, 1, 2, 84, 61, 13,
26, 60, 76, 22, 77, 50, 7, 83, 4, 42, 71, 23, 56, 41, 35, 37, 86,
3, 95, 76, 37, 40, 53, 36, 24, 97, 89, 58, 63, 69, 24, 23, 95, 7,
55, 33, 42, 54, 92, 87, 37, 99, 71, 53, 71, 79, 15, 52, 37])
ar[::-1].sort()
y = np.cumsum(ar).astype("float32")
# Normalize to a percentage
y /=y.max()
y *=100.
# Prepend a 0 to y as zero stores have zero items
y = np.hstack((0,y))
# Get cumulative percentage of stores
x = np.linspace(0,100,y.size)
# Plot the normalized chart (the one on the right)
f, ax = plt.subplots(figsize=(3,3))
ax.plot(x,y)
# Plot the unnormalized chart (the one on the left)
f, ax = plt.subplots(figsize=(3,3))
ax.plot(x*len(ar), y*np.sum(ar))
</code></pre>
<p><img src="https://i.stack.imgur.com/XgiaI.png" alt="enter image description here" />
<img src="https://i.stack.imgur.com/rfon3.png" alt="enter image description here" /></p>
<h2>References:</h2>
<ul>
<li><a href="https://stackoverflow.com/questions/25493052/plotting-with-multiple-y-axes">This thread</a> discusses how to plot multiple axes on the same plot.</li>
<li><a href="https://stackoverflow.com/questions/26984789/sorted-cumulative-plots">This thread</a> shows how to get the Lorenz plot, which is what these plots represent, given the input array <code>ar</code>.</li>
</ul> | <h3>The Original Answer</h3>
<p><a href="http://matplotlib.org/examples/subplots_axes_and_figures/fahrenheit_celsius_scales.html" rel="nofollow noreferrer">This example</a>, from the excellent matploblib documentation, exactly solves your problem, except that you may want to use an <em>ad hoc</em> solution for the secondary axes' ticks (I mean, the ones with percentages).</p>
<h3>No news from the OP</h3>
<p>I thought "Is it possible that my answer was completely afar from the OP needs?" and shortly I felt compelled to check for myself... Following the link and working a little bit I was able to obtain the plot below (please note, my data differ from OP's data)</p>
<p><img src="https://i.stack.imgur.com/VhlCb.png" alt="enter image description here"></p>
<p>that seems similar to the OP request, but who knows? Is my original answer not good enough?
What else have I to do for my answer being accepted?</p> | python|numpy|matplotlib | 1 |
582 | 14,493,026 | Read Values from .csv file and convert them to float arrays | <p>I stumbled upon a little coding problem. I have to basically read data from a .csv file which looks a lot like this:</p>
<pre><code>2011-06-19 17:29:00.000,72,44,56,0.4772,0.3286,0.8497,31.3587,0.3235,0.9147,28.5751,0.3872,0.2803,0,0.2601,0.2073,0.1172,0,0.0,0,5.8922,1,0,0,0,1.2759
</code></pre>
<p>Now, I need to basically an entire file consisting of rows like this and parse them into numpy arrays. Till now, I have been able to get them into a big string type object using code similar to this:</p>
<pre><code>order_hist = np.loadtxt(filename_input,delimiter=',',dtype={'names': ('Year', 'Mon', 'Day', 'Stock', 'Action', 'Amount'), 'formats': ('i4', 'i4', 'i4', 'S10', 'S10', 'i4')})
</code></pre>
<p>The format for this file consists of a set of S20 data types as of now. I need to basically extract all of the data in the big ORDER_HIST data type into a set of arrays for each column. I do not know how to save the date time column (I've kept it as String for now). I need to convert the rest to float, but the below code is giving me an error:</p>
<pre><code> temparr=float[:len(order_hist)]
for x in range(len(order_hist['Stock'])):
temparr[x]=float(order_hist['Stock'][x]);
</code></pre>
<p>Can someone show me just how I can convert all the columns to the arrays that I need??? Or possibly direct me to some link to do so?</p> | <p>Boy, have I got a treat for you. <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.genfromtxt.html" rel="nofollow">numpy.genfromtxt</a> has a <code>converters</code> parameter, which allows you to specify a function for each column as the file is parsed. The function is fed the CSV string value. Its return value becomes the corresponding value in the numpy array.</p>
<p>Morever, the <code>dtype = None</code> parameter tells <code>genfromtxt</code> to make an intelligent guess as to the type of each column. In particular, numeric columns are automatically cast to an appropriate dtype.</p>
<p>For example, suppose your data file contains</p>
<pre><code>2011-06-19 17:29:00.000,72,44,56
</code></pre>
<p>Then</p>
<pre><code>import numpy as np
import datetime as DT
def make_date(datestr):
return DT.datetime.strptime(datestr, '%Y-%m-%d %H:%M:%S.%f')
arr = np.genfromtxt(filename, delimiter = ',',
converters = {'Date':make_date},
names = ('Date', 'Stock', 'Action', 'Amount'),
dtype = None)
print(arr)
print(arr.dtype)
</code></pre>
<p>yields</p>
<pre><code>(datetime.datetime(2011, 6, 19, 17, 29), 72, 44, 56)
[('Date', '|O4'), ('Stock', '<i4'), ('Action', '<i4'), ('Amount', '<i4')]
</code></pre>
<p>Your real csv file has more columns, so you'd want to add more items to <code>names</code>, but otherwise, the example should still stand.</p>
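<p>The converter piece can be checked in isolation (same format string as above):</p>

```python
import datetime as DT

def make_date(datestr):
    return DT.datetime.strptime(datestr, '%Y-%m-%d %H:%M:%S.%f')

d = make_date('2011-06-19 17:29:00.000')
print(d == DT.datetime(2011, 6, 19, 17, 29))  # True
```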
<p>If you don't really care about the extra columns, you can assign a fluff-name like this:</p>
<pre><code>arr = np.genfromtxt(filename, delimiter=',',
converters={'Date': make_date},
names=('Date', 'Stock', 'Action', 'Amount') +
tuple('col{i}'.format(i=i) for i in range(22)),
dtype = None)
</code></pre>
<p>yields</p>
<pre><code>(datetime.datetime(2011, 6, 19, 17, 29), 72, 44, 56, 0.4772, 0.3286, 0.8497, 31.3587, 0.3235, 0.9147, 28.5751, 0.3872, 0.2803, 0, 0.2601, 0.2073, 0.1172, 0, 0.0, 0, 5.8922, 1, 0, 0, 0, 1.2759)
</code></pre>
<hr>
<p>You might also be interested in checking out the <a href="http://pandas.pydata.org/" rel="nofollow">pandas</a> module which is built on top of <code>numpy</code>, and which takes parsing CSV to an even higher level of luxury: It has a <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.io.parsers.read_csv.html" rel="nofollow">pandas.read_csv</a> function whose <code>parse_dates = True</code> parameter will automatically parse date strings (using <a href="http://labix.org/python-dateutil#head-c0e81a473b647dfa787dc11e8c69557ec2c3ecd2" rel="nofollow">dateutil</a>).</p>
<p>Using pandas, your csv could be parsed with</p>
<pre><code>df = pd.read_csv(filename, parse_dates = [0,1], header = None,
names=('Date', 'Stock', 'Action', 'Amount') +
tuple('col{i}'.format(i=i) for i in range(22)))
</code></pre>
<p>Note there is no need to specify the <code>make_date</code> function. Just to be clear -- <code>pandas.read_csv</code> returns a <code>DataFrame</code>, not a numpy array. The <code>DataFrame</code> may actually be more useful for your purpose, but you should be aware it is a different object with a whole new world of methods to exploit and explore.</p> | python|arrays|csv|numpy|data-conversion | 6 |
583 | 25,024,679 | Convert years into date time pandas | <p>If I have a list of integers (e.g. [2006, 2007, 2008, 2009, ...]), and I have them set as a Pandas DataFrame index, how do I convert that index into a Datetime index, so when plotting the x-axis is not represented as [0, 1, 2, 3...] +2.006E3?</p> | <p>You can construct a DatetimeIndex if you cast the integers as strings first, i.e.</p>
<pre><code>index = pd.DatetimeIndex([str(x) for x in [2005,2006,2007]])
</code></pre>
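<p>A runnable sketch with a year list like the one in the question:</p>

```python
import pandas as pd

years = [2006, 2007, 2008, 2009]
idx = pd.DatetimeIndex([str(y) for y in years])

print(idx[0])          # 2006-01-01 00:00:00
print(list(idx.year))  # [2006, 2007, 2008, 2009]
```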
<p>will give you a DatetimeIndex with January 1 of each year.</p> | datetime|pandas | 1 |
584 | 39,138,299 | Dataframe Simple Moving Average (SMA) Calculation | <p>Is there any simple tool/lib that can help me easily calculate the Simple Moving Average SMA(N) of dataframe ?</p>
<pre><code> GLD SMA(5)
Date
2005-01-03 00:00:00+00:00 43.020000 Nan
2005-01-04 00:00:00+00:00 42.740002 Nan
2005-01-05 00:00:00+00:00 42.669998 Nan
2005-01-06 00:00:00+00:00 42.150002 Nan
2005-01-07 00:00:00+00:00 41.840000 ..
2005-01-10 00:00:00+00:00 41.950001 ..
2005-01-11 00:00:00+00:00 42.209999 ..
2005-01-12 00:00:00+00:00 42.599998 ..
2005-01-13 00:00:00+00:00 42.599998 ..
2005-01-14 00:00:00+00:00 42.320000 ..
</code></pre> | <pre><code>df['SMA(5)'] = df.GLD.rolling(5).mean()
df
</code></pre>
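<p>A hand-checkable sketch using the first few GLD values from the question (the first four windows are incomplete, hence NaN):</p>

```python
import pandas as pd

gld = pd.Series([43.02, 42.74, 42.67, 42.15, 41.84, 41.95])
sma5 = gld.rolling(5).mean()

print(int(sma5.isna().sum()))  # 4
print(round(sma5.iloc[4], 3))  # 42.484
```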
<p><a href="https://i.stack.imgur.com/Li6Ke.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Li6Ke.png" alt="enter image description here"></a></p> | python|pandas|dataframe | 14 |
585 | 39,001,457 | hdf5 not supported (please install/reinstall h5py) Scipy not supported! when importing TFLearn? | <p>I'm getting this error:</p>
<pre><code>hdf5 not supported (please install/reinstall h5py)
Scipy not supported!
</code></pre>
<p>when I try to import <code>tflearn</code>. And I think due to this problem my TFLearn code is not working properly?</p> | <p>I ran into the same issue a few minutes ago, pretty much you just need to reinstall h5py using the package manager of your current environment. </p>
<p><a href="http://docs.h5py.org/en/latest/build.html" rel="noreferrer">http://docs.h5py.org/en/latest/build.html</a></p> | python|scipy|tensorflow | 10 |
586 | 12,840,847 | Filling higher-frequency windows when upsampling with pandas | <p>I am converting low-frequency data to a higher frequency with pandas (for instance monthly to daily). When making this conversion, I would like the resulting higher-frequency index to span the entire low-frequency window. For example, suppose I have a monthly series, like so:</p>
<pre><code>import numpy as np
from pandas import *
data = np.random.randn(2)
s = Series(data, index=date_range('2012-01-01', periods=len(data), freq='M'))
s
2012-01-31 0
2012-02-29 1
</code></pre>
<p>Now, I convert it to daily frequency:</p>
<pre><code>s.resample('D')
2012-01-31 0
2012-02-01 NaN
2012-02-02 NaN
2012-02-03 NaN
...
2012-02-27 NaN
2012-02-28 NaN
2012-02-29 1
</code></pre>
<p>Notice how the resulting output goes from 2012-01-31 to 2012-02-29. But what I really want is days from 2012-01-01 to 2012-02-29, so that the daily index "fills" the entire January month, even if 2012-01-31 is still the only non-NaN observation in that month. </p>
<p>I'm also curious if there are built-in methods that give more control over how the higher-frequency period is filled with the lower frequency values. In the monthly to daily example, the default is to fill in just the last day of each month; if I use a <code>PeriodIndex</code> to index my series I can also <code>s.resample('D', convention='start')</code> to have only the first observation filled in. However, I also would like options to fill every day in the month with the monthly value, and to fill every day with the daily average (the monthly value divided by the number of days in the month).</p>
<p>Note that basic backfill and forward fill would not be sufficient to fill every daily observation in the month with the monthly value. For example, if the monthly series runs from January to March but the February value is NaN, then a forward fill would carry the January values into February, which is not desired. </p> | <p>How about this?</p>
<pre><code>s.reindex(DatetimeIndex(start=s.index[0].replace(day=1), end=s.index[-1], freq='D'))
</code></pre> | python|pandas | 5 |
587 | 33,673,971 | Find a string in a huge string file | <p>I have to find a list of strings in a txt.file</p>
<p>The file has 200k+ lines</p>
<p><strong>This is my code:</strong> </p>
<pre><code>with open(txtfile, 'rU') as csvfile:
tp = pd.read_csv(csvfile, iterator=True, chunksize=6000, error_bad_lines=False,
header=None, skip_blank_lines=True, lineterminator="\n")
for chunk in tp:
if string_to_find in chunk:
print "hurrà"
</code></pre>
<p><strong>The problem</strong> is that with this code only the first 9k lines are analyzed.
Why?</p> | <p>Do you really need to open the file first then use pandas? If it's an option you can just read with pandas then <a href="http://pandas.pydata.org/pandas-docs/stable/merging.html" rel="nofollow">concatenate</a>.</p>
<p>To do that just use <code>read_csv</code>, <code>concat</code> the files, then loop through them.</p>
<pre><code>import pandas as pd
df = pd.read_csv('data.csv', iterator=True, chunksize=6000, error_bad_lines=False,
header=None, skip_blank_lines=True)
df = pd.concat(df)
# start the for loop
</code></pre>
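<p>One pitfall worth noting: <code>string_to_find in chunk</code> tests a DataFrame's <em>column labels</em>, not its cell values, so it can miss matches even in the rows that do get read. A self-contained sketch of a per-chunk value check (the in-memory CSV and the target string are stand-ins for the real file):</p>

```python
import io
import pandas as pd

# stand-in for the real 200k-line file
csv_text = "\n".join(["foo"] * 9000 + ["hurra"] + ["bar"] * 3000)
string_to_find = "hurra"

found = False
for chunk in pd.read_csv(io.StringIO(csv_text), header=None,
                         chunksize=6000, skip_blank_lines=True):
    # `string_to_find in chunk` would test column labels; compare values instead
    if chunk.astype(str).eq(string_to_find).any().any():
        found = True
        break
```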
<p>It depends on your for loop; <code>pandas</code> most likely has a built-in function that lets you avoid the loop entirely, since explicit looping is slow on large data.</p> | python|performance|csv|pandas | 1
588 | 33,831,395 | How do I split repeated blocks of data into multiple columns and parse datetime? | <pre><code>import pandas as pd
f = pd.read_table('151101.mnd',header = 30)
print f.head()
print f.shape
2015-11-01 00:10:00 00:10:00
0 # z speed dir W sigW bck error
1 30 5.05 333.0 0.23 0.13 1.44E+05 0.00
2 40 5.05 337.1 -0.02 0.14 7.69E+03 0.00
3 50 5.03 338.5 0.00 0.15 4.83E+03 0.00
4 60 6.21 344.3 -0.09 0.18 6.13E+03 0.00
(4607, 1)
</code></pre>
<p>Basically I have this file that I read in with pandas. There is 2 things that I would like to do. </p>
<ol>
<li><p>I would like to store the <code>Time</code> header as a variable called time. The tricky part of this is that every 33 rows another block of data starts with the next 10 min in the day's data. So I guess every 33 rows I would need to grab the <code>Time</code> header and store it as the variable time.</p></li>
<li><p>When I print out the shape of the file it says there are <code>4,607 rows</code> and 1 column. However I would like to split this "one column of text" into 8 columns. <code>index</code>, <code>z</code>, <code>speed</code>, <code>dir</code>, <code>w</code>, <code>sigw</code>, <code>bck</code>, <code>error</code>.</p></li>
</ol>
<p>How do I accomplish these two things?</p> | <h2>Case (1): rows repeat themselves at the same step</h2>
<hr>
<pre><code>pd.read_table('151101.mnd', sep=r'\s+', skiprows=np.arange(0, 4607, 32))
</code></pre>
<h2>Case (2): the unwanted rows appear randomly</h2>
<hr>
<p>If not, you have to remove them manually; first load your data into a single column:</p>
<pre><code>df = pd.read_table('151101.mnd', header=None)
</code></pre>
<p>then you need to detect the unwanted rows by doing the following</p>
<pre><code>indices_to_remove = df.iloc[:, 0].str.contains(r'^\s*\d{4}-\d{2}-\d{2}')
</code></pre>
<p>then slice only the needed rows</p>
<pre><code>df[~indices_to_remove]
</code></pre>
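<p>As a side note, the timestamp header that the question asks about can be recovered in the same pass. A self-contained sketch (the two-block sample text and the column names are assumptions based on the excerpt in the question):</p>

```python
import io
import pandas as pd

# tiny two-block stand-in for the .mnd file
text = """2015-11-01 00:10:00 00:10:00
# z speed dir W sigW bck error
1 30 5.05 333.0 0.23 0.13 1.44E+05 0.00
2 40 5.05 337.1 -0.02 0.14 7.69E+03 0.00
2015-11-01 00:20:00 00:20:00
# z speed dir W sigW bck error
1 30 5.10 331.0 0.20 0.12 1.40E+05 0.00
"""

raw = pd.read_table(io.StringIO(text), header=None)
col = raw.iloc[:, 0]
is_time = col.str.contains(r'^\s*\d{4}-\d{2}-\d{2}')   # block timestamp lines
is_header = col.str.startswith('#')                    # column header lines

time = col.where(is_time).ffill()                      # carry timestamps down
data = col[~is_time & ~is_header]
df = data.str.split(r'\s+', expand=True)
df.columns = ['index', 'z', 'speed', 'dir', 'w', 'sigw', 'bck', 'error']
df['time'] = time[df.index].str.slice(0, 19)           # 'YYYY-MM-DD HH:MM:SS'
```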
<p>then finally create your final <code>dataframe</code></p>
<pre><code>pd.DataFrame(list(df[~indices_to_remove].iloc[:, 0].str.split(r'\s+')))
</code></pre> | python|file|pandas|dataframe|multiple-columns | 1 |
589 | 23,492,409 | Adding labels to a matplotlib graph | <p>Having the following code:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
days, impressions = np.loadtxt('results_history.csv', unpack=True, delimiter=',',usecols=(0,1) ,
converters={ 0: mdates.strpdate2num('%d-%m-%y')})
plt.plot_date(x=days, y=impressions, fmt="r-")
plt.title("Load Testing Results")
#params = {'legend.labelsize': 500,
#'legend.handletextpad': 1,
#'legend.handlelength': 2,
#'legend.loc': 'upper left',
#'labelspacing':0.25,
#'legend.linewidth': 50}
#plt.rcParams.update(params)
plt.legend("response times")
plt.ylabel("Date")
plt.grid(True)
plt.show()
</code></pre>
<p>The graph is generated, but I can't figure out how to add some xy labels. The generated graph: <img src="https://i.stack.imgur.com/YWCwo.jpg" alt="enter image description here"></p>
<p>I also tried to increase the legend text size, but the text is not displayed, and the labels on the X axis overlap. CSV file:</p>
<pre><code>01-05-14, 55494, Build 1
10-05-14, 55000, Build 2
15-05-14, 55500, Build 3
20-05-14, 57482, Build 4
25-05-14, 58741, Build 5
</code></pre>
<p>How can I add the xytext from the CSV and also change the format for the legend and X axis?</p> | <p>You need <a href="http://matplotlib.org/users/annotations_intro.html" rel="nofollow">annotate</a>, e.g.:</p>
<pre><code>plt.annotate('some text', xy=(days[0], impressions[0]))
</code></pre>
<p>To adjust the x axis text you could add:</p>
<pre><code>fig=plt.figure() # below the import statements
...
fig.autofmt_xdate() # after plotting
</code></pre>
<p>To change the legend text use the label parameter in your plot function:</p>
<pre><code>plt.plot_date(x=days, y=impressions, fmt="r-",label="response times")
</code></pre>
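<p>Putting it together with the Build labels from the CSV's third column (the file contents are inlined here so the sketch is self-contained; plain <code>plot</code> is used since it handles datetime values directly, and the Agg backend just lets it run headless):</p>

```python
import datetime as dt
import matplotlib
matplotlib.use('Agg')                     # render off-screen
import matplotlib.pyplot as plt

csv_text = ("01-05-14, 55494, Build 1\n"
            "10-05-14, 55000, Build 2\n"
            "15-05-14, 55500, Build 3\n")

rows = [line.split(',') for line in csv_text.strip().splitlines()]
days = [dt.datetime.strptime(r[0].strip(), '%d-%m-%y') for r in rows]
impressions = [float(r[1]) for r in rows]
builds = [r[2].strip() for r in rows]

fig, ax = plt.subplots()
ax.plot(days, impressions, 'r-', label='response times')
for d, y, build in zip(days, impressions, builds):
    # label each point with its Build name, nudged 8 points upward
    ax.annotate(build, xy=(d, y), xytext=(0, 8), textcoords='offset points')
ax.set_ylabel('Impressions')
ax.legend(fontsize='x-large')
fig.autofmt_xdate()                       # slant the date labels
```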
<p>To increase the legend font size do this:</p>
<pre><code>plt.legend(fontsize='x-large')
</code></pre> | python|csv|numpy|matplotlib | 5 |
590 | 22,516,290 | Pandas idiom for attaching a predictions column to a dataframe | <p>What is the Pandas idiom for attaching the results of a prediction to the dataframe on which the prediction was made.</p>
<p>For example, if I have something like (where <code>qualityTrain</code> is the result of a <code>stats models</code> <code>fit</code>)</p>
<pre><code>qualityTrain = quality_data[some_selection_criterion]
pred1 = QualityLog.predict(qualityTrain)
qualityTrain = pd.concat([qualityTrain, pd.DataFrame(pred1, columns=['Pred1'])], axis=1)
</code></pre>
<p>the 'Pred1' values are not aligned correctly with the rest of <code>qualityTrain</code>. If I modify the last line so to reads</p>
<p><code> ...pd.DataFrame(pred1, columns=['Pred1'], <b>index=qualityTrain.index</b>)...</code></p>
<p>I get the results I expect.</p>
<p>Is there a better idiom for attaching results to a dataframe where the dataframe may have an arbitrary index?</p> | <p>You can just do</p>
<pre><code>qualityTrain['Pred1'] = pred1
</code></pre>
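<p>The direct assignment works because a plain ndarray is aligned by position rather than by label. A minimal sketch with a hypothetical non-default index, like the one left behind by a row selection:</p>

```python
import numpy as np
import pandas as pd

# hypothetical subset with a non-default index, as after a boolean selection
qualityTrain = pd.DataFrame({'x': [1.0, 2.0, 3.0]}, index=[4, 7, 9])
pred1 = np.array([0.1, 0.2, 0.3])    # plain ndarray, e.g. from .predict()

qualityTrain['Pred1'] = pred1        # positional: row 4 gets 0.1, row 9 gets 0.3
```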
<p>Note that we're (statsmodels) going to have pandas-in, pandas-out for predict pretty soon, so it'll hopefully alleviate some of these pain points.</p> | indexing|pandas|dataframe|statsmodels | 1 |
591 | 13,592,618 | python pandas dataframe thread safe? | <p>I am using multiple threads to access and delete data in my pandas dataframe. Because of this, I am wondering is pandas dataframe threadsafe?</p> | <p>No, pandas is not thread safe. And its not thread safe in surprising ways.</p>
<ul>
<li>Can I delete from a pandas dataframe while another thread is using it?</li>
</ul>
<p>Fuggedaboutit! Nope. And generally no. Not even for GIL-locked python datastructures.</p>
<ul>
<li>Can I read from a pandas object while someone else is writing to it?</li>
<li>Can I copy a pandas dataframe in my thread, and work on the copy?</li>
</ul>
<p>Definitely not. There's a long standing open issue: <a href="https://github.com/pandas-dev/pandas/issues/2728" rel="noreferrer">https://github.com/pandas-dev/pandas/issues/2728</a></p>
<p>Actually I think this is pretty reasonable (i.e. expected) behavior. I wouldn't expect to be able to simultaneously write to and read from, or copy, any datastructure unless either: i) it had been designed for concurrency, or ii) I have an exclusive lock on that object <em>and all the view objects derived from it</em> (<code>.loc</code>, <code>.iloc</code> are views and pandas has many others).</p>
<ul>
<li>Can I read from a pandas object while no-one else is writing to it?</li>
</ul>
<p>For almost all data structures in Python, the answer is yes. For pandas, no. And it seems it's not a design goal at present.</p>
<p>Typically, you can perform 'reading' operations on objects if no-one is performing mutating operations. You have to be a little cautious though. Some datastructures, including pandas, perform memoization, to cache expensive operations that are otherwise functionally pure. Its generally easy to implement lockless memoization in Python:</p>
<pre><code>@property
def thing(self):
    if self._thing is MISSING:
self._thing = self._calc_thing()
return self._thing
</code></pre>
<p>... it simple and safe (assuming assignment is safely atomic -- which has not always been the case for every language, but is in CPython, unless you override <code>__setattribute__</code>).</p>
<p>Pandas, series and dataframe indexes are computed lazily, on first use. I hope (but I do not see guarantees in the docs), that they're done in a similar safe way.</p>
<p>For all libraries (including pandas) I would <em>hope</em> that all types of read-only operations (or more specifically, 'functionally pure' operations) would be thread safe if no-one is performing mutating operations. I think this is a 'reasonable' easily-achievable, common, lower-bar for thread safeness.</p>
<p>For pandas, however, you <strong>cannot</strong> assume this. <em>Even if you can guarantee no-one is performing 'functionally impure' operations on your object (e.g. writing to cells, adding/deleting columns), pandas is not thread safe.</em></p>
<p>Here's a recent example: <a href="https://github.com/pandas-dev/pandas/issues/25870" rel="noreferrer">https://github.com/pandas-dev/pandas/issues/25870</a> (its marked as a duplicate of the .copy-not-threadsafe issue, but it seems it could be a separate issue).</p>
<pre><code>s = pd.Series(...)
f(s) # Success!
# Thread 1:
while True: f(s)
# Thread 2:
while True: f(s) # Exception !
</code></pre>
<p>... fails for <code>f(s): s.reindex(..., copy=True)</code>, which returns its result as a new object -- you would think it would be functionally pure and thread safe. Unfortunately, it is not.</p>
<p>The result of this is that we could not use pandas in production for our healthcare analytics system - and I now discourage it for internal development since it makes in-memory parallelization of read-only operations unsafe. (!!)</p>
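<p>If a frame must be shared across threads anyway, the conservative workaround is a single lock around every access, readers included. A sketch (the lock discipline here is my suggestion, not a pandas facility):</p>

```python
import threading
import pandas as pd

df = pd.DataFrame({'a': range(1000)})
lock = threading.Lock()          # one lock guards every access to df
results = []

def worker():
    for _ in range(100):
        with lock:               # readers must take the lock too
            results.append(int(df['a'].sum()))

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```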
<p>The <code>reindex</code> behavior is weird and surprising. If anyone has ideas about why it fails, please answer here: <a href="https://stackoverflow.com/questions/55347139/whats-the-source-of-thread-unsafety-in-this-usage-of-pandas">What's the source of thread-unsafety in this usage of pandas.Series.reindex(, copy=True)?</a></p>
<p>The maintainers marked this as a duplicate of <a href="https://github.com/pandas-dev/pandas/issues/2728" rel="noreferrer">https://github.com/pandas-dev/pandas/issues/2728</a> . I'm suspicious, but if <code>.copy</code> is the source, then <em>almost all of pandas</em> is not thread safe in any situation (which is their advice).</p>
<p>!</p> | python|thread-safety|pandas | 33 |
592 | 29,688,899 | Pandas: Checking if a date is a holiday and assigning boolean value | <p>I have a pandas data frame with date column, and I am trying to add a new column of boolean values indicating whether a given date is a holiday or not.</p>
<p>Following is the code, but it does not work (all the values are False) because the types seem to be different, and I can't figure out how to get the 'date' in the pandas data frame to be of the same type as the holidays:</p>
<pre><code>cal = USFederalHolidayCalendar()
holidays = cal.holidays(start=train_df['date'].min(),
end=train_df['date'].max()).to_pydatetime()
train_df['holiday'] = train_df['date'].isin(holidays)
print type(train_df['date'][1])
print type(holidays[0])
</code></pre> | <p>You don't need to convert anything. Just compare straight up. <code>pandas</code> is smart enough to compare a lot of different types with regards to dates and times. You have to have a slightly more esoteric format if you're having issues with date/time compatibility.</p>
<pre><code>import pandas as pd
from pandas.tseries.holiday import USFederalHolidayCalendar as calendar
dr = pd.date_range(start='2015-07-01', end='2015-07-31')
df = pd.DataFrame()
df['Date'] = dr
cal = calendar()
holidays = cal.holidays(start=dr.min(), end=dr.max())
df['Holiday'] = df['Date'].isin(holidays)
print(df)
</code></pre>
<p>Result:</p>
<pre><code> Date Holiday
0 2015-07-01 False
1 2015-07-02 False
2 2015-07-03 True
3 2015-07-04 False
4 2015-07-05 False
5 2015-07-06 False
6 2015-07-07 False
7 2015-07-08 False
8 2015-07-09 False
9 2015-07-10 False
10 2015-07-11 False
11 2015-07-12 False
12 2015-07-13 False
13 2015-07-14 False
14 2015-07-15 False
15 2015-07-16 False
16 2015-07-17 False
17 2015-07-18 False
18 2015-07-19 False
19 2015-07-20 False
20 2015-07-21 False
21 2015-07-22 False
22 2015-07-23 False
23 2015-07-24 False
24 2015-07-25 False
25 2015-07-26 False
26 2015-07-27 False
27 2015-07-28 False
28 2015-07-29 False
29 2015-07-30 False
30 2015-07-31 False
</code></pre>
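<p>Back in the question's own code, the fix is simply to drop the <code>.to_pydatetime()</code> conversion and compare straight up. A sketch with a hypothetical three-row <code>train_df</code>:</p>

```python
import pandas as pd
from pandas.tseries.holiday import USFederalHolidayCalendar

# hypothetical stand-in for the question's train_df
train_df = pd.DataFrame(
    {'date': pd.to_datetime(['2015-07-02', '2015-07-03', '2015-07-04'])})

cal = USFederalHolidayCalendar()
holidays = cal.holidays(start=train_df['date'].min(),
                        end=train_df['date'].max())  # no .to_pydatetime()
train_df['holiday'] = train_df['date'].isin(holidays)
```

<p>July 3 comes out <code>True</code> because the calendar moves Independence Day to the nearest workday.</p>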
<p>Note that July 4, 2015 falls on a Saturday.</p> | python|pandas | 50 |
593 | 62,239,130 | What is the best way to calculate the standard deviation | <p>I have a 50x22 matrix/vector, and i need to calculate the standard deviation from each column, precisely it looks like this.<a href="https://i.stack.imgur.com/idagY.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/idagY.jpg" alt="enter image description here"></a></p>
<p>For instance, the first column: [245, 244, 247, 243... 239], then the second column [115, 106, 99...149], and so on...</p>
<p>I did try calculating it using a calculator; I guess it wasn't a good idea, since it took me a while, but I figured numpy could help me.</p>
<p>The thing is, what I tried was:</p>
<pre><code>std_vec = np.std(img_feats[0])
</code></pre>
<p>The output was all wrong when I compared it to the first column that I obtained using the calculator.</p>
<p>This is the actual output, since the image does not look good.</p>
<pre><code>[247, 111, 162, 47, 39, 42, 47, 42, 204, 215, 50, 57, 196, 209, 199, 184, 219, 201, 204, 218]
[247, 108, 160, 47, 39, 44, 48, 44, 204, 213, 51, 60, 198, 205, 201, 184, 219, 201, 202, 216]
[245, 96, 160, 46, 38, 43, 44, 40, 195, 213, 51, 57, 199, 202, 204, 187, 222, 201, 202, 207]
[245, 102, 161, 46, 40, 41, 46, 40, 198, 213, 51, 55, 199, 210, 202, 187, 220, 202, 204, 210]
[246, 104, 160, 49, 38, 41, 46, 41, 198, 214, 52, 55, 199, 210, 204, 185, 220, 204, 204, 208]
[246, 107, 160, 47, 40, 40, 46, 40, 196, 213, 51, 57, 199, 207, 201, 185, 220, 202, 205, 208]
[246, 116, 161, 48, 40, 43, 45, 41, 201, 213, 51, 58, 201, 201, 204, 184, 219, 204, 205, 210]
[246, 123, 169, 48, 40, 42, 46, 43, 202, 213, 51, 64, 199, 198, 201, 189, 217, 199, 207, 216]
[246, 123, 167, 48, 40, 40, 45, 43, 204, 213, 51, 67, 198, 199, 202, 187, 219, 199, 207, 214]
[246, 122, 167, 48, 40, 43, 46, 43, 202, 213, 51, 64, 199, 197, 200, 185, 219, 199, 208, 213]
[245, 115, 165, 47, 38, 43, 44, 43, 199, 213, 51, 58, 199, 191, 203, 187, 217, 194, 206, 208]
[244, 106, 162, 47, 40, 40, 43, 40, 194, 214, 49, 58, 199, 188, 205, 191, 216, 199, 203, 208]
[247, 99, 160, 46, 42, 40, 46, 42, 201, 213, 51, 55, 198, 204, 201, 185, 220, 199, 202, 216]
[243, 98, 161, 47, 41, 43, 43, 40, 193, 214, 50, 58, 199, 191, 207, 190, 214, 197, 205, 207]
[242, 96, 159, 46, 40, 41, 44, 43, 188, 214, 50, 56, 200, 197, 208, 190, 214, 197, 205, 205]
[243, 99, 164, 47, 40, 41, 44, 41, 189, 213, 49, 55, 201, 198, 208, 192, 215, 198, 204, 205]
[245, 105, 166, 49, 40, 41, 44, 40, 193, 215, 49, 56, 199, 202, 207, 192, 218, 201, 207, 208]
[246, 112, 167, 46, 40, 44, 44, 43, 195, 211, 50, 60, 198, 204, 205, 190, 218, 201, 205, 210]
[245, 112, 166, 47, 40, 43, 44, 43, 199, 214, 49, 59, 202, 205, 205, 188, 219, 201, 208, 210]
[245, 112, 165, 47, 39, 44, 46, 43, 197, 213, 50, 58, 200, 204, 205, 188, 219, 202, 208, 211]
[245, 111, 167, 45, 42, 42, 42, 41, 199, 213, 50, 59, 200, 203, 210, 187, 219, 200, 208, 211]
[245, 115, 167, 48, 41, 42, 44, 42, 199, 212, 51, 62, 200, 205, 206, 187, 219, 202, 209, 211]
[244, 122, 168, 47, 42, 41, 45, 41, 199, 212, 51, 63, 200, 203, 206, 185, 219, 200, 208, 212]
[247, 99, 160, 45, 40, 42, 48, 43, 200, 211, 51, 55, 197, 202, 199, 184, 220, 199, 204, 214]
[244, 121, 168, 50, 39, 42, 43, 40, 202, 211, 50, 63, 200, 199, 208, 183, 220, 203, 208, 212]
[245, 121, 167, 50, 40, 42, 43, 40, 200, 212, 50, 63, 202, 197, 208, 186, 220, 203, 208, 215]
[245, 119, 165, 48, 40, 42, 45, 42, 199, 212, 50, 62, 202, 194, 206, 186, 218, 200, 209, 211]
[245, 124, 165, 47, 37, 42, 45, 40, 202, 211, 50, 63, 202, 194, 206, 185, 217, 199, 209, 215]
[244, 129, 168, 47, 39, 40, 45, 42, 208, 209, 50, 71, 202, 197, 206, 187, 214, 199, 209, 215]
[244, 134, 173, 50, 39, 42, 45, 42, 208, 209, 51, 80, 202, 199, 206, 189, 209, 199, 208, 217]
[243, 140, 176, 54, 39, 40, 45, 40, 209, 211, 50, 83, 205, 201, 208, 189, 205, 198, 212, 220]
[242, 142, 177, 65, 39, 42, 44, 40, 215, 212, 50, 97, 201, 203, 206, 189, 203, 200, 211, 220]
[241, 147, 182, 74, 39, 42, 44, 41, 212, 214, 51, 106, 204, 203, 209, 191, 200, 195, 209, 221]
[241, 151, 182, 78, 39, 39, 45, 42, 212, 214, 51, 108, 206, 203, 206, 192, 197, 194, 206, 220]
[246, 99, 159, 46, 38, 41, 46, 41, 197, 213, 50, 54, 199, 203, 202, 182, 222, 197, 200, 213]
[239, 151, 183, 81, 37, 42, 45, 40, 212, 211, 50, 112, 206, 203, 206, 191, 194, 197, 209, 220]
[238, 149, 185, 78, 41, 41, 47, 41, 211, 211, 51, 111, 207, 204, 206, 192, 192, 195, 207, 220]
[238, 147, 182, 74, 39, 39, 45, 44, 211, 211, 51, 107, 209, 206, 207, 192, 195, 194, 207, 221]
[241, 146, 183, 73, 39, 41, 45, 44, 212, 211, 51, 107, 208, 206, 206, 191, 197, 195, 209, 221]
[237, 147, 182, 80, 41, 41, 45, 44, 210, 212, 51, 108, 209, 209, 209, 195, 195, 197, 210, 221]
[240, 150, 183, 85, 41, 42, 45, 42, 210, 213, 53, 112, 210, 207, 209, 194, 197, 195, 209, 220]
[241, 150, 180, 83, 36, 44, 45, 40, 210, 213, 51, 112, 207, 204, 207, 192, 195, 192, 207, 219]
[239, 149, 180, 83, 42, 42, 46, 42, 210, 213, 51, 109, 209, 204, 209, 192, 198, 190, 209, 221]
[239, 149, 183, 84, 40, 42, 46, 40, 208, 213, 51, 111, 210, 204, 208, 192, 198, 187, 208, 219]
[238, 151, 184, 87, 40, 42, 43, 43, 208, 213, 51, 113, 213, 205, 208, 190, 195, 190, 208, 220]
[246, 99, 158, 44, 38, 40, 44, 41, 196, 213, 54, 55, 199, 205, 200, 182, 220, 196, 200, 213]
[246, 98, 156, 46, 40, 41, 47, 44, 197, 214, 51, 55, 199, 210, 202, 184, 220, 196, 199, 211]
[245, 96, 158, 44, 40, 41, 46, 43, 194, 213, 54, 55, 202, 205, 202, 187, 220, 199, 202, 210]
[244, 92, 157, 44, 38, 40, 46, 41, 191, 213, 51, 54, 199, 205, 205, 187, 219, 198, 202, 208]
[244, 92, 157, 46, 38, 40, 44, 44, 191, 213, 51, 54, 198, 205, 207, 188, 219, 199, 202, 208]
</code></pre>
<p>These results are coming from features extracted from my images; not sure if it's relevant, but here's my actual code:</p>
<pre><code>for img1 in imgs1:
img_feats = []
for coords in coord_list:
std_vec = np.std(img_feats[0])
img_feats.append(img1[coords[0], coords[1]]) # extracts the features that composes my matrix
print(img_feats)
</code></pre>
<p>The output should look like this [1.879 3.0284 5.9333 2.0156 2.2467 2.0092 4.7983 4.3554 3.6372 1.3159 2.6174 2.2336 0.9625 5.6285 5.4040 2.7887 0 3.4632 0 2.7370]</p> | <p>I am not sure what you are really asking, but if your array is <code>foo</code>, then</p>
<pre><code>np.std(foo, axis=0)
</code></pre>
<p>Will compute the standard deviations of all the columns.</p> | python|arrays|python-3.x|numpy|statistics | 1 |
594 | 62,419,661 | Pandas style applymap highlight duplicates with lambda function | <p>I have a Pandas dataframe and am working in a Jupyter notebook. I want to highlight rows in which column pairs are duplicated. Here is an example:</p>
<pre><code>colA = list(range(1,6))
colB = ['aa', 'bb', 'aa', 'cc', 'aa']
colC = [14,3,14,9,12]
colD = [108, 2001, 152, 696, 696]
df = pd.DataFrame(list(zip(colA, colB, colC, colD)), columns =['colA', 'colB', 'colC', 'colD'])
display(df)
</code></pre>
<p><a href="https://i.stack.imgur.com/L4mze.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/L4mze.png" alt="enter image description here"></a></p>
<p>I want to highlight these rows, because the values in colB and colC are duplicates:</p>
<p><a href="https://i.stack.imgur.com/NHceY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NHceY.png" alt="enter image description here"></a></p>
<p>I am trying this lambda function, but it throws an error (and it's only for one column):</p>
<pre><code>df.style.applymap(lambda x: 'background-color : yellow' if x[colB].duplicated(keep=False) else '')
TypeError: ("'int' object is not subscriptable", 'occurred at index colA')
</code></pre>
<p>Thanks for any help</p> | <p>Personally, I would break the problem into two steps rather than use one complicated lambda function. We can find the index of all the duplicate rows, then highlight the rows by index number. Also don't forget that in your lambda function, you should use a list comprehension in what you are returning.</p>
<pre><code>rows_series = df[['colB','colC']].duplicated(keep=False)
rows = rows_series[rows_series].index.values
df.style.apply(lambda x: ['background: yellow' if x.name in rows else '' for i in x], axis=1)
</code></pre>
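<p>A quick self-check on the question's frame shows which rows get flagged; here the style strings are computed directly, without rendering:</p>

```python
import pandas as pd

colA = list(range(1, 6))
colB = ['aa', 'bb', 'aa', 'cc', 'aa']
colC = [14, 3, 14, 9, 12]
colD = [108, 2001, 152, 696, 696]
df = pd.DataFrame(list(zip(colA, colB, colC, colD)),
                  columns=['colA', 'colB', 'colC', 'colD'])

rows_series = df[['colB', 'colC']].duplicated(keep=False)
rows = rows_series[rows_series].index.values     # rows 0 and 2 share ('aa', 14)
highlight = ['background: yellow' if i in rows else '' for i in df.index]
```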
<p><a href="https://i.stack.imgur.com/sH0eU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sH0eU.png" alt="enter image description here"></a></p> | python-3.x|pandas|jupyter-notebook|duplicates|highlight | 4 |
595 | 62,109,730 | Sorting columns having lists with reference to each other in pandas | <p>Suppose my data frame is like:</p>
<pre><code> A B Date
[1,3,2] ['a','b','c'] date1
</code></pre>
<p>I want to sort both the columns but with reference to each other. Like the output should be:</p>
<pre><code> A B Date
[1,2,3] ['a','c','b'] date1
</code></pre>
<p>Now if these had been two lists only I would have sorted through <a href="https://stackoverflow.com/questions/9764298/how-to-sort-two-lists-which-reference-each-other-in-the-exact-same-way">zip method</a>.</p>
<p>But as these are columns of a data frame, I am not sure how to use the apply method to sort these with reference to each other.</p>
<p>My dataframe as a whole is sorted on the basis of the third column (date). For each such date, the other two columns each hold a list with the same number of values. I want to sort those lists on the basis of each other.</p> | <p>If all cells have the same number of values, try this flattening and groupby approach:</p>
<pre><code>df
A B
0 [1, 3, 2] [a, b, c]
1 [4, 6, 5] [d, f, e] # added an extra row for demonstration
</code></pre>
<p></p>
<pre><code>(df.apply(pd.Series.explode)
.groupby(level=0)
.apply(lambda x: x.sort_values('A'))
.groupby(level=0)
.agg(list))
A B
0 [1, 2, 3] [a, c, b]
1 [4, 5, 6] [d, e, f]
</code></pre>
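<p>Alternatively, the zip idiom from the linked question works row by row and leaves the <code>Date</code> column untouched. A sketch on the same data:</p>

```python
import pandas as pd

df = pd.DataFrame({'A': [[1, 3, 2], [4, 6, 5]],
                   'B': [['a', 'b', 'c'], ['d', 'f', 'e']],
                   'Date': ['date1', 'date2']})

# sort each row's two lists together, keyed on the values in A
pairs = [sorted(zip(a, b)) for a, b in zip(df['A'], df['B'])]
df['A'] = [[x for x, _ in p] for p in pairs]
df['B'] = [[y for _, y in p] for p in pairs]
```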
<p>Obligatory disclaimer: Please don't use pandas to store lists in columns.</p> | python|pandas | 2 |
596 | 62,110,398 | I can't install tensorflow on win10 | <p>I can't install tensorflow
I ran <code>pip3 install tensorflow --user</code> and the result is:</p>
<blockquote>
<p><code>ERROR: Could not install packages due to an EnvironmentError: [Errno 2] No such file or directory: 'C:\\Users\\kosh9\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python38\\site-packages\\tensorboard_plugin_wit\\_vendor\\tensorflow_serving\\sources\\storage_path\\__pycache__\\file_system_storage_path_source_pb2.cpython-38.pyc'</code></p>
</blockquote>
<p>Namely, I could not install packages due to an Environment Error.</p>
<p>How could I solve this error?</p> | <p>Try installing directly from PyPI.</p>
<p>You're on a Windows system, so try one of these:</p>
<p><code>pip install tensorflow-gpu</code>
or if on an Intel processor
<code>pip install intel-tensorflow</code></p>
<p>If that doesn't work, try grabbing the <code>.whl</code> file directly. For Python 3.8 on Windows that's:
<code>pip install https://storage.googleapis.com/tensorflow/windows/gpu/tensorflow_gpu-2.2.0-cp38-cp38-win_amd64.whl</code></p>
<p><strong>Note:</strong> You don't normally need to use <code>pip3</code> or <code>python3</code> on Windows </p> | python|tensorflow|environment | 0 |
597 | 51,349,186 | ValueError: could not broadcast input array from shape (11253,1) into shape (11253) | <p>I am creating a Neural Network using this <a href="https://machinelearningmastery.com/time-series-prediction-with-deep-learning-in-python-with-keras/" rel="nofollow noreferrer">example</a> and I am getting the error "ValueError: could not broadcast input array from shape (11253,1)" into shape (11253), in the line : <code>trainPredictPlot[look_back:len(trainPredict)+look_back] = trainPredicty</code> My code is:</p>
<pre><code>import csv
import math
import numpy as np
import pandas as pd
from keras.models import Sequential
from keras.layers import Dense
import datetime
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
X1 = [1:16801] #16,800 values
Y1 = [1:16801]#16,800 values
train_size = int(len(X1) * 0.67)
test_size = len(X1) - train_size
train, test = X1[0:train_size,], X1[train_size:len(X1),]
def Data(X1, look_back=1):
dataX, dataY = [], []
for i in range(len(X1)-look_back-1):
a = X1[i:(i+look_back), 0]
dataX.append(a)
dataY.append(Y1[i + look_back, 0])
return numpy.array(dataX), numpy.array(dataY)
look_back = 1
trainX, testX = Data(train, look_back)
testX, testY = Data(test, look_back)
trainPredict = model.predict(trainX)
testPredict = model.predict(testX)
trainPredictPlot = numpy.empty_like(Y1)
trainPredictPlot[look_back:len(trainPredict)+look_back] = trainPredict
testPredictPlot = numpy.empty_like(Y1)
testPredictPlot[len(trainPredict)+(look_back*2)+1:len(X1)-1] = testPredict
</code></pre>
<p>I have 16,800 values for X1 which look like:</p>
<pre><code>[0.03454225 0.02062136 0.00186715 ... 0.92857565 0.64930691 0.20325924]
</code></pre>
<p>And my Y1 data looks like:</p>
<pre><code>[ 2.25226244 1.44078451 0.99174488 ... 12.8397099 9.75722427 7.98525797]
</code></pre>
<p>My traceback error message is:</p>
<pre><code>ValueError Traceback (most recent call last)
<ipython-input-9-e4da8990335b> in <module>()
116 trainPredictPlot = numpy.empty_like(Y1)
117
--> 118 trainPredictPlot[look_back:len(trainPredict)+look_back] = trainPredict
119
120 testPredictPlot = numpy.empty_like(Y1)
ValueError: could not broadcast input array from shape (11253,1) into shape (11253)
</code></pre> | <p>Convert <code>trainPredict</code> from 2D array to 1D vector before assigning:</p>
<pre><code>trainPredictPlot[look_back:len(trainPredict)+look_back] = trainPredict.ravel()
</code></pre> | python|tensorflow|neural-network | 1 |
598 | 51,274,847 | How to optimize chunking of pandas dataframe? | <p>I need to split my dataset into chunks, which i currently do with the following simple code:</p>
<pre><code> cases = []
for i in set(df['key']):
cases.append(df[df['key']==i].copy())
</code></pre>
<p>But my dataset is huge and this ends up taking a couple of hours, so I was wondering if there is a way to maybe use multithreading to accelerate this? Or if there is any other method to make this go faster?</p> | <p>I'm fairly certain you want to group by unique keys; use the built-in functionality to do this. Note that iterating a <code>groupby</code> yields <code>(key, sub-frame)</code> pairs, so take the second element of each pair if you only want the sub-frames.</p>
<pre><code>cases = list(df.groupby('key'))                    # (key, sub-frame) pairs
cases = [group for _, group in df.groupby('key')]  # just the sub-frames
</code></pre> | python|pandas|python-multiprocessing|python-multithreading | 1 |
599 | 51,294,657 | I need to get only the next sequence of variable in timestamp format | <pre><code>#count ,date
98.000000, 2014-03-16
267.000000, 2014-03-23
298.000000, 2014-03-30
313.000000, 2014-04-06
225.000000, 2014-04-13
226.000000, 2014-04-20
</code></pre>
<p>I have two variables: one is a count and the other is a datetime with a weekly sequence in it.</p>
<p>When I append new values to the count variable, I need the datetime variable to be filled in following its sequence in the DataFrame.</p> | <pre><code>import pandas as pd
df = pd.read_csv("ex1.csv",
                 names=['count', 'date'],
                 parse_dates=['date'],
                 index_col='date',
                 comment='#',
                 skipinitialspace=True)
# step forward by the spacing between the last two observations (one week here)
step = df.index[-1] - df.index[-2]
df2 = pd.DataFrame({'count': [50]}, index=[df.index[-1] + step])
df2.index.name = 'date'
dat = pd.concat((df, df2))
print(dat)
</code></pre> | python|python-3.x|pandas|datetime|dataframe | 1 |