Unnamed: 0 (int64, 0 to 378k) | id (int64, 49.9k to 73.8M) | title (string, 15 to 150 chars) | question (string, 37 to 64.2k chars) | answer (string, 37 to 44.1k chars) | tags (string, 5 to 106 chars) | score (int64, -10 to 5.87k) |
---|---|---|---|---|---|---|
1,200 | 58,350,527 | How to get activation values from Tensor for Keras model? | <p>I am trying to access the activation values from my nodes in a layer.</p>
<pre><code>l0_out = model.layers[0].output
print(l0_out)
print(type(l0_out))
</code></pre>
<pre><code>Tensor("fc1_1/Relu:0", shape=(None, 10), dtype=float32)
<class 'tensorflow.python.framework.ops.Tensor'>
</code></pre>
<p>I've tried several different ways of <code>eval()</code> and <code>K.function</code> without success. I've also tried every method in this post <a href="https://stackoverflow.com/questions/41711190/keras-how-to-get-the-output-of-each-layer">Keras, How to get the output of each layer?</a></p>
<p>How can I work with this object?</p>
<hr>
<p><strong>MODEL</strong>
Just using something everyone is familiar with.</p>
<pre><code>import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam
iris_data = load_iris()
x = iris_data.data
y_ = iris_data.target.reshape(-1, 1)
encoder = OneHotEncoder(sparse=False)
y = encoder.fit_transform(y_)
train_x, test_x, train_y, test_y = train_test_split(x, y, test_size=0.20)
model = Sequential()
model.add(Dense(10, input_shape=(4,), activation='relu', name='fc1'))
model.add(Dense(10, activation='relu', name='fc2'))
model.add(Dense(3, activation='softmax', name='output'))
model.compile(optimizer=Adam(lr=0.001), loss='categorical_crossentropy', metrics=['accuracy'])
print(model.summary())
# Train
model.fit(train_x, train_y, verbose=2, batch_size=5, epochs=200)
</code></pre> | <p>Try to use <code>K.function</code> and feed one batch of <code>train_x</code> into the function. </p>
<pre><code>from keras import backend as K
get_relu_output = K.function([model.layers[0].input], [model.layers[0].output])
relu_output = get_relu_output([train_x])
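# relu_output is a list with one array of shape (len(train_x), 10): the ReLU activations of layer fc1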
</code></pre> | tensorflow|keras|tensorflow2.0 | 1 |
1,201 | 58,279,548 | Using split-apply-combine to remove some values with a customized function and combine what's left | <p>So this isn't the dataset I need to work with but it's a template for a huge one I'm working with (~1.8 million data points) for a cancer research project, so I figured if I could get this to work with a smaller one, then I can adapt it for my large one! So as a sample, let's say I have the following data set:</p>
<pre><code>import numpy as np
import pandas as pd
df = pd.DataFrame({
'cond': ['A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'B', 'B','B', 'B', 'B', 'B', 'B','B','B'],
'Array': ['S', 'S', 'TT', 'TT','S', 'S', 'TT', 'TT','S', 'S', 'TT', 'TT','S', 'S', 'TT', 'TT','SS','TT'],
'X': [1, 2, 3, 1, 2 , 3, 4, 7.3, 5.1, 3.2, 1.4, 5.5, 9.9, 3.2, 1.1, 3.3, 1.2, 5.4],
'Y': [3.1, 2.2, 2.1, 1.2, 2.4, 1.2, 1.5, 1.33, 1.5, 1.6, 1.4, 1.3, 0.9, 0.78, 1.2, 4.0, 5.0, 6.0],
'Marker': [2.0, 1.2, 1.2, 2.01, 2.55, 2.05, 1.66, 3.2, 3.21, 3.04, 8.01, 9.1, 7.06, 8.1, 7.9, 5.12, 5.23, 5.15],
'Area': [3.0, 2.0, 2.88, 1.33, 2.44, 1.25, 1.53, 1.0, 0.156, 2.0, 2.4, 6.3, 6.9, 9.78, 10.2, 15.0, 16.0, 19.0]
})
print(df)
</code></pre>
<p>This produces an output that looks like this:</p>
<pre><code> cond Array X Y Marker Area
0 A S 1.0 3.10 2.00 3.000
1 A S 2.0 2.20 1.20 2.000
2 A TT 3.0 2.10 1.20 2.880
3 A TT 1.0 1.20 2.01 1.330
4 A S 2.0 2.40 2.55 2.440
5 A S 3.0 1.20 2.05 1.250
6 A TT 4.0 1.50 1.66 1.530
7 A TT 7.3 1.33 3.20 1.000
8 A S 5.1 1.50 3.21 0.156
9 B S 3.2 1.60 3.04 2.000
10 B TT 1.4 1.40 8.01 2.400
11 B TT 5.5 1.30 9.10 6.300
12 B S 9.9 0.90 7.06 6.900
13 B S 3.2 0.78 8.10 9.780
14 B TT 1.1 1.20 7.90 10.200
15 B TT 3.3 4.00 5.12 15.000
16 B SS 1.2 5.00 5.23 16.000
17 B TT 5.4 6.00 5.15 19.000
</code></pre>
<p>Ok so now what I need to do is to split them based on two labels, "cond" and "Array". I did that using</p>
<pre><code>g=df.groupby(['cond','Array'])['Marker']
</code></pre>
<p>This breaks it into 4 smaller sets split as the pairings A-S, A-TT, B-S, B-TT. Now I have a customized function to work with. This is part of the function and I'll explain how it works:</p>
<pre><code>def num_to_delete(p,alpha,N):
if p==0.950:
if 1-alpha==0.90:
if N<=60:
m=1
if 60<N<80:
m=round(N/20-2)
if 80<=N:
m=2
if 1-alpha==0.95:
if N<=80:
m=1
if 80<N<=100:
m=round(N/20 -3)
if 100<N:
m=2
return m
</code></pre>
<p>Ok so the way it works is that I feed into it a "p" and "alpha" that I pick (the real function covers many more cases of p and alpha). The N that gets fed into it is the number of elements of my smaller data set (in this case for A-S it's 5, for A-TT it's 4, etc.). So what I'm trying to have happen is that, for each smaller data set, it spits out a number of points to delete (in this example, the function will always give us 1, but I'm trying to code this with the function for application to a super large data set). Since it gives the number 1, I then want it to delete the 1 largest data point for that set, and tell me what the highest point left is.</p>
<p>So as an example, for the A-S coupling, I have 5 data points: 2.0, 1.2, 2.55, 2.05, and 3.21. Since there's 5 data points, my function tells me to delete 1 of them, so ignore the 3.21, and tell me what's the highest data point left which in this case is 2.55. I want to do this for each coupling, but in my real data set, I will have different numbers of elements so the function will tell me to delete a different number for each coupling.</p>
<p>My ultimate goal is to have a final table that looks like this:</p>
<pre><code> cond Array NumDeleted p95/a05 p95/a10
0 A S 1.0 2.55 2.55
1 A TT 1.0 2.01 2.01
2 B S 1.0 7.06 7.06
3 B TT 1.0 8.01 8.01
</code></pre>
<p>For the larger set, the values in the last 2 columns will be different because in the large data set, there's a lot more difference in the number of values that will be deleted, and hence the remaining values will differ. I will eventually need to alter a second dataset based on the values I get for p95/a05 and p95/a10</p>
<p>Anyway, I'm sorry that was such a long explanation, but if anyone can help, that would be amazing! I'm hoping it's a rather simple thing to do since I've been stuck on this for over a week now.</p> | <p>EDIT: more general solution</p>
<p>First, it would help to make a <code>closure</code> to define your configurations. This is under the assumption that you will have more configurations in the future:</p>
<pre><code>def create_num_to_delete(p, alpha):
"""Create a num_to_delete function given p and alpha."""
def num_to_delete(N):
if p == 0.950:
if 1 - alpha == 0.90:
if N <= 60:
m = 1
if 60 < N < 80:
m = round(N/20 - 2)
if 80 <= N:
m = 2
if 1-alpha == 0.95:
if N <= 80:
m = 1
if 80 < N <= 100:
m = round(N/20 -3)
if 100 < N:
m = 2
return m
return num_to_delete
</code></pre>
<p>You can then use this closure to define a dictionary of configurations:</p>
<pre><code>configurations = {
'p95/a05': create_num_to_delete(0.95, 0.05),
'p95/a10': create_num_to_delete(0.95, 0.10),
}
</code></pre>
<p>Then, define a function that summarizes your data. This function should rely on your configuration so that it remains dynamic.</p>
<pre><code>def summarize(x):
# The syntax on the right-hand side is called list comprehension.
# As you can probably guess, it's essentially a flattened for-loop that
# produces a list. The syntax starting with "for" is your basic for loop
# statement, and the syntax to the left of "for" is an expression that
    # serves as the value of the resulting list for each iteration
# of the loop.
#
# Here, we are looping through the "num_to_delete" functions we defined in
# our `configurations` dictionary. And calling it in our group `x`.
Ns = [num_to_delete(len(x)) for num_to_delete in configurations.values()]
markers = x['Marker'].sort_values(ascending=False)
highest_markers = []
for N in Ns:
if N == len(x):
highest_markers.append(None)
else:
# Since we know that `markers` is already sorted in descending
# order, all we need to get the highest remaining value is to get
# the value in the *complete list* of values offset by the
# the number of values that need to be deleted (this is `N`).
#
# Since sequences are 0-indexed, simply indexing by `N` is enough.
# For example, if `N` is 1, indexing by `N` would give us
# the marker value *indexed by* 1, which is,
# in a 0-sequenced index, simply the second value.
highest_markers.append(markers.iloc[N])
    # Returning a Series from an applied groupby function translates into
    # a DataFrame with the series index as the columns and the series values
# as the row values. Index in this case is just the list of configuration
# names we have in the `configurations` dictionary.
return pd.Series(highest_markers, index=list(configurations.keys()))
</code></pre>
<p>Lastly, <code>apply</code> the function to your data set and reset the index. This keeps <code>cond</code> and <code>Array</code> as columns:</p>
<pre><code>grouped = df.groupby(['cond', 'Array'])
grouped.apply(summarize).reset_index()
</code></pre>
<p>Output is:</p>
<pre><code> cond Array p95/a05 p95/a10
0 A S 2.55 2.55
1 A TT 2.01 2.01
2 B S 7.06 7.06
3 B SS NaN NaN
4 B TT 8.01 8.01
</code></pre>
<p>Hope this helps.</p> | python|pandas|split-apply-combine | 2 |
1,202 | 69,276,635 | Pandas MultiIndex Dataframe Styling error when writing to Excel | <p>I am trying to write a multi-index data frame to excel using pandas styling and I am getting an error.</p>
<pre><code>import pandas as pd
import numpy as np
df=pd.DataFrame(np.random.randn(9,4), pd.MultiIndex.from_product([['A', 'B','C'], ['r1', 'r2','r3']]), columns=[['E1','E1','E2','E2'],['d1','d2','d1','d2']])
def highlight_max(s, props=''):
return np.where(s == np.nanmax(s.values), props, '')
def highlight_all_by_condition (value, condition, props=''):
return np.where(value >= condition, props, '')
def highlight_max_value_by_condition(value, condition, props=''):
return np.where(np.nanmax(value) >= condition, props, '')
df_formatted = df.style.set_properties(**{'font-family': 'Arial','font-size': '10pt'})
unique_column_list = list(set(df.columns.get_level_values(0)))
idx = pd.IndexSlice
for each in unique_column_list:
slice_=idx[idx[each]]
df_formatted = df_formatted.apply(highlight_max, props='color:black; font-weight: bold', axis=1, subset=slice_)\
.apply(highlight_all_by_condition, condition = 0.55, props='color:red;font-weight: bold; background-color: #ffe6e6', axis=1, subset=slice_)\
.apply(highlight_max_value_by_condition, condition = 1, props='color:green;font-weight: bold; background-color: #ffff33', axis=1, subset=slice_)
df_formatted.to_excel("test.xlsx", engine = 'openpyxl')
</code></pre>
<p>I am getting the following error:</p>
<pre><code>ValueError: Function <function highlight_max_value_by_condition at 0x000001EE1394E940> returned the wrong shape.
Result has shape: (9,)
Expected shape: (9, 2)
</code></pre>
<p>The second styling function (highlight_max_value_by_condition) is a conditional styling, where it needs to highlight the max value only if it satisfies the condition and if I remove that styling function, then I don't get any error.</p>
<p>Any help is much appreciated. Thanks in advance.</p> | <p>Assuming <code>highlight_max_value_by_condition</code> is meant to apply styles to cells which are both the max in the subset and fulfill the condition, we can add an <code>&</code> to combine the conditions:</p>
<pre><code>def highlight_max_value_by_condition(value, condition, props=''):
return np.where(
(value == np.nanmax(value)) & (value >= condition),
props,
''
)
</code></pre>
<hr />
<p>Beyond that, however, there are quite a few things we can do to clean up the general approach.</p>
<p><code>Styler</code> objects compound naturally, so there is no need to assign back. In addition, instead of using <code>list(set(</code> to get the level values, <a href="https://pandas.pydata.org/docs/reference/api/pandas.MultiIndex.levels.html" rel="nofollow noreferrer">MultiIndex.levels</a> will already provide the unique values for each level. Furthermore, since we're working with the top-most level we don't need <code>pd.IndexSlice</code>, since column access by a top-level MultiIndex key will provide all child columns.</p>
<p>All this together means that <code>df_formatted</code> can be built like:</p>
<pre><code>df_formatted = df.style.set_properties(**{
'font-family': 'Arial',
'font-size': '10pt'
})
for slice_ in df.columns.levels[0]:
df_formatted.apply(
highlight_max,
props='color:black; font-weight: bold',
axis=1, subset=slice_
).apply(
highlight_all_by_condition, condition=0.55,
props='color:red;font-weight: bold; background-color: #ffe6e6',
axis=1, subset=slice_
).apply(
highlight_max_value_by_condition, condition=1,
props='color:green;font-weight: bold; background-color: #ffff33',
axis=1, subset=slice_
)
</code></pre>
<p><a href="https://i.stack.imgur.com/nNUFc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nNUFc.png" alt="styled table" /></a></p>
<hr />
<p>Setup made reproducible with seed(6) and with modified function</p>
<pre><code>import numpy as np
import pandas as pd
np.random.seed(6)
df = pd.DataFrame(
np.random.randn(9, 4),
pd.MultiIndex.from_product([['A', 'B', 'C'], ['r1', 'r2', 'r3']]),
columns=[['E1', 'E1', 'E2', 'E2'], ['d1', 'd2', 'd1', 'd2']]
)
def highlight_max(s, props=''):
return np.where(s == np.nanmax(s.values), props, '')
def highlight_all_by_condition(value, condition, props=''):
return np.where(value >= condition, props, '')
def highlight_max_value_by_condition(value, condition, props=''):
return np.where(
(value == np.nanmax(value)) & (value >= condition),
props,
''
)
</code></pre>
<p><code>df</code>:</p>
<pre><code> E1 E2
d1 d2 d1 d2
A r1 -0.311784 0.729004 0.217821 -0.899092
r2 -2.486781 0.913252 1.127064 -1.514093
r3 1.639291 -0.429894 2.631281 0.601822
B r1 -0.335882 1.237738 0.111128 0.129151
r2 0.076128 -0.155128 0.634225 0.810655
r3 0.354809 1.812590 -1.356476 -0.463632
C r1 0.824654 -1.176431 1.564490 0.712705
r2 -0.181007 0.534200 -0.586613 -1.481853
r3 0.857248 0.943099 0.114441 -0.021957
</code></pre> | pandas|multi-index|pandas-styles | 0 |
1,203 | 61,108,307 | unable to read csv file on jupyter notebook | <pre><code>import pandas as pd
import os
df=pd.read_csv(r"C:/Users/tom/Desktop/misc/number-of-motor-vehicles-2018-census-csv.csv")
df
</code></pre>
<p>The above 4 lines are my code and I'm getting the error shown below.</p>
<pre><code>FileNotFoundError: [Errno 2] File C:/Users/tom/Desktop/misc/number-of-motor-vehicles-2018-census-
csv.csv does not exist: 'C:/Users/tom/Desktop/misc/number-of-motor-vehicles-2018-census-csv.csv'
</code></pre>
<p>I tried removing the "r", I tried forward and back slashes, and single and double quotes... please help me out.</p>
<p>You could try</p>
<ul>
<li>go to the “View” tab on the ribbon in Windows Explorer and activate the “File name extensions” box in the Show/hide section</li>
<li>hit Win+R, type 'cmd' and try <code>Dir C:\Users\tom\Desktop\misc\</code></li>
<li>use the function <code>os.listdir(r'C:\Users\tom\Desktop\misc')</code> in Python</li>
</ul> | python|pandas|csv | 1 |
1,204 | 60,915,294 | How to plot graph where the indexes are strings | <p>I'm running with <code>python 3.7.6</code> and I have the following <code>dataframe</code>:</p>
<pre><code> col_1 col_2 col_3 col_4
GP 1 1 1 1
MIN 1 1 1 1
PTS 1 1 1 1
FGM 1 1 0 1
FGA 0 1 0 0
FG% 0 1 1 1
3P Made 0 1 1 0
AST 0 1 1 0
STL 0 1 0 0
BLK 0 1 1 0
TOV 0 0 1 0
</code></pre>
<p>I want to plot the <code>dataframe</code> as <code>scatter plot or other (dot's plot)</code> where:</p>
<p>X axis - dataframe indexes</p>
<p>Y axis - dataframe columns</p>
<p>Points on the graph are drawn according to the values in the <code>dataframe</code> (1 - show on the graph, 0 - don't).</p>
<p>How can I do it ?</p> | <p>I'm not sure about scatter, but you can use <code>imshow</code> to display the binary values:</p>
<pre><code>fig, ax = plt.subplots()
ax.imshow(df, cmap='gray')
ax.set_xticks(range(df.shape[1]))
ax.set_xticklabels(df.columns)
ax.set_yticks(range(df.shape[0]))
ax.set_yticklabels(df.index)
plt.show()
</code></pre>
<p>Output:</p>
<p><a href="https://i.stack.imgur.com/2m7zV.png" rel="noreferrer"><img src="https://i.stack.imgur.com/2m7zV.png" alt="enter image description here"></a></p>
<hr>
<p><strong>Update</strong>: scatter also possible:</p>
<pre><code>plt.scatter(*np.where(df.T))
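# np.where(df.T) gives the (column, row) positions of the 1-valued cells, which scatter uses as (x, y)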
plt.xticks(range(df.shape[1]), df.columns)
plt.yticks(range(df.shape[0]), df.index)
plt.show()
</code></pre>
<p>Output:</p>
<p><a href="https://i.stack.imgur.com/87cPC.png" rel="noreferrer"><img src="https://i.stack.imgur.com/87cPC.png" alt="enter image description here"></a></p> | python|pandas | 5 |
1,205 | 61,152,043 | Python: Calculate cumulative amount in Pandas dataframe over a period of time | <p>Objective: Calculate cumulative revenue since 2020-01-01. </p>
<p>I have a python dictionary as shown below</p>
<pre><code>data = [{"game_id":"Racing","user_id":"ABC123","amt":5,"date":"2020-01-01"},
{"game_id":"Racing","user_id":"ABC123","amt":1,"date":"2020-01-04"},
{"game_id":"Racing","user_id":"CDE123","amt":1,"date":"2020-01-04"},
{"game_id":"DH","user_id":"CDE123","amt":100,"date":"2020-01-03"},
{"game_id":"DH","user_id":"CDE456","amt":10,"date":"2020-01-02"},
{"game_id":"DH","user_id":"CDE789","amt":5,"date":"2020-01-02"},
{"game_id":"DH","user_id":"CDE456","amt":1,"date":"2020-01-03"},
{"game_id":"DH","user_id":"CDE456","amt":1,"date":"2020-01-03"}]
</code></pre>
<p>The same dictionary above looks like this as a table</p>
<pre><code> game_id user_id amt activity date
'Racing', 'ABC123', 5, '2020-01-01'
'Racing', 'ABC123', 1, '2020-01-04'
'Racing', 'CDE123', 1, '2020-01-04'
'DH', 'CDE123', 100, '2020-01-03'
'DH', 'CDE456', 10, '2020-01-02'
'DH', ' CDE789', 5, '2020-01-02'
'DH', 'CDE456', 1, '2020-01-03'
'DH', 'CDE456', 1, '2020-01-03'
</code></pre>
<p>Age is calculated as the difference between transaction date and 2020-01-01. Total Payer count is number of payers in each game.</p>
<p>I'm trying to create a dataframe holding the cumulative results for each day from the first transaction date to the last. E.g. for game_id Racing we start with an amount of 5 on 2020-01-01, so Age is 0. On 2020-01-02 the amount is still 5 because we don't have a transaction on that day. On 2020-01-03 the amount is still 5, but on 2020-01-04 the amount is 7 because we have 2 transactions on this day.</p>
<p><strong>Expected output</strong></p>
<pre><code>Game Age Cum_rev Total_unique_payers_per_game
Racing 0 5 2
Racing 1 5 2
Racing 2 5 2
Racing 3 7 2
DH 0 0 3
DH 1 15 3
DH 2 117 3
DH 3 117 3
</code></pre>
<p>How can I use window functions in Python like we do in SQL? Is there any better approach to solve this problem?</p> | <p>Here the very complicated part is filling in the dates. I used an apply, but I'm not sure this is the best way.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
data = [{"game_id":"Racing","user_id":"ABC123","amt":5,"date":"2020-01-01"},
{"game_id":"Racing","user_id":"ABC123","amt":1,"date":"2020-01-04"},
{"game_id":"Racing","user_id":"CDE123","amt":1,"date":"2020-01-04"},
{"game_id":"DH","user_id":"CDE123","amt":100,"date":"2020-01-03"},
{"game_id":"DH","user_id":"CDE456","amt":10,"date":"2020-01-02"},
{"game_id":"DH","user_id":"CDE789","amt":5,"date":"2020-01-02"},
{"game_id":"DH","user_id":"CDE456","amt":1,"date":"2020-01-03"},
{"game_id":"DH","user_id":"CDE456","amt":1,"date":"2020-01-03"}]
df = pd.DataFrame(data)
# we want datetime not object
df["date"] = df["date"].astype("M8[us]")
# we will need to merge this at the end
grp = df.groupby("game_id")['user_id']\
.nunique()\
.reset_index(name="Total_unique_payers_per_game")
# sum amt per game_id date
df = df.groupby(["game_id", "date"])["amt"].sum().reset_index()
# dates from 2020-01-01 till the max date in df
dates = pd.DataFrame({"date": pd.date_range("2020-01-01", df["date"].max())})
# add missing dates
def expand_dates(x):
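    # left-merge this game's rows onto the full date range so days without transactions get amt = 0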
x = pd.merge(dates, x.drop("game_id", axis=1), how="left")
x["amt"] = x["amt"].fillna(0)
return x
df = df.groupby("game_id")\
.apply(expand_dates)\
.reset_index().drop("level_1", axis=1)
df["Cum_rev"] = df.groupby("game_id")['amt'].transform("cumsum")
# this is equivalent as long as data is sorted
# df["Cum_rev"] = df.groupby("game_id")['amt'].cumsum()
# merge unique payers per game
df = pd.merge(df, grp, how="left")
# dates difference
df["Age"] = "2020-01-01"
df["Age"] = df["Age"].astype("M8[us]")
df["Age"] = (df["date"]-df["Age"]).dt.days
# then you can eventually filter
df = df[["game_id", "Age",
"Cum_rev", "Total_unique_payers_per_game"]]\
.rename(columns={"game_id":"Game"})
</code></pre> | python|pandas|numpy|dictionary | 1 |
1,206 | 71,533,791 | How to create a scatterplot of data using `matplotlib.pyplot.scatter` | <p>I have a problem with <code>matplotlib.pyplot.scatter</code>.</p>
<p>Firstly, I need to download the Iris classification data and add the column headers.</p>
<pre class="lang-py prettyprint-override"><code> import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
plt.style.use('seaborn')
%matplotlib inline
df = pd.read_csv('http://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data', header = None)
df_names = ['sepal length in cm', 'sepal width in cm', 'petal length in cm', 'petal width in cm', 'class']
df.columns = df_names
df
</code></pre>
<p>Secondly, I should create a scatterplot of the data using <code>matplotlib.pyplot.scatter</code> in the following manner:</p>
<pre><code> * for x and y coordinates use sepal length and width respectively
* for size use the petal length
* for alpha (opacity/transparency) use the petal width
* illustrate iris belonging to each class by using 3 distinct colours (RGB for instance, but be creative if you want)
* *some columns will need to be scaled, to be passed as parameters; you might also want to scale some other columns to increase the readability of the illustration.
</code></pre>
<p>Then, I found this site: <a href="https://www.geeksforgeeks.org/matplotlib-pyplot-scatter-in-python/" rel="nofollow noreferrer">https://www.geeksforgeeks.org/matplotlib-pyplot-scatter-in-python/</a></p>
<p>After that, I uses their draft for my tasks:</p>
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
plt.style.use('seaborn')
%matplotlib inline
# dataset-df
x1 = [4.3, 7.9, 5.84, 0.83, 0.7826]
y1 = [2.0, 4.4, 3.05, 0.43, -0.4194]
plt.scatter(x1, y1, c ="red",
alpha = 1.0, 6.9, 3.76, 1.76, 0.9490,
linewidth = 2,
marker ="s",
s = [1.0, 6.9, 3.76, 1.76, 0.9490])
plt.xlabel("X-axis")
plt.ylabel("Y-axis")
plt.show()
</code></pre>
<p>However, I always get this error:</p>
<pre><code>File "C:\Users\felix\AppData\Local\Temp/ipykernel_32284/4113309647.py", line 21
s = [1.0, 6.9, 3.76, 1.76, 0.9490])
^
SyntaxError: positional argument follows keyword argument
</code></pre>
<p>Could you advise me on how to sort out this problem and complete my task?</p>
<p>In addition, I copied the data from <code>iris.names</code>:</p>
<pre><code>1. Title: Iris Plants Database
Updated Sept 21 by C.Blake - Added discrepency information
2. Sources:
(a) Creator: R.A. Fisher
(b) Donor: Michael Marshall (MARSHALL%[email protected])
(c) Date: July, 1988
3. Past Usage:
- Publications: too many to mention!!! Here are a few.
1. Fisher,R.A. "The use of multiple measurements in taxonomic problems"
Annual Eugenics, 7, Part II, 179-188 (1936); also in "Contributions
to Mathematical Statistics" (John Wiley, NY, 1950).
2. Duda,R.O., & Hart,P.E. (1973) Pattern Classification and Scene Analysis.
(Q327.D83) John Wiley & Sons. ISBN 0-471-22361-1. See page 218.
3. Dasarathy, B.V. (1980) "Nosing Around the Neighborhood: A New System
Structure and Classification Rule for Recognition in Partially Exposed
Environments". IEEE Transactions on Pattern Analysis and Machine
Intelligence, Vol. PAMI-2, No. 1, 67-71.
-- Results:
-- very low misclassification rates (0% for the setosa class)
4. Gates, G.W. (1972) "The Reduced Nearest Neighbor Rule". IEEE
Transactions on Information Theory, May 1972, 431-433.
-- Results:
-- very low misclassification rates again
5. See also: 1988 MLC Proceedings, 54-64. Cheeseman et al's AUTOCLASS II
conceptual clustering system finds 3 classes in the data.
4. Relevant Information:
--- This is perhaps the best known database to be found in the pattern
recognition literature. Fisher's paper is a classic in the field
and is referenced frequently to this day. (See Duda & Hart, for
example.) The data set contains 3 classes of 50 instances each,
where each class refers to a type of iris plant. One class is
linearly separable from the other 2; the latter are NOT linearly
separable from each other.
--- Predicted attribute: class of iris plant.
--- This is an exceedingly simple domain.
--- This data differs from the data presented in Fishers article
(identified by Steve Chadwick, [email protected] )
The 35th sample should be: 4.9,3.1,1.5,0.2,"Iris-setosa"
where the error is in the fourth feature.
The 38th sample: 4.9,3.6,1.4,0.1,"Iris-setosa"
where the errors are in the second and third features.
5. Number of Instances: 150 (50 in each of three classes)
6. Number of Attributes: 4 numeric, predictive attributes and the class
7. Attribute Information:
1. sepal length in cm
2. sepal width in cm
3. petal length in cm
4. petal width in cm
5. class:
-- Iris Setosa
-- Iris Versicolour
-- Iris Virginica
8. Missing Attribute Values: None
Summary Statistics:
Min Max Mean SD Class Correlation
sepal length: 4.3 7.9 5.84 0.83 0.7826
sepal width: 2.0 4.4 3.05 0.43 -0.4194
petal length: 1.0 6.9 3.76 1.76 0.9490 (high!)
petal width: 0.1 2.5 1.20 0.76 0.9565 (high!)
9. Class Distribution: 33.3% for each of 3 classes.
</code></pre> | <p>There is no problem with the <code>iris</code> dataset, just with the part where you defined the <code>alpha</code> argument in the scatter function. You should change the way you assign values to the arguments:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
plt.style.use('seaborn')
%matplotlib inline
# dataset-df
x1 = [4.3, 7.9, 5.84, 0.83, 0.7826]
y1 = [2.0, 4.4, 3.05, 0.43, -0.4194]
plt.scatter(x1, y1, c ="red",alpha = 1,
linewidth = 2,
marker ="s",
s = [1.0, 6.9, 3.76, 1.76, 0.9490])
plt.xlabel("X-axis")
plt.ylabel("Y-axis")
plt.show()
</code></pre>
<p>Note that <code>alpha</code> takes just one number, which might be <code>0.9</code>, <code>0.8</code> or even <code>0.823425</code>, not a list or anything else.</p>
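<p>For the class-colouring part of the original task, a minimal sketch could look like the following (assuming the class labels in the CSV are <code>Iris-setosa</code>, <code>Iris-versicolor</code> and <code>Iris-virginica</code>, and using an arbitrary scaling factor for the marker sizes; the Output screenshot below shows the result of the corrected call above, not of this sketch):</p>
<pre><code>colors = {'Iris-setosa': 'red', 'Iris-versicolor': 'green', 'Iris-virginica': 'blue'}
plt.scatter(df['sepal length in cm'], df['sepal width in cm'],
            c=df['class'].map(colors),        # one colour per iris class
            s=df['petal length in cm'] * 20,  # petal length scaled up for marker size
            alpha=0.6)                        # a single opacity value, as noted above
plt.xlabel('sepal length in cm')
plt.ylabel('sepal width in cm')
plt.show()
</code></pre>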
<h4>Output</h4>
<p><a href="https://i.stack.imgur.com/JFRaB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JFRaB.png" alt="Your desired output" /></a></p> | python|python-3.x|pandas|matplotlib | 1 |
1,207 | 71,766,800 | Select all rows between specific row values in a within columns | <p>I'm trying to select all the rows of data between the rows with the values E01032739 and E01033708; does anyone know how to do this? I'm trying to do this so I can count the number of casualties between these area codes.</p>
<p>At the minute I can find all of the data for each of these values, but I cannot modify the code to get everything in between. This is what I'm using:</p>
<pre><code> accidents.loc[accidents['LSOA_of_Accident_Location'] == 'E01032739']
accidents.loc[accidents['LSOA_of_Accident_Location'] == 'E01033708']
</code></pre>
<p>Data snippet here if needed;</p>
<pre><code>Accident_Index Number_of_Casualties LSOA_of_Accident_Location
97459 34 E01032739
97461 32 E01033708
97762 12 E01033708
</code></pre> | <p>Is this what you are looking for ?</p>
<pre><code>accidents[(accidents['LSOA_of_Accident_Location'] >= 'E01032739')&(accidents['LSOA_of_Accident_Location'] <= 'E01033708')]
</code></pre> | python|pandas|dataframe | 1 |
1,208 | 71,557,950 | Why are non-trainable parameters zero in model's summary, despite loading the weights of the model? | <p>I used the command</p>
<pre><code>torch.save(model.state_dict(), 'model.pth')
</code></pre>
<p>to save the parameters after training the model.
But, when I use the commands</p>
<pre><code>model = EfficientNetModel()
MODEL_PATH = 'model.pth'
model.load_state_dict(torch.load(MODEL_PATH, map_location=map_location))
model.eval()
summary(model,(1,224,224) )
</code></pre>
<p>to load the pre-trained weights, the number of non-trainable parameters is 0, as per the attached screenshot.
<a href="https://i.stack.imgur.com/47abG.png" rel="nofollow noreferrer">screenshot</a></p>
<p>Why is it happening and how can I rectify this?</p>
<p>Thank You</p> | <p>Saving and loading weights back onto a model doesn't affect whether they are <em>trainable</em> or not. Non-trainable params contain tensors that do not require gradient computation, and as such won't get modified by your optimizer during training.</p> | deep-learning|pytorch|computer-vision|transfer-learning|pre-trained-model | 0 |
1,209 | 71,574,827 | How to print the layers of the tensorflow 2 saved_model | <p>I am using tensorflow 2.6.2 and I downloaded the model from the <a href="https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md" rel="nofollow noreferrer">Tensorflow 2 Model zoo</a>.
I am able to load the model using this:</p>
<pre><code>import tensorflow as tf
if __name__ == "__main__":
try:
model = tf.saved_model.load("/home/user/git/models_zoo/ssd_mobilenet_v2_320x320_coco17_tpu-8/saved_model/")
</code></pre>
<p>But unfortunately I am not able to see all the layers of the model using the below</p>
<pre><code>for v in model.trainable_variables:
print(v.name)
</code></pre>
<p>which should ideally print all the layers in the network, but I am getting the following error</p>
<pre><code> print(model.trainable_variables)
AttributeError: '_UserObject' object has no attribute 'trainable_variables'
</code></pre>
<p>Can someone please tell me what I am doing wrong here?</p> | <p>I was able to print the variables using this:</p>
<pre><code> loaded = tf.saved_model.load("/home/user/git/models_zoo/ssd_mobilenet_v2_320x320_coco17_tpu-8/saved_model/")
infer = loaded.signatures["serving_default"]
for v in infer.trainable_variables:
print(v.name)
</code></pre> | tensorflow|deep-learning|tensorflow2.0 | 0 |
1,210 | 69,936,753 | Using Pandas Count number of Cells in Column that is within a given radius | <p>To set up the question. I have a dataframe containing spots and their x,y positions. I want to iterate over each spot and check all other spots to see if they are within a radius. I then want to count the number of spots within the radius in a new column of the dataframe. I would like to iterate over the index as I have a decent understanding on how that works. I know that I am missing something simple but I have not been able to find a solution that works for me yet. Thank you in advance!</p>
<pre><code>radius = 3
df = pd.DataFrame({'spot_id':[1,2,3,4,5],'x_pos':[5,4,10,3,8],'y_pos':[4,10,8,6,3]})
spot_id x_pos y_pos
0 1 5 4
1 2 4 10
2 3 10 8
3 4 3 6
4 5 8 3
</code></pre>
<p>I then want to get something that looks like this</p>
<pre><code>spot_id x_pos y_pos spots_within_radius
0 1 5 4 1
1 2 4 10 0
2 3 10 8 0
3 4 3 6 1
4 5 8 3 0
</code></pre> | <p>To do it in a vectorized way, you can use <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance_matrix.html" rel="nofollow noreferrer"><code>scipy.spatial.distance_matrix</code></a> to compute the distance matrix, <code>D</code>, between all the <code>N</code> row/position vectors ('x_pos', 'y_pos'). <code>D</code> is a <code>N x N</code> matrix (<code>2D numpy.ndarray</code>) whose entry <code>(i, j)</code> is the Euclidean distance between the ith and jth rows/ positions .</p>
<p>Then, check which positions are a distance <code><= radius</code> from each other (<code>D <= radius</code>), which will give you a boolean matrix. Finally, you can count all the True values row-wise using <code>sum(axis=0)</code>. You have to subtract 1 in the end since the count includes the distance of each vector to itself (diagonal entries).</p>
<pre><code>import pandas as pd
from scipy.spatial import distance_matrix
df = pd.DataFrame({'spot_id':[1,2,3,4,5],'x_pos':[5,4,10,3,8],'y_pos':[4,10,8,6,3]})
radius = 3
pos = df[['x_pos','y_pos']]
df['spots_within_radius'] = (distance_matrix(pos, pos) <= radius).sum(axis=0) - 1
</code></pre>
<p><strong>Output</strong></p>
<pre><code>>>> df
spot_id x_pos y_pos spots_within_radius
0 1 5 4 1
1 2 4 10 0
2 3 10 8 0
3 4 3 6 1
4 5 8 3 0
</code></pre>
<hr />
<p>If you don't want to use <code>scipy.spatial.distance_matrix</code>, you can compute <code>D</code> yourself using numpy's broadcasting.</p>
<pre><code>import numpy as np
pos = df[['x_pos','y_pos']].to_numpy()
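# pos[:, None] has shape (N, 1, 2); broadcasting it against pos (N, 2) gives all pairwise differences of shape (N, N, 2)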
D = np.sum((pos - pos[:, None])**2, axis=-1) ** 0.5
df['spots_within_radius'] = (D <= radius).sum(axis=0) - 1
</code></pre> | python|pandas|dataframe | 3 |
1,211 | 70,013,562 | Inserting column with specifics | <p>I have a specific question: I need to create a column called "Plane type" that contains the first 4 characters of the "TAIL_NUM" column.</p>
<p>How can I do this? I already imported the data and I can see it.</p>
<p><a href="https://i.stack.imgur.com/tjD8b.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tjD8b.png" alt="screenshot of code" /></a></p> | <p>Creating new columns with Pandas (assuming that's what you're talking about) is very simple. Pandas also provides common string methods. <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.str.html" rel="nofollow noreferrer">Pandas Docs</a>, <a href="https://stackoverflow.com/questions/32213330/split-string-in-a-column-based-on-character-position">Similar SO Question</a></p>
<p>You will use <a href="https://www.w3schools.com/python/gloss_python_string_slice.asp" rel="nofollow noreferrer">'string slicing'</a> which is worth reading about.</p>
<pre><code>df['new_col'] = 'X'
</code></pre>
<p>or in your case:</p>
<pre><code>df['Plane type'] = df['tail_num'].str[:4]
</code></pre> | python|pandas|dataframe | 0 |
1,212 | 43,214,978 | How to display custom values on a bar plot | <p>I'm looking to see how to do two things in Seaborn when using a bar chart to display values that are in the dataframe, but not in the graph.</p>
<ol>
<li>I'm looking to display the values of one field in a dataframe while graphing another. For example, below, I'm graphing 'tip', but I would like to place the value of <code>'total_bill'</code> centered above each of the bars (i.e. 325.88 above Friday, 1778.40 above Saturday, etc.)</li>
<li>Is there a way to scale the colors of the bars, with the lowest value of <code>'total_bill'</code> having the lightest color (in this case Friday) and the highest value of <code>'total_bill'</code> having the darkest? Obviously, I'd stick with one color (i.e., <strong>blue</strong>) when I do the scaling.</li>
</ol>
<p>While I see that others think that this is a duplicate of another problem (or two), I am missing the part about how I use a value that is not in the graph as the basis for the label or the shading. How do I, say, use total_bill as the basis? I'm sorry, but I just can't figure it out based on those answers.</p>
<p>Starting with the following code,</p>
<pre><code>import pandas as pd
import seaborn as sns
%matplotlib inline
df = pd.read_csv("https://raw.githubusercontent.com/wesm/pydata-book/1st-edition/ch08/tips.csv", sep=',')
groupedvalues = df.groupby('day').sum().reset_index()
g = sns.barplot(x='day', y='tip', data=groupedvalues)
</code></pre>
<p>I get the following result:</p>
<p><a href="https://i.stack.imgur.com/0GmTW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0GmTW.png" alt="Enter image description here" /></a></p>
<p>Interim Solution:</p>
<pre><code>for index, row in groupedvalues.iterrows():
g.text(row.name, row.tip, round(row.total_bill, 2), color='black', ha="center")
</code></pre>
<p><a href="https://i.stack.imgur.com/LGily.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LGily.png" alt="Enter image description here" /></a></p>
<p>On the <em><strong>shading</strong></em>, using the example below, I tried the following:</p>
<pre><code>import pandas as pd
import seaborn as sns
%matplotlib inline
df = pd.read_csv("https://raw.githubusercontent.com/wesm/pydata-book/1st-edition/ch08/tips.csv", sep=',')
groupedvalues = df.groupby('day').sum().reset_index()
pal = sns.color_palette("Greens_d", len(data))
rank = groupedvalues.argsort().argsort()
g = sns.barplot(x='day', y='tip', data=groupedvalues)
for index, row in groupedvalues.iterrows():
g.text(row.name, row.tip, round(row.total_bill, 2), color='black', ha="center")
</code></pre>
<p>But that gave me the following error:</p>
<blockquote>
<p>AttributeError: 'DataFrame' object has no attribute 'argsort'</p>
</blockquote>
<p>So I tried a modification:</p>
<pre><code>import pandas as pd
import seaborn as sns
%matplotlib inline
df = pd.read_csv("https://raw.githubusercontent.com/wesm/pydata-book/1st-edition/ch08/tips.csv", sep=',')
groupedvalues = df.groupby('day').sum().reset_index()
pal = sns.color_palette("Greens_d", len(data))
rank = groupedvalues['total_bill'].rank(ascending=True)
g = sns.barplot(x='day', y='tip', data=groupedvalues, palette=np.array(pal[::-1])[rank])
</code></pre>
<p>and that leaves me with</p>
<blockquote>
<p>IndexError: index 4 is out of bounds for axis 0 with size 4</p>
</blockquote> | <h2>New in matplotlib 3.4.0</h2>
<p>There is now a built-in <a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.axes.Axes.bar_label.html" rel="noreferrer"><code>Axes.bar_label</code></a> to automatically label bar containers:</p>
<ul>
<li><p>For <strong>single-group</strong> bar plots, pass the single bar container:</p>
<pre class="lang-py prettyprint-override"><code>ax = sns.barplot(x='day', y='tip', data=groupedvalues)
ax.bar_label(ax.containers[0])
</code></pre>
<p><a href="https://i.stack.imgur.com/Pjmcy.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Pjmcy.png" width="230" alt="seaborn bar plot labeled"></a></p>
</li>
<li><p>For <strong>multi-group</strong> bar plots (with <code>hue</code>), iterate the multiple bar containers:</p>
<pre class="lang-py prettyprint-override"><code>ax = sns.barplot(x='day', y='tip', hue='sex', data=df)
for container in ax.containers:
ax.bar_label(container)
</code></pre>
<p><a href="https://i.stack.imgur.com/5ourr.png" rel="noreferrer"><img src="https://i.stack.imgur.com/5ourr.png" width="230" alt="seaborn grouped bar plot labeled"></a></p>
</li>
</ul>
<p>More details:</p>
<ul>
<li><a href="https://stackoverflow.com/a/68334380/13138364">How to label count plots</a> (<code>sns.countplot</code> and <code>sns.catplot</code>)</li>
<li><a href="https://stackoverflow.com/a/68322925/13138364">How to label percentage counts</a> (<code>fmt</code> param)</li>
<li><a href="https://stackoverflow.com/a/70516643/13138364">How to label with commas as thousands separators</a> (<code>labels</code> param)</li>
<li><a href="https://stackoverflow.com/a/68707056/13138364">How to label thresholded bar plots</a></li>
<li><a href="https://stackoverflow.com/a/70530696/13138364">How to label horizontal bar plots</a></li>
</ul>
<hr />
<h2>Color-ranked version</h2>
<blockquote>
<p>Is there a way to scale the colors of the bars, with the lowest value of <code>total_bill</code> having the lightest color (in this case Friday) and the highest value of <code>total_bill</code> having the darkest?</p>
</blockquote>
<ol>
<li><p>Find the rank of each <code>total_bill</code> value:</p>
<ul>
<li><p>Either use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.sort_values.html" rel="noreferrer"><code>Series.sort_values</code></a>:</p>
<pre class="lang-py prettyprint-override"><code>ranks = groupedvalues.total_bill.sort_values().index
# Int64Index([1, 0, 3, 2], dtype='int64')
</code></pre>
</li>
<li><p>Or condense Ernest's <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.rank.html" rel="noreferrer"><code>Series.rank</code></a> version by chaining <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.sub.html" rel="noreferrer"><code>Series.sub</code></a>:</p>
<pre class="lang-py prettyprint-override"><code>ranks = groupedvalues.total_bill.rank().sub(1).astype(int).array
# [1, 0, 3, 2]
</code></pre>
</li>
</ul>
</li>
<li><p>Then reindex the color palette using <code>ranks</code>:</p>
<pre class="lang-py prettyprint-override"><code>palette = sns.color_palette('Blues_d', len(ranks))
ax = sns.barplot(x='day', y='tip', palette=np.array(palette)[ranks], data=groupedvalues)
</code></pre>
<p><a href="https://i.stack.imgur.com/hWjiC.png" rel="noreferrer"><img src="https://i.stack.imgur.com/hWjiC.png" width="230" alt="seaborn bar plot color-ranked"></a></p>
</li>
</ol> | python|pandas|matplotlib|seaborn|bar-chart | 117 |
1,213 | 72,154,060 | Breast cancer Dataset high loss function and low accuracy | <p>I am new to the ML topic and tried some training today. I ran into several problems until I reached the position where I am now. Can anyone explain to me why the accuracy is not changing and why the loss function is so high? I used the Wisconsin breast cancer data set.</p>
<p>Here is my code:</p>
<pre><code>import pandas as pd
import tensorflow as tf
df = pd.read_csv('data.csv',)
df['diagnosis']=df['diagnosis'].replace(['M'], 1)
df['diagnosis']=df['diagnosis'].replace(['B'], 0)
df = df.iloc[: , :-1]
df.head
x = df.drop(columns=["diagnosis"])
y = df["diagnosis"]
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.1)
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Dense(256, input_shape=(x_train.shape[1],), activation='sigmoid'))
model.add(tf.keras.layers.Dense(1, activation='sigmoid'))
model.summary()
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
df.dtypes
model.fit(x_train, y_train, epochs=500)
</code></pre> | <p>Looking at the Kaggle dataset you provided in the question comments, I ran the model again. I faced the same problem you were describing:</p>
<pre><code>Epoch 500/500
16/16 [==============================] - 0s 3ms/step - loss: 0.6577 - accuracy: 0.6328
</code></pre>
<p>The reason is that the column <code>id</code> is present in the dataset. Drop this column before the training phase:</p>
<pre><code>df = df.drop('id', axis=1)
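# or, if you still need the ids, keep them as the index instead: df = df.set_index('id')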
</code></pre>
<p>I obtained better results:</p>
<pre><code>Epoch 500/500
16/16 [==============================] - 0s 3ms/step - loss: 0.0770 - accuracy: 0.9648
</code></pre>
<p>(Almost) always remove ID and identifier columns from your dataset. If you need one, set it as the index of the dataframe but not as a column. They confuse the predictor during training since they do not provide any useful information.</p> | python|pandas|tensorflow|keras | 1 |
1,214 | 50,310,389 | Extracting data from web page to CSV file, only last row saved | <p>I'm faced with the following challenge: I want to get all financial data about companies and I wrote a code that does it and let's say that the result is like below:</p>
<pre>Unnamed: 0 I Q 2017 II Q 2017 \
0 Przychody netto ze sprzedaży (tys. zł) 137 134
1 Zysk (strata) z działal. oper. (tys. zł) -423 -358
2 Zysk (strata) brutto (tys. zł) -501 -280
3 Zysk (strata) netto (tys. zł)* -399 -263
4 Amortyzacja (tys. zł) 134 110
5 EBITDA (tys. zł) -289 -248
6 Aktywa (tys. zł) 27 845 26 530
7 Kapitał własny (tys. zł)* 22 852 22 589
8 Liczba akcji (tys. szt.) 13 921,975 13 921,975
9 Zysk na akcję (zł) -0029 -0019
10 Wartość księgowa na akcję (zł) 1641 1623
11 Raport zbadany przez audytora N N
</pre>
<p>but 464 times more.</p>
<p>Unfortunately, when I want to save all 464 results in one CSV file, I can save only the last result. Not all 464 results, just one... Could you help me save them all? Below is my code.</p>
<pre><code>import requests
from bs4 import BeautifulSoup
import pandas as pd
url = 'https://www.bankier.pl/gielda/notowania/akcje'
page = requests.get(url)
soup = BeautifulSoup(page.content,'lxml')
# Find the second table on the page
t = soup.find_all('table')[0]
#Read the table into a Pandas DataFrame
df = pd.read_html(str(t))[0]
#get
names_of_company = df["Walor AD"].values
links_to_financial_date = []
#all linkt with the names of companies
links = []
for i in range(len(names_of_company)):
new_string = 'https://www.bankier.pl/gielda/notowania/akcje/' + names_of_company[i] + '/wyniki-finansowe'
links.append(new_string)
############################################################################
for i in links:
url2 = f'https://www.bankier.pl/gielda/notowania/akcje/{names_of_company[0]}/wyniki-finansowe'
page2 = requests.get(url2)
soup = BeautifulSoup(page2.content,'lxml')
# Find the second table on the page
t2 = soup.find_all('table')[0]
df2 = pd.read_html(str(t2))[0]
df2.to_csv('output.csv', index=False, header=None)
</code></pre> | <p>You've almost got it. You're just overwriting your CSV each time. Replace</p>
<pre><code>df2.to_csv('output.csv', index=False, header=None)
</code></pre>
<p>with </p>
<pre><code>with open('output.csv', 'a') as f:
df2.to_csv(f, header=False)
</code></pre>
<p>in order to append to the CSV instead of overwriting it.</p>
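<p>As a side note, <code>to_csv</code> also accepts a <code>mode</code> argument, so the append can likely be written without the explicit <code>open</code>:</p>
<pre><code>df2.to_csv('output.csv', mode='a', header=False)
</code></pre>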
<p>Also, your example doesn't work because this:</p>
<pre><code>for i in links:
url2 = f'https://www.bankier.pl/gielda/notowania/akcje/{names_of_company[0]}/wyniki-finansowe'
</code></pre>
<p>should be:</p>
<pre><code>for i in links:
url2 = i
</code></pre>
<p>When the website has no data, skip and move on to the next one:</p>
<pre><code> try:
t2 = soup.find_all('table')[0]
df2 = pd.read_html(str(t2))[0]
with open('output.csv', 'a') as f:
df2.to_csv(f, header=False)
except:
pass
</code></pre> | python-3.x|pandas|web-scraping|beautifulsoup | 1 |
1,215 | 50,255,849 | Merging DataFrames that don't have unique indexes with Python and Pandas | <p>I'm presented with two dataframes. One contains school food ratings for types of foods at different campuses. The first df is student ratings, the second is teacher ratings. The order of the results and the length of the df cannot be guaranteed. That said, I need to join the two together.</p>
<pre><code>import pandas as pd
student_ratings = pd.DataFrame({'food': ['chinese', 'mexican', 'american', 'chinese', 'mexican', 'american'],
'campus': [37, 37, 37, 25, 25, 25],
'student_rating': [97, 90, 83, 96, 89, 82]})
teacher_ratings = pd.DataFrame({'food': ['chinese', 'mexican', 'american', 'chinese', 'mexican', 'american', 'chinese', 'mexican', 'american'],
'campus': [25, 25, 25, 37, 37, 37, 45, 45, 45],
'teacher_rating': [87, 80, 73, 86, 79, 72, 67, 62, 65]})
#...
# SOMETHING LIKE WHAT I'M AFTER...
combined_ratings = pd.DataFrame({'food': ['chinese', 'mexican', 'american', 'chinese', 'mexican', 'american', 'chinese', 'mexican', 'american'],
'campus': [25, 25, 25, 37, 37, 37, 45, 45, 45],
'student_rating': [96, 89, 82, 97, 90, 83, Nan, NaN, NaN],
'teacher_rating': [87, 80, 73, 86, 79, 72, 67, 62, 65]})
</code></pre>
<p>I basically want to add columns (possibly more than one additional column) but I need to match everything up by <code>food</code> AND <code>campus</code></p> | <p>Seems like you need an outer merge:</p>
<pre><code>res = pd.merge(student_ratings, teacher_ratings, how='outer')
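# with no on= argument, merge joins on all shared columns, here both 'food' and 'campus' (on=['food', 'campus'] is equivalent)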
print(res)
campus food student_rating teacher_rating
0 37 chinese 97.0 86
1 37 mexican 90.0 79
2 37 american 83.0 72
3 25 chinese 96.0 87
4 25 mexican 89.0 80
5 25 american 82.0 73
6 45 chinese NaN 67
7 45 mexican NaN 62
8 45 american NaN 65
</code></pre> | python|pandas | 2 |
1,216 | 50,493,478 | How array size affects numpy matrix operation execution time and CPU usage | <p>My question is about the following code:</p>
<pre><code>%%time
import numpy as np
n_elems = 95
n_repeats = 100000
for i in range(n_repeats):
X = np.random.rand(n_elems, n_elems)
y = np.random.rand(n_elems)
_ = X.dot(y)
</code></pre>
<p>I run this in iPython (version <code>6.2.1</code>) with Python <code>3.5.5</code> and numpy version <code>1.14.0</code> on an 8-core machine.</p>
<p>I get the following output:</p>
<pre><code>CPU times: user 8.93 s, sys: 439 ms, total: 9.37 s
Wall time: 8.79 s
</code></pre>
<p>When <code>n_elems</code> is set between <code>1</code> and <code>95</code>, the CPU and wall time are roughly equivalent. In addition, the CPU usage of the process (as seen using <code>top</code>) only goes up to <code>100%</code>.</p>
<p>However, when <code>n_elems</code> is set to <code>96</code>, I get the following:</p>
<pre><code>CPU times: user 39.4 s, sys: 1min 28s, total: 2min 8s
Wall time: 16.2 s
</code></pre>
<p>There is now a noticeable difference between the CPU and wall time. Also, the CPU usage reaches close to <code>800%</code>.<br>
Similar behaviour is observed for larger values of <code>n_elems</code>.</p>
<p>I think this is because at a certain array size the numpy operation becomes multithreaded.<br>
Could someone clarify this?<br>
Also, is there a way to restrict the CPU usage of the process to <code>100%</code>?</p> | <p>It seems that some numpy operations (e.g. <code>numpy.dot</code>) use BLAS, which can execute in parallel.</p>
<p>Other numpy operations (e.g. <code>numpy.einsum</code>) are implemented directly in C and execute serially.</p>
<p>See <a href="https://stackoverflow.com/questions/16617973/why-isnt-numpy-mean-multithreaded">why isn't numpy.mean multithreaded?</a> for more details.</p>
<p>To restrict execution of <code>numpy.dot</code> to a single thread regardless of array size, I had to set the environment variable <code>OMP_NUM_THREADS</code> to 1 before importing numpy.</p>
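<p>For example, a minimal sketch (depending on the BLAS build, variables such as <code>MKL_NUM_THREADS</code> or <code>OPENBLAS_NUM_THREADS</code> may be the relevant ones instead):</p>
<pre><code>import os
os.environ["OMP_NUM_THREADS"] = "1"  # must be set before numpy is imported
import numpy as np
</code></pre>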
<p>I found the following helpful:<br>
<a href="http://How%20do%20you%20stop%20numpy%20from%20multithreading?" rel="nofollow noreferrer">How do you stop numpy from multithreading?</a><br>
<a href="https://stackoverflow.com/questions/30791550/limit-number-of-threads-in-numpy/31622299#31622299">Limit number of threads in numpy</a></p> | python|numpy | 0 |
1,217 | 50,366,921 | Regex Replace Expected Strings Or Byte Like Objects | <p>I have <a href="https://i.stack.imgur.com/NYg2T.png" rel="nofollow noreferrer">the following code</a>. I have imported a dataset via Pandas and am trying to strip the commas out of the numbers (for example, <code>"12,000"</code>), but I seem to always hit the error <code>"TypeError: expected string or bytes-like object"</code>.</p>
<pre><code>df = pd.read_csv("C:/Users/Dell/Downloads/osc_samples_without.csv")
df2=df.loc[:,['Id','Description']]
df['Description'] = df['Description'].apply(lambda x:re.sub(r'(?<=\d)[,\.]','', df2))
</code></pre>
<p>Am a newbie with both Python and Regex, so any help would be appreciated. </p> | <p>You may use <code>replace</code> directly without using <code>re</code> explicitly:</p>
<pre><code>df2['Description'] = df2['Description'].str.replace(r'(?<=\d)[.,]', '')
</code></pre>
<p>Here,</p>
<ul>
<li><code>(?<=\d)</code> - a positive lookbehind that matches a position immediately preceded with a digit</li>
<li><code>[.,]</code> - matches a <code>.</code> or <code>,</code>.</li>
</ul> | python|regex|pandas | 0 |
1,218 | 45,615,439 | Regex for Transformations (without using multiple statements) | <p>What is the best way to use Regex to extract and transform one statement to another?</p>
<p>Specifically, I have implemented the below to find and extract a student number from a block of text and transform it as follows: <em>AB123CD</em> to <em>AB-123-CD</em></p>
<p>Right now, this is implemented as 3 statements as follows:</p>
<pre><code>gg['student_num'] = gg['student_test'].str.extract('(\d{2})\w{3}\d{2}') + \
'-' + gg['student_num'].str.extract('\d{2}(\w{3})\d{2}') + \
'-' + gg['student_test'].str.extract('\d{2}\w{3}(\d{2})')
</code></pre>
<p>It doesn't feel right to me that I would need to have three statements -
one for each group - concatenated together below (or even more if this was more complicated), and I wondered if there was a better way to find and transform some text.</p> | <p>You could get a list of segments using a regexp and then join them this way:</p>
<pre><code>'-'.join(re.search(r'(\d{2})(\w{3})(\d{2})', string).groups())
</code></pre>
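<p>If you would rather stay inside pandas, a hedged sketch (reusing the same pattern and the column names from the question) with <code>str.replace</code> and backreferences does the reordering in one statement:</p>
<pre><code>gg['student_num'] = gg['student_test'].str.replace(r'(\d{2})(\w{3})(\d{2})', r'\1-\2-\3', regex=True)
</code></pre>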
<p>You could get an <code>AttributeError</code> if <code>string</code> doesn't contain the needed pattern (<code>re.search()</code> returns <code>None</code>), so you might want to wrap the <code>re.search()</code> expression in a <code>try...except</code> block.</p> | python|regex|pandas | 2 |
1,219 | 62,505,181 | Heatmap on only a part of the dataframe? | <p>I'm trying to make a heatmap plot but would like to omit the first row from it. So that I have a table where the first row wouldn't have any background colour. Somewhat like this <a href="https://i.stack.imgur.com/8LJ7c.png" rel="noreferrer">paint example</a></p>
<p>But I'm not even sure if that is possible. I've tried making a MultiIndex as a column so that the first row would become a part of the column name, but I want the row name, 'fixed', to still be there. Is it even possible?</p>
<p>This is what I'm working with so far. I would appreciate any input!</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
SO = pd.DataFrame(np.random.randint(100,size=(4,5)))
SO.iloc[0] = [5, 10, 15, 10, 5]
SO.index = ['fixed','val1', 'sd2', 'val2']
SO.columns = ['Prod1', 'Prod2', 'Prod3', 'Prod4', 'Prod5']
sns.set(font_scale=1.5)
fig, ax = plt.subplots(figsize=(20,10))
ax = sns.heatmap(SO, annot=True, fmt="", cbar=False, cmap="RdYlGn", vmin=0, vmax=100)
plt.tick_params(axis='both', which='major', labelsize=19, labelbottom = False, bottom=False, top = False, labeltop=True)
</code></pre> | <p>The masking idea of Stupid Wolf is great, but if you are looking for simpler
ways you can simply incorporate the first row in the column names and plot the heatmap as usual.</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
SO = pd.DataFrame(np.random.randint(100,size=(4,5)))
SO.iloc[0] = [5, 10, 15, 10, 5]
SO.index = ['fixed','val1', 'sd2', 'val2']
SO.columns = ['Prod1', 'Prod2', 'Prod3', 'Prod4', 'Prod5']
first_row = [str(i) for i in SO.iloc[0]]
labels = [i + '\n' + j for i,j in zip(SO.columns, first_row)]
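# each x tick label becomes "column name" + newline + "fixed value", so the 'fixed' row appears as plain text above the heatmap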
sns.set(font_scale=1.5)
fig, ax = plt.subplots(figsize=(20,10))
ax = sns.heatmap(SO.iloc[1:], annot=True, fmt="", cbar=False, cmap="RdYlGn",
vmin=0, vmax=100)
ax.set_xticklabels(labels)
plt.tick_params(axis='both', which='major', labelsize=19,
labelbottom = False, bottom=False, top = False, labeltop=True)
</code></pre>
<h1>Result</h1>
<p><a href="https://i.stack.imgur.com/hzFOk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hzFOk.png" alt="enter image description here" /></a></p> | python|pandas|seaborn|heatmap | 2 |
1,220 | 62,548,136 | pandas column didn't change permanently | <p>This is the code I'm having trouble with. In the double for loop, I create a temporary data frame which will be added to the original data frame. Before I add it, I change its column name. But the column name hadn't changed when I checked the final data frame.</p>
<pre><code>df['sd'] = labelencoder.fit_transform(df['sd'])
copy_columns_of_x = ['consumer_price_index', 'households', 'income', 'avg_price_jeonse', 'total_unsold_households_rate']
copy_columns_of_y = ['avg_price_meme']
df[copy_columns_of_x] = standard_scaler.fit_transform(df[copy_columns_of_x])
df_orgin = df.copy()
columns = copy_columns_of_x
for c in copy_columns_of_x:
for h in HISTORY_FRAME_TO_PREDICT:
tmp_df = df_orgin.sort_values(['sd','yyyymmdd']).groupby(['sd'])[c].transform(lambda x:x.shift(h))
        changed_column = c + '_' + str(h)  # todo: why doesn't this work?
tmp_df.columns = [changed_column]
df = pd.concat([df, tmp_df], axis=1)
</code></pre> | <p>You have to add <code>inplace=True</code> in your code <code>df = pd.concat([df, tmp_df], axis=1)</code></p> | pandas | 0 |
1,221 | 62,509,730 | group by rank continuous date by pandas | <p>I referred to this <a href="https://stackoverflow.com/questions/53265362/pandas-groupby-rank-date-time">post</a>, but my goal is something different.</p>
<p><strong>Example</strong></p>
<pre><code>ID TIME
01 2018-07-11
01 2018-07-12
01 2018-07-13
01 2018-07-15
01 2018-07-16
01 2018-07-17
02 2019-09-11
02 2019-09-12
02 2019-09-15
02 2019-09-16
</code></pre>
<p>Notice: For each id, the date is unique.</p>
<p><strong>Expected</strong></p>
<pre><code>ID TIME RANK
01 2018-07-11 1
01 2018-07-12 2
01 2018-07-13 3
01 2018-07-15 1
01 2018-07-16 2
01 2018-07-17 3
02 2019-09-11 1
02 2019-09-12 2
02 2019-09-15 1
02 2019-09-16 2
</code></pre>
<p>For each id, the rank keeps counting up while the dates are consecutive. If they are not consecutive, the rank restarts.</p>
<p><strong>Goal</strong></p>
<p>How can I get this result?</p>
<p><strong>Try</strong></p>
<p><code>df.groupby('ID')['TIME'].rank(ascending=True)</code> failed</p> | <p>First we flag the rows where the difference from the previous date is not exactly <code>1 day</code>. Then we group by <code>ID</code> and the <code>cumsum</code> of these flags, and take the <code>cumulative count</code> of each group.</p>
<pre><code># df['TIME'] = pd.to_datetime(df['TIME'])
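# flag rows where the gap from the previous row is not exactly one day
# (this assumes the frame is already sorted by ID and TIME, as in the example)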
s = df['TIME'].diff().fillna(pd.Timedelta(days=1)).ne(pd.Timedelta(days=1))
df['RANK'] = s.groupby([df['ID'], s.cumsum()]).cumcount().add(1)
ID TIME RANK
0 1 2018-07-11 1
1 1 2018-07-12 2
2 1 2018-07-13 3
3 1 2018-07-15 1
4 1 2018-07-16 2
5 1 2018-07-17 3
6 2 2019-09-11 1
7 2 2019-09-12 2
8 2 2019-09-15 1
9 2 2019-09-16 2
</code></pre> | pandas | 2 |
1,222 | 62,494,412 | TypeError: Expected float32 passed to parameter 'y' of op 'Equal', got 'auto' of type 'str' instead | <p>I am making a neural network to predict audio data (to learn more about how neural networks function and how to use tensorflow), and everything is going pretty smoothly so far, with one exception. I've looked around quite a bit to solve this issue and haven't been able to find anything specific enough to help me. I set up the dataset and model and those work fine, but for some reason when I try to train the model, it gives me a type error, even though all of the values in the dataset are 32 bit floats. It'd be much appreciated if someone could answer this for me, or at least push in the right direction to figuring this out. Code and console outputs are below. (BTW all values in dataset are between 0 and 1, I don't know if that's relevant but I thought I'd add that in)</p>
<p>EDIT: I've included the AudioHandler class as well, which you can use to reproduce the error. <code>get_audio_array</code> or <code>get_audio_arrays</code> can be used to convert a single mp3 or a directory of mp3s into array(s) of the audio data. You can also use <code>dataset_from_arrays</code> to generate a dataset from the audio arrays created with <code>get_audio_arrays</code>.</p>
<pre><code>from AudioHandler import AudioHandler
import os
import tensorflow as tf
seq_length = 22050
BATCH_SIZE = 64
BUFFER_SIZE = 10000
audio_arrays = AudioHandler.get_audio_arrays("AudioDataset", normalized=True)
dataset = AudioHandler.dataset_from_arrays(audio_arrays, seq_length, BATCH_SIZE, buffer_size=BUFFER_SIZE)
print(dataset)
rnn_units = 256
def build_model(rnn_units, batch_size):
model = tf.keras.Sequential([
tf.keras.layers.InputLayer(batch_input_shape=(batch_size, None, 2)),
tf.keras.layers.GRU(rnn_units, return_sequences=True, stateful=True),
tf.keras.layers.Dense(2)
])
return model
model = build_model(rnn_units, BATCH_SIZE)
model.summary()
model.compile(optimizer='adam', loss=tf.keras.losses.MeanSquaredError)
EPOCHS = 10
history = model.fit(dataset, epochs=EPOCHS)
</code></pre>
<pre><code>from pydub import AudioSegment
import numpy as np
from pathlib import Path
from tensorflow import data
import os
class AudioHandler:
@staticmethod
def print_audio_info(file_name):
audio_segment = AudioSegment.from_file(file_name)
print("Information of '" + file_name + "':")
print("Sample rate: " + str(audio_segment.frame_rate) + "kHz")
# Multiply frame_width by 8 to get bits, since it is given in bytes
print("Sample width: " + str(audio_segment.frame_width * 8) + " bits per sample (" + str(
int(audio_segment.frame_width * 8 / audio_segment.channels)) + " bits per channel)")
print("Channels: " + str(audio_segment.channels))
@staticmethod
def get_audio_array(file_name, normalized=True):
audio_segment = AudioSegment.from_file(file_name)
# Get bytestring of raw audio data
raw_audio_bytestring = audio_segment.raw_data
# Adjust sample width to accommodate multiple channels in each sample
sample_width = audio_segment.frame_width / audio_segment.channels
# Convert bytestring to numpy array
if sample_width == 1:
raw_audio = np.array(np.frombuffer(raw_audio_bytestring, dtype=np.int8))
elif sample_width == 2:
raw_audio = np.array(np.frombuffer(raw_audio_bytestring, dtype=np.int16))
elif sample_width == 4:
raw_audio = np.array(np.frombuffer(raw_audio_bytestring, dtype=np.int32))
else:
raw_audio = np.array(np.frombuffer(raw_audio_bytestring, dtype=np.int16))
# Normalize the audio data
if normalized:
# Cast the audio data as 32 bit floats
raw_audio = raw_audio.astype(dtype=np.float32)
# Make all values positive
raw_audio += np.power(2, 8*sample_width)/2
# Normalize all values between 0 and 1
raw_audio *= 1/np.power(2, 8*sample_width)
# Reshape the array to accommodate multiple channels
if audio_segment.channels > 1:
raw_audio = raw_audio.reshape((-1, audio_segment.channels))
return raw_audio
@staticmethod
# Return an array of all audio files in directory, as arrays of audio data
def get_audio_arrays(directory, filetype='mp3', normalized=True):
file_count_total = len([name for name in os.listdir(directory) if os.path.isfile(os.path.join(directory, name))]) - 1
audio_arrays = []
# Iterate through all audio files
pathlist = Path(directory).glob('**/*.' + filetype)
# Keep track of progress
file_count = 0
print("Loading audio files... 0%")
for path in pathlist:
path_string = str(path)
audio_array = AudioHandler.get_audio_array(path_string, normalized=normalized)
audio_arrays.append(audio_array)
# Update Progress
file_count += 1
print('Loading audio files... ' + str(int(file_count/file_count_total*100)) + '%')
return audio_arrays
@staticmethod
def export_to_file(audio_data_array, file_name, normalized=True, file_type="mp3", bitrate="256k"):
if normalized:
audio_data_array *= np.power(2, 16)
audio_data_array -= np.power(2, 16)/2
audio_data_array = audio_data_array.astype(np.int16)
audio_data_array = audio_data_array.reshape((1, -1))[0]
raw_audio = audio_data_array.tostring()
audio_segment = AudioSegment(data=raw_audio, sample_width=2, frame_rate=44100, channels=2)
audio_segment.export(file_name, format=file_type, bitrate=bitrate)
# Splits a sequence into input values and target values
@staticmethod
def __split_input_target(chunk):
input_audio = chunk[:-1]
target_audio = chunk[1:]
return input_audio, target_audio
@staticmethod
def dataset_from_arrays(audio_arrays, sequence_length, batch_size, buffer_size=10000):
# Create main data set, starting with first audio array
dataset = data.Dataset.from_tensor_slices(audio_arrays[0])
dataset = dataset.batch(sequence_length + 1, drop_remainder=True)
# Split each audio array into sequences individually,
# then concatenate each individual data set with the main data set
for i in range(1, len(audio_arrays)):
audio_data = audio_arrays[i]
tensor_slices = data.Dataset.from_tensor_slices(audio_data)
audio_dataset = tensor_slices.batch(sequence_length + 1, drop_remainder=True)
dataset.concatenate(audio_dataset)
dataset = dataset.map(AudioHandler.__split_input_target)
dataset = dataset.shuffle(buffer_size).batch(batch_size, drop_remainder=True)
return dataset
</code></pre>
<pre><code>Loading audio files... 0%
Loading audio files... 25%
Loading audio files... 50%
Loading audio files... 75%
Loading audio files... 100%
2020-06-21 00:20:10.796993: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-06-21 00:20:10.811357: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7fddb7b23fd0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-06-21 00:20:10.811368: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
<BatchDataset shapes: ((64, 22050, 2), (64, 22050, 2)), types: (tf.float32, tf.float32)>
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
gru (GRU) (64, None, 256) 199680
_________________________________________________________________
dense (Dense) (64, None, 2) 514
=================================================================
Total params: 200,194
Trainable params: 200,194
Non-trainable params: 0
_________________________________________________________________
Epoch 1/10
Traceback (most recent call last):
File "/Users/anonteau/Desktop/Development/Python/Lo-FiGenerator/RNN.py", line 57, in <module>
history = model.fit(dataset, epochs=EPOCHS)
File "/Users/anonteau/Desktop/Development/Python/Lo-FiGenerator/venv/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py", line 66, in _method_wrapper
return method(self, *args, **kwargs)
File "/Users/anonteau/Desktop/Development/Python/Lo-FiGenerator/venv/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py", line 848, in fit
tmp_logs = train_function(iterator)
File "/Users/anonteau/Desktop/Development/Python/Lo-FiGenerator/venv/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 580, in __call__
result = self._call(*args, **kwds)
File "/Users/anonteau/Desktop/Development/Python/Lo-FiGenerator/venv/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 627, in _call
self._initialize(args, kwds, add_initializers_to=initializers)
File "/Users/anonteau/Desktop/Development/Python/Lo-FiGenerator/venv/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 506, in _initialize
*args, **kwds))
File "/Users/anonteau/Desktop/Development/Python/Lo-FiGenerator/venv/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 2446, in _get_concrete_function_internal_garbage_collected
graph_function, _, _ = self._maybe_define_function(args, kwargs)
File "/Users/anonteau/Desktop/Development/Python/Lo-FiGenerator/venv/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 2777, in _maybe_define_function
graph_function = self._create_graph_function(args, kwargs)
File "/Users/anonteau/Desktop/Development/Python/Lo-FiGenerator/venv/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 2667, in _create_graph_function
capture_by_value=self._capture_by_value),
File "/Users/anonteau/Desktop/Development/Python/Lo-FiGenerator/venv/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py", line 981, in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
File "/Users/anonteau/Desktop/Development/Python/Lo-FiGenerator/venv/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 441, in wrapped_fn
return weak_wrapped_fn().__wrapped__(*args, **kwds)
File "/Users/anonteau/Desktop/Development/Python/Lo-FiGenerator/venv/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py", line 968, in wrapper
raise e.ag_error_metadata.to_exception(e)
TypeError: in user code:
/Users/anonteau/Desktop/Development/Python/Lo-FiGenerator/venv/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py:571 train_function *
outputs = self.distribute_strategy.run(
/Users/anonteau/Desktop/Development/Python/Lo-FiGenerator/venv/lib/python3.7/site-packages/tensorflow/python/distribute/distribute_lib.py:951 run **
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/Users/anonteau/Desktop/Development/Python/Lo-FiGenerator/venv/lib/python3.7/site-packages/tensorflow/python/distribute/distribute_lib.py:2290 call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
/Users/anonteau/Desktop/Development/Python/Lo-FiGenerator/venv/lib/python3.7/site-packages/tensorflow/python/distribute/distribute_lib.py:2649 _call_for_each_replica
return fn(*args, **kwargs)
/Users/anonteau/Desktop/Development/Python/Lo-FiGenerator/venv/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py:533 train_step **
y, y_pred, sample_weight, regularization_losses=self.losses)
/Users/anonteau/Desktop/Development/Python/Lo-FiGenerator/venv/lib/python3.7/site-packages/tensorflow/python/keras/engine/compile_utils.py:205 __call__
loss_value = loss_obj(y_t, y_p, sample_weight=sw)
/Users/anonteau/Desktop/Development/Python/Lo-FiGenerator/venv/lib/python3.7/site-packages/tensorflow/python/keras/losses.py:143 __call__
losses = self.call(y_true, y_pred)
/Users/anonteau/Desktop/Development/Python/Lo-FiGenerator/venv/lib/python3.7/site-packages/tensorflow/python/keras/losses.py:246 call
return self.fn(y_true, y_pred, **self._fn_kwargs)
/Users/anonteau/Desktop/Development/Python/Lo-FiGenerator/venv/lib/python3.7/site-packages/tensorflow/python/keras/losses.py:313 __init__
mean_squared_error, name=name, reduction=reduction)
/Users/anonteau/Desktop/Development/Python/Lo-FiGenerator/venv/lib/python3.7/site-packages/tensorflow/python/keras/losses.py:229 __init__
super(LossFunctionWrapper, self).__init__(reduction=reduction, name=name)
/Users/anonteau/Desktop/Development/Python/Lo-FiGenerator/venv/lib/python3.7/site-packages/tensorflow/python/keras/losses.py:94 __init__
losses_utils.ReductionV2.validate(reduction)
/Users/anonteau/Desktop/Development/Python/Lo-FiGenerator/venv/lib/python3.7/site-packages/tensorflow/python/ops/losses/loss_reduction.py:67 validate
if key not in cls.all():
/Users/anonteau/Desktop/Development/Python/Lo-FiGenerator/venv/lib/python3.7/site-packages/tensorflow/python/ops/math_ops.py:1491 tensor_equals
return gen_math_ops.equal(self, other, incompatible_shape_error=False)
/Users/anonteau/Desktop/Development/Python/Lo-FiGenerator/venv/lib/python3.7/site-packages/tensorflow/python/ops/gen_math_ops.py:3224 equal
name=name)
/Users/anonteau/Desktop/Development/Python/Lo-FiGenerator/venv/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py:479 _apply_op_helper
repr(values), type(values).__name__, err))
    TypeError: Expected float32 passed to parameter 'y' of op 'Equal', got 'auto' of type 'str' instead. Error: Expected float32, got 'auto' of type 'str' instead.
</code></pre> | <p>Try changing</p>
<pre><code>model.compile(optimizer='adam', loss=tf.keras.losses.MeanSquaredError)
</code></pre>
<p>to</p>
<pre><code>model.compile(optimizer='adam', loss=tf.keras.losses.MeanSquaredError())
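# MeanSquaredError is a class, so it has to be instantiated here; the string
# 'mse' or the function tf.keras.losses.mean_squared_error would also work.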
</code></pre> | python|tensorflow|machine-learning|keras|recurrent-neural-network | 59 |
1,223 | 62,474,816 | Create a Pandas Dataframe from nested dict | <p>I have a nested dict with the following structure:
each key is a course_id, and its value is a nested dict mapping 2 recommended courses to their number of purchases.
For example, entries of this dict look something like this: </p>
<pre><code> {490: {566: 253, 551: 247},
357: {571: 112, 356: 100},
507: {570: 172, 752: 150}}
</code></pre>
<p>I tried this code to make a dataframe from this dict:</p>
<pre><code>result=pd.DataFrame.from_dict(dicts, orient='index').stack().reset_index()
result.columns=['Course ID','Recommended course','Number of purchases']
</code></pre>
<p><img src="https://i.stack.imgur.com/XLh8x.png" alt="Pls. see the output"></p>
<p>This doesn't quite work for me, because I want an output where there will be 5 columns.
Course ID, recommended course 1, purchases 1, recommended course 2, purchases 2.
Is there any solution for this?
Thanks in advance.</p> | <p>I would recommend just reshaping your dictionary and then re-creating your dataframe; however, you're not far off from getting your target output from your current dataframe.</p>
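<p>For illustration, a minimal sketch of that first option (building the 5-column frame straight from the dict; I'm assuming the nested dict from the question is bound to a variable called <code>d</code>) could look like this:</p>
<pre><code>import pandas as pd

d = {490: {566: 253, 551: 247},
     357: {571: 112, 356: 100},
     507: {570: 172, 752: 150}}

rows = []
for course, recs in d.items():
    row = {'Course ID': course}
    for i, (rec, purchases) in enumerate(recs.items(), start=1):
        row[f'Recommended course {i}'] = rec
        row[f'Number of purchases {i}'] = purchases
    rows.append(row)

reshaped = pd.DataFrame(rows)
</code></pre>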
<p>For the <code>groupby</code> route, we can <code>groupby</code> and use <code>cumcount</code> to create a unique counter per course, then <code>unstack</code> and build the final column names from the MultiIndex header that is created.</p>
<pre><code>s1 = result.groupby(['Course ID',
result.groupby(['Course ID']).cumcount() + 1]).first().unstack()
s1.columns = [f"{x}_{y}" for x,y in s1.columns]
Recommended course_1 Recommended course_2 Number of purchases_1 \
Course ID
357 571 356 112.0
490 566 551 253.0
507 570 752 172.0
Number of purchases_2
Course ID
357 100.0
490 247.0
507 150.0
</code></pre> | python|pandas|dataframe | 1 |
1,224 | 62,758,621 | Matching lists to dataframes | <p>I have a dataframe of people with Age as a column. I would like to match this age to a group, i.e. Baby=0-2 years old, Child=3-12 years old, Young=13-18 years old, Young Adult=19-30 years old, Adult=31-50 years old, Senior Adult=51-65 years old.</p>
<p>I created the lists that define these year groups, e.g. <code>Adult=list(range(31,51))</code> etc.
How do I match the name of the list 'Adult' to the dataframe by creating a new column?</p>
<p>Small input: the dataframe is made up of three columns: df['Name'], df['Country'], df['Age'].</p>
<pre><code>Name Country Age
Anthony France 15
Albert Belgium 54
.
.
.
Zahra Tunisia 14
</code></pre>
<p>So I need to match the age column with lists that I already have. The output should look like:</p>
<pre><code>Name Country Age Group
Anthony France 15 Young
Albert Belgium 54 Adult
.
.
.
Zahra Tunisia 14 Young
</code></pre>
<p>Thanks!</p> | <p>Here's a way to do that using <code>pd.cut</code>:</p>
<pre><code>df = pd.DataFrame({"person_id": range(25), "age": np.random.randint(0, 100, 25)})
print(df.head(10))
==>
person_id age
0 0 30
1 1 42
2 2 78
3 3 2
4 4 44
5 5 43
6 6 92
7 7 3
8 8 13
9 9 76
df["group"] = pd.cut(df.age, [0, 18, 50, 100], labels=["child", "adult", "senior"])
print(df.head(10))
==>
person_id age group
0 0 30 adult
1 1 42 adult
2 2 78 senior
3 3 2 child
4 4 44 adult
5 5 43 adult
6 6 92 senior
7 7 3 child
8 8 13 child
9 9 76 senior
</code></pre>
<hr />
<p>Per your question, if you have a few lists (like the ones below) and would like to use them for 'binning', you can do:</p>
<pre><code># for example, these are the lists
Adult = list(range(18,50))
Child = list(range(0, 18))
Senior = list(range(50, 100))
# Creating bins out of the lists.
bins = [min(l) for l in [Child, Adult, Senior]]
bins.append(max([max(l) for l in [Child, Adult, Senior]]))
labels = ["Child", "Adult", "Senior"]
# using the bins:
df["group"] = pd.cut(df.age, bins, labels=labels)
</code></pre> | python|pandas|matching | 1 |
1,225 | 62,817,483 | 'Wrong number of items passed 2, placement implies 1' error with pandas dataframe by operating two columns | <p>I have dataframe - for the purpose of sample data every day has only 10 minutes:</p>
<pre><code> Date Close
0 2019-06-20 07:00:00 2927.25
1 2019-06-20 07:05:00 2927.00
2 2019-06-20 07:10:00 2926.75
183 2019-06-21 07:00:00 2932.25
184 2019-06-21 07:05:00 2932.25
185 2019-06-21 07:10:00 2931.00
366 2019-06-24 07:00:00 2941.75
367 2019-06-24 07:05:00 2942.25
368 2019-06-24 07:10:00 2941.50
549 2019-06-25 07:00:00 2925.50
550 2019-06-25 07:05:00 2926.50
551 2019-06-25 07:10:00 2926.50
732 2019-06-26 07:00:00 2903.25
</code></pre>
<p>I want to get the daily range of the closing price.<br>
I grouped the data by day and get the min max of close:</p>
<pre><code>rangeofday = df.groupby(pd.Grouper(key='Date', freq='1D')).agg({'Close':[np.min, np.max]})
rangeofday = rangeofday.dropna()
Close
amin amax
Date
2019-06-20 2926.75 2927.25
2019-06-21 2931.00 2932.25
2019-06-24 2941.50 2942.25
2019-06-25 2925.50 2926.50
2019-06-26 2903.25 2904.00
... ... ...
</code></pre>
<p>So far so good, but what happens is that the names of the columns are weird, and somehow not accessible by name.<br></p>
<pre><code> rangeofday.amin
or
rangeofday.Closeamin
= 'DataFrame' object has no attribute 'amin'
</code></pre>
<p>So I can access them with iloc:</p>
<pre><code>rangeofday.iloc[:, [1]]
Close
amax
Date
2019-06-20 2927.25
2019-06-21 2932.25
2019-06-24 2942.25
2019-06-25 2926.50
</code></pre>
<p>Now I try to subtract min from max:</p>
<pre><code>rangeofday['range'] = (rangeofday.iloc[:, [0]] - rangeofday.iloc[:, [1]])/0.25
</code></pre>
<p>and get the error:</p>
<pre><code>Wrong number of items passed 2, placement implies 1
</code></pre>
<p>What does this mean and how can I get around this error?</p> | <p>Since you applied multiple <code>agg</code> functions, <code>pandas</code> automatically applied a <code>MultiIndex</code> to your grouped frame. See more details: <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/advanced.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/user_guide/advanced.html</a></p>
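<p>For reference, a rough sketch of what those columns look like on the question's <code>rangeofday</code> frame (the exact repr may vary by pandas version):</p>
<pre><code>rangeofday.columns
# MultiIndex([('Close', 'amin'),
#             ('Close', 'amax')])
</code></pre>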
<p>In particular, if you want to access the columns, you can do so by passing the column names as a <code>tuple</code>:</p>
<p><code>rangeofday[('Close', 'amax')]</code></p>
<p>or</p>
<p><code>rangeofday.loc[:, ('Close', 'amin')]</code></p> | python|pandas|dataframe | 1 |
1,226 | 54,477,866 | Efficiently aggregate a resampled collection of datetimes in pandas | <p>Given the following dataset as a pandas dataframe df:</p>
<pre><code>index(as DateTime object) | Name | Amount | IncomeOutcome
---------------------------------------------------------------
2019-01-28 | Customer1 | 200.0 | Income
2019-01-31 | Customer1 | 200.0 | Income
2019-01-31 | Customer2 | 100.0 | Income
2019-01-28 | Customer2 | -100.0 | Outcome
2019-01-31 | Customer2 | -100.0 | Outcome
</code></pre>
<p>We perform the following steps:</p>
<pre><code>grouped = df.groupby("Name", "IncomeOutcome")
sampled_by_month = grouped.resample("M")
aggregated = sampled_by_month.agg({"MonthlyCount": "size", "Amount": "sum"})
</code></pre>
<p>The desired output should look like this:</p>
<pre><code>Name | IncomeOutcome | Amount | MonthlyCount
------------------------------------------------------------
Customer1 | Income | 400.0 | 2
Customer2 | Income | 100.0 | 1
Customer2 | Outcome | -200.0 | 2
</code></pre>
<p>The last step performs very poorly, possibly related to <a href="https://github.com/pandas-dev/pandas/issues/20660" rel="nofollow noreferrer">Pandas Issue #20660</a>.
My first intention was to convert all datetime objects to int64, which leaves me with the question on how to resample the converted data by month.</p>
<p>Any suggestions on that issue?</p>
<p>Thank you in advance</p> | <p>Perhaps we can optimise your solution by having the resampling done only on a single column ("Amount", the column of interest).</p>
<pre><code>(df.groupby(["Name", "IncomeOutcome"])['Amount']
.resample("M")
.agg(['sum','size'])
.rename({'sum':'Amount', 'size': 'MonthlyCount'}, axis=1)
.reset_index(level=-1, drop=True)
.reset_index())
Name IncomeOutcome Amount MonthlyCount
0 Customer1 Income 400.0 2
1 Customer2 Income 100.0 1
2 Customer2 Outcome -200.0 2
</code></pre>
<p>If this is still too slow, then I think the problem could be that running <code>resample</code> <em>within</em> the <code>groupby</code> slows things down. Perhaps you can try grouping by all 3 keys with a single <code>groupby</code> call. For the date resampling, try <code>pd.Grouper</code>.</p>
<pre><code>(df.groupby(['Name', 'IncomeOutcome', pd.Grouper(freq='M')])['Amount']
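   # pd.Grouper(freq='M') buckets rows by month end using the DatetimeIndex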
.agg([ ('Amount', 'sum'), ('MonthlyCount', 'size')])
.reset_index(level=-1, drop=True)
.reset_index())
Name IncomeOutcome Amount MonthlyCount
0 Customer1 Income 400.0 2
1 Customer2 Income 100.0 1
2 Customer2 Outcome -200.0 2
</code></pre>
<p>Performance wise, this should come out faster.</p>
<hr>
<p><strong>Performance</strong> </p>
<p>Let's try setting up a more general DataFrame for the purpose of testing.</p>
<pre><code># Setup
df_ = df.copy()
df1 = pd.concat([df_.reset_index()] * 100, ignore_index=True)
df = pd.concat([
df1.replace({'Customer1': f'Customer{i}', 'Customer2': f'Customer{i+1}'})
for i in range(1, 98, 2)], ignore_index=True)
df = df.set_index('index')
df.shape
# (24500, 3)
</code></pre>
<p></p>
<pre><code>%%timeit
(df.groupby(["Name", "IncomeOutcome"])['Amount']
.resample("M")
.agg(['sum','size'])
.rename({'sum':'Amount', 'size': 'MonthlyCount'}, axis=1)
.reset_index(level=-1, drop=True)
.reset_index())
%%timeit
(df.groupby(['Name', 'IncomeOutcome', pd.Grouper(freq='M')])['Amount']
.agg([ ('Amount', 'sum'), ('MonthlyCount', 'size')])
.reset_index(level=-1, drop=True)
.reset_index())
1.71 s ± 85.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
24.2 ms ± 1.82 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
</code></pre> | python|pandas|performance|numpy | 6 |
1,227 | 54,544,513 | How to check the highest score among specific columns and compute the average in pandas? | <p>Help with homework problem: "Let us define the "data science experience" of a given person as the person's largest score among Regression, Classification, and Clustering. Compute the average data science experience among all MSIS students."</p>
<p>I'm a beginner to coding. I am trying to figure out how to compare specific columns against each other to find the largest value, and then take the average of those found values.</p>
<p>I greatly appreciate your help in advance!</p>
<p>Picture of the sample data set: <a href="https://i.stack.imgur.com/wRjmr.png" rel="nofollow noreferrer">1</a>: <a href="https://i.stack.imgur.com/9OSjz.png" rel="nofollow noreferrer">https://i.stack.imgur.com/9OSjz.png</a></p>
<pre><code>Provided Code:
import pandas as pd
df = pd.read_csv("cleaned_survey.csv", index_col=0)
df.drop(['ProgSkills','Languages','Expert'],axis=1,inplace=True)
</code></pre>
<p>Sample Data:
<a href="https://i.stack.imgur.com/wRjmr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wRjmr.png" alt="enter image description here"></a></p>
<p>What I have tried so far: </p>
<pre><code>df[data_science_experience]=df[["Regression","Classification","Clustering"]].values.max()
df['z']=df[['Regression','Classification','Clustering']].apply(np.max,axis=1)
df[data_science_experience]=df[["Regression","Classification","Clustering"]].apply(np.max,axis=1)
</code></pre> | <p>If you want to get the highest score of a column, e.g. 'hw1', you can get it with:<br>
<code>df['hw1'].max()</code> (where <code>df</code> is your dataframe). <br><code>df['hw1']</code> gives you a Series of all the values in that column and <code>.max()</code> returns the maximum. For the average, use <code>mean</code>:<br></p>
<p><code>df['hw1'].mean()</code><br></p>
<p>If you want to find the maximum of multiple columns, you can use:</p>
<pre><code>maximum_list = []
for col in df.columns:
    maximum_list.append(df[col].max())
highest = max(maximum_list)                   # largest of the per-column maxima
avg = sum(maximum_list) / len(maximum_list)   # average of the per-column maxima
</code></pre>
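<p>If instead you need the largest score per row (which is what the homework wording suggests), a sketch using the question's <code>df</code> might be:</p>
<pre><code># row-wise max across the three skill columns, then the mean
# (restrict df to the MSIS rows first; that column name depends on your data)
df[['Regression', 'Classification', 'Clustering']].max(axis=1).mean()
</code></pre>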
<p>Hope this helps.</p> | pandas|dataframe | 0 |
1,228 | 73,578,333 | How to iterate through a dataframe to format values? | <p>I have a dataframe where every column has numeric values like <code>5,12; 3,14; 12,01...</code> in object dtype.
I want to iterate through the table to convert the dtype to float.
Therefore, I made a list of all column names so that I can replace the ',' with '.' in every value and then convert each column to the right type.</p>
<p>My code looks like this:</p>
<pre><code>for x in columnList:
x.replace(',' , '.')
x.astype(float)
</code></pre>
<p>Data:</p>
<pre><code>Timestamp Ins_W/m2 GenPowerW1 GenPowerW2 GenPowerW3
2020-01-01 5,12 3,14 12,1
2020-01-02 6,84 16,4 12,1
.
.
.
</code></pre>
<p>Unfortunately, I always get an AttributeError.
I hope someone can give me a hint on how to fix it.</p> | <p>You need to iterate over each of the columns, converting each column to strings (with <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.str.html" rel="nofollow noreferrer"><code>Series.str</code></a>) to allow replacement and then converting those values to floats. To convert empty cells to <code>NaN</code> we first replace them with the string <code>'NaN'</code>:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({
'Timestamp': ['2020-01-01', '2020-01-02'],
'Ins_W/m2': ['5,12', '6,84'],
'GenPowerW1': ['3,14', ''],
'GenPowerW2': ['12,1', '16,4'],
'GenPowerW3': ['', '12,1']
})
df
# Timestamp Ins_W/m2 GenPowerW1 GenPowerW2 GenPowerW3
# 0 2020-01-01 5,12 3,14 12,1
# 1 2020-01-02 6,84 16,4 12,1
columnList = ['Ins_W/m2', 'GenPowerW1', 'GenPowerW2', 'GenPowerW3']
for col in columnList:
df[col] = df[col].str.replace(',', '.').replace('', 'NaN').astype(float)
df
# Timestamp Ins_W/m2 GenPowerW1 GenPowerW2 GenPowerW3
# 0 2020-01-01 5.12 3.14 12.1 NaN
# 1 2020-01-02 6.84 NaN 16.4 12.1
df['GenPowerW1']
# 0 3.14
# 1 NaN
# Name: GenPowerW1, dtype: float64
</code></pre> | python|pandas|for-loop | 1 |
1,229 | 73,616,449 | Pandas.read_csv stops at 50 characters | <p>I've got some finance data stored in a CSV. The problem comes when loading my bids and asks, which are lists of lists stored in a CSV column. My goal is to load the CSV into normal lists I can zip and map to whatever I need. Currently I have found a way using .to_string() to create a string representation of my list of lists, and then ast.literal_eval for conversion to an actual list.</p>
<p>Raw CSV file data (just the 'Bids' column; I didn't want to crowd my post with the whole file):</p>
<pre><code>[['1548.36000000', '36.94670000'], ['1548.35000000', '2.75850000'], ['1548.32000000', '4.56580000'], ['1548.31000000', '7.59050000'], ['1548.30000000', '18.99930000'], ['1548.26000000', '3.60850000'], ['1548.25000000', '4.30280000'], ['1548.17000000', '0.02000000'], ['1548.12000000', '29.70940000'], ['1548.03000000', '0.20000000']]
</code></pre>
<p>What I want to do:</p>
<pre><code>df = pandas.read_csv('C:/Users/Ethan/Desktop/csv/test.csv', names=['E','ID','Bids','Asks'])
df = (df['Bids']).to_string(index=False)
bids = ast.literal_eval(df)
</code></pre>
<p>This yields an error:</p>
<pre><code> [['1548.36000000', '36.94670000'], ['1548.35000...
^SyntaxError: unterminated string literal (detected at line 1)
</code></pre>
<p>Simply running len(df) without ast.literal_eval() yields a string length of 50, which is obviously several hundred characters short:</p>
<pre><code>df = pandas.read_csv('C:/Users/Ethan/Desktop/csv/test.csv', names=['E','ID','Bids','Asks'])
df = (df['Bids']).to_string(index=False)
print(len(df))
>50
</code></pre>
<p>So it seems Python can't literal_eval my string because it simply isn't loading the entire thing, which would explain why the string literal is unterminated. Has anyone encountered this before? Another method for loading a list of lists from a single CSV column would be appreciated as well, but it would be great to know why this won't work.</p> | <p>Thanks everyone! With your help my solution wound up being simpler, using <code>.at[]</code>.
<code>.at[]</code> returns the full stored string (rather than a formatted/printed object), which I can pass to <code>literal_eval</code> to convert it to a list!</p>
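<p>(A hedged side note: the 50-character cutoff looks like <code>pandas</code>' default <code>display.max_colwidth</code>, which <code>to_string()</code> respects, so raising that option should be another way to get the full string out.)</p>
<pre><code>import pandas
pandas.set_option('display.max_colwidth', None)  # presumably lets to_string() render full cell contents
</code></pre>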
<pre><code>df = pandas.read_csv('C:/Users/Ethan/Desktop/csv/test.csv', names=['E','ID','Bids','Asks'])
df2 = df.at[0,'Bids']
dflist = ast.literal_eval(df2)
print(type(dflist))
print(dflist)
print(dflist[0])
</code></pre>
<p>results:</p>
<pre><code><class 'list'>
[['1548.36000000', '36.94670000'], ['1548.35000000', '2.75850000'], ['1548.32000000', '4.56580000'], ['1548.31000000', '7.59050000'], ['1548.30000000', '18.99930000'], ['1548.26000000', '3.60850000'], ['1548.25000000', '4.30280000'], ['1548.17000000', '0.02000000'], ['1548.12000000', '29.70940000'], ['1548.03000000', '0.20000000']]
['1548.36000000', '36.94670000']
</code></pre> | python|pandas|csv | 2 |
1,230 | 73,550,004 | cross join pandas dataframe | <div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>A</th>
<th>B</th>
<th>C</th>
</tr>
</thead>
<tbody>
<tr>
<td>aaa</td>
<td>01-03-2022 12:40:00</td>
<td>orange</td>
</tr>
<tr>
<td>aaa</td>
<td>01-03-2022 12:40:10</td>
<td>apple</td>
</tr>
<tr>
<td>aaa</td>
<td>01-03-2022 12:40:00</td>
<td>kiwi</td>
</tr>
<tr>
<td>aaa</td>
<td>01-03-2022 12:40:08</td>
<td>apple</td>
</tr>
<tr>
<td>bbb</td>
<td>15-03-2022 13:10:10</td>
<td>orange</td>
</tr>
<tr>
<td>bbb</td>
<td>15-03-2022 13:10:18</td>
<td>apple</td>
</tr>
<tr>
<td>bbb</td>
<td>15-03-2022 13:10:40</td>
<td>kiwi</td>
</tr>
<tr>
<td>bbb</td>
<td>15-03-2022 13:10:15</td>
<td>apple</td>
</tr>
</tbody>
</table>
</div>
<p>In the above dataframe, whenever the value 'orange' is present for a user in column C, I want to keep that 'orange' row, but replace its date in column B with the earliest date for the same user whose column C value is 'apple'.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>A</th>
<th>B</th>
<th>C</th>
</tr>
</thead>
<tbody>
<tr>
<td>aaa</td>
<td>01-03-2022 12:40:08</td>
<td>orange</td>
</tr>
<tr>
<td>bbb</td>
<td>15-03-2022 13:10:15</td>
<td>orange</td>
</tr>
</tbody>
</table>
</div> | <pre><code>import pandas as pd

# Import Your Data
df = pd.DataFrame({'A':['aaa','aaa','aaa','aaa','bbb','bbb','bbb','bbb'],
'B':['01-03-2022 12:40:00','01-03-2022 12:40:10','01-03-2022 12:40:00','01-03-2022 12:40:08','15-03-2022 13:10:10','15-03-2022 13:10:18','15-03-2022 13:10:40','15-03-2022 13:10:15'],
'C':['orange','apple','kiwi','apple','orange','apple','kiwi','apple']})
df.sort_values(['A','C','B'],ascending=[True,True,True],inplace=True)
df_orange = df.loc[df['C']=='orange']
df_apple = df.loc[df['C']=='apple']
# Data Pre-Process
df_orange_v2 = df_orange.groupby(['A']).first().reset_index()
df_apple_v2 = df_apple.groupby(['A']).first().reset_index()
# Merge Process and Clean Data
df_v2 = pd.merge(df_orange_v2, df_apple_v2[['A','B']], how='left', left_on='A',right_on='A')
df_v2.rename(columns={'B_y':'B'}, inplace = True)
df_v2.drop(['B_x'],axis = 1, inplace=True)
df_final = df_v2[['A','B','C']]
df_final
</code></pre>
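<p>With the sample data above, <code>df_final</code> should come out roughly as:</p>
<pre><code>     A                    B       C
0  aaa  01-03-2022 12:40:08  orange
1  bbb  15-03-2022 13:10:15  orange
</code></pre>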
<p>Hi Brother, I have adjusted the code for you. Please run it and see if this is what you want; if you have any questions, please let me know.</p>
<p>Thanks
Leon</p> | python|pandas|dataframe | 2 |
1,231 | 71,418,817 | is there any way to include a counter(a variable that count something) in a loss function in pytorch? | <p>These are some lines from my loss function. <code>output</code> is the output of a multiclass classification network.</p>
<pre class="lang-py prettyprint-override"><code>bin_count=torch.bincount(torch.where(output>.1)[0], minlength=output.shape[0])
dr_output = (bin_count == 1) & (torch.argmax(output, dim=1)==labels)
</code></pre>
<p>I want <code>dr_output.sum()</code> to be part of my loss function. But there are many limitations in my implementation. Some functions are non-differentiable in pytorch, and also <code>dr_output</code> may be zero which is also not allowed if I only use <code>dr_output</code> as my loss. Can anyone please suggest to me a way around these problems?</p> | <p>If I got it correctly:</p>
<pre class="lang-py prettyprint-override"><code>bin_count=torch.bincount(torch.where(output>.1)[0], minlength=output.shape[0])
</code></pre>
<p>computes how many elements are greater than <code>.1</code>, for each row.</p>
<p>Instead:</p>
<pre class="lang-py prettyprint-override"><code>dr_output = (bin_count == 1) & (torch.argmax(output, dim=1)==labels)
</code></pre>
<p>is true if there is exactly one element greater than <code>.1</code> in the corresponding row, and the prediction is correct.</p>
<p><code>dr_output.sum()</code> then counts how many rows satisfy this condition, so minimizing it as a loss may push the model towards incorrect predictions or towards distributions with more values greater than <code>.1</code>.</p>
<p>Given these considerations, you can approximate your loss with the following:</p>
<pre class="lang-py prettyprint-override"><code>import torch.nn.functional as F
# x are the inputs, y the labels
mask = x > 0.1
p = F.softmax(x, dim=1)
out = p * (mask.sum(dim=1, keepdim=True) == 1)
loss = out[torch.arange(x.shape[0]), y].sum()
</code></pre>
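<p>One hedged usage note: since larger values of this quantity are better, you would typically negate it before backpropagating, so that minimisation pushes it up, e.g.:</p>
<pre class="lang-py prettyprint-override"><code>(-loss).backward()  # hypothetical training step: minimise the negative
</code></pre>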
<p>You can devise similar variants that are better suited to your problem.</p> | pytorch|gradient|backpropagation | 0 |
1,232 | 52,388,933 | Pandas: Mark values between flags from another column | <p>The workflow is as below:</p>
<ol>
<li>Groupby LineNum then</li>
<li>Mark values in LWS column greater than 50 as 'start'</li>
<li>Mark values in Text column containing ':'(colon) as 'end'</li>
<li>Mark values between start and end as 1 in 'ExpectedFlag'</li>
</ol>
<p><strong>I have finished upto step 3 i.e upto column named 'end'</strong></p>
<p>I am not able to figure out how to mark values between start and end as in ExpectedFlag. Is there any way to mark this using pandas operation?</p>
<pre><code> text LWS LineNum start end ExpectedFlag
0 somethin 3 2 0 0 0
1 somethin 3 2 0 0 0
2 somethin 2 2 0 0 0
3 value 70 2 1 0 1
4 value 3 2 0 0 1
5 value: 3 2 0 1 1
6 val1 200 3 1 0 1
7 val1: 3 3 0 1 1
8 val2 3 3 0 0 0
9 val2 100 3 1 0 1
10 val2: 3 3 0 1 1
11 djsal 3 3 0 0 0
12 jdsal 3 3 0 0 0
13 ajsd 3 3 0 0 0
</code></pre> | <p>Regarding filling values between <code>start</code> and <code>end</code>, this can be done as follows, based on <a href="https://stackoverflow.com/questions/45118710/fill-in-values-between-given-indices-of-2d-numpy-array">this answer</a>:</p>
<p>Data:</p>
<p><code>df = pd.DataFrame([[0,0],[0,0],[0,0],[1,0],[0,0],[0,1],[0,0],[0,0],[1,0],[0,1],[0,0],[0,0],[0,0],[0,0],[1,0],[0,0],[0,0],[0,1],[0,0],[0,0],[0,0],],columns=['start','end'])</code></p>
<pre><code> start end
0 0 0
1 0 0
2 0 0
3 1 0
4 0 0
5 0 1
6 0 0
7 0 0
8 1 0
9 0 1
10 0 0
</code></pre>
<p>Take indices of <code>start</code> and <code>end</code>:</p>
<pre><code>s = df.start.nonzero()[0]
e = df.end.nonzero()[0]
>>> s, e
(array([3, 8], dtype=int64), array([5, 9], dtype=int64))
</code></pre>
<p>Reshape original index:</p>
<pre><code>>>> index = df.index.values.reshape(-1,1)
array([[ 0],
[ 1],
[ 2],
[ 3],
[ 4],
[ 5],
[ 6],
[ 7],
[ 8],
[ 9],
[10]], dtype=int64)
</code></pre>
<p>Then we can utilize numpy's <a href="https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html" rel="nofollow noreferrer">broadcasting</a>:</p>
<pre><code>>>> index < [1] >>> index < [1,2,3,4,5]
array([[ True], array([[ True, True, True, True, True],
[False], [False, True, True, True, True],
[False], [False, False, True, True, True],
[False], [False, False, False, True, True],
[False], [False, False, False, False, True],
[False], [False, False, False, False, False],
[False], [False, False, False, False, False],
[False], [False, False, False, False, False],
[False], [False, False, False, False, False],
[False], [False, False, False, False, False],
[False]]) [False, False, False, False, False]])
</code></pre>
<p>For each <code>start</code>-<code>end</code> pair generate a condition:</p>
<pre><code>>>> ((s <= index) & (index <= e))
array([[False, False],
[False, False],
[False, False],
[ True, False],
[ True, False],
[ True, False],
[False, False],
[False, False],
[False, True],
[False, True],
[False, False]])
</code></pre>
<p>And then use <code>sum</code>:</p>
<pre><code> df['Expected Flag'] = ((s <= index) & (index <= e)).sum(axis=1)
start end Expected Flag
0 0 0 0
1 0 0 0
2 0 0 0
3 1 0 1
4 0 0 1
5 0 1 1
6 0 0 0
7 0 0 0
8 1 0 1
9 0 1 1
10 0 0 0
</code></pre>
<p>One-liner:
<code>((df.start.nonzero()[0] <= df.index.values.reshape(-1,1)) & (df.index.values.reshape(-1,1) <= df.end.nonzero()[0])).sum(axis=1)</code></p> | python|pandas|data-science | 1 |
1,233 | 52,035,184 | Pandas plot line graph with both error bars and markers | <p>I'm trying to use <code>pandas.plot</code> to plot a line chart that should contain both markers and error bars. But for some reason markers are not shown if I specify <code>yerr</code> values. </p>
<p>These are data frames:</p>
<pre><code>df = pd.DataFrame({
'Time': [0, 5, 10, 15, 20, 25],
'Capomulin': [45.0, 44.26608641544399, 43.08429058188399, 42.06431734681251, 40.71632532212173, 39.93952782686818],
'Infubinol': [45.0, 47.062001033088, 49.40390857087143, 51.29639655633334, 53.19769093422999, 55.71525236228889],
'Ketapril': [45.0, 47.38917452114348, 49.582268974622714, 52.39997374321578, 54.92093473734737, 57.678981717731574],
'Placebo': [45.0, 47.125589188437495, 49.42332947868749, 51.35974169802999, 54.36441702681052, 57.48257374394706]})
df.set_index('Time', inplace=True)
errors = pd.DataFrame({
'Time': [0, 5, 10, 15, 20, 25],
'Capomulin': [0.0, 0.44859285020103756, 0.7026843745238932, 0.8386172472985688, 0.9097306924832056, 0.8816421535181787],
'Infubinol': [0.0, 0.23510230430767506, 0.2823459146215716, 0.35770500497539054, 0.4762095134790833, 0.5503145721542003],
'Ketapril': [0.0, 0.26481852016728674, 0.35742125637213723, 0.5802679659678779, 0.7264838239834724, 0.7554127528910378],
'Placebo': [0.0, 0.21809078325219497, 0.40206380730509245, 0.6144614435805993, 0.8396091719248746, 1.0348719877946384]})
errors.set_index('Time', inplace=True)
</code></pre>
<p>Here is what happens when I plot without errors:</p>
<pre><code>df.plot(figsize=(12,8),
style=['^-', 'o--', 'x-.', 'D-'],
markersize=14)
</code></pre>
<p><a href="https://i.stack.imgur.com/6t9PK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6t9PK.png" alt="enter image description here"></a></p>
<p>And this one is with error bars:</p>
<pre><code>df.plot(figsize=(12, 8),
style=['^-', 'o--', 'x-.', 'D-'],
yerr=errors,
markersize=14)
</code></pre>
<p><a href="https://i.stack.imgur.com/UTIRw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UTIRw.png" alt="enter image description here"></a></p>
<p>So how can I plot them both?</p>
<p><strong>UPD</strong>: sample data as data frames</p>
<p><strong>Environment:</strong> OS - Win10 x64, pandas 0.23, matplotlib 2.2.2 </p> | <p>Well, I can confirm that this happens also in Ubuntu 18.04, pandas 0.23.4, matplotlib 2.2.3 with TkAgg backend. I am not sure, if this is a bug or a feature, but you can emulate the expected behavior:</p>
<pre><code>from matplotlib import pyplot as plt
import pandas as pd
#create your sample data
df = pd.DataFrame({
'Time': [0, 5, 10, 15, 20, 25],
'Capomulin': [45.0, 44.26608641544399, 43.08429058188399, 42.06431734681251, 40.71632532212173, 39.93952782686818],
'Infubinol': [45.0, 47.062001033088, 49.40390857087143, 51.29639655633334, 53.19769093422999, 55.71525236228889],
'Ketapril': [45.0, 47.38917452114348, 49.582268974622714, 52.39997374321578, 54.92093473734737, 57.678981717731574],
'Placebo': [45.0, 47.125589188437495, 49.42332947868749, 51.35974169802999, 54.36441702681052, 57.48257374394706]})
df.set_index('Time', inplace=True)
errors = pd.DataFrame({
'Time': [0, 5, 10, 15, 20, 25],
'Capomulin': [0.0, 0.44859285020103756, 0.7026843745238932, 0.8386172472985688, 0.9097306924832056, 0.8816421535181787],
'Infubinol': [0.0, 0.23510230430767506, 0.2823459146215716, 0.35770500497539054, 0.4762095134790833, 0.5503145721542003],
'Ketapril': [0.0, 0.26481852016728674, 0.35742125637213723, 0.5802679659678779, 0.7264838239834724, 0.7554127528910378],
'Placebo': [0.0, 0.21809078325219497, 0.40206380730509245, 0.6144614435805993, 0.8396091719248746, 1.0348719877946384]})
errors.set_index('Time', inplace=True)
#plot error bars
ax = df.plot(figsize=(12,8), yerr = errors, legend = False)
#reset color cycle so that the marker colors match
ax.set_prop_cycle(None)
#plot the markers
df.plot(figsize=(12,8), style=['^-', 'o--', 'x-.', 'D-'], markersize=14, ax = ax)
plt.show()
</code></pre>
<p>Sample output:
<a href="https://i.stack.imgur.com/ErZtZ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ErZtZ.jpg" alt="enter image description here"></a></p> | python|pandas|matplotlib | 3 |
1,234 | 60,444,575 | How to use mongodb query operation on a very large database (have 3 shards of around 260-300 million in each) | <p>I have to find data in between different date ranges column in a sharded database having total of around 800 million documents. I am using this query:</p>
<pre><code>cursordata=event.aggregate([{"$match":{}},{"$unwind":},{"$project":{}}])
</code></pre>
<p>However, when I change it to a pandas dataframe</p>
<pre><code>df=pd.DataFrame(cursordata)
</code></pre>
<p>It is taking for ever and not working at all, it just got stuck.</p>
<p>I have 2 choices:</p>
<ol>
<li>Either keep doing query for different conditions directly from mongodb or</li>
<li>After changing to data to dataframe, perform operation for different conditions</li>
</ol>
<p>Please suggest how to proceed.</p> | <p>Could we have a sample of documents?
I think you should look for an index matching the fields you're querying.</p>
<p>As a reminder, try to keep in mind the <a href="https://www.mongodb.com/blog/post/performance-best-practices-indexing" rel="nofollow noreferrer">Equality, Sort, Range</a> rule in MongoDB indexing.<br/>
Besides, since you're in a sharded cluster you might want to have your sharding key in you query, otherwise the mongos will query all the shards (more info <a href="https://docs.mongodb.com/manual/core/distributed-queries/#read-operations-to-sharded-clusters" rel="nofollow noreferrer">here</a>)</p> | python|pandas|mongodb|mongodb-query|sharding | 0 |
1,235 | 60,580,626 | Resampling data monthly R or Python | <p>I have data recorded in the format as below,</p>
<p><strong>Input</strong></p>
<pre><code>name year value
Afghanistan 1800 68
Albania 1800 23
Algeria 1800 54
Afghanistan 1801 59
Albania 1801 38
Algeria 1801 72
---
Afghanistan 2040 142
Albania 2040 165
Algeria 2040 120
</code></pre>
<p>I would like to resample all of my data which is recorded for years <strong>1800 to 2040</strong> using 1 month and exactly use the format as shown below,</p>
<p><strong>Expected output</strong></p>
<pre><code>name year value
Afghanistan Jan 1800 5.6667
Afghanistan Feb 1800 11.3333
Afghanistan Mar 1800 17.0000
Afghanistan Apr 1800 22.6667
Afghanistan May 1800 28.3333
Afghanistan Jun 1800 34.0000
Afghanistan Jul 1800 39.6667
Afghanistan Aug 1800 45.3333
Afghanistan Sep 1800 51.0000
Afghanistan Oct 1800 56.6667
Afghanistan Nov 1800 62.3333
Afghanistan Dec 1800 68.0000
Albania Jan 1800 1.9167
Albania Feb 1800 3.8333
Albania Mar 1800 5.7500
Albania Apr 1800 7.6667
Albania May 1800 9.5833
Albania Jun 1800 11.5000
Albania Jul 1800 13.4167
Albania Aug 1800 15.3333
Albania Sep 1800 17.2500
Albania Oct 1800 19.1667
Albania Nov 1800 21.0833
Albania Dec 1800 23.0000
Algeria Jan 1800 4.5000
Algeria Feb 1800 9.0000
Algeria Mar 1800 13.5000
Algeria Apr 1800 18.0000
Algeria May 1800 22.5000
Algeria Jun 1800 27.0000
Algeria Jul 1800 31.5000
Algeria Aug 1800 36.0000
Algeria Sep 1800 40.5000
Algeria Oct 1800 45.0000
Algeria Nov 1800 49.5000
Algeria Dec 1800 54.000
</code></pre>
<p>I would like my data to look as above for all of the years, i.e from 1800 - 2040.
The value column is interpolated.
NB: My model will accept months as abbreviations like above.</p>
<p><strong>My closest</strong> trial is as below but did not produce the expected result. </p>
<pre><code>data['year'] = pd.to_datetime(data.year, format='%Y')
data.head(3)
name year value
Afghanistan 1800-01-01 00:00:00 68
Albania 1800-01-01 00:00:00 23
Algeria 1800-01-01 00:00:00 54
resampled = (data.groupby(['name']).apply(lambda x: x.set_index('year').resample('M').interpolate()))
resampled.head(3)
name year name value
Afghanistan 1800-01-31 00:00:00 NaN NaN
1800-02-28 00:00:00 NaN NaN
1800-03-31 00:00:00 NaN NaN
</code></pre>
<p>Your thoughts will save me here.</p> | <p>Here's a <code>tidyverse</code> approach that also requires the <code>zoo</code> package for the interpolation part.</p>
<pre><code>library(dplyr)
library(tidyr)
library(zoo)
df <- data.frame(country = rep(c("Afghanistan", "Algeria"), each = 3),
year = rep(seq(1800,1802), times = 2),
value = rep(seq(3), times = 2),
stringsAsFactors = FALSE)
df2 <- df %>%
# make a grid of all country/year/month possibilities within the years in df
tidyr::expand(year, month = seq(12)) %>%
# join that to the original data frame to add back the values
left_join(., df) %>%
# put the result in chronological order
arrange(country, year, month) %>%
# group by country so the interpolation stays within those sets
group_by(country) %>%
# make a version of value that is NA except for Dec, then use na.approx to replace
# the NAs with linearly interpolated values
mutate(value_i = ifelse(month == 12, value, NA),
value_i = zoo::na.approx(value_i, na.rm = FALSE))
</code></pre>
<p>Note that the resulting column, <code>value_i</code>, is <code>NA</code> until the first valid observation, in December of the first observed year. So here's what the tail of <code>df2</code> looks like.</p>
<pre><code>> tail(df2)
# A tibble: 6 x 5
# Groups: country [1]
year month country value value_i
<int> <int> <chr> <int> <dbl>
1 1802 7 Algeria 3 2.58
2 1802 8 Algeria 3 2.67
3 1802 9 Algeria 3 2.75
4 1802 10 Algeria 3 2.83
5 1802 11 Algeria 3 2.92
6 1802 12 Algeria 3 3
</code></pre>
<p>If you want to replace those leading NAs, you'd have to do linear extrapolation, which you can do with <code>na.spline</code> from <code>zoo</code> instead. And if you'd rather have the observed values in January instead of December and get trailing instead of leading NAs, just change the relevant bit of the second-to-last line to <code>month == 1</code>.</p> | python|r|pandas|data.table | 2 |
1,236 | 59,646,219 | Test set accuracy of 1. How to debug | <p>I am trying to create a simple neural network using tensorflow as a learning exercise. These are the details of the NN I created.. </p>
<pre><code>def multilayer_perceptron(x, weights, biases, keep_prob):
layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
layer_1 = tf.nn.relu(layer_1)
layer_1 = tf.nn.dropout(layer_1, keep_prob)
out_layer = tf.matmul(layer_1, weights['out']) + biases['out']
return out_layer
n_hidden_1 = 38
n_input = train_x.shape[1]
n_classes = train_y.shape[1]
weights = {
'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
'out': tf.Variable(tf.random_normal([n_hidden_1, n_classes]))
}
biases = {
'b1': tf.Variable(tf.random_normal([n_hidden_1])),
'out': tf.Variable(tf.random_normal([n_classes]))
}
keep_prob = tf.placeholder("float")
training_epochs = 500
display_step = 100
batch_size = 320
x = tf.placeholder("float", [None, n_input])
y = tf.placeholder("float", [None, n_classes])
predictions = multilayer_perceptron(x, weights, biases, 1)
cost = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=predictions, labels=y))
optimizer = tf.train.AdamOptimizer(learning_rate=1).minimize(cost)
</code></pre>
<p>This is my code for the tf.session</p>
<pre><code>from tensorflow import keras
with tf.Session() as sess:
# Step 1. Initializing the session
init = tf.global_variables_initializer()
writer = tf.summary.FileWriter('/home/dileep/Desktop', sess.graph)
sess.run(init)
# Step 2. Dividing x and y to batches
for epoch in range(training_epochs):
avg_cost = 0.0
total_batch = int(len(train_x)//batch_size)
x_batches = np.array_split(train_x, total_batch)
y_batches = np.array_split(train_y, total_batch)
# Step 3. Run session, calculate cost.
for i in range(total_batch):
batch_x, batch_y = x_batches[i], y_batches[i]
_, c = sess.run([optimizer, cost], feed_dict={
x:batch_x,
y:batch_y, keep_prob:0.4})
avg_cost += c/total_batch
# Step 4. Print the outputs
if epoch % display_step == 0:
print("Epoch:", '%0d' % (epoch+1), "cost=", "{:.9f}".format(avg_cost))
cost_summary = tf.summary.scalar(name='cost_summary', tensor=avg_cost)
summary = sess.run(cost_summary)
writer.add_summary(summary, epoch)
print("Optimization finished!")
correct_prediction = tf.equal(tf.argmax(predictions, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, 'float'))
print('Accuracy:', accuracy.eval({x: test_x, y: test_y, keep_prob:0.8}))
</code></pre>
<p>When I run this, the cost function is nicely decreasing with epochs, but the test accuracy is showing as 1.
I tried substituting test_x and test_y with random numbers and still it is giving an accuracy of 1 so it is obviously wrong. But I am not able to trouble shoot. Could anyone please show me where the problem is? Thank you. </p>
<p>This is the cost graph that I could plot from the above code.
<a href="https://i.stack.imgur.com/yYF47.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yYF47.png" alt="enter image description here"></a></p> | <p>There can be multiple reasons for it, first, let's look at the hyperparameter you are defining. </p>
<p><strong>Learning rate:</strong></p>
<p>There are multiple ways to select a good starting point for the learning rate. A naive approach is to try a few different values and see which one gives you the best loss without sacrificing speed of training. We might start with a large value like 0.1, then try exponentially lower values: 0.01, 0.001, etc. When we start training with a large learning rate, the loss doesn’t improve and probably even grows while we run the first few iterations of training. When training with a smaller learning rate, at some point the value of the loss function starts decreasing in the first few iterations. This learning rate is the maximum we can use, any higher value doesn’t let the training converge. Even this value is too high: it won’t be good enough to train for multiple epochs because over time the network will require more fine-grained weight updates. Therefore, a reasonable learning rate to start training from will be probably 1–2 orders of magnitude lower. </p>
<p><strong>Training data:</strong> </p>
<p>Maybe the number of training data is very less in volume for the model to learn, try including more data by sampling, augmentation techniques. </p>
<p><strong>Epochs:</strong> </p>
<p>In your code, decrease the <code>display_step</code> to 10, so for every 10 steps you will print the loss, if the loss is not changing much for continuous steps, you can bring the epochs number to that range where the loss is not changing. Else if you keep large number of epochs, your model will overfit. </p>
<p><strong>Test Data:</strong> </p>
<p>Test data should be unseen data from the train instances. Try to give different varients of test data. Including train_test_split to 80/20 % and do data shuffling before splitting train and test data. </p>
<p>I have tried your code on <code>mnist data</code> for 10 classes by chaning the above-mentioned parameters and I have got a near benchmark result. </p>
<pre><code>from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
train_x=mnist.train.images
train_y = mnist.train.labels
test_x = mnist.test.images
test_y = mnist.test.labels
def multilayer_perceptron(x, weights, biases, keep_prob):
layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
layer_1 = tf.nn.relu(layer_1)
layer_1 = tf.nn.dropout(layer_1, keep_prob)
out_layer = tf.matmul(layer_1, weights['out']) + biases['out']
return out_layer
n_hidden_1 = 38
n_input = train_x.shape[1]
n_classes = 10
weights = {
'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
'out': tf.Variable(tf.random_normal([n_hidden_1, n_classes]))
}
biases = {
'b1': tf.Variable(tf.random_normal([n_hidden_1])),
'out': tf.Variable(tf.random_normal([n_classes]))
}
keep_prob = tf.placeholder("float")
training_epochs = 100
display_step = 10
batch_size = 32
x = tf.placeholder("float", [None, n_input])
y = tf.placeholder("float", [None, n_classes])
predictions = multilayer_perceptron(x, weights, biases, 1)
cost = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=predictions, labels=y))
optimizer = tf.train.AdamOptimizer(learning_rate=0.001).minimize(cost)
</code></pre>
<p>Train and Test: </p>
<pre><code>from tensorflow import keras
with tf.Session() as sess:
# Step 1. Initializing the session
init = tf.global_variables_initializer()
writer = tf.summary.FileWriter('/home/dileep/Desktop', sess.graph)
sess.run(init)
# Step 2. Dividing x and y to batches
for epoch in range(training_epochs):
avg_cost = 0.0
total_batch = int(len(train_x)//batch_size)
x_batches = np.array_split(train_x, total_batch)
y_batches = np.array_split(train_y, total_batch)
# Step 3. Run session, calculate cost.
for i in range(total_batch):
batch_x, batch_y = x_batches[i], y_batches[i]
_, c = sess.run([optimizer, cost], feed_dict={
x:batch_x,
y:batch_y, keep_prob:0.4})
avg_cost += c/total_batch
# Step 4. Print the outputs
if epoch % display_step == 0:
print("Epoch:", '%0d' % (epoch+1), "cost=", "{:.9f}".format(avg_cost))
cost_summary = tf.summary.scalar(name='cost_summary', tensor=avg_cost)
summary = sess.run(cost_summary)
writer.add_summary(summary, epoch)
print("Optimization finished!")
correct_prediction = tf.equal(tf.argmax(predictions, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, 'float'))
print('Accuracy:', accuracy.eval({x: test_x, y: test_y, keep_prob:0.8}))
</code></pre>
<p><strong>Output:</strong> </p>
<pre><code>Epoch: 1 cost= 1.206482108
Epoch: 11 cost= 0.040194626
Epoch: 21 cost= 0.025972947
Epoch: 31 cost= 0.019996314
Epoch: 41 cost= 0.016305469
Epoch: 51 cost= 0.013636761
Epoch: 61 cost= 0.011600128
Epoch: 71 cost= 0.009993816
Epoch: 81 cost= 0.008687532
Epoch: 91 cost= 0.007587933
Optimization finished!
Accuracy: 0.9588
</code></pre> | python|tensorflow|machine-learning|deep-learning|neural-network | 0 |
1,237 | 32,343,743 | Replace data frame values matching given condition | <p>I have the following data in a tab-separated file <code>test.tsv</code>.</p>
<pre><code>Class Length Frag
I 100 True
I 200 True
P 300 False
I 400 False
P 500 True
P 600 True
N 700 True
</code></pre>
<p>I have loaded the data into a <code>pandas.DataFrame</code> object, and anywhere that Class = I and Frag = True I would like to set Class = F. <a href="https://gist.github.com/standage/0d44652ff7b4406f3a14" rel="nofollow">The following code</a> does not seem to be working. What am I doing wrong, and what should I be doing?</p>
<pre><code>import pandas
data = pandas.read_table('test.tsv')
data.loc[(data.Class == 'I') & (data.Frag is True), 'Class'] = 'F'
</code></pre> | <p>In your line</p>
<pre><code>data.loc[(data.Class == 'I') & (data.Frag is True), 'Class'] = 'F'
</code></pre>
<p>you shouldn't use <code>is</code>. <code>is</code> tests identity, not equality. So when you're asking if <code>data.Frag is True</code>, it's comparing the Series object <code>data.Frag</code> and asking whether it's the same object as <code>True</code>, and that's not true. Really you want to use <code>==</code>, so you get a Series result:</p>
<pre><code>>>> data.Frag is True
False
>>> data.Frag == True
0 True
1 True
2 False
3 False
4 True
5 True
6 True
Name: Frag, dtype: bool
</code></pre>
<p>But since we're working with a series of bools anyway, the <code>== True</code> part doesn't add anything, and we can drop it:</p>
<pre><code>>>> data.loc[(data.Class == 'I') & (data.Frag), 'Class'] = 'F'
>>> data
Class Length Frag
0 F 100 True
1 F 200 True
2 P 300 False
3 I 400 False
4 P 500 True
5 P 600 True
6 N 700 True
</code></pre> | python|pandas | 3 |
1,238 | 40,666,316 | How to get Tensorflow tensor dimensions (shape) as int values? | <p>Suppose I have a Tensorflow tensor. How do I get the dimensions (shape) of the tensor as integer values? I know there are two methods, <code>tensor.get_shape()</code> and <code>tf.shape(tensor)</code>, but I can't get the shape values as integer <code>int32</code> values.</p>
<p>For example, below I've created a 2-D tensor, and I need to get the number of rows and columns as <code>int32</code> so that I can call <code>reshape()</code> to create a tensor of shape <code>(num_rows * num_cols, 1)</code>. However, the method <code>tensor.get_shape()</code> returns values as <code>Dimension</code> type, not <code>int32</code>.</p>
<pre><code>import tensorflow as tf
import numpy as np
sess = tf.Session()
tensor = tf.convert_to_tensor(np.array([[1001,1002,1003],[3,4,5]]), dtype=tf.float32)
sess.run(tensor)
# array([[ 1001., 1002., 1003.],
# [ 3., 4., 5.]], dtype=float32)
tensor_shape = tensor.get_shape()
tensor_shape
# TensorShape([Dimension(2), Dimension(3)])
print tensor_shape
# (2, 3)
num_rows = tensor_shape[0] # ???
num_cols = tensor_shape[1] # ???
tensor2 = tf.reshape(tensor, (num_rows*num_cols, 1))
# Traceback (most recent call last):
# File "<stdin>", line 1, in <module>
# File "/usr/local/lib/python2.7/site-packages/tensorflow/python/ops/gen_array_ops.py", line 1750, in reshape
# name=name)
# File "/usr/local/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 454, in apply_op
# as_ref=input_arg.is_ref)
# File "/usr/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 621, in convert_to_tensor
# ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
# File "/usr/local/lib/python2.7/site-packages/tensorflow/python/framework/constant_op.py", line 180, in _constant_tensor_conversion_function
# return constant(v, dtype=dtype, name=name)
# File "/usr/local/lib/python2.7/site-packages/tensorflow/python/framework/constant_op.py", line 163, in constant
# tensor_util.make_tensor_proto(value, dtype=dtype, shape=shape))
# File "/usr/local/lib/python2.7/site-packages/tensorflow/python/framework/tensor_util.py", line 353, in make_tensor_proto
# _AssertCompatible(values, dtype)
# File "/usr/local/lib/python2.7/site-packages/tensorflow/python/framework/tensor_util.py", line 290, in _AssertCompatible
# (dtype.name, repr(mismatch), type(mismatch).__name__))
# TypeError: Expected int32, got Dimension(6) of type 'Dimension' instead.
</code></pre> | <p>To get the shape as a list of ints, do <code>tensor.get_shape().as_list()</code>.</p>
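<p>For example, a small sketch using the tensor from the question (<code>as_list()</code> returns plain Python ints once the shape is fully defined):</p>
<pre><code>num_rows, num_cols = tensor.get_shape().as_list()   # [2, 3] as ints
tensor2 = tf.reshape(tensor, [num_rows * num_cols, 1])
</code></pre>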
<p>To complete your <code>tf.shape()</code> call, try <code>tensor2 = tf.reshape(tensor, tf.TensorShape([num_rows*num_cols, 1]))</code>. Or you can directly do <code>tensor2 = tf.reshape(tensor, tf.TensorShape([-1, 1]))</code> where its first dimension can be inferred.</p> | python|tensorflow|machine-learning|artificial-intelligence | 141 |
1,239 | 40,531,543 | In distributed tensorflow, how to write to summary from workers as well | <p>I am using google cloud ml distributed sample for training a model on a cluster of computers. Input and output (ie rfrecords, checkpoints, tfevents) are all on gs:// (google storage)</p>
<p>Similarly to the distributed sample, I use an evaluation step that is called at the end, and the result is written as a summary, in order to use parameter hypertuning / either within Cloud ML, or using my own stack of tools. </p>
<p>But rather than performing a single evaluation on a large batch of data, I am running several evaluation steps, in order to retrieve statistics on the performance criteria, because I don't want to limited to a single value. I want to get information regarding the performance interval. In particular, the variance of performance is important to me. I'd rather select a model with lower average performance but with better worst cases. </p>
<p>I therefore run several evaluation steps. What I would like to do is to parallelize these evaluation steps because right now, only the master is evaluating. When using large clusters, it is a source of inefficiency, and task workers to evaluate as well. </p>
<p>Basically, the supervisor is created as :</p>
<pre><code>self.sv = tf.train.Supervisor(
graph,
is_chief=self.is_master,
logdir=train_dir(self.args.output_path),
init_op=init_op,
saver=self.saver,
# Write summary_ops by hand.
summary_op=None,
global_step=self.tensors.global_step,
# No saving; we do it manually in order to easily evaluate immediately
# afterwards.
save_model_secs=0)
</code></pre>
<p>At the end of training I call the summary writer. : </p>
<pre><code> # only on master, this is what I want to remove
if self.is_master and not self.should_stop:
# I want to have an idea of statistics of accuracy
# not just the mean, hence I run on 10 batches
for i in range(10):
self.global_step += 1
# I call an evaluator, and extract the accuracy
evaluation_values = self.evaluator.evaluate()
accuracy_value = self.model.accuracy_value(evaluation_values)
# now I dump the accuracy, ready to use within hptune
eval_summary = tf.Summary(value=[
tf.Summary.Value(
tag='training/hptuning/metric', simple_value=accuracy_value)
])
self.sv.summary_computed(session, eval_summary, self.global_step)
</code></pre>
<p>I tried to write summaries from the workers as well, but I got an error: basically, summaries can only be written from the master. Is there any easy way to work around this? The error is: <code>"Writing a summary requires a summary writer."</code></p> | <p>My guess is you'd create a separate summary writer on each worker yourself, and write out summaries directly instead of going through the supervisor.</p>
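<p>A minimal sketch of that idea (my illustration, not tested code; it reuses <code>accuracy_value</code> and <code>self.global_step</code> from the question and assumes an <code>eval_dir</code> and a <code>task_index</code> are available):</p>
<pre><code>import os

# each worker gets its own event-file directory and its own writer
worker_writer = tf.summary.FileWriter(
    os.path.join(eval_dir, 'worker_%d' % task_index))
eval_summary = tf.Summary(value=[
    tf.Summary.Value(tag='training/hptuning/metric',
                     simple_value=accuracy_value)])
worker_writer.add_summary(eval_summary, self.global_step)
worker_writer.flush()
</code></pre>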
<p>I suspect you wouldn't use a supervisor for the eval processing either. Just load a session on each worker for doing eval with the latest checkpoint, and writing out independent summaries.</p> | tensorflow|google-cloud-ml | 2 |
1,240 | 18,346,673 | Preserving the distinctions between bools and floats when adding NaN to a pandas Series? | <p>I am adding data to a pandas <code>Series</code> via the <code>Series#append</code> method. Unfortunately, when <code>nan</code> is added to a <code>bool</code> Series, it is automatically converted to a <code>float</code> Series. Is there any way to avoid this conversion, or at least coerce it to <code>object</code> dtype, so as to preserve the distinction between <code>bool</code>s and <code>float</code>s?</p>
<pre><code>>>> Series([True])
0 True
dtype: bool
>>> Series([True]).append(Series([np.nan]))
0 1
0 NaN
dtype: float64
</code></pre> | <p>As @Jeff said, the best way is going to be to append a <code>Series</code> with <code>object</code> <code>dtype</code></p>
<p>Here's an example using <code>Series</code></p>
<pre><code>s = Series([True])
s.append(Series([nan], index=[1], dtype=object))
</code></pre>
<p>yielding</p>
<pre><code>0 True
1 NaN
dtype: object
</code></pre>
<p>And one with a <code>DataFrame</code>:</p>
<pre><code>df = DataFrame({'a': rand(10) > 0.5, 'b': randn(10)}, columns=list('ab'))
df2 = DataFrame({'a': Series([nan], dtype=object), 'b': [1.0]}, columns=df.columns, index=[len(df)])
df3 = df.append(df2)
print df3
print
print df3.dtypes
</code></pre>
<p>which gives</p>
<pre><code> a b
0 False -0.865
1 True -0.186
2 True 0.078
3 True 0.995
4 False -1.420
5 True -0.340
6 True 0.042
7 True -0.627
8 True -0.217
9 True 1.226
10 NaN 1.000
a object
b float64
dtype: object
</code></pre>
<p>It's a bit clunky looking, but if you've already got the <code>Series</code> then you can do <code>s.astype(object)</code> to convert them to <code>object</code> <code>dtype</code> before appending.</p> | python|numpy|pandas | 1 |
1,241 | 61,653,333 | I cannot understand why "in" doesn't work correctly | <p>sp01 is a dataframe which contains the S&P 500 index, and I have another dataframe, interest, which contains daily interest rates. The two datasets start from the same date, but their sizes are not the same, which causes an error. </p>
<p>I want to get exact same date, so tried to check every date using "in" function. But "in" function doesn't work well. This is code :</p>
<pre><code>print(sp01.Date[0], type(sp01.Date[0]) )
->1976-06-01, str
print(interest.DATE[0], type(interest.DATE[0]) )
->1976-06-01, str
print(sp01.Date[0] in interest.DATE)
->False
</code></pre>
<p>I cannot understand why the result is False.
The first dates of sp01 and interest are exactly the same; I checked this by hand.
So True should come out, but False came out. Please help me.</p> | <p>I solved it! The problem is that the <code>in</code> operator applied to a pandas Series checks membership in the index, not in the values, so those two Series never matched. I had to change one of them to a list (or check against <code>.values</code>).</p> | pandas | 1 |
1,242 | 61,688,550 | CNN having high overfitting despite having dropout layers? | <p>For some background, my dataset is roughly 75000+ images, 200x200 greyscale, with 26 classes (the letters of the alphabet). My model is:</p>
<pre><code>model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(200, 200, 1)))
model.add(MaxPooling2D((2, 2)))
model.add(Dropout(0.2))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Dropout(0.2))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Dropout(0.2))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(26, activation='softmax'))
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=[tf.keras.metrics.CategoricalAccuracy()])
model.fit(X_train, y_train, epochs=1, batch_size=64, verbose=1, validation_data=(X_test, y_test))
</code></pre>
<p>The output of the model.fit is: </p>
<pre><code>Train on 54600 samples, validate on 23400 samples
Epoch 1/1
54600/54600 [==============================] - 54s 984us/step - loss: nan - categorical_accuracy: 0.9964 - val_loss: nan - val_categorical_accuracy: 0.9996
</code></pre>
<p>99.9+ valadiation accuracy. When I run a test, it gets all the predictions incorrect. So, I assume it is overfitting. Why is this happening, despite adding the dropout layers? What other options do I have to fix this? Thank you!</p> | <p>The only way you would get all the predictions on a held-out test set incorrect while simultaneously getting almost 100% on validation accuracy is if you have a data leak. i.e. Your training data must contain the same images as your validation data (or they are VERY similar to the point of being identical). </p>
<p>Or the data in your test set is very different than your training and validation datasets.</p>
<p>To fix this ensure that across all your datasets no single image exists in more than one of the datasets. Also ensure that the images are generally similar. i.e. if training using cell phone photos, do not then test with images taken using a DSLR or images that have watermarks pulled from Google.</p>
<p>It is also odd that your loss is <code>nan</code>. A metric never changes the loss itself, but mixing <code>CategoricalAccuracy</code> (which expects one-hot labels) with <code>sparse_categorical_crossentropy</code> (which expects integer labels) can make the reported accuracy misleading. Just set the metric to <code>'accuracy'</code>; Keras will then dynamically pick the right variant, one of <code>[binary, categorical or sparse_categorical]</code>.</p>
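<p>For example, a sketch based on the compile call in the question:</p>
<pre><code># integer labels + sparse loss + the generic 'accuracy' metric
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
</code></pre>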
<p>Hope this helps.</p> | python|tensorflow|deep-learning|conv-neural-network|dropout | 1 |
1,243 | 61,875,790 | speed up a Pandas fillna by subcatagory mean (how to replace a for loop) | <p>My data contains several sub categories coded in the column "RID", I'm filling by the mean of each sub category. The code I've been using is very slow. Looking for a better method that gets rid of the for loop.</p>
<pre><code>filled = mergedf.copy()
for c,v in enumerate(mergedf.RID.unique()):
filled.loc[filled.RID == v, :] = filled.loc[filled.RID == v, :].fillna(filled.loc[filled.RID == v, :].mean())
filled.info()
</code></pre>
<p>I've been trying the following to speed it up as someone suggested groupby, but I can't get the merges to work properly.</p>
<pre><code>pts_mean = mergedf.groupby("RID").mean()
fill2 = merge.combine_first(pts_mean)
fill3 = pd.merge(mergedf, pts_mean, on="RID", how="left")
</code></pre>
<p>I've experimented with how = "inner" as well as how = "outer"</p>
<p>looking at my test data, before:</p>
<pre><code>print(mergedf.loc[mergedf.RID==2,"FDG"])
0 1.36926
1 1.21655
2 NaN
3 NaN
4 NaN
5 NaN
6 NaN
7 NaN
8 NaN
9 NaN
10 NaN
11 NaN
12 NaN
</code></pre>
<p>after the slow method (this is the desired result, I just don't want it to take so long)</p>
<pre><code>print(filled.loc[filled.RID==2,"FDG"])
0 1.369260
1 1.216550
2 1.292905
3 1.292905
4 1.292905
5 1.292905
6 1.292905
7 1.292905
8 1.292905
9 1.292905
10 1.292905
11 1.292905
12 1.292905
</code></pre>
<p>after the combine_first method</p>
<pre><code>print(fill2.loc[fill2.RID==2,"FDG"])
0 1.369260
1 1.216550
2 1.292905
3 1.074235
4 NaN
5 1.319690
6 NaN
7 NaN
8 1.264300
9 NaN
10 1.042469
11 NaN
12 NaN
</code></pre>
<p>after the pd.merge</p>
<pre><code>print(fill3.loc[fill3.RID==2,["FDG_x","FDG_y"]])
FDG_x FDG_y
0 1.36926 1.292905
1 1.21655 1.292905
2 NaN 1.292905
3 NaN 1.292905
4 NaN 1.292905
5 NaN 1.292905
6 NaN 1.292905
7 NaN 1.292905
8 NaN 1.292905
9 NaN 1.292905
10 NaN 1.292905
11 NaN 1.292905
12 NaN 1.292905
</code></pre> | <p>Let's try the following, using <code>groupby</code> with <code>transform</code>:</p>
<pre><code># assign the result back so the column is actually updated
filled['FDG'] = filled['FDG'].fillna(filled.groupby('RID')['FDG'].transform('mean'))
</code></pre>
<p>or </p>
<pre><code>fill4 = filled.fillna(filled.groupby('RID').transform('mean'))
</code></pre> | pandas|dataframe|pandas-groupby|fillna | 0 |
1,244 | 58,061,111 | Java TFLITE error when allocating memory for runForMultipleInputsOutputs | <p>I'm getting an Error when preparing outputs for TFLITE interpreter for Android Java. The model has 1 input and 4 outputs. </p>
<pre><code>interpreter.runForMultipleInputsOutputs(input, map_of_indices_to_outputs);
E/Run multiple: Internal error: Unexpected failure when preparing tensor allocations: tensorflow/lite/kernels/tile.cc:53 num_dimensions != num_multipliers (1 != 2)Node number 4 (TILE) failed to prepare.
</code></pre>
<p>The output requirements are a list of 4 float arrays:</p>
<pre><code>[ [1x1],[1x2], [1x2], [1x2] ]
</code></pre>
<p>The output in python for the predict is:</p>
<pre><code>In [56] output = model.predict(new_observation_scaled)
Out[56]:
[array([[0.]], dtype=float32),
array([[137.66626, 335.7148 ]], dtype=float32),
array([[0.16666616, 0.40643442]], dtype=float32),
array([[9.9915421e-01, 8.4577635e-04]], dtype=float32)]
</code></pre>
<p>So I prepared a Object list in JAVA:</p>
<pre><code>float [][] output0 = new float [1][1];
float [][] output1 = new float [1][2];
float [][] output2 = new float [1][2];
float [][] output3 = new float [1][2];
Object[] outputs = {output0,output1,output2,output3};
Map<Integer, Object> map_of_indices_to_outputs = new HashMap<>();
map_of_indices_to_outputs.put(0, output0);
map_of_indices_to_outputs.put(1, output1);
map_of_indices_to_outputs.put(2, output2);
map_of_indices_to_outputs.put(3, output3);
</code></pre>
<p>Can you help me find the error ?</p>
<p>EDIT:
This is the interpreter detail read from the tflite file generated with tf 2.0-rc1:</p>
<pre><code>f='.\\models\\pdb_20190923-163632.tflite'
interpreter = tf.lite.Interpreter(model_path=f)
interpreter
Out[16]: <tensorflow.lite.python.interpreter.Interpreter at 0x20684c69608>
interpreter.get_input_details()
Out[17]:
[{'name': 'RSSI',
'index': 4,
'shape': array([ 1, 15]),
'dtype': numpy.float32,
'quantization': (0.0, 0)}]
interpreter.get_output_details()
Out[18]:
[{'name': 'Identity',
'index': 0,
'shape': array([1, 1]),
'dtype': numpy.float32,
'quantization': (0.0, 0)},
{'name': 'Identity_1',
'index': 1,
'shape': array([1, 2]),
'dtype': numpy.float32,
'quantization': (0.0, 0)},
{'name': 'Identity_2',
'index': 2,
'shape': array([1, 2]),
'dtype': numpy.float32,
'quantization': (0.0, 0)},
{'name': 'Identity_3',
'index': 3,
'shape': array([1, 2]),
'dtype': numpy.float32,
'quantization': (0.0, 0)}]
</code></pre> | <p>I got the solution of the issue:</p>
<p>Just as the outputs are passed as an <code>Object[]</code> array, so are the inputs:</p>
<pre><code>Object[] outputs = {output0,output1,output2,output3};
Object[] inputs = {input};
interpreter.runForMultipleInputsOutputs(inputs, map_of_indices_to_outputs);
</code></pre> | java|android|tensorflow-lite | 0 |
1,245 | 57,854,204 | Why model.predict computes different values in this sample? | <p><strong>Context</strong></p>
<p>Given the following sample, I'm using Jupyter Notebook :</p>
<pre><code>import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
import numpy as np
x_input = np.array([[1,2,3,4,5]])
y_input = np.array([[10]])
model = Sequential()
model.add(Dense(units=32, activation="tanh", input_dim=x_input.shape[1], kernel_initializer='random_normal'))
model.add(Dense(units=1, kernel_initializer='random_normal'))
model.compile(loss='mse', optimizer='sgd', metrics=['accuracy'])
history = model.fit(x_input, y_input, epochs=10, batch_size=32)
</code></pre>
<p>When I run <code>model.predict(x_input)</code> I got:</p>
<blockquote>
<p>array([[9.993563]], dtype=float32)</p>
</blockquote>
<p>When I run <code>model.predict(np.array([[1,2,5,4,5]]))</code> I got:</p>
<blockquote>
<p>array([[10.180285]], dtype=float32)</p>
</blockquote>
<p><strong>Question</strong></p>
<p>Should I get the very same prediction in both cases? (While using the same fit model)</p> | <p>Well, the inputs are not the same: the first is <code>[1,2,3,4,5]</code> and the second is <code>[1,2,5,4,5]</code>. The third element of the two arrays is not the same.</p> | python|numpy|tensorflow|keras | 1 |
1,246 | 58,155,138 | Pandas Error: sequence item 0: expected str instance, NoneType found | <p>I have a dataframe as below: </p>
<pre><code>Car_Modal Color Number_Passenger
Proton Black 5
Proton Black 7
Perudua White 5
Perudua White 7
Perudua Red 7
Honda 5
</code></pre>
<p>Due to the Honda row have Null value at Color column, is show me error when I used below code:</p>
<p><code>df["Join"]=df.groupby("Car_Modal")["Color"].transform(lambda x :'<br>'.join(x.unique()))</code></p>
<p>Expected Output:</p>
<pre><code>Car_Modal Color Number_Passenger Join
Proton Black 5 Black
Proton Black 7 Black
Perudua White 5 White<br>Red
Perudua White 7 White<br>Red
Perudua Red 7 White<br>Red
Honda 5
</code></pre>
<p>Can anyone share ideas on how to solve this?</p> | <p>Try filtering out the rows where <code>Color</code> is null before grouping:</p>
<pre class="lang-py prettyprint-override"><code>df["Join"]=df[~df["Color"].isnull()].groupby("Car_Modal")["Color"] \
.transform(lambda x :'<br>'.join(x.unique()))
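
# Alternative (my suggestion, not from the original answer): drop the NaNs
# inside the lambda instead of filtering rows first; note that Honda's Join
# then becomes an empty string rather than NaN
df["Join"] = df.groupby("Car_Modal")["Color"] \
               .transform(lambda x: '<br>'.join(x.dropna().unique()))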
</code></pre> | python|pandas|pandas-groupby | 2 |
1,247 | 57,826,841 | Calculate the maximum number of items allowed based on a fixed given value | <p>I have a set of items in this dataframe:</p>
<pre><code>Items Calories
Beer 320
Hotdog 200
Popcorn 100
Coca-Cola 75
</code></pre>
<p>I need to calculate the fewest number of items I can have from the list to achieve <code>400</code> calories. Any suggestions?</p>
<p>I have calculated the total value of the calories, and I got stuck there.</p>
<pre><code>row_total = df_calories['Calories'].sum()
</code></pre> | <p>If you cannot pick a single item multiple times.</p>
<ol>
<li>Sort by calories</li>
</ol>
<pre class="lang-py prettyprint-override"><code>df = df.sort_values(by=['Calories'], ascending=False).reset_index()
</code></pre>
<p>Output</p>
<pre><code> index Items Calories
0 0 Beer 320
1 1 Hotdog 200
2 4 Sweet 160
3 2 Popcorn 100
</code></pre>
<ol start="2">
<li>Take cumulative sum & pick the first element where sum is greater than 400</li>
</ol>
<pre class="lang-py prettyprint-override"><code>idx = df[(df['Calories'].cumsum() > 400) == True].index[0]
df[0:idx+1]['Items'].tolist()
</code></pre>
<p>Output</p>
<pre><code>['Beer', 'Hotdog']
</code></pre>
<p>Hope this helps!!</p> | python|pandas | 0 |
1,248 | 36,988,677 | How to create matrices with different names inside a for loop | <p>I want to create the matrices 1x5: <code>matriz1</code>, <code>matriz2</code> and <code>matriz3</code>, with the values <code>i + j</code>, but my code doesn't work. Can someone help me?</p>
<pre><code>import numpy as np
for i in range(3):
name= 'matriz%d'%i
name= np.zeros((1,5))
for i in range(3):
name2 = 'matriz%d'%i
for j in range(5):
name2[j]=i+j
for i in range(3):
name3 = 'matriz%d'%i
print(name3)
</code></pre> | <p>In Python, these 2 lines just assign two different objects to the variable <code>name</code>.</p>
<pre><code>name= 'matriz%d'%i # assign a string
name= np.zeros((1,5)) # assign an array
</code></pre>
<p>Some other languages have a mechanism that lets you use the string as variable name, e.g. <code>$name = ...</code>. But in Python that is awkward, if not impossible. Instead you should use structures, such as a dictionary.</p>
<p>e.g.</p>
<pre><code>adict = {}
for i in range(3):
name= 'matriz%d'%i
adict[name] = np.zeros((1,5))
</code></pre>
<p>You can then access each array via a dictionary reference; with <code>range(3)</code> the keys are <code>'matriz0'</code> through <code>'matriz2'</code>, e.g. <code>adict['matriz2']</code>.</p>
<p>You could also use a list, and access individual arrays by number or list iteration:</p>
<pre><code>alist = [np.zeros((1,5)) for i in range(3)]
for i,A in enumerate(alist): # iteration with index
A[:] = i+np.arange(5)
for a in alist: # simple iteration
print(a)
</code></pre> | python|numpy|matrix | 1 |
1,249 | 54,943,923 | Dataframe with conflicting float formatting | <p>I have the below dataframe:</p>
<pre><code>pd.DataFrame({'Full Dataset': m1_baseline.params,
'Train Set': m1_train.params})
</code></pre>
<p>Which produces the below table:</p>
<pre><code> Full Dataset Train Set
Intercept 6.078966e+01 62.479667
DISTANCE 4.418002e-03 0.001389
AP_TOTAL_ARRIVALS -8.944526e-07 -0.000006
AL_TOTAL_FLIGHTS -7.643211e-06 -0.000008
Lunch -4.391630e+00 -5.179138
</code></pre>
<p>Obviously the use of scientific notation in the first column but not the second is confusing.</p>
<p>Is there a way to fix this?</p> | <p>You can try <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/style.html" rel="nofollow noreferrer">df.style:</a></p>
<pre><code>df.style.format('{:.2f}')
</code></pre>
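<p>If you also need the plain <code>print(df)</code> output to avoid scientific notation (the <code>Styler</code> above only affects rendered output, e.g. in a notebook), a global display option is another route; this is an extra note, not part of the original answer:</p>
<pre><code>pd.set_option('display.float_format', '{:.6f}'.format)
</code></pre>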
<p>This will show the numbers with 2 decimal places; change the number in the format string to get however many decimal places you want.</p> | python|pandas|scientific-notation | 1 |
1,250 | 54,733,971 | How to sum column values into a new df | <p>I'm pretty new to pandas/python and coding overall. Thus I got a question about coding sums of columns with pandas.</p>
<p>I have a 306x7 dataframe about past soccer results. Now I want to sum both the home goals and away goals for each club and put it into a new dataframe (18 rows for 18 clubs and 2 columns for homegoals and awaygoals fullseason).</p>
<p>Could anyone give me an idea on how to proceed?</p>
<pre><code>teams = Liga2['HomeTeam'].unique()
df = pd.DataFrame(index=teams, columns=['FTHG','FTAG'])
for team in teams:
df.loc[team, 'FTHG'] = [Liga2.HomeTeam == team].FTHG.sum()
df.loc[team, 'FTAG'] = [Liga2.AwayTeam == team].FTHG.sum()
</code></pre>
<p>Error:</p>
<hr>
<pre><code>AttributeError Traceback (most recent call last)
<ipython-input-12-a1b735dbadf3> in <module>
4
5 for team in teams:
----> 6 df.loc[team, 'FTHG'] = [Liga2.HomeTeam == team].FTHG.sum()
7 df.loc[team, 'FTAG'] = [Liga2.AwayTeam == team].FTHG.sum()
AttributeError: 'list' object has no attribute 'FTHG'
</code></pre>
<p>This is the df:</p>
<p><a href="https://imgur.com/a/4bKrYRz" rel="nofollow noreferrer">https://imgur.com/a/4bKrYRz</a></p>
<p>Thank you for your ideas.</p> | <p>The easiest way to think through this (no groupby) is to just create a unique list of teams and a df with home and away goals, then to add the sum of home and away goals for each team.</p>
<pre><code># list of unique teams (assuming home and away teams are identical)
teams = liga2['HomeTeam'].unique()
# create the dataframe
df = pd.DataFrame(index=teams, columns=['home_goals','away_goals'])
# for each team, populate the df with the sum of their home and away goals
for team in teams:
df.loc[team,'home_goals'] = liga2[ liga2.HomeTeam == team ].FTHG.sum()
df.loc[team,'away_goals'] = liga2[ liga2.AwayTeam == team ].FTAG.sum()
</code></pre>
<p>With <code>groupby</code>, all you need is:</p>
<pre><code># create the groupby sums, where the team name is the index
home = liga2.groupby('HomeTeam').sum()['FTHG']
away = liga2.groupby('AwayTeam')['FTAG'].sum()
# concat them as columns in a df
df = pd.concat( [home, away],axis=1 )
</code></pre> | pandas | 0 |
1,251 | 49,420,301 | Numpy salt and pepper image on color region? | <p>I have tons of images that look like the following:</p>
<p><a href="https://i.stack.imgur.com/issKU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/issKU.png" alt="enter image description here"></a></p>
<p>I want to add random black and white pixels (salt and pepper) to those images, but only within the colored circle. The black border around the circle must remain <code>[0, 0, 0]</code>. The purpose for this is to augment a machine learning dataset.</p>
<h3>Question</h3>
<p>How can this be done using Numpy?</p> | <p>The simplest way - generate random coordinates in given rectangle and check whether pixel at this position is not black. If not, change its color to random choice of black and white. Pseudocode:</p>
<pre><code>while saltcount < limit:
rx = random(width)
ry = random(height)
c = pixel[ry][rx]
if (c != 0):
pixel[ry][rx] = 0xFFFFFF * random(2)
saltcount++
</code></pre>
<p>This method rejects about 21% of tryouts (black area ratio for perfectly inscribed circle) but is very simple. If you know circle parameters, generate points only inside the circle:</p>
<pre><code>x = cx + r * sqrt(t) * cos(2 * Pi * a)
y = cy + r * sqrt(t) * sin(2 * Pi * a)
</code></pre>
<p>where <code>cx,cy,r</code> are circle center coordinates and radius, <code>t</code> and <code>a</code> are randoms in range <code>0..1</code></p>
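<p>Since the question asks for NumPy specifically, here is a small vectorized sketch of the first (mask-based) approach. This is an illustration added here, not part of the original answer, and it assumes a <code>uint8</code> RGB image whose background is exactly <code>[0, 0, 0]</code>:</p>
<pre><code>import numpy as np

def salt_and_pepper(img, amount=0.05, seed=None):
    """Apply salt & pepper noise only to the non-black pixels of an RGB image."""
    rng = np.random.default_rng(seed)
    out = img.copy()
    # coordinates of pixels inside the circle (any channel non-zero)
    ys, xs = np.nonzero(np.any(out != 0, axis=-1))
    n_noise = int(amount * len(ys))
    pick = rng.choice(len(ys), size=n_noise, replace=False)
    # half salt (white), half pepper (black)
    salt, pepper = pick[: n_noise // 2], pick[n_noise // 2 :]
    out[ys[salt], xs[salt]] = 255
    out[ys[pepper], xs[pepper]] = 0
    return out
</code></pre>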
<p>If you need to find the circle in the picture first, convert it to grayscale (or take any one of the r, g, b channels) and use a Hough transform to discover the circle parameters.</p> | numpy|image-processing|machine-learning|dataset | 5 |
1,252 | 49,540,703 | How to use dictionary to replace the items in the inner list of a list? | <p>How to use dictionary to replace the items in the inner list of a list?</p>
<p>This works fine</p>
<pre><code>import numpy as np
ss_dict = {
1 : np.array([1,0,0,0,0,0]),
2 : np.array([0,1,0,0,0,0]),
3 : np.array([0,0,1,0,0,0]),
4 : np.array([0,0,0,1,0,0]),
5 : np.array([0,0,0,0,1,0]),
6 : np.array([0,0,0,0,0,1]),
}
l=np.array([1,2,3])
l = np.array([ss_dict[i] for i in l])
print(l)
</code></pre>
<p>Output:</p>
<pre><code>[[1 0 0 0 0 0]
[0 1 0 0 0 0]
[0 0 1 0 0 0]]
</code></pre>
<p>But this got errors <code>TypeError: unhashable type: 'numpy.ndarray'</code></p>
<pre><code>l=np.array([[1,2,3],[4,5,6]])
l = np.array([ss_dict[i] for i in (j for j in l)])
print(l)
</code></pre>
<p>I want to output:</p>
<pre><code>[
[[1 0 0 0 0 0]
[0 1 0 0 0 0]
[0 0 1 0 0 0]]
[[0 0 0 1 0 0]
[0 0 0 0 1 0]
[0 0 0 0 0 1]]
]
</code></pre> | <p>This should help. You just need to create a list comprehension within the list comprehension.</p>
<pre><code>res = np.array([[ss_dict[j] for j in i] for i in l])
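
# aside (not from the original answer): ss_dict[k] is just row k of a 6x6
# identity matrix, so the same result can be built without the dictionary
res = np.eye(6, dtype=int)[l - 1]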
</code></pre>
<p>Result:</p>
<pre><code>[[[1 0 0 0 0 0]
[0 1 0 0 0 0]
[0 0 1 0 0 0]]
[[0 0 0 1 0 0]
[0 0 0 0 1 0]
[0 0 0 0 0 1]]]
</code></pre> | python|python-3.x|list|numpy|dictionary | 1 |
1,253 | 28,058,563 | Write to StringIO object using Pandas Excelwriter? | <p>I can pass a StringIO object to pd.to_csv() just fine:</p>
<pre><code>io = StringIO.StringIO()
pd.DataFrame().to_csv(io)
</code></pre>
<p>But when using the excel writer, I am having a lot more trouble. </p>
<pre><code>io = StringIO.StringIO()
writer = pd.ExcelWriter(io)
pd.DataFrame().to_excel(writer,"sheet name")
writer.save()
</code></pre>
<p>Returns an </p>
<pre><code>AttributeError: StringIO instance has no attribute 'rfind'
</code></pre>
<p>I'm trying to create an <code>ExcelWriter</code> object without calling <code>pd.ExcelWriter()</code> but am having some trouble. This is what I've tried so far:</p>
<pre><code>from xlsxwriter.workbook import Workbook
writer = Workbook(io)
pd.DataFrame().to_excel(writer,"sheet name")
writer.save()
</code></pre>
<p>But now I am getting an <code>AttributeError: 'Workbook' object has no attribute 'write_cells'</code> </p>
<p>How can I save a pandas dataframe in excel format to a <code>StringIO</code> object?</p> | <p>Pandas expects a filename path in the ExcelWriter constructor, although each of the writer engines supports <code>StringIO</code>. Perhaps that should be raised as a bug/feature request in Pandas.</p>
<p>In the meantime here is a workaround example using the Pandas <code>xlsxwriter</code> engine:</p>
<pre><code>import pandas as pd
import StringIO
io = StringIO.StringIO()
# Use a temp filename to keep pandas happy.
writer = pd.ExcelWriter('temp.xlsx', engine='xlsxwriter')
# Set the filename/file handle in the xlsxwriter.workbook object.
writer.book.filename = io
# Write the data frame to the StringIO object.
pd.DataFrame().to_excel(writer, sheet_name='Sheet1')
writer.save()
xlsx_data = io.getvalue()
</code></pre>
<p><strong>Update</strong>: As of Pandas 0.17 it is now possible to do this more directly:</p>
<pre><code># Note, Python 2 example. For Python 3 use: output = io.BytesIO().
output = StringIO.StringIO()
# Use the StringIO object as the filehandle.
writer = pd.ExcelWriter(output, engine='xlsxwriter')
</code></pre>
<p>See also <a href="http://xlsxwriter.readthedocs.io/working_with_pandas.html#saving-the-dataframe-output-to-a-string" rel="noreferrer">Saving the Dataframe output to a string</a> in the XlsxWriter docs.</p> | python|excel|pandas|stringio|xlsxwriter | 42 |
1,254 | 73,324,267 | Group by and create new column in python | <p>I have a large dataset and I would like to create a new column that shows the State base off the many zip codes from the Postal code column.</p>
<pre><code>data = {'Name':['Tom', 'nick', 'krish', 'jack', 'Petter'], 'Age':[20, 21, 19, 18, 52], 'Postal Code': [12345, 56789,12345, 96385, 56789]}
</code></pre>
<p>this is what I tried:</p>
<pre><code>def city (row):
if row['Postal Code'] == 12345 | 96385:
return 'Utah'
k = data.apply (lambda row: city(row), axis=1)
</code></pre>
<p>I get the error "AttributeError: 'dict' object has no attribute 'apply'"</p>
<p>expected result:</p>
<pre><code>name | age | postal code | State|
--------------------------------
Tom | 20 | 12345 | Utah |
</code></pre>
<p>I think there are better ways to do this, do you any any better approach?
Thank you!</p> | <p>You can try</p>
<pre class="lang-py prettyprint-override"><code># create a dictionary that maps postcode to state
d = {
12345: 'Utah',
96385: 'Utah',
}
df = pd.DataFrame(data)
df['State'] = df['Postal Code'].map(d)
# or
df = (pd.DataFrame(data)
.pipe(lambda df: df.assign(State=df['Postal Code'].map(d))))
# or
df = (pd.DataFrame(data)
.eval('State = `Postal Code`.map(@d)'))
</code></pre>
<pre><code>print(df)
Name Age Postal Code State
0 Tom 20 12345 Utah
1 nick 21 56789 NaN
2 krish 19 12345 Utah
3 jack 18 96385 Utah
4 Petter 52 56789 NaN
</code></pre> | python|python-3.x|pandas|group-by | 2 |
1,255 | 73,352,619 | How can I build an advanced formula from a dict of functions without using eval()? | <p>I have two dicts of functions that I want to use to build a larger function. The goal is to be able to substitute different functions in their place based on the dict keys. I know using eval() is not the best way in terms of security and speed, but I cannot come up with another way.</p>
<pre><code>def formula(x):
p1 = 10
p2 = 20
p3 = 30
p4 = 40
p5 = 50
p6 = 60
basic = {0:'+', 1:'-', 2:'*', 3:'/'}
advanced = {0:np.exp, 1:np.sin, 2:np.cos, 3:np.tan, 4:np.arcsin}
f = advanced[0](x basic[1] p1)
print(f)
</code></pre>
<p>I know the syntax is incorrect for f above but the goal is:</p>
<p>f = np.exp(x - p1)</p>
<p>Currently, I have it working by:</p>
<pre><code>f = eval(advanced[0] + '(' + 'x' + basic[1] + 'p1' + ')')
</code></pre> | <p><code>eval()</code> can be a security issue when you're evaluating input from an user, since in most cases you can't predict what they're going to input. But it seems safe to use in this scenario.</p>
<p>Anyway, here's a solution using lambda functions:</p>
<pre><code>def formula(x):
p1 = 10
p2 = 20
p3 = 30
p4 = 40
p5 = 50
p6 = 60
basic = {0: lambda a,b: a+b, 1: lambda a,b: a-b, 2: lambda a,b: a*b, 3: lambda a,b: a/b}
advanced = {0: lambda n: np.exp(n), 1: lambda n: np.sin(n), 2: lambda n: np.cos(n), 3: lambda n: np.tan(n), 4: lambda n: np.arcsin(n)}
f = advanced[0](basic[1](x, p1))
print(f)
</code></pre>
<p><strong>PS:</strong> This is an illustrative example. As noted by @wwii, lambda functions are not necessary for <code>advanced</code> since you are holding functions anyway. So <code>advanced[0](3)</code> will work as <code>np.exp(3)</code>.</p>
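<p>The same is true for <code>basic</code>: instead of wrapping the arithmetic in lambdas you can use the ready-made functions from the standard-library <code>operator</code> module (a side note, not from the original answer):</p>
<pre><code>import operator

basic = {0: operator.add, 1: operator.sub, 2: operator.mul, 3: operator.truediv}
</code></pre>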
<p>Note that you can hold functions with dictionary keys, as if they were any other value, e.g.:</p>
<pre><code>advanced = {0:np.exp, 1:np.sin, 2:np.cos, 3:np.tan, 4:np.arcsin}
foo = np.exp
print(np.exp(3))
print(advanced[0](3))
print(foo(3))
20.085536923187668
20.085536923187668
20.085536923187668
</code></pre> | python|numpy | 0 |
1,256 | 73,250,207 | How to use pre-trained models for text classification?Comparing a fine-tuned model with a pre-trained model without fine-tuning | <p>I want to know how much the fine-tuned model improves compared to the model without fine-tuning.I want to compare the performance of the pre-trained model(BERT) and the model(fine-tuned BERT) obtained by fine-tuning the pre-trained model on text classification.I know how to fine-tune BERT for text classification, but not very clear on how to use BERT directly for classification.what should I do?The following is the code for fine-tuning the model, how to rewrite it to directly use the pre-trained model.</p>
<pre><code> <!-- language: python -->
from transformers import BertTokenizer, BertModel
import torch
import torch.nn as nn
import torch.utils.data as Data
import torch.optim as optim
from sklearn.metrics import accuracy_score,matthews_corrcoef
from sklearn.model_selection import train_test_split
tokenizer_model = BertTokenizer.from_pretrained('bert-base-uncased')
pretrained_model = BertModel.from_pretrained("bert-base-uncased")
class MyDataSet(Data.Dataset):
def __init__ (self, data, label):
self.data = data
self.label = label
self.tokenizer = tokenizer_model
def __getitem__(self, idx):
text = self.data[idx]
label = self.label[idx]
inputs = self.tokenizer(text, return_tensors="pt",padding='max_length',max_length=256,truncation=True)
input_ids = inputs.input_ids.squeeze(0)
#token_type_ids = inputs.token_type_ids.squeeze(0)
attention_mask = inputs.attention_mask.squeeze(0)
#return input_ids, token_type_ids, attention_mask, label
return input_ids, attention_mask, label
def __len__(self):
return len(self.data)
data,label = [],[]
with open(path) as f:
for line in f.readlines():
a,b = line.strip().split('\t')
data.append(b)
if a == 'LOW':
label.append('0')
elif a == 'MEDIUM':
label.append('1')
else:
label.append('2')
label = [int(i) for i in label]
train_x,test_x,train_y,test_y = train_test_split(data, label, test_size = 0.15,random_state = 32, stratify=label)
dataset_train = MyDataSet(train_x,train_y)
dataset_test = MyDataSet(test_x,test_y)
dataloader_train = Data.DataLoader(dataset_train, batch_size=128, shuffle=True,num_workers=32,pin_memory=True)
dataloader_test = Data.DataLoader(dataset_test, batch_size=128, shuffle=True,num_workers=32,pin_memory=True)
class MyModel(nn.Module):
def __init__(self):
super(MyModel, self).__init__()
self.bert = pretrained_model
self.linear = nn.Linear(768,3)
def forward(self, input_ids, attention_mask):
output = self.bert(input_ids, attention_mask).pooler_output
print(output.shape) # torch.Size([1, 768])
output = self.linear(output)
return output
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
if torch.cuda.device_count() > 1:
print("Use", torch.cuda.device_count(), 'gpus')
model = MyModel()
model = nn.DataParallel(model)
model = model.to(device)
## model = MyModel().to(device)
loss_fn = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=1e-5)
for epoch in range(10):
for input_ids,attention_mask,label in dataloader_train:
train_input_ids,train_attention_mask,train_label = input_ids.to(device),attention_mask.to(device),label.to(device)
model.train()
pred = model(train_input_ids,train_attention_mask)
print('epoch:',epoch)
#print('pred,label:',pred,label)
loss = loss_fn(pred, train_label)
print('Loss:',loss.item())
pred = torch.argmax(pred,dim=1)
acc = (pred == train_label).float().mean()
print('acc:',acc)
loss.backward()
optimizer.step()
optimizer.zero_grad()
savename_train = str(path) +'_' + str(name) + '_train' + '.txt'
with open(savename_train,'a') as f:
f.write(str(epoch)+'\t'+str(loss.item())+'\t'+str(acc.item())+'\n')
model.eval()
with torch.no_grad():
for input_ids,attention_mask,label in dataloader_test:
validation_input_ids,validation_attention_mask,validation_label = input_ids.to(device),attention_mask.to(device),label.to(device)
pred = model(validation_input_ids,validation_attention_mask)
loss = loss_fn(pred, validation_label)
pred = torch.argmax(pred, dim=1)
acc = (pred == validation_label).float().mean()
print('acc:',acc)
savename_eval = str(path) +'_' + str(name) + '_val' + '.txt'
with open(savename_eval,'a') as f:
f.write(str(epoch)+'\t'+str(loss.item())+'\t'+str(acc.item())+'\n')
</code></pre> | <p>What you are trying to do does not make sense. The naive BERT model was retrained using a combination of masked language modelling objective and next sentence prediction. So, all it can do is predicting masked tokens, predicting if a pair of given sentence can be next to each other in a text. Most importantly, it can provide embeddings.</p>
<p>To use for classification you have to add a classification head to the end of the model. Initially, the weights of that layer is randomly initialised. If you do not fine tune the last layer, what do you really expect from random weights?</p>
<p>If you really want to compare the fine-tuned model to a baseline, take the embeddings vector from the BERT and use a tradional ML model like SVM or Tree based calssifier.</p> | python|pytorch|huggingface-transformers | 0 |
1,257 | 73,281,327 | Is there a more efficient way to create a data frame (Pandas) from semi structured data? | <p><strong>What i want to achieve:</strong></p>
<p>I want to create a (Pandas) data frame from a text file with variable-width formatted lines. For example the text file looks like</p>
<pre class="lang-none prettyprint-override"><code>Time_stamp:0.0, Column_0:1.0, Column_1:2.0
Time_stamp:1.0, Column_2:3.0, Column_3:4.0, Column_4:5.0
Time_stamp:2.0, Column_5:6.0
Time_stamp:3.0, Column_2:3.0, Column_3:4.0, Column_4:5.0
Time_stamp:4.0, Column_0:1.0, Column_1:2.0
Time_stamp:5.0, Column_2:3.0, Column_3:4.0, Column_4:5.0
...
</code></pre>
<p>and the file size can be up to several GBs (> 1 million lines). At the end, i want to convert the data from the text file to
a data frame with a similiar structure to</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Time_stamp</th>
<th>Column_0</th>
<th>Column_1</th>
<th>Column_2</th>
<th>Column_3</th>
<th>Column_4</th>
<th>Column_5</th>
</tr>
</thead>
<tbody>
<tr>
<td>0.0</td>
<td>1.0</td>
<td>2.0</td>
<td>NULL</td>
<td>NULL</td>
<td>NULL</td>
<td>NULL</td>
</tr>
<tr>
<td>1.0</td>
<td>NULL</td>
<td>NULL</td>
<td>3.0</td>
<td>4.0</td>
<td>5.0</td>
<td>NULL</td>
</tr>
<tr>
<td>2.0</td>
<td>NULL</td>
<td>NULL</td>
<td>NULL</td>
<td>NULL</td>
<td>NULL</td>
<td>6.0</td>
</tr>
<tr>
<td>3.0</td>
<td>NULL</td>
<td>NULL</td>
<td>3.0</td>
<td>4.0</td>
<td>5.0</td>
<td>NULL</td>
</tr>
<tr>
<td>4.0</td>
<td>1.0</td>
<td>2.0</td>
<td>NULL</td>
<td>NULL</td>
<td>NULL</td>
<td>NULL</td>
</tr>
<tr>
<td>5.0</td>
<td>NULL</td>
<td>NULL</td>
<td>3.0</td>
<td>4.0</td>
<td>5.0</td>
<td>NULL</td>
</tr>
</tbody>
</table>
</div>
<p>The final data frame will have around 140 columns.</p>
<p><strong>What i have tried:</strong></p>
<p>To create a Pandas data frame from the text file, I use the following Python function that creates a Python dictionary
for each line in the text file with the column name as the key. After a certain number of lines, all dictionaries are
converted to a Pandas data frame with sparse data types to avoid high memory consumption. This step is repeated until all lines
in the text file are processed. At the end all pandas data frames are concatenated to one data frame.</p>
<pre><code>def generate_data_frame(data: TextIO, delimiter: list, save_point: int) -> pd.DataFrame:
"""generate_data_frame takes a text file pointer and builds a Pandas data frame out of it
Args:
data (TextIO): a python file pointer with lines containing data
separated by the delimiters
delimiter (list): char separating the data in a line
save_point (int): number of lines from which the data is transformed to
a Pandas data frame. Regulates the memory usage
Returns:
pandas.DataFrame: containing data in tabular form with columns
casted as sparse data type
"""
# saves the dictionaries for each line of the data
line_dictionaries = []
# after a specified number of iterations (save_point) the dictionaries of
# the lines are converted to a Pandas data frame with columns
# casted as sparse data type. These Pandas data frames are saved in
# list_of_sub_data_frames and will be concatenated in a later step
list_of_sub_data_frames = []
for i, line in enumerate(data):
# each line is saved as dictionary with the column names as keys
line_dict = {}
# the data in each line can separated by first delimiter, default ","
columns = line.split(delimiter[0])
for column in columns:
column_parts = column.split(delimiter[1])
# first element of the column_parts is the column name
# second element of the column_parts is the column value in this line
try:
value = column_parts[1].rstrip("\n")
except IndexError as e:
# checks for empty lines in the file
raise IndexError(
f"In line number {i} the data can not be separated in "
"column name and value"
) from e
line_dict[column_parts[0]] = value
line_dictionaries.append(line_dict)
if len(line_dictionaries) >= save_point:
logging.info("save data")
# creates Pandas data frame with sparse data type to reduce the memory
# usage
sub_data_frame = pd.DataFrame.from_dict(
line_dictionaries, dtype=pd.SparseDtype(object, np.nan)
)
list_of_sub_data_frames.append(sub_data_frame)
line_dictionaries = []
# save last data in data frame
sub_data_frame = pd.DataFrame.from_dict(
line_dictionaries, dtype=pd.SparseDtype(object, np.nan)
)
list_of_sub_data_frames.append(sub_data_frame)
line_dictionaries = []
# concatenate all sub data frames to one data frame
data_frame = pd.concat(list_of_sub_data_frames)
# the column names contain a unit specification at the end
# cast columns with units to float
columns_with_unit = [
column for column in data_frame.columns if not "None" in column
]
data_frame[columns_with_unit] = data_frame[columns_with_unit].astype(
pd.SparseDtype(np.float64, np.nan)
)
list_of_sub_data_frames = []
return data_frame
</code></pre>
<p><strong>What is the problem:</strong></p>
<p>The above code works, but it is very slow and therefore I am looking for a faster and memory efficient way to
genereate a data frame. The data does not have to be structured to a data frame with pandas. In case there is
a better package to do the job, I am open to everything.</p> | <p>From the looks of it, you can split on whitespace into one massive row of data, then stack it, split it again on the <code>:</code> to separate the key/value pairs.</p>
<p>Then you can flag each group incrementally by checking if the value is <code>Time_stamp</code>. From this point it's a pivot.</p>
<pre><code>import pandas as pd
import io
data = io.StringIO("""Time_stamp:0.0, Column_0:1.0, Column_1:2.0 Time_stamp:1.0, Column_2:3.0, Column_3:4.0, Column_4:5.0 Time_stamp:2.0, Column_5:6.0 Time_stamp:3.0, Column_2:3.0, Column_3:4.0, Column_4:5.0 Time_stamp:4.0, Column_0:1.0, Column_1:2.0 Time_stamp:5.0, Column_2:3.0, Column_3:4.0, Column_4:5.0""")
df = pd.read_csv(data, header=None, delim_whitespace=True)
df = df.stack().str.replace(',','').str.split(':', expand=True)
df.columns = ['key','value']
df['group'] = df['key'].eq('Time_stamp').cumsum()
df = df.pivot_table(index='group', columns='key', values='value').reset_index(drop=True).rename_axis(None, axis=1)
print(df)
</code></pre>
<p>Output</p>
<pre><code> Column_0 Column_1 Column_2 Column_3 Column_4 Column_5 Time_stamp
0 1.0 2.0 NaN NaN NaN NaN 0.0
1 NaN NaN 3.0 4.0 5.0 NaN 1.0
2 NaN NaN NaN NaN NaN 6.0 2.0
3 NaN NaN 3.0 4.0 5.0 NaN 3.0
4 1.0 2.0 NaN NaN NaN NaN 4.0
5 NaN NaN 3.0 4.0 5.0 NaN 5.0
</code></pre> | python|pandas|dataframe | 0 |
1,258 | 73,224,555 | Converting dataframes created by groupby with multiple conditions to nested dict | <p>I have a dataframe with 10 columns that it follows:</p>
<pre><code>df:
| x | y | z | t | a | b | c | ....
1: | x1 | y1 | z1 | t1 | [a1, a2] | {b1: 1, b2: 2} | 0 | ....
2: | x2 | y2 | z2 | t2 | [a3, a4] | {b1: 3, b2: 4} | 2 | ....
3: | x1 | y3 | z2 | t1 | [a1, a4] | {b1: 1, b2: 3} | 2 | ....
4: | x3 | y1 | z5 | t3 | [a4, a5] | {b1: 6, b2: 2, b3: 1} | 24 | ....
.
.
.
</code></pre>
<p>I would like to split this dataframe based on values of 4 columns (let's say columns are: <code>x, y, z, t</code>) and convert it to a nested dict like:</p>
<pre><code>x1:
y1:
z1:
t1: df
t2: df
z2:
t1: df
t2: df
z3:
t1: df
t2: df
y2:
z1:
t1: df
t2: df
z2:
t1: df
t2: df
z3:
t1: df
t2: df
x2:
y1:
z1:
t1: df
t2: df
z2:
t1: df
t2: df
z3:
t1: df
t2: df
.
.
.
</code></pre>
<p>What I did:</p>
<pre><code>grouped_df = dict(list(df.groupby('x')))
for x_elem in [*x]:
grouped_df[x_elem] = dict(list(grouped_df[x_elem].groupby('y')))
for y_elem in [*grouped_df[x_elem]]:
grouped_df[x_elem][y_elem] = dict(list(grouped_df[x_elem][y_elem].groupby('z')))
for z_elem in [*grouped_df[x_elem][y_elem]]:
grouped_df[x_elem][y_elem][z_elem] = dict(list(grouped_df[x_elem][y_elem][z_elem].groupby('t')))
</code></pre>
<p>After this function, <code>grouped_df</code> becomes like nested dicts of dfs where each key corresponds to a variable of the column as I described on top.</p>
<pre><code>type(grouped_df): dict
grouped_df:{
x1: {
y1: {
z1: {
t1: df,
t2: df,
},
z2: {...},
},
y2: {...},
},
x2:{...},
x3:{...},
}
</code></pre>
<p>It is perfectly working so far. However, this method can be highly costly when working on big dataframes and I am sure there is a better way to do it. Any recommendations?</p> | <p>With the dataframe you provided:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.DataFrame(
{
"x": ["x1", "x2", "x1", "x3"],
"y": ["y1", "y2", "y3", "y1"],
"z": ["z1", "z2", "z2", "z5"],
"t": ["t1", "t2", "t1", "t3"],
"a": [["a1", "a2"], ["a3", "a4"], ["a1", "a4"], ["a4", "a5"]],
"b": [
{"b1": 1, "b2": 2},
{"b1": 3, "b2": 4},
{"b1": 1, "b2": 3},
{"b1": 6, "b2": 2, "b3": 1},
],
"c": [0, 2, 2, 24],
}
)
</code></pre>
<p>Here is one way to do what you're trying to achieve (I'm guessing a bit here, since your example is not fully reproducible):</p>
<pre class="lang-py prettyprint-override"><code>grouped_df = df.groupby(["x", "y", "z", "t"]).agg(list).to_dict(orient="index")
grouped_df = {key: pd.DataFrame(value) for key, value in grouped_df.items()}
grouped_df = {
x: {y: {z: {t: df}}} for (x, y, z, t), df in grouped_df.items()
}
</code></pre>
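<p>One caveat (my addition, not part of the original answer): the final dict comprehension rebuilds the <code>x</code>-level entry from scratch for every group, so when an <code>x</code> value occurs with more than one <code>y</code> (as <code>x1</code> does here) only the last group survives; that is why the <code>(x1, y1, z1, t1)</code> group does not appear in the output below. Building the nesting incrementally from the tuple-keyed dict (i.e. in place of the comprehension) avoids that:</p>
<pre class="lang-py prettyprint-override"><code># run this instead of the final comprehension (grouped_df is still keyed by tuples here)
nested = {}
for (x, y, z, t), sub_df in grouped_df.items():
    nested.setdefault(x, {}).setdefault(y, {}).setdefault(z, {})[t] = sub_df
</code></pre>
<p>The pretty-printed output below corresponds to the comprehension as originally written.</p>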
<p>Using <a href="https://docs.python.org/3/library/pprint.html#pprint.pprint" rel="nofollow noreferrer">pprint</a> and replacing <code>df</code> by <code>str(df)</code> to get a more readable representation, here is what grouped_df looks like:</p>
<pre class="lang-py prettyprint-override"><code>{
"x1": {
"y3": {
"z2": {
"t1": " a b c"
"0 [a1, a4] {'b1': 1, 'b2': 3} 2"
}
}
},
"x2": {
"y2": {
"z2": {
"t2": " a b c"
"0 [a3, a4] {'b1': 3, 'b2': 4} 2"
}
}
},
"x3": {
"y1": {
"z5": {
"t3": " a b c"
"0 [a4, a5] {'b1': 6, 'b2': 2, 'b3': 1} 24"
}
}
},
}
</code></pre> | python|pandas|dataframe|data-processing | 0 |
1,259 | 73,433,082 | Find max/mean in range defined by values from another column | <p><a href="https://i.stack.imgur.com/L6cQL.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/L6cQL.jpg" alt="example pic" /></a>I have a df as follows:</p>
<pre><code>import pandas as pd
import numpy as np
np.random.seed(5)
df = pd.DataFrame(np.random.randint(10, size = (20, 1)),columns=['A'])
s = [0,0,1,0,0,0,2,0,1,2,0,0,1,0,0,2,0,1,0,0]
df['B'] = s
>>> print(df)
A B
0 4 0
1 1 0
2 6 1
3 3 0
4 4 0
5 3 0
6 1 2
7 4 0
8 2 1
9 3 2
10 4 0
11 9 0
12 4 1
13 0 0
14 6 0
15 6 2
16 9 0
17 2 1
18 9 0
19 3 0
</code></pre>
<p>My goal is add two new columns, namely 'C_Max' and 'D_Mean'.</p>
<p>If the value of column B is 1, then the index from ‘1’ to the next occurrence of ‘2’ is [2:6], put the maximum value between [2:6] in column A into column C_Max at the same position with B1's 1, i.e. [2] in column C_Max, then average all the numbers [2:6] in column A, and put the result in the same position as column B in column D_Mean, that is, [2] in column D_mean. And so on.</p>
<p>Ignored if 2 does not appear after 1.The values of other cells in columns C_Max and D_min do not matter.</p>
<p>Desired output:</p>
<pre><code>>>> df
A B C_Max D_Mean
0 4 0 NaN NaN
1 1 0 NaN NaN
2 6 1 6.0 3.4
3 3 0 NaN NaN
4 4 0 NaN NaN
5 3 0 NaN NaN
6 1 2 NaN NaN
7 4 0 NaN NaN
8 2 1 3.0 2.5
9 3 2 NaN NaN
10 4 0 NaN NaN
11 9 0 NaN NaN
12 4 1 6.0 4.0
13 0 0 NaN NaN
14 6 0 NaN NaN
15 6 2 NaN NaN
16 9 0 NaN NaN
17 2 1 NaN NaN
18 9 0 NaN NaN
19 3 0 NaN NaN
</code></pre> | <p>You can achieve that with <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.agg.html" rel="nofollow noreferrer"><code>agg</code></a> and <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.merge.html" rel="nofollow noreferrer"><code>merge</code></a>.</p>
<p>Setup:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.DataFrame({'A': [4, 1, 6, 3, 4, 3, 1, 4, 2, 3, 4, 9, 4, 0, 6, 6, 9, 2, 9, 3],
'B': [0, 0, 1, 0, 0, 0, 2, 0, 1, 2, 0, 0, 1, 0, 0, 2, 0, 1, 0, 0]
})
B_groups = (df.B.eq(1) | df.B.shift().eq(2)).cumsum()
funcs = ["max", "mean"]
</code></pre>
<p>Merge the grouped and aggregated data with the original dataframe.
(Note how this is very maintainable – if you need additional metrics, just amend the <code>funcs</code> list.)</p>
<pre class="lang-py prettyprint-override"><code>df2 = df.merge(
df.groupby(B_groups).A.agg(funcs),
left_on=B_groups,
right_index=True,
).drop("key_0", axis=1) # drop new column introduced by merge
</code></pre>
<p>And that is basically it. You get:</p>
<pre class="lang-py prettyprint-override"><code>>>> df2.head(10)
A B max mean
0 4 0 4 2.5
1 1 0 4 2.5
2 6 1 6 3.4
3 3 0 6 3.4
4 4 0 6 3.4
5 3 0 6 3.4
6 1 2 6 3.4
7 4 0 4 4.0
8 2 1 3 2.5
9 3 2 3 2.5
</code></pre>
<p>To get rid of the superfluous values, you can re-assign the two new columns, keeping everything up to the last
non-zero value in <code>B</code>.</p>
<pre class="lang-py prettyprint-override"><code>df2 = df2.loc[:df2.B[::-1].idxmax()]
df2[funcs] = df2.loc[df2.B.eq(1), funcs]
</code></pre>
<p>Final result:</p>
<pre class="lang-py prettyprint-override"><code> A B max mean
0 4 0 NaN NaN
1 1 0 NaN NaN
2 6 1 6.0 3.4
3 3 0 NaN NaN
4 4 0 NaN NaN
5 3 0 NaN NaN
6 1 2 NaN NaN
7 4 0 NaN NaN
8 2 1 3.0 2.5
9 3 2 NaN NaN
10 4 0 NaN NaN
11 9 0 NaN NaN
12 4 1 6.0 4.0
13 0 0 NaN NaN
14 6 0 NaN NaN
15 6 2 NaN NaN
</code></pre> | python|pandas|dataframe | 3 |
1,260 | 73,496,777 | How to convert n*1 array into n*m array where m is unique values in the array? | <p>I have an array of array( [1,2,3, 4, 1,2 ,3 ,3,3,3]) having shape (10,)</p>
<pre><code>a = np.array([1,2,3, 4, 1,2 ,3 ,3,3,3])
print(a)
print(a.shape)
</code></pre>
<p>which has unique values 1,2,3,4 ie m = 4, unique values.
Actual data i quite large nad has nuniuqe of ~300
How to pivot it to get an array of shape (10,4)</p>
<p>Expected output</p>
<pre><code> array([[1, 0, 0, 0],
[0, 1, 0, 0],
[0, 0, 1, 0],
[0, 0, 0, 1],
[1, 0, 0, 0],
...,
       [0, 0, 1, 0]])
</code></pre> | <pre><code>np.eye(a.max(),dtype=int)[a - 1]
array([[1, 0, 0, 0],
[0, 1, 0, 0],
[0, 0, 1, 0],
[0, 0, 0, 1],
[1, 0, 0, 0],
[0, 1, 0, 0],
[0, 0, 1, 0],
[0, 0, 1, 0],
[0, 0, 1, 0],
[0, 0, 1, 0]])
</code></pre> | python|arrays|numpy | 4 |
1,261 | 73,225,508 | Finding the minimum euclidian distance from all points of class 0 to all points of class 1 | <p><strong>The Problem</strong></p>
<p>I have a dataset with 4 columns and ~90k rows. Columns 1, 2, 3 are the features and column 4 is the target class (binary classification, either 0 or 1).</p>
<p>I want to add a 5th column to my dataset that will contain the closest Euclidian distance from row[i] to another point of the opposite category and then sort all rows based on that column. The category is not to be taken as a dimension when calculating the distance.</p>
<p>The feature column names are P1, P2, P3 and the target is T1</p>
<p><strong>My Attempt</strong></p>
<pre><code>df = get_transformed_df()
df_cat0 = df[df["T1"] == 0]
df_cat1 = df[df["T1"] == 1]
df_cat0 = df_cat0.drop(columns=["T1"])
df_cat1 = df_cat1.drop(columns=["T1"])
#add new columns to df_cat0 and df_cat1 filled with zeros
for ix in range(0, df_cat0.shape[0]):
for iy in range(0, df_cat1.shape[0]):
dist = np.linalg.norm(df_cat0.iloc[ix] - df_cat1.iloc[iy])
#closest = min(previous_min, dist)
#add closest to row[i], new_col
</code></pre>
<p><strong>Reasoning</strong></p>
<p>I have tried to represent the idea with two nested for loops and by splitting the initial dataframe into two based on the target class. However, this is horribly inefficient and slow and I did not bother trying to finish implementing it as it would compute probably for hours.</p>
<p><strong>Question</strong></p>
<p>How can I do this efficiently using perhaps broadcasting?</p> | <p>Thanks to <strong>Quang Hoang</strong> from the comments and his reference to the spatial distance matrix from the Scipy module I managed to solve my problem. I decided to post it as the answer to this question in case somebody has a similar problem.</p>
<pre><code>import pandas as pd
from scipy.spatial import distance_matrix
df = pd.read_csv("dataset.csv")
df_cat0 = df[df["T1"] == 0]
df_cat1 = df[df["T1"] == 1]
C0 = df_cat0.iloc[:,:-1]
C1 = df_cat1.iloc[:,:-1]
dm = distance_matrix(C0, C1)
df_cat0.insert(4, "DIST_MIN", dm.min(axis=1))
df_cat1.insert(4, "DIST_MIN", dm.min(axis=0))
df = pd.concat([df_cat0, df_cat1])
df = df.sort_values(by=["DIST_MIN"])
</code></pre>
<p>First I split the dataframe into two dataframes based on the categories (category 0 and 1).</p>
<p>Next I keep a reference to these frames but slice them so that I only keep the features (target column is last in my dataset hence :-1)</p>
<p>Then I compute the distance matrix with the <strong>distance_matrix</strong> module from the <strong>scipy.spatial</strong> library which creates a <strong>(C0.size[0] x C1.size[0])</strong> matrix containing all the distance measures from each point to each other point.</p>
<p>Then I use the min aggregation functions to collapse it into two <strong>(C0.size[0] x 1)</strong> and <strong>(C1.size[0] x 1)</strong> vectors and depending on which axis I've collapsed I insert it as a new column to either <strong>df_cat0</strong> or <strong>df_cat1</strong>.</p>
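<p>One practical caveat (my addition, not part of the original solution): with ~90k rows split into two classes, the full distance matrix has on the order of 45,000 x 45,000 entries, roughly 16 GB as <code>float64</code>. If memory becomes a problem, a KD-tree query gives the same minimum distances without materialising the matrix:</p>
<pre><code>from scipy.spatial import cKDTree

# nearest opposite-class distance for every point, no full matrix needed
dist_min_0, _ = cKDTree(C1.values).query(C0.values)  # class 0 -> nearest class 1
dist_min_1, _ = cKDTree(C0.values).query(C1.values)  # class 1 -> nearest class 0
</code></pre>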
<p>Lastly I use the pandas <strong>concat</strong> function to stich these two frames back together into one and sort it by the new <strong>DIST_MIN</strong> column.</p> | python|pandas|dataframe|data-analysis | 0 |
1,262 | 35,068,722 | Pandas assign value of one column based on another | <p>Given the following data frame:</p>
<pre><code>import pandas as pd
df = pd.DataFrame(
{'A':[10,20,30,40,50,60],
'B':[1,2,1,4,5,4]
})
df
A B
0 10 1
1 20 2
2 30 1
3 40 4
4 50 5
5 60 4
</code></pre>
<p>I would like a new column 'C' whose values equal those in 'A' where the corresponding value in 'B' is less than 3, and 0 otherwise.
The desired result is as follows:</p>
<pre><code> A B C
0 10 1 10
1 20 2 20
2 30 1 30
3 40 4 0
4 50 5 0
5 60 4 0
</code></pre>
<p>Thanks in advance!</p> | <p>Use <a href="http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.where.html" rel="noreferrer"><code>np.where</code></a>:</p>
<pre><code>df['C'] = np.where(df['B'] < 3, df['A'], 0)
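# an equivalent pandas-only alternative: df['C'] = df['A'].where(df['B'] < 3, 0)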
>>> df
A B C
0 10 1 10
1 20 2 20
2 30 1 30
3 40 4 0
4 50 5 0
5 60 4 0
</code></pre> | python-3.x|pandas | 5 |
1,263 | 67,361,002 | How to train and save multi class artificial neural network model using tensorflow? | <p>I'm trying to train a multi class classification neural network model using tensorflow. So I have 24 feature vectors that's in the form of numpy array that looks like this when I print it:</p>
<pre><code>[[1 0 0 ... 0 1 1]
[1 0 0 ... 0 1 1]
[1 0 0 ... 0 1 1]
...
[1 0 0 ... 2 0 0]
[1 0 0 ... 2 0 0]
[1 0 0 ... 2 0 0]]
</code></pre>
<p>Above is the <code>x_train</code> dataset that I want to train. It has a shape of</p>
<pre><code>(10799, 24)
</code></pre>
<p>Then the <code>y_train</code> dataset looks like this</p>
<pre><code>[ 307 307 307 ... 257947 257947 257947]
</code></pre>
<p>And it has a shape of:</p>
<pre><code>(10799,)
</code></pre>
<p>So <code>y_train</code> holds the class labels; the numbers shown there are IDs, and there are 480 classes in total. So far, my attempt at training is this:</p>
<pre><code>#Normalize the data
x_train = x_train/x_train.max()
#Convert the y_train to be one-hot encoded because they're not a regression problem, to do categorical analysis by Keras.
from keras.utils import to_categorical
y_cat_train = to_categorical(y_train)
#BUILDING THE MODEL
from keras.models import Sequential
from keras.layers import Dense
model = Sequential()
model.add(Dense(24,input_dim=24,activation='relu'))
model.add(Dense(units=48,activation='relu'))
model.add(Dense(units=96,activation='relu'))
model.add(Dense(units=192,activation='relu'))
model.add(Dense(units=384,activation='relu'))
model.add(Dense(units=420,activation='relu'))
model.add(Dense(units=450,activation='relu'))
model.add(Dense(480,activation='softmax'))
model.compile(loss='categorical_crossentropy',optimizer='rmsprop',metrics=['accuracy'])
print(model.summary())
#TRAINING THE MODEL
model.fit(x_train,y_cat_train,epochs=25)
#SAVING THE MODEL
model.save('myModel.h5')
</code></pre>
<p>But then I get an error saying:</p>
<p><code>ValueError: Shapes (None, 257948) and (None, 480) are incompatible</code></p>
<p>Can anyone teach me or explain how to use tensorflow properly to train and save the model for multi class classification? And please explain what I did wrong in the code and possible solutions to this problem?</p> | <p>You have to provide <code>input_shape</code> parameter</p>
<pre class="lang-py prettyprint-override"><code>
#Normalize the data
x_train = x_train/x_train.max()
#Convert the y_train to be one-hot encoded because they're not a regression problem, to do categorical analysis by Keras.
from keras.utils import to_categorical
y_cat_train = to_categorical(y_train)
#BUILDING THE MODEL
from keras.models import Sequential
from keras.layers import Dense
model = tf.keras.Sequential(
[
layers.Dense(24, activation="relu", input_shape=(x_train.shape[1],)),
layers.Dense(48,activation='relu'),
layers.Dense(96,activation='relu'),
layers.Dense(192,activation='relu'),
layers.Dense(384,activation='relu'),
layers.Dense(420,activation='relu'),
layers.Dense(450,activation='relu'),
layers.Dense(y_cat_train.shape[1], activation="softmax")  # one output unit per one-hot column
]
)
model.compile(loss='categorical_crossentropy',optimizer='rmsprop',metrics=['accuracy'])
print(model.summary())
#TRAINING THE MODEL
model.fit(x_train,y_cat_train,epochs=25)
#SAVING THE MODEL
model.save('myModel.h5')
</code></pre>
<p>NOTE:
With a network that contains more than 3 <code>Dense</code> layers, you have to increase the number of <code>epochs</code> in order to get the network to converge.</p>
<p>In my opinion it is better to start with only one layer whose size matches the number of input features, and then modify the network from there.</p>
<p>In your case, that means starting with a single layer of <code>x_train.shape[1]</code> units.</p>
<p>Avoid using (when possible) raw numpy arrays as X, y. Instead, feed the data through TensorFlow's <code>tf.data</code> structure.</p>
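<p>One more detail explains the original error, <code>Shapes (None, 257948) and (None, 480) are incompatible</code>: the labels in <code>y_train</code> are raw IDs, so <code>to_categorical</code> creates one column per integer up to the largest ID instead of 480 columns. A minimal sketch of re-encoding the IDs to consecutive class indices first (assuming the IDs are arbitrary integers):</p>
<pre><code>import numpy as np
from keras.utils import to_categorical

# map each unique ID to an index 0..479, then one-hot encode
_, y_idx = np.unique(y_train, return_inverse=True)
y_cat_train = to_categorical(y_idx)   # shape (10799, 480)
</code></pre>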
<p>You can refer to the following code example:<br />
<a href="https://github.com/alessiosavi/tensorflow-face-recognition/blob/90d4acbea8f79539826b50c82a63a7c151441a1a/dense_embedding.py#L155" rel="nofollow noreferrer">https://github.com/alessiosavi/tensorflow-face-recognition/blob/90d4acbea8f79539826b50c82a63a7c151441a1a/dense_embedding.py#L155</a></p> | python|tensorflow|deep-learning|neural-network | 0 |
1,264 | 67,541,172 | Python - Pandas - Reading excel file from o365 | <p>I'm trying to read an o365 excel file into a pandas dataframe for analysis. I'm able to connect and authenticate, however am getting the error: "Unsupported format, or corrupt file: Expected BOF record; found b'\r\n<!DOCT' "</p>
<p>Some googling of the error showed that this can be an encoding issue, or that xlrd is misinterpreting the file as being encrypted. However, none of the solutions I found apply to my exact scenario, trying to read from o365 into pandas.</p>
<p>I know this is possible, does anybody see anything inherently wrong with my method of reading the spreadsheet?</p>
<p>Code:</p>
<pre><code>from office365.runtime.auth.authentication_context import AuthenticationContext
from office365.sharepoint.client_context import ClientContext
from office365.sharepoint.files.file import File
import io
from xlrd import *
import pandas as pd
url = 'https://somedomain.somesite.com/:x:/r/sites/IT/_layouts/15/guestaccess.aspx?e=4%3Ay8lZaY&at=9&CID=0CEFB96F-C585-4B93-95D8-7B9161922C05&wdLOR=c2549E09D-B403-4600-9D64-4E3AFD70A2D3&share=EbCMUuWuEsRJpPItV4SAhHQBum7Fe0ISfki4Na-k0VIlsA'
username = '[email protected]'
password = 'fakepw'
def pullO365():
ctx_auth = AuthenticationContext(url)
if ctx_auth.acquire_token_for_user(username, password):
ctx = ClientContext(url, ctx_auth)
web = ctx.web
ctx.load(web)
ctx.execute_query()
print("O365 authentication successful")
else:
print("O365 authentication failed.")
response = File.open_binary(ctx, url)
#save data to BytesIO stream
bytes_file_obj = io.BytesIO()
bytes_file_obj.write(response.content)
bytes_file_obj.seek(0) #set file object to start
#read excel file and each sheet into pandas dataframe
normResults = pd.read_excel(bytes_file_obj, sheet_name=None, usecols="H,G,I,J,F")
df = pd.DataFrame(normResults)
return df
</code></pre> | <p>This worked for me. Don't replace the spaces in your folder names with special characters; keep them as literal spaces.</p>
<pre><code>from shareplum import Site
from shareplum import Office365
from shareplum.site import Version
authcookie = Office365('https://<organization>.sharepoint.com/', username='<your username>', password='<your pw>').GetCookies()
site = Site('https://<organization>.sharepoint.com/teams/<team name>', version=Version.v365, authcookie=authcookie)
#version=Version.v2016,
folder = site.Folder('Shared Documents/<Folder>/<Subfolder>/<Subfolder>')
file = folder.get_file('xxxxx.csv')
with open("xxxxx.csv", "wb") as fh:
fh.write(file)
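# the downloaded file is now on disk and can be read with pandas, e.g. pd.read_csv("xxxxx.csv")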
print('---')
folder.upload_file('xlsx', 'xxxxx.xlsx')
</code></pre> | python|python-3.x|pandas | 1 |
1,265 | 34,857,708 | How to group pandas DF entries and progress column values? | <p>I have groupby my Dataframe by customer, year and month:</p>
<pre><code>my_list = ['Customer','Year','Month']
g = df.groupby(my_list)['COST'].sum()
Customer Year Month COST
1000061 2013 12 122.77
2014 1 450.40
2 249.61
3 533.58
4 337.32
5 482.49
1000063 2013 12 875.67
2014 1 376.95
2 308.90
3 469.76
4 394.34
</code></pre>
<p>But now I want to add 2 new columns (progressing the COST column by one or two positions):
- 1. Expected costs on the next month
- 2. Expected costs on the 2nd month</p>
<pre><code>Customer Year Month COST COST_NextMonth COST_2Months
1000061 2013 12 122.77 450.40 249.61
2014 1 450.40 249.61 533.58
2 249.61 533.58 337.32
3 533.58 337.32 482.49
4 337.32 482.49 0
5 482.49 0 0
1000063 2013 12 875.67 376.95 308.9
2014 1 376.95 308.9 469.76
2 308.90 469.76 394.34
3 469.76 394.34 0
4 394.34 0 0
</code></pre>
<p>How do I achieve this?</p> | <p>IIUC you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html" rel="nofollow"><code>concat</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.shift.html" rel="nofollow"><code>shift</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.fillna.html" rel="nofollow"><code>fillna</code></a>:</p>
<pre><code>print pd.concat([g,
g.groupby(level=0).shift(-1).fillna(0),
g.groupby(level=0).shift(-2).fillna(0)], axis=1,
keys=['COST','COST_NextMonth','COST_2Months'])
COST COST_NextMonth COST_2Months
Customer Year Month
1000061 2013 12 122.77 450.40 249.61
2014 1 450.40 249.61 533.58
2 249.61 533.58 337.32
3 533.58 337.32 482.49
4 337.32 482.49 0.00
5 482.49 0.00 0.00
1000063 2013 12 875.67 376.95 308.90
2014 1 376.95 308.90 469.76
2 308.90 469.76 394.34
3 469.76 394.34 0.00
4 394.34 0.00 0.00
</code></pre>
<p>Next solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reset_index.html" rel="nofollow"><code>reset_index</code></a>:</p>
<pre><code>df['COST_NextMonth'] = g.reset_index().groupby('Customer')['COST'].shift(-1).fillna(0)
df['COST_2Months'] = g.reset_index().groupby('Customer')['COST'].shift(-2).fillna(0)
print df
Customer Year Month COST COST_NextMonth COST_2Months
0 1000061 2013 12 122.77 450.40 249.61
1 1000061 2014 1 450.40 249.61 533.58
2 1000061 2014 2 249.61 533.58 337.32
3 1000061 2014 3 533.58 337.32 482.49
4 1000061 2014 4 337.32 482.49 0.00
5 1000061 2014 5 482.49 0.00 0.00
6 1000063 2013 12 875.67 376.95 308.90
7 1000063 2014 1 376.95 308.90 469.76
8 1000063 2014 2 308.90 469.76 394.34
9 1000063 2014 3 469.76 394.34 0.00
10 1000063 2014 4 394.34 0.00 0.00
</code></pre> | python|pandas | 1 |
1,266 | 59,909,041 | Python - How to convert from object to float | <p>I had an XLSX file with 2 columns namely <code>months</code> and <code>revenue</code> and saved it as a CSV file. By using pandas to read my csv file, the <code>revenue</code> column has now turned into object. How can I change this column to float?</p>
<pre><code>data = pd.DataFrame
dat['revenue']
7980.79
Nan
1000.25
17800.85
.....
Nan
2457.85
6789.33
</code></pre>
<p>This is the column I want to change, but it has been giving me different errors.</p>
<p>I tried <code>astype</code> and <code>to_numeric</code>, but with no success.</p>
<p>One of the errors I got is:</p>
<blockquote>
<p>Cannot parse a string '798.79'</p>
</blockquote> | <p>Now using nucsit026's answer to create a slightly different dataFrame with strings </p>
<pre><code>dic = {'revenue':['7980.79',np.nan,'1000.25','17800.85','None','2457.85','6789.33']}
df = pd.DataFrame(dic)
print(df)
print(df['revenue'].dtypes)
</code></pre>
<p>Output:</p>
<pre><code> revenue
0 7980.79
1 NaN
2 1000.25
3 17800.85
4 None
5 2457.85
6 6789.33
dtype('O')
</code></pre>
<p>try this:</p>
<pre><code>df['revenue']=pd.to_numeric(df['revenue'], errors='coerce').fillna(0, downcast='infer')
</code></pre>
<p>it will replace <code>nan</code> with 0s</p>
<p>Output:</p>
<pre><code>0 7980.79
1 0.00
2 1000.25
3 17800.85
4 0.00
5 2457.85
6 6789.33
Name: revenue, dtype: float64
</code></pre>
<p><strong>EDIT</strong>:</p>
<p>From your shared error if quotes are the problem you can use</p>
<pre><code>df['revenue']=df['revenue'].str.strip("'")
</code></pre>
<p>and then try to convert to float using above mentioned code</p>
<p><strong>EDIT2</strong></p>
<p>OP had some spaces in the column values like this </p>
<pre><code>Month Revenue
Apr-13 16 004 258.24
May-13
Jun-13 16 469 157.71
Jul-13 19 054 861.01
Aug-13 20 021 803.71
Sep-13 21 285 537.45
Oct-13 22 193 453.80
Nov-13 21 862 298.20
Dec-13 10 053 557.64
Jan-14 17 358 063.34
Feb-14 19 469 161.04
Mar-14 22 567 078.21
Apr-14 20 401 188.64
</code></pre>
<p>In this case use following code:</p>
<pre><code>df['revenue']=df['revenue'].replace(' ', '', regex=True)
</code></pre>
<p>and then perform the conversion</p> | python|pandas | 2 |
1,267 | 60,048,624 | Pandas: Compare rows from two columns with several RegEx and copy right ones into a own column | <p>I'd like to ask for help for my problem. So, I have this dataframe with two columns and have a huge dataset of about 9500~ rows with 2 columns. Sometimes I have to take a subset from column A, sometimes from B - depending on the RegEx. But I have more than two of them (RegEx) but they are kinda unique. The result should be written into a third column with the 'right' value. It must be done with RegEx.</p>
<p>I hope I can make it more clear with this (small) example:</p>
<p><strong>Input</strong>: </p>
<pre><code>df = pd.DataFrame({'A': ['No animal', 'No animal', 'Zoo One', 'Zoo Two', 'Me-Lo-N', 'Ap-Pl-E'], 'B': ['EE.Elephant', 'SS.Penguin', 'EE.Elephant', 'SS.Penguin', 'GB One', 'GB Two']})
>>> df
A B
0 No animal EE.Elephant
1 No animal SS.Penguin
2 Zoo One EE.Elephant
3 Zoo Two SS.Penguin
4 Me-Lo-N GB One
5 Ap-Pl-E GB Two
</code></pre>
<p>Now I 'identified' several patterns.</p>
<ul>
<li>If in column 'A' is 'No animal', take the value from column 'B' no matter what.</li>
<li>If in column 'A' is 'Zoo ...' and in column 'B' something like 'XX.Animalname', take the left value from 'A' (Zoo ...)</li>
<li>If in column 'A' is something like 'XX-YY-Z' and in column 'B' 'GB ...', take the value/s from column 'A'.</li>
</ul>
<p><strong>The output should look like</strong>:</p>
<pre><code> A B C
0 No animal EE.Elephant EE.Elephant
1 No animal SS.Penguin SS.Penguin
2 Zoo One EE.Elephant Zoo One
3 Zoo Two SS.Penguin Zoo Two
4 Me-Lo-N GB One Me-Lo-N
5 Ap-Pl-E GB Two Ap-Pl-E
</code></pre>
<p>I built the following RegEx for them:</p>
<ul>
<li>(No animal)</li>
<li>(\w{2}..*) f.e. for EE. Bla</li>
<li>(Zoo.*) f.e. for Zoo...</li>
<li>(\w{2}-.+-.+) f.e. for Me-Lo-N</li>
<li>(GB.+) ...</li>
</ul>
<p>That's it. What's the best approach to compare specific RegEx to each other between two columns and paste the answer into its own column?</p>
<p>Really appreciated! Thank you! </p> | <p>Use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.select.html" rel="nofollow noreferrer">np.select</a> and <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.match.html" rel="nofollow noreferrer">str.match</a> as below to get your desired output.</p>
<pre><code>df['C']=np.select([df.A=='No animal', df.A.str.match('Zoo.*') & df.B.str.match('\w{2}[.].'), df.A.str.match('\w{2}-.+-.+') & df.B.str.match('GB.+')], [df.B, df.A,df.A])
</code></pre>
<p>print(df)</p>
<p><strong>Output</strong></p>
<pre><code> A B C
0 No animal EE.Elephant EE.Elephant
1 No animal SS.Penguin SS.Penguin
2 Zoo One EE.Elephant Zoo One
3 Zoo Two SS.Penguin Zoo Two
4 Me-Lo-N GB One Me-Lo-N
5 Ap-Pl-E GB Two Ap-Pl-E
</code></pre> | python|pandas|dataframe|data-science | 2 |
1,268 | 59,906,550 | How to get output of middel layers in LSTM autoencoder in keras | <p>I have a multi-layer LSTM autoencoder with the following characteristics.</p>
<pre><code>model = Sequential()
model.add(LSTM(250, dropout_U = 0.2, dropout_W = 0.2)) #L1
model.add(LSTM(150, dropout_U = 0.2, dropout_W = 0.2)) #L2
model.add(LSTM(100, dropout_U = 0.2, dropout_W = 0.2)) #L3
model.add(LSTM(150, dropout_U = 0.2, dropout_W = 0.2)) #L4
model.add(LSTM(250, dropout_U = 0.2, dropout_W = 0.2)) #L5
model.compile(optimizer='adam', loss='mse')
</code></pre>
<p>Simply in the test phase, I want to feed data in #L2 and get the output of #L4 then calculate the difference between the input and output of this representation layer.</p>
<p>How can I feed data into this middle layer? When I define an input for the #L2 layer, Keras gives me a 'graph disconnected' error, which is reasonable.</p> | <p>Thanks to @mahsa-monavari and @frogatto for your answers</p>
<pre><code>from keras import backend as K
# with a Sequential model
get_3rd_layer_output = K.function([model.layers[0].input],
[model.layers[3].output])
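# model.layers[3] is the fourth layer (#L4); the function maps the model input to that layer's output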
layer_output = get_3rd_layer_output([x])[0]
</code></pre> | python|tensorflow|keras|lstm | 0 |
1,269 | 60,240,925 | How to find the variance between two groups in python pandas? | <p>I have a dataframe like this,</p>
<pre><code>ID total_sec is_weekday
1 300 1
1 200 0
2 280 1
2 260 0
3 190 1
4 290 0
5 500 1
5 520 0
</code></pre>
<p>I want to find the ID with the largest variance between weekdays and weekends. If we missed the records for either weekdays or weekends, we calculate the variance as 0.
My expected output will be,</p>
<pre><code>ID variance
1 100
2 20
3 0
4 0
5 20
</code></pre> | <p>You can do:</p>
<pre class="lang-py prettyprint-override"><code>df.pivot(index="ID", columns="is_weekday", values="total_sec").diff(axis=1)[1].fillna(0)
</code></pre>
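<p>Note that <code>diff</code> keeps the sign (the printed output below shows <code>-20.0</code> for ID 5); chaining <code>.abs()</code> reproduces the expected output exactly:</p>
<pre><code>df.pivot(index="ID", columns="is_weekday", values="total_sec").diff(axis=1)[1].abs().fillna(0)
</code></pre>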
<p>Outputs:</p>
<pre class="lang-py prettyprint-override"><code>ID
1 100.0
2 20.0
3 0.0
4 0.0
5 -20.0
Name: 1, dtype: float64
</code></pre> | python|pandas|numpy|dataframe | 4 |
1,270 | 65,107,263 | Error while trying to load dictionary data into a Data Frame | <pre><code>import pandas as pd
data={'Company':['GOOG','GOOG','MSFT','MSFT','FB','FB'],'Person':['Sam','charlie','Amy','vanessa','Sarah'],'Sales':[200,120,340,124,243,350]}
df = pd.DataFrame(data)
</code></pre>
<p>And here is the error:</p>
<hr />
<pre><code>ValueError Traceback (most recent call last)
<ipython-input-4-039b238b38ef> in <module>
----> 1 df = pd.DataFrame(data)
</code></pre> | <p>The error happens because the <code>Person</code> list has 5 entries while the other columns have 6, and the plain <code>DataFrame</code> constructor requires equal-length columns. I suggest this code, which pads the missing value with <code>None</code>:</p>
<pre><code>import pandas as pd
data={'Company':['GOOG','GOOG','MSFT','MSFT','FB','FB'],'Person':
['Sam','charlie','Amy','vanessa','Sarah'],'Sales':[200,120,340,124,243,350]}
df = pd.DataFrame.from_dict(data, orient='index')
df = df.transpose()
print(df)
</code></pre>
<p>output is:</p>
<pre><code> Company Person Sales
0 GOOG Sam 200
1 GOOG charlie 120
2 MSFT Amy 340
3 MSFT vanessa 124
4 FB Sarah 243
5 FB None 350
</code></pre> | python|pandas|dataframe | 0 |
1,271 | 65,255,061 | Split pandas column into multiple columns based on 'key=value' items | <p>I have a dataframe where one column contains several information in a 'key=value' format. There are almost a hundred different 'key=value' that can appear in that column but for simplicity sake I'll use this example with only 4 (<code>_browser, _status, _city, tag</code>)</p>
<pre><code>id name properties
0 A {_browser=Chrome, _status=TRUE, _city=Paris}
1 B {_browser=null, _status=TRUE, _city=London, tag=XYZ}
2 C {_status=FALSE, tag=ABC}
</code></pre>
<p>How can I convert this splitting the properties string column into multiple columns?</p>
<p>The expected output is:</p>
<pre><code>id name _browser _status _city tag
0 A Chrome TRUE Paris
1 B null TRUE London XYZ
2 C FALSE ABC
</code></pre>
<p>Note: this value can also contain spaces (eg. <code>_city=Rio de Janeiro</code>)</p> | <p>Let's use <code>str.findall</code> with regex capture groups to extract key-value pairs from the <code>properties</code> column:</p>
<pre><code>df.join(pd.DataFrame(
[dict(l) for l in df.pop('properties').str.findall(r'(\w+)=([^,\}]+)')]))
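# the pattern stops only at ',' or '}', so values containing spaces (e.g. Rio de Janeiro) are captured whole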
</code></pre>
<p>Result:</p>
<pre><code> id name _browser _status _city tag
0 A Chrome TRUE Paris NaN
1 B null TRUE London XYZ
2 C NaN FALSE NaN ABC
</code></pre> | python|pandas|dataframe | 5 |
1,272 | 49,808,042 | TensorFlow DLL load failed: A dynamic link library (DLL) initialization routine failed | <p>I installed and ran TensorFlow on my PC. When i ran it on this error appeared in jupyter notebook. I tried to reinstall Anaconda with Python 3.6 many times but I always get the same error. I tried to install a new operating system
and install Visual C++ Redistributable for Visual Studio 2015 but there was no improvement.
i also see <a href="https://stackoverflow.com/questions/41000780/error-running-theano-test-importerror-dll-load-failed-a-dynamic-link-libra">this url</a> but it tells me to install Visual Studio 2015 with some features. I install Visual Studio 2015 but it did not help.</p>
<p>Thanks in advance </p>
<pre><code>---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
C:\Anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py in swig_import_helper()
13 try:
---> 14 return importlib.import_module(mname)
15 except ImportError:
C:\Anaconda3\lib\importlib\__init__.py in import_module(name, package)
125 level += 1
--> 126 return _bootstrap._gcd_import(name[level:], package, level)
127
C:\Anaconda3\lib\importlib\_bootstrap.py in _gcd_import(name, package, level)
C:\Anaconda3\lib\importlib\_bootstrap.py in _find_and_load(name, import_)
C:\Anaconda3\lib\importlib\_bootstrap.py in _find_and_load_unlocked(name, import_)
C:\Anaconda3\lib\importlib\_bootstrap.py in _load_unlocked(spec)
C:\Anaconda3\lib\importlib\_bootstrap.py in module_from_spec(spec)
C:\Anaconda3\lib\importlib\_bootstrap_external.py in create_module(self, spec)
C:\Anaconda3\lib\importlib\_bootstrap.py in _call_with_frames_removed(f, *args, **kwds)
ImportError: DLL load failed: A dynamic link library (DLL) initialization routine failed.
During handling of the above exception, another exception occurred:
ModuleNotFoundError Traceback (most recent call last)
C:\Anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow.py in <module>()
57
---> 58 from tensorflow.python.pywrap_tensorflow_internal import *
59 from tensorflow.python.pywrap_tensorflow_internal import __version__
C:\Anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py in <module>()
16 return importlib.import_module('_pywrap_tensorflow_internal')
---> 17 _pywrap_tensorflow_internal = swig_import_helper()
18 del swig_import_helper
C:\Anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py in swig_import_helper()
15 except ImportError:
---> 16 return importlib.import_module('_pywrap_tensorflow_internal')
17 _pywrap_tensorflow_internal = swig_import_helper()
C:\Anaconda3\lib\importlib\__init__.py in import_module(name, package)
125 level += 1
--> 126 return _bootstrap._gcd_import(name[level:], package, level)
127
ModuleNotFoundError: No module named '_pywrap_tensorflow_internal'
During handling of the above exception, another exception occurred:
ImportError Traceback (most recent call last)
<ipython-input-1-d6579f534729> in <module>()
----> 1 import tensorflow
C:\Anaconda3\lib\site-packages\tensorflow\__init__.py in <module>()
22
23 # pylint: disable=wildcard-import
---> 24 from tensorflow.python import * # pylint: disable=redefined-builtin
25 # pylint: enable=wildcard-import
26
C:\Anaconda3\lib\site-packages\tensorflow\python\__init__.py in <module>()
47 import numpy as np
48
---> 49 from tensorflow.python import pywrap_tensorflow
50
51 # Protocol buffers
C:\Anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow.py in <module>()
72 for some common reasons and solutions. Include the entire stack trace
73 above this error message when asking for help.""" % traceback.format_exc()
---> 74 raise ImportError(msg)
75
76 # pylint: enable=wildcard-import,g-import-not-at-top,unused-import,line-too-long
ImportError: Traceback (most recent call last):
File "C:\Anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 14, in swig_import_helper
return importlib.import_module(mname)
File "C:\Anaconda3\lib\importlib\__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 994, in _gcd_import
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 658, in _load_unlocked
File "<frozen importlib._bootstrap>", line 571, in module_from_spec
File "<frozen importlib._bootstrap_external>", line 922, in create_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
ImportError: DLL load failed: A dynamic link library (DLL) initialization routine failed.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "C:\Anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 17, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "C:\Anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 16, in swig_import_helper
return importlib.import_module('_pywrap_tensorflow_internal')
File "C:\Anaconda3\lib\importlib\__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
ModuleNotFoundError: No module named '_pywrap_tensorflow_internal'
Failed to load the native TensorFlow runtime.
See https://www.tensorflow.org/install/install_sources#common_installation_problems
for some common reasons and solutions. Include the entire stack trace
above this error message when asking for help.
</code></pre>
<p></p> | <p>These might be possible scenarios:</p>
<pre><code>1. You need to install the MSVC 2019 redistributable
2. Your CPU does not support AVX2 instructions
3. Your CPU/Python is on 32 bits
4. There is a library that is in a different location/not installed on your system that cannot be loaded.
</code></pre> | python-3.x|tensorflow | -1 |
1,273 | 50,206,222 | Tensorflow How can I make a classifier from a CSV file using TensorFlow? | <p>I need to create a classifier to identify some aphids.</p>
<p>My project has two parts: one with computer vision (OpenCV), which I have already completed. The second part is the Machine Learning part using TensorFlow, and I have no idea how to do it.</p>
<p>The data below were extracted with OpenCV; they are HuMoments (I believe that is the path I must follow). Each line is the HuMoments of one aphid (insect), and I have 500 more data lines like these that I saved to a CSV file.</p>
<p>How can I make a classifier from a CSV file using TensorFlow?</p>
<blockquote>
<p>HuMoments (in CSV file):
0.27356047,0.04652453,0.00084231,7.79486673,-1.4484489,-1.4727380,-1.3752532
0.27455502,0.04913969,3.91102408,1.35705980,3.08570234,2.71530819,-5.0277362
0.20708829,0.01563241,3.20141907,9.45211423,1.53559373,1.08038279,-5.8776765
0.23454372,0.02820523,5.91665789,6.96682467,1.02919203,7.58756583,-9.7028848</p>
</blockquote> | <p>You can start with this tutorial, and try it first without changing anything; I strongly suggest this unless you are already familiar with Tensorflow so that you gain some familiarity with it.</p>
<p>Now you can modify the input layer of this network to match the dimensions of the HuMoments. Next, you can give a numeric label to each type of aphid that you want to recognize, and adjust the size of the output layer to match them.</p>
<p>You can now read the CSV file using python, and remove any text like "HuMoments". If your file has names of aphids, remove them and replace them with numerical class labels. Replace the training data of the code in the above link, with these data.</p>
<p>Now you can train the network according to the description under the title "Train the Model".</p>
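<p>A minimal sketch of those steps with <code>tf.keras</code> (the file name, the presence of a label column, and the number of aphid classes are assumptions, not part of your data):</p>
<pre><code>import pandas as pd
import tensorflow as tf

data = pd.read_csv('humoments.csv', header=None)      # 7 HuMoments per row, last column = class label
x = data.iloc[:, :7].values.astype('float32')
y = data.iloc[:, 7].values.astype('int32')            # numeric labels 0..n_classes-1

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation='relu', input_shape=(7,)),
    tf.keras.layers.Dense(3, activation='softmax')    # assuming 3 aphid classes
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(x, y, epochs=50, batch_size=16)
</code></pre>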
<p>One more note. Unless it is essential to use Tensorflow to match your project requirements, I suggest using Keras. Keras is a higher level library that is much easier to learn than Tensorflow, and you have more sample code online.</p> | python|csv|tensorflow | 0 |
1,274 | 49,937,365 | replace values in series according to threshold | <p>I have a pandas series and I would like to replace the values with 0 if the value < 3 and with 1 if the value >=3</p>
<pre><code>se = pandas.Series([1,2,3,4,5,6])
se[se<3]=0
se[se>=3]=1
</code></pre>
<p>Is there a better/pythonic way to do so?</p> | <p>In my opinion the best/fastest way here is to cast the boolean mask to <code>integer</code>s:</p>
<pre><code>se = (se >= 3).astype(int)
</code></pre>
<p>Or use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html" rel="nofollow noreferrer"><code>numpy.where</code></a>, but <code>Series</code> constructor is necessary, because returned numpy array:</p>
<pre><code>se = pd.Series(np.where(se < 3, 0, 1), index=se.index)
print (se)
0 0
1 0
2 1
3 1
4 1
5 1
dtype: int32
</code></pre> | python|pandas|series | 1 |
1,275 | 49,968,105 | Does the TensorFlow backend of Keras rely on the eager execution? | <p>Does the TensorFlow backend of Keras rely on the eager execution?</p>
<p>If it isn't the case, can I build a TensorFlow graph based on Keras and TensorFlow operations, then train the whole model using Keras high-level API?</p> | <blockquote>
<p>It is for a research purpose which I can't present here.</p>
</blockquote>
<p>That makes it really difficult to answer your question. It would be better if you could find a toy example -- unrelated to your research -- of what you want, and we can try to build something from there.</p>
<blockquote>
<p>Does the TensorFlow backend of Keras rely on the eager execution?</p>
</blockquote>
<p>No, it doesn't. Keras was built before eager execution introduction. Keras (the one inside tf) can, however, work in eager execution mode (see fchollet's <a href="https://groups.google.com/a/tensorflow.org/forum/#!topic/discuss/G7PoqzY1Hck" rel="noreferrer">answer</a>).</p>
<blockquote>
<p>can I build a TensorFlow graph and combine it with a Keras model then train them jointly using Keras high-level API?</p>
</blockquote>
<p>I'm not sure what you mean by "build a TensorFlow graph", because a graph already exists whenever you use keras. If you are talking about adding a bunch of operations to the existing graph, then it's definitely possible. You just need to wrap it up with a Lambda layer, just like you'd do if using Keras on symbolic mode:</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
from sacred import Experiment
ex = Experiment('test-18')
tf.enable_eager_execution()
@ex.config
def my_config():
pass
@ex.automain
def main():
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = (e.reshape(e.shape[0], -1) for e in (x_train, x_test))
y_train, y_test = (tf.keras.utils.to_categorical(e) for e in (y_train, y_test))
def complex_tf_fn(x):
u, v = tf.nn.moments(x, axes=[1], keep_dims=True)
return (x - u) / tf.sqrt(v)
with tf.device('/cpu:0'):
model = tf.keras.Sequential([
tf.keras.layers.Lambda(complex_tf_fn, input_shape=[784]),
tf.keras.layers.Dense(1024, activation='relu'),
tf.keras.layers.Lambda(complex_tf_fn),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer=tf.train.AdamOptimizer(),
loss='categorical_crossentropy')
model.fit(x_train, y_train,
epochs=10,
validation_data=(x_test, y_test),
batch_size=1024,
verbose=2)
</code></pre>
<pre class="lang-sh prettyprint-override"><code>python test-18.py with seed=21
INFO - test-18 - Running command 'main'
INFO - test-18 - Started
Train on 60000 samples, validate on 10000 samples
Epoch 1/10
- 9s - loss: 3.4012 - val_loss: 1.3575
Epoch 2/10
- 9s - loss: 0.9870 - val_loss: 0.7270
Epoch 3/10
- 9s - loss: 0.6097 - val_loss: 0.6071
Epoch 4/10
- 9s - loss: 0.4459 - val_loss: 0.4824
Epoch 5/10
- 9s - loss: 0.3352 - val_loss: 0.4436
Epoch 6/10
- 9s - loss: 0.2661 - val_loss: 0.3997
Epoch 7/10
- 9s - loss: 0.2205 - val_loss: 0.4048
Epoch 8/10
- 9s - loss: 0.1877 - val_loss: 0.3788
Epoch 9/10
- 9s - loss: 0.1511 - val_loss: 0.3506
Epoch 10/10
- 9s - loss: 0.1304 - val_loss: 0.3330
INFO - test-18 - Completed after 0:01:31
Process finished with exit code 0
</code></pre> | python|tensorflow|keras | 5 |
1,276 | 50,064,833 | Fine tuned VGG-16 gives the exact same prediction for all test images | <p>I have fine-tuned a VGG-16 network to predict the presence of disease on medical images. I've then tested the model by using <code>model.predict()</code> but what I'm seeing is that the network predicts the exact same <strong><em>22.310%</em></strong> and <strong><em>77.690%</em></strong> for the presence and absence of disease, respectively, for <em>all 100 test images</em> (see screenshot.) I'm attaching my code and training output below. The training looks okay. Please note, the training was done on a server and the prediction on my PC hence the directories are different.
Can you please help me find what the problem might be?</p>
<p><a href="https://i.stack.imgur.com/OjNaT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OjNaT.png" alt="RESULT"></a></p>
<p>Training code:</p>
<pre><code>import numpy as np
import os
import time
from vgg16 import VGG16
from keras.preprocessing import image
from imagenet_utils import preprocess_input, decode_predictions
from keras.layers import Dense, Activation, Flatten
from keras.layers import merge, Input
from keras.models import Model
from keras.utils import np_utils
from sklearn.utils import shuffle
from sklearn.cross_validation import train_test_split
# Loading the training data
PATH = '/mount'
# Define data path
data_path = PATH
data_dir_list = os.listdir(data_path)
img_data_list=[]
y=0;
for dataset in data_dir_list:
img_list=os.listdir(data_path+'/'+ dataset)
print ('Loaded the images of dataset-'+'{}\n'.format(dataset))
for img in img_list:
img_path = data_path + '/'+ dataset + '/'+ img
img = image.load_img(img_path, target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
x = x/255
y=y+1
print('Input image shape:', x.shape)
print(y)
img_data_list.append(x)
img_data = np.array(img_data_list)
#img_data = img_data.astype('float32')
print (img_data.shape)
img_data=np.rollaxis(img_data,1,0)
print (img_data.shape)
img_data=img_data[0]
print (img_data.shape)
# Define the number of classes
num_classes = 2
num_of_samples = img_data.shape[0]
labels = np.ones((num_of_samples,),dtype='int64')
labels[0:3001]=0
labels[3001:]=1
names = ['YES','NO']
# convert class labels to on-hot encoding
Y = np_utils.to_categorical(labels, num_classes)
#Shuffle the dataset
x,y = shuffle(img_data,Y, random_state=2)
# Split the dataset
X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=2)
image_input = Input(shape=(224, 224, 3))
model = VGG16(input_tensor=image_input, include_top=True,weights='imagenet')
model.summary()
last_layer = model.get_layer('block5_pool').output
x= Flatten(name='flatten')(last_layer)
x = Dense(16, activation='relu', name='fc1')(x)
x = Dense(8, activation='relu', name='fc2')(x)
out = Dense(num_classes, activation='softmax', name='output')(x)
custom_vgg_model2 = Model(image_input, out)
# freeze all the layers except the dense layers
for layer in custom_vgg_model2.layers[:-6]:
layer.trainable = False
custom_vgg_model2.summary()
custom_vgg_model2.compile(loss='categorical_crossentropy',optimizer='adam',metrics=['accuracy'])
t=time.time()
# t = now()
hist = custom_vgg_model2.fit(X_train, y_train, batch_size=128, epochs=10, verbose=1, validation_data=(X_test, y_test))
print('Training time: %s' % (t - time.time()))
(loss, accuracy) = custom_vgg_model2.evaluate(X_test, y_test, batch_size=10, verbose=1)
print("[INFO] loss={:.4f}, accuracy: {:.4f}%".format(loss,accuracy * 100))
custom_vgg_model2.save("vgg_3000_92percent_real.h5")
</code></pre>
<p>Training output:</p>
<blockquote>
<p>Train on 4800 samples, validate on 1200 samples<br>
Epoch 1/10<br>
4800/4800 [==============================] - 100s - loss: 0.6098 - acc: 0.7567 - val_loss: 0.3252 - val_acc: 0.8667<br>
Epoch 2/10<br>
4800/4800 [==============================] - 82s - loss: 0.2644 - acc: 0.8985 - val_loss: 0.2930 - val_acc: 0.8783<br>
Epoch 3/10<br>
4800/4800 [==============================] - 83s - loss: 0.2297 - acc: 0.9127 - val_loss: 0.2386 - val_acc: 0.9042<br>
Epoch 4/10<br>
4800/4800 [==============================] - 83s - loss: 0.1844 - acc: 0.9327 - val_loss: 0.2273 - val_acc: 0.9083<br>
Epoch 5/10<br>
4800/4800 [==============================] - 83s - loss: 0.1754 - acc: 0.9354 - val_loss: 0.2080 - val_acc: 0.9167<br>
Epoch 6/10<br>
4800/4800 [==============================] - 83s - loss: 0.1357 - acc: 0.9515 - val_loss: 0.2403 - val_acc: 0.9183<br>
Epoch 7/10<br>
4736/4800 [============================>.] - ETA: 0s - loss: 0.1241 - acc: 0.9525 </p>
</blockquote>
<p>The prediction code</p>
<pre><code>import numpy as np
from keras.preprocessing import image
from imagenet_utils import preprocess_input
from keras import models
import matplotlib.pyplot as plt
import os
model128 = models.load_model('16_8_finally.h5')
list=[]
flag=0
#Path0="D:\\download dump for Deep learnng\\dataset\\kaggle general competition\\test"
Path0="I:\\greenchTestsample\\greendr"
list=os.listdir(Path0)
pred0=[0]*len(list)
pred1=[0]*len(list)
for x in list:
img_path=Path0+'\\'+ x
img = image.load_img(img_path, target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
x=x/255
preds = model128.predict(x)
z=100*preds
x1=float(z[0][0])
x2=float(z[0][1])
pred0[flag]=x1
pred1[flag]=x2
flag=flag+1
</code></pre> | <p>Ok so this is not really an answer but a step towards debugging. Please change the prediction loop to the code below and post the output.</p>
<pre><code>for x in list[ :3 ]: # let's do the first 3 only
img_path=Path0+'\\'+ x
print() # leave an empty line before each image
print( img_path ) # let's see if the correct files are loaded
img = image.load_img(img_path, target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
x /= 255 # just nitpicking :)
print( x ) # let's see if the values make sense
preds = model128.predict(x)
print( preds ) # see if the error is already present here
z=100*preds
x1=float(z[0][0])
x2=float(z[0][1])
print( x1, x2 )
pred0[flag]=x1
pred1[flag]=x2
flag += 1 # nitpicking again :)
</code></pre> | python-3.x|image-processing|tensorflow|deep-learning|keras | 0 |
1,277 | 64,053,537 | Appending elements of arrays as a line to a file | <p>I want to append the array as a line to a flie ,</p>
<pre><code>import numpy as np
data1 = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
data2 = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 0])
g = open(f'data.csv', 'w')
for data in [data1,data2]:
g.write(data)
g.close()
</code></pre>
<p>I got</p>
<pre><code>Traceback (most recent call last):
File "test.py", line 12, in <module>
g.write(data)
TypeError: write() argument must be str, not numpy.ndarray
</code></pre>
<p>Then I use</p>
<pre><code> g.write(f'{data}\n')
</code></pre>
<p>The output is</p>
<pre><code>[0 1 2 3 4 5 6 7 8 9]
[1 2 3 4 5 6 7 8 9 0]
</code></pre>
<p>But how can I get rid of the <code>[]</code> sign</p>
<pre><code>0 1 2 3 4 5 6 7 8 9
1 2 3 4 5 6 7 8 9 0
</code></pre>
<p><strong>ANSWER UPDATE</strong></p>
<p>Thanks to the comment, I can first stack the arrays and then save them. Maybe there is still a more elegant way to append them directly! Unfortunately the question is closed, so please write a comment and I will update here!</p>
<pre><code>import numpy as np
data1 = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
data2 = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 0])
data = np.stack((data1,data2))
np.savetxt('data.csv', data, fmt='%i ', newline='\n')
</code></pre> | <p>Try:</p>
<pre><code>np.savetxt('data.csv', data, fmt='%i ', newline='')
</code></pre>
<p>Example for more arrays:</p>
<pre><code>data1 = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
data2 = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 0])
data = np.array([data1, data2])
np.savetxt('data.csv', data, fmt='%i ', newline='\n')
</code></pre> | python|numpy | 3 |
1,278 | 64,103,683 | PyTorch LSTM not learning in training | <p>I have the following simple LSTM network:</p>
<pre><code>class LSTMModel(nn.Module):
def __init__(self, input_dim, hidden_dim, layer_dim, output_dim):
super().__init__()
self.hidden_dim = hidden_dim
self.layer_dim = layer_dim
self.rnn = nn.LSTM(input_dim, hidden_dim, layer_dim, batch_first=True)
self.fc = nn.Linear(hidden_dim, output_dim)
self.batch_size = None
self.hidden = None
def forward(self, x):
h0, c0 = self.init_hidden(x)
out, (hn, cn) = self.rnn(x, (h0, c0))
out = self.fc(out[:, -1, :])
return out
def init_hidden(self, x):
h0 = torch.zeros(self.layer_dim, x.size(0), self.hidden_dim)
c0 = torch.zeros(self.layer_dim, x.size(0), self.hidden_dim)
return [t for t in (h0, c0)]
</code></pre>
<p>I am initialising this model as:</p>
<pre><code>model = LSTMModel(28, 10, 6, 1)
</code></pre>
<p>i.e. each input instance has 6 time steps and the dimension of each time step is 28, and the hidden dimension is 10. The inputs are being mapped to an output dim of 1.</p>
<p>The training data is being prepared in batches of size 16, meaning that the data passed in the training loop has the shape:</p>
<pre><code>torch.Size([16, 6, 28])
</code></pre>
<p>With labels of shape:</p>
<pre><code>batches[1][0].size()
</code></pre>
<p>An example of the input is:</p>
<pre><code>tensor([[-0.3674, 0.0347, -0.2169, -0.0821, -0.3673, -0.1773, 1.1840, -0.2669,
-0.4202, -0.1473, -0.1132, -0.4756, -0.3565, 0.5010, 0.1274, -0.1147,
0.2783, 0.0836, -1.3251, -0.8067, -0.6447, -0.7396, -0.3241, 1.3329,
1.3801, 0.8198, 0.6098, 0.0697],
[-0.2710, 0.1596, -0.2524, -0.0821, -0.3673, -0.1773, 0.0302, -0.2099,
-0.4550, 0.1451, -0.4561, -0.5207, -0.5657, -0.5287, -0.2690, -0.1147,
-0.0346, -0.1043, -0.7515, -0.8392, -0.4745, -0.7396, -0.3924, 0.8122,
-0.1624, -1.2198, 0.0326, -0.9306],
[-0.1746, 0.0972, -0.2702, -0.0821, -0.3673, -0.1773, -0.0468, -1.1225,
-0.4480, -0.4397, 0.4011, -1.1073, -1.0536, -0.1855, -0.7502, -0.1147,
-0.0146, -0.1545, -0.1919, -0.1674, 0.0930, -0.7396, 0.8106, 1.1594,
0.4546, -1.2198, -0.5446, -1.2640],
[-0.2710, 0.0660, -0.2524, -0.0821, -0.4210, -0.1773, 1.8251, -0.5236,
-0.4410, -0.7321, 0.4011, -0.6110, -0.2171, 1.1875, -0.2973, -0.1147,
-0.1278, 0.7728, -0.9334, -0.5141, -2.1202, 1.3521, -0.9393, 0.5085,
-0.4709, 0.8198, -1.1218, 0.0697],
[-0.3674, -0.0277, -0.2347, -0.0821, -0.0448, -0.1773, 0.2866, -0.1386,
-0.4271, 0.4375, -0.2847, -0.1146, -0.4262, -0.3571, -0.0425, -0.1147,
-0.4207, -0.4552, -0.5277, -0.9584, -0.4177, -0.7396, -0.2967, 0.5085,
0.4546, -1.2198, -0.3522, -1.2640],
[-0.3674, -0.1447, -0.1991, -0.0821, 0.1701, -0.1773, 0.0430, 0.1324,
-0.4271, 0.7299, -0.4561, 0.2915, -0.5657, -0.1855, -0.2123, -0.1147,
-0.0413, -0.8311, -0.6396, -1.0451, -0.4177, -0.7396, -0.2967, -0.4028,
0.7631, -1.2198, -0.3522, -1.2640]])
</code></pre>
<p>When I train the model as:</p>
<pre><code>Epochs = 10
batch_size = 32
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)
for epoch in range(Epochs):
print(f"Epoch {epoch + 1}")
for n, (X, y) in enumerate(batches):
model.train()
optimizer.zero_grad()
y_pred = model(X)
loss = criterion(y_pred, y)
loss.backward()
optimizer.step()
model.eval()
accurate = 0
for X_instance, y_instance in zip(test_X, test_y):
if y_instance == round(model(X_instance.view(-1, 6, 28)).detach().item()):
accurate += 1
print(f"Accuracy test set: {accurate/len(test_X)}")
</code></pre>
<p>The accuracy does not converge:</p>
<pre><code>Epoch 1
Accuracy test set: 0.23169107856191745
Sample params:
tensor([-0.3356, -0.0105, -0.3405, -0.0049, 0.0037, 0.1707, 0.2685, -0.3893,
-0.4707, -0.2872, -0.1544, -0.1455, 0.0393, 0.0774, -0.4194, 0.0780,
-0.2177, -0.3829, -0.4679, 0.0370, -0.0794, 0.0455, -0.1331, -0.0169,
-0.1551, -0.0348, 0.1746, -0.5163], grad_fn=<SelectBackward>)
tensor([ 0.2137, -0.2558, 0.1509, -0.0975, 0.5591, 0.0907, -0.1249, 0.3095,
0.2112, 0.3134, -0.1581, -0.3051, -0.3559, -0.0177, 0.1485, 0.4397,
-0.1441, 0.1705, 0.3230, -0.3236, 0.0692, 0.0920, -0.2691, -0.3695,
-0.0692, 0.3747, 0.0149, 0.5216], grad_fn=<SelectBackward>)
Epoch 2
Accuracy test set: 0.23049267643142476
Sample params:
tensor([-0.3483, -0.0144, -0.3512, 0.0213, -0.0081, 0.1777, 0.2674, -0.4031,
-0.4628, -0.3041, -0.1651, -0.1511, 0.0216, 0.0513, -0.4320, 0.0839,
-0.2602, -0.3629, -0.4541, 0.0398, -0.0768, 0.0432, -0.1150, -0.0160,
-0.1346, -0.0727, 0.1801, -0.5253], grad_fn=<SelectBackward>)
tensor([ 0.1879, -0.2534, 0.1461, -0.1141, 0.5735, 0.0872, -0.1286, 0.3273,
0.2084, 0.3037, -0.1535, -0.2934, -0.3870, -0.0252, 0.1492, 0.4752,
-0.1709, 0.1776, 0.3390, -0.3318, 0.0734, 0.1077, -0.2790, -0.3777,
-0.0518, 0.3726, 0.0228, 0.5404], grad_fn=<SelectBackward>)
Epoch 3
Accuracy test set: 0.22982689747003995
Sample params:
tensor([-0.3725, -0.0069, -0.3623, 0.0393, -0.0167, 0.1748, 0.2577, -0.4183,
-0.4681, -0.3196, -0.1657, -0.1613, 0.0122, 0.0268, -0.4361, 0.0838,
-0.2962, -0.3566, -0.4344, 0.0366, -0.0822, 0.0486, -0.1150, -0.0295,
-0.1080, -0.1094, 0.1841, -0.5336], grad_fn=<SelectBackward>)
tensor([ 0.1664, -0.2456, 0.1477, -0.1332, 0.5820, 0.0819, -0.1228, 0.3426,
0.2066, 0.2985, -0.1464, -0.2824, -0.4199, -0.0323, 0.1530, 0.5057,
-0.1991, 0.1856, 0.3407, -0.3347, 0.0800, 0.1203, -0.2791, -0.3863,
-0.0426, 0.3760, 0.0327, 0.5641], grad_fn=<SelectBackward>)
Epoch 4
Accuracy test set: 0.23249001331557922
Sample params:
tensor([-0.3945, 0.0032, -0.3765, 0.0600, -0.0248, 0.1713, 0.2442, -0.4297,
-0.4741, -0.3311, -0.1653, -0.1667, 0.0029, 0.0066, -0.4373, 0.0738,
-0.3320, -0.3530, -0.4136, 0.0390, -0.0731, 0.0552, -0.1117, -0.0517,
-0.0871, -0.1455, 0.1841, -0.5359], grad_fn=<SelectBackward>)
tensor([ 0.1495, -0.2292, 0.1524, -0.1473, 0.5938, 0.0661, -0.1157, 0.3626,
0.2013, 0.2927, -0.1350, -0.2661, -0.4558, -0.0411, 0.1562, 0.5381,
-0.2279, 0.1927, 0.3319, -0.3431, 0.0852, 0.1402, -0.2747, -0.4026,
-0.0297, 0.3757, 0.0396, 0.5856], grad_fn=<SelectBackward>)
</code></pre>
<p>Have I made a mistake in the model definition?</p> | <p>Normally, 6 layers in your LSTM are way too many. The input dimension is 28 (are you training MNIST, or are the inputs letters?), so a hidden dimension of 10 is actually way too small. Try the following parameters:</p>
<pre><code>hidden_dim = 128 to 512
layer_dim = 2 to max. 4
</code></pre>
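<p>With the model class from the question, that could look like this (the exact sizes are only a starting point):</p>
<pre><code>model = LSTMModel(input_dim=28, hidden_dim=256, layer_dim=2, output_dim=1)
</code></pre>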
<p>I see your output-shape is 1 and you dont use an activation function. Are you trying to predict intergers (like 1 for class "dog", 2 for class "cat")? If so you should switch to one-hot encoding, so that your output shape is equal to the classes you want to predict. And then use softmax as activation for your last layer.</p> | python|pytorch | 2 |
1,279 | 46,872,336 | Mean each row of nonzero values and avoid RuntimeWarning and NaN as some rows are all zero | <p>I already checked <a href="https://stackoverflow.com/questions/38542548/numpy-mean-of-nonzero-values">Numpy mean of nonzero values</a> and it worked nicely. However, some rows of my matrix are all zero element. What is a good way to avoid <code>RuntimeWarning: invalid value encountered in true_divide</code> in this case? Also, I don't want the zero element to be replaced by <code>Nan</code> here.</p>
<pre><code>eachPSM = np.ones([3,4])
eachPSM[0] = 0
print eachPSM
>> [[ 0. 0. 0. 0.]
[ 1. 1. 1. 1.]
[ 1. 1. 1. 1.]]
print np.true_divide(eachPSM.sum(1),(eachPSM!=0).sum(1))
>> RuntimeWarning: invalid value encountered in true_divide
[ nan 1. 1.]
</code></pre> | <p>With <code>a</code> as the input array, you could use <code>masking</code> -</p>
<pre><code>invalid_val = np.nan # specifies mean value to be assigned for all zeros rows
out = np.full(a.shape[0],invalid_val)
count = (a!=0).sum(1)
valid_mask = count!=0
out[valid_mask] = a[valid_mask].sum(1)/count[valid_mask]
</code></pre> | python-2.7|numpy|mean | 2 |
1,280 | 33,058,590 | Pandas Dataframe: Replacing NaN with row average | <p>I am trying to learn pandas but I have been puzzled with the following. I want to replace NaNs in a DataFrame with the row average. Hence something like <code>df.fillna(df.mean(axis=1))</code> should work but for some reason it fails for me. Am I missing anything, is there something wrong with what I'm doing? Is it because its not implemented? see <a href="https://stackoverflow.com/questions/29478641/how-to-replace-nan-with-sum-of-the-row-in-pandas-datatframe">link here</a></p>
<pre><code>import pandas as pd
import numpy as np
pd.__version__
Out[44]:
'0.15.2'
In [45]:
df = pd.DataFrame()
df['c1'] = [1, 2, 3]
df['c2'] = [4, 5, 6]
df['c3'] = [7, np.nan, 9]
df
Out[45]:
c1 c2 c3
0 1 4 7
1 2 5 NaN
2 3 6 9
In [46]:
df.fillna(df.mean(axis=1))
Out[46]:
c1 c2 c3
0 1 4 7
1 2 5 NaN
2 3 6 9
</code></pre>
<p>However something like this looks to work fine</p>
<pre><code>df.fillna(df.mean(axis=0))
Out[47]:
c1 c2 c3
0 1 4 7
1 2 5 8
2 3 6 9
</code></pre> | <p>As commented the axis argument to fillna is <a href="https://github.com/pydata/pandas/issues/4514" rel="nofollow noreferrer">NotImplemented</a>.</p>
<pre><code>df.fillna(df.mean(axis=1), axis=1)
</code></pre>
<p><em>Note: this would be critical here as you don't want to fill in your nth columns with the nth row average.</em></p>
<p>For now you'll need to iterate through:</p>
<pre><code>m = df.mean(axis=1)
for i, col in enumerate(df):
# using i allows for duplicate columns
# inplace *may* not always work here, so IMO the next line is preferred
# df.iloc[:, i].fillna(m, inplace=True)
df.iloc[:, i] = df.iloc[:, i].fillna(m)
print(df)
c1 c2 c3
0 1 4 7.0
1 2 5 3.5
2 3 6 9.0
</code></pre>
<p>An alternative is to fillna the transpose and then transpose, which may be more efficient...</p>
<pre><code>df.T.fillna(df.mean(axis=1)).T
</code></pre> | python|pandas|dataframe|missing-data | 42 |
1,281 | 38,931,631 | ValueError: Invalid parameter solver for estimator LogisticRegression | <p>I am trying to run a gridsearch for Logistic regression and I am getting this very weird error. I run the same thing on my machine and it works fine but when I try to run it on my remote machine, it fails.</p>
<p>The only visible difference is in the version of python, on my local machine it is 2.7.10 and on the remote machine where it doesn't work it's 2.7.6.</p>
<p>Following is the code snippet where apparently I am getting the error:</p>
<pre><code>tuned_parameters = [{'C': [0.01, 0.1, 1],
'penalty': ['l2'],
'solver': ['liblinear', 'lbfgs']},
{'C': [0.01, 0.1, 1],
'penalty': ['l1'],
'solver': ['liblinear']}]
print("# Tuning hyper-parameters for accuracy")
clf = GridSearchCV(LogisticRegression(), tuned_parameters, cv=3, n_jobs=-1,scoring='accuracy')
clf.fit(xtrain, ytrain)
</code></pre>
<p>I have 2 dense/sparse numpy array on which I am trying to do the regression.</p>
<p>Following is the traceback I am getting:</p>
<pre><code>Traceback (most recent call last):
File "./ml/run_logistic_regr.py", line 67, in <module>
clf.fit(xtrain, ytrain)
File "/usr/lib/python2.7/dist-packages/sklearn/grid_search.py", line 707, in fit
return self._fit(X, y, ParameterGrid(self.param_grid))
File "/usr/lib/python2.7/dist-packages/sklearn/grid_search.py", line 493, in _fit
for parameters in parameter_iterable
File "/usr/lib/pymodules/python2.7/joblib/parallel.py", line 519, in __call__
self.retrieve()
File "/usr/lib/pymodules/python2.7/joblib/parallel.py", line 450, in retrieve
raise exception_type(report)
joblib.my_exceptions.JoblibValueError/usr/lib/pymodules/python2.7/joblib/my_exceptions.py:26: DeprecationWarning: BaseException.message has been deprecated as of Python 2.6
self.message,
: JoblibValueError
</code></pre>
<p>I have no clue why I am getting this error, I searched on google as well but I don't even see any question with invalid parameter solver. Any help is really appreciated.</p>
<p>Edit: (Didn't add the error message which I listed)</p>
<p>And this is what I get after the traceback:</p>
<pre><code>___________________________________________________________________________
Multiprocessing exception:
...........................................................................
/home/bbdc/code/ml/run_logistic_regr.py in <module>()
62 print("# Tuning hyper-parameters for accuracy")
63 clf = GridSearchCV(LogisticRegression(), tuned_parameters, cv=3, n_jobs=-1,
64 scoring='accuracy')
65
66 # regr = linear_model.LogisticRegression(C=0.1, penalty='l2', solver='newton-cg', max_iter=1000)
---> 67 clf.fit(xtrain, ytrain)
68
69 print("Best parameters set on training data:")
70 print(clf.best_params_)
71 print("Grid scores on training data:")
...........................................................................
/usr/lib/python2.7/dist-packages/sklearn/grid_search.py in fit(self=GridSearchCV(cv=3,
estimator=LogisticRegr..._func=None,
scoring='accuracy', verbose=0), X=array([[ 1.12306100e+06, 6.00000000e+00, 1....000000e+00, 3.00000000e+00, 1.00000000e+00]]), y=array([ 4., 2., 4., 4., 2., 2., 2., 4., ...2., 2., 2., 2., 2., 2., 4., 2., 2., 4.]), **params={})
702 """
703 if params:
704 warnings.warn("Additional parameters to GridSearchCV are ignored!"
705 " The params argument will be removed in 0.15.",
706 DeprecationWarning)
--> 707 return self._fit(X, y, ParameterGrid(self.param_grid))
self._fit = <bound method GridSearchCV._fit of GridSearchCV(cv=3,
estimator=LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True,
intercept_scaling=1, penalty='l2', random_state=None, tol=0.0001),
fit_params={}, iid=True, loss_func=None, n_jobs=-1,
param_grid=[{'penalty': ['l2'], 'C': [0.01, 0.1, 1], 'solver': ['liblinear', 'lbfgs']}, {'penalty': ['l1'], 'C': [0.01, 0.1, 1], 'solver': ['liblinear']}],
pre_dispatch='2*n_jobs', refit=True, score_func=None,
scoring='accuracy', verbose=0)>
X = array([[ 1.12306100e+06, 6.00000000e+00, 1.00000000e+01, ...,
7.00000000e+00, 8.00000000e+00, 1.00000000e+01],
[ 1.26957400e+06, 4.00000000e+00, 1.00000000e+00, ...,
1.00000000e+00, 1.00000000e+00, 1.00000000e+00],
[ 1.23894800e+06, 8.00000000e+00, 5.00000000e+00, ...,
6.00000000e+00, 6.00000000e+00, 1.00000000e+00],
...,
[ 1.17484100e+06, 5.00000000e+00, 3.00000000e+00, ...,
1.00000000e+00, 1.00000000e+00, 1.00000000e+00],
[ 1.17702700e+06, 3.00000000e+00, 1.00000000e+00, ...,
3.00000000e+00, 1.00000000e+00, 1.00000000e+00],
[ 4.28903000e+05, 7.00000000e+00, 2.00000000e+00, ...,
3.00000000e+00, 3.00000000e+00, 1.00000000e+00]])
y = array([ 4., 2., 4., 4., 2., 2., 2., 4., 4., 2., 4., 2., 2.,
2., 4., 2., 2., 2., 4., 4., 2., 2., 2., 2., 2., 4.,
2., 4., 4., 2., 4., 4., 2., 2., 4., 4., 2., 2., 4.,
2., 2., 2., 2., 4., 2., 2., 2., 4., 2., 2., 2., 2.,
2., 4., 2., 2., 2., 2., 4., 2., 4., 2., 4., 4., 4.,
2., 4., 2., 4., 4., 2., 4., 4., 2., 2., 2., 4., 2.,
2., 2., 4., 2., 4., 2., 2., 4., 4., 2., 2., 2., 2.,
2., 4., 2., 2., 4., 2., 4., 2., 4., 2., 2., 2., 2.,
2., 4., 2., 2., 2., 2., 2., 4., 2., 4., 2., 4., 4.,
2., 2., 4., 2., 2., 2., 2., 4., 2., 4., 4., 2., 2.,
4., 2., 4., 2., 2., 4., 2., 2., 2., 2., 2., 4., 2.,
2., 2., 2., 4., 2., 2., 2., 2., 2., 2., 4., 2., 2.,
4., 2., 2., 2., 2., 2., 2., 2., 2., 4., 2., 4., 2.,
4., 4., 4., 2., 4., 2., 2., 4., 4., 4., 2., 2., 2.,
2., 4., 4., 4., 4., 2., 2., 2., 2., 4., 2., 4., 2.,
4., 4., 4., 2., 2., 4., 2., 2., 2., 2., 4., 4., 2.,
4., 4., 2., 4., 2., 2., 2., 2., 4., 4., 4., 2., 2.,
2., 2., 2., 2., 4., 4., 2., 2., 4., 2., 2., 4., 2.,
2., 2., 2., 2., 2., 4., 2., 4., 2., 2., 2., 2., 4.,
4., 2., 4., 2., 2., 4., 2., 2., 2., 2., 2., 4., 2.,
2., 4., 2., 2., 2., 2., 2., 2., 2., 2., 4., 4., 2.,
4., 2., 4., 2., 4., 2., 2., 4., 2., 4., 2., 4., 4.,
2., 2., 4., 4., 2., 2., 2., 4., 2., 2., 2., 4., 2.,
2., 2., 2., 2., 2., 2., 4., 2., 2., 4., 4., 2., 2.,
2., 2., 2., 2., 2., 2., 4., 4., 4., 4., 4., 2., 2.,
4., 4., 2., 2., 4., 4., 2., 4., 2., 2., 4., 4., 2.,
2., 2., 2., 4., 4., 2., 4., 2., 2., 2., 2., 2., 2.,
2., 4., 2., 2., 2., 2., 4., 4., 4., 2., 2., 2., 4.,
2., 2., 2., 2., 2., 4., 2., 4., 2., 2., 2., 2., 4.,
2., 4., 4., 4., 2., 4., 2., 2., 2., 2., 2., 4., 2.,
4., 2., 2., 2., 4., 4., 2., 4., 2., 2., 2., 4., 2.,
2., 2., 2., 4., 4., 2., 2., 2., 4., 2., 2., 2., 2.,
2., 4., 4., 2., 4., 2., 2., 2., 4., 2., 2., 2., 4.,
2., 2., 4., 4., 2., 2., 2., 2., 4., 2., 2., 2., 2.,
2., 2., 4., 4., 2., 2., 2., 2., 4., 2., 2., 2., 2.,
2., 4., 4., 2., 4., 2., 2., 2., 4., 2., 2., 2., 4.,
4., 4., 2., 4., 2., 2., 2., 2., 4., 4., 2., 2., 2.,
4., 4., 2., 4., 2., 2., 4., 4., 4., 2., 2., 2., 2.,
4., 2., 4., 2., 4., 2., 4., 2., 2., 2., 4., 4., 2.,
2., 4., 2., 2., 2., 4., 2., 2., 2., 2., 2., 2., 4.,
2., 2., 4., 2., 4., 4., 2., 4., 2., 2., 2., 4., 2.,
4., 2., 4., 2., 2., 2., 2., 2., 2., 4., 2., 2., 4.])
self.param_grid = [{'penalty': ['l2'], 'C': [0.01, 0.1, 1], 'solver': ['liblinear', 'lbfgs']}, {'penalty': ['l1'], 'C': [0.01, 0.1, 1], 'solver': ['liblinear']}]
708
709
710 class RandomizedSearchCV(BaseSearchCV):
711 """Randomized search on hyper parameters.
...........................................................................
/usr/lib/python2.7/dist-packages/sklearn/grid_search.py in _fit(self=GridSearchCV(cv=3,
estimator=LogisticRegr..._func=None,
scoring='accuracy', verbose=0), X=array([[ 1.12306100e+06, 6.00000000e+00, 1....000000e+00, 3.00000000e+00, 1.00000000e+00]]), y=array([ 4., 2., 4., 4., 2., 2., 2., 4., ...2., 2., 2., 2., 2., 2., 4., 2., 2., 4.]), parameter_iterable=<sklearn.grid_search.ParameterGrid object>)
488 n_jobs=self.n_jobs, verbose=self.verbose,
489 pre_dispatch=pre_dispatch)(
490 delayed(fit_grid_point)(
491 X, y, base_estimator, parameters, train, test,
492 self.scorer_, self.verbose, **self.fit_params)
--> 493 for parameters in parameter_iterable
parameters = undefined
parameter_iterable = <sklearn.grid_search.ParameterGrid object at 0x7f8e15e4d150>
494 for train, test in cv)
495
496 # Out is a list of triplet: score, estimator, n_test_samples
497 n_fits = len(out)
...........................................................................
/usr/lib/pymodules/python2.7/joblib/parallel.py in __call__(self=Parallel(n_jobs=-1), iterable=<itertools.islice object>)
514 self.n_dispatched = 0
515 try:
516 for function, args, kwargs in iterable:
517 self.dispatch(function, args, kwargs)
518
--> 519 self.retrieve()
self.retrieve = <bound method Parallel.retrieve of Parallel(n_jobs=-1)>
520 # Make sure that we get a last message telling us we are done
521 elapsed_time = time.time() - self._start_time
522 self._print('Done %3i out of %3i | elapsed: %s finished',
523 (len(self._output),
---------------------------------------------------------------------------
Sub-process traceback:
---------------------------------------------------------------------------
ValueError Sat Aug 13 11:42:58 2016
PID: 29604 Python 2.7.6: /usr/bin/python
...........................................................................
/usr/lib/python2.7/dist-packages/sklearn/grid_search.pyc in fit_grid_point(X=array([[ 1.12306100e+06, 6.00000000e+00, 1....000000e+00, 3.00000000e+00, 1.00000000e+00]]), y=array([ 4., 2., 4., 4., 2., 2., 2., 4., ...2., 2., 2., 2., 2., 2., 4., 2., 2., 4.]), base_estimator=LogisticRegression(C=1.0, class_weight=None, dua...g=1, penalty='l2', random_state=None, tol=0.0001), parameters={'C': 0.01, 'penalty': 'l2', 'solver': 'liblinear'}, train=array([False, True, False, True, False, True,..., False, True, True, True, True], dtype=bool), test=array([ True, False, True, False, True, False,..., True, False, False, False, False], dtype=bool), scorer=make_scorer(accuracy_score), verbose=0, loss_func=None, **fit_params={})
274 for k, v in parameters.items()))
275 print("[GridSearchCV] %s %s" % (msg, (64 - len(msg)) * '.'))
276
277 # update parameters of the classifier after a copy of its base structure
278 clf = clone(base_estimator)
--> 279 clf.set_params(**parameters)
parameters = {'penalty': 'l2', 'C': 0.01, 'solver': 'liblinear'}
280
281 if hasattr(base_estimator, 'kernel') and callable(base_estimator.kernel):
282 # cannot compute the kernel values with custom function
283 raise ValueError("Cannot use a custom kernel function. "
...........................................................................
/usr/lib/python2.7/dist-packages/sklearn/base.pyc in set_params(self=LogisticRegression(C=0.01, class_weight=None, du...g=1, penalty='l2', random_state=None, tol=0.0001), **params={'C': 0.01, 'penalty': 'l2', 'solver': 'liblinear'})
252 sub_object.set_params(**{sub_name: value})
253 else:
254 # simple objects case
255 if not key in valid_params:
256 raise ValueError('Invalid parameter %s ' 'for estimator %s'
--> 257 % (key, self.__class__.__name__))
258 setattr(self, key, value)
259 return self
260
261 def __repr__(self):
ValueError: Invalid parameter solver for estimator LogisticRegression
___________________________________________________________________________
</code></pre> | <p>I hope you have resolved this issue.</p>
<p>If you call <code>estimator.get_params()</code> (in your case the estimator is <code>LogisticRegression</code>), you can see which parameters this scikit-learn version actually accepts; for the estimator in your traceback they are:</p>
<pre><code>{'C': 1.0, 'class_weight': None, 'dual': False, 'fit_intercept': True, 'intercept_scaling': 1, 'penalty': 'l2', 'random_state': None, 'tol': 0.0001}
</code></pre>
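<p>A quick way to check this from your own session (a sketch; the exact parameter set depends on your scikit-learn version) and to build a grid that only uses supported names:</p>
<pre><code>from sklearn.linear_model import LogisticRegression

print(sorted(LogisticRegression().get_params().keys()))

# if 'solver' is not listed, drop it from the grid so GridSearchCV only sets supported parameters
param_grid = [{'penalty': ['l2'], 'C': [0.01, 0.1, 1]},
              {'penalty': ['l1'], 'C': [0.01, 0.1, 1]}]
</code></pre>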
<p>Those names are different from the ones in your grid: <code>solver</code> is not among them, which is exactly why <code>set_params(solver=...)</code> raises the error.</p> | python|numpy|scikit-learn|logistic-regression|grid-search | 2
1,282 | 62,927,710 | How to get the output I want? | <p>I have started using TensorFlow for machine learning. I am a beginner and don't understand much yet, such as the functions and their purpose. I started with a simple Hello World program. The code I used is this:</p>
<pre><code>import tensorflow as tf
hello = tf.constant('hello Tensorflow!')
sess = tf.Session()
print(sess.run(hello))
</code></pre>
<p>This is the output:</p>
<pre><code>FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
WARNING:tensorflow:From C:/Users/visha/PycharmProjects/Object Detection/A.I. for object detection.py:5: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.
2020-07-16 00:50:02.131405: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
**b'hello Tensorflow!'**
Process finished with exit code 0
</code></pre>
<p><strong>The output I wanted was "hello Tensorflow" but there is a b at the beginning. I was wondering why that is.</strong></p> | <p><code>hello Tensorflow!</code> comes back as a <code>bytes</code> object rather than a <code>str</code>: <code>type('Hello')</code> is <code>str</code>, while <code>type(b'Hello')</code> is <code>bytes</code>, and <code>sess.run(hello)</code> returns the bytes form. The <code>b</code> prefix is only how Python displays bytes; the text itself is unchanged. If you want the plain string, decode it before printing, for example <code>print(sess.run(hello).decode('utf-8'))</code>.</p> | python|tensorflow | 0
1,283 | 62,972,802 | Python package with sample datasets but deferred download? | <p>I have a data analysis tool that I made a Python package for and I'd like to include some sample datasets, but I don't want to include all the datasets directly in the Python package because it will bloat the size and slow down install for people who don't use them.</p>
<p>The behavior I want is when a sample dataset is referenced it automatically gets downloaded from a URL and saved to the package locally, but then the next time it is used it will read the local version instead of re-downloading it. And this caching should persist permanently for my package, not only the during of the Python instance.</p>
<p>How can I do this?</p> | <p>I ended up making a folder under AppData using the <code>appdirs</code> package</p>
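<p>(For reference, a quick way to see where that cache directory lands on a given machine; the exact path differs per OS:)</p>
<pre><code>import os
from appdirs import user_data_dir

print(os.path.join(user_data_dir(), "pandasgui", "dataset_files"))
</code></pre>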
<hr />
<p><code>datasets.py</code></p>
<pre><code>import os
import pandas as pd
from pandasgui.utility import get_logger
from appdirs import user_data_dir
from tqdm import tqdm
logger = get_logger(__name__)
__all__ = ["all_datasets",
"country_indicators",
"us_shooting_incidents",
"diamonds",
"pokemon",
"anscombe",
"attention",
"car_crashes",
"dots",
"exercise",
"flights",
"fmri",
"gammas",
"geyser",
"iris",
"mpg",
"penguins",
"planets",
"tips",
"titanic",
"gapminder",
"stockdata"]
dataset_names = [x for x in __all__ if x != "all_datasets"]
all_datasets = {}
root_data_dir = os.path.join(user_data_dir(), "pandasgui", "dataset_files")
# Open the local data CSVs if they all exist
if all([os.path.exists(os.path.join(root_data_dir, f"{name}.csv")) for name in dataset_names]):
for name in dataset_names:
data_path = os.path.join(root_data_dir, f"{name}.csv")
if os.path.isfile(data_path):
all_datasets[name] = pd.read_csv(data_path)
# Download data if it doesn't exist locally
else:
os.makedirs(root_data_dir, exist_ok=True)
logger.info(f"Downloading PandasGui sample datasets into {root_data_dir}...")
pbar = tqdm(dataset_names, bar_format='{percentage:3.0f}% {bar} | {desc}')
for name in pbar:
pbar.set_description(f"{name}.csv")
data_path = os.path.join(root_data_dir, f"{name}.csv")
if os.path.isfile(data_path):
all_datasets[name] = pd.read_csv(data_path)
else:
all_datasets[name] = pd.read_csv(
os.path.join("https://raw.githubusercontent.com/adamerose/datasets/master/",
f"{name}.csv"))
all_datasets[name].to_csv(data_path, index=False)
# Add the datasets to globals so they can be imported like `from pandasgui.datasets import iris`
for name in all_datasets.keys():
globals()[name] = all_datasets[name]
</code></pre> | python|pandas|pip|dataset | 0 |
1,284 | 63,095,126 | Python3 numpy array size compare to list | <p>I always thought numpy array is more compact and takes less memory size compare to list, however, for a 3-D float64 np array,</p>
<pre><code>print (sys.getsizeof(result2)/1024./1024./1024.)
print (sys.getsizeof(result2.astype('float16'))/1024./1024./1024.)
print (sys.getsizeof(list(result2))/1024./1024./1024.)
print (sys.getsizeof(result2.tolist())/1024./1024./1024.)
</code></pre>
<p>The output is,</p>
<pre><code>0.6521792411804199
0.16304489970207214
0.00033977627754211426
0.0003019943833351135
</code></pre>
<p>The list appears to take much less memory. Is it valid to use <code>sys.getsizeof</code> for a list? If it is, can I do anything to improve the numpy array's memory use?</p>
<p>######################</p>
<p>Using pympler @J_H (it seems pympler can't deal with arrays inside a list, like list(a 3-D array)).</p>
<pre><code>print (result2.nbytes/1024./1024./1024.)
print (asizeof.asizeof(result2)/1024./1024./1024.)
print (result2.astype('float16').nbytes/1024./1024./1024.)
print (asizeof.asizeof(list(result2))/1024./1024./1024.)
print (asizeof.asizeof(result2.tolist())/1024./1024./1024.)
0.6521791219711304
0.6521792411804199
0.1630447804927826
0.004566863179206848
4.078836984932423
</code></pre>
<p>Thank you all!</p> | <p>You show that each of your <code>list</code> elements consumes 8 bytes.</p>
<p>But each element is just a pointer to a 24-byte float object.</p>
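<p>A small sketch of why the list numbers look deceptively small: <code>sys.getsizeof</code> is shallow, while <code>ndarray.nbytes</code> counts the actual data buffer.</p>
<pre><code>import sys
import numpy as np

arr = np.zeros((100, 100, 100))   # 10**6 float64 values -> 8,000,000 bytes of data
lst = arr.tolist()                # nested Python lists of float objects

print(arr.nbytes)                 # 8000000: the array's data buffer
print(sys.getsizeof(arr))         # roughly nbytes plus a small ndarray header
print(sys.getsizeof(lst))         # only the outer list object and its 100 pointers
</code></pre>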
<p>Additionally, when you start with a 3-D array,
you'll be looking at lists within lists.
You could recurse through the data structures yourself
to accurately add up the allocated bytes.
Or you could use <a href="https://pythonhosted.org/Pympler/#usage-examples" rel="nofollow noreferrer">pympler</a>.</p> | python-3.x|numpy | 1 |
1,285 | 67,707,669 | Pyarrow: How to specify the dtype of partition keys in partitioned parquet datasets? | <p>I would like to create a partitioned pyarrow dataset with strings as partition keys:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import pyarrow as pa
from pyarrow import parquet as pq
data = {'key': ['001', '001', '002', '002'],
'value_1': [10, 20, 100, 200],
'value_2': ['a', 'b', 'a', 'b']}
df = pd.DataFrame(data)
tbl = pa.Table.from_pandas(df)
# write data to partitioned dataset
pq.write_to_dataset(tbl, 'partitioned_data', partition_cols=['key'])
print(pa.__version__)
print(df)
print(tbl)
</code></pre>
<pre><code>4.0.0
key value_1 value_2
0 001 10 a
1 001 20 b
2 002 100 a
3 002 200 b
pyarrow.Table
keys: string
values_1: int64
values_2: string
</code></pre>
<p>The partitioned dataset appears as follows in the filesystem:</p>
<pre><code>partitioned_data
├── key=001
│ └── 9acb170b99d14f1eba72af3697c71b8c.parquet
└── key=002
└── 836f365800f0449b956eb35de67bbc8c.parquet
</code></pre>
<p>Keys and values appear as folder names for the different partitions as expected. Now I import the partitioned data again:</p>
<pre class="lang-py prettyprint-override"><code># read the partitioned data and convert to DataFrame
imported_tbl = pq.read_table('partitioned_data')
imported_df = imported_tbl.to_pandas()
print(imported_tbl)
print(imported_df)
</code></pre>
<pre><code>pyarrow.Table
value_1: int64
value_2: string
key: dictionary<values=int32, indices=int32, ordered=0>
value_1 value_2 key
0 10 a 1
1 20 b 1
2 100 a 2
3 200 b 2
</code></pre>
<p>In the imported data, the dtype of 'key' has changed from <em>string</em> to <em>dictionary<values=int32</em>, resulting in incorrect values. In particular, the trailing zeros are lost ('001' becomes 1).</p>
<p>Is there any way to specify the dtype of the key, either on export or import, to preserve the values?</p> | <p>With this file structure, there is no explicit metadata (or schema information) about the partition keys stored anywhere, so <code>pq.read_table</code> tries to guess the type. In your case (even with the trailing zeros) it can't tell that the key is a string and guesses that it is an integer.</p>
<p>You can use the <code>dataset</code> api in order to provide some information about the partition:</p>
<pre><code>my_partitioning = pa.dataset.partitioning(pa.schema([pa.field("key", pa.string())]), flavor='hive')
my_data_set = pa.dataset.dataset("partitioned_data", partitioning=my_partitioning)
table = my_data_set.to_table()
table.to_pandas()
| | value_1 | value_2 | key |
|---:|----------:|:----------|------:|
| 0 | 10 | a | 001 |
| 1 | 20 | b | 001 |
| 2 | 100 | a | 002 |
| 3 | 200 | b | 002 |
</code></pre> | python|pandas|parquet|pyarrow | 2 |
1,286 | 67,699,830 | Pass every excel file in python from assigning a specific name | <p>I have the following excel files in a directory:</p>
<p>excel_sheet_01</p>
<p>excel_sheet_02</p>
<p>.
.
.</p>
<p>excel_sheet_nm</p>
<p>How can I do using pandas, that every excel sheet gets stored in a dataframe variable whose name
corresponds to the two last digits. i.e. I would get in python the following variables:</p>
<p>01</p>
<p>02</p>
<p>...</p>
<p>nm</p>
<p>Thank you so much</p> | <p>If you do not want to type out every single variable (and especially if you have an unknown number of files), you can store the DataFrames in a list (or dict), for example:</p>
<pre><code>import os
import pandas as pd

# collect the matching Excel files as full paths, so reading works from any working directory
excel_sheets = [f.path for f in os.scandir(path) if not f.is_dir() and 'excel_sheet' in f.name]

# option 1: store the DataFrames in a list and access them by index
my_dataframes = []
for f in excel_sheets:
    my_dataframes.append(pd.read_excel(f))

# option 2: store them in a dict keyed by file name
my_dataframes_dict = {}
for f in excel_sheets:
    my_dataframes_dict[os.path.basename(f)] = pd.read_excel(f)
</code></pre>
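<p>If you specifically want to address each DataFrame by the last two digits of its file name, a dict keyed by that suffix is the closest substitute for separate variables (a sketch assuming names like <code>excel_sheet_01.xlsx</code>):</p>
<pre><code>by_suffix = {}
for f in excel_sheets:
    stem = os.path.splitext(os.path.basename(f))[0]    # e.g. "excel_sheet_01"
    by_suffix[stem.split('_')[-1]] = pd.read_excel(f)  # key "01"

by_suffix['01'].head()
</code></pre>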
<p>In the case of the list you can access it through the index. In the case of the dictionary you can choose whatever (unique) name you want.</p> | python|pandas | 3 |
1,287 | 67,790,590 | How to flatten list of dictionaries in multiple columns of pandas dataframe | <p>I have a dataframe and each record stores a list of dictionaries like this:</p>
<pre><code>row prodect_id recommend_info
0 XQ002 [{"recommend_key":"XXX567","recommend_point":50},
{"recommend_key":"XXX236","recommend_point":20},
{"recommend_key":"XXX090","recommend_point":35}]
1 XQ003 [{"recommend_key":"XXX089","recommend_point":30},
{"recommend_key":"XXX567","recommend_point":20}]
</code></pre>
<p>I would like to flatten lists of dictionaries, so that it will look like this</p>
<pre><code>row prodect_id recommend_info_recommend_key recommend_info_recommend_point
0 XQ002 XXX567 50
1 XQ002 XXX236 20
2 XQ002 XXX090 35
3 XQ003 XXX089 30
4 XQ003 XXX567 20
</code></pre>
<p>I know how to convert only one list of dictionaries to a dataframe.
like this:</p>
<pre><code>d = [{"recommend_key":"XXX089","recommend_point":30},
{"recommend_key":"XXX567","recommend_point":20}]
df = pd.DataFrame(d)
row recommend_key recommend_point
0 XXX089 30
1 XXX567 20
</code></pre>
<p>But I don't know how to do this to a dataframe when there is one column storing list of dicts, or there are multiple columns storing list of dicts</p>
<pre><code>row col_a col_b col_c
0 B001 [{"a":"b"},{"a":"c"}] [{"y":11},{"a":"c"}]
1 D009 [{"c":"o"},{"g":"c"}] [{"y":11},{"a":"c"},{"l":"c"}]
2 G068 [{"c":"b"},{"a":"c"}] [{"a":56},{"d":"c"}]
3 C004 [{"d":"a"},{"b":"c"}] [{"c":22},{"a":"c"},{"b":"c"}]
4 F011 [{"h":"u"},{"d":"c"}] [{"h":27},{"d":"c"}]
</code></pre> | <p>Try:</p>
<pre><code>pd.concat([df.explode('recommend_info').drop(['recommend_info'], axis=1),
df.explode('recommend_info')['recommend_info'].apply(pd.Series)],
axis=1)
</code></pre>
<p>You can repeat the same operation for every column that holds a list of dicts.</p>
<p>Here is an example:</p>
<pre><code>>>> df = pd.DataFrame({'a': [[{3: 4, 5: 6}, {3:8, 5: 1}],
... [{3:2, 5:4}, {3: 8, 5: 10}]],
... 'b': ['X', "Y"]})
>>> df
a b
0 [{3: 4, 5: 6}, {3: 8, 5: 1}] X
1 [{3: 2, 5: 4}, {3: 8, 5: 10}] Y
>>> df = pd.concat([df.explode('a').drop(['a'], axis=1),
... df.explode('a')['a'].apply(pd.Series)],
... axis=1)
>>> df
b 3 5
0 X 4 6
0 X 8 1
1 Y 2 4
1 Y 8 10
</code></pre> | python|pandas|dataframe|dictionary|flatten | 2 |
1,288 | 32,126,758 | Fastest way to create a numpy array from text file | <p>I have 60mb file with lots of lines.</p>
<p>Each line has the following format:</p>
<pre><code>(x,y)
</code></pre>
<p>Each line will be parsed as a numpy vector at shape (1,2).</p>
<p>At the end it should be concatenated into a big numpy array of shape (N,2)
where N is the number of lines.</p>
<p>What is the fastest way to do that? Because now it takes too much time(more than 30 min).</p>
<p>My Code:</p>
<pre><code>with open(fname) as f:
for line in f:
point = parse_vector_string_to_array(line)
if points is None:
points = point
else:
points = np.vstack((points, point))
</code></pre>
<p>Where the parser is:</p>
<pre><code>def parse_vector_string_to_array(string):
x, y =eval(string)
array = np.array([[x, y]])
return array
</code></pre> | <p>One thing that would improve speed is to imitate <code>genfromtxt</code> and accumulate each line in a list of lists (or tuples). Then do one <code>np.array</code> at the end.</p>
<p>for example (roughly):</p>
<pre><code>points = []
for line in file:
x,y = eval(line)
points.append((x,y))
result = np.array(points)
</code></pre>
<p>Since your file lines look like tuples I'll leave your <code>eval</code> parsing. We don't usually recommend <code>eval</code>, but in this limited case it might be the simplest.</p>
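<p>If the <code>eval</code> call itself turns out to be a bottleneck, a cheaper parse (a rough, untested sketch assuming every line looks exactly like <code>(x,y)</code>) is to strip the parentheses and split on the comma:</p>
<pre><code>points = []
with open(fname) as f:
    for line in f:
        x, y = line.strip().strip('()').split(',')
        points.append((float(x), float(y)))
result = np.array(points)
</code></pre>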
<p>You could try to make <code>genfromtxt</code> read this, but the <code>()</code> on each line will give some headaches.</p>
<p><code>pandas</code> is supposed to have a faster <code>csv</code> reader, but I don't know if it can be configured to handle this format or not.</p> | python|arrays|performance|numpy | 2
1,289 | 41,668,786 | How do you create a dynamic_rnn with dynamic "zero_state" (Fails with Inference) | <p>I have been working with the "dynamic_rnn" to create a model.</p>
<p>The model is based upon a 80 time period signal, and I want to zero the "initial_state" before each run so I have setup the following code fragment to accomplish this:</p>
<pre><code>state = cell_L1.zero_state(self.BatchSize,Xinputs.dtype)
outputs, outState = rnn.dynamic_rnn(cell_L1,Xinputs,initial_state=state, dtype=tf.float32)
</code></pre>
<p>This works great for the training process. The problem is once I go to the inference, where my BatchSize = 1, I get an error back as the rnn "state" doesn't match the new Xinputs shape. So what I figured is I need to make "self.BatchSize" based upon the input batch size rather than hard code it. I tried many different approaches, and none of them have worked. I would rather not pass a bunch of zeros through the feed_dict as it is a constant based upon the batch size.</p>
<p>Here are some of my attempts. They all generally fail since the input size is unknown upon building the graph:</p>
<pre><code>state = cell_L1.zero_state(Xinputs.get_shape()[0],Xinputs.dtype)
</code></pre>
<p>.....</p>
<pre><code>state = tf.zeros([Xinputs.get_shape()[0], self.state_size], Xinputs.dtype, name="RnnInitializer")
</code></pre>
<p>Another approach, thinking the initializer might not get called until run-time, but still failed at graph build:</p>
<pre><code>init = lambda shape, dtype: np.zeros(*shape)
state = tf.get_variable("state", shape=[Xinputs.get_shape()[0], self.state_size],initializer=init)
</code></pre>
<p>Is there a way to get this constant initial state to be created dynamically or do I need to reset it through the feed_dict with tensor-serving code? Is there a clever way to do this only once within the graph maybe with an tf.Variable.assign?</p> | <p>The solution to the problem was how to obtain the "batch_size" such that the variable is not hard coded.</p>
<p>This was the approach taken from the given example (which turned out not to work for inference):</p>
<pre><code>Xinputs = tf.placeholder(tf.int32, (None, self.sequence_size, self.num_params), name="input")
state = cell_L1.zero_state(Xinputs.get_shape()[0],Xinputs.dtype)
</code></pre>
<p>The problem is the use of <code>get_shape()[0]</code>: it returns the static shape of the tensor and takes the batch_size value at [0]. The documentation isn't very clear about this, but that value appears to be a constant fixed at graph-creation time, so when you load the graph for inference the batch size is still hard coded.</p>
<p>Using the <code>tf.shape()</code> function seems to do the trick. It doesn't return the static shape but a tensor, so the value is resolved at run time. Using this code fragment solved the problem of training with a batch of 128 and then loading the graph into TensorFlow Serving for inference with a batch of just 1.</p>
<pre><code>Xinputs = tf.placeholder(tf.int32, (None, self.sequence_size, self.num_params), name="input")
batch_size = tf.shape(Xinputs)[0]
state = self.cell_L1.zero_state(batch_size,Xinputs.dtype)
</code></pre>
<p>Here is a good link to TensorFlow FAQ which describes this approach '<strong>How do I build a graph that works with variable batch sizes?</strong>':
<a href="https://www.tensorflow.org/resources/faq" rel="nofollow noreferrer">https://www.tensorflow.org/resources/faq</a></p> | tensorflow|tensorflow-serving | 3 |
1,290 | 41,382,719 | Sum of next n rows in python | <p>I have a dataframe which is grouped at product store day_id level Say it looks like the below and I need to create a column with rolling sum </p>
<pre><code>prod store day_id visits
111 123 1 2
111 123 2 3
111 123 3 1
111 123 4 0
111 123 5 1
111 123 6 0
111 123 7 1
111 123 8 1
111 123 9 2
</code></pre>
<p>need to create a dataframe as below</p>
<pre><code>prod store day_id visits rolling_4_sum cond
111 123 1 2 6 1
111 123 2 3 5 1
111 123 3 1 2 1
111 123 4 0 2 1
111 123 5 1 4 0
111 123 6 0 4 0
111 123 7 1 NA 0
111 123 8 1 NA 0
111 123 9 2 NA 0
</code></pre>
<p>I am looking to create a <strong>cond column</strong> that recursively checks a condition: say, if rolling_4_sum is greater than 5, then mark the next 4 rows as 1; otherwise do nothing, i.e. even if the condition is not met, retain what was already filled before. Do this check for each row up to the 7th row.</p>
<p>How can i achieve this using python ? i am trying </p>
<pre><code>d1['rolling_4_sum'] = d1.groupby(['prod', 'store']).visits.rolling(4).sum()
</code></pre>
<p>but getting an error.</p> | <p>The formation of rolling sums can be done with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.rolling.html#pandas.Series.rolling" rel="nofollow noreferrer"><code>rolling</code></a> method, using boxcar window:</p>
<pre><code>df['rolling_4_sum'] = df.visits.rolling(4, win_type='boxcar', center=True).sum().shift(-2)
</code></pre>
<p>The shift by -2 is because you apparently want the sums to be placed at the left edge of the window. </p>
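<p>The same forward-looking sums can also be obtained without <code>win_type</code> (which requires scipy), as a plain rolling sum shifted to the left edge:</p>
<pre><code>df['rolling_4_sum'] = df.visits.rolling(4).sum().shift(-3)
</code></pre>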
<p>Next, the condition on the rolling sums (the code below flags rows where a recent rolling sum was less than 7):</p>
<pre><code>df['cond'] = 0
for k in range(1, 4):
df.loc[df.rolling_4_sum.shift(k) < 7, 'cond'] = 1
</code></pre>
<p>A new column is inserted and filled with 0; then for each k=1, 2, 3 we look k steps back, and if that rolling sum is less than 7 the condition is set to 1.</p> | python-3.x|pandas | 4
1,291 | 27,488,622 | How to get a value from every column in a Numpy matrix | <p>I'd like to get the index of a value for every column in a matrix <code>M</code>. For example:</p>
<pre><code>M = matrix([[0, 1, 0],
[4, 2, 4],
[3, 4, 1],
[1, 3, 2],
[2, 0, 3]])
</code></pre>
<p>In pseudocode, I'd like to do something like this:</p>
<pre><code>for col in M:
idx = numpy.where(M[col]==0) # Only for columns!
</code></pre>
<p>and have <code>idx</code> be <code>0</code>, <code>4</code>, <code>0</code> for each column.</p>
<p>I have tried to use <code>where</code>, but I don't understand the return value, which is a tuple of matrices. </p> | <p>The tuple of matrices is a collection of items suited for indexing. The output will have the shape of the indexing matrices (or arrays), and each item in the output will be selected from the original array using the first array as the index of the first dimension, the second as the index of the second dimension, and so on. In other words, this: </p>
<pre><code>>>> numpy.where(M == 0)
(matrix([[0, 0, 4]]), matrix([[0, 2, 1]]))
>>> row, col = numpy.where(M == 0)
>>> M[row, col]
matrix([[0, 0, 0]])
>>> M[numpy.where(M == 0)] = 1000
>>> M
matrix([[1000, 1, 1000],
[ 4, 2, 4],
[ 3, 4, 1],
[ 1, 3, 2],
[ 2, 1000, 3]])
</code></pre>
<p>The sequence may be what's confusing you. It proceeds in flattened order -- so <code>M[0,2]</code> appears second, not third. If you need to reorder them, you could do this: </p>
<pre><code>>>> row[0,col.argsort()]
matrix([[0, 4, 0]])
</code></pre>
<p>You also might be better off using arrays instead of matrices. That way you can manipulate the shape of the arrays, which is often useful! Also note <a href="https://stackoverflow.com/a/27489086/577088">ajcr</a>'s transpose-based trick, which is probably preferable to using <code>argsort</code>.</p>
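<p>If you only need the first matching row index per column (and every column is guaranteed to contain a 0), another small option is <code>argmax</code> on the boolean mask, which sidesteps the tuple entirely:</p>
<pre><code>>>> numpy.asarray(M == 0).argmax(axis=0)
array([0, 4, 0])
</code></pre>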
<p>Finally, there is also a <code>nonzero</code> method that does the same thing as <code>where</code> in this case. Using the transpose trick now:</p>
<pre><code>>>> (M == 0).T.nonzero()
(matrix([[0, 1, 2]]), matrix([[0, 4, 0]]))
</code></pre> | python|numpy|matrix | 3 |
1,292 | 27,888,835 | IPython with and Without Notebook Differences | <p>Two of the most useful additions to Python, and my personal favorites, are IPython and the IPython Notebook.</p>
<p>I was watching and repeating what's shown in this <a href="https://www.youtube.com/watch?v=3Fp1zn5ao2M" rel="nofollow">video</a> and found some issues. </p>
<p>As specified in the video, I use <code>ipython --pylab</code> to launch IPython.
And I use <code>ipython notebook --pylab</code> to launch IPython Notebook.</p>
<p>Issues: <code>scatter()</code> would not work in IPython NoteBook (I get a <code>NameError</code>) but works fine in IPython.
Same is the case with the function <code>rand()</code>. I guess <code>pylab</code> is loaded along with <code>matplotlib</code>, <code>scipy</code>, <code>numpy</code>, <code>random</code> and other essential libraries. </p>
<p>Please tell me if I am wrong. By the way, both my IPython and IPython NoteBook load from my <strong>Anaconda Dist.</strong>, if that means anything.</p>
<p>Also any resource where I can know what all is loaded when I use <code>--pylab</code> would help. </p>
<p>Thanks.</p> | <p>This is what the <code>pylab</code> flag does:</p>
<pre><code>import numpy
import matplotlib
from matplotlib import pylab, mlab, pyplot
np = numpy
plt = pyplot
from IPython.core.pylabtools import figsize, getfigs
from pylab import *
from numpy import *
</code></pre>
<p>That said, it is recommended that you launch the notebook without the flag (just <code>ipython notebook</code>) and then run:</p>
<pre><code>%matplotlib inline
</code></pre>
<p>For more details see <a href="http://carreau.github.io/posts/10-No-PyLab-Thanks.ipynb.html" rel="nofollow">No Pylab Thanks</a>.</p>
<p>Regarding your scatter problem, you should try the following:</p>
<pre><code>%matplotlib inline
import matplotlib.pyplot as plt
plt.scatter([1,2], [1,2])
</code></pre> | python|numpy|matplotlib|ipython|ipython-notebook | 2 |
1,293 | 27,574,563 | Correcting cumulatives in Pandas | <p>I have a DataFrame that has the following columns:</p>
<blockquote>
<p><strong>DeviceId</strong> | <strong>Timestamp</strong> | <strong>Total_Data</strong><br>
001 08/12/2014 500<br>
001 08/13/2014 600<br>
001 08/14/2014 750<br>
001 08/15/2014 150 (device restarted here) (correct value:750+150)<br>
001 08/16/2014 300 (correct value: 750+150+300)<br>
002 10/01/2014 98<br>
...<br>
.. </p>
</blockquote>
<p>For a bunch of different devices, I have the data they consumed on different occasions (noted by the timestamps).</p>
<p>The <strong>Total_Data</strong> column is cumulative in nature and therefore, for a given device, calculates the total data consumed over time. For example, if device A used 3KB on <code>12 August 2012</code> and 5KB on <code>14 August 2012</code>, the DataFrame would have two entries with the second entry having its <strong>Total_Data</strong> value as 8KB.</p>
<p>The glitch however, is that the cumulative values reset to 0 (and started counting again) when the devices were rebooted. And therefore, need to be corrected. What would be the best way to alter my current DataFrame in Pandas to solve this problem</p>
<p>Until now, I've thought of iterating through the DataFrame on a row by row basis but it just seems too complex.</p> | <ol>
<li>First divide your Dataframe according to your reset</li>
<li>Make cumulative sum in each part</li>
<li>Add extra value from previous part</li>
</ol>
<p>Code is given below:</p>
<pre><code>grouped = df.groupby((df.TotalData.diff() <= 0).cumsum())  # start a new group every time the counter drops (reboot)
parts = [g.reset_index(drop=True) for k, g in grouped]      # one DataFrame per reboot segment
for i in range(1, len(parts)):
    # make each later segment cumulative again and offset it by the final total of the previous segment
    parts[i]['TotalData'] = parts[i]['TotalData'].cumsum().add(parts[i - 1]['TotalData'].max())
DF = pd.concat(parts)
print DF
</code></pre>
<p>Result:</p>
<pre><code> Date TotalData
0 2014-08-12 500
1 2014-08-13 600
2 2014-08-14 750
0 2014-08-15 900
1 2014-08-16 1200
</code></pre> | python|pandas | 0 |
1,294 | 61,462,597 | Reading numbers into grid | <p>I have a numbers grid, that looks like this and goes on for a while further.</p>
<pre><code>08 02 22 97 38 15 00 40 00 75 04 05 07 78 52 12 50 77 91 08
49 49 99 40 17 81 18 57 60 87 17 40 98 43 69 48 04 56 62 00
81 49 31 73 55 79 14 29 93 71 40 67 53 88 30 03 49 13 36 65
52 70 95 23 04 60 11 42 69 24 68 56 01 32 56 71 37 02 36 91
22 31 16 71 51 67 63 89 41 92 36 54 22 40 40 28 66 33 13 80
24 47 32 60 99 03 45 02 44 75 33 53 78 36 84 20 35 17 12 50
32 98 81 28 64 23 67 10 26 38 40 67 59 54 70 66 18 38 64 70
67 26 20 68 02 62 12 20 95 63 94 39 63 08 40 91 66 49 94 21
24 55 58 05 66 73 99 26 97 17 78 78 96 83 14 88 34 89 63 72
21 36 23 09 75 00 76 44 20 45 35 14 00 61 33 97 34 31 33 95
</code></pre>
<p>I saved this grid in a .txt file assigned it to a file variable like so:</p>
<pre><code>grid = open("grid.txt" )
print(grid.readlines())
grid.close()
</code></pre>
<p>When I print out the contents of grid with <code>grid.readlines()</code> some problems pop up: firstly, it is saved as a list of long strings (i.e. every line is one list entry); secondly, there is the newline character <code>\n</code> at the end of every list entry. Lastly, to convert this data into a numpy array as the grid it is, numbers can't start with a zero, i.e. 02 in the first row, second column should be 2.</p>
<p>I'm pretty new to numpy. Is there any way to convert this data into a numpy array that would save me all the legwork of manually implementing an edited version into my code?
The only python read possibilites I know of are of csv or excel files.</p>
<p>Best of days to all of you :)</p> | <p>Some notes:</p>
<ol>
<li>Make sure you are using <code>open()</code> with the keyword <code>with</code>. Reference <a href="https://docs.python.org/3/tutorial/inputoutput.html#reading-and-writing-files" rel="nofollow noreferrer">here</a>.</li>
</ol>
<blockquote>
<p>It is good practice to use the with keyword when dealing with file objects. The advantage is that the file is properly closed after its suite finishes, even if an exception is raised at some point.</p>
</blockquote>
<ol start="2">
<li>You can use <a href="https://docs.python.org/3/library/stdtypes.html#str.splitlines" rel="nofollow noreferrer"><code>str.splitlines()</code></a> to achieve this. </li>
</ol>
<hr>
<pre class="lang-py prettyprint-override"><code>with open('file.txt') as f:
lines = f.read().splitlines()
print(lines)
</code></pre>
<p>Outputs:</p>
<pre><code>['08 02 22 97 38 15 00 40 00 75 04 05 07 78 52 12 50 77 91 08', '49 49 99 40 17 81 18 57 60 87 17 40 98 43 69 48 04 56 62 00', '81 49 31 73 55 79 14 29 93 71 40 67 53 88 30 03 49 13 36 65', '52 70 95 23 04 60 11 42 69 24 68 56 01 32 56 71 37 02 36 91', '22 31 16 71 51 67 63 89 41 92 36 54 22 40 40 28 66 33 13 80', '24 47 32 60 99 03 45 02 44 75 33 53 78 36 84 20 35 17 12 50', '32 98 81 28 64 23 67 10 26 38 40 67 59 54 70 66 18 38 64 70', '67 26 20 68 02 62 12 20 95 63 94 39 63 08 40 91 66 49 94 21', '24 55 58 05 66 73 99 26 97 17 78 78 96 83 14 88 34 89 63 72', '21 36 23 09 75 00 76 44 20 45 35 14 00 61 33 97 34 31 33 95']
</code></pre>
<hr>
<pre class="lang-py prettyprint-override"><code>import numpy as np
file = np.loadtxt('file.txt')
print(file)
</code></pre>
<pre><code>array([[ 8., 2., 22., 97., 38., 15., 0., 40., 0., 75., 4., 5., 7.,
78., 52., 12., 50., 77., 91., 8.],
[49., 49., 99., 40., 17., 81., 18., 57., 60., 87., 17., 40., 98.,
43., 69., 48., 4., 56., 62., 0.],
[81., 49., 31., 73., 55., 79., 14., 29., 93., 71., 40., 67., 53.,
88., 30., 3., 49., 13., 36., 65.],
[52., 70., 95., 23., 4., 60., 11., 42., 69., 24., 68., 56., 1.,
32., 56., 71., 37., 2., 36., 91.],
[22., 31., 16., 71., 51., 67., 63., 89., 41., 92., 36., 54., 22.,
40., 40., 28., 66., 33., 13., 80.],
[24., 47., 32., 60., 99., 3., 45., 2., 44., 75., 33., 53., 78.,
36., 84., 20., 35., 17., 12., 50.],
[32., 98., 81., 28., 64., 23., 67., 10., 26., 38., 40., 67., 59.,
54., 70., 66., 18., 38., 64., 70.],
[67., 26., 20., 68., 2., 62., 12., 20., 95., 63., 94., 39., 63.,
8., 40., 91., 66., 49., 94., 21.],
[24., 55., 58., 5., 66., 73., 99., 26., 97., 17., 78., 78., 96.,
83., 14., 88., 34., 89., 63., 72.],
[21., 36., 23., 9., 75., 0., 76., 44., 20., 45., 35., 14., 0.,
61., 33., 97., 34., 31., 33., 95.]])
</code></pre> | python|numpy | 1 |
1,295 | 61,384,421 | LSTM, Exploding gradients or wrong approach? | <p>Having a dataset of monthly activity of users, segment to country and browser. each row is 1 day of user activity summed up and a score for that daily activity. For example: number of sessions per day is one feature. The score is a floating point number calculated from that daily features.</p>
<p>My goal is to try and predict the <strong>"average user"</strong> score at the end of the month using just 2 days of users data.</p>
<p>I have 25 month of data, some are full and some have only partial of the total days, in order to have a fixed batch size I've padded the sequences like so:</p>
<pre><code>from keras.preprocessing.sequence import pad_sequences
padded_sequences = pad_sequences(sequences, maxlen=None, dtype='float64', padding='pre', truncating='post', value=-10.)
</code></pre>
<p>so sequences shorter than the max were padded with rows of -10.<br>
I've decided to create an LSTM model to digest the data, so at the end of each batch the model should predict the average user score. Later I'll try to predict using just a 2-day sample.</p>
<p>My Model look like that:</p>
<pre><code>import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout,Dense,Masking
from tensorflow.keras import metrics
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.optimizers import Adam
import datetime, os
model = Sequential()
opt = Adam(learning_rate=0.0001, clipnorm=1)
num_samples = train_x.shape[1]
num_features = train_x.shape[2]
model.add(Masking(mask_value=-10., input_shape=(num_samples, num_features)))
model.add(LSTM(64, return_sequences=True, activation='relu'))
model.add(Dropout(0.3))
#this is the last LSTM layer, use return_sequences=False
model.add(LSTM(64, return_sequences=False, stateful=False, activation='relu'))
model.add(Dropout(0.3))
model.add(Dense(1))
model.compile(loss='mse', optimizer='adam' ,metrics=['acc',metrics.mean_squared_error])
logdir = os.path.join(logs_base_dir, datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))
tensorboard_callback = TensorBoard(log_dir=logdir, update_freq=1)
model.summary()
Model: "sequential_13"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
masking_5 (Masking) (None, 4283, 16) 0
_________________________________________________________________
lstm_20 (LSTM) (None, 4283, 64) 20736
_________________________________________________________________
dropout_14 (Dropout) (None, 4283, 64) 0
_________________________________________________________________
lstm_21 (LSTM) (None, 64) 33024
_________________________________________________________________
dropout_15 (Dropout) (None, 64) 0
_________________________________________________________________
dense_9 (Dense) (None, 1) 65
=================================================================
Total params: 53,825
Trainable params: 53,825
Non-trainable params: 0
_________________________________________________________________
</code></pre>
<p>While training I get NaN value on the 19th epoch</p>
<pre><code>Epoch 16/1000
16/16 [==============================] - 14s 855ms/sample - loss: 298.8135 - acc: 0.0000e+00 - mean_squared_error: 298.8135 - val_loss: 220.7307 - val_acc: 0.0000e+00 - val_mean_squared_error: 220.7307
Epoch 17/1000
16/16 [==============================] - 14s 846ms/sample - loss: 290.3051 - acc: 0.0000e+00 - mean_squared_error: 290.3051 - val_loss: 205.3393 - val_acc: 0.0000e+00 - val_mean_squared_error: 205.3393
Epoch 18/1000
16/16 [==============================] - 14s 869ms/sample - loss: 272.1889 - acc: 0.0000e+00 - mean_squared_error: 272.1889 - val_loss: nan - val_acc: 0.0000e+00 - val_mean_squared_error: nan
Epoch 19/1000
16/16 [==============================] - 14s 852ms/sample - loss: nan - acc: 0.0000e+00 - mean_squared_error: nan - val_loss: nan - val_acc: 0.0000e+00 - val_mean_squared_error: nan
Epoch 20/1000
16/16 [==============================] - 14s 856ms/sample - loss: nan - acc: 0.0000e+00 - mean_squared_error: nan - val_loss: nan - val_acc: 0.0000e+00 - val_mean_squared_error: nan
Epoch 21/1000
</code></pre>
<p>I tried to apply the methods described <a href="https://stackoverflow.com/questions/37232782/nan-loss-when-training-regression-network">here</a> with no real success.</p>
<p><strong>Update:</strong>
I've changed my activation from relu to tanh and it solved the NaN issue. However it seems that the accuracy of my model stays 0 while the loss goes down</p>
<pre><code>Epoch 100/1000
16/16 [==============================] - 14s 869ms/sample - loss: 22.8179 - acc: 0.0000e+00 - mean_squared_error: 22.8179 - val_loss: 11.7422 - val_acc: 0.0000e+00 - val_mean_squared_error: 11.7422
</code></pre>
<hr>
<p><strong>Q:</strong> What am I doing wrong here?</p> | <p>You are solving a regression task, using accuracy is not meaningful here.</p>
<p>Use <code>mean_absolute_error</code> to check whether your error is decreasing over time.</p>
<p>Instead of predicting the raw score directly, you can bound the score to <code>(0, 1)</code>.</p>
<p>Just use a min-max normalization to bring the target into that range: <a href="https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MinMaxScaler.html" rel="nofollow noreferrer">https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MinMaxScaler.html</a></p>
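<p>A minimal sketch of that scaling step (assuming <code>train_y</code> is shaped <code>(n_samples, 1)</code>):</p>
<pre><code>from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler()                         # maps the scores into [0, 1]
train_y_scaled = scaler.fit_transform(train_y)
# after predicting, map back to the original score scale:
# y_pred = scaler.inverse_transform(model.predict(test_x))
</code></pre>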
<p>After that you can use a sigmoid activation in the last layer.</p>
<p>Also, you're using rather long sequences for this simple model (<code>4283</code> time steps); how skewed are your sequence lengths?</p>
<p>Maybe plot a histogram of all the sequence lengths and see whether <code>4283</code> is, in fact, a good choice. You may be able to bring this down to something like <code>512</code>, which could be easier for the model.</p>
<p>Also, padding with -10 seems an odd choice: is it something specific to your data, or was it chosen arbitrarily? The -10 also suggests you're not normalizing your input data, which can become a problem for an LSTM with relu; maybe you should try normalizing it before training.</p>
<p>After these add a validation plot of the mean absolute error if the performance is still not good.</p> | python|tensorflow|keras | 1 |
1,296 | 61,427,432 | Linear regression with multiple features - How to make a prediction after training a neural network using an array | <p>I designed an artificial neural networks model following the tutorial in here: <a href="https://www.tensorflow.org/tutorials/keras/regression" rel="nofollow noreferrer">https://www.tensorflow.org/tutorials/keras/regression</a></p>
<p>Afterwards, I saved the model using model.save(), and I tried loading it into a different notebook because that's how i expect people use trained models (importing them). Also I'm trying to design a code that allows me to predict any number of values i want (6, 7, 8, 2, whatever), so I'm trying to get this prediction data into an array to feed it to model_predict. </p>
<p>I was trying to make a simple prediction, but I'm failing everytime. How do I use model.predict() in situations like this?</p>
<p>Here's the code I was trying to use: </p>
<pre><code>import pandas as pd
from sklearn import datasets
import tensorflow as tf
import itertools
model = tf.keras.models.load_model('MPG_Model.model')
prediction_input = {
'Cylinders' : [4],
'Displacement' : [140.0],
'Horsepower' : [86.0],
'Weight' : [2790.0],
'Acceleration' : [15.6],
'Model Year' : [82],
'Origin' : [1],
}
dataset = tf.convert_to_tensor(prediction_input)
predictions = model.predict(dataset).flatten()
</code></pre>
<p>It returns the following error message: </p>
<pre><code>ValueError: Attempt to convert a value ({'Cylinders': [4], 'Displacement': [140.0], 'Horsepower': [86.0], 'Weight': [2790.0], 'Acceleration': [15.6], 'Model Year': [82], 'Origin': [1]}) with an unsupported type (<class 'dict'>) to a Tensor.
</code></pre>
<p>What should I do?</p> | <p>The error you describe in your comment arises because your model expects an input of shape (9, n), where 'n' is the number of data points you are feeding in; that's why it says <code>(9,)</code> is expected. What you're actually feeding in when you attempt to predict is a single vector of size 9, which in two dimensions is (1, 9); that's why it says it is getting <code>(1,)</code>. You can fix this by reshaping the input from (1, 9) to (9, 1). Do this just before you call the <code>predict()</code> method:</p>
<pre><code>dataset = tf.reshape(dataset, [9, 1])
</code></pre> | python|tensorflow|neural-network|linear-regression | 0 |
1,297 | 68,726,799 | How should I implement a tf.keras.Metric that computes on the whole prediction? | <p>The <code>tf.keras.Metric</code> interface provides a useful tool for implementing additive metrics such as loss/accuracy. The interface is designed to update on a batch when <code>update_state(self, y_pred, y_true)</code> is called and the result should be returned at <code>result(self)</code>. However when implementing some metrics like FID, Inception Score, Expected Calibration Error, we have to look at the whole set of the predictions instead of iterative views of individual samples. How am I supposed to implement such custom metrics in Tensorflow? Should I use another API?</p> | <p>As I understand you question;</p>
<pre><code>class BinaryTruePositives(tf.keras.metrics.Metric):
def __init__(self, name='binary_true_positives', **kwargs):
super(BinaryTruePositives, self).__init__(name=name, **kwargs)
self.true_positives = self.add_weight(name='tp', initializer='zeros')
def update_state(self, y_true, y_pred, sample_weight=None):
y_true = tf.cast(y_true, tf.bool)
y_pred = tf.cast(y_pred, tf.bool)
values = tf.logical_and(tf.equal(y_true, True), tf.equal(y_pred, True))
values = tf.cast(values, self.dtype)
if sample_weight is not None:
sample_weight = tf.cast(sample_weight, self.dtype)
sample_weight = tf.broadcast_to(sample_weight, values.shape)
values = tf.multiply(values, sample_weight)
self.true_positives.assign_add(tf.reduce_sum(values))
def result(self):
return self.true_positives
</code></pre>
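<p>The <code>BinaryTruePositives</code> pattern above works per batch. If your metric genuinely needs the full set of predictions (FID, Inception Score, Expected Calibration Error), one workaround (a sketch, not an official recipe) is to gather every prediction in a Keras callback at the end of each epoch and compute the metric there:</p>
<pre><code>import numpy as np
import tensorflow as tf

class WholeSetMetric(tf.keras.callbacks.Callback):
    """Compute a metric over ALL validation predictions at the end of each epoch."""
    def __init__(self, val_data, metric_fn):
        super().__init__()
        self.val_data = val_data      # e.g. a tf.data.Dataset yielding (x, y) batches
        self.metric_fn = metric_fn    # callable taking (y_true, y_pred) for the whole set

    def on_epoch_end(self, epoch, logs=None):
        y_true, y_pred = [], []
        for x, y in self.val_data:
            y_true.append(y.numpy())
            y_pred.append(self.model.predict(x, verbose=0))
        score = self.metric_fn(np.concatenate(y_true), np.concatenate(y_pred))
        print(f"epoch {epoch}: whole-set metric = {score:.4f}")
</code></pre>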
<p>[<code>BinaryTruePositives</code> above is the example from the <a href="https://www.tensorflow.org/api_docs/python/tf/keras/metrics/Metric" rel="nofollow noreferrer">tf2 docs</a>.]<br />
In its <code>update_state()</code> method we implement the per-batch metric calculation. In essence, <code>update_state()</code> accumulates a single value for each whole mini-batch, so mini-batch evaluation is possible in TF2; but to support whole-dataset evaluation through this interface alone you would also have to restructure your input pipeline.</p> | python|tensorflow|keras | 0
1,298 | 68,597,540 | Problem with iterations and dataframes to store variables in dict | <p>I received a code I'm trying to reduce and make more flexible.</p>
<p>The code is for obtaining climatical values from a .csv with many entries (+1M).</p>
<p>Since I don't want to overextend the code, I've made so that variables are selected by the user, and therefore, when the user selects this variables via terminal, a new variable is created.</p>
<p>This worked well, but the problem comes when trying to store the climatic median variable for each point (ie, 85 stations).
If I do it like this</p>
<pre><code>for a in range(len(ciudad)):
nombre = ciudad[a] # We obtain the name of each city (~85)
datoa = localidades.loc[localidades['Nombre'] == nombre] # We focus on the "a" city each time
if 'datos_TMax' in locals(): # If variable has been created as the user requested it exists
tmax=datoa.loc[:,['TMax']] # We obtain the "Tmax" for each day in the selected period & iteration city
tmax_m = tmax.mean() # Average for the period in the "a" city
datos_TMax.append(tmax_m) # "datos_TMax" is a list created dinamycally, and for each city the value is appended
</code></pre>
<p>The example above works perfectly. At the end, I obtain a file with the city name and it's max temperatura average for the period user chose.</p>
<p>However, this way of coding it has problems: I've got to repeat an "if" statement for each possible variable, and then, when I transform to pd.DataFrame, I have to make tons of "if" possibilities and combinations so that whatever the case, no error is raised.</p>
<p>Therefore, I decided to do it using a dict where lists (ie: dictionary{'Tmax' : [1, 2, 3, ...]}) would store all the values for each selected variable.</p>
<p>The code for the loop looks like this:</p>
<pre><code>dicvar = {}
for a in range(len(ciudad)):
nombre = ciudad[a] # We obtain the name of each city (~85)
datoa = localidades.loc[localidades['Nombre']==nombre] # We focus on the "a" city each time
for b in bucles: # For each selected user variable, iterate
if b in ['TMax', 'TMin', 'TMed', 'Racha', 'Dir', 'Velmedia', 'Sol', 'Presmax', 'Presmin']: # This is because some variables require different treatment
valor = datoa.loc[:,[b]] # Obtain the "b" value for each day in city "a" (valor is generic, as each iteration would have a different name)
valor_m = valor.mean() # Make the mean for the "b" variable of this iteration
print(valor_m) # Just to check
dicvar[b].append(valor_m) # In theory, append to key "b" (ie: "TMax") the value "valor_m" from each iteration
</code></pre>
<p>Well, the code runs and indeed 85 values are stored. However, when outputting, this is the result:</p>
<p>x | Cityname | Tmax</p>
<p>0 | Coruña | <strong>TMax 19.78 dtype: float64</strong></p>
<p>Where it should be</p>
<p>x | Cityname | Tmax</p>
<p>0 | Coruña | 19.78</p>
<p>Any ideas on how to solve the problem? I've been trying to fix it for hours, but I don't see how.</p>
<p>Thanks!!</p> | <p>As you see in the documentation here:</p>
<p><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.mean.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.mean.html</a></p>
<p>the <code>.mean()</code> method returns a series or dataframe. To retrieve the values you can write:</p>
<pre><code>dicvar[b].append(valor_m.values[0])
</code></pre>
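<p>If <code>dicvar</code> starts out completely empty, you may also need to create the list for each key before appending; one small sketch using <code>setdefault</code>:</p>
<pre><code>dicvar.setdefault(b, []).append(valor_m.values[0])
</code></pre>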
<p>The <code>.values</code> property returns a numpy array of the dataframe values; the <code>[0]</code> index retrieves the first value.</p> | python|pandas | 0 |
1,299 | 68,603,426 | Fitting numerical data in Python | <p>I have more than 100,000 numbers to analyze in Python. Part of it is given in this sample: 84.49, 60.885, 33.6, 0, 6.4, 89.361, 0, 0, 5.6, 0, 39.828.</p>
<p>The sum of this sample is 320.164 and I want to scale so that the new figures add up to 500 and plot these values.</p>
<p>I previously divided my desired sum (500) by the old sum (320.164) and multiplied each value. The 0 values remain 0 since 0 cannot be "scaled". Is there a way to do this in Python? And will it be possible to plot the new histogram/distribution?
Can you give some examples?</p> | <p>You can do it this way:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
numbers = [84.49, 60.885, 33.6, 0, 6.4, 89.361, 0, 0, 5.6, 0, 39.828]
current_sum = np.sum(numbers)
desired_sum = 500
new_numbers = [desired_sum/current_sum * x for x in numbers]
plt.hist(new_numbers)
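plt.show()   # may be needed to actually display the histogram outside a notebook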
</code></pre> | python|pandas|numpy|seaborn|libraries | 0 |