Dataset columns: Unnamed: 0 (int64, 0 to 378k), id (int64, 49.9k to 73.8M), title (string, 15 to 150 chars), question (string, 37 to 64.2k chars), answer (string, 37 to 44.1k chars), tags (string, 5 to 106 chars), score (int64, -10 to 5.87k).
400
58,514,197
Difference between nn.MaxPool2d vs.nn.functional.max_pool2d?
<p>Whats the difference between: <code>nn.MaxPool2d(kernel_size, stride)</code> and <code>nn.functional.max_pool2d(t, kernel_size, stride)</code>?</p> <p>The first one I define in the module and the second in the forward function?</p> <p>Thanks</p>
<p>They are essentially the same. The difference is that <code>torch.nn.MaxPool2d</code> is an explicit <code>nn.Module</code> that calls through to <code>torch.nn.functional.max_pool2d()</code> in its own <code>forward()</code> method.</p> <p>You can look at the source for <code>torch.nn.MaxPool2d</code> here and see the call for yourself: <a href="https://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html#MaxPool2d" rel="noreferrer">https://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html#MaxPool2d</a></p> <p>Reproduced below:</p> <pre><code>def forward(self, input): return F.max_pool2d(input, self.kernel_size, self.stride, self.padding, self.dilation, self.ceil_mode, self.return_indices) </code></pre> <p>Why have two approaches for the same task? I suppose it's to suit the coding style of the many people who might use PyTorch. Some prefer a stateful approach while others prefer a more functional approach.</p> <p>For example, having <code>torch.nn.MaxPool2d</code> means that we could very easily drop it into a <code>nn.Sequential</code> block.</p> <pre><code>model = nn.Sequential( nn.Conv2d(1,3,3), nn.ReLU(), nn.MaxPool2d((2, 2)) ) </code></pre>
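<p>For contrast, here is a minimal sketch (not from the answer above) of the functional style, where the pooling is called directly inside <code>forward()</code> instead of being registered as a module:</p>
<pre><code>import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 3, 3)

    def forward(self, x):
        # stateless call: kernel_size/stride are passed at call time,
        # so nothing extra appears in the module's children or state_dict
        x = F.relu(self.conv(x))
        return F.max_pool2d(x, kernel_size=2, stride=2)

out = Net()(torch.randn(1, 1, 28, 28))   # out.shape == (1, 3, 13, 13)
</code></pre>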
python|module|pytorch|forward
7
401
58,260,612
python list comprehension with if condition looping
<p>Is it possible to use list comprehension for a dataframe if I want to change one column's value based on the condition of another column's value.</p> <p>The code I'm hoping to make work would be something like this:</p> <p><code>return ['lower_level' for x in usage_time_df['anomaly'] if [y &lt; lower_outlier for y in usage_time_df['device_years']]</code></p> <p>Thanks!</p>
<p>I don't think what you want to do can be done in a list comprehension, and if it can, it will definitely not be efficient.</p> <p>Assuming a dataframe <code>usage_time_df</code> with two columns, <code>anomaly</code> and <code>device_years</code>, if I understand correctly, you want to set the value in <code>anomaly</code> to <code>lower_level</code> when the value in <code>device_years</code> does not reach <code>lower_outlier</code> (which I guess is a float). The natural way to do that is:</p> <pre class="lang-py prettyprint-override"><code>usage_time_df.loc[usage_time_df['device_years'] &lt; lower_outlier, 'anomaly'] = 'lower_level' </code></pre>
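<p>A self-contained toy version of that assignment (the column names follow the question, the values are invented for illustration):</p>
<pre><code>import pandas as pd

usage_time_df = pd.DataFrame({'device_years': [0.5, 3.0, 1.2, 6.0],
                              'anomaly': ['none', 'none', 'none', 'none']})
lower_outlier = 1.0

usage_time_df.loc[usage_time_df['device_years'] &lt; lower_outlier, 'anomaly'] = 'lower_level'
print(usage_time_df)   # only row 0 (device_years 0.5) is flagged as 'lower_level'
</code></pre>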
python|pandas|python-2.7|list-comprehension
1
402
58,268,118
How to replace a column of a pandas dataframe with only words that exist in the dictionary or a text file?
<p>Hi I have a pandas dataframe and a text file that look a little like this:</p> <pre><code>df: +----------------------------------+ | Description | +----------------------------------+ | hello this is a great test $5435 | | this is an432 entry | | ... | | entry number 43535 | +----------------------------------+ txt: word1 word2 word3 ... wordn </code></pre> <p>The descriptions are not important. </p> <p>I want to go through each row in the df split by ' ' and for each word if the word is in text then keep it otherwise delete it.</p> <p>Example:</p> <p>Suppose my text file looks like this</p> <pre><code>hello this is a test </code></pre> <p>and a description looks like this</p> <pre><code>"hello this is a great test $5435" </code></pre> <p>then the output would be <code>hello this is a test</code> because <code>great</code> and <code>$5435</code> are not in text.</p> <p>I can write something like this:</p> <pre><code>def clean_string(rows): for row in rows: string = row.split() cleansed_string = [] for word in string: if word in text: cleansed_string.append(word) row = ' '.join(cleansed_string) </code></pre> <p>But is there a better way to achieve this?</p>
<p>Use:</p> <pre><code>with open('file.txt', encoding="utf8") as f: L = f.read().split('\n') print (L) ['hello', 'this', 'is', 'a', 'test'] f = lambda x: ' '.join(y for y in x.split() if y in set(L)) df['Description'] = df['Description'].apply(f) </code></pre> <p>To improve performance:</p> <pre><code>s = set(L) df['Description'] = [' '.join(y for y in x.split() if y in s) for x in df['Description']] print (df) Description 0 hello this is a test 1 this is 2 </code></pre>
python|python-3.x|pandas
1
403
58,526,171
How to use aggregate with condition in pandas?
<p>I have a dataframe. Following code works </p> <pre><code>stat = working_data.groupby(by=['url', 'bucket_id'], as_index=False).agg({'delta': 'max','id': 'count'}) </code></pre> <p>Now i need to count ids with different statuses. I have "DOWNLOADED", "NOT_DOWNLOADED" and "DOWNLOADING" for the status.</p> <p>I would like to have <code>df</code> with columns <code>bucket_id</code>, <code>max</code>, <code>downloaded</code> (how many have "DOWNLOADED" status) , <code>not_downloaded</code> (how many have "NOT_DOWNLOADED" status) , <code>downloading</code> (how many have "DOWNLOADING" status). How to make it?</p> <p>Input I have: <img src="https://i.stack.imgur.com/eLzeE.png" alt="enter image description here">.</p> <p>Output i have: <img src="https://i.stack.imgur.com/2AUv1.png" alt="enter image description here"></p> <p>As you can see count isn't devided by status. But i want to know that there are x downloaded, y not_downloaded, z downloading for each bucket_id bucket_id (so they should be in separate columns, but info for one bucket_id should be in one row) </p>
<p>One way is to use <code>assign</code> to create the new boolean columns, then aggregate them.</p> <pre><code>working_data.assign(downloaded=working_data['status'] == 'DOWNLOADED', not_downloaded=working_data['status'] == 'NOT_DOWNLOADED', downloading=working_data['status'] == 'DOWNLOADING')\ .groupby(by=['url', 'bucket_id'], as_index=False).agg({'delta': 'max', 'id': 'count', 'downloaded': 'sum', 'not_downloaded':'sum', 'downloading':'sum'}) </code></pre>
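<p>A small self-contained sketch of the same idea (the data is invented for illustration); summing the boolean columns gives the per-status counts:</p>
<pre><code>import pandas as pd

working_data = pd.DataFrame({
    'url': ['u1', 'u1', 'u1', 'u2'],
    'bucket_id': [1, 1, 1, 2],
    'delta': [5, 9, 7, 3],
    'id': [10, 11, 12, 13],
    'status': ['DOWNLOADED', 'NOT_DOWNLOADED', 'DOWNLOADED', 'DOWNLOADING'],
})

stat = (working_data
        .assign(downloaded=working_data['status'] == 'DOWNLOADED',
                not_downloaded=working_data['status'] == 'NOT_DOWNLOADED',
                downloading=working_data['status'] == 'DOWNLOADING')
        .groupby(['url', 'bucket_id'], as_index=False)
        .agg({'delta': 'max', 'id': 'count',
              'downloaded': 'sum', 'not_downloaded': 'sum', 'downloading': 'sum'}))
# u1/1 -> delta 9, count 3, downloaded 2, not_downloaded 1, downloading 0
# u2/2 -> delta 3, count 1, downloaded 0, not_downloaded 0, downloading 1
</code></pre>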
pandas
0
404
69,194,321
Merge two dataframe based on condition
<p>I'm trying to merge two dataframes conditionally.</p> <p>In <code>df1</code>, it has <code>duration</code>. In <code>df2</code>, it has <code>usageTime</code>. On <code>df3</code>, I want to set <code>totalTime</code> as <code>df1</code>'s <code>duration</code> value if <code>df2</code> has no <code>usageTime</code> value.</p> <p>Here is df1:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt; df1 duration device 1110100 53.8 1110101 64.7 1110102 52.6 1110103 14.4 </code></pre> <p>And df2:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt; df2 usageTime deviceId 1110100 87.6 1110101 94.3 1110102 None 1110103 None </code></pre> <p>The next dataframe I want to create is:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt; df3 totalUsage device 1110100 87.6 1110101 94.3 1110102 52.6 1110103 14.4 </code></pre> <p>Things I tried:</p> <ol> <li><p><code>pandas.DataFrame.combine_first()</code></p> <pre class="lang-py prettyprint-override"><code>df3 = df2.combine_first(df1.rename(columns={'duration': 'totalUsage'})) </code></pre> <p>Returns:</p> <pre class="lang-py prettyprint-override"><code> totalUsage usageTime device 1110100 53.8 87.6 1110101 64.7 94.3 1110102 52.6 None 1110103 14.3 None </code></pre> </li> <li><p><code>pandas.DataFrame.fillna()</code></p> <pre class="lang-py prettyprint-override"><code>df3 = df2.fillna(df1) df3.columns = ['totalUsage'] </code></pre> <p>Returns:</p> <pre class="lang-py prettyprint-override"><code> totalUsage device 1110100 87.6 1110101 94.3 1110102 NaN 1110103 NaN </code></pre> </li> </ol> <p>I am open to all ideas.</p>
<p>Specify the column names when using <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.fillna.html" rel="nofollow noreferrer"><code>fillna</code></a> and then convert the result <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.to_frame.html" rel="nofollow noreferrer"><code>to_frame</code></a>:</p> <pre class="lang-py prettyprint-override"><code>df3 = df2.usageTime.fillna(df1.duration).to_frame(name='totalUsage') # totalUsage # deviceId # 1110100 87.6 # 1110101 94.3 # 1110102 52.6 # 1110103 14.4 </code></pre>
python|pandas|dataframe
2
405
44,595,338
How to parallelize RNN function in Pytorch with DataParallel
<p>Here's an RNN model to run character based language generation:</p> <pre><code>class RNN(nn.Module): def __init__(self, input_size, hidden_size, output_size, n_layers): super(RNN, self).__init__() self.input_size = input_size self.hidden_size = hidden_size self.output_size = output_size self.n_layers = n_layers self.encoder = nn.Embedding(input_size, hidden_size) self.GRU = nn.GRU(hidden_size, hidden_size, n_layers, batch_first=True) self.decoder = nn.Linear(hidden_size, output_size) def forward(self, input, batch_size): self.init_hidden(batch_size) input = self.encoder(input) output, self.hidden = self.GRU(input, self.hidden) output = self.decoder(output.view(batch_size, self.hidden_size)) return output def init_hidden(self, batch_size): self.hidden = Variable(torch.randn(self.n_layers, batch_size, self.hidden_size).cuda()) </code></pre> <p>I instantiate the model using DataParallel, to split the batch of inputs across my 4 GPUs:</p> <pre><code>net = torch.nn.DataParallel(RNN(n_chars, hidden_size, n_chars, n_layers)).cuda() </code></pre> <p>Here's the <a href="https://gist.github.com/michaelklachko/26565dfac747d628961901d3dd84bd71" rel="nofollow noreferrer">full code</a>.</p> <p>Unfortunately, DataParallel requires the inputs to have batch_size as the first dimension, but GRU function expects hidden tensor to have batch_size as second dimension:</p> <pre><code>output, self.hidden = self.GRU(input, self.hidden) </code></pre> <p>The code as is throws the following error (note the printouts showing that encoder is correctly executed on 4 GPUs):</p> <pre><code>... forward function: encoding input of shape: (16L, 1L) forward function: encoding input of shape: (16L, 1L) forward function: encoding input of shape: (16L, forward function: encoding input of shape: forward function: GRU processing input of shape: 1L) ( (16L, 16L1L, 1L), 100L) forward function: GRU processing input of shape: (16L, 1L, forward function: GRU processing input of shape:100L) (16L forward function: GRU processing input of shape:, 1L, 100L) ( 16L, 1L, 100L) Traceback (most recent call last): File "gru2.py", line 166, in &lt;module&gt; output = net(c, batch_size) File "/root/miniconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 206, in __call__ result = self.forward(*input, **kwargs) File "/root/miniconda2/lib/python2.7/site-packages/torch/nn/parallel/data_parallel.py", line 61, in forward outputs = self.parallel_apply(replicas, inputs, kwargs) File "/root/miniconda2/lib/python2.7/site-packages/torch/nn/parallel/data_parallel.py", line 71, in parallel_apply return parallel_apply(replicas, inputs, kwargs) File "/root/miniconda2/lib/python2.7/site-packages/torch/nn/parallel/parallel_apply.py", line 45, in parallel_apply raise output RuntimeError: Expected hidden size (2, 16L, 100), got (2L, 64L, 100L) </code></pre> <p>Here the model has 2 layers, batch_size=64, and hidden_size = 100. </p> <p>How do I parallelize the GRU operation in the forward function? </p>
<p>You can simply set the parameter dim=1, e.g.</p> <pre><code>net = torch.nn.DataParallel(RNN(n_chars, hidden_size, n_chars, n_layers), dim=1).cuda() </code></pre>
torch|recurrent-neural-network|pytorch
4
406
61,070,707
Input contains infinity of value too large for dtype "float64"
<p>So I am pretty new to python in general and I am trying to follow a tutorial to normalize and scale all of my data; however, I keep getting an error. I am using Scikit-learn with pandas. I've searched around and have tried just about everything I can think of, but I am still getting this error.</p> <p>I keep receiving this error, which traces back to preprocessing.scale: </p> <pre><code>ValueError: Input contains infinity or a value too large for dtype('float64'). </code></pre> <p>The column that's kicking back the error has a min of <code>-10.3800048828125</code> and a max of <code>10.209991455078123</code>. All data types are <code>float64</code> or <code>int64</code> (not in this column though). I've tried multiple methods of getting rid of the infinities and NaNs but none of them seem to be working. If anyone has any advice it would be greatly appreciated!</p> <p>The code that is getting the issue is here:</p> <pre><code>def preprocess_df(df): df = df.drop('future', 1) df.replace([np.inf, -np.inf], np.nan) df.fillna(method='bfill', inplace=True) df.dropna(inplace=True) for col in df.columns: print("Trying Column: " + col) if col != "target": df[col] = df[col].pct_change() df.dropna(inplace=True) df[col] = preprocessing.scale(df[col].values) df.dropna(inplace=True) sequential_data = [] prev_days = deque(maxlen=SEQ_LEN) for i in df.values: prev_days.append([n for n in i[:-1]]) #appends every column to the prev days list, except for target (we don't want that to be known) if len(prev_days) == SEQ_LEN: sequential_data.append([np.array(prev_days), i[:-1]]) random.shuffle(sequential_data) </code></pre>
<p>Here is your problem: <code>df.replace([np.inf, -np.inf], np.nan)</code>.</p> <p>Change the code to <code>df = df.replace([np.inf, -np.inf], np.nan)</code>.</p>
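<p>A minimal sketch of why the reassignment matters: <code>DataFrame.replace</code> returns a new frame by default and leaves the original untouched unless you keep the result (or pass <code>inplace=True</code>):</p>
<pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [1.0, np.inf, -np.inf]})

df.replace([np.inf, -np.inf], np.nan)        # result is discarded
print(df['a'].tolist())                      # [1.0, inf, -inf]

df = df.replace([np.inf, -np.inf], np.nan)   # keep the result
print(df['a'].tolist())                      # [1.0, nan, nan]
</code></pre>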
python|python-3.x|pandas|scikit-learn
1
407
60,806,747
Initialize variable as np array
<p>How do I transform a list into an numpy array? Is there a function that allows you to work on a list as if it were an array?</p> <pre><code>import numpy as np container = [0,1,2,3,4] container[container &lt; 2] = 0 </code></pre> <p>Returns:</p> <pre><code>'&lt;' not supported between instances of 'list' and 'int' </code></pre>
<p>I am not sure if I completely understand what you are looking for, but is it maybe <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.asarray.html" rel="nofollow noreferrer">numpy.asarray</a>:</p> <blockquote> <p><strong>numpy.asarray(a, dtype=None, order=None)</strong></p> <p>Convert the input to an array.</p> </blockquote> <p>you can then work with your former list as a numpy array.</p>
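<p>Applied to the snippet from the question, a minimal sketch:</p>
<pre><code>import numpy as np

container = np.asarray([0, 1, 2, 3, 4])   # the list becomes an ndarray
container[container &lt; 2] = 0              # boolean indexing now works
print(container)                          # [0 0 2 3 4]
</code></pre>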
python|arrays|list|numpy
1
408
60,923,982
The model.predict(),model.predict_classes() and model.predict_on_batch() seems to produce no result
<p>I have created a model that makes use of deep learning to classify the input data using CNN. The classification is multi-class though, actually with 5 classes. On training the model seems to be fine, i.e. it doesn't overfit or underfit. Yet, on saving and loading the model I always get the same output regardless of the input image. The final prediction array contains the output as 0 for all the classes.</p> <p>So, I am not sure if the model doesn't predict anything or it always produces the same result.</p> <p>The model created by me after using tensorboard to find the best fit model is below.</p> <pre><code>import tensorflow as tf from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Dropout, Activation, Flatten, Conv2D, MaxPooling2D from tensorflow.keras.callbacks import TensorBoard import pickle import time X=pickle.load(open("X.pickle","rb")) y=pickle.load(open("y.pickle","rb")) X=X/255.0 dense_layers=[0] layer_sizes=[64] conv_layers=[3] for dense_layer in dense_layers: for layer_size in layer_sizes: for conv_layer in conv_layers: NAME="{}-conv-{}-nodes-{}-dense-{}".format(conv_layer,layer_size,dense_layer,int(time.time())) print(NAME) tensorboard=TensorBoard(log_dir='logs\{}'.format(NAME)) model = Sequential() model.add(Conv2D(layer_size, (3,3), input_shape=X.shape[1:])) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(2,2))) for l in range(conv_layer-1): model.add(Conv2D(layer_size, (3,3))) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(2,2))) model.add(Flatten()) for l in range(dense_layer): model.add(Dense(layer_size)) model.add(Activation('relu')) model.add(Dense(5)) model.add(Activation('sigmoid')) model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy']) model.fit(X,y,batch_size=32,epochs=10,validation_split=0.3,callbacks=[tensorboard]) model.save('0x64x3-CNN-latest.model') </code></pre> <p>The loading model snippet is as below,</p> <pre><code>import cv2 import tensorflow as tf CATEGORIES= ["fifty","hundred","ten","thousand","twenty"] def prepare(filepath): IMG_SIZE=100 img_array=cv2.imread(filepath) new_array=cv2.resize(img_array,(IMG_SIZE,IMG_SIZE)) return new_array.reshape(-1,IMG_SIZE,IMG_SIZE,3) model=tf.keras.models.load_model("0x64x3-CNN-latest.model") prediction=model.predict([prepare('30.jpg')]) print(prediction) </code></pre> <p>The output is always <code>[[0. 0. 0. 0. 0.]]</code>.</p> <p>On converting to categories, it always results in fifty.</p> <p>My dataset contains almost 2200 images with an average of 350-500 images for each class.</p> <p>Can someone help out with this..?</p>
<p>I see that when you train, you normalize your images:</p> <pre><code>X = X/255.0 </code></pre> <p>but when you test, i.e., in prediction time, you just read your image and resize but not normalize. Try:</p> <pre><code>def prepare(filepath): IMG_SIZE=100 img_array=cv2.imread(filepath) img_array = img_array/255.0 new_array=cv2.resize(img_array,(IMG_SIZE,IMG_SIZE)) return new_array.reshape(-1,IMG_SIZE,IMG_SIZE,3) </code></pre> <p>and also, your <code>prepare</code> function returns your image in 4 dimensions (including the batch dimension), so when you call <code>predict</code>, you do not have to give the input as a list. Instead of:</p> <pre><code>prediction=model.predict([prepare('30.jpg')]) </code></pre> <p>you should do:</p> <pre><code>prediction=model.predict(prepare('30.jpg')) </code></pre> <p>Hope it helps.</p>
python|tensorflow|keras|deep-learning|conv-neural-network
1
409
71,664,469
Python Pandas ValueError: cannot reindex from a duplicate axis
<p>This might be really simple but I'm trying to get a subset of values from a dataframe by selecting values where a column meets a specific value. So this:</p> <pre><code>test_df[test_df.ProductID == 18] </code></pre> <p>But I'm getting this error: *** ValueError: cannot reindex from a duplicate axis</p> <p>Which is weird cause when I run this:</p> <pre><code>test_df.index.is_unique </code></pre> <p>I get true since my indices are unique. So what am I doing wrong?</p> <p>I was able to get the exact same thing to work with another column:</p> <pre><code>test_df[test_df.AttributeID == 1111] </code></pre> <p>and that works exactly as expected.</p> <p>Solved. I didn't check my dataframe close enough, there were duplicate columns called ProductID and that was what was causing the error.</p>
<p>If something is wrong with your index, you might reset the index with:</p> <pre><code>test_df.reset_index(level=0, inplace=True) </code></pre>
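<p>Given the asker's own finding that a duplicated <code>ProductID</code> column caused the error, a quick check for duplicate column names (as opposed to a duplicate index) might look like this sketch:</p>
<pre><code>dup_cols = test_df.columns[test_df.columns.duplicated()]
print(dup_cols.tolist())   # e.g. ['ProductID'] if that column appears twice

# keep only the first occurrence of each duplicated column name
test_df = test_df.loc[:, ~test_df.columns.duplicated()]
</code></pre>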
python|pandas
0
410
70,010,933
Pandas check if cell is null in any of two dataframes and if it is, make both cells nulls
<p>I have two dataframes with the same shape:</p> <pre><code>&gt;&gt;&gt; df1.shape (400,1200) &gt;&gt;&gt; df2.shape (400,1200) </code></pre> <p>I would like to compare cell-by-cell and if a value is missing in one of the dataframes make the equivalent value in the other dataframe NaN as well.</p> <p>Here's a (pretty inefficient) piece of code that works:</p> <pre><code>for i in df.columns: # iterate over columns for j in range(len(df1)): # iterate over rows if pd.isna(df1[i][j]) | pd.isna(df2[i][j]): df1[i][j] = np.NaN df2[i][j] = np.NaN </code></pre> <p>What would be a better way to do this? I'm very sure there is.</p>
<p>This is a simple problem to solve with pandas. You can use this code:</p> <pre class="lang-py prettyprint-override"><code>df1[df2.isna()] = df2[df1.isna()] = np.nan </code></pre> <p>It first creates <em>mask</em> of <code>df2</code>, i.e., a copy of dataframe containing <strong>only</strong> <code>True</code> or <code>False</code> values. Each NaN in <code>df2</code> will have a <code>True</code> in the mask, and every other value will have a <code>False</code> in the mask.</p> <p>With pandas, you can use such masks to do bulk operations. So you can pass that mask to the <code>[]</code> of <code>df1</code>, and then assign it a value, and where each value in the mask is <code>True</code>, the corresponding value in <code>df1</code> will be assigned the value.</p>
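<p>A tiny worked example of the masking (the data is invented; both frames must share the same shape and labels so the masks align):</p>
<pre><code>import numpy as np
import pandas as pd

df1 = pd.DataFrame({'a': [1.0, np.nan], 'b': [3.0, 4.0]})
df2 = pd.DataFrame({'a': [5.0, 6.0], 'b': [np.nan, 8.0]})

df1[df2.isna()] = df2[df1.isna()] = np.nan

print(df1.isna().equals(df2.isna()))   # True: NaNs now sit in the same cells of both frames
</code></pre>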
python|pandas
0
411
69,940,105
In the Jupyter Notebook, using geopandas, the x- y-axis is not showing
<pre><code>import geopandas map_df = geopandas.read_file(geopandas.datasets.get_path('naturalearth_lowres')) map_df.plot() </code></pre> <p>Error Traceback and Output:</p> <p><a href="https://i.stack.imgur.com/XzrVC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XzrVC.png" alt="enter image description here" /></a></p>
<p>The map has a transparent background where the axes are. Since the axes are black by default and your background is also black, you can't see them. But they are there :).</p> <p>You can specify the background using matplotlib.</p> <pre class="lang-py prettyprint-override"><code>import geopandas import matplotlib.pyplot as plt map_df = geopandas.read_file(geopandas.datasets.get_path('naturalearth_lowres')) fig, ax = plt.subplots() ax = map_df.plot(ax=ax) fig.patch.set_facecolor('pink') </code></pre> <p><a href="https://i.stack.imgur.com/AxPeF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AxPeF.png" alt="result" /></a></p>
python|jupyter-notebook|geopandas
1
412
72,388,739
Python Pandas Match 2 columns from 2 files to fill values in a file
<p>Please help filling values in &quot;file1.csv&quot; (daily data) from &quot;file2.csv&quot; (weekly data).</p> <p>Below is &quot;file1.csv&quot;(daily data)::</p> <pre><code>date,value,week_num,year,fill_col_1,fill_col_2 01-01-2018,1763.95,1,2018,, 02-01-2018,1736.2,1,2018,, 03-01-2018,1741.1,1,2018,, 04-01-2018,1779.95,1,2018,, 05-01-2018,1801.1,1,2018,, 08-01-2018,1816,2,2018,, 09-01-2018,1823,2,2018,, 10-01-2018,1812.05,2,2018,, 11-01-2018,1825,2,2018,, 12-01-2018,1805,2,2018,, </code></pre> <p>And below is &quot;file2.csv&quot;(weekly data)::</p> <pre><code>date,value,week_num,year,fill_col_1,fill_col_2 07-01-2018,1764.46,1,2018,1768.953333,1756.542153 14-01-2018,1816.21,2,2018,1811.966667,1801.030007 </code></pre> <p>The 2 columns to be filled in &quot;file1.csv&quot; is <strong>&quot;fill_col_1&quot;</strong> and <strong>&quot;fill_col_2&quot;</strong>, by matching <strong>&quot;week_num&quot;</strong> and <strong>&quot;year&quot;</strong> from &quot;file2.csv&quot;.</p> <p>Is there some way where 2 columns from 2 files can be compared, with or without considering indexes(&quot;dates&quot;) which are obviously different in both files.</p> <p>(If you see, the &quot;file2.csv&quot;(weekly) was derived using the 'resample' function in pandas, based on (daily values from) &quot;file1.csv&quot;.. But now I am unable to join/concatentate the 2 files based on match of multiple columns/conditions.)</p> <p>The expected output/result, needs to be as below:</p> <pre><code>date,value,match_col_1,match_col_2,fill_col_1,fill_col_2 01-01-2018,1763.95,1,2018,1768.953333,1756.542153 02-01-2018,1736.2,1,2018,1768.953333,1756.542153 03-01-2018,1741.1,1,2018,1768.953333,1756.542153 04-01-2018,1779.95,1,2018,1768.953333,1756.542153 05-01-2018,1801.1,1,2018,1768.953333,1756.542153 08-01-2018,1816,2,2018,1811.966667,1801.030007 09-01-2018,1823,2,2018,1811.966667,1801.030007 10-01-2018,1812.05,2,2018,1811.966667,1801.030007 11-01-2018,1825,2,2018,1811.966667,1801.030007 12-01-2018,1805,2,2018,1811.966667,1801.030007 </code></pre> <p>The code I have tried is as below: (which is absolutely rookie thought process, as I see now)</p> <pre><code>df_daily = pd.read_csv(&quot;file1.csv&quot;) df_weekly = pd.read_csv(&quot;file2.csv&quot;) df_weekly.loc[df_weekly[&quot;year&quot;]]==df_daily.loc[df_daily[&quot;year&quot;]] &amp; df_weekly.loc[df_weekly[&quot;week_num&quot;]]==df_daily.loc[df_daily[&quot;week_num&quot;]] </code></pre> <p>which gives an error: <em>KeyError: &quot;None of [Int64Index([2018, 2018, 2018, 2018, 2018, 2018, 2018, ],\n dtype='int64', length=8)] are in the [index]&quot;</em></p>
<p>Try the <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.merge.html" rel="nofollow noreferrer">pandas.DataFrame.merge</a> func:</p> <pre><code>import pandas as pd df_daily = pd.read_csv('1.csv').iloc[:,:4] df_weekly = pd.read_csv('2.csv').iloc[:,2:] print(df_daily.merge(df_weekly,on=['week_num','year'],how='left')) </code></pre>
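<p>Assuming the two files have been read into <code>df_daily</code> and <code>df_weekly</code> as in the question, a minimal sketch that drops the empty fill columns from the daily frame and pulls the filled ones in from the weekly frame:</p>
<pre><code>cols = ['week_num', 'year', 'fill_col_1', 'fill_col_2']
df3 = (df_daily
       .drop(columns=['fill_col_1', 'fill_col_2'])
       .merge(df_weekly[cols], on=['week_num', 'year'], how='left'))
</code></pre>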
pandas|comparison|multiple-columns|python-3.9
1
413
72,224,804
how to get smallest index in dataframe after using groupby
<p>If create_date field does not correspond to period between from_date and to_date, I want to extract only the large index records using group by 'indicator' and record correspond to period between from_date to end_date.</p> <pre><code>from_date = '2022-01-01' to_date = '2022-04-10' indicator create_date 0 A 2022-01-03 1 B 2021-12-30 2 B 2021-07-11 3 C 2021-02-10 4 C 2021-09-08 5 C 2021-07-24 6 C 2021-01-30 </code></pre> <p>Here is the result I want:</p> <pre><code> indicator create_date 0 A 2022-01-03 2 B 2021-07-11 6 C 2021-01-30 </code></pre> <p>I've been looking for a solution for a long time, but I only found a way &quot;How to get the index of smallest value&quot;, and I can't find a way to compare the index number.</p>
<p>You can try</p> <pre class="lang-py prettyprint-override"><code>df['create_date'] = pd.to_datetime(df['create_date']) m = df['create_date'].between(from_date, to_date) df_ = df[~m].groupby('indicator', as_index=False).apply(lambda g: g.loc[[max(g.index)]]).droplevel(level=0) out = pd.concat([df[m], df_], axis=0).sort_index() </code></pre> <pre><code>print(out) indicator create_date 0 A 2022-01-03 2 B 2021-07-11 6 C 2021-01-30 </code></pre>
python|pandas|dataframe
3
414
72,317,821
Read multiple csv files into a single dataframe and rename columns based on file of origin - Pandas
<p>I have around 100 csv files with each one containing the same three columns. There are several ways to read the files into a single dataframe, but is there a way that I could append the file name to the column names in order to keep track of the origin of the columns?</p> <p>I have now tried to import the files using the following code:</p> <pre><code>import glob import os import pandas as pd df = pd.concat(map(pd.read_csv, glob.glob(os.path.join('', &quot;my_files*.csv&quot;)))) </code></pre> <p>For example, if the inital files are:</p> <p>&quot;A_reduced.csv&quot; and &quot;B_increased.csv&quot; and each file contains three columns (Time, X, Y)</p> <p>The expected output would be:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Time</th> <th>X_A_reduced</th> <th>X_B_increased</th> <th>Y_A_reduced</th> <th>Y_B_increased</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>34</td> <td></td> <td></td> <td></td> </tr> <tr> <td>2</td> <td>42</td> <td></td> <td></td> <td></td> </tr> </tbody> </table> </div>
<p>You could add a <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.add_prefix.html" rel="nofollow noreferrer">prefix</a> (or <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.add_suffix.html" rel="nofollow noreferrer">suffix</a>) to the column names prior to concatenating the dataframes, e.g.:</p> <pre><code>def f(i): return pd.read_csv(i).add_prefix(i.split('_')[0] + '_') df = pd.concat(map(f, glob.glob(os.path.join('', &quot;my_files*.csv&quot;)))) </code></pre>
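<p>If the full file stem should appear in the column names (the <code>X_A_reduced</code> style from the expected output), a hedged variant suffixes each frame with its base file name and concatenates along the columns so the files sit side by side:</p>
<pre><code>import glob
import os
import pandas as pd

def read_with_suffix(path):
    stem = os.path.splitext(os.path.basename(path))[0]   # e.g. 'A_reduced'
    return pd.read_csv(path).add_suffix('_' + stem)

# axis=1 places the files next to each other; note that the Time column is
# suffixed and repeated per file, so it may still need consolidating afterwards
df = pd.concat((read_with_suffix(p) for p in glob.glob('my_files*.csv')), axis=1)
</code></pre>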
python|pandas|dataframe
1
415
72,234,738
Refactoring an algorithm to avoid inefficient slicing of a large Numpy array
<p>I have a working algorithm to analyse experimental datasets. This algorithm is made of two main functions. The first one takes a large array as one of its inputs and returns an intermediate 3D, complex-valued array that often does not fit in memory. For this reason, I use Numpy’s <code>memmap</code> to save this array on the disk. With larger datasets, the second function starts to take a lot more time and it appears to be linked to memory access. In some instances, if a computation lasts 20 minutes, then increasing <code>n2</code> by 50% results in a computation that takes almost 24 hours.</p> <p>After stripping about 99% of the whole program, a minimal working example looks like this:</p> <pre><code>import numpy as np n1, n2 = 10, 100 Nb = 12 Nf = int(n2 / Nb) X = np.random.rand(n1, n2) # Function 1: Compute X2 from X1 and save on disk X2 = np.memmap(&quot;X2.dat&quot;, shape=(Nb, n1, Nf), dtype=np.complex128, mode='w+') for n in range(Nb): Xn = X[:, n*Nf:(n+1)*Nf] X2[n,:,:] = np.fft.fft(Xn, axis=1) X2.flush() del X2 # Function 2: Do something with slices of X2 X2 = np.memmap(&quot;X2.dat&quot;, shape=(Nb, n1, Nf), dtype=np.complex128, mode='r') for k in range(Nf): Y = np.pi * X2[:,:,k] # &lt;- Problematic step # do things with Y... </code></pre> <p>The values of <code>n1</code>, <code>n2</code>, <code>Nb</code> and <code>Nf</code> and typically much larger.</p> <p>As you can see, in function 1, <code>X2</code> is populated using its first index, which according to <a href="https://stackoverflow.com/q/44115571">this post</a> is the most efficient way to do it in terms of speed. My problem occurs in function 2, where I need to work on <code>X2</code> by slicing it on its third dimension. This is required by the nature of the algorithm.</p> <p>I would like to find a way to refactor these functions to reduce the execution time. I already improved some parts of the algorithm. For example, since I know <code>X</code> contains only real values, I can cut the size of <code>X2</code> by 2 if I neglect its conjugate values. However, the slicing of <code>X2</code> remains problematic. (Actually, I just learned about <code>np.fft.rfft</code> while writing this, which will definitly help me).</p> <p>Do you see a way I could refactor function 2 (and/or function 1) so I can access <code>X2</code> more efficiently?</p> <p><strong>Update</strong></p> <p>I tested <a href="https://stackoverflow.com/a/72235717/6080325">yut23's solution</a> on one of my largest dataset and it turns out that the first optimization, moving <code>Nf</code> to axis 1, is slightly faster overall than moving it to axis 0.</p> <p>Below are three charts showing the profiling results for the total execution time, time spent in function 1 (excluding <code>X2.flush()</code>) and time spent in function 2, respectively. On the x axis, <code>Nr</code> is a value proportional to <code>n2</code> and <code>Nb</code>. I tested my initial code with both optimizations and the modified code using Numpy's <code>rfft()</code>, also with both optimizations.</p> <p>With my initial code, opt. 1 is the better option with a total time reduction of more than one order of magnitude for <code>Nr=12</code>. Using <code>rfft()</code> almost gives another order of magnitude in time reduction, but in this case, both optimizations are equivalent (everything fits in the available RAM so time reduction from swapping the array axes is minimal). 
However, this will make it possible to work on larger datasets more efficiently!</p> <p><a href="https://i.stack.imgur.com/qj5ixm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qj5ixm.png" alt="Total execution time" /></a> <a href="https://i.stack.imgur.com/rlYqxm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rlYqxm.png" alt="Execution time of function 1" /></a> <a href="https://i.stack.imgur.com/6Wdz7m.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6Wdz7m.png" alt="Execution time of function 2" /></a></p>
<p>An easy optimization is to swap the last two axes, which shouldn't change the speed of function 1 (assuming the out-of-order memory accesses from the transpose are negligible compared to disk access) but should make function 2 run faster, for the same reasons discussed in the question you linked:</p> <pre class="lang-python prettyprint-override"><code># Function 1: Compute X2 from X1 and save on disk X2 = np.memmap(&quot;X2.dat&quot;, shape=(Nb, Nf, n1), dtype=np.complex128, mode='w+') for n in range(Nb): Xn = X[:, n*Nf:(n+1)*Nf] X2[n,:,:] = np.fft.fft(Xn, axis=1).T # swap axes so Nf changes slower X2.flush() del X2 # Function 2: Do something with slices of X2 X2 = np.memmap(&quot;X2.dat&quot;, shape=(Nb, Nf, n1), dtype=np.complex128, mode='r') for k in range(Nf): Y = np.pi * X2[:,k,:] # do things with Y... </code></pre> <p>You can make function 2 even faster at the cost of a slowdown in function 1 by moving <code>Nf</code> to axis 0:</p> <pre class="lang-python prettyprint-override"><code># Function 1: Compute X2 from X1 and save on disk X2 = np.memmap(&quot;X2.dat&quot;, shape=(Nf, Nb, n1), dtype=np.complex128, mode='w+') for n in range(Nb): Xn = X[:, n*Nf:(n+1)*Nf] X2[:,n,:] = np.fft.fft(Xn, axis=1).T # swap axes so Nf changes slower X2.flush() del X2 # Function 2: Do something with slices of X2 X2 = np.memmap(&quot;X2.dat&quot;, shape=(Nf, Nb, n1), dtype=np.complex128, mode='r') for k in range(Nf): Y = np.pi * X2[k,:,:] # do things with Y... </code></pre> <p>It might make sense to use this version if X2 is read more times than it's written. Also, the slowdown of function 1 should get smaller as <code>n1</code> gets bigger, since the contiguous chunks are larger.</p> <p>With the data files stored on a hard drive and <code>n1, n2, Nb = 1000, 10000, 120</code>, my timings are</p> <pre class="lang-none prettyprint-override"><code>function 1, original: 1.53 s ± 41.9 ms function 1, 1st optimization: 1.53 s ± 27.8 ms function 1, 2nd optimization: 1.57 s ± 34.9 ms function 2, original: 111 ms ± 1.2 ms function 2, 1st optimization: 45.5 ms ± 197 µs function 2, 2nd optimization: 27.8 ms ± 29.7 µs </code></pre>
python|numpy|performance|optimization|memory
1
416
50,494,789
What are the inputs to GradientDescentOptimizer?
<p>I'm trying to create a minimal code snippet to understand the <code>GradientDescentOptimizer</code> class to help me to understand the tensorflow API docs in more depth. </p> <p>I would like to provide some hardcoded inputs to the <code>GradientDescentOptimizer</code>, run the <code>minimize()</code> method and inspect the output. So far, I have created the following:</p> <pre class="lang-py prettyprint-override"><code>loss_data = tf.Variable([2.0], dtype=tf.float32) train_data = tf.Variable([20.0], dtype=tf.float32, name='train') optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.02) gradients = optimizer.compute_gradients(loss_data, var_list=[train_data]) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) print(sess.run(gradients)) </code></pre> <p>The error I get is:</p> <pre><code>TypeError: Fetch argument None has invalid type &lt;class 'NoneType'&gt; </code></pre> <p>However, I'm just guessing what the inputs could look like. Any pointers appreciated to help me understand what I should be passing to this function.</p> <hr> <p>Some more context ...</p> <p>I followed a similar process to understand activation functions by isolating them and treating them as a black-box where I send a range of inputs and inspect the corresponding outputs.</p> <pre class="lang-py prettyprint-override"><code># I could have used a list of values but I wanted to experiment # by passing in one parameter value at a time. placeholder = tf.placeholder(dtype=tf.float32, shape=[1], name='placeholder') activated = tf.nn.sigmoid(placeholder) with tf.Session() as sess: x_y = {} for x in range(-10, 10): x_y[x] = sess.run(activated, feed_dict={ placeholder: [x/1.0]}) import matplotlib.pyplot as plt %matplotlib inline x, y = zip(*x_y.items()) plt.plot(x, y) </code></pre> <p><a href="https://i.stack.imgur.com/cNvud.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cNvud.png" alt="enter image description here"></a></p> <p>The above process was really useful for my understanding of activation functions and I was hoping to do something similar for optimizers.</p>
<p>Your loss shouldn't be a variable (it is not a parameter of your model) but the result of an operation, e.g.</p> <pre><code>loss_data = train_data**2 </code></pre> <p>Currently your loss does not depend on <code>train_data</code> which explains why no gradient can be computed.</p>
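<p>Folding that change back into the question's snippet, a minimal self-contained sketch (TF 1.x graph API, as used in the question):</p>
<pre><code>import tensorflow as tf

train_data = tf.Variable([20.0], dtype=tf.float32, name='train')
loss = tf.reduce_sum(train_data ** 2)   # the loss is now an op that depends on train_data

optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.02)
gradients = optimizer.compute_gradients(loss, var_list=[train_data])

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # d(x^2)/dx at x = 20 is 40, so the gradient/variable pair comes back as roughly ([40.], [20.])
    print(sess.run(gradients))
</code></pre>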
tensorflow
2
417
50,530,544
Tensorflow not displaying the right amount of free memory
<p>I've been trying to run a neural network of mine on my GPU but for some reason upon creating the device, Tensorflow won't see the full RAM memory and instead focuses on a 2GB free memory available... </p> <pre><code>Using TensorFlow backend. 2018-05-25 11:00:56.992852: I T:\src\github\tensorflow\tensorflow\core\platform\cpu_feature_guard.cc:140] Your CPU supports instructions that this Ten sorFlow binary was not compiled to use: AVX2 2018-05-25 11:00:57.307883: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1356] Found device 0 with properties: name: Quadro K620 major: 5 minor: 0 memoryClockRate(GHz): 1.124 pciBusID: 0000:02:00.0 totalMemory: 2.00GiB freeMemory: 1.77GiB 2018-05-25 11:00:57.307883: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1435] Adding visible gpu devices: 0 2018-05-25 11:00:59.637116: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:923] Device interconnect StreamExecutor with s trength 1 edge matrix: 2018-05-25 11:00:59.638116: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:929] 0 2018-05-25 11:00:59.638116: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:942] 0: N 2018-05-25 11:00:59.644117: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1053] Created TensorFlow device (/job:localhos t/replica:0/task:0/device:GPU:0 with 1331 MB memory) -&gt; physical GPU (device: 0, name: Quadro K620, pci bus id: 0000:02:00.0, compute capability: 5.0) </code></pre> <p>The GPU is a Quadro K620 on a Windows 7 with (according to task manager) 16GB of RAM. But upon looking in the Nvidia panel, it apparently has 10GB of available memory and only 2GB of dedicated video memory (I guess TF is using this part of the RAM instead of the rest). It is pretty annoying because I always end up running out of memory really quickly...</p> <p>Can someone shed light on what is going on and why is TF doing that?</p>
<p>There is nothing wrong with TF. Your graphics card has exactly 2GB of DDR3 RAM. I think you are confusing GPU RAM with your CPU RAM, which might indeed be 16 GB.</p>
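<p>If the 2 GB of dedicated GPU memory is the binding constraint, a commonly used TF 1.x setting (a hedged suggestion; it does not add memory, it only changes how the existing 2 GB is claimed) is to let the session grow its allocation instead of grabbing everything up front:</p>
<pre><code>import tensorflow as tf

config = tf.ConfigProto()
config.gpu_options.allow_growth = True                       # allocate GPU memory as needed
# config.gpu_options.per_process_gpu_memory_fraction = 0.8   # or cap the fraction instead

sess = tf.Session(config=config)
</code></pre>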
tensorflow|gpu|nvidia|ram
2
418
45,547,400
GAN changing input size causes error
<p>Below code takes only 32*32 input, I want to feed in 128*128 images, how to go about it. The code is from the tutorial - <a href="https://github.com/awjuliani/TF-Tutorials/blob/master/DCGAN.ipynb" rel="nofollow noreferrer">https://github.com/awjuliani/TF-Tutorials/blob/master/DCGAN.ipynb</a> </p> <p>def generator(z):</p> <pre><code>zP = slim.fully_connected(z,4*4*256,normalizer_fn=slim.batch_norm,\ activation_fn=tf.nn.relu,scope='g_project',weights_initializer=initializer) zCon = tf.reshape(zP,[-1,4,4,256]) gen1 = slim.convolution2d_transpose(\ zCon,num_outputs=64,kernel_size=[5,5],stride=[2,2],\ padding="SAME",normalizer_fn=slim.batch_norm,\ activation_fn=tf.nn.relu,scope='g_conv1', weights_initializer=initializer) gen2 = slim.convolution2d_transpose(\ gen1,num_outputs=32,kernel_size=[5,5],stride=[2,2],\ padding="SAME",normalizer_fn=slim.batch_norm,\ activation_fn=tf.nn.relu,scope='g_conv2', weights_initializer=initializer) gen3 = slim.convolution2d_transpose(\ gen2,num_outputs=16,kernel_size=[5,5],stride=[2,2],\ padding="SAME",normalizer_fn=slim.batch_norm,\ activation_fn=tf.nn.relu,scope='g_conv3', weights_initializer=initializer) g_out = slim.convolution2d_transpose(\ gen3,num_outputs=1,kernel_size=[32,32],padding="SAME",\ biases_initializer=None,activation_fn=tf.nn.tanh,\ scope='g_out', weights_initializer=initializer) return g_out </code></pre> <p>def discriminator(bottom, reuse=False):</p> <pre><code>dis1 = slim.convolution2d(bottom,16,[4,4],stride=[2,2],padding="SAME",\ biases_initializer=None,activation_fn=lrelu,\ reuse=reuse,scope='d_conv1',weights_initializer=initializer) dis2 = slim.convolution2d(dis1,32,[4,4],stride=[2,2],padding="SAME",\ normalizer_fn=slim.batch_norm,activation_fn=lrelu,\ reuse=reuse,scope='d_conv2', weights_initializer=initializer) dis3 = slim.convolution2d(dis2,64,[4,4],stride=[2,2],padding="SAME",\ normalizer_fn=slim.batch_norm,activation_fn=lrelu,\ reuse=reuse,scope='d_conv3',weights_initializer=initializer) d_out = slim.fully_connected(slim.flatten(dis3),1,activation_fn=tf.nn.sigmoid,\ reuse=reuse,scope='d_out', weights_initializer=initializer) return d_out </code></pre> <p>Below is the error which I get when I feed 128*128 images.</p> <pre><code> Trying to share variable d_out/weights, but specified shape (1024, 1) and found shape (16384, 1). </code></pre>
<p>The generator is generating 32*32 images, and thus when we feed any other dimension into the discriminator, it results in the given error. </p> <p>The solution is to make the generator produce 128*128 images, by 1. adding more layers (2 in this case), 2. changing the input to the generator: </p> <pre><code>zP = slim.fully_connected(z,16*16*256,normalizer_fn=slim.batch_norm,\ activation_fn=tf.nn.relu,scope='g_project',weights_initializer=initializer) zCon = tf.reshape(zP,[-1,16,16,256]) </code></pre>
machine-learning|tensorflow|computer-vision|deep-learning|dcgan
0
419
45,442,797
Cannot import Tensorflow with Eclipse on Ubuntu16.04
<p>The bug happens when I try to import Tensorflow on Eclipse. Tensorflow can be imported when I directly run the python code without using IDEs (I test it and it works perfectly). I've also tested my codes on PyCharm, it's fine with Pycharm....</p> <p>I've tested the LD_LIBRARY_PATH,PATH,CUDA_HOME variables with echo. I also tried to directly append the cuda libraries into the Ecplipse pydev interpreter setting. So it is really confusing me. I did face a similar question with another machine, but I solved it by modifying the ~/.bashrc file.</p> <p>I'm using Ubuntu16.04, python2.7,eclipse Neon3, GTX1080ti.</p> <p>Any ideas? Following is the bug information:</p> <pre><code>Traceback (most recent call last): File "/home/zernmern/workspace/test/p1/test.py", line 2, in &lt;module&gt; import tensorflow as tf File "/home/zernmern/.local/lib/python2.7/site-packages/tensorflow/__init__.py", line 24, in &lt;module&gt; from tensorflow.python import * File "/home/zernmern/.local/lib/python2.7/site-packages/tensorflow/python/__init__.py", line 49, in &lt;module&gt; from tensorflow.python import pywrap_tensorflow File "/home/zernmern/.local/lib/python2.7/site-packages/tensorflow/python/pywrap_tensorflow.py", line 52, in &lt;module&gt; raise ImportError(msg) ImportError: Traceback (most recent call last): File "/home/zernmern/.local/lib/python2.7/site-packages/tensorflow/python/pywrap_tensorflow.py", line 41, in &lt;module&gt; from tensorflow.python.pywrap_tensorflow_internal import * File "/home/zernmern/.local/lib/python2.7/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in &lt;module&gt; _pywrap_tensorflow_internal = swig_import_helper() File "/home/zernmern/.local/lib/python2.7/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description) ImportError: libcusolver.so.8.0: cannot open shared object file: No such file or directory Failed to load the native TensorFlow runtime. </code></pre> <p>Please let me know if more information is needed xD.</p>
<p>Finally, I find the solution from '<a href="https://stackoverflow.com/questions/33812902/pycharm-cannot-find-library">PyCharm cannot find library</a>' As the user 'Laizer' suggested:</p> <pre><code>The issue is that PyCharm(Here is Eclipse) was invoked from the desktop, and wasn't getting the right environment variables. Solution is to either: invoke from the command line(i.e. Directly start eclipse by sh), create a script to set environment and then invoke, and make a link to that script on the desktop, or set environment variables on the desktop item </code></pre>
eclipse|tensorflow|ubuntu-16.04|pydev
0
420
45,548,708
Transforming excel index to pandas index
<p>To give a bit of backstory, I created and excel sheet that transforms the excel column index to pandas index. Which in essense is just a simple Vlookup, on a defined table e.g Column A=0, Column B=1. It gets the job done, however it's not as efficient as I would like it to be. </p> <p>I use these index on my function to rename those fields to follow our current nomenclature. e.g</p> <pre><code>df = df.rename(columns={df.columns[5]: "Original Claim Type", df.columns [1]:"Date of Loss", df.columns[3]:"Date Reported (tpa)", df.columns[2]:"Employer Report Date", df.columns[4]:"Original Status", df.columns[6]:"Date Closed", df.columns[27]:"(net)Total Paid", df.columns[23]:"(net) Total Incurred", df.columns[25]:"NET Paid(Med)", df.columns[26]:"NET Paid(Exp)", df.columns[24]:"NET Paid (Ind)", df.columns[18]:"Original Litigation", df.columns[7]:"Date of Hire", df.columns[8]:"Date of Birth", df.columns[9]:"Benefit State", df.columns[15]:"Original Cause", df.columns[17]:"Body Part", df.columns[32]:"TTD Days"}) </code></pre> <p>My new solution was to create a Dictionairy that maps the values, and their corresponding index.</p> <p><a href="https://i.stack.imgur.com/OXXOH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OXXOH.png" alt="The image is a snippet of the excel file, so I list the column header where it can be found on the file, and the vlookup formula gives the corresponding python index (It takes into account that pandas index starts at 0, thus subtracting all values by 1)"></a></p> <pre><code>excel_index={'A':0,'B':1,'C':2} test={"Claim Number":[0,1,2,3,4,5]} test=pd.DataFrame(test) test=test.rename(columns={ test.columns[excel_index['A']]: "Frog"}) </code></pre> <p>It works, however the only problem I have is that I would have to manually type out all the index values beforehand.</p> <p>What would be a more efficient way to carry this out?</p> <p>-Brandon</p>
<p>If you have a column of index names, such as the one in your example, you can specify that in the pd.read_excel command. Since your index column is 5, it would read like this:</p> <pre><code>df = pd.read_excel('yourfilename.xlsx', index_col=5) </code></pre>
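<p>If the goal is simply to stop maintaining the letter-to-index dictionary by hand, a small hedged helper can compute the 0-based position straight from the Excel column letters (base-26 arithmetic, so it also covers 'AA', 'AB', ...):</p>
<pre><code>def excel_col_to_index(col):
    """Convert an Excel column label like 'A' or 'AB' to a 0-based index."""
    idx = 0
    for ch in col.upper():
        idx = idx * 26 + (ord(ch) - ord('A') + 1)
    return idx - 1

excel_col_to_index('A')    # 0
excel_col_to_index('F')    # 5
excel_col_to_index('AB')   # 27
</code></pre>
<p>The rename could then be driven by a mapping of letters to new names, e.g. <code>{df.columns[excel_col_to_index(k)]: v for k, v in letter_to_name.items()}</code>, where <code>letter_to_name</code> is a hypothetical dict such as <code>{'F': 'Original Claim Type', ...}</code>.</p>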
python|pandas
1
421
62,808,580
Rearranging a column based on a list order and then mapping the list as a new column
<p>I need to map a list as a new column in a dataframe based on another column with the same values but may have different cases fro different letters:</p> <pre><code> Input DF (df_temp): Name Class ABC 1 EFG 2 HIJ 3 ABC 4 param_list: ['AbC', 'EfG', 'HiJ'] Output DF (df_temp): Name Class DB_Name ABC 1 AbC EFG 2 EfG HIJ 3 HiJ ABC 4 AbC </code></pre> <p>I have written a small piece of code using 2 for loops but is there a better way to the same:</p> <pre><code> for param in param_list: for i in range(len(df_temp.Param_Name.str.lower().tolist())): if param.lower() == df_temp['Name'][i].lower(): df_temp['DB_Name'][i] = param </code></pre>
<p>Using <code>join</code>:</p> <pre><code>s = pd.Series(param_list, name='DB_name', index=[p.upper() for p in param_list]) df_temp = df_temp.assign(k=df_temp['Name'].str.upper()).join(s, on='k').drop(columns='k') </code></pre> <p>Result:</p> <pre><code> Name Class DB_name 0 ABC 1 AbC 1 EFG 2 EfG 2 HIJ 3 HiJ 3 ABC 4 AbC </code></pre>
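<p>An alternative sketch (not from the answer above) using a plain dictionary lookup via <code>Series.map</code>, which avoids the join entirely:</p>
<pre><code>param_list = ['AbC', 'EfG', 'HiJ']
lookup = {p.upper(): p for p in param_list}

df_temp['DB_Name'] = df_temp['Name'].str.upper().map(lookup)
#   Name  Class DB_Name
# 0  ABC      1     AbC
# 1  EFG      2     EfG
# 2  HIJ      3     HiJ
# 3  ABC      4     AbC
</code></pre>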
python|pandas|list|dataframe|for-loop
0
422
62,506,543
I met an Error: line 21, in <module> ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
<pre><code>import numpy as np Q = np.loadtxt(open(&quot;D:\data_homework_4\Q.csv&quot;,&quot;rb&quot;), delimiter = &quot;,&quot;, skiprows = 0) b = np.loadtxt(open(&quot;D:\data_homework_4\B.csv&quot;,&quot;rb&quot;), delimiter = &quot;,&quot;, skiprows = 0) def f(x): return 1/2 * x.T @ Q @ x + b.T @ x def gradient(x): return Q @ x - b n = 2000 x_t = np.zeros((100, 1)) alpha = 0.1 beta = 0.3 eta_t = 3.887e-6 for t in range(n + 2): g_t = gradient(x_t) k = 0 while True: if f(x_t - beta**k * g_t) &lt;= f(x_t) - alpha * beta**k * np.linalg.norm(g_t)**2: eta_t = beta**k break k += 1 x_t -= eta_t * g_t print(x_t) </code></pre> <p>line 21 is</p> <pre><code>if f(x_t - beta**k * g_t) &lt;= f(x_t) - alpha * beta**k * np.linalg.norm(g_t)**2: </code></pre> <p>Q is 100x100, b is 100x1, x is 100x1. I looked up similiar errors, but none of them are like mine. Can somebody help me with this error. Thank you.</p>
<p>As it says, in the <code>if</code> condition you are comparing between 2 arrays - both of which has multiple values and the if condition evaluates all of them but doesn't know how to collapse them into a single value of Truth - that's why it's asking you to use <code>any</code> or <code>all</code>:</p> <p>Try this for example :</p> <pre><code>import numpy as np arr = np.array([2,3]) arr1 = np.array([1,4]) arr, arr1 if (arr&lt;arr1): pass </code></pre> <p><strong>It would give you the same error that you have.</strong></p> <blockquote> <p>And to solve that I've added an <code>all</code> condition so that all the elements in those arrays have to satisfy the <code>&lt;</code> condition</p> </blockquote> <p>So:</p> <pre><code>import numpy as np arr = np.array([2,3]) arr1 = np.array([1,4]) arr, arr1 if (arr&lt;arr1).all(): pass </code></pre> <p>Think about what makes sense in your case and use that (be it <code>any</code> or <code>all</code>)</p>
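<p>In the question's snippet specifically, one plausible source of the multi-element array (an assumption, since it depends on the shapes in the CSV files) is broadcasting: <code>np.loadtxt</code> on a single-column file returns <code>b</code> with shape <code>(100,)</code>, so <code>Q @ x - b</code> broadcasts a <code>(100, 1)</code> result against <code>(100,)</code> into a <code>(100, 100)</code> array, which then flows into the comparison. Reshaping <code>b</code> keeps everything as column vectors:</p>
<pre><code>b = b.reshape(-1, 1)   # shape (100, 1), so gradient(x) = Q @ x - b stays (100, 1)
</code></pre>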
python|numpy
1
423
54,618,397
Counting repeating sequences Pandas
<p>I have a data scattered in a chaotic manner. </p> <pre><code>store_id period_id sales_volume 0 4186684 226 1004.60 1 5219836 226 989.00 2 4185865 226 827.45 3 4186186 226 708.40 4 4523929 226 690.75 5 4186441 226 592.55 ... ... ... ... 846960 11710234 195 0.60 846961 11693671 236 0.60 846962 27105667 212 0.60 846963 11693725 201 0.60 846964 27078031 234 0.60 846965 11663800 231 0.60 </code></pre> <p>In the <code>period_id</code> column the values give an indication of how long the process lasted only if they go continuously, as soon as the series is interrupted, this means that a new period has started. This representation of periods is relevant for each <code>store_id</code>. Since I could not sort the data in order I present them as an example below:</p> <pre><code> store_id period_id sales_volume 0 4168621 208 1004.60 1 4168621 209 989.00 #end of period 2 4168621 211 827.45 3 4168621 212 708.40 4 4168621 213 690.75 5 4168621 214 592.55 #end of period 6 41685 208 4634 7 41685 209 3356563 #end of period </code></pre> <p>I've grouped the values by store_id:</p> <p><code>df.groupby('store_id').agg(lambda x: x.tolist())</code></p> <p>and received</p> <pre><code>store_id sales_volume period_id 4168621 [226, 202, 199, 204, 224, 193 ... [27.45,10.0,8.15,7.6, ... 4168624 [226, 216, 215, 225, 214, 217 ... [429.8, 131.35,92.0 ... 4168636 [226, 217, 238, 223, 234, 240, ... [33.30, 9.3, 6.4, ... 4168639 [226, 204, 211, 208, 232, 207, ... [19.3,8.05, 6.5, 6.4, ... ... ... ... </code></pre> <p>It turns out, I need to sort the values in <code>period_id</code> somehow in order to calculate the number of sequences that turned out for each <code>store_id</code>, that is, as in code 2. It shows 3 sequences</p> <p>Don't know how I can do it...</p>
<p>If you only need to sort by <code>period_id</code> within each <code>store_id</code>, you can use <code>df.sort_values</code>. Using your example DataFrame as input:</p> <pre><code>df.sort_values(['store_id', 'period_id']).reset_index(drop=True) df store_id period_id sales_volume 0 41685 208 4634.00 1 41685 209 3356563.00 2 4168621 208 1004.60 3 4168621 209 989.00 4 4168621 211 827.45 5 4168621 212 708.40 6 4168621 213 690.75 7 4168621 214 592.55 </code></pre> <p>If you want to detect each period (and then group by period, for example), here is one way:</p> <pre><code>df['period_group'] = df['period_id'].diff().fillna(1).ne(1).astype(int).cumsum() df store_id period_id sales_volume period_group 0 4168621 208 1004.60 0 1 4168621 209 989.00 0 2 4168621 211 827.45 1 3 4168621 212 708.40 1 4 4168621 213 690.75 1 5 4168621 214 592.55 1 6 41685 208 4634.00 2 7 41685 209 3356563.00 2 </code></pre> <p>You can then group by this new column <code>period_group</code> to analyze "runs" of contiguous period ids.</p>
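<p>To finish the asker's goal of counting how many sequences each <code>store_id</code> has, a short follow-up on top of that <code>period_group</code> column:</p>
<pre><code>seq_counts = df.groupby('store_id')['period_group'].nunique()
# store_id
# 41685      1
# 4168621    2
</code></pre>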
python|pandas
0
424
73,569,716
Pandas group in series
<p>Given <br></p> <pre><code>df = pd.DataFrame({'group': [1, 1, 2, 1, 1], 'value':['a','b','c','d','e']}) </code></pre> <p>I need to treat <strong>a</strong> and <strong>b</strong> as one group, <strong>c</strong> as second group, <strong>d</strong> and <strong>e</strong> as third group. How to get first element from every group?</p> <pre><code>pd.DataFrame({'group': [1, 2, 1,], 'value':['a','c','d']}) </code></pre>
<p>Try this:</p> <pre class="lang-py prettyprint-override"><code>df1 = df[df['group'].ne(df['group'].shift())] </code></pre> <p>Check <a href="https://stackoverflow.com/a/59136415/13174934">this answer</a> for more details</p>
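<p>A quick check against the frame from the question:</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'group': [1, 1, 2, 1, 1], 'value': ['a', 'b', 'c', 'd', 'e']})
df1 = df[df['group'].ne(df['group'].shift())]
print(df1)
#    group value
# 0      1     a
# 2      2     c
# 3      1     d
</code></pre>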
python|pandas|series|group
0
425
73,628,386
Cannot concatenate object of type '<class 'numpy.ndarray'>'
<p>i have a problem when i try to concatenate train set and validation set. I split my dataset into train set, validation set and test set. Then i scale them with 'StandardScaler()':</p> <pre><code>X_train, X_test, t_train, t_test = train_test_split(x, t, test_size=0.20, random_state=1) X_train, X_valid, t_train, t_valid = train_test_split(X_train, t_train, test_size=0.25, random_state=1) sc = StandardScaler() X_train = sc.fit_transform(X_train) X_valid = sc.transform(X_valid) X_test = sc.transform(X_test) </code></pre> <p>Then after model selection i want concatenate training and validation set:</p> <pre><code>X_train = pd.concat([X_train, X_valid]) t_train = pd.concat([t_train, t_valid]) </code></pre> <p>But it doesn't work. I give me that error:</p> <pre><code>cannot concatenate object of type '&lt;class 'numpy.ndarray'&gt;'; only Series and DataFrame objs are valid </code></pre> <p>Can someone help me? Thanks</p>
<p><code>X_train</code>, <code>X_valid</code>, <code>t_train</code>, <code>t_valid</code> are all numpy arrays so they need to be concatenated using numpy:</p> <pre><code>X_train = np.concatenate([X_train, X_valid]) t_train = np.concatenate([t_train, t_valid]) </code></pre> <p>As suggested in the comments it is most likely not a good idea to merge train and validation sets together. Make sure you understand why datasets are split into training, testing and validation parts. You can apply cross validation to use all the data for train/test/valid in multiple steps.</p>
python|pandas|numpy
0
426
73,595,231
Pandas to_datetime doesn't work as hoped with format %d.%m.%Y
<p>I have a pandas dataframe with a text column containing strings in the format:</p> <pre><code>28.08.1958 29.04.1958 01.02.1958 05.03.1958 </code></pre> <p>that I want to interpret as dates. The dataframe arises from using beautifulsoup, i.e. I've not read it in from csv, so I planned to use pd.to_datetime(). There are occasional non-date entries so I added the errors='ignore'.</p> <pre><code>df[&quot;Date2&quot;] = pd.to_datetime(df[&quot;Date&quot;], format='%d.%m.%Y', errors='ignore') </code></pre> <p>It looks to me as if this isn't working as I have used a subsequent sort operation:</p> <pre><code>df.sort_values(by=&quot;Date2&quot;, ascending=True) </code></pre> <p>and this does change the order but seemingly randomly, not into date order. I wondered if there might be whitespace amongst the dates so as a precaution I used:</p> <pre><code>['Date'].str.strip() </code></pre> <p>but no improvement.</p> <p>I also tried adding:</p> <pre><code>inplace=True </code></pre> <p>to the sort values but this results in the whole column being sorted by the days part of the date, which is really telling me that there is no conversion to date going on.</p> <p><strong>In summary</strong> I presume that all the input strings are being treated as errors and ignored. Perhaps this means that the parameter format='%d.%m.%Y' isn't right.</p> <p><strong>EDIT</strong> In response to the comments/answer so far I've found a way of inducing an input dataset that has no errors in it. This seems to sort ok. What's more I can see that the data type of the &quot;Date2&quot; column depends upon whether there are errors in that column: if there is non-date text then the column is of type object, if no errors it is datetime64[ns]</p> <p><strong>Solution</strong> I've set errors='coerce' within the to_datetime statement.</p>
<p>Try this:</p> <pre><code>import pandas as pd import io csv_data = ''' Date 28.08.1958 29.04.1958 01.02.1958 05.03.1958 ''' df = pd.read_csv(io.StringIO(csv_data)) df[&quot;Date2&quot;] = pd.to_datetime(df[&quot;Date&quot;], format='%d.%m.%Y') df.sort_values(by=&quot;Date2&quot;, ascending=True, inplace=True) print(df) </code></pre> <hr /> <pre class="lang-none prettyprint-override"><code> Date Date2 2 01.02.1958 1958-02-01 3 05.03.1958 1958-03-05 1 29.04.1958 1958-04-29 0 28.08.1958 1958-08-28 </code></pre>
python|pandas|date|sorting
2
427
73,684,178
Pandas csv dataframe to json array
<p>I am reading a csv file and trying to convert the data into json array.But I am facing issues as &quot;only size-1 arrays can be converted to Python scalars&quot;</p> <p>The csv file contents are</p> <pre><code> 4.4.4.4 5.5.5.5 </code></pre> <p>My code is below</p> <pre><code>import numpy as np import pandas as pd df1 = pd.read_csv('/Users/Documents/datasetfiles/test123.csv', header=None) df1.head(5) 0 0 4.4.4.4 1 5.5.5.5 df_to_array = np.array(df1) app_json = json.dumps(df_to_array,default=int) </code></pre> <p>I need output as</p> <pre><code>[&quot;4.4.4.4&quot;, &quot;5.5.5.5&quot;, &quot;3.3.3.3&quot;] </code></pre>
<p>As other answers mentioned, just use <code>list</code>: <code>json.dumps(list(df[0]))</code></p> <p>FYI, the data shape is your problem:</p> <p><a href="https://i.stack.imgur.com/VNouL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VNouL.png" alt="enter image description here" /></a></p> <p>if you absolutely must use numpy, then transpose the array first:</p> <pre class="lang-py prettyprint-override"><code>json.dumps(list(df_to_array.transpose()[0])) </code></pre>
python|pandas|dataframe
1
428
60,518,664
Python pandas DataFrame, sum row's value which data is Tru
<p>I am very new to pandas and even new to programming. </p> <p>I have DataFrame of [500 rows x 24 columns]</p> <p>500 rows are rank of data and 24 columns are years and months.</p> <p>What I want is </p> <ol> <li><p>select data from df</p></li> <li><p>get all data's row value by int</p></li> <li><p>sum all row value</p></li> </ol> <p>I did <code>DATAF = df1[df1.isin(['MYDATA'])]</code></p> <p>DATAF is something like below</p> <pre><code> 19_01 19_02 19_03 19_04 19_05 0 NaN MYDATA NaN NaN NaN 1 MYDATA NaN MYDATA NaN NaN 2 NaN NaN NaN MYDATA NaN 3 NaN NaN NaN NaN NaN 4 NaN NaN NaN NaN NaN </code></pre> <p>so I want to sum all the row value</p> <p>which would be like 1 + 0 + 1 + 2 </p> <p>it would be nicer if sum is like 2 + 1 +2 + 3. because rows are rank of data</p> <p>is there any way to do this?</p>
<p>You can use <code>np.where</code>:</p> <pre><code>rows, cols = np.where(DATAF .notna()) # rows: array([0, 1, 1, 2], dtype=int64) print((rows+1).sum()) # 8 </code></pre>
python|pandas|dataframe
1
429
60,645,104
maximum difference between two time series of different resolution
<p>I have two time series data that gives the electricity demand in one-hour resolution and five-minute resolution. I am trying to find the maximum difference between these two time series. So the one-hour resolution data has 8760 rows (hourly for an year) and the 5-minute resolution data has 104,722 rows (5-minutly for an year).</p> <p>I can only think of a method that will expand the hourly data into 5 minute resolution that will have 12 times repeating of the hourly data and find the maximum of the difference of the two data sets.</p> <p>If this technique is the way to go, is there an easy way to convert my hourly data into 5-minute resolution by repeating the hourly data 12 times?</p> <p>for your reference I posted a plot of this data for one day.</p> <p>P.S> I am using Python to do this task<a href="https://i.stack.imgur.com/g6r9S.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/g6r9S.png" alt="enter image description here"></a></p>
<h2><a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.repeat.html" rel="nofollow noreferrer">Numpy's .repeat() function</a></h2> <p>You can change your hourly data into 5-minute data by using numpy's repeat function</p> <pre><code>import numpy as np np.repeat(hourly_data, 12) </code></pre>
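<p>To get at the original goal (the maximum difference between the two series), a rough sketch assuming <code>hourly_data</code> and <code>five_min_data</code> are NumPy arrays covering the same period (the names are placeholders):</p> <pre><code>import numpy as np

hourly_expanded = np.repeat(hourly_data, 12)         # one value per 5-minute slot
n = min(len(hourly_expanded), len(five_min_data))    # guard against a length mismatch
diff = five_min_data[:n] - hourly_expanded[:n]
print(np.abs(diff).max())                            # largest absolute difference
</code></pre>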
python|pandas|time-series
1
430
60,472,670
pandas and folium not able to convert str to float
<p><a href="https://i.stack.imgur.com/8Hg9W.png" rel="nofollow noreferrer">enter image description here</a>I'm using python folium to develop a map which displays the airports in india, im using pandas to read the data from csv and assign the coordinates to <code>folium.Maker</code> location and im getting this error</p> <pre><code>Traceback (most recent call last): File "/Users/user/Downloads/mapping/folium/folium/utilities.py", line 59, in validate_location float(coord) ValueError: could not convert string to float: '#geo +lat' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "map1.py", line 15, in &lt;module&gt; fg.add_child(folium.Marker(location=[lt, ln], popup="Hi there", icon=folium.Icon(color="green"))) File "/Users/user/Downloads/mapping/folium/folium/map.py", line 277, in __init__ self.location = validate_location(location) File "/Users/user/Downloads/mapping/folium/folium/utilities.py", line 63, in validate_location .format(coord, type(coord))) ValueError: Location should consist of two numerical values, but '#geo +lat' of type &lt;class 'str'&gt; is not convertible to float. </code></pre> <pre><code>import folium import pandas data = pandas.read_csv("aiports.csv") lat = list(data["latitude_deg"]) lon = list(data["longitude_deg"]) map = folium.Map(location = [20.5937 ,78.9629], zoom_start=4, tiles = "Stamen Terrain") fg = folium.FeatureGroup(name="My Map") for lt, ln in zip(lat, lon): fg.add_child(folium.Marker(location=[lt, ln], popup="Hi there", icon=folium.Icon(color="green"))) map.add_child(fg) map.save("Map1.html") </code></pre>
<p>Take a look at the contents of the dataframe and you will see the first row is metadata, not data. I just skipped the first row using <code>loc[]</code>. For good measure I've included the airport name in the marker.</p> <pre><code>df = pd.read_csv(&quot;airports.csv&quot;) map = folium.Map(location = [20.5937 ,78.9629], zoom_start=4, tiles = &quot;Stamen Terrain&quot;) fg = folium.FeatureGroup(name=&quot;My Map&quot;) for r in df.loc[1:,[&quot;name&quot;,&quot;latitude_deg&quot;,&quot;longitude_deg&quot;]].to_dict(orient=&quot;records&quot;): fg.add_child(folium.Marker(location=[r[&quot;latitude_deg&quot;], r[&quot;longitude_deg&quot;]], popup=r[&quot;name&quot;], icon=folium.Icon(color=&quot;green&quot;))) map.add_child(fg) map </code></pre> <p><strong>output</strong> <a href="https://i.stack.imgur.com/efKls.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/efKls.png" alt="enter image description here" /></a></p>
python|python-3.x|pandas|maps|folium
0
431
60,671,971
How to reshape an Image in pytorch
<p>I have an image of shape <code>(32, 3, 32, 32)</code>. I know it's of the form <code>(batch_size, Channel, Height, Width)</code>.</p> <p>Q. How do I convert it to be <code>(32, 32, 32)</code> overriding the <code>Channel</code>?</p>
<p>If you want to convert to grayscale you could do this:</p> <p><code>image.mean(dim=1)</code></p>
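<p>A quick sketch of the shape change (the tensor here is random, just to illustrate):</p> <pre><code>import torch

images = torch.rand(32, 3, 32, 32)   # (batch, channels, height, width)
gray = images.mean(dim=1)            # average over the channel dimension
print(gray.shape)                    # torch.Size([32, 32, 32])
</code></pre>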
python|image-processing|dataset|pytorch
0
432
72,756,016
Pandas Set row value based on another column value but do nothing on else
<p>This is my dataframe:</p> <p><a href="https://i.stack.imgur.com/meKUR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/meKUR.png" alt="enter image description here" /></a></p> <p>It has 455 rows, covering a period of days as a sequence with one row every 4 hours.</p> <p>I need to replace each 'demand' value with 0 if the timestamp hour is &quot;23&quot;.</p> <p>So I wrote this:</p> <pre><code>datadf['value']=datadf['timestamp'].apply(lambda x, y=datadf['value']: 0 if x.hour==23 else y) </code></pre> <p>I know the y value is wrong, but I couldn't find a way to refer to the same row's &quot;demand&quot; value inside the lambda.</p> <p>How can I refer to that demand value? Is there an alternative where the else does nothing?</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd import numpy as np #data preparation df = pd.DataFrame() df['date'] = pd.date_range(start='2022-06-01',periods=7,freq='4h') + pd.Timedelta('3H') df['val'] = np.random.rand(7) print(df) &gt;&gt; date val 0 2022-06-01 03:00:00 0.601889 1 2022-06-01 07:00:00 0.017787 2 2022-06-01 11:00:00 0.290662 3 2022-06-01 15:00:00 0.179150 4 2022-06-01 19:00:00 0.763534 5 2022-06-01 23:00:00 0.680892 6 2022-06-02 03:00:00 0.585380 #if your dates not datetime format, you must convert it df['date'] = pd.to_datetime(df['date']) df.loc[df['date'].dt.hour == 23, 'val'] = 0 #if you don't want to change data in &quot;demand&quot; column you can copy it #df['val_2'] = df['val'] #df.loc[df['date'].dt.hour == 23, 'val_2'] = 0 print(df) &gt;&gt; date val 0 2022-06-01 03:00:00 0.601889 1 2022-06-01 07:00:00 0.017787 2 2022-06-01 11:00:00 0.290662 3 2022-06-01 15:00:00 0.179150 4 2022-06-01 19:00:00 0.763534 5 2022-06-01 23:00:00 0.000000 6 2022-06-02 03:00:00 0.585380 </code></pre>
python|pandas|dataframe
1
433
59,614,071
values in the array change when turned to numpy array
<p>I have data stored in a pandas DataFrame that I move to a numpy array using the following code </p> <pre><code># used to be train_X = np.array(train_df.iloc[1:,3:].values.tolist()) # but was split for me to find he source of change pylist = train_df.iloc[1:,3:].values.tolist() print(pylist[0]) train_X = np.array(pylist) print(train_X[0]) </code></pre> <p>the first print returns :</p> <pre><code>[0.0, 0.0, 0.0, 0.0, 1.0, 504.0, 0.0, 2.0, 8.0, 0.0, 0.0, 0.0, 0.0, 2.0, 8.0, 0.0, 189.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 85143.0, 57219.0, 62511.267857142804, 2649.26669430866] </code></pre> <p>the second print after the I move it to a Numpy array returns this </p> <pre><code>[0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 1.00000000e+00 5.04000000e+02 0.00000000e+00 2.00000000e+00 8.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 2.00000000e+00 8.00000000e+00 0.00000000e+00 1.89000000e+02 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00 8.51430000e+04 5.72190000e+04 6.25112679e+04 2.64926669e+03] </code></pre> <p>why does this happen ? and how do I stop it </p>
<p>As mentioned in the comments, NumPy displays the data in exponential (scientific) notation; the values themselves are unchanged. If you would like to change the way it's printed, you can do:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np np.set_printoptions(precision=2, suppress=True) pylist = train_df.iloc[1:,3:].values.tolist() print(pylist[0]) train_X = np.array(pylist) print(train_X[0]) </code></pre>
python|pandas|numpy
1
434
59,853,054
Tensorflow Lite Object Detection with Custom AutoML Model
<p>I'd like to test the Object Detection Example of TFLite. </p> <p><a href="https://github.com/tensorflow/examples/tree/master/lite/examples/object_detection/android" rel="nofollow noreferrer">https://github.com/tensorflow/examples/tree/master/lite/examples/object_detection/android</a></p> <p>The example with the default Model works great. But I want to test a custom Model generated from AutoML. When I replace the "detect.tflite" and "labelmap file" in the "\src\main\assets" directory and build, the App crashes after launch.</p> <p>My Model is very simple ... it can detect only 2 objects (Tiger and Lion). My labelmap file contains the following:</p> <p>??? Tiger Lion</p> <p>Also I commented out the line "//apply from:'download_model.gradle'" in "build.gradle" to stop the download of the default Model and use my custom Model from the assets.</p> <p>I'm new to both Android and the ML space. I'd be glad if anyone can advise on the App crash after launch with the custom AutoML Model.</p> <p>Thanks in advance.</p> <p>Regards.</p>
<p>Two probable errors in the log might be:</p> <p><code>Cannot convert between a TensorFlowLite buffer with 1080000 bytes and a ByteBuffer with 270000 bytes.</code> Modify TF_OD_API_INPUT_SIZE accordingly.</p> <p><code>tflite ml google [1, 20, 4] and a Java object with shape [1, 10, 4].</code> Modify NUM_DETECTIONS according to the custom model.</p>
android|tensorflow|machine-learning|tensorflow-lite|automl
0
435
40,515,677
Graphical Selection of Data in Python 3
<p><strong>The Problem to Solve</strong></p> <p>I have two 2D numpy arrays. One of which is an array of floats, the other is an array of strings. Each float array element is extracted from a file with a name in the corresponding string array element.</p> <p>I want to plot a 2D heatmap of the array of floats, with the color corresponding to the magnitude of the array element.</p> <p>Based on this I would like to interactively select an area (e.g. with a polygon or lasso tool). I want the indices of the array which have been selected to then be written to a list/array so the corresponding filenames can be extracted for further processing.</p> <p>I am using Python 3.</p> <p>I have spent several hours trying to make headway with this problem in Bokeh but have had no success. </p> <p><strong>My Questions are the following</strong></p> <p>Which Python libraries best suit this problem?</p> <p>Given the above library(s), do you have any tips to get started?</p> <p><strong>Many thanks.</strong></p> <p><em>Note: I have been using Python for some scientific data programming but would not consider myself an experienced programmer.</em></p>
<p>There is no easy way to do this, however it is quite doable with matplotlib. Matplotlib has two classes SpanSelector and LassoSelector that you can use.You can find some documentation <a href="http://matplotlib.org/api/widgets_api.html" rel="nofollow noreferrer">here</a>. <a href="http://matplotlib.org/examples/widgets/lasso_selector_demo.html" rel="nofollow noreferrer">Here</a> is an example with LassoSelector.</p>
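<p>A rough, self-contained sketch of the <code>LassoSelector</code> approach (the array sizes and filenames are made up; the idea is to map the lassoed region back to array indices and then to the corresponding filenames):</p> <pre><code>import numpy as np
import matplotlib.pyplot as plt
from matplotlib.path import Path
from matplotlib.widgets import LassoSelector

values = np.random.rand(20, 20)                      # stands in for the float array
names = np.array([['file_%d_%d.txt' % (i, j) for j in range(20)]
                  for i in range(20)])               # stands in for the string array

fig, ax = plt.subplots()
ax.imshow(values, origin='lower')

# one (x, y) point per cell centre, so selections map back to indices
yy, xx = np.mgrid[0:20, 0:20]
points = np.column_stack([xx.ravel(), yy.ravel()])

selected_files = []

def onselect(verts):
    mask = Path(verts).contains_points(points).reshape(values.shape)
    idx = np.argwhere(mask)                  # (row, col) indices inside the lasso
    selected_files[:] = list(names[mask])    # matching filenames
    print(idx)

lasso = LassoSelector(ax, onselect)          # keep a reference so it stays active
plt.show()
</code></pre>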
python|numpy|plot|interactive
1
436
18,701,569
pandas: DataFrame.mean() very slow. How can I calculate means of columns faster?
<p>I have a rather large CSV file, it contains 9917530 rows (without the header), and 54 columns. Columns are real or integer, only one contains dates. There is a few NULL values on the file, which are translated to <code>nan</code> after I load it to pandas <code>DataFrame</code>, which I do like this:</p> <pre><code>import pandas as pd data = pd.read_csv('data.csv') </code></pre> <p>After loading, which I think was very fast, cause it took around 30 seconds (pretty much the same time as counting lines with the Unix tool <code>wc</code>), the process was taking around 4Gb of RAM (the size of of the file on disk: 2.2 Gb. So far so good.</p> <p>Then I tried to do the following:</p> <pre><code>column_means = data.mean() </code></pre> <p>The process' occupied memory grew to ~22Gb very quickly. I could also see the processor (one core) was very very busy - for like three hours, after that I killed the process, cause I needed to use the machine for other things. I have a pretty fast PC with Linux - it has 2 processors, each having 4 cores, so it's 8 cores all together, and 32 Gb of RAM. I cannot believe calculating column means should take so long.</p> <p>Can anybody explain why <code>DataFrame.mean()</code> is so slow? And more importantly, what is a better way of calculating means of columns of a file like that? Did I not load the file the best way possible, should I use a different function instead of <code>DataFrame.mean()</code> or perhaps a completely different tool?</p> <p>Many thanks in advance.</p> <p>EDIT. Here is what <code>df.info()</code> shows:</p> <pre><code>&lt;class 'pandas.core.frame.DataFrame'&gt; Int64Index: 9917530 entries, 0 to 9917529 Data columns (total 54 columns): srch_id 9917530 non-null values date_time 9917530 non-null values site_id 9917530 non-null values visitor_location_country_id 9917530 non-null values visitor_hist_starrating 505297 non-null values visitor_hist_adr_usd 507612 non-null values prop_country_id 9917530 non-null values prop_id 9917530 non-null values prop_starrating 9917530 non-null values prop_review_score 9902900 non-null values prop_brand_bool 9917530 non-null values prop_location_score1 9917530 non-null values prop_location_score2 7739150 non-null values prop_log_historical_price 9917530 non-null values position 9917530 non-null values price_usd 9917530 non-null values promotion_flag 9917530 non-null values srch_destination_id 9917530 non-null values srch_length_of_stay 9917530 non-null values srch_booking_window 9917530 non-null values srch_adults_count 9917530 non-null values srch_children_count 9917530 non-null values srch_room_count 9917530 non-null values srch_saturday_night_bool 9917530 non-null values srch_query_affinity_score 635564 non-null values orig_destination_distance 6701069 non-null values random_bool 9917530 non-null values comp1_rate 235806 non-null values comp1_inv 254433 non-null values comp1_rate_percent_diff 184907 non-null values comp2_rate 4040633 non-null values comp2_inv 4251538 non-null values comp2_rate_percent_diff 1109847 non-null values comp3_rate 3059273 non-null values comp3_inv 3292221 non-null values comp3_rate_percent_diff 944007 non-null values comp4_rate 620099 non-null values comp4_inv 692471 non-null values comp4_rate_percent_diff 264213 non-null values comp5_rate 4444294 non-null values comp5_inv 4720833 non-null values comp5_rate_percent_diff 1681006 non-null values comp6_rate 482487 non-null values comp6_inv 524145 non-null values comp6_rate_percent_diff 193312 non-null values comp7_rate 631077 
non-null values comp7_inv 713175 non-null values comp7_rate_percent_diff 277838 non-null values comp8_rate 3819043 non-null values comp8_inv 3960388 non-null values comp8_rate_percent_diff 1225707 non-null values click_bool 9917530 non-null values gross_bookings_usd 276592 non-null values booking_bool 9917530 non-null values dtypes: float64(34), int64(19), object(1)None </code></pre>
<p>Here's a similar sized from , but without an object column</p> <pre><code>In [10]: nrows = 10000000 In [11]: df = pd.concat([DataFrame(randn(int(nrows),34),columns=[ 'f%s' % i for i in range(34) ]),DataFrame(randint(0,10,size=int(nrows*19)).reshape(int(nrows),19),columns=[ 'i%s' % i for i in range(19) ])],axis=1) In [12]: df.iloc[1000:10000,0:20] = np.nan In [13]: df.info() &lt;class 'pandas.core.frame.DataFrame'&gt; Int64Index: 10000000 entries, 0 to 9999999 Data columns (total 53 columns): f0 9991000 non-null values f1 9991000 non-null values f2 9991000 non-null values f3 9991000 non-null values f4 9991000 non-null values f5 9991000 non-null values f6 9991000 non-null values f7 9991000 non-null values f8 9991000 non-null values f9 9991000 non-null values f10 9991000 non-null values f11 9991000 non-null values f12 9991000 non-null values f13 9991000 non-null values f14 9991000 non-null values f15 9991000 non-null values f16 9991000 non-null values f17 9991000 non-null values f18 9991000 non-null values f19 9991000 non-null values f20 10000000 non-null values f21 10000000 non-null values f22 10000000 non-null values f23 10000000 non-null values f24 10000000 non-null values f25 10000000 non-null values f26 10000000 non-null values f27 10000000 non-null values f28 10000000 non-null values f29 10000000 non-null values f30 10000000 non-null values f31 10000000 non-null values f32 10000000 non-null values f33 10000000 non-null values i0 10000000 non-null values i1 10000000 non-null values i2 10000000 non-null values i3 10000000 non-null values i4 10000000 non-null values i5 10000000 non-null values i6 10000000 non-null values i7 10000000 non-null values i8 10000000 non-null values i9 10000000 non-null values i10 10000000 non-null values i11 10000000 non-null values i12 10000000 non-null values i13 10000000 non-null values i14 10000000 non-null values i15 10000000 non-null values i16 10000000 non-null values i17 10000000 non-null values i18 10000000 non-null values dtypes: float64(34), int64(19) </code></pre> <p>Timings (similar machine specs to you)</p> <pre><code>In [14]: %timeit df.mean() 1 loops, best of 3: 21.5 s per loop </code></pre> <p>You can get a 2x speedup by pre-converting to floats (mean does this, but does it in a more general way, so slower)</p> <pre><code>In [15]: %timeit df.astype('float64').mean() 1 loops, best of 3: 9.45 s per loop </code></pre> <p>You problem is the object column. Mean will try to calculate for all of the columns, but because of the object column everything is upcast to <code>object</code> dtype which is not efficient for calculating.</p> <p>Best bet is to do</p> <pre><code> df._get_numeric_data().mean() </code></pre> <p>There is an option to do this <code>numeric_only</code>, at the lower level, but for some reason we don't directly support this via the top-level functions (e.g. mean). I think will create an issue to add this parameter. However will prob be <code>False</code> by default (to not-exclude).</p>
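<p>As a side note (not part of the original answer): more recent pandas versions do accept this flag on the top-level reduction, so a shorter route is:</p> <pre><code>column_means = data.mean(numeric_only=True)
</code></pre>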
python|performance|pandas|dataframe
19
437
18,441,779
How to specify upper and lower limits when using numpy.random.normal
<p>I want to be able to pick values from a normal distribution that only ever fall between 0 and 1. In some cases I want to be able to basically just return a completely random distribution, and in other cases I want to return values that fall in the shape of a gaussian.</p> <p>At the moment I am using the following function:</p> <pre><code>def blockedgauss(mu,sigma): while True: numb = random.gauss(mu,sigma) if (numb &gt; 0 and numb &lt; 1): break return numb </code></pre> <p>It picks a value from a normal distribution, then discards it if it falls outside of the range 0 to 1, but I feel like there must be a better way of doing this.</p>
<p>It sounds like you want a <a href="http://en.wikipedia.org/wiki/Truncated_normal_distribution" rel="noreferrer">truncated normal distribution</a>. Using scipy, you could use <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.truncnorm.html" rel="noreferrer"><code>scipy.stats.truncnorm</code></a> to generate random variates from such a distribution:</p> <pre><code>import matplotlib.pyplot as plt import scipy.stats as stats lower, upper = 3.5, 6 mu, sigma = 5, 0.7 X = stats.truncnorm( (lower - mu) / sigma, (upper - mu) / sigma, loc=mu, scale=sigma) N = stats.norm(loc=mu, scale=sigma) fig, ax = plt.subplots(2, sharex=True) ax[0].hist(X.rvs(10000), normed=True) ax[1].hist(N.rvs(10000), normed=True) plt.show() </code></pre> <p><img src="https://i.stack.imgur.com/OglPQ.png" alt="enter image description here" /></p> <p>The top figure shows the truncated normal distribution, the lower figure shows the normal distribution with the same mean <code>mu</code> and standard deviation <code>sigma</code>.</p>
python|numpy|random|scipy|gaussian
61
438
58,059,351
Update dataframe via for loop
<p>The code below has to update <code>test_df dataframe</code>, which is currently filled with <code>NaN</code>s.</p> <p>Each 'dig' (which is always an integer) value has corresponding 'top', 'bottom', 'left' and 'right' values, and the slices of dataframe, corresponding to respective top:bottom, left:right ranges for each 'dig', need to be updated with 'dig' values. </p> <p>For example, if dig=9, top=2, botton=4, left=1 and right=5, all the <code>NaN</code>s within the range of 2:4, 1:5 need to be replaced with 9s.</p> <p>The following code reports no errors, however, no NaNs are being updated.</p> <pre><code>for index, row in letters_df.iterrows(): dig = str(row[0]) top = int(height) - int(row[2]) bottom = int(height) - int(row[4]) left = int(row[1]) right = int(row[3]) test_df.iloc[top:bottom, left:right] = dig </code></pre> <p>test_df:</p> <pre><code> 0 1 2 3 4 5 6 ... 633 634 635 636 637 638 639 0 NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN NaN 1 NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN NaN 2 NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN NaN 3 NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN NaN 4 NaN NaN NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN NaN NaN </code></pre> <p>letters_df:</p> <pre><code> 0 1 2 3 4 5 dig_unique_letters 0 T 36 364 51 388 0 0 1 h 36 364 55 388 0 1 2 i 57 364 71 388 0 2 3 s 76 364 96 388 0 3 4 i 109 364 112 388 0 2 </code></pre>
<p>The problem I see is that in <code>letters_df</code> the value in column 4 is higher than the value in column 2. That means that when you do <code>top = int(height) - int(row[2]) bottom = int(height) - int(row[4])</code> the value you will get in <code>top</code> will be bigger than the value you will get in <code>bottom</code>. So when you index <code>.iloc[top:bottom]</code> you have no rows in the slice. Maybe you should swap <code>top</code> and <code>bottom</code>.</p>
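<p>In other words, a minimal fix might be to swap which column feeds which bound, sketched here against the loop from the question (whether <code>row[2]</code> and <code>row[4]</code> really need swapping depends on the coordinate convention of the data):</p> <pre><code>for index, row in letters_df.iterrows():
    dig = str(row[0])
    # column 4 is larger than column 2, so after subtracting from height
    # the roles flip: row[4] gives the smaller (top) index
    top = int(height) - int(row[4])
    bottom = int(height) - int(row[2])
    left = int(row[1])
    right = int(row[3])
    test_df.iloc[top:bottom, left:right] = dig
</code></pre>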
python|pandas|dataframe
0
439
58,045,718
See all correctly and incorrectly identified images when training on the mnist dataset
<p>I'm trying to find a way to visualize which numbers in the mnist dataset a model was able to correctly identify and which ones it wasn't. What I can't seem to find is if such a visualization is possible in tensorboard or if I would need to use/create something else to achieve it.</p> <p>I'm currently working from the <a href="https://www.tensorflow.org/beta/tutorials/quickstart/beginner" rel="nofollow noreferrer">basic tutorial</a> provided for tensorflow 2.0 with tensorboard added.</p> <pre><code>import datetime import tensorflow as tf mnist = tf.keras.datasets.mnist (x_train, y_train), (x_test, y_test) = mnist.load_data() x_train, x_test = x_train / 255.0, x_test / 255.0 model = tf.keras.models.Sequential([ tf.keras.layers.Flatten(input_shape=(28, 28)), tf.keras.layers.Dense(128, activation='relu'), tf.keras.layers.Dropout(0.2), tf.keras.layers.Dense(10, activation='softmax') ]) model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) log_dir="logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S") tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1) model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test), callbacks=[tensorboard_callback]) model.evaluate(x_test, y_test) </code></pre>
<p>It appears the what-if tool is what I was looking for, it allows you to visually sort testing data depending on whether it was <a href="https://i.stack.imgur.com/xrxiW.jpg" rel="nofollow noreferrer">correctly or incorrectly identified by the model.</a></p> <p>If you want to test it out <a href="https://pair-code.github.io/what-if-tool/age.html" rel="nofollow noreferrer">here</a> is their demo that I used to get the above image and they have multiple other demos on the tools <a href="https://pair-code.github.io/what-if-tool/index.html#demos" rel="nofollow noreferrer">site</a>.</p>
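<p>For a quick programmatic check without the What-If Tool, a rough sketch (not part of the original answer) reusing the model trained in the question:</p> <pre><code>import numpy as np
import matplotlib.pyplot as plt

pred = np.argmax(model.predict(x_test), axis=1)
wrong = np.where(pred != y_test)[0]          # indices the model got wrong

for k, i in enumerate(wrong[:9]):            # show a handful of misclassified digits
    plt.subplot(3, 3, k + 1)
    plt.imshow(x_test[i], cmap='gray')
    plt.title('true %d, pred %d' % (y_test[i], pred[i]))
    plt.axis('off')
plt.show()
</code></pre>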
python|python-3.x|tensorflow|tensorboard
0
440
36,758,114
ValueError: setting an array element with a sequence when using feed_dict in TensorFlow
<p>I am trying to feed a Tensor containing the correct labels when I perform training.</p> <p>The correct labels for the entire training dataset are contained in one tensor which has been converted from a numpy array:</p> <pre><code>numpy_label = np.zeros((614,5),dtype=np.float32) for i in range(614): numpy_label[i,label_numbers[i]-1] = 1 # Convert to tensor y_label_all = tf.convert_to_tensor(numpy_label,dtype=tf.float32) </code></pre> <p>I have a placeholder for the correct labels for each batch:</p> <pre><code>images_per_batch = 5 y_label = tf.placeholder(tf.float32,shape=[images_per_batch,5]) </code></pre> <p>During each training step, I slice the corresponding portion of <code>y_label_all</code> as <code>y_</code> and want to feed it as <code>y_label</code>:</p> <pre><code>for step in range(100): # Slice correct labels for current batch y_ = tf.slice(y_label_all,[step,0],[images_per_batch,5]) # Train _, loss_value = sess.run([train_step,loss],feed_dict={y_label:y_}) </code></pre> <p><strong>This generates the error:</strong></p> <pre><code>_, loss_value = sess.run([train_step,loss],feed_dict={y_label:y_}) File "/usr/local/lib/python2.7/dist- packages/tensorflow/python/client/session.py", line 357, in run np_val = np.array(subfeed_val, dtype=subfeed_t.dtype.as_numpy_dtype) ValueError: setting an array element with a sequence. </code></pre> <p><strong>Shape of variables <code>y_</code> and <code>y_label</code>:</strong></p> <pre class="lang-py prettyprint-override"><code>#y_: Tensor("Slice:0", shape=TensorShape([Dimension(5), Dimension(5)]), dtype=float32) #y_label: Tensor("Placeholder:0", shape=TensorShape([Dimension(5), Dimension(5)]), dtype=float32) </code></pre> <p>I don't understand what is going wrong? Apparently it is something to do with the numpy - but now that I have converted the numpy array to a tensor, does that affect anything?</p> <p>Help and insight are much appreciated. Thank you!</p>
<p>The problem is that <code>feed_dict</code> must be compatible with numpy arrays.</p> <p>Your code results in something like this:</p> <p><code>np.array(&lt;tf.Tensor 'Slice_5:0' shape=(5, 5) dtype=float32&gt;, dtype=np.float32)</code></p> <p>which fails in numpy with the cryptic error above. To fix it, you need to evaluate the Tensor to a numpy array first, something like below</p> <pre><code>for step in range(100): # Slice correct labels for current batch y_ = tf.slice(y_label_all,[step,0],[images_per_batch,5]) y0 = sess.run(y_) # Train _, loss_value = sess.run([train_step,loss],feed_dict={y_label:y0}) </code></pre>
python|arrays|numpy|tensorflow
2
441
37,013,115
How to read/traverse/slice Scipy sparse matrices (LIL, CSR, COO, DOK) faster?
<p>To manipulate Scipy matrices, typically, the built-in methods are used. But sometimes you need to read the matrix data to assign it to non-sparse data types. For the sake of demonstration I created a random LIL sparse matrix and converted it to a Numpy array (pure python data types would have made a better sense!) using different methods.</p> <pre><code>from __future__ import print_function from scipy.sparse import rand, csr_matrix, lil_matrix import numpy as np dim = 1000 lil = rand(dim, dim, density=0.01, format='lil', dtype=np.float32, random_state=0) print('number of nonzero elements:', lil.nnz) arr = np.zeros(shape=(dim,dim), dtype=float) </code></pre> <p>number of nonzero elements: 10000</p> <h3>Reading by indexing</h3> <pre><code>%%timeit -n3 for i in xrange(dim): for j in xrange(dim): arr[i,j] = lil[i,j] </code></pre> <p>3 loops, best of 3: 6.42 s per loop</p> <h3>Using the <code>nonzero()</code> method</h3> <pre><code>%%timeit -n3 nnz = lil.nonzero() # indices of nonzero values for i, j in zip(nnz[0], nnz[1]): arr[i,j] = lil[i,j] </code></pre> <p>3 loops, best of 3: 75.8 ms per loop</p> <h3>Using the built-in method to convert directly to array</h3> <p>This one is <strong>not</strong> a general solution for reading the matrix data, so it does not count as a solution.</p> <pre><code>%timeit -n3 arr = lil.toarray() </code></pre> <p>3 loops, best of 3: 7.85 ms per loop</p> <p>Reading Scipy sparse matrices with these methods is not efficient at all. Is there any faster way to read these matrices?</p>
<p>A similar question, but dealing setting sparse values, rather than just reading them:</p> <p><a href="https://stackoverflow.com/questions/35773101/efficient-incremental-sparse-matrix-in-python-scipy-numpy/35779647#35779647">Efficient incremental sparse matrix in python / scipy / numpy</a></p> <p>More on accessing values using the underlying representation</p> <p><a href="https://stackoverflow.com/questions/30386328/efficiently-select-random-non-zero-column-from-each-row-of-sparse-matrix-in-scip">Efficiently select random non-zero column from each row of sparse matrix in scipy</a></p> <p>Also</p> <p><a href="https://stackoverflow.com/questions/34010334/why-is-row-indexing-of-scipy-csr-matrices-slower-compared-to-numpy-arrays">why is row indexing of scipy csr matrices slower compared to numpy arrays</a></p> <p><a href="https://stackoverflow.com/questions/27770906/why-are-lil-matrix-and-dok-matrix-so-slow-compared-to-common-dict-of-dicts">Why are lil_matrix and dok_matrix so slow compared to common dict of dicts?</a></p> <p>Take a look at what <code>M.nonzero</code> does:</p> <pre><code> A = self.tocoo() nz_mask = A.data != 0 return (A.row[nz_mask],A.col[nz_mask]) </code></pre> <p>It converts the matrix to <code>coo</code> format and returns the <code>.row</code>, and <code>.col</code> attributes - after filtering out any 'stray' 0s in the <code>.data</code> attribute.</p> <p>So you could skip the middle man and use those attributes directly:</p> <pre><code> A = lil.tocoo() for i,j,d in zip(A.row, A.col, A.data): a[i,j] = d </code></pre> <p>This is almost as good as the <code>toarray</code>:</p> <pre><code>In [595]: %%timeit .....: aa = M.tocoo() .....: for i,j,d in zip(aa.row,aa.col,aa.data): .....: A[i,j]=d .....: 100 loops, best of 3: 14.3 ms per loop In [596]: timeit arr=M.toarray() 100 loops, best of 3: 12.3 ms per loop </code></pre> <p>But if your target is really an array, you don't need to iterate</p> <pre><code>In [603]: %%timeit .....: A=np.empty(M.shape,M.dtype) .....: aa=M.tocoo() .....: A[aa.row,aa.col]=aa.data .....: 100 loops, best of 3: 8.22 ms per loop </code></pre> <p>My times for @Thoran's 2 methods are:</p> <pre><code>100 loops, best of 3: 5.81 ms per loop 100 loops, best of 3: 17.9 ms per loop </code></pre> <p>Same ballpark of times.</p>
python|numpy|scipy
2
442
36,837,435
Delete column from a numpy structured array (list of tuples in the array)?
<p>I use an external library function which returns a numpy structured array.</p> <pre><code>cities_array &gt;&gt;&gt; array([ (1, [-122.46818353792992, 48.74387985436505], u'05280', u'Bellingham', u'53', u'Washington', u'5305280', u'city', u'N', -99, 52179), (2, [-109.67985528815007, 48.54381826401885], u'35050', u'Havre', u'30', u'Montana', u'3035050', u'city', u'N', 2494, 10201), (3, [-122.63068540357023, 48.49221584868184], u'01990', u'Anacortes', u'53', u'Washington', u'5301990', u'city', u'N', -99, 11451), ..., (3147, [-156.45657614262274, 20.870633142444376], u'22700', u'Kahului', u'15', u'Hawaii', u'1522700', u'census designated place', u'N', 7, 16889), (3148, [-156.45038252004554, 20.76059218396], u'36500', u'Kihei', u'15', u'Hawaii', u'1536500', u'census designated place', u'N', -99, 11107), (3149, [-155.08472452266503, 19.693112205773275], u'14650', u'Hilo', u'15', u'Hawaii', u'1514650', u'census designated place', u'N', 38, 37808)], dtype=[('ID', '&lt;i4'), ('Shape', '&lt;f8', (2,)), ('CITY_FIPS', '&lt;U5'), ('CITY_NAME', '&lt;U40'), ('STATE_FIPS', '&lt;U2'), ('STATE_NAME', '&lt;U25'), ('STATE_CITY', '&lt;U7'), ('TYPE', '&lt;U25'), ('CAPITAL', '&lt;U1'), ('ELEVATION', '&lt;i4'), ('POP1990', '&lt;i4')]) </code></pre> <p>The <code>cities_array</code> is of type <code>&lt;type 'numpy.ndarray'&gt;</code>.</p> <p>I am able to access individual columns of the array:</p> <pre><code>cities_array[['ID','CITY_NAME']] &gt;&gt;&gt; array([(1, u'Bellingham'), (2, u'Havre'), (3, u'Anacortes'), ..., (3147, u'Kahului'), (3148, u'Kihei'), (3149, u'Hilo')], dtype=[('ID', '&lt;i4'), ('CITY_NAME', '&lt;U40')]) </code></pre> <p>Now I want to delete the first column, <code>ID</code>. The <a href="http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.delete.html" rel="nofollow noreferrer">help</a> and <a href="https://stackoverflow.com/questions/16632568/remove-a-specific-column-in-numpy">SO questions</a> say it should be <code>numpy.delete</code>.</p> <p>When running this: <code>numpy.delete(cities_array,cities_array['ID'],1)</code> I get the error message: </p> <pre><code>...in delete N = arr.shape[axis] IndexError: tuple index out of range </code></pre> <p>What am I doing wrong? Should I post-process the cities_array to be able to work with the array?</p> <p>I am on Python 2.7.10 and numpy 1.11.0</p>
<p>I think that this should work:</p> <pre><code>def delete_colum(array, *args): filtered = [x for x in array.dtype.names if x not in args] return array[filtered] </code></pre> <p>Example with array:</p> <pre><code>a Out[9]: array([(1, [-122.46818353792992, 48.74387985436505])], dtype=[('ID', '&lt;i4'), ('Shape', '&lt;f8', (2,))]) delete_colum(a,'ID') Out[11]: array([([-122.46818353792992, 48.74387985436505],)], dtype=[('Shape', '&lt;f8', (2,))]) </code></pre>
python|arrays|python-2.7|numpy
2
443
54,934,345
Merge two numpy arrays into a list of lists of tuples
<p>I haven't been able to figure this out. Thanks for any help:</p> <p>Have:</p> <pre><code>&gt;&gt;&gt; x = np.array([[1,2],[5,6]]) &gt;&gt;&gt; x array([[1, 2], [5, 6]]) &gt;&gt;&gt; y = np.array([[3,4],[7,8]]) &gt;&gt;&gt; y array([[3, 4], [7, 8]]) </code></pre> <p>Want:</p> <pre><code>&gt;&gt;&gt; z = [[(1,2),(3,4)],[(5,6),(7,8)]] &gt;&gt;&gt; z [[(1, 2), (3, 4)], [(5, 6), (7, 8)]] </code></pre>
<p>Try this:</p> <pre><code>x_z = map(tuple,x) y_z = map(tuple,y) [list(i) for i in zip(x_z, y_z)] </code></pre> <p>Output:</p> <pre><code>[[(1, 2), (3, 4)], [(5, 6), (7, 8)]] </code></pre>
python|numpy
3
444
54,718,657
Insert dict into dataframe with loop
<p>Fetching data from an API with a for loop, but only the last row is showing. If I put a <code>print</code> statement instead of <code>d=</code>, I get all records for some reason. How to populate a <code>Dataframe</code> with all values?</p> <p>I tried with a for loop and with append but keep getting wrong results.</p> <pre><code>for x in elements: url = "https://my/api/v2/item/" + str(x["number"]) + "/" get_data = requests.get(url) get_data_json = get_data.json() d = {'id': [x["enumber"]], 'name': [x["value1"]["name"]], 'adress': [value2["adress"]], 'stats': [get_data_json["stats"][5]["rating"]] } df = pd.DataFrame(data=d) df.head() </code></pre> <p>Result:</p> <pre><code>id name order adress rating </code></pre>
<p>Put all your data into a list of dictionaries, then convert to a dataframe at the very end</p> <p>At the top of your code write:</p> <pre><code>all_data = [] </code></pre> <p>Then in your loop, after d = {...}, write</p> <pre><code>all_data.append(d) </code></pre> <p>Finally at the end (after the loop has finished)</p> <pre><code>df = pd.DataFrame(all_data) </code></pre>
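<p>Putting those pieces together with the loop from the question (sketch only — the field names simply mirror the question's snippet, including its undefined <code>value2</code>):</p> <pre><code>import requests
import pandas as pd

all_data = []
for x in elements:
    url = "https://my/api/v2/item/" + str(x["number"]) + "/"
    get_data_json = requests.get(url).json()
    d = {'id': x["enumber"],
         'name': x["value1"]["name"],
         'adress': value2["adress"],
         'stats': get_data_json["stats"][5]["rating"]}
    all_data.append(d)

df = pd.DataFrame(all_data)
df.head()
</code></pre>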
python|pandas
3
445
54,703,606
Feeding vectorized data to keras
<p>I am working on using some <code>name:gender</code> data to build and train a model that could predict the gender. I am trying the basics as I read about ML and probably got many things wrong. I haven't yet learnt how to generate and feed all the features that I want the network to use in its training. At this point, I am trying to prepare my data and have keras accept it for training.</p> <p>I am trying to build a dictionary or chars in the names and feed each vectorized name into the model:</p> <pre class="lang-py prettyprint-override"><code>names_frame = pd.DataFrame(list(cm.Name.objects.all().values())).drop('id', axis=1) names_frame['name'] = names_frame['name'].str.lower() names_frame['gender'] = names_frame['gender'].replace('Male',0).replace('Female', 1) names_list = names_frame['name'].values names_dict = list(enumerate(set(list(reduce(lambda x, y: x + y, names_list))))) names_frame['vectorized'] = names_frame['name'].apply(vectorize, args=(names_dict,)) names_frame.sample() </code></pre> <p>I end up with this:</p> <pre><code> gender gender_count name vectorized 20129 1 276 meena [1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, ... </code></pre> <p>Then I build the model and try to train it:</p> <pre class="lang-py prettyprint-override"><code>X = names_frame['vectorized'] Y = names_frame['gender'] model = Sequential() model.add(Dense(32, input_dim=1, activation='relu')) model.add(Dense(8, activation='relu')) model.add(Dense(1, activation='sigmoid')) model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) model.fit(X, Y, epochs=150, batch_size=10) </code></pre> <p>And get the following exception:</p> <pre><code>ValueError: setting an array element with a sequence. </code></pre> <p>Both <code>names_frame['gender'].shape</code> and <code>names_frame['vectorized'].shape</code> are <code>(34325,)</code></p> <p>Basically, I am trying to feed it the vector and the gender classifier, but looks like something is not right with the input format? <code>X</code> is <code>pandas.Series</code> - I tried converting it to <code>np.array</code> but this didn't help.</p> <p>The <code>input_dim</code> parameter denotes the number of input elements I am giving the network to deal with. I have <code>1</code> since I am trying to give it an array of values. Should I be giving it <code>26</code>? But when I change it to <code>26</code>, it's giving me a different exception:</p> <pre><code>ValueError: Error when checking input: expected dense_46_input to have shape (26,) but got array with shape (1,) </code></pre> <p>This is probably because I am not giving it 26 individual pandas columns I assume - do I need to convert my array to columns or unpack the array somehow?</p>
<p>A simple example:</p> <pre><code>from keras.models import Sequential from keras.layers import Dense import pandas as pd import numpy as np df = pd.DataFrame({"vectorized": [[1,0,0],[0,1,0],[0,0,1]], "gender": [1,0,1]}) # convert the inner list to numpy array # X = np.array([np.array(l) for l in df["vectorized"]]) # or use a simpler way: X = np.vstack(df["vectorized"]) Y = df["gender"].values model = Sequential() # input_dim should be X.shape[1] model.add(Dense(32, input_dim=3, activation='relu')) model.add(Dense(8, activation='relu')) model.add(Dense(1, activation='sigmoid')) model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) model.fit(X, Y, epochs=150, batch_size=10) </code></pre>
python|arrays|pandas|keras
1
446
54,765,997
creating a numpy array in a loop
<p>I want to create a numpy array by parsing a .txt file. The .txt file consists of features of iris flowers seperated by commas. every line is has one flower example with 5 data seperated with 4 commas. first 4 number is features and the last one is the name. I parse the .txt in a loop and want to append (using numpy.append probably) every lines parsed data into a numpy array called feature_table.</p> <p>heres the code;</p> <pre><code>import numpy as np iris_data = open("iris_data.txt", "r") for line in iris_data: currentline = line.split(",") #iris_data_parsed = (currentline[0] + " , " + currentline[3] + " , " + currentline[4]) #sepal_length = numpy.array(currentline[0]) #petal_width = numpy.array(currentline[3]) #iris_names = numpy.array(currentline[4]) feature_table = np.array([currentline[0]],[currentline[3]],[currentline[4]]) print (feature_table) print(feature_table.shape) </code></pre> <p>so I want to create a numpy array using only first, fourth and fifth data in every line but I can't make it work as I want to. tried reading numpy docs but couldn't understand it.</p>
<p>While the people in the comments are right in that you are not persisting your data anywhere, your problem, I assume, is incorrect np.array construction. You should enclose all of the arguments in a list like this:</p> <pre><code>feature_table = np.array([currentline[0],currentline[3],currentline[4]]) </code></pre> <p>And get rid of the redundant <code>[</code> and <code>]</code> around the individual arguments.</p> <p>See the <a href="https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.array.html#numpy.array" rel="nofollow noreferrer">official documentation</a> for more examples. Basically all of the input data needs to be grouped into a single argument, as Python will otherwise consider the extra arguments as different positional arguments.</p>
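<p>To accumulate every parsed line into one table (which seems to be the goal of the loop), one common pattern is to collect the rows in a plain list and convert once at the end — a sketch along those lines:</p> <pre><code>import numpy as np

rows = []
with open('iris_data.txt') as iris_data:
    for line in iris_data:
        currentline = line.strip().split(',')
        if len(currentline) != 5:
            continue                      # skip blank or malformed lines
        rows.append([currentline[0], currentline[3], currentline[4]])

feature_table = np.array(rows)            # shape (n_lines, 3), string dtype
print(feature_table.shape)
</code></pre>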
python|arrays|numpy
2
447
73,185,812
best way to evaluate a function over each element of a dataframe or array using pandas, numpy or others
<p>I have some time series data that requires multiplying constants by variables at time t. I have come up with 3 methods to get an answer that is correct.</p> <p>The main thing I am wondering is Q1 below. I appreciate Q2 and Q3 could be subjective, but I am mostly seeing if there is a much better method I am completely missing.</p> <ul> <li>Q1. Is there a much better way to implement this formula across a dataframe/array that I have missed (i.e. not one of these three methods in the code)? If so please let me know.</li> </ul> <p>More subjectively... I could time each method and choose one purely by the most time efficient method, I was wondering:</p> <ul> <li><p>Q2. Are any of these certain methods preferred as they are clearer / better written / use less resource / more 'Pythonic'?</p> </li> <li><p>Q3. Or is it just the case that any of these 3 are absolutely fine and it is just a preference thing? One large reason I ask is that I often hear people trying to shy away from loops...</p> </li> </ul> <p>The formula to be applied is:</p> <pre><code>ans_t = x * var1_t + (1 - x) * var2_t - y * max(0, var3_t - z) </code></pre> <p>note: _t means at time t</p> <p>Due to the time series nature of it, I could not get something like this to work:</p> <pre><code>x * df['var1'] + (1 - x) * df['var2'] - y * max(0, df['var3'] - z) </code></pre> <p>Therefore I went for the 3 methods below:</p> <pre><code># %% import numpy as np import pandas as pd # example dataframe df = pd.DataFrame({'var1': [6, 8, 11, 15, 10], 'var2': [1, 8, 2, 15, 4], 'var3': [21, 82, 22, 115, 64]}) # constants x = 0.44 y = 1.68 z = 22 # function to evaluate: ans_t = x * var1_t + (1 - x) * var2_t - y * max(0, var3_t - z) # note: _t means at time t # %% # ---- Method 1: use simple for loop ---- df['ans1'] = 0 for i in range(len(df)): df['ans1'][i] = x * df['var1'][i] + (1 - x) * df['var2'][i] - y * max(0, df['var3'][i] - z) # %% # ---- Method 2: apply a lambda function ---- def my_func(var1, var2, var3): return x * var1 + (1 - x) * var2 - y * max(0, var3 - z) df['ans2'] = df.apply(lambda x: my_func(x['var1'], x['var2'], x['var3']), axis=1) # %% # ---- Method 3: numpy vectorize ---- df['ans3'] = np.vectorize(my_func)(df['var1'], df['var2'], df['var3']) </code></pre>
<p><code>np.maximum</code> (note: not the same as <code>np.max</code>) gives a vectorized way of handling the <code>max</code> element of the formula:</p> <pre><code>df['ans_t'] = x * df['var1'] + (1 - x) * df['var2'] - y * np.maximum(0, df['var3'] - z) </code></pre> <p>after which <code>df['ans_t']</code> is:</p> <pre><code>0 3.20 1 -92.80 2 5.96 3 -141.24 4 -63.92 Name: ans_t, dtype: float64 </code></pre>
python|pandas|numpy
1
448
35,328,399
How to plot the rolling mean of stock data?
<p>I was able to plot the data using the below code:</p> <pre><code>import pandas as pd import numpy as np import matplotlib.pyplot as plt url = "http://real-chart.finance.yahoo.com/table.csv?s=YHOO&amp;a=03&amp;b=12&amp;c=2006&amp;d=01&amp;e=9&amp;f=2016&amp;g=d&amp;ignore=.csv" df = pd.read_csv(url) df.index = df["Date"] df.sort_index(inplace=True) df['Adj Close'].plot() plt.show() </code></pre> <p>But now I want to calculate the rolling mean of the data and plot that. This is what I've tried:</p> <pre><code>pd.rolling_mean(df.resample("1D", fill_method="ffill"), window=3, min_periods=1) plt.plot() </code></pre> <p>But this gives me the error:</p> <pre><code>Only valid with DatetimeIndex, TimedeltaIndex or PeriodIndex </code></pre> <p>All I want to do is plot the rolling mean of the data. Why is this happening?</p>
<p>Why don't you just use the <code>datareader</code>?</p> <pre><code>import pandas.io.data as web aapl = web.DataReader("aapl", 'yahoo', '2010-1-1')['Adj Close'] aapl.plot(title='AAPL Adj Close');pd.rolling_mean(aapl, 50).plot();pd.rolling_mean(aapl, 200).plot() </code></pre> <p><a href="https://i.stack.imgur.com/Pzgue.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Pzgue.png" alt="enter image description here"></a></p> <p>To give more control over the plotting:</p> <pre><code>aapl = web.DataReader("aapl", 'yahoo', '2010-1-1')['Adj Close'] aapl.name = 'Adj Close' aapl_50ma = pd.rolling_mean(aapl, 50) aapl_50ma.name = '50 day MA' aapl_200ma = pd.rolling_mean(aapl, 200) aapl_200ma.name = '200 day MA' aapl.plot(title='AAPL', legend=True);aapl_50ma.plot(legend=True);aapl_200ma.plot(legend=True) </code></pre> <p><a href="https://i.stack.imgur.com/sV0Kn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sV0Kn.png" alt="enter image description here"></a></p>
python|pandas|matplotlib
3
449
30,955,004
Improve Polynomial Curve Fitting using numpy/Scipy in Python Help Needed
<p>I have two NumPy arrays time and no of get requests. I need to fit this data using a function so that i could make future predictions. These data were extracted from cassandra table which stores the details of a log file. So basically the time format is epoch-time and the training variable here is get_counts.</p> <pre><code>from cassandra.cluster import Cluster import numpy as np import matplotlib.pyplot as plt from cassandra.query import panda_factory session = Cluster(contact_points=['127.0.0.1'], port=9042).connect(keyspace='ASIA_KS') session.row_factory = panda_factory df = session.execute("SELECT epoch_time, get_counts FROM ASIA_TRAFFIC") .sort(columns=['epoch_time','get_counts'], ascending=[1,0]) time = np.array([x[1] for x in enumerate(df['epoch_time'])]) get = np.array([x[1] for x in enumerate(df['get_counts'])]) plt.title('Trend') plt.plot(time, byte,'o') plt.show() </code></pre> <p>The data is as follows: there are around 1000 pairs of data </p> <pre><code>time -&gt; [1391193000 1391193060 1391193120 ..., 1391279280 1391279340 1391279400 1391279460] get -&gt; [577 380 430 ...,250 275 365 15] </code></pre> <p>Plot image (<a href="https://drive.google.com/file/d/0B-r3Ym7u_hsKYzhRSXprWTdqX00/view?usp=sharing" rel="nofollow noreferrer">full size here</a>): <img src="https://i.stack.imgur.com/7RUmx.png" alt="Plot image"></p> <p>Can someone please help me in providing a function so that i could properly fit in the data? I am new to python.</p> <p>EDIT *</p> <pre><code>fit = np.polyfit(time, get, 3) yp = np.poly1d(fit) plt.plot(time, yp(time), 'r--', time, get, 'b.') plt.xlabel('Time') plt.ylabel('Number of Get requests') plt.title('Trend') plt.xlim([time[0]-10000, time[-1]+10000]) plt.ylim(0, 2000) plt.show() print yp(time[1400]) </code></pre> <p>the fit curve looks like this:<br> <a href="https://drive.google.com/file/d/0B-r3Ym7u_hsKUTF1OFVqRWpEN2M/view?usp=sharing" rel="nofollow noreferrer">https://drive.google.com/file/d/0B-r3Ym7u_hsKUTF1OFVqRWpEN2M/view?usp=sharing</a></p> <p>However at the later part of the curve the value of y becomes (-ve) which is wrong. The curve must change its slope back to (+ve) somewhere in between. Can anyone please suggest me how to go about it. Help will be much appreciated. </p>
<p>You could try:</p> <pre><code>time = np.array([x[1] for x in enumerate(df['epoch_time'])]) byte = np.array([x[1] for x in enumerate(df['byte_transfer'])]) fit = np.polyfit(time, byte, n) # step up n value here, # where n is the degree of the polynomial yp = np.poly1d(fit) print yp # displays function in cx^n +- cx^n-1...c format plt.plot(time, yp(time), '-') plt.xlabel('Time') plt.ylabel('Bytes Transferred') plt.title('Trend') plt.plot(time, byte,'o') plt.show() </code></pre> <p>I'm new to Numpy and curve fitting as well, but this is how I've been attempting to do it.</p>
python|numpy|matplotlib|curve-fitting|non-linear-regression
1
450
30,965,864
Joblib parallel write to "shared" numpy sparse matrix
<p>Im trying to compute number of shared neighbors for each node of a very big graph (~1m nodes). Using Joblib Im trying to run it in parallel. But Im worrying about parallel writes to sparse matrix, which supposed to keep all data. Will this piece of code produce consistent results?</p> <pre><code>vNum = 1259084 NN_Matrix = csc_matrix((vNum, vNum), dtype=np.int8) def nn_calc_parallel(node_id = None): i, j = np.unravel_index(node_id, (1259084, 1259084)) NN_Matrix[i, j] = len(np.intersect1d(nx.neighbors(G, i), nx.neighbors(G,j))) num_cores = multiprocessing.cpu_count() result = Parallel(n_jobs=num_cores)(delayed(nn_calc_parallel)(i) for i in xrange(vNum**2)) </code></pre> <p>If not, can you help me to solve this?</p>
<p>I needed to do the same work; in my case it was enough to merge the matrices together into one matrix, which you can do this way:</p> <pre><code>from scipy.sparse import vstack matrixes = Parallel(n_jobs=-3)(delayed(nn_calc_parallel)(x) for x in documents) matrix = vstack(matrixes) </code></pre> <p><code>n_jobs=-3</code> means use all CPUs except 2, otherwise it might throw some memory errors.</p>
python|numpy|scipy|networkx|joblib
0
451
67,425,567
Extract values from xarray dataset using geopandas multilinestring
<p>I have a few hundred <code>geopandas</code> multilinestrings that trace along an object of interest (one line each week over a few years tracing the Gulf Stream) and I want to use those lines to extract values from a few other <code>xarray</code> datasets to know sea surface temperature, chlorophyll-a, and other variables along this path each week.</p> <p>I'm unsure though how exactly to use these <code>geopandas</code> lines to extract values from the <code>xarray</code> datasets. I have thought about breaking them into points and grabbing the dataset values at each point but that seems a bit cumbersome. Is there any straightforward way to do this operation?</p>
<p>Breaking the lines into points and then extracting the point is quite straightforward actually!</p> <pre class="lang-py prettyprint-override"><code>import geopandas as gpd import numpy as np import shapely.geometry as sg import xarray as xr # Setup an example DataArray: y = np.arange(20.0) x = np.arange(20.0) da = xr.DataArray( data=np.random.rand(y.size, x.size), coords={&quot;y&quot;: y, &quot;x&quot;: x}, dims=[&quot;y&quot;, &quot;x&quot;], ) # Setup an example geodataframe: gdf = gpd.GeoDataFrame( geometry=[ sg.LineString([(0.0, 0.0), (5.0, 5.0)]), sg.LineString([(10.0, 10.0), (15.0, 15.0)]), ] ) # Get the centroids, and create the indexers for the DataArray: centroids = gdf.centroid x_indexer = xr.DataArray(centroids.x, dims=[&quot;point&quot;]) y_indexer = xr.DataArray(centroids.y, dims=[&quot;point&quot;]) # Grab the results: da.sel(x=x_indexer, y=y_indexer, method=&quot;nearest&quot;) </code></pre> <pre class="lang-py prettyprint-override"><code>&lt;xarray.DataArray (point: 2)&gt; array([0.80121949, 0.34728138]) Coordinates: y (point) float64 3.0 13.0 x (point) float64 3.0 13.0 * point (point) int64 0 1 </code></pre> <p>The main thing is to decide on which point you'd like to sample, or how many points, etc.</p> <p>Note that the geometry objects in the geodataframe also have an interpolation method, if you'd like draw values at specific points along the trajectory:</p> <p><a href="https://shapely.readthedocs.io/en/stable/manual.html#object.interpolate" rel="nofollow noreferrer">https://shapely.readthedocs.io/en/stable/manual.html#object.interpolate</a></p> <p>In such a case, <code>.apply</code> can come in handy:</p> <pre class="lang-py prettyprint-override"><code>gdf.geometry.apply(lambda geom: geom.interpolate(3.0)) 0 POINT (2.12132 2.12132) 1 POINT (12.12132 12.12132) Name: geometry, dtype: geometry </code></pre>
python|geopandas|python-xarray
1
452
67,547,760
Alternatives to Python beautiful soup
<p>I wrote a few lines to get data from a financial data website.</p> <p>It simply uses <code>beautiful soup</code> to parse and <code>requests</code> to get.</p> <p>Is there any other simpler or sleeker ways of getting the same result?</p> <p>I'm just after a discussion to see what others have come up with.</p> <pre><code>from pandas import DataFrame import bs4 import requests def get_webpage(): symbols = ('ULVR','AZN','HSBC') for ii in symbols: url = 'https://uk.finance.yahoo.com/quote/' + ii + '.L/history?p=' + ii + '.L' response = requests.get(url) soup = bs4.BeautifulSoup(response.text, 'html.parser') rows = soup.find_all('tr') data = [[td.getText() for td in rows[i].find_all('td')] for i in range(len(rows))] #for i in data: # [-7:] Date # [-6:] Open # [-5:] High # [-4:] Low # [-3:] Close # [-2:] Adj Close # [-1:] Volume data = DataFrame(data) print(ii, data) if __name__ == &quot;__main__&quot;: get_webpage() </code></pre> <p>Any thoughts?</p>
<p>You can try with <code>read_html()</code> method:</p> <pre><code>symbols = ('ULVR','AZN','HSBC') df=[pd.read_html('https://uk.finance.yahoo.com/quote/' + ii + '.L/history?p=' + ii + '.L') for ii in symbols] df1=df[0][0] df2=df[1][0] df3=df[2][0] </code></pre>
python|pandas|dataframe|beautifulsoup|python-requests
1
453
67,543,737
How can I groupby on one column while sorting by another over the entire dataframe
<p>I have a dataframe that looks like this:</p> <pre><code> id total 1 50 1 0 1 0 2 100 2 0 2 0 3 75 3 0 3 0 </code></pre> <p>But I need it to sort by the <strong>total</strong> in descending order, while keeping the rows grouped by <strong>id</strong>. Like this:</p> <pre><code> id total 2 100 2 0 2 0 3 75 3 0 3 0 1 50 1 0 1 0 </code></pre> <p>I've tried some suggestions using groupby like this:</p> <pre><code>grouped = df.groupby('id').apply(lambda g: g.sort_values('total', ascending=False)) </code></pre> <p>It looks like what it's doing is grouping and sorting the <strong>id</strong> in ascending order and then sub-sorting the <strong>total</strong> within each grouped <strong>id</strong>. But I need it to sort all the rows in the <strong>total</strong> while keeping the rows grouped by <strong>id</strong></p> <p>Any suggestions would be appreciated.</p>
<h3>(1) Clarification of requirement</h3> <p>First of all, let's revisit/clarify your requirement by exploring the expected result for a more complicated data sample:</p> <pre><code> id total 0 1 100 1 1 70 2 1 68 3 1 65 4 2 100 5 2 80 6 2 50 7 3 100 8 3 75 9 3 70 </code></pre> <p>According to the main point of your requirement:</p> <blockquote> <p><strong>I need it to sort all the rows in the total while keeping the rows</strong> <strong>grouped by id</strong></p> </blockquote> <p>I would interpret it as requiring row by row comparison of the largest element in a group with the largest element of another group, if there is a tie (same value), we go on comparing the 2nd largest element of every group, and so on. This will be like the lexical ordering of a word dictionary, but in reverse order.</p> <p>For this interpretation, I would expect the sorted outcome to be:</p> <pre><code> id total 0 2 100 1 2 80 2 2 50 3 3 100 4 3 75 5 3 70 6 1 100 7 1 70 8 1 68 9 1 65 </code></pre> <p>Here, although the last group in the sorted result (with <code>id</code> <code>1</code>) has one more element and the total sum of the group is the largest among all groups, it is still sorted at the last since it has the first largest element ties with those of other 2 groups while its second largest element is the least among the 2nd largest elements of all groups. Hence, sorted at the last.</p> <h3>(2) Approaching the solution</h3> <p>To ensure the solution works for the sample data presented in any order, let's sort the data first. You can freely skip this step if your data of <code>total</code> column are already sorted in descending order.</p> <p>Let's use your sample data (but <strong>shuffled</strong> in row ordering):</p> <pre><code> id total 0 3 0 1 3 75 2 2 100 3 2 0 4 1 0 5 1 0 6 1 50 7 2 0 8 3 0 </code></pre> <p>Then, sort it according to your ordering:</p> <pre><code>df1 = df.sort_values(['id', 'total'], ascending=[True, False]) id total 6 1 50 4 1 0 5 1 0 2 2 100 3 2 0 7 2 0 1 3 75 0 3 0 8 3 0 </code></pre> <p><strong>Applying the solution to your sample data:</strong></p> <pre><code>df_sorted = (df1.set_index('id') .loc[ np.argsort(df1.groupby('id')['total'].agg(list)) .sort_values(ascending=False) .index ] ).reset_index() print(df_sorted) id total 0 2 100 1 2 0 2 2 0 3 3 75 4 3 0 5 3 0 6 1 50 7 1 0 8 1 0 </code></pre> <p>This is your expected outcome.</p> <p><strong>Applying the solution to the more complicated data:</strong></p> <p>Let's have the complicated data also shuffled:</p> <pre><code> id total 0 1 65 1 1 70 2 2 100 3 2 50 4 3 100 5 3 75 6 1 68 7 1 100 8 2 80 9 3 70 </code></pre> <p>Then, sort it according to your ordering:</p> <pre><code>df1 = df.sort_values(['id', 'total'], ascending=[True, False]) id total 7 1 100 1 1 70 6 1 68 0 1 65 2 2 100 8 2 80 3 2 50 4 3 100 5 3 75 9 3 70 </code></pre> <p>Then, apply the solution:</p> <pre><code>df_sorted = (df1.set_index('id') .loc[ np.argsort(df1.groupby('id')['total'].agg(list)) .sort_values(ascending=False) .index ] ).reset_index() print(df_sorted) id total 0 2 100 1 2 80 2 2 50 3 3 100 4 3 75 5 3 70 6 1 100 7 1 70 8 1 68 9 1 65 </code></pre> <p>Here, we got the expected outcome shown in the clarification of requirement.</p> <h3>(3) Explanation of approach</h3> <p>Let's do an anatomy of steps in detail:</p> <p><strong>(1) First of all, we perform a <code>.groupby()</code> on <code>id</code> and take column <code>total</code> to aggregate as lists:</strong></p> <pre><code>df1.groupby('id')['total'].agg(list) id 1 [100, 70, 68, 65] 2 [100, 80, 
50] 3 [100, 75, 70] Name: total, dtype: object </code></pre> <p>We got lists for each group with the list entries sorted in descending order. This sorting sequence was attributed to the sorting step before our main processing.</p> <p><strong>(2) Then, we use <a href="https://numpy.org/doc/stable/reference/generated/numpy.argsort.html" rel="nofollow noreferrer"><code>np.argsort()</code></a> on this aggregated series to get <code>the indices that would sort an array</code>:</strong></p> <pre><code>np.argsort(df1.groupby('id')['total'].agg(list)) id 1 0 2 2 3 1 Name: total, dtype: int64 </code></pre> <p>With the help of <a href="https://numpy.org/doc/stable/reference/generated/numpy.argsort.html" rel="nofollow noreferrer"><code>np.argsort()</code></a>, we obtained the sequencing that would sort the lists in last step. As we want to get the groups sorted in <strong>descending order</strong>, we further sort this outcome in descending order as follows:</p> <pre><code>np.argsort(df1.groupby('id')['total'].agg(list)).sort_values(ascending=False) id 2 2 3 1 1 0 Name: total, dtype: int64 </code></pre> <p>Now, we already arrived at <strong>the correct sequencing of the groups</strong> with the <code>id</code> sequence: <code>2 3 1</code>. The rest of the steps are to take this sequence back to the whole dataframe and display the groups in correct sequence.</p> <p><strong>(3) Get back the correct indices sequence for presentation of the whole groups sequencing:</strong></p> <p>We get the indices of <code>id</code> groups by <code>.index</code> and then present to the whole dataframe by:</p> <pre><code>df1.set_index('id').loc[] </code></pre> <p>As the index we got from previous step was the <code>id</code> index, we do a <code>.set_index()</code> on <code>id</code> in order to match the index. Further with <code>.loc</code>, we get:</p> <pre><code> total id 2 100 2 80 2 50 3 100 3 75 3 70 1 100 1 70 1 68 1 65 </code></pre> <p>Here <code>id</code> is the row index. To restore <code>id</code> from row index to data column, we do a final step of <code>.reset_index()</code> to get the final outcome:</p> <pre><code> id total 0 2 100 1 2 80 2 2 50 3 3 100 4 3 75 5 3 70 6 1 100 7 1 70 8 1 68 9 1 65 </code></pre>
python|pandas
1
454
67,391,750
Naming objects in a python for loop for bootstrapping
<p>I have a Pandas dataframe (<code>df</code>) with columns for each of the measurements taken on the individuals. There is one row per individual:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>col_1</th> <th>col_2</th> <th>col_3</th> <th>col_4</th> </tr> </thead> <tbody> <tr> <td>1.0</td> <td>2.0</td> <td>4.0</td> <td>5.0</td> </tr> <tr> <td>3.0</td> <td>4.0</td> <td>2.0</td> <td>5.0</td> </tr> <tr> <td>4.0</td> <td>5.0</td> <td>1.0</td> <td>3.0</td> </tr> <tr> <td>5.0</td> <td>5.0</td> <td>4.0</td> <td>4.0</td> </tr> </tbody> </table> </div> <p>My goal is to bootstrap each column to calculate distributions of means. For example, I want to draw a sample (with replacement) each column n times, calculate the mean, and append that mean to a list. Here is the code for one column:</p> <pre><code>mean=[] for i in np.arange(10000): sample = np.random.choice(df['col_1'],size=len(df)) mean.append(sample.mean()) mean_1 = mean </code></pre> <p>I have done this manually and I am trying to figure out how to loop through the columns. I got stuck in creating object names using the loop iterator. I tried to create objects using the value of <code>i</code>, but it won't allow me to assign values to it:</p> <pre><code>for col in df.columns: mean=[] for i in np.arange(10000): sample = np.random.choice(df[col],size=len(df)) mean.append(sample.mean()) # Need to specify which column mean I am saving: mean_1, mean_2, etc. col+'_mean'= mean </code></pre> <p>This fails because I can't assign values to the combination of <code>col + '_mean'</code>. I know there is a straightforward way to do this, but I'm out of ideas.</p>
<p>You could use a dictionary to store each of your iterations per column and then, if you want to, turn it into a dataframe:</p> <pre><code>bootstrapped = {} for col in df: #You don't need to specify .columns mean=[] for i in np.arange(10000): sample = np.random.choice(df[col],size=len(df)) mean.append(sample.mean()) # Need to specify which column mean I am saving: mean_1, mean_2, etc. bootstrapped[col+'_mean'] = mean </code></pre> <p>Then you might use the following to turn the bootstrapped data into a df:</p> <pre><code>bootstrapped_df = pd.DataFrame(bootstrapped) </code></pre>
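<p>If plain NumPy is acceptable (an assumption, not something the question requires), the inner 10,000-iteration loop can also be drawn in one shot per column; a rough sketch:</p> <pre><code>import numpy as np
import pandas as pd

n_boot = 10000
rng = np.random.default_rng()

bootstrapped = {
    col + '_mean': rng.choice(df[col].to_numpy(), size=(n_boot, len(df))).mean(axis=1)
    for col in df
}
bootstrapped_df = pd.DataFrame(bootstrapped)
</code></pre>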
python|pandas|loops
1
455
67,531,914
Choice of GPU: tensorflow-directml or multi-gpu
<p>I'm training a model with tensorflow on a Windows PC, but the training is low so I'm trying to configure tensorflow to use a GPU. I installed tensorflow-directml (in a conda environment with python 3.6) because my GPU is an AMD Radeon GPU. With this simple code</p> <pre><code>import tensorflow as tf tf.test.is_gpu_available() </code></pre> <p>I receive this ouput</p> <blockquote> <p>2021-05-14 11:02:30.113880: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 2021-05-14 11:02:30.121580: I tensorflow/stream_executor/platform/default/dso_loader.cc:99] Successfully opened dynamic library C:\Users\v.rocca\anaconda3\envs\tfradeon\lib\site-packages\tensorflow_core\python/directml.adbd007a01a52364381a1c71ebb6fa1b2389c88d.dll 2021-05-14 11:02:30.765470: I tensorflow/core/common_runtime/dml/dml_device_cache.cc:249] DirectML device enumeration: found 2 compatible adapters. 2021-05-14 11:02:30.984834: I tensorflow/core/common_runtime/dml/dml_device_cache.cc:185] DirectML: creating device on adapter 0 (Radeon (TM) 530) 2021-05-14 11:02:31.150992: I tensorflow/stream_executor/platform/default/dso_loader.cc:99] Successfully opened dynamic library Kernel32.dll 2021-05-14 11:02:31.174716: I tensorflow/core/common_runtime/dml/dml_device_cache.cc:185] DirectML: creating device on adapter 1 (Intel(R) UHD Graphics 620) True</p> </blockquote> <p>So tensorflow get the integrated GPU Intel instead the Radeon GPU. If I disable the Intel GPU from the Manage Hardware I receive in the output the correct GPU</p> <blockquote> <p>2021-05-14 10:47:09.171568: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 2021-05-14 10:47:09.176828: I tensorflow/stream_executor/platform/default/dso_loader.cc:99] Successfully opened dynamic library C:\Users\v.rocca\anaconda3\envs\tfradeon\lib\site-packages\tensorflow_core\python/directml.adbd007a01a52364381a1c71ebb6fa1b2389c88d.dll 2021-05-14 10:47:09.421265: I tensorflow/core/common_runtime/dml/dml_device_cache.cc:249] DirectML device enumeration: found 1 compatible adapters. 2021-05-14 10:47:09.626567: I tensorflow/core/common_runtime/dml/dml_device_cache.cc:185] DirectML: creating device on adapter 0 (Radeon (TM) 530)</p> </blockquote> <p>I don't want to disable the Intel GPU every time so this is my question. Is it possible to choice which GPU I want to use? Or Is it possible to use both GPUs? Thanks</p>
<p>From <a href="https://docs.microsoft.com/en-us/windows/win32/direct3d12/gpu-faq" rel="nofollow noreferrer">Microsoft</a>:</p> <pre><code>gpu_config = tf.GPUOptions() gpu_config.visible_device_list = &quot;1&quot; session = tf.Session(config=tf.ConfigProto(gpu_options=gpu_config)) </code></pre>
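<p>A sketch of how that snippet might be wired into a script (the adapter index <code>'0'</code> is an assumption taken from the enumeration order in your log, where adapter 0 was the Radeon; whether DirectML honours <code>visible_device_list</code> exactly like CUDA does is something to verify):</p> <pre><code>import tensorflow as tf

gpu_config = tf.GPUOptions()
gpu_config.visible_device_list = '0'  # adapter 0 = Radeon in the log above (assumption)
session = tf.Session(config=tf.ConfigProto(gpu_options=gpu_config))

# if you train through Keras, register the session with the backend
tf.compat.v1.keras.backend.set_session(session)
</code></pre>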
python|tensorflow|anaconda|amd
0
456
34,862,336
Performance of str.strip for Pandas
<p>I thought the third option was supposed to be the fastest way to strip whitespaces? Can someone give me some general rules that I should be applying when working with large data sets? I normally use .astype(str) but clearly that is not worthwhile for columns which I know are objects already.</p> <pre><code>%%timeit fcr['id'] = fcr['id'].astype(str).map(str.strip) 10 loops, best of 3: 47.8 ms per loop %%timeit fcr['id'] = fcr['id'].map(str.strip) 10 loops, best of 3: 25.2 ms per loop %%timeit fcr['id'] = fcr['id'].str.strip(' ') 10 loops, best of 3: 55.5 ms per loop </code></pre>
<p>Let's first look at the difference between <code>.map(str.strip)</code> and <code>.str.strip()</code> (second and third case).<br> Therefore, you need to understand what <code>str.strip()</code> does under the hood: it actually does some <code>map(str.strip)</code>, but using a custom <code>map</code> function that will handle missing values.<br> So given that <code>.str.strip()</code> <strong>does more</strong> than <code>.map(str.strip)</code>, it is to be expected that this method will always be slower (and as you have shown, in your case 2x slower).</p> <p>Using the <code>.str.strip()</code> method has it advantages in the automatic NaN handling (or handling of other non-string values). Suppose the 'id' column contains a NaN value:</p> <pre><code>In [4]: df['id'].map(str.strip) ... TypeError: descriptor 'strip' requires a 'str' object but received a 'float' In [5]: df['id'].str.strip() Out[5]: 0 NaN 1 as asd 2 asdsa asdasdas ... 29997 asds 29998 as asd 29999 asdsa asdasdas Name: id, dtype: object </code></pre> <p>As @EdChum points out, you can indeed use <code>map(str.strip)</code> <em>if</em> you are sure you don't have any NaN values if this performance difference is important.</p> <hr> <p>Coming back to the other difference of <code>fcr['id'].astype(str).map(str.strip)</code>. If you already know that the values inside the series are strings, doing the <code>astype(str)</code> call is of course superfluous. And it is this call that explains the difference:</p> <pre><code>In [74]: %timeit df['id'].astype(str).map(str.strip) 100 loops, best of 3: 10.5 ms per loop In [75]: %timeit df['id'].astype(str) 100 loops, best of 3: 5.25 ms per loop In [76]: %timeit df['id'].map(str.strip) 100 loops, best of 3: 5.18 ms per loop </code></pre> <p>Note that in the case you have non-string values (NaN, numeric values, ...), using <code>.str.strip()</code> and <code>.astype(str).map(str)</code> will <em>not</em> yield the same result:</p> <pre><code>In [11]: s = pd.Series([' a', 10]) In [12]: s.astype(str).map(str.strip) Out[12]: 0 a 1 10 dtype: object In [13]: s.str.strip() Out[13]: 0 a 1 NaN dtype: object </code></pre> <p>As you can see, <code>.str.strip()</code> will return non-string values as NaN, instead of converting them to strings.</p>
python-3.x|pandas
14
457
60,104,102
Custom max_pool layer: ValueError: The channel dimension of the inputs should be defined. Found `None`
<p>I am working on <strong>tensorflow2</strong> and I am trying to implement Max unpool with indices to implement SegNet. </p> <p>When I run it I get the following problem. I am defining the def <code>MaxUnpool2D</code> and then calling it in the model. I suppose that the problem is given by the fact that updates and mask have got shape (None, H,W,ch). </p> <pre><code>def MaxUnpooling2D(updates, mask): size = 2 mask = tf.cast(mask, 'int32') input_shape = tf.shape(updates, out_type='int32') # calculation new shape output_shape = ( input_shape[0], input_shape[1]*size, input_shape[2]*size, input_shape[3]) # calculation indices for batch, height, width and feature maps one_like_mask = tf.ones_like(mask, dtype='int32') batch_shape = tf.concat( [[input_shape[0]], [1], [1], [1]], axis=0) batch_range = tf.reshape( tf.range(output_shape[0], dtype='int32'), shape=batch_shape) b = one_like_mask * batch_range y = mask // (output_shape[2] * output_shape[3]) x = (mask // output_shape[3]) % output_shape[2] feature_range = tf.range(output_shape[3], dtype='int32') f = one_like_mask * feature_range updates_size = tf.size(updates) indices = K.transpose(K.reshape( tf.stack([b, y, x, f]), [4, updates_size])) values = tf.reshape(updates, [updates_size]) return tf.scatter_nd(indices, values, output_shape) def segnet_conv( inputs, kernel_size=3, kernel_initializer='glorot_uniform', batch_norm = False, **kwargs): conv1 = Conv2D( filters=64, kernel_size=kernel_size, padding='same', activation=None, kernel_initializer=kernel_initializer, name='conv_1' )(inputs) if batch_norm: conv1 = BatchNormalization(name='bn_1')(conv1) conv1 = LeakyReLU(alpha=0.3, name='activation_1')(conv1) conv1 = Conv2D( filters=64, kernel_size=kernel_size, padding='same', activation=None, kernel_initializer=kernel_initializer, name='conv_2' )(conv1) if batch_norm: conv1 = BatchNormalization(name='bn_2')(conv1) conv1 = LeakyReLU(alpha=0.3, name='activation_2')(conv1) pool1, mask1 = tf.nn.max_pool_with_argmax( input=conv1, ksize=2, strides=2, padding='SAME' ) def segnet_deconv( pool1, mask1, kernel_size=3, kernel_initializer='glorot_uniform', batch_norm = False, **kwargs ): dec = MaxUnpooling2D(pool5, mask5) dec = Conv2D( filters=512, kernel_size=kernel_size, padding='same', activation=None, kernel_initializer=kernel_initializer, name='upconv_13' )(dec) def classifier( dec, ch_out=2, kernel_size=3, final_activation=None, batch_norm = False, **kwargs ): dec = Conv2D( filters=64, kernel_size=kernel_size, activation='relu', padding='same', name='dec_out1' )(dec) @tf.function def segnet( inputs, ch_out=2, kernel_size=3, kernel_initializer='glorot_uniform', final_activation=None, batch_norm = False, **kwargs ): pool5, mask1, mask2, mask3, mask4, mask5 = segnet_conv( inputs, kernel_size=3, kernel_initializer='glorot_uniform', batch_norm = False ) dec = segnet_deconv( pool5, mask1, mask2, mask3, mask4, mask5, kernel_size=kernel_size, kernel_initializer=kernel_initializer, batch_norm = batch_norm ) output = classifier( dec, ch_out=2, kernel_size=3, final_activation=None, batch_norm = batch_norm ) return output inputs = Input(shape=(*params['image_size'], params['num_channels']), name='input') outputs = segnet(inputs, n_labels=2, kernel=3, pool_size=(2, 2), output_mode=None) # we define our U-Net to output logits model = Model(inputs, outputs) </code></pre> <p>Can you please help me with this problem? </p>
<p>I have solved the problem. If someone will need here is the code for MaxUnpooling2D:</p> <pre><code>def MaxUnpooling2D(pool, ind, output_shape, batch_size, name=None): &quot;&quot;&quot; Unpooling layer after max_pool_with_argmax. Args: pool: max pooled output tensor ind: argmax indices ksize: ksize is the same as for the pool Return: unpool: unpooling tensor :param batch_size: &quot;&quot;&quot; with tf.compat.v1.variable_scope(name): pool_ = tf.reshape(pool, [-1]) batch_range = tf.reshape(tf.range(batch_size, dtype=ind.dtype), [tf.shape(pool)[0], 1, 1, 1]) b = tf.ones_like(ind) * batch_range b = tf.reshape(b, [-1, 1]) ind_ = tf.reshape(ind, [-1, 1]) ind_ = tf.concat([b, ind_], 1) ret = tf.scatter_nd(ind_, pool_, shape=[batch_size, output_shape[1] * output_shape[2] * output_shape[3]]) # the reason that we use tf.scatter_nd: if we use tf.sparse_tensor_to_dense, then the gradient is None, which will cut off the network. # But if we use tf.scatter_nd, the gradients for all the trainable variables will be tensors, instead of None. # The usage for tf.scatter_nd is that: create a new tensor by applying sparse UPDATES(which is the pooling value) to individual values of slices within a # zero tensor of given shape (FLAT_OUTPUT_SHAPE) according to the indices (ind_). If we ues the orignal code, the only thing we need to change is: changeing # from tf.sparse_tensor_to_dense(sparse_tensor) to tf.sparse_add(tf.zeros((output_sahpe)),sparse_tensor) which will give us the gradients!!! ret = tf.reshape(ret, [tf.shape(pool)[0], output_shape[1], output_shape[2], output_shape[3]]) return ret </code></pre>
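<p>For anyone trying the function above, here is a minimal usage sketch (the shapes, the demo tensor and the <code>name</code> argument are assumptions for illustration, not part of the original SegNet code):</p> <pre><code>import tensorflow as tf

x = tf.random.normal((4, 32, 32, 16))  # a fake batch of encoder feature maps
pool, ind = tf.nn.max_pool_with_argmax(x, ksize=2, strides=2, padding='SAME')

unpooled = MaxUnpooling2D(pool, ind,
                          output_shape=(4, 32, 32, 16),
                          batch_size=4,
                          name='unpool_demo')
print(unpooled.shape)  # (4, 32, 32, 16)
</code></pre>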
function|image-segmentation|tensorflow2.0|max-pooling
1
458
65,128,884
Change and swap values in the row and column by conditions
<p>I have the following pandas dataframe:</p> <p><a href="https://i.stack.imgur.com/WNJYd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WNJYd.png" alt="enter image description here" /></a></p> <p>and I want to check if the value in the column <code>'A start'</code> is negative. If so, then swap the values in columns <code>'start'</code> and <code>'end'</code> and in columns <code>'A start'</code> and <code>'A end'</code> in the rows where <code>'A start'</code> has a negative value. So the result should be:</p> <p><a href="https://i.stack.imgur.com/Gz7Ch.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Gz7Ch.png" alt="enter image description here" /></a></p> <p>I tried to solve it with <code>where</code>, but it doesn't work.</p> <pre><code>df[['A start','A end']] = df[['A end','A start']].where(df['A start'] &lt; 0 , df[['A start','A end']].values) </code></pre> <p>I'm using Python 3.8.</p> <p>Thank you very much for your help.</p> <p>P.S.: the existing question here on the forum,</p> <pre><code>What is correct syntax to swap column values for selected rows in a pandas data frame using just one line? </code></pre> <p>unfortunately doesn't help.</p>
<p>This code swaps the values between the start and end columns whenever the A start value is lower than zero:</p> <pre><code>for i, row in df.iterrows(): if row['A start'] &lt; 0: start_value = row['start'] end_value = row['end'] df.iloc[i, df.columns.get_loc('start')] = end_value df.iloc[i, df.columns.get_loc('end')] = start_value </code></pre> <p><code>i</code> is the row index and <code>row</code> is a Series holding that row's column values.</p>
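<p>For larger frames, the same swap can usually be done without the explicit loop; a vectorised sketch assuming the column names from the question (<code>'start'</code>, <code>'end'</code>, <code>'A start'</code>, <code>'A end'</code>):</p> <pre><code>mask = df['A start'] &lt; 0

# swap the two column pairs only on the rows where 'A start' is negative
df.loc[mask, ['start', 'end']] = df.loc[mask, ['end', 'start']].to_numpy()
df.loc[mask, ['A start', 'A end']] = df.loc[mask, ['A end', 'A start']].to_numpy()
</code></pre>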
python|pandas|dataframe|conditional-statements
1
459
65,213,469
How can I convert separate text columns into rows using Python/Pandas?
<p>I am teaching myself machine learning and working on a dataset which has the columns</p> <p><a href="https://i.stack.imgur.com/NHTdg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NHTdg.png" alt="dataframe image" /></a></p> <p>There are two sentence columns, sent0 and sent1, and sol indicates whether sent0 or sent1 is against common sense. If sent0 is against common sense, there is a 0 in the sol column.</p> <p>What I want is to combine both sent0 and sent1 in one column, as shown here:</p> <p><a href="https://i.stack.imgur.com/NRUdb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NRUdb.png" alt="target dataframe" /></a></p> <p>Now if sent0 is against common sense I want to write it as 0 in the sol column, and 1 if the sentence is right. I have tried different ways but failed until now.</p>
<p>This should do what you want</p> <pre><code>df = pd.DataFrame() # this should be your original df import pandas as pd data = pd.DataFrame(columns=[&quot;id&quot;, &quot;sent&quot;, &quot;sol&quot;]) for i, row in df.iterrows(): id_ = row[&quot;id&quot;] sent0 = row[&quot;sent0&quot;] sent1 = row[&quot;sent1&quot;] sol = row[&quot;sol&quot;] # calculate solutions if sol == 0: # if sent0 is against common sense sol0 = 0 sol1 = 1 # is common sense else: sol0 = 1 # is common sense sol1 = 0 columns = [&quot;id&quot;, &quot;sent&quot;, &quot;sol&quot;] data = data.append(pd.DataFrame([[id_, sent0, sol0]], columns=columns)) data = data.append(pd.DataFrame([[id_, sent1, sol1]], columns=columns)) </code></pre>
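<p>If the row-wise <code>append</code> gets slow on a bigger dataset, a vectorised alternative with <code>melt</code> is sketched below; it assumes exactly the column names <code>id</code>, <code>sent0</code>, <code>sent1</code> and <code>sol</code> from the screenshots:</p> <pre><code>import pandas as pd

long_df = df.melt(id_vars=['id', 'sol'], value_vars=['sent0', 'sent1'],
                  var_name='which', value_name='sent')

# the original sol holds the index (0 or 1) of the nonsensical sentence;
# the new sol is 0 for that sentence and 1 for the other one
sent_idx = long_df['which'].str[-1].astype(int)
long_df['sol'] = (sent_idx != long_df['sol']).astype(int)

data = long_df[['id', 'sent', 'sol']].sort_values('id').reset_index(drop=True)
</code></pre>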
python|pandas|dataframe|text-processing
0
460
65,430,009
I have an error while importing tensorflow
<p>I'm using Python 3.6.0 and I downloaded tensorflow using <code>pip install tensorflow</code>, I tried several times to uninstall tensorflow and install another version of tensorflow but it didn't work... Which version of tensorflow is compatible for me? (I'm using now version 1.15.0)</p> <p>This is the import error:</p> <pre><code>Traceback (most recent call last): File &quot;C:\Users\User\PycharmProjects\pythonProject1\venv\lib\site-packages\tensorflow\python\pywrap_tensorflow.py&quot;, line 64, in &lt;module&gt; from tensorflow.python._pywrap_tensorflow_internal import * ImportError: DLL load failed: A dynamic link library (DLL) initialization routine failed. During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;C:\Users\User\Desktop\adsp\train.py&quot;, line 1, in &lt;module&gt; import tensorflow as tf File &quot;C:\Users\User\PycharmProjects\pythonProject1\venv\lib\site-packages\tensorflow\__init__.py&quot;, line 41, in &lt;module&gt; from tensorflow.python.tools import module_util as _module_util File &quot;C:\Users\User\PycharmProjects\pythonProject1\venv\lib\site-packages\tensorflow\python\__init__.py&quot;, line 39, in &lt;module&gt; from tensorflow.python import pywrap_tensorflow as _pywrap_tensorflow File &quot;C:\Users\User\PycharmProjects\pythonProject1\venv\lib\site-packages\tensorflow\python\pywrap_tensorflow.py&quot;, line 83, in &lt;module&gt; raise ImportError(msg) ImportError: Traceback (most recent call last): File &quot;C:\Users\User\PycharmProjects\pythonProject1\venv\lib\site-packages\tensorflow\python\pywrap_tensorflow.py&quot;, line 64, in &lt;module&gt; from tensorflow.python._pywrap_tensorflow_internal import * ImportError: DLL load failed: A dynamic link library (DLL) initialization routine failed. Failed to load the native TensorFlow runtime. See https://www.tensorflow.org/install/errors for some common reasons and solutions. Include the entire stack trace above this error message when asking for help. </code></pre>
<p>You need to install the Microsoft Visual C++ redistributable libraries:</p> <p><a href="https://support.microsoft.com/en-us/help/2977003/the-latest-supported-visual-c-downloads" rel="nofollow noreferrer">C++ redist lib</a></p>
python|python-3.x|tensorflow
0
461
50,066,318
Custom Ticks and Labels for Hexbin Colorbar in Pandas/Matplotlib
<p>How do I create custom ticks and labels for a Hexbin plot's Colorbar?</p>
<p>The key to modifying the colorbar is gaining access to it, then using Locators and Formatters (in <a href="https://matplotlib.org/api/ticker_api.html" rel="nofollow noreferrer">matplotlib.ticker</a>) to modify them, finally updating the ticks after the changes have been made.</p> <pre><code>import matplotlib.pyplot as plt import matplotlib.ticker as tkr # set the values of the main y axis and colorbar ticks ticklocs = tkr.FixedLocator( np.linspace( ymin, ymax, ystep ) ) ticklocs2 = tkr.FixedLocator( np.arange( zmin, zmax, zstep ) ) # set the label format for the main y axis, and colorbar tickfrmt = tkr.StrMethodFormatter( yaxis_format ) tickfrmt2 = tkr.StrMethodFormatter( zaxis_format ) # must save a reference to the plot in order to retrieve its colorbar hb = df.plot.hexbin( 'xaxis', 'yaxis', C = 'zaxis', sharex = False ) # get plot axes, ax[0] is main axes, ax[1] is colorbar ax = plt.gcf().get_axes() # get colorbar cbar = hb.collections[ 0 ].colorbar # set main y axis ticks and labels ax[ 0 ].yaxis.set_major_locator( ticklocs ) ax[ 0 ].yaxis.set_major_formatter( tickfrmt ) # set color bar ticks and labels cbar.locator = ticklocs2 cbar.formatter = tickfrmt2 cbar.update_ticks() # update ticks for changes to take place </code></pre> <p>(This issue caused me some significant issues, so I thought I would share how to accomplish this.)</p>
pandas|matplotlib|plot|colorbar
2
462
50,185,275
Matplotlib x-axis disappears
<p>I'm experimenting with python's matplotlib function and having some weird result that the x-axis label disappears from the plot.</p> <p>I'm trying the following example as shown in this <a href="https://www.youtube.com/watch?v=X60m6GBq4fM" rel="nofollow noreferrer">Youtube</a>: <a href="https://i.stack.imgur.com/4zIGs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4zIGs.png" alt="enter image description here"></a></p> <p>In the example, the x-axis is showing the year of the plot. When I try to do in my own Jupyter notebook, what I get is the following:</p> <p>Code :</p> <pre><code>yearly_average[-20:].plot(x='year', y='rating', figsize=(15,10), grid=True) </code></pre> <p><a href="https://i.stack.imgur.com/wtxLg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wtxLg.png" alt="enter image description here"></a></p> <p>How do I fix this?</p>
<p>Let's convert the Year column in the yearly_average dataframe from a str or object dtype to integer. Then, plot using pandas plot.</p> <p>MVCE:</p> <p>Working example with xaxis ticklabels where dtype 'Year' is integer</p> <pre><code>df = pd.DataFrame({'Year':[2000,2001,2002,2003,2004,2005],'Value':np.random.randint(1000,5000,6)}) df.plot('Year','Value') </code></pre> <p><a href="https://i.stack.imgur.com/VeHon.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VeHon.png" alt="enter image description here"></a></p> <p>Now, let's cast 'Year' as str and test plot again.</p> <pre><code>df1 = df.copy() df1['Year'] = df['Year'].astype(str) df1.plot('Year','Value') </code></pre> <p>Missing ticklabels</p> <p><a href="https://i.stack.imgur.com/5UKJL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5UKJL.png" alt="enter image description here"></a></p>
python|pandas|matplotlib|jupyter-notebook
0
463
64,118,921
Calculate index using base year
<p>df</p> <pre><code>fruit year price index_value Boolean index apple 1960 11 apple 1961 12 100 True apple 1962 13 apple 1963 13 100 True banana 1960 11 banana 1961 12 </code></pre> <p>How could I calculate the index column for the year after a True per fruit? The base year is given by the rows where index_value==100</p> <p>I tried:</p> <pre><code>df['index'] = df.groupby('fruit')['price'].apply(lambda x: (x/x.iloc[0] * 100).round(0)) </code></pre> <p>Expected Output:</p> <pre><code>fruit year price index_value Boolean index apple 1960 11 apple 1961 12 100 True 100 apple 1962 13 108 apple 1963 13 100 True 100 apple 1964 11 84 banana 1961 12 </code></pre>
<p>I took the liberty to adjust your input data with a row for <code>apple 1964 11</code> to match your output example. The column <code>Boolean</code> is redundant</p> <pre><code>import pandas as pd import numpy as np import io t = ''' fruit year price index_value apple 1960 11 apple 1961 12 100 apple 1962 13 apple 1963 13 100 apple 1964 11 banana 1960 11 banana 1961 12 ''' df = pd.read_csv(io.StringIO(t), sep='\s+') print(df) </code></pre> <p>Out:</p> <pre><code> fruit year price index_value 0 apple 1960 11 NaN 1 apple 1961 12 100.0 2 apple 1962 13 NaN 3 apple 1963 13 100.0 4 apple 1964 11 NaN 5 banana 1960 11 NaN 6 banana 1961 12 NaN </code></pre> <p>To get your desired output first create subgroups for values after a given index_value</p> <pre><code>df['groups'] = df.index_value.notna().groupby(df.fruit).cumsum().astype('int') print(df) </code></pre> <p>Out:</p> <pre><code> fruit year price index_value groups 0 apple 1960 11 NaN 0 1 apple 1961 12 100.0 1 2 apple 1962 13 NaN 1 3 apple 1963 13 100.0 2 4 apple 1964 11 NaN 2 5 banana 1960 11 NaN 0 6 banana 1961 12 NaN 0 </code></pre> <p>Then you can compute the percentage changes to the index_values</p> <pre><code>df['index_change'] = ( df[df.groups.ne(0)] .groupby(['fruit','groups'])['price'].apply(lambda x: np.floor((x/x.iloc[0] * 100))) ) print(df) </code></pre> <p>Out:</p> <pre><code> fruit year price index_value groups index_change 0 apple 1960 11 NaN 0 NaN 1 apple 1961 12 100.0 1 100.0 2 apple 1962 13 NaN 1 108.0 3 apple 1963 13 100.0 2 100.0 4 apple 1964 11 NaN 2 84.0 5 banana 1960 11 NaN 0 NaN 6 banana 1961 12 NaN 0 NaN </code></pre>
python|python-3.x|pandas|dataframe
1
464
64,118,458
Applying a custom aggregation function to a pandas DataFrame
<p>I have a pandas DataFrame with two float columns, <code>col_x</code> and <code>col_y</code>.</p> <p>I want to return the sum of <code>col_x * col_y</code> divided by the sum of <code>col_x</code></p> <p>Can this be done with a custom aggregate function?</p> <p>I am trying to do something like this:</p> <pre><code>import pandas as pd def aggregation_function(x, y): return sum(x * y) / sum(x) df = pd.DataFrame([(0.1, 0.2), (0.3, 0.4), (0.5, 0.6)], columns=[&quot;col_x&quot;, &quot;col_y&quot;]) result = df.agg(aggregation_function, axis=&quot;columns&quot;, args=(&quot;col_x&quot;, &quot;col_y&quot;)) </code></pre> <p>I know that the aggregation function probably doesn't make sense but I can't even get to the point where I can try other things because I am getting this error:</p> <pre><code>TypeError: apply() got multiple values for keyword argument 'args' </code></pre> <p>I don't know how else I can specify the <code>args</code> for my aggregation function. I've tried using <code>kwargs</code>, too but nothing I do will work. There is no example in the <a href="https://pandas.pydata.org/pandas-docs/version/0.23.1/generated/pandas.DataFrame.aggregate.html" rel="nofollow noreferrer">docs</a> for this but it seems to say that it is possible.</p> <p>How can you specify the args for the aggregation function?</p> <p>The desired result of the output aggregation would be a single value</p>
<p>First, you can use <code>apply</code> with <code>axis=1</code> for such problems:</p> <pre><code>df.apply(lambda x: aggregation_function(x['col_x'],x['col_y']),axis=1) </code></pre> <p>However, this will raise an error in your case, because your aggregation function calculates <code>col_x * col_y</code> for each row and <code>sum</code> does not work with a scalar value; it needs an iterable:</p> <blockquote> <p>Signature: sum(iterable, start=0, /) Docstring: Return the sum of a 'start' value (default: 0) plus an iterable of numbers</p> </blockquote> <p>Hence <code>sum(0.2)</code> does not work.</p> <p>If we remove the sum from the aggregation function, this works as intended:</p> <pre><code>def aggregation_function(x, y):return (x * y)/ x df.apply(lambda x: aggregation_function(x['col_x'],x['col_y']),axis=1) 0 0.2 1 0.4 2 0.6 dtype: float64 </code></pre> <p>However, since you want the sum of <code>col_x * col_y</code> divided by the sum of <code>col_x</code>, you can tweak the function to use <code>Series.sum</code> and call it directly on the whole columns (this can also be vectorized as <code>df['col_x'].mul(df['col_y']).sum()/df['col_x'].sum()</code>):</p> <pre><code>def aggregation_function(x, y): return (x * y).sum() / x.sum() aggregation_function(df['col_x'],df['col_y']) 0.4888888888888889 </code></pre>
python|pandas
1
465
64,163,870
Identifying consecutive declining values in a column from a data frame
<p>I have a 278 x 2 data frame, and I want to find the rows that have 2 consecutive declining values in the second column. Here's a snippet:</p> <p><a href="https://i.stack.imgur.com/fm7Py.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fm7Py.png" alt="data frame" /></a></p> <p>I'm not sure how to approach this problem. I've searched how to identify consecutive declining values in a data frame, but so far I've only found questions that pertain to consecutive SAME values, which isn't what I'm looking for. I could iterate over the data frame, but I don't believe that's very efficient.</p> <p>Also, I'm not asking for someone to show me how to code this problem. I'm simply asking for potential ways I could go about solving this problem on my own because I'm unsure of how to approach the issue.</p>
<ul> <li>Use <code>shift</code> to create a temporary column with all values shifted up one row.</li> <li>Compare the two columns, <code>&quot;GDP&quot; &gt; &quot;shift&quot;</code> This gives you a new column of Boolean values.</li> <li>Look for consecutive <code>True</code> values in this Boolean column. That identifies two consecutive declining values.</li> </ul>
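<p>A short sketch of those three steps, assuming the second column is called <code>value</code> (the real column name is not visible in the question):</p> <pre><code># value of the next row, i.e. everything shifted up one position
nxt = df['value'].shift(-1)

# True where the next value is lower than the current one
declining = df['value'] &gt; nxt

# rows that are followed by two consecutive declining values
two_declines = declining &amp; declining.shift(-1, fill_value=False)
print(df[two_declines])
</code></pre>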
python|pandas
1
466
47,059,781
How to append new dataframe rows to a csv using pandas?
<p>I have a new dataframe; how can I append it to an existing csv?</p> <p>I tried the following code:</p> <pre><code>f = open('test.csv', 'w') df.to_csv(f, sep='\t') f.close() </code></pre> <p>But it doesn't append anything to test.csv. The csv is big, and I only want to append, rather than read the whole csv as a dataframe, concatenate the new data to it, and write it all to a new csv. Is there any good method to solve the problem? Thanks.</p>
<p>Try this:</p> <pre><code>df.to_csv('test.csv', sep='\t', header=None, mode='a') # NOTE: -----&gt; ^^^^^^^^ </code></pre>
pandas|csv|dataframe|append
8
467
46,732,047
TypeError for predict_proba(np.array(test))
<pre><code>model = LogisticRegression() model = model.fit(X, y) test_data = [1,2,3,4,5,6,7,8,9,10,11,12,13] test_prediction = model.predict_proba(np.array(test_data)) max = -1.0 res = 0 for i in range(test_prediction): if test_prediction[i]&gt;max: max = test_prediction[i] res = i if res==0: print('A') elif res==1: print('B') else: print('C') </code></pre> <p>Using the above python code I have to predict the probabilities of the 3 possible results (A, B, C). The probabilities are saved in test_prediction and it can be printed as: </p> <pre><code>Output: [[ 0.82882588 0.08641236 0.08476175]] </code></pre> <p>But the remaining part gives an error:</p> <pre><code>for i in range(test_prediction): TypeError: only integer scalar arrays can be converted to a scalar index </code></pre> <p>I want to find the max probability and then display the event that is likely to occur the most (A/B/C). How to go about this?</p>
<p>You can also use <a href="https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.argmax.html" rel="nofollow noreferrer">numpy.argmax</a>, which will directly give you the index of the largest value.</p> <pre><code>import numpy as np # test_prediction is most probably an np array already pred = np.array(test_prediction) classes_val = np.argmax(pred, axis=1) for res in classes_val: if res==0: print('A') elif res==1: print('B') else: print('C') </code></pre>
python|python-3.x|numpy|scikit-learn|typeerror
1
468
46,972,163
How to use pandas to group items by classes bounded by a difference less than 4
<p>I wonder how to create classes of items grouped by a difference &lt;= 4, so 1,2,3,4,5 would be grouped into 1, 9-13 into 9, and so on, and then select the min/max values of the attribute y in an efficient/easy way:</p> <p><code>items= [('x', [ 1,2,3,3,3,5,9,10,11,13]), ('y', [1,1,1,1,1,4,4,1,1,1])]</code></p> <p><code>In[3]: pd.DataFrame.from_items(items) Out[3]: x y 0 1 1 1 2 1 2 3 1 3 3 1 4 3 1 5 5 4 6 9 5 7 10 1 8 11 1 9 13 1 </code></p> <p>So the result I expect would be:</p> <p><code>xclass ymax ymin 1 4 1 9 5 1 </code> I did it by iterating without pandas, but I would like to test performance with pandas.</p>
<p>Such operations are usually done in two steps:</p> <ol> <li>Create a key to group by.</li> <li>Calculate aggregate statistics with groupby.</li> </ol> <p>I assume you have dataframe <code>df</code> defined as</p> <pre><code>df = pd.DataFrame.from_items([('x', [ 1,2,3,3,3,5,9,10,11,13]), ('y', [1,1,1,1,1,4,4,1,1,1])]) </code></pre> <p>The first step is not defined very well in your question. How do you draw borders between groups if the data is dense? For example, what would you want to do with groups if you had <code>df['x'] = [ 1,2,3,3,5,7,9,10,11,13]</code>?</p> <p>The simplest idea is to round <code>x</code> to the precision you want. This ensures that the distance between any integers in the group does not exceed 4. But the groups will be placed without gaps: 1-5 to 5, 6-10 to 10, 11-15 to 15, etc.</p> <pre><code>def custom_round(x, precision, offset): return ((x-offset) // precision) * precision + offset df['xclass'] = custom_round(df['x'], 5, 1) </code></pre> <p>Another idea is to have groups that are <em>dense enough</em>: two groups can be merged if the <em>minimal</em> distance between them is less than a threshold. Such an algorithm can produce large groups divided by gaps wider than the threshold. It can be implemented with the DBSCAN clustering algorithm. To get the groups you want, you can set the threshold distance to 3 (because the distance between 5 and 9 is already 4):</p> <pre><code>from sklearn.cluster import DBSCAN def cluster(x, threshold): labels = DBSCAN(eps=threshold, min_samples=1).fit(np.array(x)[:, np.newaxis]).labels_ return x.groupby(labels).transform(min) df['xclass'] = cluster(df['x'], 3) </code></pre> <p>The second step is easy: having dataframe <code>df</code> with columns <code>xclass</code> and <code>y</code>, call:</p> <pre><code>df.groupby('xclass')['y'].aggregate([min, max]).reset_index() </code></pre>
python|pandas
1
469
46,930,558
Keras adding extra dimension to target (Error when checking target)
<p>I have a Siamese Keras model defined with this code:</p> <pre class="lang-python prettyprint-override"><code>image1 = Input(shape=(128,128,3)) image2 = Input(shape=(128,128,3)) mobilenet = keras.applications.mobilenet.MobileNet( input_shape=(128,128,3), alpha=0.25, depth_multiplier=1, dropout=1e-3, include_top=False, weights='imagenet', input_tensor=None, pooling='avg') out1 = mobilenet(image1) out2 = mobilenet(image2) diff = subtract([out1, out2]) distance = Lambda(lambda x: K.sqrt(K.sum(K.square(x), axis=1)))(diff) model = Model(inputs=[image1, image2], outputs=distance) model.compile(optimizer='rmsprop', loss='hinge', metrics=['accuracy']) </code></pre> <p>Summary of the model looks like this:</p> <pre><code>____________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ==================================================================================================== input_1 (InputLayer) (None, 128, 128, 3) 0 ____________________________________________________________________________________________________ input_2 (InputLayer) (None, 128, 128, 3) 0 ____________________________________________________________________________________________________ mobilenet_0.25_128 (Model) (None, 256) 218544 input_1[0][0] input_2[0][0] ____________________________________________________________________________________________________ subtract_1 (Subtract) (None, 256) 0 mobilenet_0.25_128[1][0] mobilenet_0.25_128[2][0] ____________________________________________________________________________________________________ lambda_1 (Lambda) (None,) 0 subtract_1[0][0] ==================================================================================================== Total params: 218,544 Trainable params: 213,072 Non-trainable params: 5,472 ____________________________________________________________________________________________________ </code></pre> <p>I am trying to train the model. My training routine looks like this:</p> <pre><code>for i in range(1000): X1,X2,y = data_source.getTrainingBatch(10, sameProb=0.5) y = y[:,0] // grab ground truth for class 0 print(y.shape) // (10,) loss=model.train_on_batch([X1,X2], y) </code></pre> <p>X1 and X2 have shapes <code>(10, 128, 128, 3)</code>.</p> <p>y has a shape <code>(10,2)</code> and is a list of one-hot encoded vectors for 2 classes. I only take ground truth for the first class and try to feed it to the model. </p> <p><code>print(y.shape)</code> statement prints <code>(10,)</code>, so the array is 1D. 
But when I run the code I get the following error:</p> <pre><code>ValueError Traceback (most recent call last) &lt;ipython-input-2-bf89243ee200&gt; in &lt;module&gt;() ----&gt; 1 n=net.SiamNet("drug-net") ~/development/drug_master/net_keras.py in __init__(self, model_name, training) 16 print(self.model.summary()) 17 self.data_source = data.BoxComparisonData() ---&gt; 18 self.train() 19 self.save() 20 else: ~/development/drug_master/net_keras.py in train(self) 75 X1,X2,y = self.data_source.getTrainingBatch(10, sameProb=0.5) 76 print(y.shape) ---&gt; 77 loss=self.model.train_on_batch([X1,X2], y[:,0].flatten()) # starts training 78 print("Loss (%d):" % i, loss) 79 if(i%20 == 0): ~/anaconda3/lib/python3.6/site-packages/keras/engine/training.py in train_on_batch(self, x, y, sample_weight, class_weight) 1754 sample_weight=sample_weight, 1755 class_weight=class_weight, -&gt; 1756 check_batch_axis=True) 1757 if self.uses_learning_phase and not isinstance(K.learning_phase(), int): 1758 ins = x + y + sample_weights + [1.] ~/anaconda3/lib/python3.6/site-packages/keras/engine/training.py in _standardize_user_data(self, x, y, sample_weight, class_weight, check_batch_axis, batch_size) 1380 output_shapes, 1381 check_batch_axis=False, -&gt; 1382 exception_prefix='target') 1383 sample_weights = _standardize_sample_weights(sample_weight, 1384 self._feed_output_names) ~/anaconda3/lib/python3.6/site-packages/keras/engine/training.py in _standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix) 130 ' to have ' + str(len(shapes[i])) + 131 ' dimensions, but got array with shape ' + --&gt; 132 str(array.shape)) 133 for j, (dim, ref_dim) in enumerate(zip(array.shape, shapes[i])): 134 if not j and not check_batch_axis: ValueError: Error when checking target: expected lambda_1 to have 1 dimensions, but got array with shape (10, 1) </code></pre> <p>As I understand the model expects the target has one dimension, in this case (10,), but receives a 2D target with shape (10,1). It seems like Keras adds an extra dimension to my target. What am I missing?</p> <p>Keras version is 2.0.8</p> <p>I use tensorflow as a backend (1.3.0)</p>
<p>This is strange.</p> <p>It seems <code>y</code> is actually <code>(10,1)</code>. </p> <p>You can use <code>keepdims=True</code> in your <code>K.sum</code> to make keras expect a <code>(10,1)</code> array. </p> <hr> <p>Not sure what is the cause for that. Maybe you have old <code>Y</code> vars (perhaps upper case?) that were not erased and you're using them by accident?</p>
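<p>Concretely, the <code>keepdims</code> suggestion would look something like this (a sketch only; everything else in the question's model stays as it is):</p> <pre><code>distance = Lambda(
    lambda x: K.sqrt(K.sum(K.square(x), axis=1, keepdims=True))
)(diff)
# the model output is now shaped (None, 1), which matches a (10, 1) target batch
</code></pre>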
python|numpy|tensorflow|keras
0
470
38,886,096
Adding multiple rows to pandas dataframe based on returned lambda function
<p>I have a pandas dataframe that can be represented as follows:</p> <pre><code>myDF = pd.DataFrame({'value':[5,2,4,3,6,1,4,8]}) print(myDF) value 0 5 1 2 2 4 3 3 4 6 5 1 6 4 7 8 </code></pre> <p>I can add a new column containing the returned value from a function that acts on the contents of the 'value' column. For example, I can add a column called 'square', which contains the square of the value, by defining a function and then using lambda, as follows:</p> <pre><code>def myFunc(x): mySquare = x*x return mySquare myDF['square'] = myDF['value'].map(lambda x: myFunc(x)) </code></pre> <p>...to produce</p> <pre><code> value square 0 5 25 1 2 4 2 4 16 3 3 9 4 6 36 5 1 1 6 4 16 7 8 64 </code></pre> <p>(N.B. The actual function I'm using is more complex than this but this simple squaring process is OK for illustration.)</p> <p>My question is, can the myFunc() function return a tuple (or a dictionary or a list) that could be used to add multiple new columns in the dataframe? As a (very simple) example, to add new columns for squares, cubes, fourth powers, is it possible to do something akin to:</p> <pre><code>def myFunc(x): mySquare = x*x myCube = x*x*x myFourth = x*x*x*x return mySquare,myCube,myFourth myDF['square'],myDF['cubed'],myDF['fourth'] = myDF['value'].map(lambda x: myFunc(x)) </code></pre> <p>...to produce the following:</p> <pre><code> value square cubed fourth 0 5 25 125 625 1 2 4 8 16 2 4 16 64 256 3 3 9 27 81 4 6 36 216 1296 5 1 1 1 1 6 4 16 64 256 7 8 64 512 4096 </code></pre> <p>Writing 3 separate functions would seem to be unnecessarily repetitive. None of the variations I've tried so far has worked (the above fails with: ValueError: too many values to unpack (expected 3)).</p> <p>As mentioned above, the examples of squares, cubes and fourth powers are just for illustration purposes. I know that there are much more effective ways to calculate these values in a dataframe. However, I'm interested in the method to add several columns to a dataframe based on stepping through each cell of a column.</p>
<p>You can create a dataframe based on the results and then concatenate it to your original dataframe. You then need to rename your columns.</p> <pre><code>df = pd.concat([myDF, pd.DataFrame([myFunc(x) for x in myDF['value']])], axis=1) df.columns = myDF.columns.tolist() + ['square', 'cubed', 'fourth'] &gt;&gt;&gt; df value square cubed fourth 0 5 25 125 625 1 2 4 8 16 2 4 16 64 256 3 3 9 27 81 4 6 36 216 1296 5 1 1 1 1 6 4 16 64 256 7 8 64 512 4096 </code></pre>
python|python-3.x|pandas|dataframe|lambda
0
471
38,924,256
Creating a Nested List in Python
<p>I'm trying to make a nested list in Python to contain information about points in a video, and I'm having a lot of trouble creating an array for the results to be saved in to. The structure of the list is simple: the top level is a reference to the frame, the next level is a reference to a marker, and the last level is the point of the marker. So for example, the list is setup as such:</p> <p><code>markerList # a long list of every marker in every frame markerList[0] # every marker in the first frame markerList[0][0] # the first marker of the first frame markerList[0][0][0] # the x value of the first marker of the first frame</code></p> <p>Calling markerList[0] looks like this:</p> <pre><code> array([[ 922.04443359, 903. ], [ 987.83850098, 891.38830566], [ 843.27374268, 891.70471191], [ 936.38446045, 873.34661865], [ 965.52880859, 840.44445801], [ 822.19567871, 834.06298828], [ 903.48956299, 830.62268066], [ 938.70031738, 825.71557617], [ 853.09545898, 824.47247314], [ 817.84277344, 816.05029297], [ 1057.91186523, 815.52935791], [ 833.23632812, 787.48504639], [ 924.24224854, 755.53997803], [ 836.07800293, 720.02764893], [ 937.83880615, 714.11199951], [ 813.3493042 , 720.30566406], [ 797.09521484, 705.72729492], [ 964.31713867, 703.246521 ], [ 934.9864502 , 697.27099609], [ 815.1550293 , 688.91473389], [ 954.94085693, 685.88171387], [ 797.70239258, 672.35119629], [ 877.05749512, 659.94250488], [ 962.24786377, 659.26495361], [ 843.66131592, 618.83868408], [ 901.50476074, 585.42541504], [ 863.41851807, 584.4977417 ]], dtype=float32) </code></pre> <p>The problem is that every frame contains a different number of markers. I want to create an empty array the same length as markerList (i.e., the same number of frames) in which every element is the same size as the largest frame in markerList. Some important caveats: first,I want to save the results into a .mat file where the final array (which I'll call finalStack) is a cell of cells. Second, I need to be able to reference and assign to any specific part of finalStack. So if I want to move a point to finalStack[0][22], I need to be able to do so without conflict. This basically just means I can't use append methods anywhere, but it's also unearthed my first problem - finding a way to create finalStack that doesn't cause every new assignment to be duplicated throughout the entire parent list. I've tried to do this a few different ways, and none work correctly.</p> <p><strong>Attempts at a solution:</strong></p> <p>Following another SO question, I attempted to create finalStack iteratively, but to no avail. I created the following function:</p> <pre><code>def createFinalStack(numMarkers, numPoints, frames): step = [[0]*numPoints for x in xrange(numMarkers)] finalStack = [step]*frames return finalStack </code></pre> <p>However, this causes all assignments to be copied across the parent list, such that assigning <code>finalStack[0][12]</code> leads to <code>finalStack[2][12] == finalStack[20][12] == finalStack[0][12]</code>. In this example, numMarkers= 40, numPoints = 2 (just x &amp; y), and frames= 200. 
(So the final array should be 200 x 40 x 2.)</p> <p>That said, this seems like the most straightforward way to do what I want, I just can't get past the copy error (I know it's a reference issue, I just don't know how to avoid it in this context).</p> <p>Another seemingly simple solution would be to copy markerList using <code>copy.deepcopy(markerList)</code>, and pad any frames with less than 40 markers to get them to numMarkers = 40, and zero out anything else. But I can't come up with a good way to cycle through all of the frames, add points in the correct format, and then empty out everything else. </p> <p>If this isn't enough information to work with, I can try to provide greater context and some other not-good-methods that didn't work at all. I've been stuck on this long enough that I'm convinced the solution is horribly simple, and I'm just missing the obvious. I hope you can prove me right!</p> <p>Thanks!</p>
<p>This illustrates what is going on:</p> <pre><code>In [1334]: step=[[0]*3 for x in range(3)] In [1335]: step Out[1335]: [[0, 0, 0], [0, 0, 0], [0, 0, 0]] In [1336]: stack=[step]*4 In [1337]: stack Out[1337]: [[[0, 0, 0], [0, 0, 0], [0, 0, 0]], [[0, 0, 0], [0, 0, 0], [0, 0, 0]], [[0, 0, 0], [0, 0, 0], [0, 0, 0]], [[0, 0, 0], [0, 0, 0], [0, 0, 0]]] In [1338]: stack[0] Out[1338]: [[0, 0, 0], [0, 0, 0], [0, 0, 0]] In [1339]: stack[0][2]=3 In [1340]: stack Out[1340]: [[[0, 0, 0], [0, 0, 0], 3], [[0, 0, 0], [0, 0, 0], 3], [[0, 0, 0], [0, 0, 0], 3], [[0, 0, 0], [0, 0, 0], 3]] In [1341]: step Out[1341]: [[0, 0, 0], [0, 0, 0], 3] </code></pre> <p>When you use <code>alist*n</code> to create new list, the new list contains multiple pointers to the same underlying object. As a general rule, using <code>*n</code> to replicate a list is dangerous if you plan on changing values later on.</p> <p>If instead I make an array of the right dimensions I don't have this problem:</p> <pre><code>In [1342]: np.zeros((4,3,3),int) Out[1342]: array([[[0, 0, 0], [0, 0, 0], [0, 0, 0]], ... [0, 0, 0]]]) </code></pre> <p>Or in list form:</p> <pre><code>In [1343]: np.zeros((4,3,3),int).tolist() Out[1343]: [[[0, 0, 0], [0, 0, 0], [0, 0, 0]], [[0, 0, 0], [0, 0, 0], [0, 0, 0]], [[0, 0, 0], [0, 0, 0], [0, 0, 0]], [[0, 0, 0], [0, 0, 0], [0, 0, 0]]] </code></pre> <p>If I assign a value in this list, I only change one item:</p> <pre><code>In [1344]: stack=np.zeros((4,3,3),int).tolist() In [1345]: stack[0][2]=3 In [1346]: stack Out[1346]: [[[0, 0, 0], [0, 0, 0], 3], [[0, 0, 0], [0, 0, 0], [0, 0, 0]], [[0, 0, 0], [0, 0, 0], [0, 0, 0]], [[0, 0, 0], [0, 0, 0], [0, 0, 0]]] </code></pre> <p>I really should have used <code>stack[0][2][1]=3</code>, but you get the idea. If I make the same assignment in the array form I end up changing a whole row</p> <pre><code>In [1347]: stack=np.zeros((4,3,3),int) In [1348]: stack[0][2]=3 In [1349]: stack Out[1349]: array([[[0, 0, 0], [0, 0, 0], [3, 3, 3]], [[0, 0, 0], ... [0, 0, 0]]]) </code></pre> <p>I should have used an expression like <code>stack[0,2,:]=4</code>.</p> <p>It's probably possible to construct a triply next list like this where all initial values are independent. But this array approach is simpler.</p>
python|python-2.7|numpy
1
472
63,078,437
Unsure how to further optimise (get rid of for loop)
<p>I am working on several datasets. One dataset (geodata - 74 observations) contains Indian district names, latitude and longitude of district centres while the other (called rainfall_2009) contains information on rainfall in a geographic grid as well as grid's latitude and longitude. The aim is to link each grid to a district such that the grid's distance from the district centre would be no more than 100 km. The dataset is big - 350 000 observations. I initially tried running 2 loops, but I know this is a very unPythonic way and in the end it was very inefficient, taking around 2.5h. I have managed to get rid of one of the loops, but it still takes 1.5h to run the code. Is there any further way I could optimise it?</p> <pre><code># Create empty variables for district names and distance to the centre rainfall_2009['district'] = np.nan rainfall_2009['distance'] = np.nan # Make a tuple of district centre geographic location (to be used in distance geodesic command) geodata['location'] = pd.Series([tuple(i) for i in np.array((np.array(geodata.centroid_latitude) , np.array(geodata.centroid_longitude))).T]) # Run the loop for each grid in the dataset. for i in tqdm(rainfall_2009.index): place = (rainfall_2009.latitude.iloc[i], rainfall_2009.longitude.iloc[i]) # select grid's geographic data distance = geodata.location.apply(lambda x: dist.geodesic(place, x).km) # construct series of distances between grid and all regional centers if list(distance[distance&lt;100]) == []: # If there are no sufficiently close district centers we just continue the loop continue else: # We take the minimum distance to assign the closest region. rainfall_2009.district.iloc[i] = geodata.distname_iaa.iloc[distance[distance &lt; 100].idxmin()] rainfall_2009.distance.iloc[i] = distance[distance &lt; 100].min() </code></pre>
<p>Can you pass pandas columns directly to <code>dist.geodesic()</code>? Calling this via the apply() statement may be slow.</p> <p>This example might be helpful (see the function <code>gcd_vec()</code> in this blog post: <a href="https://tomaugspurger.github.io/modern-4-performance" rel="nofollow noreferrer">https://tomaugspurger.github.io/modern-4-performance</a></p> <p>Also, can you perform <em>fewer</em> distance calculations? For example calculate distance from geographic grid to district center if the two end-points are in the same state or adjacent states?</p> <p>UPDATE: The Numba package may speed this up further. You just import and apply a decorator. Details here: <a href="http://numba.pydata.org/numba-doc/latest/user/jit.html" rel="nofollow noreferrer">http://numba.pydata.org/numba-doc/latest/user/jit.html</a></p> <pre><code>from numba import jit @jit def gcd_vec(): # same as before </code></pre>
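<p>To make the idea of passing whole columns concrete, here is a rough NumPy sketch that computes all grid-to-district distances at once with a haversine formula (great-circle rather than geodesic, so the result differs slightly from <code>dist.geodesic</code>; the full 350,000 x 74 distance matrix also needs roughly 200 MB of memory). The column names are taken from the question:</p> <pre><code>import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    # all inputs in degrees; broadcasting yields an (n_grid, n_district) matrix
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))

d = haversine_km(
    rainfall_2009['latitude'].to_numpy()[:, None],
    rainfall_2009['longitude'].to_numpy()[:, None],
    geodata['centroid_latitude'].to_numpy()[None, :],
    geodata['centroid_longitude'].to_numpy()[None, :],
)
nearest = d.argmin(axis=1)
close_enough = d.min(axis=1) &lt; 100
rainfall_2009.loc[close_enough, 'district'] = geodata['distname_iaa'].to_numpy()[nearest[close_enough]]
rainfall_2009.loc[close_enough, 'distance'] = d.min(axis=1)[close_enough]
</code></pre>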
pandas|for-loop|optimization|geopy
1
473
63,019,093
Classification metrics can't handle a mix of multiclass and continuous-multioutput targets
<p>I am running a <code>BERT</code> pretrained model on a <code>multiclass</code> dataset for text classification purposes. Since it is multiclass I cannot figure out how to generate a <code>classification report</code>. The solutions I found were <a href="https://stackoverflow.com/questions/48987959/classification-metrics-cant-handle-a-mix-of-continuous-multioutput-and-multi-la">this</a> and <a href="https://stackoverflow.com/questions/56492186/f-score-valueerror-classification-metrics-cant-handle-a-mix-of-multilabel-ind">this</a>. I understand since its a multiclass classification I have to <code>one-hot-encode</code> the <code>test_y</code> values (which I did)</p> <pre><code>test_y = to_categorical(np.asarray(test_y.factorize()[0])) </code></pre> <p>but when I do</p> <pre><code>from sklearn.metrics import classification_report print(classification_report(test_y, y_pred, digits=8)) </code></pre> <p>I get still get this error :</p> <pre><code> 88 if len(y_type) &gt; 1: 89 raise ValueError(&quot;Classification metrics can't handle a mix of {0} &quot; ---&gt; 90 &quot;and {1} targets&quot;.format(type_true, type_pred)) 91 92 # We can't have more than one value on y_type =&gt; The set is no more needed ValueError: Classification metrics can't handle a mix of multilabel-indicator and continuous-multioutput targets </code></pre> <p>Why?</p> <p>And if I try to calculate <code>accuracy_score</code> I get <code>0.0</code> accuracy: (but my accuracy is around 60%)</p> <pre><code>from sklearn.metrics import accuracy_score y_pred = np.argmax(y_pred, axis=1) accuracy_score(test_y, y_pred) &gt;&gt; 0.0 </code></pre> <p>Why?</p> <p>Details of the model is given below:</p> <blockquote> <p>train_test_split</p> </blockquote> <pre><code>train, test, train_y, test_y = train_test_split(df['text'], df['label'],test_size = 0.3) </code></pre> <blockquote> <p>Model:</p> </blockquote> <pre><code> def build_model(bert_layer, max_len=512): input_word_ids = Input(shape=(max_len,), dtype=tf.int32, name=&quot;input_word_ids&quot;) input_mask = Input(shape=(max_len,), dtype=tf.int32, name=&quot;input_mask&quot;) segment_ids = Input(shape=(max_len,), dtype=tf.int32, name=&quot;segment_ids&quot;) _, sequence_output = bert_layer([input_word_ids, input_mask, segment_ids]) clf_output = sequence_output[:, 0, :] #out = Dense(1, activation='sigmoid')(clf_output) out = Dense(8, activation='sigmoid')(clf_output) model = Model(inputs=[input_word_ids, input_mask, segment_ids], outputs=out) model.compile(Adam(lr=2e-6), loss='categorical_crossentropy', metrics=['accuracy']) return model </code></pre> <blockquote> <p>Model.fit</p> </blockquote> <pre><code> train_history = model.fit(train_input, train_labels, validation_split=0.2, epochs=1,batch_size=16 ) </code></pre> <blockquote> <p>model.predict</p> </blockquote> <pre><code>y_pred = model.predict(test_input) </code></pre> <blockquote> <p>Shape of parameters</p> </blockquote> <pre><code>print(type(y_pred)) print(y_pred.shape) &gt;&gt; &lt;class 'numpy.ndarray'&gt; &gt;&gt; (621,) print(type(test_y)) #before running to_categorical print(test_y.shape) &gt;&gt; &lt;class 'pandas.core.series.Series'&gt; &gt;&gt;(621,) </code></pre>
<p>Well, your output layer is defined as <code>out = Dense(1, activation='sigmoid')(clf_output)</code>, which means there is a single output node followed by a sigmoid activation. This is meant to train on an objective of binary classification or regression, where the output value is a real number ranging between 0 and 1. To turn that into binary labels, use a threshold. This can be done using</p> <pre><code>threshold = 0.5  # this can be changed; for this simple example, let us consider 0.5
y_pred = np.where(y_pred &lt; threshold, 0, 1)
</code></pre> <p>Or, if it is a multiclass problem, then change <code>out = Dense(1, activation='sigmoid')(clf_output)</code> to <code>out = Dense(number_of_classes, activation='sigmoid')(clf_output)</code></p>
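<p>For the classification_report error itself, a common fix is to reduce both arrays to class indices before calling the metric. This is a sketch that assumes the model output layer has one node per class (as with <code>Dense(8, ...)</code>), so that <code>y_pred</code> is a 2-D array of per-class scores, and that <code>test_y</code> is the one-hot matrix produced by <code>to_categorical</code>:</p> <pre><code>import numpy as np
from sklearn.metrics import classification_report, accuracy_score

y_pred_labels = np.argmax(y_pred, axis=1)   # per-class scores to predicted class ids
y_true_labels = np.argmax(test_y, axis=1)   # one-hot rows to true class ids

print(classification_report(y_true_labels, y_pred_labels, digits=8))
print(accuracy_score(y_true_labels, y_pred_labels))
</code></pre>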
tensorflow|scikit-learn|one-hot-encoding|multiclass-classification
1
474
63,112,253
how to get last n indices in a python dataframe?
<p>i have the following dataframe:</p> <pre><code> volume index 1 65 1 55 2 44 2 56 3 46 3 75 4 64 4 64 </code></pre> <p>when i put the code <code>df.iloc[-2:]</code> .it only shows the last two rows of my dataframe. example:</p> <pre><code> volume index 4 64 4 64 </code></pre> <hr /> <p>i want to get the last two indices with the result below</p> <pre><code> volume index 3 46 3 75 4 64 4 64 </code></pre> <p>how do i go about it?</p>
<p>You can slice the index after getting the unique values, then use <code>Series.isin</code>:</p> <pre><code>df[df.index.isin(df.index.unique()[-2:])] </code></pre> <p>Or</p> <pre><code>df.loc[df.index.unique()[-2:]] </code></pre> <pre><code> volume index 3 46 3 75 4 64 4 64 </code></pre>
python|pandas|dataframe
5
475
67,961,979
Create For Loop To Predict Next Value Over Group (Python)
<p>I am working on a project where I need to take groups of data and predict the next value for that group using a time series model. In my data, I have a grouping variable and a numeric variable.</p> <p>Here is an example of my data:</p> <pre><code>import pandas as pd data = [ [&quot;A&quot;, 10], [&quot;B&quot;, 10], [&quot;C&quot;, 15], [&quot;D&quot;, 12], [&quot;A&quot;, 18], [&quot;B&quot;, 19], [&quot;C&quot;, 14], [&quot;D&quot;, 22], [&quot;A&quot;, 20], [&quot;B&quot;, 25], [&quot;C&quot;, 12], [&quot;D&quot;, 30], [&quot;A&quot;, 36], [&quot;B&quot;, 27], [&quot;C&quot;, 10], [&quot;D&quot;, 45] ] data = pd.DataFrame( data, columns=[ &quot;group&quot;, &quot;value&quot; ], ) </code></pre> <p>What I want to do is to create a for loop that iterates over the groups and predicts the next value for A, B, C, and D. Essentially, my end result would be a new data frame with 4 rows, one for each new predicted value. It would look something like this:</p> <pre><code>group pred_value A 40 B 36 C 8 D 42 </code></pre> <p>Here is my attempt at that so far:</p> <pre><code>from statsmodels.tsa.ar_model import AutoReg final=pd.DataFrame() for i in data['group']: group = data[data['group']==i] model = AutoReg(group['value'], lags=1) model_fit = model.fit() yhat = model_fit.predict(len(group), len(group)) final = final.append(yhat,ignore_index=True) </code></pre> <p>Unfortunately, this produces a data frame with 15 rows and I'm not sure how to get the end result that I described above.</p> <p>Can anyone help point me in the right direction? Any help would be appreciated! Thank you!</p>
<p>You can <code>groupby</code> first and then iterate. We can store the results in a <code>dict</code> and after the loop convert it to a DataFrame:</p> <pre><code># will hold the predictions forecasts = {} # in each turn e.g., group == &quot;A&quot;, values are [10, 18, 20, 36] for group, values in data.groupby(&quot;group&quot;).value: # form the model and fit model = AutoReg(values, lags=1) result = model.fit() # predict prediction = result.forecast(steps=1) # store forecasts[group] = prediction # after `for` ends, convert to DataFrame all_predictions = pd.DataFrame(forecasts) </code></pre> <p>to get</p> <pre><code>&gt;&gt;&gt; all_predictions A B C D 4 51.809524 28.561404 7.285714 62.110656 </code></pre> <hr> <p>We can also do this all with <code>apply</code>:</p> <pre><code>&gt;&gt;&gt; data.groupby(&quot;group&quot;).value.apply(lambda x: AutoReg(x, lags=1).fit().forecast(1)) group A 4 51.809524 B 4 28.561404 C 4 7.285714 D 4 62.110656 Name: value, dtype: float64 </code></pre> <p>However, we potentially lose the ability to hold references to the fitted models, whereas in explicit <code>for</code>, we could keep them aside. But if that is not wanted anyway, this can be used.</p>
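<p>If the long format from the question (one row per group) is wanted, the wide frame above can be reshaped. A small sketch based on the <code>all_predictions</code> frame defined above:</p> <pre><code>result = (all_predictions
          .melt(var_name='group', value_name='pred_value')
          .round(2))
print(result)
#   group  pred_value
# 0     A       51.81
# 1     B       28.56
# 2     C        7.29
# 3     D       62.11
</code></pre>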
python|pandas|for-loop|time-series
1
476
67,923,729
Remove all alphanumeric words from a string using pandas
<p>I have a pandas dataframe column with strings that look like</p> <blockquote> <p>'2fvRE-Ku89lkRVJ44QQFN ABACUS LABS, INC'</p> </blockquote> <p>and I want to convert it to look like</p> <blockquote> <p>'ABACUS LABS, INC'.</p> </blockquote> <p>My piece code :</p> <pre><code>list1 = data_df['Vendor'].str.split() print(list1) excludeList = list() for y in list1: if (any([x for x in y if x.isalpha()]) and any([x for x in y if x.isdigit()])) : excludeList.append(y) if y.isdigit() or len(y) == 1: excludeList.append(y) resList = [x for x in list1 if x not in excludeList] print(restList) </code></pre> <p>It however gives me an error for</p> <blockquote> <p>'list' object has no attribute 'isdigit'</p> </blockquote> <p>Can anyone help me how I can remove the alphanumeric words from the string and only retain the text part in my pandas dataframe column?</p>
<p>You can use regular expressions to ensure quick and elegant solution:</p> <pre><code>df2 = df['Text'].str.findall(r'((?&lt;=\s)[a-zA-Z,]+(?=\s|$))').agg(' '.join) </code></pre> <p>Let's break it down:</p> <ol> <li><a href="https://pythex.org/?regex=((%3F%3C%3D%5Cs)%5Ba-zA-Z%2C%5D%2B(%3F%3D%5Cs%7C%24))&amp;test_string=2fvRE-Ku89lkRVJ44QQFN%20ABACUS%202fvRE%20LABS%2C%20INC&amp;ignorecase=0&amp;multiline=0&amp;dotall=0&amp;verbose=0" rel="nofollow noreferrer">Regular expression</a> that picks up only the words without digits.</li> <li>Per each value of <code>df['Text']</code> extract list of matches of the regex.</li> <li>Aggregate each list using <code>' '.join</code> function that concatenates the values in the list adding space in-between.</li> </ol> <p>The regex is doing this:</p> <ul> <li>to catch only &quot;words&quot; that are either at the beginning/end of the string there have to be used non-capturing lookbehind and lookaheads (before and after the letter catching group respectively).</li> <li>lookahead will also stop at end of the string (instead of any white char).</li> <li>the characters accepted in the &quot;words&quot; are defined as <code>[a-zA-Z,]</code> which allows letters lowercase and uppercase as well as a comma.</li> </ul> <h4>Performance</h4> <p>Comparing with @SeaBean solution the time difference on my machine is notable (per 2 million records dataframe):</p> <ul> <li>mine: 6.6522 s</li> <li>SeaBean's: 25.1773 s (3.79x slower)</li> </ul> <p>There is also smaller memory impact of my solution vs. SeaBean's as he is creating additional temporary dataframe.</p>
python|regex|pandas
3
477
67,874,850
Extrapolate data from csv using Python
<p>I have a csv file with few rows and columns of data. Now I intend to extrapolate or interpolate new data if the input values are not matching in the csv.</p> <p>Let me describe my csv as follows.</p> <pre><code>type,depth,io,mux,enr perf,1024,32,4,103.8175 perf,1024,64,4,85.643125 perf,1024,128,4,76.5559375 perf,1024,256,4,72.01246094 dense,1024,32,4,107.391875 dense,1024,64,4,88.99640625 dense,1024,128,4,79.79851563 dense,1024,256,4,75.19976563 </code></pre> <p>If the input does not match with the depth or io value present in the csv. I would like to generate the output after extrapolation/interpolation.</p> <p>For that I need to build an array from the column.</p> <p>Is there anyway to store one of the columns from this csv or storing the csv as a list and from there?</p> <p>I tried the following.</p> <pre><code>import os import pandas as pd this_dir, this_filename = os.path.split(__file__) memory_file_path = os.path.join(this_dir, 'memory.csv') memData = pd.read_csv(memory_file_path, delimiter= ',',) class InvecasMem: def csvImport(self,depth,io): csvmem = memData.loc[(memData[&quot;type&quot;] == &quot;perf&quot;) &amp; (memData['depth'] == depth) &amp; (memData['io'] == io)] if len(csvmem) == 0: print(&quot;Error: Wrong configuration&quot;) memArr = memData.loc[(memData[&quot;type&quot;] == &quot;perf&quot;)] l = [list(row) for row in memArr.values] x=len(l) return l </code></pre> <p>However, I am unable to store the column into an array?</p> <p>Also, is it possible to produce extrapolated values from multiple inputs as in this case?</p> <p>Thanks in advance.</p> <p>Edit: The desired output as in io = [32,64,128,256,32,64,128,256] Whereas I am going to compute for depth = 512 and io = 256</p>
<p>To interpolate, I'll start by loading the data</p> <pre class="lang-py prettyprint-override"><code>import pandas import numpy from io import StringIO # https://stackoverflow.com/a/43312861/1164295 myfile=&quot;&quot;&quot;type,depth,io,mux,enr perf,1024,32,4,103.8175 perf,1024,64,4,85.643125 perf,1024,128,4,76.5559375 perf,1024,256,4,72.01246094 dense,1024,32,4,107.391875 dense,1024,64,4,88.99640625 dense,1024,128,4,79.79851563 dense,1024,256,4,75.19976563&quot;&quot;&quot; df = pandas.read_csv(StringIO(myfile)) </code></pre> <p>then insert a Nan:</p> <pre class="lang-py prettyprint-override"><code>df.at[4,'enr'] = numpy.nan </code></pre> <p>Now we can use the <code>interpolate</code> method</p> <pre class="lang-py prettyprint-override"><code>df.interpolate() </code></pre> <p>See the <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/missing_data.html#interpolation" rel="nofollow noreferrer">interpolation section of the Pandas guide to missing data</a></p>
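<p>Since the question asks for a value at an unseen (depth, io) pair rather than for filling NaNs, a 2-D interpolator over those two columns may be closer to what is needed. Here is a rough sketch with scipy.interpolate.griddata; the 'perf' filter and the column names come from the question, and it assumes the real file contains more than one depth value (the snippet shown only has depth 1024, for which a 2-D triangulation would be degenerate). The linear method also returns nan outside the measured range, i.e. it does not extrapolate:</p> <pre><code>from scipy.interpolate import griddata

perf = df[df['type'] == 'perf']
points = perf[['depth', 'io']].astype(float).to_numpy()
values = perf['enr'].to_numpy()

# estimate enr at depth=512, io=256 (nan if the point lies outside the data)
estimate = griddata(points, values, [(512.0, 256.0)], method='linear')
print(estimate)
</code></pre>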
python|pandas|csv
1
478
31,820,755
how to ignore multiple entries with same index while reading data from CSV using pandas
<p>I have a csv file that looks like this:</p> <pre><code>patient_id, age_in_years, CENSUS_REGION, URBAN_RURAL_STATUS, YEAR 11511, 7, Northeast, Urban, 2011 9882613, 73, South, Urban, 2011 32190339, 49, West, Urban, 2011 32190339, 49, West, Urban, 2011 32190339, 49, West, Urban, 2011 32190339, 49, West, Urban, 2011 32190339, 49, West, Urban, 2011 32190339, 49, West, Urban, 2011 ... </code></pre> <p>The first column (i.e., patient_id) is the index and you can see that there are multiple entries for the same patient. I want my code to ignore these multiple entries when I import the data using <code>pandas</code> but I'm not sure how to do that. I'm using the following code for this purpose at the moment:</p> <pre><code>df = pd.read_csv(filename, index_col = 0) df.drop_duplicates() </code></pre> <p>Further on in the code I have a function that says:</p> <pre><code>def URSTATUS_to_numeric(a): if a == 'Urban': return 0 if a == 'Rural': return 1 if a == 'NULL': return 2 </code></pre> <p>When I call this function and print it using <code>df.drop_duplicates()</code> though, this is what I get:</p> <pre><code>df['URSTATUS_num'] = df['URBAN_RURAL_STATUS'].apply(URSTATUS_to_numeric) print(df.drop_duplicates(['URSTATUS_num'])) &gt;&gt;&gt; patient_id URSTATUS_num 11511 0 129126475 1 151269094 NaN </code></pre> <p>So basically it is dropping the duplicates considering the <code>URSTATUS_num</code> column as the reference. However, I want the code to always reference <code>patient_id</code> while performing the <code>drop_duplicates()</code> operation. Can anyone please help?</p>
<p>I don't believe you can ignore them as they are being read, but once they have been read you can easily drop them using <code>drop_duplicates</code>. </p> <pre><code>df = pd.read_csv(filename, index_col = 0) &gt;&gt;&gt; df.drop_duplicates() patient_id age_in_years CENSUS_REGION URBAN_RURAL_STATUS YEAR 0 11511 7 Northeast Urban 2011 1 9882613 73 South Urban 2011 2 32190339 49 West Urban 2011 </code></pre> <p>EDIT:</p> <p>You probably just want to call it once, e.g. </p> <pre><code>df = pd.read_csv(filename, index_col = 0).drop_duplicates() </code></pre> <p>Depending on the cleanliness of your underlying data, you may first need to pre-process to strip spaces, etc.</p>
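<p>If only the patient_id should decide what counts as a duplicate (rather than the whole row), one option is to deduplicate on the index directly, since <code>index_col=0</code> makes patient_id the index here. A small sketch:</p> <pre><code>df = pd.read_csv(filename, index_col=0)
df = df[~df.index.duplicated(keep='first')]   # keep the first row per patient_id

# or, if patient_id is kept as an ordinary column:
# df = pd.read_csv(filename).drop_duplicates(subset=['patient_id'])
</code></pre>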
python|csv|pandas
1
479
31,711,686
Should stats.norm.pdf give the same result as stats.gaussian_kde in Python?
<p>I was trying to estimate PDF of 1-D using <code>gaussian_kde</code>. However, when I plot pdf using <code>stats.norm.pdf</code>, it gives me different result. Please correct me if I am wrong, I think they should give quite similar result. Here's my code. </p> <pre><code> npeaks = 9 mean = np.array([0.2, 0.3, 0.38, 0.55, 0.65,0.7,0.75,0.8,0.82]) #peak locations support = np.arange(0,1.01,0.01) std = 0.03 pkfun = sum(stats.norm.pdf(support, loc=mean[i], scale=std) for i in range(0,npeaks)) df = pd.DataFrame(support) X = df.iloc[:,0] min_x, max_x = X.min(), X.max() plt.figure(1) plt.plot(support,pkfun) kernel = stats.gaussian_kde(X) grid = 100j X= np.mgrid[min_x:max_x:grid] Z = np.reshape(kernel(X), X.shape) # plot KDE plt.figure(2) plt.plot(X, Z) plt.show() </code></pre> <p>Also, when I get the first derivative of <code>stats.gaussian_kde</code> was far from the original signal. However, the result of first derivative of <code>stats.norm.pdf</code> does make sense. So, I am assuming I might have error in my code above. </p> <p>Value of X= np.mgrid[min_x:max_x:grid]:</p> <pre><code>[ 0. 0.01010101 0.02020202 0.03030303 0.04040404 0.05050505 0.06060606 0.07070707 0.08080808 0.09090909 0.1010101 0.11111111 0.12121212 0.13131313 0.14141414 0.15151515 0.16161616 0.17171717 0.18181818 0.19191919 0.2020202 0.21212121 0.22222222 0.23232323 0.24242424 0.25252525 0.26262626 0.27272727 0.28282828 0.29292929 0.3030303 0.31313131 0.32323232 0.33333333 0.34343434 0.35353535 0.36363636 0.37373737 0.38383838 0.39393939 0.4040404 0.41414141 0.42424242 0.43434343 0.44444444 0.45454545 0.46464646 0.47474747 0.48484848 0.49494949 0.50505051 0.51515152 0.52525253 0.53535354 0.54545455 0.55555556 0.56565657 0.57575758 0.58585859 0.5959596 0.60606061 0.61616162 0.62626263 0.63636364 0.64646465 0.65656566 0.66666667 0.67676768 0.68686869 0.6969697 0.70707071 0.71717172 0.72727273 0.73737374 0.74747475 0.75757576 0.76767677 0.77777778 0.78787879 0.7979798 0.80808081 0.81818182 0.82828283 0.83838384 0.84848485 0.85858586 0.86868687 0.87878788 0.88888889 0.8989899 0.90909091 0.91919192 0.92929293 0.93939394 0.94949495 0.95959596 0.96969697 0.97979798 0.98989899 1. ] </code></pre> <p>Value of X = df.iloc[:,0]:</p> <pre><code>[ 0. 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 0.09 0.1 0.11 0.12 0.13 0.14 0.15 0.16 0.17 0.18 0.19 0.2 0.21 0.22 0.23 0.24 0.25 0.26 0.27 0.28 0.29 0.3 0.31 0.32 0.33 0.34 0.35 0.36 0.37 0.38 0.39 0.4 0.41 0.42 0.43 0.44 0.45 0.46 0.47 0.48 0.49 0.5 0.51 0.52 0.53 0.54 0.55 0.56 0.57 0.58 0.59 0.6 0.61 0.62 0.63 0.64 0.65 0.66 0.67 0.68 0.69 0.7 0.71 0.72 0.73 0.74 0.75 0.76 0.77 0.78 0.79 0.8 0.81 0.82 0.83 0.84 0.85 0.86 0.87 0.88 0.89 0.9 0.91 0.92 0.93 0.94 0.95 0.96 0.97 0.98 0.99 1. ] </code></pre>
<p>In the row below you compute a pdf for every peak location over the 100 support points, with <code>std = 0.03</code>. So you get a matrix with <strong>100 elements per row</strong>, and then you <strong>sum</strong> it <strong>elementwise</strong>, <strong>result:</strong> <a href="https://i.stack.imgur.com/PeEFt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PeEFt.png" alt="enter image description here"></a></p> <p>Thus you get a graph with 9 narrow peaks (narrow because of <code>std = 0.03</code>). Are you sure that this was your purpose with this row?</p> <p>This will never give a graph similar to the kernel estimate based on the original data, <strong>result:</strong> <a href="https://i.stack.imgur.com/Pwu3g.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Pwu3g.png" alt="enter image description here"></a></p> <blockquote> <p>pkfun = sum(stats.norm.pdf(support, loc=mean[i], scale=std) for i in range(0,npeaks))</p> </blockquote>
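<p>Part of the difference also comes from what gaussian_kde is given: in the question it is fitted on the evenly spaced support grid itself, not on data drawn from the mixture, so the estimate ends up close to a uniform density on [0, 1]. A rough sketch of fitting the KDE on actual samples from the nine-peak mixture (variable names follow the question; pkfun is divided by npeaks so both curves integrate to 1):</p> <pre><code>samples = np.concatenate(
    [np.random.normal(loc=m, scale=std, size=500) for m in mean])
kernel = stats.gaussian_kde(samples)

plt.plot(support, pkfun / npeaks, label='true mixture pdf')
plt.plot(support, kernel(support), label='KDE of samples')
plt.legend()
plt.show()
</code></pre>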
python|python-2.7|pandas|scipy|probability-density
0
480
41,442,956
Get filtered row names and save them in a list
<p>I have a dataframe like this:</p> <pre><code> Area 2016-09-02 2016-09-03 2016-09-04 2016-09-05 39.TFO 1-14 6588.67 6604.03 6567.42 6421.12 40.TFO 15-28 6843.58 6929.41 6922.24 6801.98 41.TFO 29-42 3546.59 3634.46 3770.85 3813.15 42.TFO 43-52 3816.58 3834.43 3830.02 3822.59 </code></pre> <p>I want to save Area Values in a list like</p> <p>[TFO 1-14, TFO 15-28, TFO 29-42, TFO 43-52]</p> <p>I have tried this code but i am getting the wrong output.</p> <p>df['Area'].str.extract('TFO (.*)'))</p> <p>How can it be achieved?</p>
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.split.html" rel="nofollow noreferrer"><code>split</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/text.html#indexing-with-str" rel="nofollow noreferrer">indexing-with-str</a>:</p> <pre><code>print (df['Area'].str.split('.').str[1].tolist()) ['TFO 1-14', 'TFO 15-28', 'TFO 29-42', 'TFO 43-52'] </code></pre> <p>Another solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.extract.html" rel="nofollow noreferrer"><code>str.extract</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.tolist.html" rel="nofollow noreferrer"><code>tolist</code></a>:</p> <pre><code>print (df['Area'].str.extract('(TFO .*)', expand=False).tolist()) ['TFO 1-14', 'TFO 15-28', 'TFO 29-42', 'TFO 43-52'] </code></pre>
python|pandas|dataframe
4
481
41,559,151
Python: How does converters work in genfromtxt() function?
<p>I am new to Python, I have a following example that I don't understand</p> <p>The following is a csv file with some data</p> <pre><code>%%writefile wood.csv item,material,number 100,oak,33 110,maple,14 120,oak,7 145,birch,3 </code></pre> <p>Then, the example tries to define a function to convert those trees name above to integers. </p> <pre><code>tree_to_int = dict(oak = 1, maple=2, birch=3) def convert(s): return tree_to_int.get(s, 0) </code></pre> <p>The first question is why is there a "0" after "s"? I removed that "0" and get same result.</p> <p>The last step is to read those data by numpy.array</p> <pre><code>data = np.genfromtxt('wood.csv', delimiter=',', dtype=np.int, names=True, converters={1:convert} ) </code></pre> <p>I was wondering for the converters argument, what does {1:convert} exact mean? Especially what does number 1 mean in this case? </p>
<p>For the second question, according to the documentation (<a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.genfromtxt.html" rel="nofollow noreferrer">https://docs.scipy.org/doc/numpy/reference/generated/numpy.genfromtxt.html</a>), <code>{1:convert}</code> is a dictionary whose keys are column numbers (where the first column is column 0) and whose values are functions that convert the entries in that column. </p> <p>So in this code, the 1 indicates column one of the csv file, the one with the names of the trees. Including this argument causes numpy to use the <code>convert</code> function to replace the tree names with their corresponding numbers in <code>data</code>.</p>
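<p>Regarding the first question, the 0 in <code>tree_to_int.get(s, 0)</code> is the default value returned when the name is not in the dictionary; leaving it out makes <code>get</code> return None for unknown names instead. A tiny illustration (plain Python, independent of genfromtxt):</p> <pre><code>tree_to_int = dict(oak=1, maple=2, birch=3)
print(tree_to_int.get('oak', 0))    # 1
print(tree_to_int.get('pine', 0))   # 0  (default used for an unknown name)
print(tree_to_int.get('pine'))      # None (no default given)
</code></pre>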
python|python-2.7|numpy
1
482
61,437,090
Copying data from one dataframe to another with different column names
<p>I have one dataframe:</p> <pre><code>df = pd.DataFrame([[1, 2, 3, 4, 5], [6, 7, 8, 9, 10]], columns=list('ABCDE')) </code></pre> <p>I need to create another dataframe by copying data from df in this format:</p> <pre><code>df2 = pd.DataFrame([[1, 2, 0], [ 3, 4, 5], [ 6, 7, 0], [ 8, 9, 10]], columns=list('PQR')) </code></pre> <p>I've tried append and concat, but mainly facing an issue due to the column names being different. Any help is much appreciated.</p>
<p>Reshaping your data requires getting the first two columns and concatenating them with the last three columns:</p> <pre><code>a = df.iloc[:,:2].set_axis([0, 1], axis=1)
a
   0  1
0  1  2
1  6  7

b = df.iloc[:,2:].set_axis([0, 1, 2], axis=1)
b
   0  1   2
0  3  4   5
1  8  9  10

res = pd.concat((a,b)).fillna(0).astype(int).reset_index(drop=True)
res
   0  1   2
0  1  2   0
1  6  7   0
2  3  4   5
3  8  9  10
</code></pre>
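<p>If the row order from the question matters (each [A, B, 0] pair immediately followed by the [C, D, E] of the same original row), a NumPy-based sketch that pads the first pair with a zero and reshapes could look like this; it assumes the columns are exactly A..E as in the example:</p> <pre><code>import numpy as np

left = np.column_stack([df[['A', 'B']].values, np.zeros(len(df), dtype=int)])
right = df[['C', 'D', 'E']].values
res = pd.DataFrame(np.hstack([left, right]).reshape(-1, 3), columns=list('PQR'))
print(res)
#    P  Q   R
# 0  1  2   0
# 1  3  4   5
# 2  6  7   0
# 3  8  9  10
</code></pre>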
python-3.x|pandas|dataframe
0
483
61,555,097
Mapping text data through huggingface tokenizer
<p>I have my encode function that looks like this:</p> <pre class="lang-py prettyprint-override"><code>from transformers import BertTokenizer, BertModel MODEL = 'bert-base-multilingual-uncased' tokenizer = BertTokenizer.from_pretrained(MODEL) def encode(texts, tokenizer=tokenizer, maxlen=10): # import pdb; pdb.set_trace() inputs = tokenizer.encode_plus( texts, return_tensors='tf', return_attention_masks=True, return_token_type_ids=True, pad_to_max_length=True, max_length=maxlen ) return inputs['input_ids'], inputs["token_type_ids"], inputs["attention_mask"] </code></pre> <p>I want to get my data encoded on the fly by doing this:</p> <pre class="lang-py prettyprint-override"><code>x_train = (tf.data.Dataset.from_tensor_slices(df_train.comment_text.astype(str).values) .map(encode)) </code></pre> <p>However, this chucks the error:</p> <pre><code>ValueError: Input is not valid. Should be a string, a list/tuple of strings or a list/tuple of integers. </code></pre> <p>Now from my understanding when I set a breakpoint inside <code>encode</code> it was because I was sending a non-numpy array. How do I get huggingface transformers to play nice with tensorflow strings as inputs?</p> <p>If you need a dummy dataframe here it is:</p> <pre><code>df_train = pd.DataFrame({'comment_text': ['Today was a good day']*5}) </code></pre> <h2>What I tried</h2> <p>So I tried to use <code>from_generator</code> so that I can parse in the strings to the <code>encode_plus</code> function. However, this does not work with TPUs.</p> <pre class="lang-py prettyprint-override"><code>AUTO = tf.data.experimental.AUTOTUNE def get_gen(df): def gen(): for i in range(len(df)): yield encode(df.loc[i, 'comment_text']) , df.loc[i, 'toxic'] return gen shapes = ((tf.TensorShape([maxlen]), tf.TensorShape([maxlen]), tf.TensorShape([maxlen])), tf.TensorShape([])) train_dataset = tf.data.Dataset.from_generator( get_gen(df_train), ((tf.int32, tf.int32, tf.int32), tf.int32), shapes ) train_dataset = train_dataset.batch(BATCH_SIZE).prefetch(AUTO) </code></pre> <h2>Version Info:</h2> <p><code>transformers.__version__, tf.__version__</code> => <code>('2.7.0', '2.1.0')</code></p>
<p>the tokenizer of bert works on a string, a list/tuple of strings or a list/tuple of integers. So, check is your data getting converted to string or not. To apply tokenizer on whole dataset I used Dataset.map, but this runs on graph mode. So, I need to wrap it in a tf.py_function. The tf.py_function will pass regular tensors (with a value and a .numpy() method to access it), to the wrapped python function. My data was getting converted to bytes after using py_function hence I applied tf.compat.as_str to convert bytes to string.</p> <pre><code>tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') def encode(lang1, lang2): lang1 = tokenizer.encode(tf.compat.as_str(lang1.numpy()), add_special_tokens=True) lang2 = tokenizer.encode(tf.compat.as_str(lang2.numpy()), add_special_tokens=True) return lang1, lang2 def tf_encode(pt, en): result_pt, result_en = tf.py_function(func = encode, inp = [pt, en], Tout=[tf.int64, tf.int64]) result_pt.set_shape([None]) result_en.set_shape([None]) return result_pt, result_en train_dataset = dataset3.map(tf_encode) BUFFER_SIZE = 200 BATCH_SIZE = 64 train_dataset = train_dataset.shuffle(BUFFER_SIZE).padded_batch(BATCH_SIZE, padded_shapes=(60, 60)) a,p = next(iter(train_dataset)) </code></pre>
tensorflow|tensorflow-datasets|huggingface-transformers
6
484
61,404,032
Getting ModuleNotFoundError only if debug mode is enabled
<p>I have a Flask server which loads <code>Tensorflow</code> models on startup in an external service module.</p> <p>The problem is if debug mode is enabled, so <code>FLASK_DEBUG = 1</code>, the app crashes because it is not able to load a certain module from Tensorflow. <code>tensorflow_core.keras</code> to be precise.</p> <p>However, running the application without debugging works.</p> <p>The project structure looks like this:</p> <pre class="lang-none prettyprint-override"><code>controllers/ __init__.py # Exposes controllers_blueprint controller.py services/ service.py __init__.py # Exposes app, socketio model_util.py # Used to load models, imports Tensorflow server.py </code></pre> <p>So, in a <code>services/service.py</code>, on module load, I call a function which does the job:</p> <pre class="lang-py prettyprint-override"><code>import model_util def _load_models(): # Loading models here using model_util.load(..) return models _models = _load_models() </code></pre> <p>This module gets imported from a <code>controllers/controller.py</code></p> <pre class="lang-py prettyprint-override"><code>from services import service from flask import current_app, request from application import socketio NAMESPACE = '/test' @socketio.on('connect', namespace=NAMESPACE) def handle_connect(): service.create_session(request.sid) </code></pre> <p>Where the <code>controllers/__init__.py</code> exposes a <code>controller_blueprint</code>:</p> <pre class="lang-py prettyprint-override"><code>from flask import Blueprint from controllers import controller controllers_blueprint = Blueprint('controllers', __name__) </code></pre> <p>Which then gets registered in my <code>server.py</code></p> <pre class="lang-py prettyprint-override"><code>from flask import current_app from controllers import controllers_blueprint from application import app, socketio def main(): app.register_blueprint(controllers_blueprint) socketio.run(app) if __name__ == '__main__': main() </code></pre> <p>For completeness, this is <code>__init__.py</code></p> <pre class="lang-py prettyprint-override"><code>from flask import Flask from flask_caching import Cache from flask_socketio import SocketIO HTTP_SERVER_PORT = 5000 app = Flask(__name__, static_folder='../static', static_url_path='') cache = Cache(config={'CACHE_TYPE': 'simple'}) cache.init_app(app) socketio = SocketIO(app) socketio.init_app(app) </code></pre> <p>Also <code>model_util.py</code> is where the Tensorflow modules are being imported:</p> <pre class="lang-py prettyprint-override"><code>import tensorflow as tf from tensor2tensor.data_generators.text_encoder import SubwordTextEncoder # ... def load(...): # .. 
pass </code></pre> <p>The error I am getting is</p> <pre class="lang-none prettyprint-override"><code>Traceback (most recent call last): File "/Users/sfalk/miniconda3/envs/web-service/lib/python3.7/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/Users/sfalk/miniconda3/envs/web-service/lib/python3.7/runpy.py", line 85, in _run_code exec(code, run_globals) File "/Users/sfalk/miniconda3/envs/web-service/lib/python3.7/site-packages/flask/__main__.py", line 15, in &lt;module&gt; main(as_module=True) File "/Users/sfalk/miniconda3/envs/web-service/lib/python3.7/site-packages/flask/cli.py", line 967, in main cli.main(args=sys.argv[1:], prog_name="python -m flask" if as_module else None) File "/Users/sfalk/miniconda3/envs/web-service/lib/python3.7/site-packages/flask/cli.py", line 586, in main return super(FlaskGroup, self).main(*args, **kwargs) File "/Users/sfalk/miniconda3/envs/web-service/lib/python3.7/site-packages/click/core.py", line 782, in main rv = self.invoke(ctx) File "/Users/sfalk/miniconda3/envs/web-service/lib/python3.7/site-packages/click/core.py", line 1259, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "/Users/sfalk/miniconda3/envs/web-service/lib/python3.7/site-packages/click/core.py", line 1066, in invoke return ctx.invoke(self.callback, **ctx.params) File "/Users/sfalk/miniconda3/envs/web-service/lib/python3.7/site-packages/click/core.py", line 610, in invoke return callback(*args, **kwargs) File "/Users/sfalk/miniconda3/envs/web-service/lib/python3.7/site-packages/click/decorators.py", line 73, in new_func return ctx.invoke(f, obj, *args, **kwargs) File "/Users/sfalk/miniconda3/envs/web-service/lib/python3.7/site-packages/click/core.py", line 610, in invoke return callback(*args, **kwargs) File "/Users/sfalk/miniconda3/envs/web-service/lib/python3.7/site-packages/flask/cli.py", line 860, in run_command extra_files=extra_files, File "/Users/sfalk/miniconda3/envs/web-service/lib/python3.7/site-packages/werkzeug/serving.py", line 1050, in run_simple run_with_reloader(inner, extra_files, reloader_interval, reloader_type) File "/Users/sfalk/miniconda3/envs/web-service/lib/python3.7/site-packages/werkzeug/_reloader.py", line 337, in run_with_reloader reloader.run() File "/Users/sfalk/miniconda3/envs/web-service/lib/python3.7/site-packages/werkzeug/_reloader.py", line 202, in run for filename in chain(_iter_module_files(), self.extra_files): File "/Users/sfalk/miniconda3/envs/web-service/lib/python3.7/site-packages/werkzeug/_reloader.py", line 24, in _iter_module_files filename = getattr(module, "__file__", None) File "/Users/sfalk/miniconda3/envs/web-service/lib/python3.7/site-packages/tensorflow/__init__.py", line 50, in __getattr__ module = self._load() File "/Users/sfalk/miniconda3/envs/web-service/lib/python3.7/site-packages/tensorflow/__init__.py", line 44, in _load module = _importlib.import_module(self.__name__) File "/Users/sfalk/miniconda3/envs/web-service/lib/python3.7/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "&lt;frozen importlib._bootstrap&gt;", line 1006, in _gcd_import File "&lt;frozen importlib._bootstrap&gt;", line 983, in _find_and_load File "&lt;frozen importlib._bootstrap&gt;", line 965, in _find_and_load_unlocked ModuleNotFoundError: No module named 'tensorflow_core.keras' </code></pre> <p>What is the reason for this and how can I resolve this issue?</p> <h3>Additional Information</h3> <p>I am running the application with PyCharm. 
This is the output on startup:</p> <pre class="lang-none prettyprint-override"><code>FLASK_APP = application/server.py FLASK_ENV = development FLASK_DEBUG = 1 In folder /Users/sfalk/workspaces/git/web-service /Users/sfalk/miniconda3/envs/web-service/bin/python -m flask run * Serving Flask app "application/server.py" (lazy loading) * Environment: development * Debug mode: on * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit) * Restarting with stat * Debugger is active! * Debugger PIN: 298-908-268 </code></pre>
<p>Apparently there is a <a href="https://github.com/pallets/werkzeug/issues/461" rel="nofollow noreferrer">bug</a> in <code>werkzeug</code> which is used by <code>flask</code> to serve <code>flask</code> apps, when running <code>flask</code> apps in debug mode with <code>python -m</code>.</p> <p>To prevent this from happening, you can start the app without the <code>-m</code> option, e.g with <code>flask run</code>.</p>
python|tensorflow|flask
0
485
68,450,437
UNet loss is NaN + UserWarning: Warning: converting a masked element to nan
<p>I'm training a UNet, which class looks like this:</p> <pre><code>class UNet(nn.Module): def __init__(self): super().__init__() # encoder (downsampling) # Each enc_conv/dec_conv block should look like this: # nn.Sequential( # nn.Conv2d(...), # ... (2 or 3 conv layers with relu and batchnorm), # ) self.enc_conv0 = nn.Sequential( nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3, stride=1, padding=1), nn.BatchNorm2d(64), nn.ReLU(), nn.Conv2d(in_channels=64, out_channels=64, kernel_size=3, padding=1, stride=1), nn.BatchNorm2d(64), nn.ReLU() ) self.pool0 = nn.MaxPool2d(kernel_size=2, stride=2, return_indices=False) # 256 -&gt; 128 self.enc_conv1 = nn.Sequential( nn.Conv2d(in_channels=64, out_channels=128, kernel_size=3, padding=1), nn.BatchNorm2d(128), nn.ReLU(), nn.Conv2d(in_channels=128, out_channels=128, kernel_size=3, padding=1), nn.BatchNorm2d(128), nn.ReLU() ) self.pool1 = nn.MaxPool2d(kernel_size=2, stride=2, return_indices=False) # 128 -&gt; 64 self.enc_conv2 = nn.Sequential( nn.Conv2d(in_channels=128, out_channels=256, kernel_size=3, padding=1), nn.BatchNorm2d(256), nn.ReLU(), nn.Conv2d(in_channels=256, out_channels=256, kernel_size=3, padding=1), nn.BatchNorm2d(256), nn.ReLU(), nn.Conv2d(in_channels=256, out_channels=256, kernel_size=3, padding=1), nn.BatchNorm2d(256), nn.ReLU() ) self.pool2 = nn.MaxPool2d(kernel_size=2, stride=2) # 64 -&gt; 32 self.enc_conv3 = nn.Sequential( nn.Conv2d(in_channels=256, out_channels=512, kernel_size=3, stride=1, padding=1), nn.BatchNorm2d(512), nn.ReLU(), nn.Conv2d(in_channels=512, out_channels=512, kernel_size=3, stride=1, padding=1), nn.BatchNorm2d(512), nn.ReLU(), nn.Conv2d(in_channels=512, out_channels=512, kernel_size=3, stride=1, padding=1), nn.BatchNorm2d(512), nn.ReLU() ) self.pool3 = nn.MaxPool2d(kernel_size=2, stride=2) # 32 -&gt; 16 # bottleneck self.bottleneck_conv = nn.Sequential( nn.Conv2d(in_channels=512, out_channels=1024, kernel_size=1, stride=1, padding=0), nn.BatchNorm2d(1024), nn.ReLU(), nn.Conv2d(in_channels=1024, out_channels=512, kernel_size=1, stride=1, padding=0), nn.BatchNorm2d(512), nn.ReLU() ) # decoder (upsampling) self.upsample0 = nn.UpsamplingBilinear2d(scale_factor=2) # 16 -&gt; 32 self.dec_conv0 = nn.Sequential( nn.Conv2d(in_channels=512*2, out_channels=256, kernel_size=3, stride=1, padding=1), nn.BatchNorm2d(256), nn.ReLU(), nn.Conv2d(in_channels=256, out_channels=256, kernel_size=3, stride=1, padding=1), nn.BatchNorm2d(256), nn.ReLU(), nn.Conv2d(in_channels=256, out_channels=256, kernel_size=3, stride=1, padding=1), nn.BatchNorm2d(256), nn.ReLU() ) self.upsample1 = nn.UpsamplingBilinear2d(scale_factor=2) # 32 -&gt; 64 self.dec_conv1 = nn.Sequential( nn.Conv2d(in_channels=256*2, out_channels=128, kernel_size=3, stride=1, padding=1), nn.BatchNorm2d(128), nn.ReLU(), nn.Conv2d(in_channels=128, out_channels=128, kernel_size=3, stride=1, padding=1), nn.BatchNorm2d(128), nn.ReLU(), nn.Conv2d(in_channels=128, out_channels=128, kernel_size=3, stride=1, padding=1), nn.BatchNorm2d(128), nn.ReLU() ) self.upsample2 = nn.UpsamplingBilinear2d(scale_factor=2) # 64 -&gt; 128 self.dec_conv2 = nn.Sequential( nn.Conv2d(in_channels=128*2, out_channels=64, kernel_size=3, stride=1, padding=1), nn.BatchNorm2d(64), nn.ReLU(), nn.Conv2d(in_channels=64, out_channels=64, kernel_size=3, stride=1, padding=1), nn.BatchNorm2d(64), nn.ReLU() ) self.upsample3 = nn.UpsamplingBilinear2d(scale_factor=2) # 128 -&gt; 256 self.dec_conv3 = nn.Sequential( nn.Conv2d(in_channels=64*2, out_channels=1, kernel_size=3, stride=1, padding=1), 
nn.BatchNorm2d(1), nn.ReLU(), nn.Conv2d(in_channels=1, out_channels=1, kernel_size=3, stride=1, padding=1), nn.BatchNorm2d(1), nn.ReLU(), nn.Conv2d(in_channels=1, out_channels=1, kernel_size=3, stride=1, padding=1), nn.BatchNorm2d(1) ) def forward(self, x): # encoder e0 = self.enc_conv0(x) pool0 = self.pool0(e0) e1 = self.enc_conv1(pool0) pool1 = self.pool1(e1) e2 = self.enc_conv2(pool1) pool2 = self.pool2(e2) e3 = self.enc_conv3(pool2) pool3 = self.pool3(e3) # bottleneck b = self.bottleneck_conv(pool3) # decoder d0 = self.dec_conv0(torch.cat([self.upsample0(b), e3], 1)) d1 = self.dec_conv1(torch.cat([self.upsample1(d0), e2], 1)) d2 = self.dec_conv2(torch.cat([self.upsample2(d1), e1], 1)) d3 = self.dec_conv3(torch.cat([self.upsample3(d2), e0], 1)) # no activation return d3 </code></pre> <p>Train method:</p> <pre><code>def train(model, opt, loss_fn, score_fn, epochs, data_tr, data_val): torch.cuda.empty_cache() losses_train = [] losses_val = [] scores_train = [] scores_val = [] for epoch in range(epochs): tic = time() print('* Epoch %d/%d' % (epoch+1, epochs)) avg_loss = 0 model.train() # train mode for X_batch, Y_batch in data_tr: # data to device X_batch = X_batch.to(device) Y_batch = Y_batch.to(device) # set parameter gradients to zero opt.zero_grad() # forward Y_pred = model(X_batch) loss = loss_fn(Y_pred, Y_batch) # forward-pass loss.backward() # backward-pass opt.step() # update weights # calculate loss to show the user avg_loss += loss / len(data_tr) toc = time() print('loss: %f' % avg_loss) losses_train.append(avg_loss) avg_score_train = score_fn(model, iou_pytorch, data_tr) scores_train.append(avg_score_train) # show intermediate results model.eval() # testing mode avg_loss_val = 0 #Y_hat = # detach and put into cpu for X_val, Y_val in data_val: with torch.no_grad(): Y_hat = model(X_val.to(device)).detach().cpu() loss = loss_fn(Y_hat, Y_val) avg_loss_val += loss / len(data_val) toc = time() print('loss_val: %f' % avg_loss_val) losses_val.append(avg_loss_val) avg_score_val = score_fn(model, iou_pytorch, data_val) scores_val.append(avg_score_val) torch.cuda.empty_cache() # Visualize tools clear_output(wait=True) for k in range(5): plt.subplot(2, 6, k+1) plt.imshow(np.rollaxis(X_val[k].numpy(), 0, 3), cmap='gray') plt.title('Real') plt.axis('off') plt.subplot(2, 6, k+7) plt.imshow(Y_hat[k, 0], cmap='gray') plt.title('Output') plt.axis('off') plt.suptitle('%d / %d - loss: %f' % (epoch+1, epochs, avg_loss)) plt.show() return (losses_train, losses_val, scores_train, scores_val) </code></pre> <p>However, when executing I get train_loss and val_loss both equal nan and also a warning. In addition, when plotting the segmented picture and the target one, the output picture is not shown. I tried to execute with different loss function, but still the same. There is probably something wrong with my class.</p> <p>Could you please help me? Thanks in advance.</p>
<p>I am not sure if this is your error, but your last convolution block (self.dec_conv3) looks odd. I would only reduce to 1 channel at the very last convolution and not perform two convolutions with 1 input and 1 output channel. Also, ending with a batchnorm can only produce normalized outputs, which could be far from what you really want:</p> <pre><code>self.dec_conv3 = nn.Sequential(
    nn.Conv2d(in_channels=64*2, out_channels=32, kernel_size=3, stride=1, padding=1),
    nn.BatchNorm2d(32),
    nn.ReLU(),
    nn.Conv2d(in_channels=32, out_channels=32, kernel_size=3, stride=1, padding=1),
    nn.BatchNorm2d(32),
    nn.ReLU(),
    nn.Conv2d(in_channels=32, out_channels=1, kernel_size=3, stride=1, padding=1)
)
</code></pre> <p>It would be interesting to know whether your loss is NaN already at the first iteration or only after a few iterations. Maybe you use a loss function that divides by zero?</p>
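<p>To narrow down where the NaN first appears, a small diagnostic inside the training loop can help. This is only a sketch that adds checks around the existing loss computation and does not change the model:</p> <pre><code>torch.autograd.set_detect_anomaly(True)   # reports the op that produced NaN/Inf during backward

loss = loss_fn(Y_pred, Y_batch)
if torch.isnan(loss):
    print('NaN loss at this batch')
    print('NaN in inputs :', torch.isnan(X_batch).any().item())
    print('NaN in outputs:', torch.isnan(Y_pred).any().item())
</code></pre>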
python|neural-network|pytorch|semantic-segmentation
1
486
68,743,773
How do I map a multi-level dictionary into a DataFrame series
<p>I want to map a multi level dictionary according to two columns in a DataFrame. What I have so far is this:</p> <pre><code>df = pd.DataFrame({ 'level_1':['A','B','C','D','A','B','C','D'], 'level_2':[1,2,3,1,2,1,2,3] }) dict = { 'A':{1:0.5, 2:0.8, 3:0.4}, 'B':{1:0.4, 2:0.3, 3:0.7}, 'C':{1:0.3, 2:0.6, 3:0.6}, 'D':{1:0.5, 2:0.4, 3:0.4} } df['mapped'] = np.where( df.level_1 == 'A', df.level_2.map(dict['A']), np.where( df.level_1 == 'B', df.level_2.map(dict['B']), np.where( df.level_1 == 'C', df.level_2.map(dict['C']), np.where( df.level_1 == 'D', df.level_2.map(dict['D']), np.nan ) ) ) ) </code></pre> <p>There must be a better way but I can't seem to find it. It gets really tedious as my real dictionary has a lot more options on level_2.</p> <p>Thanks!</p>
<p>We can try <code>MultiIndex.map</code></p> <pre><code>df['mapped'] = df.set_index(['level_1', 'level_2']).index.map(pd.DataFrame(d).unstack()) </code></pre> <hr /> <pre><code> level_1 level_2 mapped 0 A 1 0.5 1 B 2 0.3 2 C 3 0.6 3 D 1 0.5 4 A 2 0.8 5 B 1 0.4 6 C 2 0.6 7 D 3 0.4 </code></pre> <p>Note: <code>dict</code> is a builtin in python, so using <code>dict</code> as a variable name must be avoided. Here I have used <code>d</code> to represent your mapping dictionary</p>
python|pandas|dictionary
4
487
68,696,475
Why am I getting a type error for the second piece of code while the first one worked?
<p>Code:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np #generate some fake data x = np.random.random(10)*10 y = np.random.random(10)*10 print(x) #[4.98113477 3.14756425 2.44010373 0.22081256 9.09519374 1.29612129 3.65639393 7.72182208 1.05662368 2.33318726] col = np.where(x&lt;1,'k',np.where(y&lt;5,'b','r')) print(col) #['r' 'r' 'r' 'k' 'b' 'b' 'r' 'b' 'r' 'b'] t = [] for i in range(1,10): t.append(i) print(t) #[1, 2, 3, 4, 5, 6, 7, 8, 9] cols = np.where(t % 2 == 0,'b','r') print(cols) </code></pre> <p>Error:</p> <pre class="lang-py prettyprint-override"><code> --------------------------------------------------------------------------- TypeError Traceback (most recent call last) &lt;ipython-input-58-350816096da9&gt; in &lt;module&gt; 6 print(t) 7 ----&gt; 8 cols = np.where(t % 2 == 0,'b','r') 9 print(cols) TypeError: unsupported operand type(s) for %: 'list' and 'int' </code></pre> <p>I am trying to generate color code, blue for even and red for odd numbers. Why do I get the error here, while it worked in the first piece of code?</p>
<p>The &quot;first time&quot; in your code snippet, <code>x</code> and <code>y</code> are numpy arrays, created from the calls to <code>np.random.random</code>:</p> <pre><code>col = np.where(x&lt;1,'k',np.where(y&lt;5,'b','r')) </code></pre> <p>This is not the case with <code>t</code>. As some of the comments have indicated, <code>t</code> is a generic Python list. You cannot apply the mod operator to a Python list.</p> <pre><code>&gt;&gt;&gt; [1, 2, 3, 4] % 2 Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; TypeError: unsupported operand type(s) for %: 'list' and 'int' </code></pre> <p>You can, however, apply the mod operator to a numpy array as Barmar indicated:</p> <pre class="lang-py prettyprint-override"><code>t = np.array([2,3,4,5]) t % 2 # returns array([0, 1, 0, 1]) np.where(t % 2 == 0, 'a', 'b') # returns array(['a', 'b', 'a', 'b'], dtype='&lt;U1') </code></pre>
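<p>So the direct fix for the last snippet is to build <code>t</code> as an array (or convert the existing list) before applying the modulo:</p> <pre><code>t = np.arange(1, 10)                      # or: t = np.array(t)
cols = np.where(t % 2 == 0, 'b', 'r')     # blue for even, red for odd
print(cols)   # ['r' 'b' 'r' 'b' 'r' 'b' 'r' 'b' 'r']
</code></pre>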
python|numpy|types
0
488
36,674,609
How to create a function to return an equation
<p>I am trying to create the normal random variable pdf equation.</p> <p>This function would return the final computed value of pdf for a specific x.</p> <pre><code>def normpdf(x, mu=0, sigma=1): # u = (float((x-mu) / abs(sigma))) y = exp(-(float((x-mu) / abs(sigma)))*(float((x-mu) / abs(sigma)))/2) / (sqrt(2*pi*sigma*sigma)) return y </code></pre> <p>What I am trying to do is something like this:</p> <pre><code>def normpdfeqn(mu=0, sigma=1): y = exp(-(float((x-mu) / abs(sigma)))*(float((x-mu) / abs(sigma)))/2) / (sqrt(2*pi*sigma*sigma)) return y </code></pre> <p>So whenever I want to use this equation for integration or differentiation, I could directly call normpdfeqn() and use the equation returned in the integrate function.</p> <p>I have tried this:</p> <pre><code>from sympy import * x = Symbol('x') mu = Symbol('mu') sigma = Symbol('sigma') def normpdfeqn(mu=0, sigma=1): # u = (float((x-mu) / abs(sigma))) y = exp(-(float((x-mu) / abs(sigma)))*(float((x-mu) / abs(sigma)))/2) / float((sqrt(2*pi*sigma*sigma))) return y print(integrate(normpdfeqn(), (x, -inf, inf))) </code></pre> <p>But I get this error:</p> <pre><code>TypeError: can't convert expression to float </code></pre> <p>What should I do?</p>
<p>Don't use the <code>float</code>-function in sympy expressions:</p> <pre><code>def normpdfeqn(x, mu=0, sigma=1): return exp(-((x-mu) / abs(sigma))**2 / 2) / (sqrt(2*pi*sigma*sigma)) x = Symbol('x') print(integrate(normpdfeqn(x), (x, -inf, inf))) </code></pre>
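<p>Since the returned object is a symbolic expression, it can be differentiated in the same way as it is integrated. A small sketch (using sympy's <code>oo</code> for infinity):</p> <pre><code>y = normpdfeqn(x)
print(integrate(y, (x, -oo, oo)))   # 1
print(diff(y, x))                   # symbolic first derivative of the pdf
</code></pre>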
python|python-2.7|python-3.x|numpy|scipy
1
489
36,314,255
how to make pandas.read_sql() not convert all headers to lower case
<p>I have a function that pulls tables from our a table in our SQL server into a dataframe in Python, but it forces all the column headers to be lower case. The code is as follows:</p> <pre><code>connection = pypyodbc.connect('Driver={SQL Server};' 'Server=' + server + ';' 'Database=' + database + ';' 'uid=' + username + ';' 'pwd=' + password + ';') query = 'SELECT * FROM ' + tableName #set dict value to dataframe imported from SQL tableDict[tableName] = pd.read_sql(query, connection) </code></pre> <p>The headers in SQL are for example: pmiManufacturingHeadline_Level It shows up in my pandas dataframe as: pmimanufacturingheadline_level</p> <p>Anyone have an idea how to make pandas.read_sql keep the original capitalization?</p>
<p>I think PyPyODBC does it for you:</p> <p>Here is what I found in the source code of <code>PyPyODBC</code> ver. 1.3.3, lines 28-29:</p> <pre><code>version = '1.3.3'
lowercase=True
</code></pre> <p>and lines 1771-1772:</p> <pre><code>        if lowercase:
            col_name = col_name.lower()
</code></pre> <p>So you can change the behaviour if you want:</p> <pre><code>import pypyodbc
pypyodbc.lowercase = False  # force the ODBC driver to use case-sensitive column names
</code></pre>
python|sql|pandas
12
490
36,416,725
python/pandas - converting date and hour integers to datetime
<p>I have a dataframe that has a date column and an hour column. </p> <pre><code> DATE HOUR 2015-1-1 1 2015-1-1 2 . . . . . . 2015-1-1 24 </code></pre> <p>I want to convert these columns into a datetime format something like: <code>2015-12-26 01:00:00</code> </p>
<p>You could first convert <code>df.DATE</code> to datetime column and add <code>df.HOUR</code> delta via <code>timedelta64[h]</code></p> <pre><code>In [10]: df Out[10]: DATE HOUR 0 2015-1-1 1 1 2015-1-1 2 2 2015-1-1 24 In [11]: pd.to_datetime(df.DATE) + df.HOUR.astype('timedelta64[h]') Out[11]: 0 2015-01-01 01:00:00 1 2015-01-01 02:00:00 2 2015-01-02 00:00:00 dtype: datetime64[ns] </code></pre> <p>Or, use <code>pd.to_timedelta</code></p> <pre><code>In [12]: pd.to_datetime(df.DATE) + pd.to_timedelta(df.HOUR, unit='h') Out[12]: 0 2015-01-01 01:00:00 1 2015-01-01 02:00:00 2 2015-01-02 00:00:00 dtype: datetime64[ns] </code></pre>
python|datetime|pandas
14
491
52,983,061
How to create a list from the dataframe?
<p>I have a dataframe with coordinates x0,y0,x1,y1. I am trying to get a list of the coordinate</p> <p>datafrme is like-</p> <pre><code> x0 y0 x1 y1 x2 y2 ... 1 179.0 77.0 186.0 93.0 165.0 91.0... 2 178.0 76.0 185.0 93.0 164.0 91.0... </code></pre> <p>desired output </p> <pre><code> [ [ [179.0,77.0], [186.0,93.0], [165.0,91.0 ],... ], [ [178.0,76.0 ],[185.0,93.0 ],[164.0,91.0 ] ,... ] ] </code></pre> <p>I am trying to get a list of coordinates with row-wise</p> <p>I have tried this </p> <pre><code>df = pd.read_csv(path,header=None) df.reset_index(drop=True, inplace=True) data = df.values.tolist() </code></pre> <p>Thank you for any and all insight! </p>
<pre><code>[[[y['x0'], y['y0']], [y['x1'], y['y1']], [y['x2'], y['y2']]] for x, y in df.iterrows()]
</code></pre> <p>Here <code>df</code> is your dataframe.</p> <p>The output will look like the format you described:</p> <pre><code>[
  [ [179.0, 77.0], [186.0, 93.0], [165.0, 91.0], ... ],
  [ [178.0, 76.0], [185.0, 93.0], [164.0, 91.0], ... ]
]
</code></pre>
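<p>If the coordinate columns are ordered x0, y0, x1, y1, ... as in the example, a reshape avoids the per-row loop entirely. This sketch assumes every column is a coordinate and that they appear in exactly that order:</p> <pre><code>coords = df.values.reshape(len(df), -1, 2).tolist()
# [[[x0, y0], [x1, y1], [x2, y2]], ...] with one inner list per row
</code></pre>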
python|pandas
1
492
52,969,708
ModuleNotFoundError: No module named 'tensorflow', even when tensorflow is installed
<p>I am trying to install tensorflow for one of my machine learning projects. However, even though I have installed it, I still get this error</p> <pre><code>ModuleNotFoundError: No module named 'tensorflow' </code></pre> <p>To help illustrate this better, I have created a <code>test.py</code> file, with the following content:</p> <pre><code>import tensorflow as tf print('Hello world!') </code></pre> <p>However, still the same error, on line 1.</p> <p><strong>Relevant questions:</strong></p> <p>I've tried doing many other answers, but none of them seems to help. Any answers would be appreciated.</p> <p><strong>Here's some debugging outputs that might help:</strong></p> <blockquote> <p>pip3 show tensorflow</p> </blockquote> <pre><code>Name: tensorflow Version: 1.11.0 Summary: TensorFlow is an open source machine learning framework for everyone. Home-page: https://www.tensorflow.org/ Author: Google Inc. Author-email: [email protected] License: Apache 2.0 Location: c:\program files\anaconda3\lib\site-packages Requires: absl-py, termcolor, keras-applications, astor, six, tensorboard, keras-preprocessing, wheel, gast, setuptools, grpcio, protobuf, numpy Required-by: </code></pre> <blockquote> <p>pip3 --version</p> </blockquote> <pre><code>pip 18.1 from c:\program files\anaconda3\lib\site-packages\pip (python 3.6) </code></pre> <blockquote> <p>python --version</p> </blockquote> <pre><code>Python 3.6.0 :: Anaconda 4.3.0 (64-bit) </code></pre> <blockquote> <p>py test.py</p> </blockquote> <pre><code>Traceback (most recent call last): File "test.py", line 1, in &lt;module&gt; import tensorflow as tf ModuleNotFoundError: No module named 'tensorflow' </code></pre>
<p>I fixed it! Special thanks to the folks at the Tensorflow Talk slack who helped me, especially @akofman.</p> <p>It was a combination of 2 problems:</p> <p><strong>Problem 1</strong></p> <p>It seems that one of the reasons it was failing was due to one of tensorflow's dependencies being outdated/misinstalled/something. The dependency is <code>h5py</code>. I found out about this by attempting to run <code>import tensorflow</code> in the python interpreter (type <code>python</code>), which gave me a long stack trace, unlike the test file (see problem 2). I fixed this by reinstalling the dependency.</p> <p><strong>Problem 2</strong></p> <p>It turns out that I have 3, <em>that's right, 3!</em>, versions of python on my computer</p> <p><code>python -V</code> ---> 3.6.0</p> <p><code>python -V</code> (in an anaconda enviorment) ---> 3.6.7</p> <p><code>py -V</code> ---> 3.7.0</p> <p>I was running my test file with <code>py</code>, which is apparently 3.7.0 (I thought it was synonymous with <code>python</code>), I guess tensorflow doesn't support that version?</p>
python|tensorflow|failed-installation
1
493
65,861,699
Understanding the nature of merge in pandas
<p>I want to understand the <code>pd.merge</code> work nature. I have two dataframes that have unequal length. When trying to merge them through this command</p> <pre><code>merged = pd.merge(surgical, comps[comps_ls+['mrn','Admission']], on=['mrn','Admission'], how='left') </code></pre> <p>The length was different from expected as follows</p> <pre><code>length of comps: 4829 length of surgical: 7939 length of merged: 9531 </code></pre> <p>From my own understanding, <code>merged</code> dataframe should have as same as the length of <code>comps</code> dataframe since <code>left</code> join will look for matching keys in both dataframes and discard the rest. As long as <code>comps</code> length is less than <code>surgical</code> length, the <code>merged</code> length should be 4829. Why does it have 9531?? larger number than the length of both. Even if I changed the <code>how</code> parameter to <code>&quot;right&quot;</code>, <code>merged</code> has a larger number than expected.</p> <p>Generally, I want to know how to merge two dataframes that have unequal length specifying some columns from the right dataframe. Also, how do I validate the merge operation?. Find this might be helpful:</p> <pre><code>comps_ls: list of complications I want to throw on surgical dataframe. mrn, Admission: the key columns I want to merge the two dataframes on. </code></pre> <p>Note: a teammate suggests this solution</p> <pre><code>merged = pd.merge(surgical, comps[comps_ls+['mrn','Admission']], on=['mrn','Admission'], how='left') merged = surgical.join(merged, on=['mrn'], how='left', lsuffix=&quot;&quot;, rsuffix=&quot;_r&quot;) </code></pre> <p>The length of the output was as follows</p> <pre><code>length of comps: 4829 length of surgical: 7939 length of merged: 7939 </code></pre> <p>How can this help?</p>
<p>The &quot;issue&quot; is with duplicated merge keys, which can cause the resulting merge to be larger than the original. For a left merge you can expect the result to be in between <code>N_rows_left</code> and <code>N_rows_left * N_rows_right</code> rows long. The lower bound is in the case that both the left and right DataFrames have no duplicate merge keys, and the upper bound is the case when the left and right DataFrames have the single same value for the merge keys on every row.</p> <p>Here's a worked example. All DataFrames are 4 rows long, but <code>df2</code> has duplicate merge keys. As a result when <code>df2</code> is merged to <code>df</code> the output is longer than <code>df</code>, because for the row with <code>2</code> as the key in df, <strong>both</strong> rows in <code>df2</code> are matched.</p> <pre><code>import pandas as pd df = pd.DataFrame({'key': [1,2,3,4]}) df1 = pd.DataFrame({'row': range(4), 'key': [2,3,4,5]}) df2 = pd.DataFrame({'row': range(4), 'key': [2,2,3,3]}) # Neither frame duplicated on merge key, result is same length (4) as left. df.merge(df1, on='key', how='left') # key row #0 1 NaN #1 2 0.0 #2 3 1.0 #3 4 2.0 # df2 is duplicated on the merge keys so we get &gt;4 rows df.merge(df2, on='key', how='left') # key row #0 1 NaN #1 2 0.0 # Both `2` rows matched #2 2 1.0 # ^ #3 3 2.0 # Both `3` rows matched #4 3 3.0 # ^ #5 4 NaN </code></pre>
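<p>For the validation part of the question, <code>pandas.merge</code> has arguments that make these situations explicit: <code>validate</code> raises an error if the key relationship is not what you expect, and <code>indicator</code> adds a column showing where each row came from. A short sketch with the frames above:</p> <pre><code># raises MergeError, because df2 is not unique on 'key'
df.merge(df2, on='key', how='left', validate='one_to_one')

# passes, and the extra '_merge' column marks unmatched rows as 'left_only'
df.merge(df1, on='key', how='left', validate='one_to_one', indicator=True)
</code></pre>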
python|pandas
2
494
65,541,166
Predicting purchase probability based on prior orders?
<p>Let's assume we have the following dataframe:</p> <pre><code>merged = pd.DataFrame({'week' : [0, 0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2], 'shopper' : [0, 0, 0, 1, 1, 0, 1, 1, 2, 0, 2, 2], 'product' : [63, 80, 91, 42, 77, 55, 77, 95, 77, 98, 202, 225], 'price' : [543, 644, 770, 620, 560, 354, 525, 667, 525, 654, 783, 662], 'discount' : [0, 0, 10, 12, 0, 30, 10, 0, 0, 5, 0, 0] }) print(merged) week shopper product price discount 0 0 0 63 543 0 1 0 0 80 644 0 2 0 0 91 770 10 3 0 1 42 620 12 4 0 1 77 560 0 5 1 0 55 354 30 6 1 1 77 525 10 7 1 1 95 667 0 8 1 2 77 525 0 9 2 0 98 654 5 10 2 2 202 783 0 11 2 2 225 662 0 </code></pre> <p>Can you think of a way to <strong>estimate the probability that each shopper will buy each product</strong> in week 3? I am looking for an end result that looks somewhat like this:</p> <pre><code> week shopper product y 0 3 0 55 0.32 1 3 0 63 0.66 2 3 0 80 0.77 3 3 0 91 0.54 4 3 0 98 0.23 5 3 1 42 0.24 6 3 1 77 0.51 7 3 1 95 0.40 8 3 2 77 0.12 9 3 2 202 0.53 10 3 2 225 0.39 </code></pre> <p>I've thought of using the amount of time a customer-product combination has appeared in the past or the amount of time between the orders to forecast the probability that it reoccurs next week, but I don't know how to implement that.</p> <p>I would be very thankful for any help!</p>
<p>This is not an easy task. The accuracy depends on the number of past observations, and data as small as what you share does not give accurate solutions. However, the code below might give you an idea. As you guessed, you need to find relations between the products and use those relations. Below, I first get the average paid price to see how much the shopper generally tends to pay.</p> <pre><code>idx_0_0 = np.multiply(merged['week'] == 0,1) * np.multiply(merged['shopper'] == 0,1)
averaged_paid_price_0_0 = np.average(merged['price'][idx_0_0 == 1])
idx_0_0 = np.multiply(merged['week'] == 1,1) * np.multiply(merged['shopper'] == 0,1)
averaged_paid_price_1_0 = np.average(merged['price'][idx_0_0 == 1])
idx_0_0 = np.multiply(merged['week'] == 2,1) * np.multiply(merged['shopper'] == 0,1)
averaged_paid_price_2_0 = np.average(merged['price'][idx_0_0 == 1])
total_paid_average_0 = (averaged_paid_price_2_0 + averaged_paid_price_1_0 + averaged_paid_price_0_0)/3
</code></pre> <p>Then I have divided each product price by <code>total_paid_average_0</code>, as below.</p> <pre><code>merged_price_points_0 = merged['price'] / total_paid_average_0
</code></pre> <p>I am basically trying to give them points.</p> <p>After that, I have looked at whether there is any relation between the shoppers' tendencies and discounts.</p> <pre><code>idx_0_0_discount = np.multiply(merged['week'] == 0,1) * np.multiply(merged['discount'] != 0,1) * np.multiply(merged['shopper'] == 0,1)
discount_exist_0_0 = np.sum(idx_0_0_discount) / np.sum(np.multiply(merged['shopper'] == 0,1))
idx_0_0_discount = np.multiply(merged['week'] == 1,1) * np.multiply(merged['discount'] != 0,1) * np.multiply(merged['shopper'] == 0,1)
discount_exist_1_0 = np.sum(idx_0_0_discount) / np.sum(np.multiply(merged['shopper'] == 0,1))
idx_0_0_discount = np.multiply(merged['week'] == 2,1) * np.multiply(merged['discount'] != 0,1) * np.multiply(merged['shopper'] == 0,1)
discount_exist_2_0 = np.sum(idx_0_0_discount) / np.sum(np.multiply(merged['shopper'] == 0,1))
discount_point_0 = (discount_exist_0_0 + discount_exist_1_0 + discount_exist_2_0) / 3
</code></pre> <p>Again, I have calculated points.
After all that, I have tried to combine all the points.</p> <p>You can find the full code below.</p> <pre><code>import pandas as pd
import numpy as np

merged = pd.DataFrame({'week' : [0, 0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2],
                       'shopper' : [0, 0, 0, 1, 1, 0, 1, 1, 2, 0, 2, 2],
                       'product' : [63, 80, 91, 42, 77, 55, 77, 95, 77, 98, 202, 225],
                       'price' : [543, 644, 770, 620, 560, 354, 525, 667, 525, 654, 783, 662],
                       'discount' : [0, 0, 10, 12, 0, 30, 10, 0, 0, 5, 0, 0]
                       })

idx_0_0 = np.multiply(merged['week'] == 0,1) * np.multiply(merged['shopper'] == 0,1)
averaged_paid_price_0_0 = np.average(merged['price'][idx_0_0 == 1])
idx_0_0 = np.multiply(merged['week'] == 1,1) * np.multiply(merged['shopper'] == 0,1)
averaged_paid_price_1_0 = np.average(merged['price'][idx_0_0 == 1])
idx_0_0 = np.multiply(merged['week'] == 2,1) * np.multiply(merged['shopper'] == 0,1)
averaged_paid_price_2_0 = np.average(merged['price'][idx_0_0 == 1])
total_paid_average_0 = (averaged_paid_price_2_0 + averaged_paid_price_1_0 + averaged_paid_price_0_0)/3

idx_0_0 = np.multiply(merged['week'] == 0,1) * np.multiply(merged['shopper'] == 1,1)
averaged_paid_price_0_1 = np.mean(merged['price'][idx_0_0 == 1])
idx_0_0 = np.multiply(merged['week'] == 1,1) * np.multiply(merged['shopper'] == 1,1)
averaged_paid_price_1_1 = np.mean(merged['price'][idx_0_0 == 1])
idx_0_0 = np.multiply(merged['week'] == 2,1) * np.multiply(merged['shopper'] == 1,1)
averaged_paid_price_2_1 = np.mean(merged['price'][idx_0_0 == 1])
total_paid_average_1 = (averaged_paid_price_2_1 + averaged_paid_price_1_1 + averaged_paid_price_0_1)/3

idx_0_0 = np.multiply(merged['week'] == 0,1) * np.multiply(merged['shopper'] == 2,1)
averaged_paid_price_0_2 = np.mean(merged['price'][idx_0_0 == 1])
idx_0_0 = np.multiply(merged['week'] == 1,1) * np.multiply(merged['shopper'] == 2,1)
averaged_paid_price_1_2 = np.mean(merged['price'][idx_0_0 == 1])
idx_0_0 = np.multiply(merged['week'] == 2,1) * np.multiply(merged['shopper'] == 2,1)
averaged_paid_price_2_2 = np.mean(merged['price'][idx_0_0 == 1])
total_paid_average_2 = (averaged_paid_price_2_2 + averaged_paid_price_1_2 + averaged_paid_price_0_2)/3

merged_price_points_0 = merged['price'] / total_paid_average_0

idx_0_0_discount = np.multiply(merged['week'] == 0,1) * np.multiply(merged['discount'] != 0,1) * np.multiply(merged['shopper'] == 0,1)
discount_exist_0_0 = np.sum(idx_0_0_discount) / np.sum(np.multiply(merged['shopper'] == 0,1))
idx_0_0_discount = np.multiply(merged['week'] == 1,1) * np.multiply(merged['discount'] != 0,1) * np.multiply(merged['shopper'] == 0,1)
discount_exist_1_0 = np.sum(idx_0_0_discount) / np.sum(np.multiply(merged['shopper'] == 0,1))
idx_0_0_discount = np.multiply(merged['week'] == 2,1) * np.multiply(merged['discount'] != 0,1) * np.multiply(merged['shopper'] == 0,1)
discount_exist_2_0 = np.sum(idx_0_0_discount) / np.sum(np.multiply(merged['shopper'] == 0,1))
discount_point_0 = (discount_exist_0_0 + discount_exist_1_0 + discount_exist_2_0) / 3

merged_price_points_0 = merged_price_points_0.T
points_list = list()
total_point = list()
for counter in range(len(merged['product'])):
    if merged['discount'][counter] != 0:
        points_list.append(discount_point_0)
    else:
        points_list.append(0)
    if merged_price_points_0[counter] &gt; 1:
        merged_price_points_0[counter] = merged_price_points_0[counter] - 1
    else:
        merged_price_points_0[counter] = 1 - merged_price_points_0[counter]
    total_point.append(merged_price_points_0[counter] + points_list[counter])

sum_of_points = np.sum(total_point)
possibility_of_product_week3_for_0 = total_point / sum_of_points

print(&quot;Possibility of 3rd Week for 0&quot;)
for counter in range(len(merged['product'])):
    print(str(merged['product'][counter]) + &quot;||&quot; + str(possibility_of_product_week3_for_0[counter]))
</code></pre> <p>Output</p> <pre><code>Possibility of 3rd Week for 0
63||0.005959173323190062
80||0.05166730062127548
91||0.18671231139850392
42||0.10112843920375306
77||0.0037403321922150397
55||0.1769494104222138
77||0.07938379612019782
95||0.06479016102447063
77||0.01622923798656016
98||0.12052745023456325
202||0.13097502218841128
225||0.06193736528464565
</code></pre> <p>I would suggest searching for what Chris commented on. This is not a solid answer, but it might give you an idea. The main idea is defining relationships between products and the reasons a shopper buys them, and giving them points.</p>
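<p>On the question's own idea of using how often a shopper-product pair has appeared before: a very simple baseline (a sketch only, using the column names from the question and assuming past purchase frequency is an acceptable proxy for next week's purchase probability) is to count occurrences per shopper-product pair and divide by the number of observed weeks:</p> <pre><code>import pandas as pd

n_weeks = merged['week'].nunique()  # number of observed weeks (3 in the example)

# Relative purchase frequency per shopper-product pair as a crude probability estimate.
freq = (merged.groupby(['shopper', 'product'])
              .size()
              .div(n_weeks)
              .rename('y')
              .reset_index())
freq['week'] = 3
print(freq[['week', 'shopper', 'product', 'y']])
</code></pre> <p>This ignores price and discount effects entirely; a proper model would also take those as inputs.</p>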
python|pandas|model|prediction
0
495
65,587,761
KeyError: 'accuracy'
<p>I was trying to plot the training and validation learning curves in Keras; however, the following code produces <code>KeyError: 'accuracy'</code>. Any help would be much appreciated. Thanks.</p> <pre><code>#plotting graphs for accuracy
plt.figure(0)
plt.plot(history.history['accuracy'], label='training accuracy')
plt.plot(history.history['val_accuracy'], label='val accuracy')
plt.title('Accuracy')
plt.xlabel('epochs')
plt.ylabel('accuracy')
plt.legend()
plt.show()

plt.figure(1)
plt.plot(history.history['loss'], label='training loss')
plt.plot(history.history['val_loss'], label='val loss')
plt.title('Loss')
plt.xlabel('epochs')
plt.ylabel('loss')
plt.legend()
plt.show()
</code></pre> <p>This is the error:</p> <pre><code>----&gt; 3 plt.plot(history.history['accuracy'] , label='training accuracy') KeyError: 'accuracy' </code></pre>
<p>Keras only records an <code>'accuracy'</code> entry in <code>history.history</code> if you asked for that metric when compiling the model, so make sure you pass <code>metrics=['accuracy']</code> to <code>model.compile()</code> (it is a compile argument, not a fit argument):</p> <pre><code>model.compile(loss=..., optimizer=..., metrics=['accuracy'])  # plus your other arguments
</code></pre> <p>If you have already done so, check whether you wrote <code>metrics=['acc']</code> instead. In that case the recorded keys are <code>'acc'</code> and <code>'val_acc'</code>, so change this line in your code</p> <pre><code>plt.plot(history.history['accuracy'], label='training accuracy')
</code></pre> <p>to</p> <pre><code>plt.plot(history.history['acc'], label='training accuracy')
</code></pre> <p>(and likewise use <code>'val_acc'</code> for the validation curve).</p>
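<p>A quick way to see which keys are actually available before plotting is to print them. The snippet below is only a sketch: <code>model</code>, <code>x_train</code> and <code>y_train</code> are placeholders for whatever you already have in your script.</p> <pre><code>history = model.fit(x_train, y_train, validation_split=0.2, epochs=10)

# The exact key names ('accuracy' vs 'acc') depend on how the metric was
# spelled in model.compile(), so inspect them first.
print(history.history.keys())

acc_key = 'accuracy' if 'accuracy' in history.history else 'acc'
plt.plot(history.history[acc_key], label='training accuracy')
plt.plot(history.history['val_' + acc_key], label='val accuracy')
</code></pre>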
python|tensorflow|plot|label
2
496
65,724,063
Optimizing Cython loop compared to Numpy
<pre><code>#cython: boundscheck=False, wraparound=False, nonecheck=False, cdivision=True, language_level=3 cpdef int query(double[::1] q, double[:,::1] data) nogil: cdef: int n = data.shape[0] int dim = data.shape[1] int best_i = -1 double best_ip = -1 double ip for i in range(n): ip = 0 for j in range(dim): ip += q[j] * data[i, j] if ip &gt; best_ip: best_i = i best_ip = ip return best_i </code></pre> <p>After compiling, I time the code from Python:</p> <pre><code>import numpy as np import ip n, dim = 10**6, 10**2 X = np.random.randn(n, dim) q = np.random.randn(dim) %timeit ip.query(q, X) </code></pre> <p>This takes roughly 100ms. Meanwhile the equivalent <code>numpy code</code>:</p> <pre><code>%timeit np.argmax(q @ X.T) </code></pre> <p>Takes just around 50ms.</p> <p>This is odd, since the <code>NumPy</code> code seemingly has to allocate the big array <code>q @ X.T</code> before taking the argmax. I thus wonder if there are some optimizations I am lacking?</p> <p>I have added <code>extra_compile_args=[&quot;-O3&quot;, '-march=native'],</code> to my setup.py and I also tried changing the function definition to</p> <pre><code>cpdef int query(np.ndarray[double] q, np.ndarray[double, ndim=2] data): </code></pre> <p>but it had virtually no difference in performance.</p>
<p>The operation <code>q @ X.T</code> will be mapped to an implementation of matrix-vector multiplication (<a href="http://www.netlib.org/lapack/explore-html/d7/d15/group__double__blas__level2_gadd421a107a488d524859b4a64c1901a9.html" rel="nofollow noreferrer"><code>dgemv</code></a>) from either OpenBLAS or MKL (depending on your distribution) under the hood - that means you are up against one of the best-optimized algorithms out there.</p> <p>The resulting vector has 1M elements, which amounts to about 8MB of memory. 8MB will not always fit into the L3 cache, but even RAM has about 15GB/s bandwidth, thus writing/reading 8MB will take at most 1-2ms - not much compared to the roughly 50ms of overall running time.</p> <p>The most obvious issue with your code is that it doesn't evaluate the sum in the same order as <code>q @ X.T</code>. It calculates</p> <pre><code>((q[0]*data[i,0]+q[1]*data[i,1])+q[2]*data[i,2])+... </code></pre> <p>Because of IEEE 754 the compiler is not allowed to reorder the operations and executes them in this non-optimal order: in order to calculate the second addition, the operation must wait until the first addition is performed. This approach doesn't use the full potential of modern architectures.</p> <p>A good <code>dgemv</code> implementation will choose a much better order of operations. A similar issue, but with sums, can be found in this <a href="https://stackoverflow.com/q/49389068/5769463">SO post</a>.</p> <p>To level the field one could use <code>-ffast-math</code>, which allows the compiler to reorder operations and thus make better use of the pipelines.</p> <p>Here are results on my machine for your benchmark:</p> <pre><code>%timeit query(q, X)            # 101 ms
%timeit query_ffastmath(q, X)  # 56.3 ms
%timeit np.argmax(q @ X.T)     # 50.2 ms
</code></pre> <p>It is still about 10% worse, but I would be really surprised if the compiler could beat a hand-crafted version created by experts especially for my processor.</p>
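<p>For reference, here is a minimal sketch of how the flag could be passed, assuming a plain setuptools/Cython build like the one described in the question, with the module in a file called <code>ip.pyx</code> (adjust names to your setup). Note that <code>-ffast-math</code> relaxes IEEE 754 semantics, so results can differ in the last bits:</p> <pre><code># setup.py -- build with: python setup.py build_ext --inplace
from setuptools import Extension, setup
from Cython.Build import cythonize

ext = Extension(
    'ip',
    sources=['ip.pyx'],
    # -ffast-math allows the compiler to reorder floating-point operations
    extra_compile_args=['-O3', '-march=native', '-ffast-math'],
)

setup(ext_modules=cythonize([ext]))
</code></pre>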
python|numpy|optimization|cython
2
497
65,881,093
pandas category that includes the closest greater value
<p>I have the following dataframe:</p> <pre><code>df = pd.DataFrame({'id': ['a', 'a', 'a', 'a', 'a', 'b', 'b', 'b', 'b', 'c','c','c'],
                   'cumsum': [1, 3, 6, 9, 10, 4, 9, 11, 13, 5, 8, 19]})

   id  cumsum
0   a       1
1   a       3
2   a       6
3   a       9
4   a      10
5   b       4
6   b       9
7   b      11
8   b      13
9   c       5
10  c       8
11  c      19
</code></pre> <p>I would like to get a new column with a category such that, for a specific input value, within each <code>id</code> the closest value greater than (or equal to) the input still belongs to the first category, and every later value falls into the next category.</p> <p>For example:</p> <pre><code>input = 8
</code></pre> <p>desired output:</p> <pre><code>   id  cumsum  category
0   a       1         0
1   a       3         0
2   a       6         0
3   a       9         0
4   a      10         1
5   b       4         0
6   b       9         0
7   b      11         1
8   b      13         1
9   c       5         0
10  c       8         0
11  c      19         1
</code></pre>
<p>You can get the first value greater than or equal to the input within each group with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.first.html" rel="nofollow noreferrer"><code>GroupBy.first</code></a> after filtering with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.ge.html" rel="nofollow noreferrer"><code>Series.ge</code></a>, then compare the column against those per-id values mapped back by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.map.html" rel="nofollow noreferrer"><code>Series.map</code></a> using <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.gt.html" rel="nofollow noreferrer"><code>Series.gt</code></a>, and last convert the mask to integers:</p> <pre><code>val = 8

s = df[df['cumsum'].ge(val)].groupby('id')['cumsum'].first()
df['category'] = df['cumsum'].gt(df['id'].map(s)).astype(int)
print (df)
   id  cumsum  category
0   a       1         0
1   a       3         0
2   a       6         0
3   a       9         0
4   a      10         1
5   b       4         0
6   b       9         0
7   b      11         1
8   b      13         1
9   c       5         0
10  c       8         0
11  c      19         1
</code></pre> <p>Another idea is to use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.where.html" rel="nofollow noreferrer"><code>Series.where</code></a> with <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.DataFrameGroupBy.transform.html" rel="nofollow noreferrer"><code>GroupBy.transform</code></a>:</p> <pre><code>val = 8

s1 = df['cumsum'].where(df['cumsum'].ge(val)).groupby(df['id']).transform('min')
#alternative
s1 = df['cumsum'].where(df['cumsum'].ge(val)).groupby(df['id']).transform('first')

df['category'] = df['cumsum'].gt(s1).astype(int)
</code></pre>
python|pandas
1
498
65,647,762
Ensure that an array has all values encoded as 1 and -1, not 1 and 0
<p>There is a 1 dimensional <code>numpy</code> array named <code>&quot;target_y&quot;</code>. My task is to ensure that <code>&quot;target_y&quot;</code> has all values encoded as <code>1</code> and <code>-1</code>, not <code>1</code> and <code>0</code> before performing logistic regression. Once I do it, I need to assign it to <code>&quot;prepared_y&quot;</code> and return it. Can I write something like:</p> <pre><code>if target_y in 1,-1: prepared_y = target_y </code></pre>
<p>Check out <a href="https://numpy.org/doc/stable/reference/generated/numpy.all.html" rel="nofollow noreferrer"><code>np.all</code></a>:</p> <p>We can do some quick boolean arithmetic to check whether every value is <code>1</code> or <code>-1</code>:</p> <pre class="lang-py prettyprint-override"><code>if np.all((target_y == 1) + (target_y == -1)):  # True only when every value is 1 or -1
    prepared_y = target_y
</code></pre> <p>If you want to change all <code>0</code>s to <code>-1</code>s, slice for all <code>0</code>s:</p> <pre class="lang-py prettyprint-override"><code>target_y[np.argwhere(target_y == 0)] = -1
</code></pre>
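<p>As a small additional sketch (assuming <code>target_y</code> only ever contains 0s and 1s), <code>np.where</code> can build the re-encoded array in one step without modifying the original:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np

target_y = np.array([1, 0, 0, 1, 1, 0])  # hypothetical example labels

# Keep the 1s, turn everything that equals 0 into -1.
prepared_y = np.where(target_y == 0, -1, target_y)
print(prepared_y)  # [ 1 -1 -1  1  1 -1]
</code></pre>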
python|arrays|pandas|list|numpy
0
499
2,433,587
vectorize is indeterminate
<p>I'm trying to vectorize a simple function in numpy and getting inconsistent behavior. I expect my code to return 0 for values &lt; 0.5 and the unchanged value otherwise. Strangely, different runs of the script from the command line yield varying results: sometimes it works correctly, and sometimes I get all 0's. It doesn't matter which of the three lines I use for the case when d &lt;= T. It does seem to be correlated with whether the first value to be returned is 0. Any ideas? Thanks.</p> <pre><code>import numpy as np def my_func(d, T=0.5): if d &gt; T: return d #if d &lt;= T: return 0 else: return 0 #return 0 N = 4 A = np.random.uniform(size=N**2) A.shape = (N,N) print A f = np.vectorize(my_func) print f(A) $ python x.py [[ 0.86913815 0.96833127 0.54539153 0.46184594] [ 0.46550903 0.24645558 0.26988519 0.0959257 ] [ 0.73356391 0.69363161 0.57222389 0.98214089] [ 0.15789303 0.06803493 0.01601389 0.04735725]] [[ 0.86913815 0.96833127 0.54539153 0. ] [ 0. 0. 0. 0. ] [ 0.73356391 0.69363161 0.57222389 0.98214089] [ 0. 0. 0. 0. ]] $ python x.py [[ 0.37127366 0.77935622 0.74392301 0.92626644] [ 0.61639086 0.32584431 0.12345342 0.17392298] [ 0.03679475 0.00536863 0.60936931 0.12761859] [ 0.49091897 0.21261635 0.37063752 0.23578082]] [[0 0 0 0] [0 0 0 0] [0 0 0 0] [0 0 0 0]] </code></pre>
<p>If this really is the problem you want to solve, then there's a much better solution:</p> <pre><code>A[A&lt;=0.5] = 0.0 </code></pre> <p>The problem with your code, however, is that if the condition passes, you are returning the <em>integer</em> 0, not the <em>float</em> 0.0. From the documentation:</p> <blockquote> <p>The data type of the output of <code>vectorized</code> is determined by calling the function with the first element of the input. This can be avoided by specifying the <code>otypes</code> argument.</p> </blockquote> <p>So when the very first entry is <code>&lt;0.5</code>, it tries to create an integer, not float, array. You should change <code>return 0</code> to </p> <pre><code>return 0.0 </code></pre> <p>Alternately, if you don't want to touch <code>my_func</code>, you can use</p> <pre><code>f = np.vectorize(my_func, otypes=[np.float]) </code></pre>
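<p>To see the dtype issue directly, here is a quick sketch that reuses <code>my_func</code> from the question and compares the output dtypes with and without <code>otypes</code> (the exact integer dtype reported can vary by platform):</p> <pre><code>import numpy as np

f_bad = np.vectorize(my_func)                        # output dtype inferred from the first element
f_good = np.vectorize(my_func, otypes=[np.float64])  # output dtype forced to float

A = np.array([[0.1, 0.9], [0.7, 0.2]])
print(f_bad(A).dtype)   # an integer dtype, because my_func(0.1) returns the int 0
print(f_good(A).dtype)  # float64 regardless of the first element
</code></pre>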
python|numpy
7