Columns: Unnamed: 0 (int64, 0 to 378k), id (int64, 49.9k to 73.8M), title (string, 15 to 150 chars), question (string, 37 to 64.2k chars), answer (string, 37 to 44.1k chars), tags (string, 5 to 106 chars), score (int64, -10 to 5.87k)
1,100
46,740,584
Python dataframe exploded list column into multiple rows
<p>I have a dataframe like this:</p> <pre><code> desc id info [a,b,c] 2 type [u,v,w] 18 tail </code></pre> <p>Three columns: desc, id, info, and desc is a list. I want this:</p> <pre><code> des id info a 2 type b 2 type c 2 type u 18 tail v 18 tail w 18 tail </code></pre> <p>That means exploding the list column into multiple rows while the other columns stay unchanged. I really don't know how to do this...</p>
<p>Here is one way</p> <pre><code>df.set_index(['id', 'info']).desc.apply(pd.Series).stack()\ .reset_index(name = 'desc').drop('level_2', axis = 1) id info desc 0 2 type a 1 2 type b 2 2 type c 3 18 tail u 4 18 tail v 5 18 tail w </code></pre>
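<p>On newer pandas (0.25+), a shorter sketch of the same reshape with <code>explode</code> (column names as in the question):</p> <pre><code># alternative sketch: explode the list column; the other columns are repeated per element
out = df.explode('desc').reset_index(drop=True)
</code></pre>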
python|pandas|dataframe
5
1,101
62,961,194
How does BertForSequenceClassification classify on the CLS vector?
<p><strong>Background:</strong></p> <p>Following along with this <a href="https://stackoverflow.com/questions/60876394/does-bertforsequenceclassification-classify-on-the-cls-vector">question</a> when using bert to classify sequences the model uses the &quot;[CLS]&quot; token representing the classification task. According to the paper:</p> <blockquote> <p>The first token of every sequence is always a special classification token ([CLS]). The final hidden state corresponding to this token is used as the aggregate sequence representation for classification tasks.</p> </blockquote> <p>Looking at the huggingfaces repo their BertForSequenceClassification utilizes the bert pooler method:</p> <pre><code>class BertPooler(nn.Module): def __init__(self, config): super().__init__() self.dense = nn.Linear(config.hidden_size, config.hidden_size) self.activation = nn.Tanh() def forward(self, hidden_states): # We &quot;pool&quot; the model by simply taking the hidden state corresponding # to the first token. first_token_tensor = hidden_states[:, 0] pooled_output = self.dense(first_token_tensor) pooled_output = self.activation(pooled_output) return pooled_output </code></pre> <p>We can see they take the first token (CLS) and use this as a representation for the whole sentence. Specifically they perform <code>hidden_states[:, 0]</code> which looks a lot like its taking the first element from each state rather than taking the first tokens hidden state?</p> <p><strong>My Question:</strong></p> <p>What I don't understand is how do they encode the information from the entire sentence into this token? Is the CLS token a regular token which has its own embedding vector that &quot;learns&quot; the sentence level representation? Why can't we just use the average of the hidden states (the output of the encoder) and use this to classify?</p> <p><strong>EDIT</strong>: After thinking a little about it: Because we use the CLS tokens hidden state to predict, is the CLS tokens embedding being trained on the task of classification as this is the token being used to classify (thus being the major contributor to the error which gets propagated to its weights?)</p>
<blockquote> <p>Is the CLS token a regular token which has its own embedding vector that &quot;learns&quot; the sentence level representation?</p> </blockquote> <p>Yes:</p> <pre class="lang-py prettyprint-override"><code>from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') model = BertModel.from_pretrained('bert-base-uncased') clsToken = tokenizer.convert_tokens_to_ids('[CLS]') print(clsToken) #or print(tokenizer.cls_token, tokenizer.cls_token_id) print(model.get_input_embeddings()(torch.tensor(clsToken))) </code></pre> <p>Output:</p> <pre><code>101 [CLS] 101 tensor([ 1.3630e-02, -2.6490e-02, -2.3503e-02, -7.7876e-03, 8.5892e-03, -7.6645e-03, -9.8808e-03, 6.0184e-03, 4.6921e-03, -3.0984e-02, 1.8883e-02, -6.0093e-03, -1.6652e-02, 1.1684e-02, -3.6245e-02, ... 5.4162e-03, -3.0037e-02, 8.6773e-03, -1.7942e-03, 6.6826e-03, -1.1929e-02, -1.4076e-02, 1.6709e-02, 1.6860e-03, -3.3842e-03, 8.6805e-03, 7.1340e-03, 1.5147e-02], grad_fn=&lt;EmbeddingBackward&gt;) </code></pre> <p>You can get a list of all other special tokens for your model with:</p> <pre class="lang-py prettyprint-override"><code>print(tokenizer.all_special_tokens) </code></pre> <p>Output:</p> <pre><code>['[CLS]', '[UNK]', '[PAD]', '[SEP]', '[MASK]'] </code></pre> <blockquote> <p>What I don't understand is how do they encode the information from the entire sentence into this token?</p> </blockquote> <p>and</p> <blockquote> <p>Because we use the CLS tokens hidden state to predict, is the CLS tokens embedding being trained on the task of classification as this is the token being used to classify (thus being the major contributor to the error which gets propagated to its weights?)</p> </blockquote> <p>Also yes. As you have already stated in your question <a href="https://github.com/huggingface/transformers/blob/09a2f40684f77e62d0fd8485fe9d2d610390453f/src/transformers/modeling_bert.py#L1227" rel="noreferrer">BertForSequenceClassification</a> utilizes the <a href="https://github.com/huggingface/transformers/blob/09a2f40684f77e62d0fd8485fe9d2d610390453f/src/transformers/modeling_bert.py#L476" rel="noreferrer">BertPooler</a> to train the linear layer on top of Bert:</p> <pre class="lang-py prettyprint-override"><code>#outputs contains the output of BertModel and the second element is the pooler output pooled_output = outputs[1] pooled_output = self.dropout(pooled_output) logits = self.classifier(pooled_output) #...loss calculation based on logits and the given labels </code></pre> <blockquote> <p>Why can't we just use the average of the hidden states (the output of the encoder) and use this to classify?</p> </blockquote> <p>I can't really answer this in general, but why do you think this would be easier or better as a linear layer? You also need to train the hidden layers to produce an output where the average maps to your class. Therefore you also need an &quot;average layer&quot; to be the major contributor to your loss. In general when you can show that it leads to better results instead of the current approach, nobody will reject it.</p>
python|transformer-model|huggingface-transformers|bert-language-model
6
1,102
63,025,291
Align dataframe by row dates
<p>The df below has a series of columns that starts with a date column followed by value columns.</p> <p>I would like to align all rows on the same data. The problem: not all columns have the same dates (some dates are missing) - see highlights in yellow. How can I realign this df so that all series are aligned to the same dates, and empty values are included when no date exist for a particular series.</p> <p><a href="https://i.stack.imgur.com/bmSwq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bmSwq.png" alt="enter image description here" /></a></p>
<p>This code splits the dataframe into sub-dataframes, renames each date column to a common key, and then joins them back together.</p> <pre><code>import pandas as pd df = pd.read_csv('test.csv') # split df into sub-df df1 = df[['DATE','Y','Y1D', 'Y5D', 'Y20D']].rename(columns={'DATE' : 'NEWDATE'}) df2 = df[['DATE_CITI', 'CITI_INDEX','DATE_GLOBAL' , 'GLOBALINDEX']].rename(columns={'DATE_CITI' : 'NEWDATE'}) df3 = df[['DATE_HIGHYIELD', 'HIGHYIELDINDEX']].rename(columns={'DATE_HIGHYIELD' : 'NEWDATE'}) mergedata = df1.merge(df2,on='NEWDATE', how ='outer').merge(df3,on='NEWDATE', how ='outer') </code></pre>
pandas
0
1,103
67,989,995
Python write function saves dataframe.__repr__ output but truncated?
<p>I have a dataframe output as a result of running some code, like so</p> <pre><code>df = pd.DataFrame({ &quot;i&quot;: self.direct_hit_i, &quot;domain name&quot;: self.domain_list, &quot;j&quot;: self.direct_hit_j, &quot;domain name 2&quot;: self.domain_list2, &quot;domain name cleaned&quot;: self.clean_domain_list, &quot;domain name cleaned 2&quot;: self.clean_domain_list2 }) </code></pre> <p>All I was really looking for was a way to save these data to whatever file e.g. txt, csv but in a way where the columns of data align with the header. I was using <code>df.to_csv()</code> with <code>\t</code> delimeter but due to the data have different lengths of string and numbers, the elements within each row never quite line up as a column with the corresponding header. So I resulted to using</p> <pre><code>with open('./filename.txt', 'w') as fo: fo.write(df.__repr__()) </code></pre> <p>But bear in mind the data in the dataframe are lists with really long length. So for small lengths it returns</p> <p><a href="https://i.stack.imgur.com/EYt5E.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EYt5E.png" alt="enter image description here" /></a></p> <p>which is exactly what I want. However, when I have very big lists it gives me</p> <p><a href="https://i.stack.imgur.com/ui3dC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ui3dC.png" alt="enter image description here" /></a></p> <p>So as seen below the outputs are truncated. I would like it to not be truncated since I'll need to manually scroll down and verify things.</p>
<p>Try the syntax:</p> <pre><code>with open('./filename.txt', 'w') as fo: fo.write(f'{df!r}') </code></pre> <p>Another way of doing this export to csv would be to use a tool like <a href="https://trymito.io/so" rel="nofollow noreferrer">Mito</a>, which, full disclosure, I'm the author of. It should allow you to export to CSV more easily than the process here!</p>
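<p>If the goal is simply to avoid the truncated repr, another option (a sketch, not tied to the answer above) is <code>to_string</code>, which writes every row and column by default:</p> <pre><code>with open('./filename.txt', 'w') as fo:
    fo.write(df.to_string())
</code></pre>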
pandas|database|dataframe
0
1,104
67,701,440
Pandas conditional rolling counter between 2 columns of boolean values
<p>thanks in advance for helping a python newbie like me.</p> <p>I would like to achieve the following results in the column &quot;count&quot; without using a stupid slow for loop.</p> <p>I am sure it is possible to vectorize that. Any suggestions ?</p> <p>Thanks again !</p> <pre><code> A B count 0 False False 0 1 True False 1 2 False False 1 3 True False 2 4 False True 1 # True value in the other column: reset the counter 5 False False 1 6 True False 1 # True value in the other column: reset the counter 7 False True 1 # True value in the other column: reset the counter 8 False True 2 9 False False 2 10 False True 3 </code></pre>
<p>You can try:</p> <pre><code>df['count'] = (df.groupby(df.A.cumsum())['B'].cumsum() + df.groupby(df.B.cumsum())['A'].cumsum()) </code></pre> <p>OUTPUT:</p> <pre><code> A B count 0 False False 0 1 True False 1 2 False False 1 3 True False 2 4 False True 1 5 False False 1 6 True False 1 7 False True 1 8 False True 2 9 False False 2 10 False True 3 </code></pre>
python|pandas|count|rolling-computation
1
1,105
67,727,890
RNN LSTM valueError while training
<p>Hi there recently I've been working on a RNN LSTM project and I have e 2D data set like</p> <pre><code>x = [[x1,x2,x3...,x18],[x1,x2,x3...,x18],...] y = [[y1,y2,y3],[y1,y2,y3],...] X.shape =&gt; (295,5,18) Y.shape =&gt; (295,3) </code></pre> <p>and I convert it to a 3D dataset by code below</p> <pre><code>X_train = [] Y_train = [] for i in range(5,300): X_train.append(training_set_scaled[i-5:i,0:18]) Y_train.append(training_set_scaled[i,18:22]) X_train, Y_train = np.array(X_train), np.array(Y_train) </code></pre> <p>and then use Keras for LSTM</p> <pre><code>from keras.models import Sequential from keras.layers import Dense from keras.layers import LSTM from keras.layers import Dropout regressor = Sequential() regressor.add(LSTM(units=50, return_sequences=True,input_shape=(X_train.shape[0],X_train.shape[1],X_train.shape[2]))) regressor.add(Dropout(0.1)) regressor.add(LSTM(units=50, return_sequences=True)) regressor.add(Dropout(0.1)) regressor.add(LSTM(units=50, return_sequences=True)) regressor.add(Dropout(0.1)) regressor.add(LSTM(units=50, return_sequences=True)) regressor.add(Dropout(0.1)) regressor.add(LSTM(units=50)) regressor.add(Dropout(0.1)) regressor.add(Dense(units= 1)) regressor.compile(optimizer= 'adam', loss='binary_crossentropy') regressor.fit(X_train,Y_train, epochs = 100, batch_size = 32) </code></pre> <p>and when I run this script I got the error below:</p> <p><code>ValueError: Input 0 is incompatible with layer lstm_104: expected ndim=3, found ndim=4</code></p> <p>I have no idea about the problem can any body help me?</p>
<p>Change:</p> <pre><code>input_shape=(X_train.shape[0],X_train.shape[1],X_train.shape[2]) </code></pre> <p>to</p> <pre><code>input_shape=(X_train.shape[1],X_train.shape[2]) </code></pre> <p>Basically <code>keras</code> is designed to take any number of examples in a single batch, so it automatically puts <code>None</code> as the first parameter. So, when you mention the rest 2 dimensions, it gets a <code>3</code> dimensional input in total, but if you yourself mention the first dimension, the number of the dimensions becomes <code>4</code>, i.e, <code>(None, X_train.shape[0],X_train.shape[1],X_train.shape[2])</code>.</p> <p>But again, if you really want to hard code the batch_size you can still do it. For this, you have to use <code>batch_input_shape</code> instead of <code>input_shape</code> like follow:</p> <pre><code>regressor.add(LSTM(units=50, return_sequences=True, batch_input_shape=(X_train.shape[0], X_train.shape[1], X_train.shape[2]))) </code></pre> <p>It will give you the power to control which specific batch size to set for the network. (Your program has another flaw in this case, you are setting the batch size <code>X_train.shape[0]</code> which is <code>295</code>, but you are sending <code>32</code> in <code>fit()</code>, but they should be equal. Also batch size is generally taken lesser than the data set size).</p>
python|numpy|keras|deep-learning|lstm
1
1,106
67,798,892
torch.inverse returns identity matrices. Bug resolved by print input before calculation (But why?)
<p>torch.inverse() only returns identity matrices. (see <em>unnormal output</em> below). It occur repeatly after the first iteration.</p> <p>If I try to print anything of <code>pose_pre</code> first, the problem would disapear. (see <em>normal output</em> below)</p> <p>Here is a part of the code:</p> <pre class="lang-py prettyprint-override"><code> # print(pose_pre) # &lt;--------if I add this line, the bug would be gone pose_pre_inv = torch.inverse(pose_pre) print(pose_pre_inv) </code></pre> <h3>why would the first print line affect result</h3> <h3>Any ideas would be appreciated!</h3> <p>ps. <code>pose_pre = pose3d_BT[:,:-1,...].reshape(-1,4,4)</code>. The <code>pose3d_BT</code> is provided by the <code>__getitem__</code> function in pytorch. I printed the content in <code>__getitem__</code> and it looks normal.</p> <hr /> <hr /> <h1>unnormal output with <code>eye(4)</code></h1> <pre><code>tensor([[[ 8.7334e-01, -4.8659e-01, 2.2528e-02, 6.4885e+03], | 0/90 [00:00&lt;?, ?it/s] [ 4.8635e-01, 8.7363e-01, 1.5527e-02, -4.6202e+03], [-2.7237e-02, -2.6037e-03, 9.9963e-01, -6.3445e+01], [ 0.0000e+00, 0.0000e+00, 0.0000e+00, 1.0000e+00]], [[ 8.7334e-01, -4.8659e-01, 2.2528e-02, 6.4885e+03], [ 4.8635e-01, 8.7363e-01, 1.5527e-02, -4.6202e+03], [-2.7237e-02, -2.6037e-03, 9.9963e-01, -6.3445e+01], [ 0.0000e+00, 0.0000e+00, 0.0000e+00, 1.0000e+00]]], device='cuda:0'),) saving hidden-state image hc_state_14_31_18_734196.png to ../../visual_result/lstm_hidden_states epochs: 0%| | 0/1 [00:01&lt;?, ?it/s, loss=4.14, lr=0.0003](tensor([[[1., 0., 0., 0.], | 1/90 [00:01&lt;01:49, 1.23s/it, total_it=1] [0., 1., 0., 0.], [0., 0., 1., 0.], [0., 0., 0., 1.]], [[1., 0., 0., 0.], [0., 1., 0., 0.], [0., 0., 1., 0.], [0., 0., 0., 1.]]], device='cuda:0'),) saving hidden-state image hc_state_14_31_19_915028.png to ../../visual_result/lstm_hidden_states epochs: 0%| | 0/1 [00:02&lt;?, ?it/s, loss=3.45, lr=0.000305](tensor([[[1., 0., 0., 0.], | 2/90 [00:02&lt;01:34, 1.07s/it, total_it=2] [0., 1., 0., 0.], [0., 0., 1., 0.], [0., 0., 0., 1.]], [[1., 0., 0., 0.], [0., 1., 0., 0.], [0., 0., 1., 0.], [0., 0., 0., 1.]]], device='cuda:0'),) </code></pre> <h1>normal output</h1> <pre><code>(tensor([[[ 8.7334e-01, -4.8659e-01, 2.2528e-02, 6.4885e+03], [ 4.8635e-01, 8.7363e-01, 1.5527e-02, -4.6202e+03], [-2.7237e-02, -2.6037e-03, 9.9963e-01, -6.3445e+01], [ 0.0000e+00, 0.0000e+00, 0.0000e+00, 1.0000e+00]], [[ 8.7334e-01, -4.8659e-01, 2.2528e-02, 6.4885e+03], [ 4.8635e-01, 8.7363e-01, 1.5527e-02, -4.6202e+03], [-2.7237e-02, -2.6037e-03, 9.9963e-01, -6.3445e+01], [ 0.0000e+00, 0.0000e+00, 0.0000e+00, 1.0000e+00]]], device='cuda:0'),) saving hidden-state image hc_state_14_32_08_581169.png to ../../visual_result/lstm_hidden_states epochs: 0%| | 0/1 [00:01&lt;?, ?it/s, loss=3.94, lr=0.0003]tensor([ 8.7283e-01, 4.8730e-01, -2.6772e-02, -3.4203e+03], device='cuda:0') | 1/90 [00:01&lt;01:47, 1.20s/it, total_it=1] (tensor([[[ 8.7283e-01, -4.8751e-01, 2.2528e-02, 6.4922e+03], [ 4.8730e-01, 8.7312e-01, 1.4569e-02, -4.6133e+03], [-2.6772e-02, -1.7379e-03, 9.9964e-01, -6.8084e+01], [ 0.0000e+00, 0.0000e+00, 0.0000e+00, 1.0000e+00]], [[ 8.7235e-01, -4.8834e-01, 2.3104e-02, 6.4954e+03], [ 4.8815e-01, 8.7265e-01, 1.3706e-02, -4.6071e+03], [-2.6854e-02, -6.7833e-04, 9.9964e-01, -7.5985e+01], [ 0.0000e+00, 0.0000e+00, 0.0000e+00, 1.0000e+00]]], device='cuda:0'),) saving hidden-state image hc_state_14_32_09_748923.png to ../../visual_result/lstm_hidden_states epochs: 0%| | 0/1 [00:02&lt;?, ?it/s, loss=3.63, lr=0.000305]tensor([ 8.7182e-01, 4.8909e-01, 
-2.6878e-02, -3.4184e+03], device='cuda:0') | 2/90 [00:02&lt;01:32, 1.05s/it, total_it=2] (tensor([[[ 8.7182e-01, -4.8925e-01, 2.3774e-02, 6.4990e+03], [ 4.8909e-01, 8.7214e-01, 1.2556e-02, -4.6002e+03], [-2.6878e-02, 6.8170e-04, 9.9964e-01, -8.5845e+01], [ 0.0000e+00, 0.0000e+00, 0.0000e+00, 1.0000e+00]], [[ 8.7129e-01, -4.9016e-01, 2.4445e-02, 6.5026e+03], [ 4.9002e-01, 8.7163e-01, 1.1885e-02, -4.5933e+03], [-2.7133e-02, 1.6239e-03, 9.9963e-01, -9.3492e+01], [ 0.0000e+00, 0.0000e+00, 0.0000e+00, 1.0000e+00]]], device='cuda:0'),) </code></pre>
<p>This problem happens when you use the CUDA version of the tensor. If you do the matrix inverse operation under CUDA, it will always return an identity matrix. Detaching to CPU would solve the problem.</p>
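<p>A minimal sketch of that workaround, assuming <code>pose_pre</code> is the CUDA tensor from the question:</p> <pre><code># workaround sketch: compute the inverse on CPU, then move the result back to the original device
pose_pre_cpu = pose_pre.detach().cpu()
pose_pre_inv = torch.inverse(pose_pre_cpu).to(pose_pre.device)
</code></pre>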
python|multithreading|printing|pytorch
0
1,107
67,822,953
GCP: IA ML serving with autoscaling to zero
<p>I wanted to try the ML serving AI platform from GCP, but i want the node to scale only if there is a call to prediction.</p> <p>I see in the <a href="https://cloud.google.com/ai-platform/prediction/docs/deploying-models" rel="nofollow noreferrer">documentation here</a>:</p> <blockquote> <p>If you select &quot;Auto scaling&quot;, the optional Minimum number of nodes field displays. You can enter the minimum number of nodes to keep running at all times, when the service has scaled down. This field defaults to 0.</p> </blockquote> <p>But when i try to create my model version, it shows an error telling me that this field should be &gt; 1.</p> <p>Here is what i tried:</p> <ul> <li>Name: testv1</li> <li>Pre-Built Container</li> <li>Python 3.7</li> <li>Framework Tensorflow</li> <li>TF version 2.4.0</li> <li>ML 2.4</li> <li>Scaling auto-scaling</li> <li>Min nodes nb 0</li> <li>machine type n1-standard-4</li> <li>GPU TESLA_K80 * 1</li> </ul>
<p>I tried to reproduce your case and found the same thing: I was not able to set the <code>Minimum number of nodes</code> to 0.</p> <p>This seems to be an outdated documentation issue. There is an ongoing <a href="https://issuetracker.google.com/issues/147141612" rel="nofollow noreferrer">Feature Request</a> that explains it was possible to set a minimum of 0 machines with a legacy machine type, and requests to make this option available for current types too.</p> <p>On the other hand, I went ahead and opened a ticket to update the documentation.</p> <p>As a workaround, you can deploy your models right when you need them and then proceed to <a href="https://cloud.google.com/vision/automl/object-detection/docs/undeploy" rel="nofollow noreferrer">un-deploy</a> them. Be mindful that undeployments may take up to 45 minutes, so it is advisable to wait 1 hour to re-deploy that model to avoid any issues.</p>
tensorflow|google-cloud-platform|google-ai-platform
1
1,108
61,331,091
Replace zeroes in array with different values for different halves of the array
<p>I have an array of floats whose size is not known in advance. There are zero values in it and other floats. I want to replace its zeroes with a certain value only for part of the array, let's say for the first third of the array, with another value for the second third, and another value for the last third.</p> <p>I was trying to use <code>my_array[:my_array.size//3:] = first_value</code> as suggested <a href="https://stackoverflow.com/a/35441492/2107030">here</a> as part of a list comprehension:</p> <pre><code>my_array = np.asarray([first_value for x in my_array[:my_array.size//3:] if x == 0]) </code></pre> <p>but this reduces the array to only its first third. How can I replace the zeroes in the way described above?</p>
<p>You can use a mask to get the desired indices first and then assign them any value at once without a loop:</p> <pre><code>mask = np.where(my_array==0)[0] my_array[mask[mask&lt;my_array.size//3]] = first_value my_array[mask[np.logical_and(mask&gt;=my_array.size//3, mask&lt;2*my_array.size//3)]] = second_value my_array[mask[mask&gt;=2*my_array.size//3]] = third_value </code></pre> <p>If you would like to chunk your array into more pieces, I would recommend looping this code.</p> <p>Note that this will NOT work, as it creates a copy of the array and changes that: </p> <pre><code>#THIS DOES NOT WORK, it changes values of a copy of my_array and not the original array itself my_array[my_array==0][:my_array.size//3] = first_value </code></pre>
python|arrays|numpy|list-comprehension
1
1,109
61,305,853
Grouping pandas series based on condition
<p>I have a Pandas df with one column the following values.</p> <pre><code> Data 0 A 1 A 2 B 3 A 4 A 5 A 6 B 7 A 8 A 9 B </code></pre> <p>I want to try and group these values as such, for each encounter of Value B, i want the the group value to be changed as follows </p> <pre><code> Data Group 0 A 1 1 A 1 2 B 1 3 A 2 4 A 2 5 A 2 6 B 2 7 A 3 8 A 3 9 B 3 </code></pre> <p>How can this be achieved using pandas inbuilt. in some way to create any helper columns to facilitate the mentioned task.</p>
<p>You can try <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.cumsum.html" rel="noreferrer"><code>cumsum</code></a> after comparing if the series <code>equals</code> <code>B</code> and then <code>shift</code> 1 place to include B in the group:</p> <pre><code>df['Data'].eq('B').shift(fill_value=False).cumsum().add(1) </code></pre> <hr> <pre><code>0 1 1 1 2 1 3 2 4 2 5 2 6 2 7 3 8 3 9 3 </code></pre>
python|pandas|dataframe|grouping
6
1,110
61,277,879
Convert Stacked DataFrame of Years and Months to DataFrame with Datetime Indices
<p>I am reading a csv file of the number of employees in the US by year and month (in thousands). It starts out like this:</p> <pre><code>Year,Jan,Feb,Mar,Apr,May,Jun,Jul,Aug,Sep,Oct,Nov,Dec 1961,45119,44969,45051,44997,45119,45289,45400,45535,45591,45716,45931,46035 1962,46040,46309,46375,46679,46668,46644,46720,46775,46888,46927,46910,46901 1963,46912,47000,47077,47316,47328,47356,47461,47542,47661,47805,47771,47863 ... </code></pre> <p><strong>I want my Pandas Dataframe to have the datetime as the index for each month's value</strong>. I'm doing this so I can later add values for specific time ranges. I want it to look something like this:</p> <pre><code>1961-01-01 45119.0 1961-02-01 44969.0 1961-03-01 45051.0 1961-04-01 44997.0 1961-05-01 45119.0 ... </code></pre> <p>I did some research and thought that if I stacked the years and months together, I could combine them into a datetime. Here is what I have done:</p> <pre><code>import pandas as pd import numpy as np df = pd.read_csv("BLS_private.csv", header=5, index_col="Year") df.columns = range(1, 13) # I transformed months into numbers 1-12 for easier datetime conversion df = df.stack() # Months are no longer columns print(df) </code></pre> <p>Here is my output:</p> <pre><code>Year 1961 1 45119.0 2 44969.0 3 45051.0 4 44997.0 5 45119.0 ... </code></pre> <p>I do not know how to combine the year and the months in the stacked indices. Does stacking the indices help at all in my case? I am also not the most familiar with Pandas datetime, so any explanation about how I could use that would be very helpful. Also if anyone has alternate solutions than making datetime the index, I welcome ideas.</p>
<p>After the <code>stack</code> create the DateTimeIndex from the current index</p> <pre><code>from datetime import datetime dt_index = pd.to_datetime([datetime(year=year, month=month, day=1) for year, month in df.index.values]) df.index = dt_index df.head(3) # 1961-01-01 45119 # 1961-02-01 44969 # 1961-03-01 45051 </code></pre>
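<p>A hedged alternative sketch that assembles the same DatetimeIndex from the two MultiIndex levels without an explicit Python loop:</p> <pre><code># build year/month/day components from the stacked MultiIndex and let pandas assemble the dates
parts = pd.DataFrame({'year': df.index.get_level_values(0),
                      'month': df.index.get_level_values(1),
                      'day': 1})
df.index = pd.to_datetime(parts)
</code></pre>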
python|python-3.x|pandas|dataframe
2
1,111
61,213,685
How to randomly shuffle blocks in numpy 3D array on particular axis
<p>I have a 3D numpy array and I want to shuffle it block wise in a particular axis while keeping the data in that block in it's original state. For instance I have an np array of shape (50, 140, 23) and I want to shuffle by making blocks of (50, 1, 23) on axis=1. So 140 blocks will be created and blocks should be shuffled on axis=1 while maintaining the data in blocks in it's original order. I read documentation about <code>np.random.shuffle(x)</code> but this only shuffles in first axis and we can't provide a block size to it. Is there any function in numpy or a quick way to do this?</p>
<p>You can use a random permutation:</p> <pre><code>A = sum(np.ogrid[0:0:50j,:140,0:0:23j]) rng = np.random.default_rng() Ashuff = A[:,rng.permutation(140),:] </code></pre>
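<p>If shuffling in place is acceptable, a sketch using the same Generator API is shown below; on recent NumPy versions <code>Generator.shuffle</code> takes an <code>axis</code> argument and reorders the sub-arrays along that axis while leaving their contents untouched:</p> <pre><code>rng = np.random.default_rng()
arr = np.random.rand(50, 140, 23)  # stand-in for your data
rng.shuffle(arr, axis=1)           # shuffles the 140 blocks of shape (50, 1, 23) in place
</code></pre>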
python|arrays|numpy|shuffle
3
1,112
61,464,888
TensorFlow Error: ValueError("Shapes %s and %s are incompatible" % (self, other))
<p>I'm trying to classify images of PCBs into two categories (<code>defected</code> and <code>undefected</code>) using <code>categorical cross-entropy</code> as the loss function. The code for the same is as below:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt import tensorflow from tensorflow.keras.applications import ResNet50 from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense from tensorflow.keras.optimizers import SGD from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint from keras.applications.resnet50 import preprocess_input from keras.preprocessing.image import ImageDataGenerator from sklearn.model_selection import train_test_split def create_compiled_model(): model = Sequential() model.add(ResNet50(include_top=False, weights=RESNET50_WEIGHTS, input_shape=(IMAGE_SIZE, IMAGE_SIZE, 3), pooling=RESNET50_POOLING_AVERAGE)) model.add(Dense(NUM_CLASSES, activation=DENSE_LAYER_ACTIVATION)) model.layers[0].trainable = False sgd = SGD(lr = 0.01, decay = 1e-6, momentum = 0.9, nesterov = True) model.compile(optimizer = sgd, loss = OBJECTIVE_FUNCTION, metrics = LOSS_METRICS) return model def data_splitor(): x = np.load("/content/data/xtrain.npy") y = np.load("/content/data/ytrain.npy") # Getting the Test and Train splits x_train, x_test, y_train, y_test = train_test_split(x, y, test_size= TRAIN_TEST_SPLIT, shuffle= True) # Getting the Train and Validation splits x__train, x__valid, y__train, y__valid = train_test_split(x_train, y_train, test_size= TRAIN_TEST_SPLIT, shuffle= True) return x__train, x__valid, x_test, y__train, y__valid, y_test def data_generator(x, y, batch_size, seed=None, shuffle=True): data_generator = ImageDataGenerator(horizontal_flip=True, vertical_flip=True, rotation_range=180, brightness_range=[0.3, 1.0], preprocessing_function=preprocess_input) generator = data_generator.flow(x_train, y_train, batch_size= batch_size, seed= seed, shuffle=shuffle) return generator def run_program(): x_train, x_valid, x_test, y_train, y_valid, y_test = data_splitor() train_generator = data_generator(x_train, y_train, BATCH_SIZE_TRAINING) validation_generator = data_generator(x_valid, y_valid, BATCH_SIZE_VALIDATION) cb_early_stopper = EarlyStopping(monitor = 'val_loss', patience = EARLY_STOP_PATIENCE) cb_checkpointer = ModelCheckpoint(filepath = '/content/model/best.hdf5', monitor = 'val_loss', save_best_only = True, mode = 'auto') model = create_compiled_model() fit_history = model.fit_generator( train_generator, steps_per_epoch=STEPS_PER_EPOCH_TRAINING, epochs = NUM_EPOCHS, validation_data=validation_generator, validation_steps=STEPS_PER_EPOCH_VALIDATION, callbacks=[cb_checkpointer, cb_early_stopper] ) plt.figure(1, figsize = (15,8)) plt.subplot(221) plt.plot(fit_history.history['acc']) plt.plot(fit_history.history['val_acc']) plt.title('model accuracy') plt.ylabel('accuracy') plt.xlabel('epoch') plt.legend(['train', 'valid']) plt.subplot(222) plt.plot(fit_history.history['loss']) plt.plot(fit_history.history['val_loss']) plt.title('model loss') plt.ylabel('loss') plt.xlabel('epoch') plt.legend(['train', 'valid']) plt.show() # Testing test_generator = data_generator(x_test, y_test, BATCH_SIZE_TESTING, 123, False) test_generator.reset() model.load_weights("/content/model/best.hdf5") pred = model.predict_generator(test_generator, steps = len(test_generator), verbose = 1) predicted_class_indices = np.argmax(pred, axis = 1) # Running the program try: with tensorflow.device('/device:GPU:0'): run_program() except RuntimeError 
as e: print(e) </code></pre> <p>And upon executing this, I get the ValueError seen below:</p> <pre><code>ValueError: in user code: /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:571 train_function * outputs = self.distribute_strategy.run( /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:951 run ** return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs) /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2290 call_for_each_replica return self._call_for_each_replica(fn, args, kwargs) /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2649 _call_for_each_replica return fn(*args, **kwargs) /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:533 train_step ** y, y_pred, sample_weight, regularization_losses=self.losses) /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/compile_utils.py:204 __call__ loss_value = loss_obj(y_t, y_p, sample_weight=sw) /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/losses.py:143 __call__ losses = self.call(y_true, y_pred) /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/losses.py:246 call return self.fn(y_true, y_pred, **self._fn_kwargs) /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/losses.py:1527 categorical_crossentropy return K.categorical_crossentropy(y_true, y_pred, from_logits=from_logits) /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/backend.py:4561 categorical_crossentropy target.shape.assert_is_compatible_with(output.shape) /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/tensor_shape.py:1117 assert_is_compatible_with raise ValueError("Shapes %s and %s are incompatible" % (self, other)) ValueError: Shapes (None, 1) and (None, 2) are incompatible </code></pre> <p>I have already looked at <a href="https://stackoverflow.com/questions/49703387/tensorflow-valueerror-shapes-1-and-are-incompatible/50180397">this</a>, <a href="https://stackoverflow.com/questions/47482082/tensorflow-error-raise-valueerrorshapes-s-and-s-are-not-compatible-self">this</a> and <a href="https://stackoverflow.com/questions/51406932/tensorflow-label-and-logits-shapes-are-incompatible">this</a>, but could not resolve the error. </p> <p>I really appreciate the help in fixing this.</p> <p>Thanks Praveen</p> <p>Here is the complete traceback... <a href="https://pastebin.com/sKhVXWcp" rel="nofollow noreferrer">link</a></p>
<p>It seems your y_train data has shape (None, 1) while your network is expecting (None, 2). There are two options to solve this:</p> <p>1) Change your model output to 1 unit and change the loss to binary crossentropy</p> <p>or</p> <p>2) Change your y_train data to categorical. See <a href="https://www.tensorflow.org/api_docs/python/tf/keras/utils/to_categorical?version=nightly" rel="nofollow noreferrer">this</a> </p> <p>If you can post your model.summary() and your dataset shapes here, it will help us to help you.</p>
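<p>A minimal sketch of both options, reusing the names from the question's code:</p> <pre><code># Option 1 (sketch): a single output unit with binary cross-entropy
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer=sgd, loss='binary_crossentropy', metrics=['accuracy'])

# Option 2 (sketch): keep two output units and one-hot encode the labels instead
from tensorflow.keras.utils import to_categorical
y_train = to_categorical(y_train, num_classes=2)
y_valid = to_categorical(y_valid, num_classes=2)
</code></pre>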
python|tensorflow|machine-learning|keras|deep-learning
1
1,113
68,540,419
reshape Pandas dataframe by appending column to column
<p>I have a Pandas df like (df1):</p> <pre><code> 0 1 2 3 4 5 0 a b c d e f 1 1 4 7 10 13 16 2 2 5 8 11 14 17 3 3 6 9 12 15 18 </code></pre> <p>and I want to generate a DataFrame like (df2):</p> <pre><code> 0 1 2 0 a b c 1 1 4 7 2 2 5 8 3 3 6 9 4 d e f 5 10 13 16 6 11 14 17 7 12 15 18 </code></pre> <p>additional information about the given df:</p> <ol> <li>the shape of the given df is unknown: b = df1.shape -&gt; b = [n,m]</li> <li>it is a given fact that the width of df1 is divisible by 3</li> </ol> <p>I did try stack, melt and wide_to_long. By using stack the order of the rows is lost; the rows should behave as shown in the exemplary df2. I would really appreciate any help.</p> <p>Kind regards Hans</p>
<p>Use <code>np.vstack</code> and <code>np.hsplit</code>:</p> <pre><code>&gt;&gt;&gt; pd.DataFrame(np.vstack(np.hsplit(df, df.shape[1] / 3))) 0 1 2 0 a b c 1 1 4 7 2 2 5 8 3 3 6 9 4 d e f 5 10 13 16 6 11 14 17 7 12 15 18 </code></pre> <p>Another example:</p> <pre><code>&gt;&gt;&gt; df 0 1 2 3 4 5 6 7 8 0 a b c d e f g h i 1 1 4 7 10 13 16 19 22 25 2 2 5 8 11 14 17 20 23 26 3 3 6 9 12 15 18 21 24 27 &gt;&gt;&gt; pd.DataFrame(np.vstack(np.hsplit(df, df.shape[1] / 3))) 0 1 2 0 a b c 1 1 4 7 2 2 5 8 3 3 6 9 4 d e f 5 10 13 16 6 11 14 17 7 12 15 18 8 g h i 9 19 22 25 10 20 23 26 11 21 24 27 </code></pre>
python|python-3.x|pandas|dataframe
1
1,114
53,194,704
Keras loss function understanding
<p>In order to understand some callbacks of Keras better, I want to artificially create a <code>nan</code> loss.</p> <p>This is the function</p> <pre><code>def soft_dice_loss(y_true, y_pred): from keras import backend as K if K.eval(K.random_normal((1, 1), mean=2, stddev=2))[0][0] // 1 == 2.0: # return nan return K.exp(1.0) / K.exp(-10000000000.0) - K.exp(1.0) / K.exp(-10000000000.0) epsilon = 1e-6 axes = tuple(range(1, len(y_pred.shape) - 1)) numerator = 2. * K.sum(y_pred * y_true, axes) denominator = K.sum(K.square(y_pred) + K.square(y_true), axes) return 1 - K.mean(numerator / (denominator + epsilon)) </code></pre> <p>So normally, it calculates the dice loss, but from time to time it should randomly return a <code>nan</code>. However, this does not seem to happen:</p> <p><a href="https://i.stack.imgur.com/psKL9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/psKL9.png" alt="keras outputs"></a></p> <p>From time to time though, when I try to run the code, it stops right at the start (before the first epoch) with an error, saying that <code>An operation has None for gradient. Please make sure that all of your ops have a gradient defined</code></p> <p>Does that mean, that the the random function of Keras is just evaluated once and then always returns the same value? If so, why is that and how can I create a loss function that returns <code>nan</code> from time to time?</p>
<p>Your first conditional statement is only evaluated once the loss function is defined (i.e. called; that is why Keras stops right at the start). Instead, you could use <a href="https://keras.io/backend/#switch" rel="nofollow noreferrer">keras.backend.switch</a> to integrate your conditional into the graph's logic. Your loss function could be something along the lines of:</p> <pre><code>import keras.backend as K import numpy as np def soft_dice_loss(y_true, y_pred): epsilon = 1e-6 axes = tuple(range(1, len(y_pred.shape) - 1)) numerator = 2. * K.sum(y_pred * y_true, axes) denominator = K.sum(K.square(y_pred) + K.square(y_true), axes) loss = 1 - K.mean(numerator / (denominator + epsilon)) return K.switch(condition=K.random_normal((), mean=0, stddev=1) &gt; 3, then_expression=K.variable(np.nan), else_expression=loss) </code></pre>
python|tensorflow|keras|nan|loss
1
1,115
53,201,829
How should we take the sum of values in a column after grouping by a different column in pandas dataframe
<p>I am trying to plot a graph for to analyze if there's any relation between the available_days of a property and number of reviews for it. I have a dataset which has different unique property listings, available_days for each property, number of reviews for each property. I am trying to plot by grouping the data by 'available_days' and I need to count the total number of reviews for those properties. For example, if the available days are 25, then I need to take the sum of the number of reviews for all properties with 25 available days. I couldn't figure out a way to do this. I tried as below but it is not giving me the expected result.</p> <pre><code>available_days=listings.groupby(['availability_365']).count() available_days=listings.groupby(['availability_365'])['reviews_count'].count() available_days=listings.groupby('availability_365').agg('sum') available_days=listings.groupby(['availability_365']).agg({'reviews_count':np.sum}) </code></pre> <p>Here is the dataset I am referring to:<a href="https://i.stack.imgur.com/aa8cj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aa8cj.png" alt="dataset"></a></p> <p>This is the desired output format: <a href="https://i.stack.imgur.com/yjgfi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yjgfi.png" alt="desired_output"></a></p> <p>Also, please suggest a better way of approaching this problem to plot the graph.</p>
<p>Do you mean something like this?</p> <pre><code>import pandas as pd df = pd.DataFrame({ "availability": [1, 2, 2, 3, 3, 3, 4, 4, 4, 4], "num_reviews": [1, 1, 1, 1, 1, 1, 1, 1, 1, 1] }) # Count number of reviews per unique value for "availability" df["reviews_by_availability"] = df.groupby("availability")["num_reviews"].transform("sum") print(df) # Optionally, print only one instance of each "availability" print(df.drop_duplicates(subset=["availability"])) </code></pre> <p>Output:</p> <pre><code> availability num_reviews reviews_by_availability 0 1 1 1 1 2 1 2 2 2 1 2 3 3 1 3 4 3 1 3 5 3 1 3 6 4 1 4 7 4 1 4 8 4 1 4 9 4 1 4 availability num_reviews reviews_by_availability 0 1 1 1 1 2 1 2 3 3 1 3 6 4 1 4 </code></pre> <p>Also, please don't post images of your data, that's not helpful at all.</p> <p><strong>EDIT:</strong> You can plot it with <code>pandas.DataFrame.plot.scatter()</code>:</p> <pre><code># Draw scatterplot import matplotlib.pyplot as plt df.drop_duplicates(subset=["availability"]).plot.scatter(x="availability", y="reviews_by_availability") plt.show() </code></pre> <p>Result: <a href="https://i.stack.imgur.com/3YV0z.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3YV0z.png" alt="Plot result"></a></p>
python|pandas|matplotlib|data-analysis
0
1,116
53,234,329
How to do an R style aggregate in Python Pandas?
<p>I need to do an aggregate (at least that what you would call it in R) over the mtcars data set that I have uploaded into python. The end goal is to get the average mpg for each value of cyl in the data set (There are three values for cyl, 4,6,8). Here is the R code for what I want to do</p> <p>mean_each_gear &lt;- aggregate(mtcars$mpg ~ mtcars$cyl, FUN = mean)</p> <p>output: cyl mpg 1 4 26.66364 2 6 19.74286 3 8 15.10000</p> <p>The closest I've come with in Pandas is this</p> <p>mtcars.agg(['mean'])</p> <p>I'm not sure how I would do that in Pandas. Any help would be appreciated!</p>
<p>You want pandas groupby()!</p> <pre><code>import pandas as pd my_dataframe = pd.read_csv('my_input_data.csv') # insert your data here my_dataframe.groupby(['col1'])['col2'].mean() </code></pre> <p>where 'col1' is the column you want to group by and 'col2' is the column whose mean you want to obtain. Also see here:</p> <p><a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html</a></p>
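<p>Applied to the question's mtcars frame, a sketch (assuming the columns are named <code>cyl</code> and <code>mpg</code>):</p> <pre><code>mean_mpg_per_cyl = mtcars.groupby('cyl')['mpg'].mean().reset_index()
print(mean_mpg_per_cyl)  # one row per cyl value (4, 6, 8) with the mean mpg
</code></pre>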
python|r|pandas|aggregate
0
1,117
52,939,673
Pandas and DateTime TypeError: cannot compare a TimedeltaIndex with type float
<p>I have a pandas DataFrame Series time differences that looks like::</p> <pre><code> print(delta_t) 1 0 days 00:00:59 3 0 days 00:04:22 6 0 days 00:00:56 8 0 days 00:01:21 19 0 days 00:01:09 22 0 days 00:00:36 ... </code></pre> <p>(the full DataFrame had a bunch of NaNs which I dropped). </p> <p>I'd like to know which delta_t's are less than 1 day, 1 hour, 1 minute, so I tried: </p> <pre><code>delta_t_lt1day = delta_t[np.where(delta_t &lt; 30.)] </code></pre> <p>but then got a: </p> <pre><code>TypeError: cannot compare a TimedeltaIndex with type float </code></pre> <p>Little help?!?!</p>
<p>Assuming your Series is in <code>timedelta</code> format, you can skip the <code>np.where</code>, and index using something like this, where you compare your actual values to other timedeltas, using the appropriate units:</p> <pre><code>delta_t_lt1day = delta_t[delta_t &lt; pd.Timedelta(1,'D')] delta_t_lt1hour = delta_t[delta_t &lt; pd.Timedelta(1,'h')] delta_t_lt1minute = delta_t[delta_t &lt; pd.Timedelta(1,'m')] </code></pre> <p>You'll get the following series:</p> <pre><code>&gt;&gt;&gt; delta_t_lt1day 0 1 00:00:59 3 00:04:22 6 00:00:56 8 00:01:21 19 00:01:09 22 00:00:36 Name: 1, dtype: timedelta64[ns] &gt;&gt;&gt; delta_t_lt1hour 0 1 00:00:59 3 00:04:22 6 00:00:56 8 00:01:21 19 00:01:09 22 00:00:36 Name: 1, dtype: timedelta64[ns] &gt;&gt;&gt; delta_t_lt1minute 0 1 00:00:59 6 00:00:56 22 00:00:36 Name: 1, dtype: timedelta64[ns] </code></pre>
python|pandas|datetime|series
5
1,118
65,512,677
How can i use 2 numpy arrays as dataset for denoising autoencoder, and further split them into train and test sets
<p>I have 2 numpy arrays, one with clean data [4000 x [1000][25]] (4000 1000x25 arrays) and one with noisy data (same size as clean) to be used for a de-noising auto-encoder problem.</p> <p>I want to be able to either map them and then store them into a tensorflow data set, or any other way which allows me to do this</p> <p>clean[i] -&gt; De-noising Autoencoder -&gt; noisy[i]</p> <p>Also implement a train and test split in a way that mapping remains.</p> <p>I'm sorry if this is too vague, I'm new to ML and python.</p>
<p>Assume you have your clean data in an array clean_data and your noisy data in an array noisy_data. Then use train_test_split from sklearn to split the data into a training set and a test set as follows:</p> <pre><code>from sklearn.model_selection import train_test_split train_size=.7 # set this to the percentage you want for training clean_train, clean_test, noisy_train, noisy_test=train_test_split(clean_data, noisy_data, train_size=train_size, random_state=123) </code></pre>
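<p>To keep the noisy-to-clean mapping for the autoencoder, a sketch of wrapping the split arrays in a <code>tf.data.Dataset</code> (noisy images as inputs, clean images as targets):</p> <pre><code>import tensorflow as tf

# each element is a (noisy, clean) pair, so the pairing survives shuffling and batching
train_ds = tf.data.Dataset.from_tensor_slices((noisy_train, clean_train)).shuffle(1024).batch(32)
test_ds = tf.data.Dataset.from_tensor_slices((noisy_test, clean_test)).batch(32)
</code></pre>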
python|numpy|tensorflow|keras|autoencoder
0
1,119
65,525,313
Merge two dataframes on string columns with values containing wilcards as for like in SQL - Python
<p>I want to merge 2 dataframes on string columns with values containing wildcards as we can do with like in SQL.</p> <p>Example :</p> <pre><code>import pandas as pd df1 = pd.DataFrame({'A': [&quot;He eat an apple in his office.&quot;, &quot;There are many apples on the tree.&quot;], 'B': [1, 2]}) df2 = pd.DataFrame({'A': [&quot;apple*tree&quot;, &quot;apple*pie&quot;], 'C': [4, 9]}) df1 A B 0 He eat an apple in his office. 1 1 There are many apples on the tree. 2 df2 A C 0 apple*tree 4 1 apple*pie 9 pd.merge(df1, df2, on = ['A']) # What it gives me : Empty DataFrame Columns: [A, B, C] Index: [] # What I want: A B C 0 There are many apples on the tree. 2 4 </code></pre> <p>I want to join the two dataframes and &quot;apple*tree&quot; of df2 has to match the sentence &quot;There are many apples on the tree.&quot; of df1.</p> <p>Can you help me to do this please?</p> <p>I have found the function fnmatch.fnmatch(string, pattern) but can I use it in this case with a merge?</p>
<p>This can be done by using apply to search for df2's patterns in each row of df1. This will require runtime proportional to <code>O(n*m)</code>, where n is the number of rows in df1, and m is the number of rows in df2. This is not very efficient, but that's fine for small dataframes.</p> <p>Once we identify the matches between df1 and df2, we can merge the two dataframes. After that, we just need to clean up the dataframe and drop unneeded columns.</p> <p>Code:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd import fnmatch df1 = pd.DataFrame({'A': [&quot;He eat an apple in his office.&quot;, &quot;There are many apples on the tree.&quot;], 'B': [1, 2]}) df2 = pd.DataFrame({'A': [&quot;apple*tree&quot;, &quot;apple*pie&quot;], 'C': [4, 9]}) def wildcard_search(pattern): # Comment this line to require exact match pattern = &quot;*&quot; + pattern + &quot;*&quot; # Apply pattern to every A values within df1 matching = df1['A'].apply(lambda x: fnmatch.fnmatch(x, pattern)) # Get index of largest member first_match = matching.idxmax() # If we have all zeros, then first_match will refer to the first # zero. Check for this. if matching.loc[first_match] == 0: return None # print(first_match) return df1.loc[first_match, 'A'] # Using df2 patterns, search through df1. Record values found. df2['merge_key'] = df2['A'].apply(wildcard_search) # Merge two dataframes, on cols merge_key and A res = df2.merge( df1, left_on='merge_key', right_on='A', suffixes=(&quot;_x&quot;, &quot;&quot;) # Don't add a suffix to df1's columns ) # Reorder cols, drop unneeded res = res[['A', 'B', 'C']] print(res) </code></pre> <p>This answer is adapted from <a href="https://stackoverflow.com/a/35406648/530160">this post</a>.</p>
python|regex|pandas|merge|wildcard
0
1,120
65,797,918
How to find the closest value on the left
<p>I have a function and I have detected the peaks of this function. I took the half of the height of each peak, now I want to find the intersection point, on the left only, between the function and the line that passes by the half of the height of the peak.</p> <p>Please, note that in the picture below, the line does not exactly pass by the halves of the peaks. Indeed, each peak has a particular value of mid-height and I need to find the intersection point on the left with this value.</p> <p>my function values are:</p> <pre><code>data= [2.50075550e+01 2.68589513e+01 2.88928569e+01 3.05468408e+01 3.17558878e+01 3.28585597e+01 3.41860820e+01 3.56781188e+01 3.68868815e+01 3.72671655e+01 3.65050587e+01 3.47342596e+01 3.24647483e+01 3.02772213e+01 2.84592589e+01 2.68653782e+01 2.51627240e+01 2.33132310e+01 2.18235229e+01 ...] </code></pre> <p>and I am getting the half of the heights using find_peaks from SciPy</p> <pre><code>heights.append(signal.find_peaks(data, height=height)[1]['peak_heights']) #Then calculating the half of each peak </code></pre> <p><img src="https://i.stack.imgur.com/mMiH3.png" alt="" /></p>
<p>The following code use the function <code>find_roots</code> from <a href="https://stackoverflow.com/questions/46909373/how-to-find-the-exact-intersection-of-a-curve-as-np-array-with-y-0">How to find the exact intersection of a curve with y==0?</a>. This function searches the exact interpolated x-value corresponding to the given half-value. The segment is restricted to the interval between the previous peak and the current peak, and from the resulting list the last root (if any) is taken.</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import matplotlib.pyplot as plt from scipy import signal def find_roots(x, y): s = np.abs(np.diff(np.sign(y))).astype(bool) return x[:-1][s] + np.diff(x)[s] / (np.abs(y[1:][s] / y[:-1][s]) + 1) np.random.seed(11235) x = np.linspace(0, 20, 500) data = np.convolve(1.1 ** np.random.randn(x.size).cumsum(), np.ones(40), 'same') data -= data.min() plt.plot(x, data, c='dodgerblue') peaks, _ = signal.find_peaks(data, height=40, distance=50) plt.scatter(x[peaks], data[peaks], color='turquoise') for p, prev in zip(peaks, np.append(0, peaks)): half = data[p] / 2 roots = find_roots(x[prev:p], data[prev:p] - half) if len(roots) &gt; 0: plt.scatter(roots[-1], half, color='crimson') plt.ylim(ymin=0) plt.show() </code></pre> <p><a href="https://i.stack.imgur.com/SKWMt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SKWMt.png" alt="example plot" /></a></p>
python|python-3.x|numpy|matplotlib|numpy-ndarray
1
1,121
65,641,499
How to parse a string calculation in a pandas dataframe column
<p>I am trying to parse a string calculation which is a column within a dataframe, if the calculation is static I can use the eval function. However this doesnt appear to work when you give it a column name.</p> <pre><code>import pandas as pd calcs = {'a': [1,1], 'b': [1,1], 'c': [1,1], 'calc': ['result=a*b','result=a+b']} df = pd.DataFrame(calcs, columns = ['a', 'b','c','calc']) print(df) a b c calc 1 1 1 a*b 1 1 1 a+b </code></pre> <p>can you please tell me how it would be possible to evaluate the calculation in the 'calc' column for each row in the dataframe.</p>
<p>You can <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.apply.html" rel="nofollow noreferrer"><code>df.apply</code></a>, <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.eval.html#pandas.DataFrame.eval" rel="nofollow noreferrer"><code>df.eval</code></a>:</p> <pre><code>&gt;&gt;&gt; df['result'] = df.apply(lambda x:x.to_frame().T.eval(x[-1]).item(), axis=1) &gt;&gt;&gt; df a b c calc result 0 1 1 1 a*b 1 1 1 1 1 a+b 2 </code></pre> <p>Or use <a href="https://numpy.org/doc/stable/reference/generated/numpy.diag.html" rel="nofollow noreferrer"><code>np.diag</code></a>:</p> <pre><code>&gt;&gt;&gt; import numpy as np &gt;&gt;&gt; df['result'] = np.diag(df.eval(df['calc'])) &gt;&gt;&gt; df a b c calc result 0 1 1 1 a*b 1 1 1 1 1 a+b 2 </code></pre>
python|python-3.x|pandas|dataframe|eval
1
1,122
65,836,299
LOOP univariate rolling window regression on entire DF Python
<p>I have a dataframe of 24 variables (24 columns x 4580 rows) from 2008 to 2020.</p> <p>My independent variable is the first one in the DF and the dependent variables are the 23 others.</p> <p>I've done a test for one rolling window regression and it works well; here is my code:</p> <pre><code>import statsmodels.api as sm from statsmodels.regression.rolling import RollingOLS import seaborn seaborn.set_style('darkgrid') pd.plotting.register_matplotlib_converters() x = sm.add_constant(df[['DIFFSWAP']]) y = df[['CADUSD']] rols = RollingOLS(y,x, window=60) rres = rols.fit() params = rres.params r_sq = rres.rsquared </code></pre> <p>Now, what I want to do: I'd like to write a loop to regress (rolling window) all the dependent variables of the DF (columns 2:24) on the independent variable (column 1) and store the coefficients and the rsquareds.</p> <p>My ultimate goal is to extract the Rsquareds and Coefficients, put them in dataframes (or lists or whatever) and then graph them.</p> <p>I'm new to Python so I'd be very grateful for any help.</p> <p>Thank you!</p>
<p>Can you throw it all in a loop and store the results in some other object like a dict?</p> <p>Potential solution:</p> <pre><code>data = {} for column in list(df.columns)[2:]: # iterate over columns 2 to 24 x = sm.add_constant(df[column]) y = df[['CADUSD']] ## This never changes from CADUSD, right? rols = RollingOLS(y, x, window=60) rres = rols.fit() params = rres.params r_sq = rres.rsquared # Store results from each column's fit as a new dict entry data[column] = {'params':params, 'r_sq':r_sq} results_df = pd.DataFrame(data).T </code></pre>
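<p>A follow-up sketch for the plotting goal, assuming each stored <code>r_sq</code> is the rolling R-squared series returned by <code>RollingOLS</code> and the series share an index:</p> <pre><code>import pandas as pd

# one column of rolling R-squared values per dependent variable
r_sq_df = pd.DataFrame({col: res['r_sq'] for col, res in data.items()})
r_sq_df.plot(title='Rolling R-squared per dependent variable')
</code></pre>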
python|pandas|list|dataframe|regression
1
1,123
63,531,557
Convert time to categorical variable
<pre><code>df Time | Day_of_the_Week 16:24:18 | Sat 17:00:01 | Sun 03:48:12 | Mon </code></pre> <p>Expected Output:</p> <pre><code>df Time | Day_of_the_Week | Time_Category 16:24:18 | Sat | Afternoon 17:00:01 | Sun | Evening 03:48:12 | Mon | Midnight </code></pre> <p>df['Time'][1] returns a &quot;datetime.time&quot;</p> <p>The following code returns an invalid syntax.</p> <pre><code>for a in df: if df['Time'] &gt; 17:00:00: df['Time_Category'] == 'Evening' elif df['Time'] &gt; '12:00:00': df['Time_Category'] == 'Afternoon' elif df['Time'] &gt; '04:00:00': df['Time_Category'] == 'Morning' else: df['Time_Category'] == 'Midnight' </code></pre>
<p>We can try <code>pd.cut</code></p> <pre><code>s = pd.cut(df.Time, pd.to_timedelta(['04:00:00','12:00:00','17:00:00','23:59:59']), labels=['Morning','Afternoon','Evening']).astype(str).replace('nan','Midnight') Out[43]: 0 Afternoon 1 Evening 2 Midnight Name: Time, dtype: object df['Time_category'] = s </code></pre>
python|python-3.x|regex|pandas|datetime
1
1,124
63,558,757
% Increase value in number record
<p>my dataset</p> <pre><code>name date record A 2018-09-18 95 A 2018-10-11 104 A 2018-10-30 230 A 2018-11-23 124 B 2020-01-24 95 B 2020-02-11 167 B 2020-03-07 78 </code></pre> <p>As you can see, there are several records by name and date.</p> <p>Compared to the previous record, I would like to see the record that rose the most.</p> <p><strong>output what I want</strong></p> <pre><code>name record_before_date record_before record_increase_date record_increase increase_rate A 2018-10-11 104 2018-10-30 230 121.25 B 2020-01-24 95 2020-02-11 167 75.79 </code></pre> <p>I`m not comparing the lowest to the highest, but I want to check the record with the highest ascent rate when the next record comes, and the rate of ascent.</p> <p><strong>increase rate formula = (record_increase - record_before) / record_before * 100</strong></p> <p>Any help would be appreciated. thanks for reading.</p>
<p>Use:</p> <pre><code>#get percent change per group s = df.groupby(&quot;name&quot;)[&quot;record&quot;].pct_change() #get row with maximal percent change df1 = df.loc[s.groupby(df['name']).idxmax()].add_suffix('_increase') #get row preceding the maximal percent change df2 = (df.loc[s.groupby(df['name']) .apply(lambda x: x.shift(-1).idxmax())].add_suffix('_before')) #join together df = pd.concat([df2.set_index('name_before'), df1.set_index('name_increase')], axis=1).rename_axis('name').reset_index() #apply formula df['increase_rate'] = (df['record_increase'].sub(df['record_before']) .div(df['record_before']) .mul(100)) print (df) name date_before record_before date_increase record_increase \ 0 A 2018-10-11 104 2018-10-30 230 1 B 2020-01-24 95 2020-02-11 167 increase_rate 0 121.153846 1 75.789474 </code></pre>
python|pandas|numpy|compare
2
1,125
63,731,947
Aggregate quantity based on string and additional column using pandas
<p>I have a data set that contains data that looks like this:</p> <pre><code>Month, Year, Quantity Sold, Product Name 11, 2017, 13, &quot;Creatine Powder Supplement - 500g&quot; 11, 2017, 10, &quot;Gummies 1 bag&quot; 11, 2017, 12, &quot;Creatine Powder Supplement - 1000g&quot; 11, 2017, 15, &quot;Creatine Powder Supplement - 1500g&quot; 11, 2017, 11, &quot;Glucosamine - 500g&quot; 11, 2017, 23, &quot;Glucosamine - 1500g&quot; 12, 2017, 17, &quot;Creatine Powder Supplement - 1000g&quot; 12, 2017, 24, &quot;Glucosamine - 500g&quot; 12, 2017, 13, &quot;Glucosamine - 1500g&quot; 1, 2018, 16, &quot;Creatine Powder Supplement - 500g&quot; 1, 2018, 13, &quot;Creatine Powder Supplement - 1000g&quot; 1, 2018, 10, &quot;Gummies 1 bag&quot; 1, 2018, 11, &quot;Glucosamine - 500g&quot; 1, 2018, 21, &quot;Glucosamine - 1500g&quot; </code></pre> <p>I want to calculate the total weight of products sold, separated by month and year, which would require extracting the weight of the product from the &quot;Product Name&quot; column, multiplying it by the &quot;Quantity Sold&quot; column, then providing the total for the related product.</p> <p>Desired output (I've only calculated the total weight sold for the first row):</p> <pre><code>Matched data set: Month, Year, Product Name, Total Weight Sold 11, 2017, Creatine Powder Supplement, 41000 11, 2017, Glucosamine, &lt;total&gt; 12, 2017, Creatine Powder Supplement, &lt;total&gt; 12, 2017, Glucosamine, &lt;total&gt; 1, 2018, Creatine Powder Supplement, &lt;total&gt; 1, 2018, Glucosamine, &lt;total&gt; </code></pre> <p>In addition to this, for any products that don't end in the pattern <code> - &lt;number&gt;g</code>, I want to output these into a separate data set so they can be reviewed.</p> <pre><code>UNmatched data set: Month, Year, Quantity Sold, Product Name 11, 2017, 10, &quot;Gummies 1 bag&quot; 1, 2018, 10, &quot;Gummies 1 bag&quot; </code></pre> <p>I'm thinking of using <code>str.extract</code> but I'm not totally sure how to do the math and then sum the resulting calculated total to other rows for the same product, into a new DataFrame or otherwise.</p> <p>Thanks</p>
<p>Here is a Python solution. It writes <em>error</em> lines to an output file and writes good lines to the terminal.</p> <pre><code>from collections import defaultdict import re d = defaultdict(int) with open('f0.txt', 'r') as f, open('err.txt', 'w') as fout: fout.write(f.readline()) # print header to err.txt for row in f: row = row.rstrip() if re.search(r'- \d+g&quot;', row): month, yr, qty, product = row.split(', ') product = product.replace('g', '').replace('&quot;', '') name, grams = product.split(' - ') key = ','.join([month, yr, name]) d[key] += int(qty) * int(grams) else: # handle this row (that doesn't have a Product and weight) fout.write(row + '\n') print(','.join(['Month', 'Year', 'Product Name', 'Total Sold'])) for key, total in d.items(): print(f'{key},{total}') </code></pre> <p>Prints to terminal:</p> <pre><code>Month,Year,Product Name,Total Sold 11,2017,Creatine,41000 11,2017,Glucosamine,40000 12,2017,Creatine,17000 12,2017,Glucosamine,31500 1,2018,Creatine,21000 1,2018,Glucosamine,37000 </code></pre> <p>Prints to err.txt:</p> <pre><code>Month, Year, Quantity Sold, Product Name 11, 2017, 10, &quot;Gummies 1 bag&quot; 1, 2018, 10, &quot;Gummies 1 bag&quot; </code></pre>
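<p>If you would rather stay in pandas, a rough sketch of the <code>str.extract</code> approach mentioned in the question could look like the following (column names are taken from the question, and the regex assumes every weighted product ends in <code>- &lt;number&gt;g</code>):</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'Month': [11, 11, 11, 1],
                   'Year': [2017, 2017, 2017, 2018],
                   'Quantity Sold': [13, 10, 12, 16],
                   'Product Name': ['Creatine Powder Supplement - 500g',
                                    'Gummies 1 bag',
                                    'Creatine Powder Supplement - 1000g',
                                    'Creatine Powder Supplement - 500g']})

# pull the base name and the weight in grams out of "Product Name"
parts = df['Product Name'].str.extract(r'^(.+) - (\d+)g$')
parts.columns = ['Name', 'Grams']

matched = df[parts['Grams'].notna()].copy()
unmatched = df[parts['Grams'].isna()]            # rows to review separately

matched['Product Name'] = parts.loc[matched.index, 'Name']
matched['Total Weight Sold'] = (matched['Quantity Sold']
                                * parts.loc[matched.index, 'Grams'].astype(int))

result = (matched.groupby(['Month', 'Year', 'Product Name'], as_index=False)
                 ['Total Weight Sold'].sum())
print(result)
print(unmatched)
</code></pre>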
python|pandas
1
1,126
63,365,169
Prediction retracing warning message while creating LOOCV in CNN keras/tensorflow2.0
<p>I am trying to write a custom for loop in order to execute a LOOCV using tensorflow 2.0 and Keras API. I am testing a CNN regression where each value is represented by 12 molecular images.</p> <p>My dataset consists of 504 images from 42 molecules and it looks like this:</p> <pre><code> file value intrain 0 mol1_scan0.bmp 6.456 True 1 mol1_scan30.bmp 6.456 True 2 mol1_scan60.bmp 6.456 True 3 mol1_scan90.bmp 6.456 True 4 mol1_scan120.bmp 6.456 True ... ... ... ... 499 mol42_scan210.bmp 6.244 True 500 mol42_scan240.bmp 6.244 True 501 mol42_scan270.bmp 6.244 True 502 mol42_scan300.bmp 6.244 True 503 mol42_scan330.bmp 6.244 True </code></pre> <p>My goal is to create a LOOCV, where at every step of the loop, 12 images are set as validation at a time, this 12 images have to belong to the same molecule. That's controlled by the 'intrain' column.</p> <p>The loop responsible for creating the LOOCV and running the CNN looks like this:</p> <pre><code> for lo in range(mols): loo = lo+1 t0= time.time() vald = np.repeat(True,mols) vald[loo] = False vals = [] for p in range(mols): vals.extend(np.repeat(vald[p],views)) data = pd.DataFrame({'file':paths,'valor':ys,'train':vals}) train = data[data.train==True] validation = data[data.train==False] print(f'##### Executing step {loo} out of {mols} ##### {datetime.now()}') datagen = ImageDataGenerator(rescale=1/255.) train_generator = datagen.flow_from_dataframe(dataframe=train, directory=original_train, color_mode=&quot;grayscale&quot;, x_col='file', y_col='valor', target_size=(200,220), class_mode='raw', batch_size=32) validation_generator = datagen.flow_from_dataframe(dataframe=validation, directory=original_train, color_mode=&quot;grayscale&quot;, x_col='file', y_col='valor', target_size=(200,220), class_mode='raw', batch_size=32, shuffle = False) model = Sequential() model.add(Conv2D(32,(3,3),activation='relu',input_shape=(200,220,1))) model.add(MaxPool2D((2,2))) model.add(Conv2D(64,(3,3),activation='relu')) model.add(MaxPool2D((2,2))) model.add(Conv2D(128,(3,3),activation='relu')) model.add(MaxPool2D((2,2))) model.add(Conv2D(128,(3,3),activation='relu')) model.add(MaxPool2D((2,2))) model.add(Flatten()) model.add(Dense(220,activation='relu')) model.add(Dropout(0.5)) model.add(Dense(1)) model.compile(loss='mse',optimizer = 'Adam') train_steps = train_generator.n//train_generator.batch_size validation_steps = validation_generator.n//validation_generator.batch_size history = model.fit(train_generator,steps_per_epoch=train_steps, epochs=1,verbose=0, validation_data=validation_generator,validation_steps=validation_steps) Y_pred = model.predict(validation_generator, validation_generator.n // (validation_generator.batch_size+1)) Y_pred = Y_pred.flatten() Y_true = validation_generator.labels Y_lab = np.repeat(loo,12) plotagem = pd.DataFrame({'yobs':Y_true,'ypred':Y_pred,'ylab':Y_lab}) plotagemFinal = pd.concat([plotagemFinal,plotagem]) t1 = time.time() - t0 print(f'##### Finalized step {loo} out of {mols} ##### ({datetime.now()} - {round(t1/60,2)} min)\n') </code></pre> <p>When I run this, I get a warning message, indicating that retracing has been triggered...</p> <blockquote> <p>WARNING:tensorflow:7 out of the last 7 calls to &lt;function Model.make_predict_function..predict_function at 0x0000023801DD0A60&gt; triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. 
For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to <a href="https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args" rel="nofollow noreferrer">https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args</a> and <a href="https://www.tensorflow.org/api_docs/python/tf/function" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/function</a> for more details.</p> </blockquote> <p>I found that the problem is caused only by this line:</p> <p>Y_pred = model.predict(validation_generator, validation_generator.n // (validation_generator.batch_size+1))</p> <p>I have tried to fix it by following the tips provided by the warning and various other solutions, mainly trying to set the prediction outside the loop, but none of them seem to solve my problem..</p> <p>Has anyone gone through this kind of issue and could solve it?</p> <p>I would really appreciate any help or tips regarding not only this warning issue, but the problem as a whole..</p> <p>Thanks in advance!</p>
<p>My code was loading the model and the running predict and I had a similar message as warning.</p> <p>WARNING:tensorflow:11 out of the last 11 calls to &lt;function Model.make_predict_function..predict_function at 0x000001A789F8F288&gt; triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to <a href="https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args" rel="nofollow noreferrer">https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args</a> and <a href="https://www.tensorflow.org/api_docs/python/tf/function" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/function</a> for more details.</p> <p>I placed</p> <pre><code>tf.compat.v1.disable_eager_execution() </code></pre> <p>in front of predict, after loading the model. Magically, the Warning message disappeared. This is a tip from <a href="https://stackoverflow.com/questions/58814130/tensorflow-2-0-custom-keras-metric-caused-tf-function-retracing-warning">Tensorflow 2.0: custom keras metric caused tf.function retracing warning</a></p> <p>My code looks like this:</p> <pre><code>import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers import matplotlib.pyplot as plt import numpy as np from tensorflow.keras.preprocessing import image import time tstart = time.time() reconstructed_model = keras.models.load_model(&quot;my_model2&quot;) tf.compat.v1.disable_eager_execution() image = image.load_img(&quot;testimages/1404.jpg&quot;, target_size = (28, 28)) # image.show() input_arr = keras.preprocessing.image.img_to_array(image) input_arr = np.array([input_arr]) # Convert single image to a batch. result = reconstructed_model.predict(input_arr) print(result) print(result[0][0]) print(result[0][1]) # training_set.class_indices if result[0][1] &gt; 0.9: prediction = 'This is a z-pattern' else: prediction = 'This is not a z-pattern' print(prediction) print ('display FPS:' , (time.time()-tstart)*1000 , &quot;msec&quot;) </code></pre>
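<p>A side note for the loop in the question: the warning appears because <code>model.predict()</code> builds a fresh <code>predict_function</code> for every newly created model, so constructing a model per LOOCV fold retraces it each time. If you would rather not turn off eager execution globally, a common alternative is to call the model directly on a batch (or use <code>predict_on_batch</code>) inside the loop. A rough sketch, with a toy model standing in for the real CNN:</p>
<pre><code>import numpy as np
import tensorflow as tf

# toy stand-in for the real CNN; shapes are made up for illustration
model = tf.keras.Sequential([tf.keras.layers.Flatten(input_shape=(8, 8, 1)),
                             tf.keras.layers.Dense(1)])
model.compile(loss='mse', optimizer='adam')

batch = np.random.rand(12, 8, 8, 1).astype('float32')

# calling the model directly avoids the per-model predict_function machinery
# that triggers the retracing warning when models are built in a loop
preds = model(batch, training=False).numpy().flatten()
print(preds.shape)
</code></pre>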
python|tensorflow|image-processing|keras|conv-neural-network
2
1,127
72,014,261
Pandas -- create ranks for records that are identical except for one column
<p>I want to fetch 1 record from duplicate rows in Df except one column. example :</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>index</th> <th>col1</th> <th>col2</th> <th>col3</th> <th>col4</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>abc</td> <td>efg</td> <td>hij</td> <td>op</td> </tr> <tr> <td>1</td> <td>abc</td> <td>efg</td> <td>hij</td> <td>ki</td> </tr> <tr> <td>2</td> <td>abc</td> <td>efg</td> <td>hij</td> <td>tf</td> </tr> <tr> <td>3</td> <td>abc</td> <td>efg</td> <td>hij</td> <td>ge</td> </tr> <tr> <td>4</td> <td>xyz</td> <td>mmm</td> <td>qt</td> <td>aa</td> </tr> <tr> <td>5</td> <td>xyz</td> <td>mmm</td> <td>qt</td> <td>bb</td> </tr> <tr> <td>6</td> <td>xyz</td> <td>mmm</td> <td>qt</td> <td>cc</td> </tr> </tbody> </table> </div> <p>(order by col4 asc) Thus, desired result could be like</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>index</th> <th>col1</th> <th>col2</th> <th>col3</th> <th>col4</th> <th>rank</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>abc</td> <td>efg</td> <td>hij</td> <td>op</td> <td>1</td> </tr> <tr> <td>1</td> <td>abc</td> <td>efg</td> <td>hij</td> <td>ki</td> <td>2</td> </tr> <tr> <td>2</td> <td>abc</td> <td>efg</td> <td>hij</td> <td>tf</td> <td>3</td> </tr> <tr> <td>3</td> <td>abc</td> <td>efg</td> <td>hij</td> <td>ge</td> <td>4</td> </tr> <tr> <td>4</td> <td>xyz</td> <td>mmm</td> <td>qt</td> <td>aa</td> <td>1</td> </tr> <tr> <td>5</td> <td>xyz</td> <td>mmm</td> <td>qt</td> <td>bb</td> <td>2</td> </tr> <tr> <td>6</td> <td>xyz</td> <td>mmm</td> <td>qt</td> <td>cc</td> <td>3</td> </tr> </tbody> </table> </div> <p>The goal is to obtain a rank for each similar result to fetch data as df = df[df['rank'] == 1]</p>
<p>Use:</p> <pre><code>df['rank'] = df.groupby(df.columns.difference(['col4']).tolist()).cumcount().add(1) </code></pre>
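<p>In case it helps, here is a quick self-contained check of that one-liner against the sample data from the question, including the final <code>df[df['rank'] == 1]</code> filter:</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'col1': ['abc'] * 4 + ['xyz'] * 3,
                   'col2': ['efg'] * 4 + ['mmm'] * 3,
                   'col3': ['hij'] * 4 + ['qt'] * 3,
                   'col4': ['op', 'ki', 'tf', 'ge', 'aa', 'bb', 'cc']})

# rank = running count within each group of rows identical except for col4
df['rank'] = df.groupby(df.columns.difference(['col4']).tolist()).cumcount().add(1)

print(df)
print(df[df['rank'] == 1])   # one representative row per group
</code></pre>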
python|pandas|dataframe|group-by|rank
0
1,128
72,046,920
Bokeh callback function doesnt update scatterplot colors based on clusters
<p>My intial plot works great and shows the colors of the intial 3 clusters. However, when I select a new k value with the silder widget, the new clusters are all colored grey.</p> <pre><code>import numpy as np import pandas as pd import random from typing import List, Tuple from bokeh.models import ColumnDataSource, Slider, Div, Select from bokeh.sampledata.iris import flowers from bokeh.plotting import figure, curdoc from bokeh.layouts import column, row from bokeh.palettes import Spectral10 from bokeh.transform import factor_cmap # Use these centroids in the first iteration of you algorithm if &quot;Random Centroids&quot; is set to False in the Dashboard DEFAULT_CENTROIDS = np.array([[5.664705882352942, 3.0352941176470587, 3.3352941176470585, 1.0176470588235293], [5.446153846153847, 3.2538461538461543, 2.9538461538461536, 0.8846153846153846], [5.906666666666667, 2.933333333333333, 4.1000000000000005, 1.3866666666666667], [5.992307692307692, 3.0230769230769234, 4.076923076923077, 1.3461538461538463], [5.747619047619048, 3.0714285714285716, 3.6238095238095243, 1.1380952380952383], [6.161538461538462, 3.030769230769231, 4.484615384615385, 1.5307692307692309], [6.294117647058823, 2.9764705882352938, 4.494117647058823, 1.4], [5.853846153846154, 3.215384615384615, 3.730769230769231, 1.2076923076923078], [5.52857142857143, 3.142857142857143, 3.107142857142857, 1.007142857142857], [5.828571428571429, 2.9357142857142855, 3.664285714285714, 1.1]]) def get_closest(data_point: np.ndarray, centroids: np.ndarray): &quot;&quot;&quot; Takes a data_point and a nd.array of multiple centroids and returns the index of the centroid closest to data_point by computing the euclidean distance for each centroid and picking the closest. &quot;&quot;&quot; N = centroids.shape[0] dist = np.empty(N) for i, c in enumerate(centroids): dist[i] = np.linalg.norm(c - data_point) index_min = np.argmin(dist) return index_min def to_classes(clustering): # Get number of samples (you can pass it directly to the function) num_samples = sum(x.shape[0] for x in clustering) indices = np.empty((num_samples,)) # An empty array with correct size for ith, cluster in enumerate(clustering): # use cluster indices to assign to correct the cluster index indices[cluster] = ith return indices.astype(int) def k_means(data_np: np.ndarray, k:int=3, n_iter:int=500, random_initialization=False) -&gt; Tuple[np.ndarray, int]: &quot;&quot;&quot; :param data: your data, a numpy array with shape (n_entries, n_features) :param k: The number of clusters to compute :param n_iter: The maximal numnber of iterations :param random_initialization: If False, DEFAULT_CENTROIDS are used as the centroids of the first iteration. 
:return: A tuple (cluster_indices: A numpy array of cluster_indices, n_iterations: the number of iterations it took until the algorithm terminated) &quot;&quot;&quot; # Initialize the algorithm by assigning random cluster labels to each entry in your dataset k=k+1 centroids = data_np[random.sample(range(len(data_np)), k)] labels = np.array([np.argmin([(el - c) ** 2 for c in centroids]) for el in data_np]) clustering = [] for k in range(k): clustering.append(data_np[labels == k]) # Implement K-Means with a while loop, which terminates either if the centroids don't move anymore, or # if the number of iterations exceeds n_iter counter = 0 while counter &lt; n_iter: # Compute the new centroids, if random_initialization is false use DEFAULT_CENTROIDS in the first iteration # if you use DEFAULT_CENTROIDS, make sure to only pick the k first entries from them. if random_initialization is False and counter == 0: centroids = DEFAULT_CENTROIDS[random.sample(range(len(DEFAULT_CENTROIDS)), k)] # Update the cluster labels using get_closest labels = np.array([get_closest(el, centroids) for el in data_np]) clustering = [] for i in range(k): clustering.append(np.where(labels == i)[0]) counter += 1 new_centroids = np.zeros_like(centroids) for i in range(k): if len(clustering[i]) &gt; 0: new_centroids[i] = data_np[clustering[i]].mean(axis=0) else: new_centroids[i] = centroids[i] # if the centroids didn't move, exit the while loop if clustering is not None and (centroids != new_centroids).sum() == 0: break else: centroids = new_centroids pass # return the final cluster labels and the number of iterations it took clustering = to_classes(clustering) return clustering, counter def callback(attr, old, new): # recompute the clustering and update the colors of the data points based on the result k = slider_k.value_throttled init = select_init.value clustering_new, counter_new = k_means(data_np,k,500,init) source.data.update((ColumnDataSource(dict(petal_length=data['petal_length'], sepal_length=data['sepal_length'], petal_width=data['petal_width'], clustering=clustering_new.astype(str)))).data) div.text = 'Number of iterations: %d'%(counter_new) pass # read and store the dataset data: pd.DataFrame = flowers.copy(deep=True) data = data.drop(['species'], axis=1) # Create a copy of the data as numpy array, which you can use for computing the clustering data_np = np.asarray(data) # Create the dashboard # 1. A Select widget to choose between random initialization or using the DEFAULT_CENTROIDS on top select_init = Select(title='Random Centroids',value='False',options=['True','False']) # 2. A Slider to choose a k between 2 and 10 (k being the number of clusters) slider_k = Slider(start=2,end=10,value=3,step=1,title='k') # 4. Connect both widgets to the callback select_init.on_change('value',callback) slider_k.on_change('value_throttled',callback) # 3. A ColumnDataSource to hold the data and the color of each point you need clustering, counter = k_means(data_np,3,500,False) source = ColumnDataSource(dict(petal_length=data['petal_length'], sepal_length=data['sepal_length'], petal_width=data['petal_width'], clustering=clustering.astype(str))) # 4. Two plots displaying the dataset based on the following table, have a look at the images # in the handout if this confuses you. 
# # Axis/Plot Plot1 Plot2 # X Petal length Petal width # Y Sepal length Petal length # # Use a categorical color mapping, such as Spectral10, have a look at this section of the bokeh docs: # https://docs.bokeh.org/en/latest/docs/user_guide/categorical.html#filling plot1 = figure(title='Scatterplot of flowers distribution by petal length and sepal length') plot1.yaxis.axis_label = 'Sepal length' plot1.xaxis.axis_label = 'Petal length' plot1.scatter(x='petal_length', y='sepal_length', fill_alpha=0.4, source=source, color=factor_cmap('clustering', palette=Spectral10, factors=np.unique(clustering).astype(str))) plot2 = figure(title='Scatterplot of flowers distribution by petal width and petal length') plot2.yaxis.axis_label = 'Petal length' plot2.xaxis.axis_label = 'Petal width' plot2.scatter(x='petal_width', y='petal_length', fill_alpha=0.4, source=source, color=factor_cmap('clustering', palette=Spectral10, factors=np.unique(clustering).astype(str))) # 5. A Div displaying the currently number of iterations it took the algorithm to update the plot. div = Div(text='Number of iterations: %d'%(counter)) div.on_change('text',callback) lt = row(column(select_init,slider_k,div),plot1,plot2) curdoc().add_root(lt) </code></pre> <pre><code>bokeh serve --show file_name.py </code></pre> <p><a href="https://i.stack.imgur.com/VX0rV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VX0rV.png" alt="enter image description here" /></a></p> <p>Then when I select a new k value with the slider, the graph looks like so. <a href="https://i.stack.imgur.com/Uw32o.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Uw32o.png" alt="enter image description here" /></a></p> <p>How can I update the colors of each cluster in the callback function?</p> <p>This is the expected output after selecting k=7 <a href="https://i.stack.imgur.com/Aopj4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Aopj4.png" alt="enter image description here" /></a></p>
<p>This seemed to work in my callback function</p> <pre><code>def callback(attr, old, new): # recompute the clustering and update the colors of the data points based on the result k = slider_k.value_throttled init = select_init.value clustering_new, counter_new = k_means(data_np,k,500,init) source.data['clustering'] = clustering_new.astype(str) mapper = factor_cmap('clustering',palette=Spectral10,factors=np.unique(clustering_new).astype(str)) scatter1.glyph.fill_color = mapper scatter2.glyph.fill_color = mapper scatter1.glyph.line_color = mapper scatter2.glyph.line_color = mapper div.text = 'Number of iterations: %d'%(counter_new) pass </code></pre> <p>And changed the arguments in my plots</p> <pre><code>mapper = factor_cmap('clustering',palette=Spectral10,factors=np.unique(clustering).astype(str)) plot1 = figure(title='Scatterplot of flowers distribution by petal length and sepal length') plot1.yaxis.axis_label = 'Sepal length' plot1.xaxis.axis_label = 'Petal length' scatter1 = plot1.scatter(x='petal_length', y='sepal_length', fill_alpha=0.4, source=source, fill_color=mapper, line_color=mapper) plot2 = figure(title='Scatterplot of flowers distribution by petal width and petal length') plot2.yaxis.axis_label = 'Petal length' plot2.xaxis.axis_label = 'Petal width' scatter2 = plot2.scatter(x='petal_width', y='petal_length', fill_alpha=0.4, source=source, fill_color=mapper, line_color=mapper) </code></pre>
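<p>For what it's worth, the grey points come from the fact that <code>factor_cmap</code> only colours factors it was given up front; labels that are not in its <code>factors</code> list fall back to the mapper's default colour (grey). A stripped-down, standalone illustration of rebuilding the mapper for whatever labels are currently in the source (no Bokeh server needed):</p>
<pre><code>import numpy as np
from bokeh.plotting import figure, output_file, save
from bokeh.models import ColumnDataSource
from bokeh.transform import factor_cmap
from bokeh.palettes import Spectral10

rng = np.random.default_rng(0)
k = 7                                   # pretend k_means() returned 7 clusters
labels = rng.integers(0, k, 150).astype(str)
source = ColumnDataSource(dict(x=rng.normal(size=150),
                               y=rng.normal(size=150),
                               clustering=labels))

p = figure(title='factor_cmap rebuilt for the current labels')
# the factors list must contain every label currently present in the data
mapper = factor_cmap('clustering', palette=Spectral10,
                     factors=np.unique(labels).tolist())
r = p.scatter('x', 'y', source=source, fill_color=mapper, line_color=mapper)

# after writing new labels into source.data['clustering'], reassigning the
# glyph's colour transform (as in the callback above) recolours the points:
# r.glyph.fill_color = factor_cmap('clustering', palette=Spectral10,
#                                  factors=np.unique(new_labels).tolist())

output_file('factor_cmap_demo.html')
save(p)
</code></pre>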
python|numpy|cluster-analysis|bokeh|k-means
0
1,129
55,472,095
AttributeError: module 'resource' has no attribute 'getpagesize'
<p>I am trying to use Tensorflow Object Detection API and I follow the steps mentioned in the given link -</p> <p><a href="https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/install.html#tf-models-install" rel="nofollow noreferrer">https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/install.html#tf-models-install</a></p> <p>When I try to access the Object Detection Jupyter Notebook through <code>jupyter notebook</code></p> <p>I am facing the below exception</p> <pre><code>Traceback (most recent call last): File "/usr/local/bin/jupyter-notebook", line 7, in &lt;module&gt; from notebook.notebookapp import main File "/home/dinesh/.local/lib/python3.6/site- packages/notebook/notebookapp.py", line 79, in &lt;module&gt; from .base.handlers import Template404, RedirectWithParams File "/home/dinesh/.local/lib/python3.6/site- packages/notebook/base/handlers.py", line 32, in &lt;module&gt; import prometheus_client File "/home/dinesh/.local/lib/python3.6/site- packages/prometheus_client/__init__.py", line 7, in &lt;module&gt; from . import process_collector File "/home/dinesh/.local/lib/python3.6/site- packages/prometheus_client/process_collector.py", line 12, in &lt;module&gt; _PAGESIZE = resource.getpagesize() AttributeError: module 'resource' has no attribute 'getpagesize' </code></pre> <p>I am using </p> <pre><code>Python - 3.6.3 Jupyter - 1.0.0 </code></pre> <p>How can I overcome this exception?</p>
<p>Got a similar error. My project contained modules (folders)</p> <ul> <li>model</li> <li>resource (replaced with resources)</li> <li>service</li> </ul> <p>The local <code>resource</code> package was shadowing Python's built-in <code>resource</code> module, so the import resolved to the wrong one. Renaming the folder to <code>resources</code> (any name that doesn't clash with the standard library works) fixed it.</p>
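<p>A quick way to check whether a local module is shadowing the standard-library one (paths below are only examples):</p>
<pre><code>import importlib.util

spec = importlib.util.find_spec('resource')
if spec is None:
    print('no module named resource on this platform')
else:
    # stdlib: a path inside your Python installation (e.g. .../lib-dynload/resource...so)
    # shadowed: a path inside your own project -- rename that folder/file
    print(spec.origin)
</code></pre>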
python-3.x|tensorflow|jupyter-notebook|object-detection
1
1,130
55,250,161
Identifying stops from telemetric data using Pandas
<p>I have telemetric (latitude, longitude, time, mileage) from a large number of vehicles. Each pandas dataframe has one vehicle's travel over time and I would like to identify when the vehicle stops.</p> <p>I have used pandas groupby to identify if the vehicle is moving between rows (accounting for some drift).</p> <p><code>df['Stopped'] = (df.groupby('DAY')['LAT'].diff() &lt;= 0.0001) &amp; (df.groupby('DAY')['LNG'].diff() &lt;= 0.0001) </code> This is not flagging the stops accurately though. Here is a bit where the vehicle is clearly moving (sorry its in HTML code - I don't know how to get it to format as a table otherwise).</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code> Stopped LAT LNG DAY 401218 True 22.6874 113.9487 2018-10-15 401219 True 22.6874 113.9487 2018-10-15 401220 True 22.6874 113.9487 2018-10-15 401221 True 22.6873 113.9487 2018-10-15 401222 True 22.6869 113.9483 2018-10-15 401223 True 22.6863 113.9479 2018-10-15 401224 True 22.6859 113.9476 2018-10-15 401225 True 22.6854 113.9471 2018-10-15 401226 True 22.6849 113.9468 2018-10-15 401227 True 22.6844 113.9463 2018-10-15 401228 True 22.6841 113.9457 2018-10-15 401229 True 22.6839 113.9449 2018-10-15 401230 True 22.6838 113.9438 2018-10-15 401231 True 22.6837 113.9428 2018-10-15 401232 True 22.6837 113.9417 2018-10-15 401233 True 22.6836 113.9409 2018-10-15 401234 True 22.6835 113.9400 2018-10-15 401235 True 22.6833 113.9392 2018-10-15 401236 True 22.6832 113.9387 2018-10-15 401237 True 22.6832 113.9384 2018-10-15</code></pre> </div> </div> </p>
<p>Look at the output of <code>df.groupby('DAY')['LAT'].diff()</code>. Many values are negative, so you need to take the absolute value before you check if they are less than your cutoff value:</p> <pre><code>df['Stopped2'] = (df.groupby('DAY')['LAT'].diff().abs() &lt;= 0.0001) &amp; (df.groupby('DAY')['LNG'].diff().abs() &lt;= 0.0001) print(df) </code></pre> <pre><code> Stopped LAT LNG DAY Stopped2 401218 True 22.6874 113.9487 10/15/18 False 401219 True 22.6874 113.9487 10/15/18 True 401220 True 22.6874 113.9487 10/15/18 True 401221 True 22.6873 113.9487 10/15/18 True 401222 True 22.6869 113.9483 10/15/18 False 401223 True 22.6863 113.9479 10/15/18 False 401224 True 22.6859 113.9476 10/15/18 False 401225 True 22.6854 113.9471 10/15/18 False 401226 True 22.6849 113.9468 10/15/18 False 401227 True 22.6844 113.9463 10/15/18 False 401228 True 22.6841 113.9457 10/15/18 False 401229 True 22.6839 113.9449 10/15/18 False 401230 True 22.6838 113.9438 10/15/18 False 401231 True 22.6837 113.9428 10/15/18 False 401232 True 22.6837 113.9417 10/15/18 False 401233 True 22.6836 113.9409 10/15/18 False 401234 True 22.6835 113.9400 10/15/18 False 401235 True 22.6833 113.9392 10/15/18 False 401236 True 22.6832 113.9387 10/15/18 False 401237 True 22.6832 113.9384 10/15/18 False </code></pre>
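<p>A small self-contained version of that fix, plus one possible follow-up step for turning the boolean flag into distinct stop periods (run-length labelling with <code>cumsum</code>):</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'DAY': ['2018-10-15'] * 6,
                   'LAT': [22.6874, 22.6874, 22.6874, 22.6873, 22.6869, 22.6863],
                   'LNG': [113.9487, 113.9487, 113.9487, 113.9487, 113.9483, 113.9479]})

# absolute per-row movement, compared against a small drift threshold
lat_move = df.groupby('DAY')['LAT'].diff().abs()
lng_move = df.groupby('DAY')['LNG'].diff().abs()
df['Stopped'] = (lat_move &lt;= 0.0001) &amp; (lng_move &lt;= 0.0001)

# label each run of consecutive identical flags, then keep the stopped runs
df['stop_id'] = (df['Stopped'] != df['Stopped'].shift()).cumsum()
print(df)
print(df[df['Stopped']].groupby('stop_id').size())
</code></pre>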
python|python-3.x|pandas|pandas-groupby
0
1,131
66,792,507
Pandas how to shift column by datetime into datetime not in index
<p>I want to shift a pandas column by an amount of time, and reindex the dataframe to accommodate this shift. Take the following dataframe:</p> <pre><code>df = pd.DataFrame({&quot;Col1&quot;: [10, 20, 15, 30, 45], &quot;Col2&quot;: [13, 23, 18, 33, 48], &quot;Col3&quot;: [17, 27, 22, 37, 52]}, index=pd.date_range(&quot;11:00&quot;, &quot;13:00&quot;, freq=&quot;30min&quot;)) </code></pre> <p>I would like to shift <code>Col1</code> by 15 minutes and have the datetime index of the dataframe be updated to allow for these new values. However, if I shift <code>Col1</code> by 15 minutes, you can see that because it doesn't align with the index, the entire column is just set to <code>NaN</code> values:</p> <pre><code>df[&quot;Col1&quot;] = df[&quot;Col1&quot;].shift(15, freq=&quot;T&quot;) print(df) Col1 Col2 Col3 2021-03-25 11:00:00 NaN 13 17 2021-03-25 11:30:00 NaN 23 27 2021-03-25 12:00:00 NaN 18 22 2021-03-25 12:30:00 NaN 33 37 2021-03-25 13:00:00 NaN 48 52 </code></pre> <p>I would like the dataframe to look like this:</p> <pre><code> Col1 Col2 Col3 2021-03-25 11:00:00 NaN 13.0 17.0 2021-03-25 11:15:00 10.0 NaN NaN 2021-03-25 11:30:00 NaN 23.0 27.0 2021-03-25 11:45:00 20.0 NaN NaN 2021-03-25 12:00:00 NaN 18.0 22.0 2021-03-25 12:15:00 15.0 NaN NaN 2021-03-25 12:30:00 NaN 33.0 37.0 2021-03-25 12:45:00 30.0 NaN NaN 2021-03-25 13:00:00 NaN 48.0 52.0 2021-03-25 13:15:00 45.0 NaN NaN </code></pre> <p>(which I created with the following code:)</p> <pre><code>df = pd.DataFrame({&quot;Col1&quot;: [float('nan'), 10, float('nan'), 20, float('nan'), 15, float('nan'), 30, float('nan'), 45], &quot;Col2&quot;: [13, float('nan'), 23, float('nan'), 18, float('nan'), 33, float('nan'), 48, float('nan')], &quot;Col3&quot;: [17, float('nan'), 27, float('nan'), 22, float('nan'), 37, float('nan'), 52, float('nan')]}, index=pd.date_range(&quot;11:00&quot;, &quot;13:15&quot;, freq=&quot;15min&quot;)) </code></pre> <p>If there are any suggestions for this that would be greatly appreciated!</p>
<p>BENY's answer works, but I found to be slow with my very large dataframe. As such, I did the following and it worked for me and was much quicker:</p> <pre><code>dt_index2 = pd.date_range(df.index[0], df.index[-1], freq=&quot;15min&quot;) df = df.reindex(dt_index2) df[&quot;Col1&quot;] = df[&quot;Col1&quot;].shift(15, freq=&quot;T&quot;) print(df) Col1 Col2 Col3 2021-03-25 11:00:00 NaN 13.0 17.0 2021-03-25 11:15:00 10.0 NaN NaN 2021-03-25 11:30:00 NaN 23.0 27.0 2021-03-25 11:45:00 20.0 NaN NaN 2021-03-25 12:00:00 NaN 18.0 22.0 2021-03-25 12:15:00 15.0 NaN NaN 2021-03-25 12:30:00 NaN 33.0 37.0 2021-03-25 12:45:00 30.0 NaN NaN 2021-03-25 13:00:00 NaN 48.0 52.0 </code></pre> <p>EDIT:</p> <p>The reason I wanted to do this task is because each column needed to be offset by its index (so Col1 by 1 second, Col2 by 2 seconds etc) and then all joined into one column. For this purpose, BENY's answer is better when used in conjunction with mine, as it keeps memory usage to a minimum. In this case, you should do the following:</p> <pre><code>dt_index2 = pd.date_range(df.index[0], df.index[-1], freq=&quot;S&quot;) df2 = pd.DataFrame(columns=[&quot;concentration&quot;], index=dt_index2) df2[&quot;concentration&quot;] = df2[&quot;concentration&quot;].add(df.pop(&quot;Col1&quot;).shift(1, freq=&quot;S&quot;), fill_value=0) df2[&quot;concentration&quot;] = df2[&quot;concentration&quot;].add(df.pop(&quot;Col2&quot;).shift(2, freq=&quot;S&quot;), fill_value=0) df2[&quot;concentration&quot;] = df2[&quot;concentration&quot;].add(df.pop(&quot;Col3&quot;).shift(3, freq=&quot;S&quot;), fill_value=0) </code></pre> <p>Using this ensures you only have a single column with a dense index, where as if you reindex the other dataframe, you end up with 3 columns.</p>
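<p>For reference, another spelling of the same idea that lets pandas build the union index for you, by shifting the one column and outer-joining it back (shown here on the small example frame from the question):</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'Col1': [10, 20, 15, 30, 45],
                   'Col2': [13, 23, 18, 33, 48],
                   'Col3': [17, 27, 22, 37, 52]},
                  index=pd.date_range('11:00', '13:00', freq='30min'))

shifted = df.pop('Col1').shift(15, freq='T')       # shift the timestamps themselves
out = df.join(shifted, how='outer').sort_index()   # union of both indexes
print(out)
</code></pre>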
python|pandas|dataframe|datetime
1
1,132
47,129,156
Tensorflow - How to use tf.gather() to deliver part of the first layer's inputs to one of the next layer's filters
<p>Let's say I have this setup:</p> <pre><code>conv1 = tf.layers.conv2d( inputs=input_layer, filters=4, kernel_size=[14, 14], padding="valid", activation=tf.nn.relu ) conv2 = tf.layers.conv2d( inputs=conv1, filters=16, kernel_size=[5, 5], padding="valid", activation=tf.nn.relu ) </code></pre> <p>Like the partial connection scheme in <a href="http://journals.tubitak.gov.tr/elektrik/issues/elk-16-24-3/elk-24-3-40-1311-58.pdf" rel="nofollow noreferrer">this paper</a>, I want to deliver separate numbers of layers from <code>conv1</code> to one filter in <code>conv2</code>. Do I use <code>tf.gather()</code> for this, and how?</p>
<p>tf.gather() makes slices only along one axis, so for your case tf.gather_nd() would work better. So it should be as following:</p> <pre><code># make a placeholder for indices of the outputs you will pick, # or make it constant if they won't change indices = tf.placeholder(tf.int32,[None,4]) conv1 = tf.layers.conv2d( inputs=input_layer, filters=4, kernel_size=[14, 14], padding="valid", activation=tf.nn.relu ) # select required outputs new_input = tf.gather_nd(conv,indices) # or you can hard-wire them, if they're constant new_input = tf.gather_nd(conv, [[0,0,0,0],[1,0,0,0]]) # then you need to reshape it back a proper size # as previous operation will return flattened list # (unless you slice raws, but not single outputs). # Depending what size you got and what you need, but possibly something like that: required_shape = [-1,10,10,4] new_input = tf.reshape(new_input,required_shape) # or instead of the constant array feed a tensor with new shape as well conv2 = tf.layers.conv2d( inputs=new_input, filters=16, kernel_size=[5, 5], padding="valid", activation=tf.nn.relu ) </code></pre> <p>In case of gather_nd you can specify explicit elements of the array along each axis. There is a good example in the official documentation:</p> <pre><code>indices = [[1]] params = [[['a0', 'b0'], ['c0', 'd0']], [['a1', 'b1'], ['c1', 'd1']]] output = [[['a1', 'b1'], ['c1', 'd1']]] indices = [[0, 1], [1, 0]] params = [[['a0', 'b0'], ['c0', 'd0']], [['a1', 'b1'], ['c1', 'd1']]] output = [['c0', 'd0'], ['a1', 'b1']] indices = [[0, 0, 1], [1, 0, 1]] params = [[['a0', 'b0'], ['c0', 'd0']], [['a1', 'b1'], ['c1', 'd1']]] output = ['b0', 'b1'] </code></pre>
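<p>A possibly simpler route, if the goal is just to route a subset of <code>conv1</code>'s feature maps (channels) into a particular filter of the next layer: <code>tf.gather</code> with an explicit <code>axis</code> can pick whole channels, so <code>gather_nd</code> and the reshape are not needed for that case. A small sketch, written against plain tensors and tf.keras rather than the older tf.layers API used in the question:</p>
<pre><code>import tensorflow as tf

# pretend output of conv1: batch of 2, spatial 8x8, 4 feature maps
conv1_out = tf.random.normal([2, 8, 8, 4])

# pick feature maps 0 and 2 as the input for one particular conv2 filter
subset = tf.gather(conv1_out, indices=[0, 2], axis=-1)
print(subset.shape)                    # (2, 8, 8, 2)

# a small convolution applied only to that subset of channels
partial_filter = tf.keras.layers.Conv2D(1, (5, 5), padding='valid')
print(partial_filter(subset).shape)    # (2, 4, 4, 1)
</code></pre>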
python|tensorflow
0
1,133
47,314,000
divide values of column based on some other column
<p>I have a pandas data frame as can be seen below:</p> <pre><code>index ID Val 1 BBID_2041 1 2 BBID_2041 1 3 BBID_2041 3 4 BBID_2041 1 5 BBID_2041 2 6 BBID_2041 1 7 BBID_2041 1 8 BBID_20410 1 9 BBID_20410 1 10 BBID_20410 5 11 BBID_20410 1 </code></pre> <p>Now I want to divide every value in column Val by the number of rows in that ID's group. For example, I want to divide the values at index 1 to 7 by 7, as there are 7 rows in total for ID BBID_2041, and so on. I could use a loop for that, but what would be a fast way to do it?</p>
<p>By using <code>transform</code></p> <pre><code>df['New']=df.groupby('ID').Val.transform(lambda x : x/len(x)) df Out[814]: index ID Val New 0 1 BBID_2041 1 0.142857 1 2 BBID_2041 1 0.142857 2 3 BBID_2041 3 0.428571 3 4 BBID_2041 1 0.142857 4 5 BBID_2041 2 0.285714 5 6 BBID_2041 1 0.142857 6 7 BBID_2041 1 0.142857 7 8 BBID_20410 1 0.250000 8 9 BBID_20410 1 0.250000 9 10 BBID_20410 5 1.250000 10 11 BBID_20410 1 0.250000 </code></pre>
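<p>A slightly more direct spelling of the same idea, using the built-in group size instead of a lambda:</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'ID': ['BBID_2041'] * 7 + ['BBID_20410'] * 4,
                   'Val': [1, 1, 3, 1, 2, 1, 1, 1, 1, 5, 1]})

# divide each value by the number of rows in its ID group
df['New'] = df['Val'] / df.groupby('ID')['Val'].transform('size')
print(df)
</code></pre>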
python|pandas|dataframe|grouping
2
1,134
47,290,682
Index datetime in a dataframe from a list of datetimes
<p>First of all, I did this function that returns one date frame but I wanna use in a list of dates and then concatenate them in one data frame with the index being the date time stamp that is in the list</p> <pre><code>lista = [datetime.datetime(2017, 11, 11, 0, 0), datetime.datetime(2017, 11, 12, 0, 0), datetime.datetime(2017, 11, 13, 0, 0)] </code></pre> <p>This is my function:</p> <pre><code>def min_f(yyear,mmonth,dday): a_00_04 = int( df_output.loc[ (df_output.index &gt; timezone('Europe/Berlin').localize(datetime(yyear,mmonth,dday)-timedelta(hours=1))) &amp; (df_output.index &lt;= timezone('Europe/Berlin').localize(datetime(yyear,mmonth,dday,4)+timedelta(hours=2))) ].min() ) #.tolist()[0]# a_04_08 = int( df_output.loc[ (df_output.index &gt; timezone('Europe/Berlin').localize(datetime(yyear,mmonth,dday,4,00)-timedelta(hours=1))) &amp; (df_output.index &lt;= timezone('Europe/Berlin').localize(datetime(yyear,mmonth,dday,8,00)+timedelta(hours=2))) ].min() ) a_08_12 = int( df_output.loc[ (df_output.index &gt; timezone('Europe/Berlin').localize(datetime(yyear,mmonth,dday,8,00)-timedelta(hours=1))) &amp; (df_output.index &lt;= timezone('Europe/Berlin').localize(datetime(yyear,mmonth,dday,12,00)+timedelta(hours=2))) ].min() ) a_12_16 = int( df_output.loc[ (df_output.index &gt; timezone('Europe/Berlin').localize(datetime(yyear,mmonth,dday,12,00)-timedelta(hours=1))) &amp; (df_output.index &lt;= timezone('Europe/Berlin').localize(datetime(yyear,mmonth,dday,16,00)+timedelta(hours=2))) ].min() ) a_16_20 = int( df_output.loc[ (df_output.index &gt; timezone('Europe/Berlin').localize(datetime(yyear,mmonth,dday,16,00)-timedelta(hours=1))) &amp; (df_output.index &lt;= timezone('Europe/Berlin').localize(datetime(yyear,mmonth,dday,20,00)+timedelta(hours=2))) ].min() ) a_20_24 = int( df_output.loc[ (df_output.index &gt; timezone('Europe/Berlin').localize(datetime(yyear,mmonth,dday,20,00)-timedelta(hours=1))) &amp; (df_output.index &lt;= timezone('Europe/Berlin').localize(datetime(yyear,mmonth,dday+1,00,00)+timedelta(hours=2))) ].min() ) d = {'00_04': [a_04_08], '04_08': [a_04_08], '08_12': [a_08_12], '12_16': [a_12_16],'20_24': [a_20_24]} df = pd.DataFrame(data=d) return df </code></pre> <p>Right now from one looks like this:</p> <pre><code> 00_04 04_08 08_12 12_16 20_24 0 21359 21359 10486 6747 14335 </code></pre> <p>And I wanna to set it like this:</p> <pre><code> 00_04 04_08 08_12 12_16 20_24 2017-11-10 21359 21359 10486 6747 14335 </code></pre> <p>But adding also the results from my list</p>
<p>Not positive what you're going for, but maybe something like this:</p> <pre><code>def create_df(dl): idx = [] cols = { '00_04': [], '04_08': [], '08_12': [], '12_16': [], '20_24': [], } for date in dl: cols['00_04'].append(int( df_output.loc[ (df_output.index &gt; timezone('Europe/Berlin').localize(datetime(date.year,date.month,date.day)-timedelta(hours=1))) &amp; (df_output.index &lt;= timezone('Europe/Berlin').localize(datetime(date.year,date.month,date.day,4)+timedelta(hours=2))) ].min() )) ... idx.append(date) return pd.DataFrame(cols, index=idx) </code></pre> <p>It's hard to tell what you're going for here, but it seems you may be missing a <code>16-24</code> field as well?</p> <p>Hope this helps.</p>
python|pandas|time
1
1,135
47,424,766
How can I count the data in one column based on another column?
<p>I have two dataframes as below:</p> <pre><code>df1 = DataFrame({'a': np.random.randint(10, size=2)}) df2 = DataFrame({'a': np.random.randint(10, size=100)}) </code></pre> <p>There are two numbers in df1, and I want to count how many times each of them appears in df2, with the counts placed to the right of df1['a'].</p> <p>I tried a for loop, but I get the error: Length of values does not match length of index.</p> <p>Can anyone tell me how to solve this?</p> <p>I tried df2['a'].isin(df1['a']).sum(), but it gives me the combined count of the two numbers together.</p> <p>I want the result like:</p> <pre> No Amount 8 3 1 2 </pre> <p>instead of:</p> <pre> No Amount 8 5 1 5 </pre>
<pre><code>df2.a.value_counts().reindex(df1.a) Out[369]: a 4 11 5 5 Name: a, dtype: int64 </code></pre> <p>Add <code>sum</code></p> <pre><code>df2.a.value_counts().reindex(df1.a).sum() Out[370]: 16 </code></pre>
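<p>And a self-contained run of that idea on small fake data, with a <code>fillna(0)</code> added in case one of df1's numbers never appears in df2:</p>
<pre><code>import numpy as np
import pandas as pd

np.random.seed(0)
df1 = pd.DataFrame({'a': np.random.randint(10, size=2)})
df2 = pd.DataFrame({'a': np.random.randint(10, size=100)})

# per-number counts of df1's values inside df2
counts = df2['a'].value_counts().reindex(df1['a']).fillna(0).astype(int)
print(counts)
print(counts.sum())
</code></pre>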
python|pandas|numpy
2
1,136
47,328,224
Error using data augmentation options in the Object Detection API
<p>I am trying to use the data_augmentation_options in the .config files to train a network, specifically a ssd_mobilenet_v1, but when I activate the option random_adjust_brightness, I get the error message pasted below very quickly (I activate the option after the step 110000).</p> <p>I tried reducing the default value:</p> <pre><code>optional float max_delta=1 [default=0.2]; </code></pre> <p>But the result was the same.</p> <p>Any idea why? The images are RGB from png files (from the <a href="https://hci.iwr.uni-heidelberg.de/node/6132" rel="noreferrer">Bosch Small Traffic Lights Dataset</a>).</p> <pre><code>INFO:tensorflow:global step 110011: loss = 22.7990 (0.357 sec/step) INFO:tensorflow:global step 110012: loss = 47.8811 (0.401 sec/step) 2017-11-16 11:02:29.114785: W tensorflow/core/framework/op_kernel.cc:1192] Invalid argument: LossTensor is inf or nan. : Tensor had NaN values [[Node: CheckNumerics = CheckNumerics[T=DT_FLOAT, message="LossTensor is inf or nan.", _device="/job:localhost/replica:0/task:0/device:CPU:0"](total_loss)]] 2017-11-16 11:02:29.114895: W tensorflow/core/framework/op_kernel.cc:1192] Invalid argument: LossTensor is inf or nan. : Tensor had NaN values [[Node: CheckNumerics = CheckNumerics[T=DT_FLOAT, message="LossTensor is inf or nan.", _device="/job:localhost/replica:0/task:0/device:CPU:0"](total_loss)]] 2017-11-16 11:02:29.114969: W tensorflow/core/framework/op_kernel.cc:1192] Invalid argument: LossTensor is inf or nan. : Tensor had NaN values [[Node: CheckNumerics = CheckNumerics[T=DT_FLOAT, message="LossTensor is inf or nan.", _device="/job:localhost/replica:0/task:0/device:CPU:0"](total_loss)]] 2017-11-16 11:02:29.115043: W tensorflow/core/framework/op_kernel.cc:1192] Invalid argument: LossTensor is inf or nan. : Tensor had NaN values [[Node: CheckNumerics = CheckNumerics[T=DT_FLOAT, message="LossTensor is inf or nan.", _device="/job:localhost/replica:0/task:0/device:CPU:0"](total_loss)]] 2017-11-16 11:02:29.115112: W tensorflow/core/framework/op_kernel.cc:1192] Invalid argument: LossTensor is inf or nan. : Tensor had NaN values ... </code></pre> <p>Edit: The workaround I have found is this. The inf or nan is in the loss, so checking the function in /object_detection/core/preprocessor.py doing the brightness randomization:</p> <pre><code>def random_adjust_brightness(image, max_delta=0.2): """Randomly adjusts brightness. Makes sure the output image is still between 0 and 1. Args: image: rank 3 float32 tensor contains 1 image -&gt; [height, width, channels] with pixel values varying between [0, 1]. max_delta: how much to change the brightness. A value between [0, 1). Returns: image: image which is the same shape as input image. boxes: boxes which is the same shape as input boxes. """ with tf.name_scope('RandomAdjustBrightness', values=[image]): image = tf.image.random_brightness(image, max_delta) image = tf.clip_by_value(image, clip_value_min=0.0, clip_value_max=1.0) return image </code></pre> <p>It is assuming that the image values must be between 0.0 and 1.0. Is it possible that the images are actually arriving with 0 mean and even a different range? In that case, the clipping is corrupting them and leading to the fail. Long story short: I commented out the clipping line and it is working (we will see the results). </p>
<p>Often, getting <code>LossTensor is inf or nan. : Tensor had NaN values</code> is due to an error in the bounding boxes / annotations (Source: <a href="https://github.com/tensorflow/models/issues/1881" rel="nofollow noreferrer">https://github.com/tensorflow/models/issues/1881</a>).</p> <p>I know that the Bosch Small Traffic Light Dataset has some annotations that extend outside of the image dimensions. For example, the height of an image in that dataset is 720 pixels, but some bounding boxes have a height coordinate greater than 720. This is common because whenever the car recording the sequence goes under a traffic light, some of the traffic light is visible, and some of it is cut off.</p> <p>I know this isn't an exact answer to your question, but hopefully it provides insight on a possible reason why you are having the problem. Perhaps removing annotations that extend outside of the image dimensions will help solve the problem; however, I'm dealing with the same problem except I am not using image preprocessing. On the same dataset, I'm encountering the <code>LossTensor is inf or nan. : Tensor had NaN values</code> error every ~8000 steps.</p>
tensorflow|object-detection|object-detection-api
1
1,137
47,145,683
Error reading images with pandas (+pyTorch,scikit)
<p>I'm trying to read images to work with a CNN, but I'm getting a pandas error while trying to load the images. This is some of the code (omitted imports and irrelevant nn class for clarity):</p> <pre><code>file_name = "annotation.csv" image_files = pd.read_csv(file_name) class SimpsonsDataset(Dataset): def __init__(self, csv_file, root_dir, transform=None): self.image_file = pd.read_csv(csv_file) self.root_dir = root_dir self.transform = transform def __len__(self): return len(self.image_file) def __getitem__(self, idx): img_name = os.path.join(self.root_dir, self.image_file.iloc[idx,0][1:]) image = io.imread(img_name) sample = {'image': image} if self.transform: sample = self.transform(sample) return sample simpsons = SimpsonsDataset(csv_file=image_files,root_dir="folder/") </code></pre> <p>I use the <code>iloc[idx,0][1:]</code> to format the filepath, and the filepath is joined, with the folders and filenames matching.</p> <p>However, when I try to run the file, I get the following error:</p> <pre><code> File "C:\ProgramData\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 710, in runfile execfile(filename, namespace) File "C:\ProgramData\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 101, in execfile exec(compile(f.read(), filename, 'exec'), namespace) File "C:/.../image_extractor.py", line 41, in &lt;module&gt; simpsons = SimpsonsDataset(csv_file=image_files,root_dir="folder/") File "C:/.../image_extractor.py", line 26, in __init__ self.image_file = pd.read_csv(csv_file) File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\io\parsers.py", line 655, in parser_f return _read(filepath_or_buffer, kwds) File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\io\parsers.py", line 392, in _read filepath_or_buffer, encoding, compression) File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\io\common.py", line 210, in get_filepath_or_buffer raise ValueError(msg.format(_type=type(filepath_or_buffer))) ValueError: Invalid file path or buffer object type: &lt;class 'pandas.core.frame.DataFrame'&gt; </code></pre> <p>Would love some insights on why this is happening. Thanks!</p>
<p>Your variable <code>image_files</code> is a pandas DataFrame, since it holds the return value of <code>pd.read_csv()</code>, which returns a DataFrame. Try deleting the line</p> <pre><code>image_files = pd.read_csv(file_name) </code></pre> <p>and changing the last line to this:</p> <pre><code>simpsons = SimpsonsDataset(csv_file=file_name, root_dir="folder/") </code></pre>
python|pandas|scikit-image|pytorch
1
1,138
47,518,450
Error tokenizing data with Pandas from a tsv file
<p>I have datasets named <code>train.tsv.7z</code> and <code>test.tsv.7z</code>. I unzipped them on my Mac with the unarchiver (double click), so I now have <code>train.tsv</code> and <code>test.tsv</code>.</p> <p>Then I am reading those files with pandas using</p> <pre><code>PATH='data/projData/' tables = pd.read_table(PATH) </code></pre> <p>But I am getting this error:</p> <pre><code>ParserError: Error tokenizing data. C error: Calling read(nbytes) on source failed. Try engine='python'. </code></pre> <p>Looking into other Stack Overflow threads, it seems the error is due to the file being corrupted, but I am not sure how to solve this issue.</p> <p>I am using a Python 3.6 conda environment.</p>
<p>It doesn't work this way.</p> <p>You have to specify a single file (not a directory):</p> <pre><code>train = pd.read_csv('data/projData/train.tsv', sep='\t') </code></pre>
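<p>Worth noting that <code>read_table</code> already defaults to a tab separator, so the essential change is pointing pandas at a file rather than at the directory (paths below are the ones from the question):</p>
<pre><code>import pandas as pd

PATH = 'data/projData/'
train = pd.read_table(PATH + 'train.tsv')          # read_table defaults to sep='\t'
test = pd.read_csv(PATH + 'test.tsv', sep='\t')    # equivalent spelling
print(train.shape, test.shape)
</code></pre>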
python|pandas
1
1,139
47,212,051
Create new pandas dataframe and add values stored in an array of tuples
<p>I have an array of tuples like:</p> <pre><code>myArr=Ttest_indResult(statistic=array([ -5.27693006, 0., 0., 0.15105006,]), pvalue=array([ 2.31902785e-06, 1.00000000e+00, 1.00000000e+00, 8.80460569e-01,])) </code></pre> <p>I want to add the values <code>myArr[1][0],myArr[1][1],myArr[1][2] and myArr[1][3]</code> to a new data frame:</p> <pre><code>df_tup = pd.DataFrame() df_tup['t']=df_tup['t'].apply(lambda x: myArr[1][x]...not sure) </code></pre> <p>Not sure how to iterate through the array while adding the values to the column t.</p>
<p><code>myArr</code> is not exactly a <code>tuple</code> but rather a <a href="https://docs.python.org/3/library/collections.html#collections.namedtuple" rel="nofollow noreferrer"><code>namedtuple</code></a> (see <a href="https://github.com/scipy/scipy/blob/80c2d3be7064a71906fef937e633af57921ec996/scipy/stats/mstats_basic.py#L852" rel="nofollow noreferrer"><code>scipy.stats</code> source code</a>), which allow to access its fields like arguments, and the names of these fields. </p> <p>As in your case it encapsulates two <code>numpy.array</code> (see <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ttest_ind.html#scipy.stats.ttest_ind" rel="nofollow noreferrer"><code>scipy.stats.ttest_ind</code> doc</a>) you can use it directly as values for your new columns.</p> <pre><code>pd.DataFrame(columns=myArr._fields, data=np.array(myArr).T) Out[]: statistic pvalue 0 -5.27693 0.000002 1 0.00000 1.000000 2 0.00000 1.000000 3 0.15105 0.880461 </code></pre> <p><em>inspired from <a href="https://stackoverflow.com/a/20015080/6914989">this answer</a></em></p>
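<p>A quick end-to-end check of that pattern with synthetic data, in case it helps (the result object unpacks like a tuple and exposes <code>_fields</code> in the SciPy versions I'm aware of):</p>
<pre><code>import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, size=(50, 4))
b = rng.normal(0.5, 1.0, size=(50, 4))

res = stats.ttest_ind(a, b)                 # statistic and pvalue arrays, one per column
df_tup = pd.DataFrame(columns=res._fields, data=np.array(res).T)
print(df_tup)
</code></pre>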
python|pandas
1
1,140
68,358,634
Referencing chunked Dataframe - Pandas
<p>I have broken a large dataframe into small chunks. I am now trying to pass the data from these chunks into a loop but I am not sure how to call each of these chunked dataframes.</p> <p>I have broken the Dataframe into 4 chunks as shown below. But I am not sure how to call each of these chunked Dataframe and pass them in a loop</p> <pre><code>n = 4 chunks = [df[i:i+n] for i in range(0,df.shape[0],n)] </code></pre>
<ul> <li>you describe a <strong>bigdf</strong> that is <em>chunked</em> into a list of data frames; I have simulated this below</li> <li>if you want a <strong>list comprehension</strong> over the rows of a subset of that list of data frames, it's simply a case of looping over both the list and each chunk's rows</li> </ul> <pre><code>import pandas as pd import numpy as np bigdf = pd.DataFrame({&quot;chunk&quot;:np.random.choice(range(10),1000),&quot;value&quot;:np.random.uniform(2,3,1000)}) # list of dataframes that are chunks from bigdf df = [bigdf.loc[bigdf[&quot;chunk&quot;].eq(i)] for i in range(10)] n = 4 # list comprehension of required chunks and array of each row in chunk [v for chunk in range(n) for v in df[chunk].values ] </code></pre>
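<p>And if a plain loop is all that's needed (each chunk is an ordinary DataFrame, so it can be passed around like any other), a minimal sketch using the chunking expression from the question:</p>
<pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame({'value': np.arange(10)})

n = 4   # rows per chunk, as in the question
chunks = [df[i:i + n] for i in range(0, df.shape[0], n)]

for k, chunk in enumerate(chunks):
    print(f'chunk {k}: {len(chunk)} rows, sum={chunk["value"].sum()}')
</code></pre>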
pandas
0
1,141
68,212,970
Python Dataframe fill nan from multiple columns
<p>I have a data frame with 3 columns. I want to fill <code>nan</code> in the first column with the second column. If there is also <code>nan</code> in the second, go to the third column.</p> <p>My code:</p> <pre><code>xdf = pd.DataFrame({'A':[10,20,np.nan,np.nan],'B':[15,np.nan,30,np.nan],'C':[np.nan,np.nan,35,40]}) # fill nan in A xdf['A'].fillna(xdf[['B','C']],inplace=True) </code></pre> <p>Present output:</p> <pre><code>TypeError: &quot;value&quot; parameter must be a scalar, dict or Series, but you passed a &quot;DataFrame&quot; </code></pre> <p>Expected output:</p> <pre><code>xdf = A B C 0 10.0 15.0 NaN 1 20.0 NaN NaN 2 30.0 30.0 35.0 3 40.0 NaN 40.0 </code></pre>
<p>Try via <code>bfill()</code>:</p> <pre><code>xdf['A']=xdf.bfill(1)['A'] </code></pre> <p>Output of <code>df</code>:</p> <pre><code> A B C 0 10.0 15.0 NaN 1 20.0 NaN NaN 2 30.0 30.0 35.0 3 40.0 NaN 40.0 </code></pre> <p><strong>Update:</strong></p> <p>If there are additional columns (like D, E) that should not be used for the fill, select the subset of the dataframe and backward fill along axis 1:</p> <pre><code>xdf['A']=xdf[['A','B','C']].bfill(1)['A'] </code></pre>
python|pandas|dataframe|numpy|fillna
3
1,142
68,231,398
Create and Fill Duplicate Dataframe Values with Lists
<p>I am trying to find a way to arrange a dataframe given a path that leads to several numbers that are read from files in that path.</p> <p>I have a dataframe that has a JobID and the path associated with that job that looks as follows:</p> <pre><code>JobID Path 43402866 /global/162940/mill/scanfiles0001 43408681 /global/162940/mill/scanfiles0002 </code></pre> <p>When I change directories to the paths from the job I read values and get results that look something like the following lists:</p> <pre><code>/global/162940/mill/scanfiles0001 [256, 10.0, 2605, 100, 86000] [256, 20.0, 5210, 100, 86000] [256, 40.0, 10421, 100, 86000] [256, 50.0, 13026, 100, 86000] /global/162940/mill/scanfiles0002 [256, 60.0, 15631, 100, 86000] [256, 80.0, 20841, 100, 86000] [256, 120.0, 31262, 100, 86000] </code></pre> <p>I wanted to ask if there's a way to get the data in the following form</p> <pre><code>JobID Path A B C D E 43402866 /global/...1 256 10.0 2605 100 86000 43402866 NaN 256 20.0 5210 100 86000 43402866 NaN 256 40.0 10421 100 86000 43402866 NaN 256 50.0 13026 100 86000 43408681 /global/...2 256 60.0 15631 100 86000 43408681 NaN 256 80.0 20841 100 86000 43408681 NaN 256 120.0 31262 100 86000 </code></pre> <p>Where jobIDs are duplicated for all the different lists and repeated paths are omitted or turned into NaNs.</p>
<p>You could try like this:</p> <pre class="lang-py prettyprint-override"><code># Initial dataframe df = pd.DataFrame( { &quot;JobID&quot;: [&quot;43402866&quot;, &quot;43408681&quot;], &quot;Path&quot;: [ &quot;/global/162940/mill/scanfiles0001&quot;, &quot;/global/162940/mill/scanfiles0002&quot;, ], } ) # Initialize Target dataframe new_df = pd.DataFrame( {&quot;JobID&quot;: [], &quot;Path&quot;: [], &quot;A&quot;: [], &quot;B&quot;: [], &quot;C&quot;: [], &quot;D&quot;: [], &quot;E&quot;: []} ) # Fake data for the purpose of the demonstration scanfiles0001 = [ [256, 10.0, 2605, 100, 86000], [256, 20.0, 5210, 100, 86000], [256, 40.0, 10421, 100, 86000], [256, 50.0, 13026, 100, 86000], ] # Fake data for the purpose of the demonstration scanfiles0002 = [ [256, 60.0, 15631, 100, 86000], [256, 80.0, 20841, 100, 86000], [256, 120.0, 31262, 100, 86000], ] # Fake data for the purpose of the demonstration scanfiles = { &quot;/global/162940/mill/scanfiles0001&quot;: scanfiles0001, &quot;/global/162940/mill/scanfiles0002&quot;: scanfiles0002, } start = 0 for _, row in df.iterrows(): for line in scanfiles[row[&quot;Path&quot;]]: new_df.loc[len(new_df), [&quot;A&quot;, &quot;B&quot;, &quot;C&quot;, &quot;D&quot;, &quot;E&quot;]] = line new_df[&quot;JobID&quot;].fillna(row[&quot;JobID&quot;], inplace=True) new_df.loc[start, &quot;Path&quot;] = row[&quot;Path&quot;] start += len(new_df) print(new_df) # Outputs JobID Path A B C D E 0 43402866 /global/162940/mill/scanfiles0001 256.0 10.0 2605.0 100.0 86000.0 1 43402866 NaN 256.0 20.0 5210.0 100.0 86000.0 2 43402866 NaN 256.0 40.0 10421.0 100.0 86000.0 3 43402866 NaN 256.0 50.0 13026.0 100.0 86000.0 4 43408681 /global/162940/mill/scanfiles0002 256.0 60.0 15631.0 100.0 86000.0 5 43408681 NaN 256.0 80.0 20841.0 100.0 86000.0 6 43408681 NaN 256.0 120.0 31262.0 100.0 86000.0 </code></pre>
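<p>If you are on pandas 0.25 or newer, <code>explode</code> can do the row expansion without the manual loop. A sketch of the same transformation, reusing the same fake <code>scanfiles</code> idea keyed by path:</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'JobID': ['43402866', '43408681'],
                   'Path': ['/global/162940/mill/scanfiles0001',
                            '/global/162940/mill/scanfiles0002']})

# fake data again; in reality these rows come from reading the files in each path
scanfiles = {'/global/162940/mill/scanfiles0001': [[256, 10.0, 2605, 100, 86000],
                                                   [256, 20.0, 5210, 100, 86000]],
             '/global/162940/mill/scanfiles0002': [[256, 60.0, 15631, 100, 86000]]}

rows = (df.assign(vals=df['Path'].map(scanfiles))
          .explode('vals')
          .reset_index(drop=True))
vals = pd.DataFrame(rows.pop('vals').tolist(), columns=list('ABCDE'))
rows = pd.concat([rows, vals], axis=1)

# blank out repeated paths within each JobID, as in the desired output
rows.loc[rows.duplicated('JobID'), 'Path'] = None
print(rows)
</code></pre>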
python|pandas|dataframe
0
1,143
68,127,203
compare the strings of two rows with the same id and get the unique string values in pandas
<p>Input:</p> <pre><code>df1 = pd.DataFrame([[101, 'DC1', ' AHT - QA + + AHT - Required Disclosures + payment'], [101, 'EM5', ' AHT - QA + AHT - Required Disclosures + + Off + STAR.ist'], [102, 'RA6', '+ AHT - QA + Recap Warning - Yes +'], [103, 'DC1', 'Greeting + NavigateToView +'], [103, 'RA6', 'PaymentSelection + Button Eligible +'], [104, 'PA4', 'Legal task + advice'], [104, 'DC1', 'Legal task + advice'] ] , columns=['Call_id', 'Agent_id', 'Task_done']) </code></pre> <p>Expected output:</p> <pre><code>output = pd.DataFrame([[101, 'DC1', ' AHT - QA + + AHT - Required Disclosures + payment','payment'], [101, 'EM5', ' AHT - QA + AHT - Required Disclosures + + Off + STAR.ist',' Off + STAR.ist'], [102, 'RA6', '+ AHT - QA + Recap Warning - Yes +','+ AHT - QA + Recap Warning - Yes +'], [103, 'DC1', 'Greeting + NavigateToView +','Greeting + NavigateToView +'], [103, 'RA6', 'PaymentSelection + Button Eligible +','PaymentSelection + Button Eligible +'], [104, 'PA4', 'Legal task + advice','same task'], [104, 'DC1', 'Legal task + advice','same task'] ] , columns=['Call_id', 'Agent_id', 'Task_done','unique_task_done']) </code></pre> <p>I have merged the multiple task_done for agent_id from different table with delimiter '+' and Now, I want to compare the Task_done for same call_id with different agent_id and get the unique string in another column Python 3.6 above.</p>
<p>You can try:</p> <pre><code>df1['unique_task_done']=df1['Task_done'].mask(df1['Task_done'].duplicated(keep=False),'same task') mask=df1['unique_task_done'].str.count(' + ') </code></pre> <p>Finally:</p> <pre><code>df1.loc[mask.ge(2),'unique_task_done']=df1.loc[mask.ge(2),'unique_task_done'].str.split('+').str[mask.max():].str.join('+') </code></pre> <p>Now If you print <code>df1</code> you will get your output</p>
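<p>Since the '+'-joined strings are really sets of task names, another way to approach it (not what the snippet above does, just an alternative sketch on a few of the sample rows) is to split each Task_done into a set and subtract the tasks done by the other agents on the same call; if nothing is left, the rows did the same tasks:</p>
<pre><code>import pandas as pd

df1 = pd.DataFrame({'Call_id': [101, 101, 104, 104],
                    'Agent_id': ['DC1', 'EM5', 'PA4', 'DC1'],
                    'Task_done': [' AHT - QA + + AHT - Required Disclosures + payment',
                                  ' AHT - QA + AHT - Required Disclosures + + Off + STAR.ist',
                                  'Legal task + advice',
                                  'Legal task + advice']})

def tasks(s):
    return {t.strip() for t in s.split('+') if t.strip()}

def unique_part(row, frame):
    others = frame[(frame.Call_id == row.Call_id) &amp; (frame.Agent_id != row.Agent_id)]
    if others.empty:
        return row.Task_done                      # nothing to compare against
    other_tasks = set().union(*others.Task_done.map(tasks))
    diff = tasks(row.Task_done) - other_tasks
    return ' + '.join(sorted(diff)) if diff else 'same task'

df1['unique_task_done'] = df1.apply(unique_part, axis=1, frame=df1)
print(df1)
</code></pre>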
python-3.x|pandas
0
1,144
59,133,420
Concatenated data from pandas_datareader
<p>I am trying to create a dataframe whose columns come from 2 different dataframes.</p> <pre><code>import pandas as pd import numpy as np from statsmodels import api as sm import pandas_datareader.data as web import datetime start = datetime.datetime(2016,12,2) end = datetime.datetime.today() df = web.get_data_yahoo(['F'], start, end) df1 = web.get_data_yahoo(['^GSPC'], start, end) df3 = pd.concat([df['Adj Close'], df1['Adj Close']]) </code></pre> <p>With this I wanted to get <code>df3</code> with 2 columns containing the Adj Close data. What I got instead is:</p> <pre><code> F ^GSPC Date 2016-12-01 10.297861 NaN 2016-12-02 10.140451 NaN 2016-12-05 10.306145 NaN 2016-12-06 10.405562 NaN 2016-12-07 10.819797 NaN ... ... ... 2019-11-22 NaN 3110.290039 2019-11-25 NaN 3133.639893 2019-11-26 NaN 3140.520020 2019-11-27 NaN 3153.629883 2019-11-29 NaN 3140.979980 1508 rows × 2 columns </code></pre> <p>What do I need to do to get rid of the NaN values, and why are they there?</p>
<p>Add the parameter <code>axis=1</code> to concatenate by columns in <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.concat.html" rel="nofollow noreferrer"><code>concat</code></a>:</p> <pre><code>df3 = pd.concat([df['Adj Close'], df1['Adj Close']], axis=1) </code></pre> <p>But I think your solution could be simplified by passing a list to <code>get_data_yahoo</code>:</p> <pre><code>df3 = web.get_data_yahoo(['F', '^GSPC'], start, end) </code></pre>
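<p>To see why the NaNs appeared in the first place: the default <code>axis=0</code> stacks the two frames on top of each other, so each ticker's column is missing for the other's rows, while <code>axis=1</code> aligns them on the shared DatetimeIndex. A tiny illustration with made-up numbers:</p>
<pre><code>import pandas as pd

idx = pd.date_range('2016-12-01', periods=3)
f = pd.DataFrame({'F': [10.30, 10.14, 10.31]}, index=idx)
gspc = pd.DataFrame({'^GSPC': [2191.95, 2204.71, 2212.23]}, index=idx)

print(pd.concat([f, gspc]))           # stacked: NaN wherever a column is missing
print(pd.concat([f, gspc], axis=1))   # aligned side by side on the dates
</code></pre>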
python-3.x|pandas|pandas-datareader
1
1,145
56,871,477
Pandas creating and filling new columns based on other columns
<p>I have a pandas dataframe with Time and values columns. I am trying to create two new columns 'START_TIME" and 'END_TIME'. It is medication related data and it is stored poorly in the database so I am trying to transform the table. In this case, the medication for a patient started at 2018-11-07 23:59:32 with a dose value of 80.o so, I want to capture that as the start time of the medication and end time is the first zero after the last value. That would be one round of medication. Whenever a new value starts it is considered as the second round of medication and I'd like to capture the start time and end time in the following way as explained earlier. </p> <pre><code>Time Values 2018-11-07 23:59:32 80.0 2018-11-08 04:35:09 80.0 2018-11-08 05:31:24 40.0 2018-11-24 18:29:30 0.0 2018-11-24 18:33:14 0.0 2018-11-26 17:39:31 20.0 2018-11-26 18:51:07 20.0 2018-11-26 21:04:35 0.0 2018-11-26 21:05:20 0.0 2018-11-26 21:13:44 0.0 2018-11-26 21:25:57 0.0 2018-11-29 02:19:57 7.0 2018-12-09 16:02:06 5.0 2018-12-09 16:33:03 2.5 2018-12-09 21:02:10 0.0 </code></pre> <p>I believe it cannot be done with a simple for and if loop as I started with a simple step and it failed</p> <pre><code>for i in df['Values']: if i+1 != 0: df['START_TIME'] = df['TIME'].copy() </code></pre> <p>Expected DataFrame:</p> <pre><code>Time Values START_TIME END_TIME 2018-11-07 23:59:32 80.0 2018-11-07 23:59:32 2018-11-08 04:35:09 80.0 2018-11-08 05:31:24 40.0 2018-11-24 18:29:30 0.0 2018-11-24 18:29:30 2018-11-24 18:33:14 0.0 2018-11-26 17:39:31 20.0 2018-11-26 17:39:31 2018-11-26 18:51:07 20.0 2018-11-26 21:04:35 0.0 2018-11-26 21:04:35 2018-11-26 21:05:20 0.0 2018-11-26 21:13:44 0.0 2018-11-26 21:25:57 0.0 2018-11-29 02:19:57 7.0 2018-11-29 02:19:57 2018-12-09 16:02:06 5.0 2018-12-09 16:33:03 2.5 2018-12-09 21:02:10 0.0 2018-12-09 21:02:10 </code></pre> <p>I'd really appreciate if I can get some help.</p>
<p>IIUC create the condition by using <code>diff</code>, then the value equal to -1 and 1 will be the end and start point </p> <pre><code>s=df.Values.eq(0).astype(int).diff().fillna(-1) df.loc[s==-1,'START_TIME']=df.Time df.loc[s==1,'END_TIME']=df.Time df Out[334]: Time Values START_TIME END_TIME 0 2018-11-07 23:59:32 80.0 2018-11-07 23:59:32 NaT 1 2018-11-08 04:35:09 80.0 NaT NaT 2 2018-11-08 05:31:24 40.0 NaT NaT 3 2018-11-24 18:29:30 0.0 NaT 2018-11-24 18:29:30 4 2018-11-24 18:33:14 0.0 NaT NaT 5 2018-11-26 17:39:31 20.0 2018-11-26 17:39:31 NaT 6 2018-11-26 18:51:07 20.0 NaT NaT 7 2018-11-26 21:04:35 0.0 NaT 2018-11-26 21:04:35 8 2018-11-26 21:05:20 0.0 NaT NaT 9 2018-11-26 21:13:44 0.0 NaT NaT 10 2018-11-26 21:25:57 0.0 NaT NaT 11 2018-11-29 02:19:57 7.0 2018-11-29 02:19:57 NaT 12 2018-12-09 16:02:06 5.0 NaT NaT 13 2018-12-09 16:33:03 2.5 NaT NaT 14 2018-12-09 21:02:10 0.0 NaT 2018-12-09 21:02:10 </code></pre>
python-3.x|pandas|dataframe
2
1,146
57,237,671
Tensorflow & Keras can't load .ckpt save
<p>So I am using the ModelCheckpoint callback to save the best epoch of a model I am training. It saves with no errors, but when I try to load it, I get the error:</p> <pre><code>2019-07-27 22:58:04.713951: W tensorflow/core/util/tensor_slice_reader.cc:95] Could not open C:\Users\Riley\PycharmProjects\myNN\cp.ckpt: Data loss: not an sstable (bad magic number): perhaps your file is in a different file format and you need to use a different restore operator? </code></pre> <p>I have tried using the absolute/full path, but no luck. I'm sure I could use EarlyStopping, but I'd still like to understand why I am getting the error. Here is my code:</p> <pre class="lang-java prettyprint-override"><code>from __future__ import absolute_import, division, print_function import tensorflow as tf from tensorflow import keras import numpy as np import matplotlib.pyplot as plt import datetime import statistics (train_images, train_labels), (test_images, test_labels) = np.load("dataset.npy", allow_pickle=True) train_images = train_images / 255 test_images = test_images / 255 train_labels = list(map(float, train_labels)) test_labels = list(map(float, test_labels)) train_labels = [i/10 for i in train_labels] test_labels = [i/10 for i in test_labels] ''' model = keras.Sequential([ keras.layers.Flatten(input_shape=(128, 128)), keras.layers.Dense(64, activation=tf.nn.relu), keras.layers.Dense(1) ]) ''' start_time = datetime.datetime.now() model = keras.Sequential([ keras.layers.Conv2D(32, kernel_size=(5, 5), strides=(1, 1), activation='relu', input_shape=(128, 128, 1)), keras.layers.MaxPooling2D(pool_size=(2, 2), strides=(2, 2)), keras.layers.Dropout(0.2), keras.layers.Conv2D(64, (5, 5), activation='relu'), keras.layers.MaxPooling2D(pool_size=(2, 2)), keras.layers.Dropout(0.2), keras.layers.Flatten(), keras.layers.Dropout(0.5), keras.layers.Dense(1000, activation='relu'), keras.layers.Dense(1) ]) model.compile(loss='mean_absolute_error', optimizer=keras.optimizers.SGD(lr=0.01), metrics=['mean_absolute_error', 'mean_squared_error']) train_images = train_images.reshape(328, 128, 128, 1) test_images = test_images.reshape(82, 128, 128, 1) model.fit(train_images, train_labels, epochs=100, callbacks=[keras.callbacks.ModelCheckpoint("cp.ckpt", monitor='mean_absolute_error', save_best_only=True, verbose=1)]) model.load_weights("cp.ckpt") predictions = model.predict(test_images) totalDifference = 0 for i in range(82): print("%s: %s" % (test_labels[i] * 10, predictions[i] * 10)) totalDifference += abs(test_labels[i] - predictions[i]) avgDifference = totalDifference / 8.2 print("\n%s\n" % avgDifference) print("Time Elapsed:") print(datetime.datetime.now() - start_time) </code></pre>
<p>TLDR; you are saving whole model, while trying to load only weights, that's not how it works.</p> <h2>Explanation</h2> <p>Your model's <code>fit</code>:</p> <pre><code>model.fit( train_images, train_labels, epochs=100, callbacks=[ keras.callbacks.ModelCheckpoint( "cp.ckpt", monitor="mean_absolute_error", save_best_only=True, verbose=1 ) ], ) </code></pre> <p>As <code>save_weights=False</code> by default in <code>ModelCheckpoint</code>, you are saving whole model to <code>.ckpt</code>.</p> <p>BTW. File should be named <code>.hdf5</code> or <code>.hf5</code> as it's <a href="https://en.wikipedia.org/wiki/Hierarchical_Data_Format" rel="nofollow noreferrer"><code>Hierarchical Data Format 5</code></a>. As Windows is not extension-agnostic you may run into some problems if <code>tensorflow</code> / <code>keras</code> relies on extension on this OS.</p> <p>On the other hand you are loading the model's weights only, while the file contains <strong>whole model</strong>:</p> <pre><code>model.load_weights("cp.ckpt") </code></pre> <p>Tensorflow's checkpointing (<code>.cp</code>) mechanism is different from Keras's (<code>.hdf5</code>), so watch out for that (there are plans to integrate them more closely, see <a href="https://www.tensorflow.org/beta/guide/checkpoints" rel="nofollow noreferrer">here</a> and <a href="https://www.tensorflow.org/beta/guide/saved_model" rel="nofollow noreferrer">here</a>).</p> <h2>Solution</h2> <p>So, either use the callback as you currently do, <strong>BUT</strong> use <code>model.load("model.hdf5")</code> or add <code>save_weights_only=True</code> argument to <code>ModelCheckpoint</code>:</p> <pre><code>model.fit( train_images, train_labels, epochs=100, callbacks=[ keras.callbacks.ModelCheckpoint( "weights.hdf5", monitor="mean_absolute_error", save_best_only=True, verbose=1, save_weights_only=True, # Specify this ) ], ) </code></pre> <p>and you can use your <code>model.load_weights("weights.hdf5")</code>.</p>
python|tensorflow|machine-learning|keras|computer-vision
6
1,147
66,755,165
RuntimeError: shape ‘[4, 98304]’ is invalid for input of size 113216
<p>I am learning to train a basic nn model for image classification, the error happened when I was trying to feed in image data into the model. I understand that I should input correct size of image data. My image data is 128*256 with 3 channels,4 classes, and the batch size is 4. What I don't understand is where does the size 113216 come from? I checked all related parameters or image meta data, but didn't find a clue. Here is my code:</p> <pre><code>class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.conv1 = nn.Conv2d(3, 6, 5) self.pool = nn.MaxPool2d(2, 2) self.conv2 = nn.Conv2d(6, 16, 5) self.fc1 = nn.Linear(3*128*256, 120) self.fc2 = nn.Linear(120, 84) self.fc3 = nn.Linear(84, 10) def forward(self, x): x = self.pool(F.relu(self.conv1(x))) x = self.pool(F.relu(self.conv2(x))) x = x.view(4, 3*128*256) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = self.fc3(x) return x net = Net() for epoch in range(2): # loop over the dataset multiple times print('round start') running_loss = 0.0 for i, data in enumerate(trainloader, 0): # get the inputs; data is a list of [inputs, labels] inputs, labels = data # zero the parameter gradients optimizer.zero_grad() # forward + backward + optimize print(inputs.shape) outputs = net(inputs) loss = criterion(outputs, labels) loss.backward() optimizer.step() # print statistics running_loss += loss.item() if i % 2000 == 1999: # print every 2000 mini-batches print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / 2000)) running_loss = 0.0 print('Finished Training') </code></pre> <p>Thanks for your help!</p>
<h2>Shapes</h2> <ul> <li><code>Conv2d</code> changes width and height of image without <code>padding</code>. Rule of thumb (if you want to keep the same image size with <code>stride=1</code> (default)): <code>padding = kernel_size // 2</code></li> <li>You are changing number of channels, while your <code>linear</code> layer has <code>3</code> for some reason?</li> <li><strong>Use <code>print(x.shape)</code> after each step if you want to know how your tensor data is transformed!</strong></li> </ul> <h2>Commented code</h2> <p>Fixed code with comments about shapes after each step:</p> <pre><code>class Net(torch.nn.Module): def __init__(self): super(Net, self).__init__() self.conv1 = torch.nn.Conv2d(3, 6, 5) self.pool = torch.nn.MaxPool2d(2, 2) self.conv2 = torch.nn.Conv2d(6, 16, 5) # Output shape from convolution is input shape to fc self.fc1 = torch.nn.Linear(16 * 29 * 61, 120) self.fc2 = torch.nn.Linear(120, 84) self.fc3 = torch.nn.Linear(84, 10) def forward(self, x): # In: (4, 3, 128, 256) x = F.relu(self.conv1(x)) # (4, 3, 124, 252) because kernel_size=5 takes 2 pixels x = self.pool(x) # (4, 6, 62, 126) # Because pooling halving the size x = F.relu(self.conv2(x)) # (4, 16, 58, 122) # Same reason as above x = self.pool(x) # (4, 16, 29, 61) Because pooling halving the size # Better use torch.flatten(x, dim=1) so you don't have to input size here x = x.view(-1, 16 * 29 * 61) # Use -1 to be batch size independent x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = self.fc3(x) return x </code></pre> <h2>Other things that might help</h2> <ul> <li>Try <code>torch.nn.AdaptiveMaxPool2d(1)</code> before ReLU, it will make your network width and height independent</li> <li>Use <code>flatten</code> (or <code>torch.nn.Flatten()</code> layer) after this pooling</li> <li>If so, pass <code>num_channels</code> set in last convolution as <code>in_features</code> for <code>nn.Linear</code></li> </ul>
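<p>Building on the last set of suggestions above, here is a minimal, untested sketch of the width/height-independent variant (the class name is illustrative; the only shape the classifier still needs to know is the 16 output channels of the last convolution):</p> <pre><code>import torch
import torch.nn.functional as F

class NetAdaptive(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = torch.nn.Conv2d(3, 6, 5)
        self.conv2 = torch.nn.Conv2d(6, 16, 5)
        self.pool = torch.nn.MaxPool2d(2, 2)
        self.global_pool = torch.nn.AdaptiveMaxPool2d(1)  # gives (N, 16, 1, 1) for any input size
        self.fc = torch.nn.Linear(16, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = self.global_pool(x)      # collapse the spatial dimensions
        x = torch.flatten(x, 1)      # (N, 16), batch-size independent
        return self.fc(x)
</code></pre>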
image|pytorch|conv-neural-network
1
1,148
66,655,525
Error in merging pandas data frame columns
<p>I'm trying to merge three columns from the same data frame into one.</p> <p>Here my data frame <code>selected_vals</code></p> <pre><code> label_1 label_2 label_3 0 NaN NaN NaN 1 ('__label__Religione_e_Magia',) NaN NaN 2 NaN ('__label__Storia',) NaN 3 NaN ('__label__Storia',) NaN 4 ('__label__Religione_e_Magia',) NaN NaN </code></pre> <p>The dataframe has only one value per row so, in the col where the value it's not specified I'm having <code>NaN</code> Following the solution proposed <a href="https://stackoverflow.com/questions/27291870/merging-several-columns-in-one-new-column-in-the-same-pandas-dataframe">here</a> I used this code:</p> <pre><code>selected_vals['selected_vals'] = selected_vals.loc[:,selected_vals.columns.tolist()[1:]].apply(lambda x: x.dropna().tolist(), 1) </code></pre> <p>However, by doing so, only the values from the col <code>label_2</code> are in the col <code>selected_vals</code></p> <p>Here the ouput</p> <pre><code> label_1 label_2 label_3 selected_vals 0 NaN NaN NaN [] 1 ('__label__Religione_e_Magia',) NaN NaN [] 2 NaN ('__label__Storia',) NaN ('__label__Storia',) 3 NaN ('__label__Storia',) NaN ('__label__Storia',) 4 ('__label__Religione_e_Magia',) NaN </code></pre> <p>As desired output I would like to have all the values stored in the same col i.e</p> <pre><code> selected_vals 0 NaN 1 ('__label__Religione_e_Magia',) 2 ('__label__Storia',) 3 ('__label__Storia',) 4 ('__label__Religione_e_Magia',) </code></pre> <p>Suggestions about how to deal with this problem?</p> <p>Thanks</p>
<p>You can apply a function to each row and keep only the desired value (the one from the column that is not NaN). Note that this assumes exactly one non-null label per row; a row where every label is missing would make <code>.item()</code> fail.</p> <pre class="lang-py prettyprint-override"><code>selected_vals['selected_vals'] = selected_vals.apply(lambda row: row[row[pd.notnull(row)].index.item()], axis=1)
</code></pre>
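<p>If some rows can have all three labels missing (like row 0 in your example), a sketch of an alternative that keeps those rows as NaN instead of raising, assuming the label columns are named as shown:</p> <pre><code>label_cols = ['label_1', 'label_2', 'label_3']
selected_vals['selected_vals'] = selected_vals[label_cols].bfill(axis=1).iloc[:, 0]
</code></pre>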
python|pandas|dataframe
0
1,149
66,402,155
Merge two DataFrames with overlapping MultiIndex columns
<p>I'm trying to find a simple way to merge two MultiIndex dataframes together in a way that adds new columns and merges existing. For example if I had two data frames</p> <pre><code>d1_columns = pd.MultiIndex.from_product([[&quot;A&quot;, &quot;B&quot;,], [&quot;1&quot;, &quot;2&quot;]]) d1_index = pd.date_range(&quot;2020-01-01&quot;, &quot;2020-01-5&quot;, freq=&quot;D&quot;) d1 = pd.DataFrame(random.rand(5, 4), columns=d1_columns, index=d1_index) print(d1) A B 1 2 1 2 2020-01-01 0.381909 0.487480 0.389250 0.853449 2020-01-02 0.752374 0.508806 0.491892 0.786918 2020-01-03 0.019655 0.537763 0.263242 0.378302 2020-01-04 0.460276 0.227113 0.423472 0.514639 2020-01-05 0.046673 0.864282 0.223340 0.929049 d2_columns = pd.MultiIndex.from_product([[&quot;B&quot;, &quot;C&quot;], [&quot;1&quot;, &quot;2&quot;]]) d2_index = pd.date_range(&quot;2020-01-03&quot;, &quot;2020-01-7&quot;, freq=&quot;D&quot;) d2 = pd.DataFrame(random.rand(5, 4), columns=d2_columns, index=d2_index) print(d2) B C 1 2 1 2 2020-01-03 0.495979 0.888207 0.776861 0.531693 2020-01-04 0.408030 0.545351 0.452913 0.768284 2020-01-05 0.374996 0.593571 0.925979 0.398629 2020-01-06 0.085565 0.845354 0.792325 0.501057 2020-01-07 0.780985 0.390948 0.731769 0.488155 </code></pre> <p>If I want to merge them I get the overlapping columns seperated, while the new columns work fine:</p> <pre><code>df = d1.merge(d2, left_index=True, right_index=True, how=&quot;outer&quot;) print(df) A B_x B_y \ 1 2 1 2 1 2 2020-01-01 0.381909 0.487480 0.389250 0.853449 NaN NaN 2020-01-02 0.752374 0.508806 0.491892 0.786918 NaN NaN 2020-01-03 0.019655 0.537763 0.263242 0.378302 0.495979 0.888207 2020-01-04 0.460276 0.227113 0.423472 0.514639 0.408030 0.545351 2020-01-05 0.046673 0.864282 0.223340 0.929049 0.374996 0.593571 2020-01-06 NaN NaN NaN NaN 0.085565 0.845354 2020-01-07 NaN NaN NaN NaN 0.780985 0.390948 C 1 2 2020-01-01 NaN NaN 2020-01-02 NaN NaN 2020-01-03 0.776861 0.531693 2020-01-04 0.452913 0.768284 2020-01-05 0.925979 0.398629 2020-01-06 0.792325 0.501057 2020-01-07 0.731769 0.488155 </code></pre> <p>Is there a simple way to have the overlapping columns merge so that new data is added to the existing columns (it doesn't matter if it overwrites previous data), so the output looks like this?</p> <pre><code> A B C 1 2 1 2 1 2 2020-01-01 0.633182 0.335651 0.072520 0.578472 NaN NaN 2020-01-02 0.785482 0.562421 0.658556 0.557171 NaN NaN 2020-01-03 0.755049 0.575611 0.592934 0.735094 0.647117 0.306296 2020-01-04 0.035943 0.792211 0.002617 0.159366 0.320691 0.825184 2020-01-05 0.932623 0.643129 0.778002 0.581527 0.718405 0.289289 2020-01-06 NaN NaN 0.085565 0.845354 0.012412 0.960234 2020-01-07 NaN NaN 0.780985 0.390948 0.444406 0.210821 </code></pre> <p>Thanks</p>
<p>It seems you want</p> <pre><code>df = d1.combine_first(d2) </code></pre> <p>or</p> <pre><code>df = d2.combine_first(d1) </code></pre> <p>depending on which frame's values shall be preferred.</p>
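<p>For reference, <code>combine_first</code> aligns on both index and columns and fills the missing entries of the calling frame with values from the other one, so the overlapping <code>B</code> columns get merged instead of suffixed. A tiny deterministic sketch of the behaviour:</p> <pre><code>import pandas as pd
import numpy as np

a = pd.DataFrame({'B': [1.0, np.nan]}, index=[0, 1])
b = pd.DataFrame({'B': [np.nan, 2.0], 'C': [3.0, 4.0]}, index=[1, 2])

print(a.combine_first(b))
#      B    C
# 0  1.0  NaN
# 1  NaN  3.0
# 2  2.0  4.0
</code></pre>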
pandas|merge|multi-index
1
1,150
66,517,341
Pandas count, sum, average specific range/ value for each row
<p>i have big data, i want to count, sum, average for each row only between specific range.</p> <pre><code>df = pd.DataFrame({'id0':[10.3,20,30,50,108,110],'id1':[100.5,0,300,570,400,140], 'id2':[-2.6,-3,5,12,44,53], 'id3':[-100.1,4,6,22,12,42]}) </code></pre> <blockquote> <pre><code> id0 id1 id2 id3 0 10.3 100.5 -2.6 -100.1 1 20.0 0.0 -3.0 4.0 2 30.0 300.0 5.0 6.0 3 50.0 570.0 12.0 22.0 4 108.0 400.0 44.0 12.0 5 110.0 140.0 53.0 42.0 </code></pre> </blockquote> <p>for example i want to count the occurrence of value between 10-100 for each row, so it will get:</p> <pre><code>0 1 1 1 2 1 3 3 4 2 5 2 Name: count_10-100, dtype: int64 </code></pre> <p>currently i get this done by iterate for each row, transverse and using groupby. But this take a time because i have ~500 column and 500000 row</p>
<p>You can apply the conditions with AND between them, and then <code>sum</code> along the row (axis 1):</p> <pre><code>((df &gt;= 10) &amp; (df &lt;= 100)).sum(axis=1) </code></pre> <p>Output:</p> <pre><code>0 1 1 1 2 1 3 3 4 2 5 2 dtype: int64 </code></pre> <hr /> <p>For sum and mean, you can apply the conditions with <code>where</code>:</p> <pre><code>df.where((df &gt;= 10) &amp; (df &lt;= 100)).sum(axis=1) df.where((df &gt;= 10) &amp; (df &lt;= 100)).mean(axis=1) </code></pre> <p>Credit for this goes to @anky, who posted it first as a comment :)</p>
python|pandas|pandas-groupby|isin
1
1,151
72,836,451
Sort values in array by keys in another dictionary (python)
<p>Let's suppose I have an array that looks like this:</p> <pre><code>x=['Other', 'Physical Training', 'Math', 'English', 'Physics', 'Literature'] </code></pre> <p>I need to sort it (not alphabetically) by keys in dictionary:</p> <pre><code> y={'Math':0, 'Physics':1, 'Chemistry':2, 'Biology':3, 'English':4, 'Literature':5, 'History':6, 'Physical Training':7, 'Other':8} </code></pre> <p>Based on y, I need to sort x, so that the end result looks like this:</p> <pre><code>x_sorted=['Math', 'Physics', 'English', 'Literature', 'Physical Training', 'Other'] </code></pre> <p>How do I reach this?</p>
<p>if <code>x</code> is a list, to sort inplace:</p> <pre><code>x.sort(key=y.get) #['Math', 'Physics', 'English', 'Literature', 'Physical Training', 'Other'] </code></pre> <p>to sort without changing <code>x</code> itself:</p> <pre><code>x_sorted = sorted(x, key=y.get) </code></pre> <p>if <code>x</code> is an array, convert to list first:</p> <pre><code>x = list(x) </code></pre> <p>if not applicable, please provide more context in the use of arrays over lists so we can help better.</p>
python|pandas|numpy
2
1,152
73,055,157
What does "ImportError: cannot import name randbits" mean?
<p>The first cell of my jupyter notebook contains the libraries I want to import. For some reason when I run it receive the <code>ImportError: cannot import name randbits</code>. I have never seen this import error before and have already tried restarting the kernel and confirmed that all libraries were installed correctly. As anyone seen this before and know what to do about this error?</p> <pre><code>import numpy as np import pandas as pd import requests import xlsxwriter import math --------------------------------------------------------------------------- ImportError Traceback (most recent call last) Input In [1], in &lt;cell line: 1&gt;() ----&gt; 1 import numpy as np 2 import pandas as pd 3 import requests File C:\pyver\py3.10.5\lib\site-packages\numpy\__init__.py:151, in &lt;module&gt; 149 from . import fft 150 from . import polynomial --&gt; 151 from . import random 152 from . import ctypeslib 153 from . import ma File C:\pyver\py3.10.5\lib\site-packages\numpy\random\__init__.py:180, in &lt;module&gt; 126 __all__ = [ 127 'beta', 128 'binomial', (...) 176 'zipf', 177 ] 179 # add these for module-freeze analysis (like PyInstaller) --&gt; 180 from . import _pickle 181 from . import _common 182 from . import _bounded_integers File C:\pyver\py3.10.5\lib\site-packages\numpy\random\_pickle.py:1, in &lt;module&gt; ----&gt; 1 from .mtrand import RandomState 2 from ._philox import Philox 3 from ._pcg64 import PCG64, PCG64DXSM File mtrand.pyx:1, in init numpy.random.mtrand() File bit_generator.pyx:38, in init numpy.random.bit_generator() ImportError: cannot import name randbits </code></pre>
<p>I had been having the same issue all day and finally figured out what solved it. Somehow anaconda3/Lib/secrets.py got overwritten. The <code>randbits</code> that numpy fails to import comes from the standard library's <code>secrets</code> module (which in turn uses <code>random</code>), so if either of those files is broken or shadowed by another file with the same name, <code>import numpy</code> fails with this error.</p> <ul> <li>I renamed my incorrect secrets.py file.</li> <li>I found the secrets.py source code and recreated the file. That solved my issue.</li> </ul> <p>Hope it helps. The links below were the most beneficial for me.</p> <p>People having similar issues with numpy: <a href="https://github.com/numpy/numpy/issues/14860" rel="noreferrer">https://github.com/numpy/numpy/issues/14860</a></p> <p>Source code for secrets.py: <a href="https://github.com/python/cpython/blob/3.7/Lib/secrets.py" rel="noreferrer">https://github.com/python/cpython/blob/3.7/Lib/secrets.py</a></p>
python|python-3.x|numpy|import|importerror
5
1,153
70,601,746
How to operate a function over multiple columns (Pandas/Python)?
<p>Let's consider IBM HR Attrition Dataset from Kaggle (<a href="https://www.kaggle.com/pavansubhasht/ibm-hr-analytics-attrition-dataset" rel="nofollow noreferrer">https://www.kaggle.com/pavansubhasht/ibm-hr-analytics-attrition-dataset</a>). How do I rapdly gets the variable with the highest Shapiro p-value?</p> <p>In other words, I can apply a function <code>shapiro()</code> in a column as <code>shapiro(df['column'])</code>. And I would like to calculate for all the numeric columns these function.</p> <p>I tried this:</p> <pre><code>from scypy.stats import shapiro df = pd.read_csv('path') #here i was expecting the output to be a sequential prints with the name of the columns and their respective p-value from shapiro() for col in hr: print(col,&quot; : &quot;, shapiro(hr[col])[0]) </code></pre> <p>Anyone that could help on this?</p> <p>Thanks in advance.</p>
<p>I hope this helps! I'm sure there are better ways, but it was fun trying :)</p> <pre><code>import pandas as pd
from scipy import stats
</code></pre> <pre><code>df = pd.read_csv('path.csv')
</code></pre> <pre><code># make a new dataframe newdf with only the columns containing numeric data
numerics = ['int16', 'int32', 'int64', 'float16', 'float32', 'float64']
newdf = df.select_dtypes(include=numerics)
</code></pre> <pre><code># check that only numeric columns remain
print(newdf.head())
</code></pre> <pre><code># new dataframe with rows &quot;W&quot; and &quot;P&quot; (test statistic and p-value per column)
shapiro_wilks = newdf.apply(lambda x: pd.Series(stats.shapiro(x), index=['W', 'P'])).reset_index()
shapiro_wilks = shapiro_wilks.set_index('index')
print(shapiro_wilks)
</code></pre>
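<p>Since the goal was the column with the highest Shapiro p-value, a one-line follow-up on the <code>shapiro_wilks</code> frame built above (assuming it keeps the 'W'/'P' rows as its index, as in the snippet):</p> <pre><code># name of the numeric column with the largest Shapiro-Wilk p-value
best_col = shapiro_wilks.loc['P'].idxmax()
print(best_col, shapiro_wilks.loc['P', best_col])
</code></pre>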
python|pandas|scipy|statistics
0
1,154
70,466,211
groupby on column which contain bytearray object using Pandas Dataframe
<p>I have pandas dataframe and want to do groupby on Customer ID</p> <pre><code> df['rank_col'] = df.groupby('PSEUDO_CUSTOMER_ID')['DB_CREATED_DT'].rank(method='first') </code></pre> <p>now the problem is pseudo_customer_ID which look like this</p> <pre><code> [138, 76, 16, 9, 86, 71, 5, 85, 117, 237, 97, 212, 13, 157, 185, 150, 207, 97, 85, 165] </code></pre> <p>below is snapshot when I did value count on of pseudo customer ID,</p> <p><a href="https://i.stack.imgur.com/w4jK5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/w4jK5.png" alt="enter image description here" /></a></p> <p>I check the single value I got below value</p> <p><a href="https://i.stack.imgur.com/tLEr5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tLEr5.png" alt="enter image description here" /></a></p> <p><strong>Note: I want to do groupby on pseudo_customer_ID and do rank by DB_CREATED_DT column</strong></p>
<p>Convert your <code>bytearray</code> with the <code>bytes</code> function to allow grouping (and get hashable type):</p> <p>Demo:</p> <pre><code>df['PSEUDO_CUSTOMER_ID_BYTES'] = df['PSEUDO_CUSTOMER_ID'].apply(bytes) print(df) # Output: PSEUDO_CUSTOMER_ID PSEUDO_CUSTOMER_ID_BYTES 0 [138, 76, 16, 9, 86, 71, 5, 85, 117, 237, 97, ... b'\x8aL\x10\tVG\x05Uu\xeda\xd4\r\x9d\xb9\x96\x... </code></pre> <p>Group by <code>PSEUDO_CUSTOMER_ID</code>:</p> <pre><code>&gt;&gt;&gt; list(df.groupby('PSEUDO_CUSTOMER_ID')) ... TypeError: unhashable type: 'bytearray' </code></pre> <p>Group by <code>PSEUDO_CUSTOMER_ID_BYTES</code>:</p> <pre><code>&gt;&gt;&gt; list(df.groupby('PSEUDO_CUSTOMER_ID_BYTES')) [(b'\x8aL\x10\tVG\x05Uu\xeda\xd4\r\x9d\xb9\x96\xcfaU\xa5', PSEUDO_CUSTOMER_ID PSEUDO_CUSTOMER_ID_BYTES 0 [138, 76, 16, 9, 86, 71, 5, 85, 117, 237, 97, ... b'\x8aL\x10\tVG\x05Uu\xeda\xd4\r\x9d\xb9\x96\x...)] </code></pre> <p><strong>Important</strong></p> <p>If you are sure of your original encoding, you can use <code>str.decode</code> to get a <code>str</code> instead of a <code>bytes</code> string. Here it seems to be <code>latin-1</code>:</p> <pre><code>df['PSEUDO_CUSTOMER_ID_STR'] = df['PSEUDO_CUSTOMER_ID'].decode('latin1')) print(df.loc[0]) # Output: PSEUDO_CUSTOMER_ID [138, 76, 16, 9, 86, 71, 5, 85, 117, 237, 97, ... PSEUDO_CUSTOMER_ID_BYTES b'\x8aL\x10\tVG\x05Uu\xeda\xd4\r\x9d\xb9\x96\x... PSEUDO_CUSTOMER_ID_STR L\tVGUuíaÔ\rÏaU¥ Name: 0, dtype: object </code></pre> <p>Demo:</p> <pre><code>&gt;&gt;&gt; list(df.groupby('PSEUDO_CUSTOMER_ID_STR')) [('\x8aL\x10\tVG\x05UuíaÔ\r\x9d¹\x96ÏaU¥', PSEUDO_CUSTOMER_ID PSEUDO_CUSTOMER_ID_BYTES PSEUDO_CUSTOMER_ID_STR 0 [138, 76, 16, 9, 86, 71, 5, 85, 117, 237, 97, ... b'\x8aL\x10\tVG\x05Uu\xeda\xd4\r\x9d\xb9\x96\x... L\tVGUuíaÔ\rÏaU¥)] </code></pre>
python|arrays|pandas|pandas-groupby
1
1,155
51,433,372
Tensorflow estimator.DNNClassifier not repeating results
<p>Each time I run the following code I get a different 'Loss for final step' when training my model. The subsequent evaluation accuracy also changes. I have checked that the input data from train_test_split is constant.I have set the value of tf.random_seed, turned off shuffling and set the value of num_threads. I am using Tensorflow 1.8. Can anyone advise me on what else I need to do?`</p> <pre><code>from __future__ import print_function import numpy as np import pandas as pd import tensorflow as tf from sklearn.model_selection import train_test_split np.random.seed(1) tf.set_random_seed(1) df = pd.read_csv('diabetes.csv') X = df.iloc[:,0:8] y = df['Outcome'] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, stratify=None, random_state=1) def create_feature_cols(): return [ tf.feature_column.numeric_column('Pregnancies'), tf.feature_column.numeric_column('Glucose'), tf.feature_column.numeric_column('BloodPressure'), tf.feature_column.numeric_column('SkinThickness'), tf.feature_column.numeric_column('Insulin'), tf.feature_column.numeric_column('BMI'), tf.feature_column.numeric_column('DiabetesPedigreeFunction'), tf.feature_column.numeric_column('Age') ] input_func = tf.estimator.inputs.pandas_input_fn(x=X_train,y=y_train, batch_size=10,num_epochs=1000,shuffle=False,num_threads=1) model = tf.estimator.DNNClassifier(hidden_units=[20,20], feature_columns=create_feature_cols(),n_classes=2) model.train(input_fn=input_func,steps=1000) eval_input_func = tf.estimator.inputs.pandas_input_fn( x=X_test, y=y_test, batch_size=10, num_epochs=1, shuffle=False, num_threads=1) results = model.evaluate(eval_input_func)` </code></pre>
<p>Here is some code that TensorFlow sent me to solve the problem. Rather than use tf.set_random_seed, with estimators you use tf.estimator.RunConfig. </p> <pre><code>import tensorflow as tf tf.reset_default_graph() config = tf.estimator.RunConfig(tf_random_seed=234) input1_col = tf.feature_column.numeric_column('input1') input2_col = tf.feature_column.numeric_column('input2') model = tf.estimator.DNNClassifier(hidden_units=[20,20], feature_columns=[input1_col, input2_col],n_classes=2,config=config) import numpy as np input1 = np.random.random(size=(100, 1)) input2 = np.random.random(size=(100, 1)) target = np.where(np.sum(input1 + input2, axis = 1) &gt; 0, 1, 0) def train_input_fn(): return ({'input1': input1, 'input2': input2}, target) model.train(input_fn=train_input_fn, steps = 10) </code></pre>
python|repeat|tensorflow-estimator
0
1,156
51,311,062
Can't import apply_transform from keras.preprocessing.image
<p>I have been having issues importing <code>apply_transform</code> from <code>keras.preprocessing.image</code>. As far as I know the name has not changed according to Keras documentation. Anyone has any idea what might be the issue. I can, from the same library, import <code>ImageDataGenerator</code> for instance.</p> <p><a href="https://i.stack.imgur.com/Bviu7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Bviu7.png" alt="enter image description here"></a></p>
<p><code>apply_transform</code> <a href="https://github.com/keras-team/keras/commit/08c873669f39b37743014db99fcd2d308f8ea5ea#diff-93850fa46a789f2e5905894ad0e7bee4L239" rel="nofollow noreferrer">has been removed</a> from <code>image</code> module and <a href="https://github.com/keras-team/keras-preprocessing/blob/f16eb5e1801af205f73de64a41af7e90143ac641/keras_preprocessing/image.py#L1095" rel="nofollow noreferrer">has been refactored</a> as one of the methods of <code>ImageDataGenerator</code> class. Instead you can define an instance of <code>ImageDataGenerator</code> class and use it:</p> <pre><code>from keras.preprocessing.image import ImageDataGenerator img_gen = ImageDataGenerator() img_gen.apply_transform(args) </code></pre> <p>or you can use <a href="https://github.com/keras-team/keras-preprocessing/blob/f16eb5e1801af205f73de64a41af7e90143ac641/keras_preprocessing/image.py#L250" rel="nofollow noreferrer"><code>apply_affine_transform()</code></a> method from <code>keras.preprocessing.image</code> module if it satisfies your needs.</p> <p>And I think you are right. The <a href="https://keras.io/preprocessing/image/#apply_transform" rel="nofollow noreferrer">documentation</a> is wrong about this:</p> <blockquote> <p><code>keras.preprocessing.image.apply_transform(x, transform_parameters)</code></p> </blockquote> <p>whereas it should be:</p> <blockquote> <p><code>apply_transform(x, transform_parameters)</code></p> </blockquote>
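<p>For completeness, a small illustrative call (the parameter values here are made up; <code>transform_parameters</code> is a dict with keys such as <code>'theta'</code> for rotation in degrees or <code>'flip_horizontal'</code>):</p> <pre><code>import numpy as np
from keras.preprocessing.image import ImageDataGenerator

img_gen = ImageDataGenerator()
x = np.random.rand(64, 64, 3)  # a dummy HxWxC image

# rotate by 30 degrees and flip horizontally
x_t = img_gen.apply_transform(x, {'theta': 30, 'flip_horizontal': True})
</code></pre>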
python|python-3.x|tensorflow|keras|anaconda
5
1,157
51,679,840
Pandas DataFrame initialized with a nested dict fail if only specify row index
<p>I am trying to specify the row index of a DataFrame initialized with a nested dict.</p> <pre><code>pop={'Nevada': {2001: 2.4, 2002: 2.9},
 'Ohio': {2000: 1.5, 2001: 1.7, 2002: 3.6}}

pandas.DataFrame(pop, index=[2000,2001,2002])

AttributeError: 'list' object has no attribute 'astype'
</code></pre> <p>However, if I also specify columns, it works.</p> <p>I am wondering if this is just a bug in pandas, because I am sure it works on Python 3.5 with my older pandas version (not sure exactly which one). I am running Python 3.6.5 and pandas 0.23.3.</p> <p>This example is actually from Wes McKinney's book "Python for Data Analysis" (2nd edition).</p>
<p>I reproduced this problem in Python 3.7.1 with pandas 0.23.4.</p> <p>The point of that code section in the book is to show that the index order can be assigned explicitly. Passing the index as a Series instead of a plain list works around the error:</p> <pre><code>pd.DataFrame(pop, index=pd.Series([2001, 2000, 2002]))
</code></pre> <p>should work.</p>
python|pandas|dataframe
2
1,158
37,214,884
How do I choose an optimizer for my tensorflow model?
<p>Tensorflow seems to have a large collection of optimizers, is there any high level guideline (or review paper) on which one is best adapted to specific classes of loss functions ?</p>
<p>It depends on your datasets and NN models, but generally, I would start with Adam. Figure 2 in this paper (<a href="http://arxiv.org/abs/1412.6980" rel="noreferrer">http://arxiv.org/abs/1412.6980</a>) shows Adam works well.</p> <p><a href="https://i.stack.imgur.com/cWsLk.png" rel="noreferrer"><img src="https://i.stack.imgur.com/cWsLk.png" alt="enter image description here"></a></p> <p>Also, you can see a very nice animation from <a href="http://www.denizyuret.com/2015/03/alec-radfords-animations-for.html" rel="noreferrer">http://www.denizyuret.com/2015/03/alec-radfords-animations-for.html</a>.</p> <p><a href="https://i.stack.imgur.com/88sSR.gif" rel="noreferrer"><img src="https://i.stack.imgur.com/88sSR.gif" alt="enter image description here"></a></p>
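<p>In practice that usually just means swapping the optimizer line in your training graph, e.g. in TensorFlow 1.x (the learning rate below is only a placeholder to tune):</p> <pre><code># `loss` is whatever loss tensor your model defines
optimizer = tf.train.AdamOptimizer(learning_rate=1e-3)
train_op = optimizer.minimize(loss)
</code></pre>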
tensorflow
16
1,159
8,108,649
Change elements in 2d ndarray into arrays
<p>In numpy, I have a 2d array like:</p> <pre><code>[ [1 2 3 4 5] [2 3 1 4 5] ..... [3 5 2 3 5] ] </code></pre> <p>I want to replace each element in this array into a 1d array, e.g.</p> <pre><code>1 -&gt; [0 0 0 0 1] 2 -&gt; [0 0 0 1 0] </code></pre> <p>etc.</p> <p>This will convert elements into arrays, and the whole 2d array into a 3d array. I tried few things, but nothing worked. What should be the right way to do so? Thanks.</p>
<p>Suppose this is your 2d array:</p> <pre><code>x=np.random.randint(1,3,size=(3,2)) print(x) # [[2 2] # [1 2] # [2 1]] </code></pre> <p>Create the array:</p> <pre><code>y=np.array([[0,0,0,0,0],[0,0,0,0,1],[0,0,0,1,0]]) </code></pre> <p>You can look upon this array as a mapping:</p> <pre><code>0 --&gt; [0,0,0,0,0] # y[0] is mapped to [0,0,0,0,0] 1 --&gt; [0,0,0,0,1] # y[1] ... [0,0,0,0,1] 2 --&gt; [0,0,0,1,0] # y[2] ... [0,0,0,1,0] </code></pre> <p>Then the array you desire is given by <code>y[x]</code></p> <pre><code>print(y[x]) # [[[0 0 0 1 0] # [0 0 0 1 0]] # [[0 0 0 0 1] # [0 0 0 1 0]] # [[0 0 0 1 0] # [0 0 0 0 1]]] </code></pre>
python|numpy
4
1,160
37,697,934
How to remove % symbol for a particular column in a dataframe using python pandas?
<p>My data frame have following data:</p> <pre><code>company,standard,returns aaa,b1,10% bbb,b2,20% </code></pre> <p>I have to remove <code>%</code> from <code>returns</code> column.</p>
<p>First remove last value of each string by <a href="http://pandas.pydata.org/pandas-docs/stable/text.html#indexing-with-str" rel="noreferrer">indexing with str</a> and then cast to <code>int</code> or <code>float</code>:</p> <pre><code>#if int values print (df['returns'].str[:-1].astype(int)) #if flaot values print (df['returns'].str[:-1].astype(float)) </code></pre> <p>Sample:</p> <pre><code>print (df) company standard returns 0 tata b1 10% 1 dell b2 10% #if int values df['returns'] = (df['returns'].str[:-1].astype(int)) print (df) company standard returns 0 tata b1 10 1 dell b2 10 </code></pre> <p>Another solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.strip.html" rel="noreferrer"><code>str.strip</code></a>:</p> <pre><code>df['returns'] = (df['returns'].str.strip('%').astype(int)) print (df) company standard returns 0 tata b1 10 1 dell b2 10 </code></pre>
python|python-2.7|pandas|dataframe
7
1,161
31,437,402
Remove column from Pandas multiindex
<p>I have a dataframe such as:</p> <pre><code> Year Value Country Element Item ItemCode Afghanistan Production Wheat and products 2511 1961 2279 1962 2279 1963 1947 1964 2230 1965 2282 </code></pre> <p>I would like to remove the level <code>ItemCode</code> from the multiindex, yielding:</p> <pre><code> ItemCode Year Value Country Element Item Afghanistan Production Wheat and products 2511 1961 2279 2511 1962 2279 2511 1963 1947 2511 1964 2230 2511 1965 2282 </code></pre> <p>I know that it is possible with brute force, but I was wondering if there is any <code>pandas</code> specific command to this via a shortcut? </p>
<p>Try <code>df.reset_index(level='ItemCode')</code>. It moves the <code>ItemCode</code> level out of the index and into a regular column, which is exactly the layout in your expected output.</p>
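<p>If instead you wanted to discard <code>ItemCode</code> entirely rather than keep it as a column, a sketch (assuming a reasonably recent pandas):</p> <pre><code>df.index = df.index.droplevel('ItemCode')
</code></pre>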
python|pandas
3
1,162
31,385,478
Adding new rows to an array dynamically
<p>I want to initialize an empty list and keep on adding new rows to it. For example. myarray=[] now at each iteration I want to add new row which I compute during iteration. For example</p> <pre><code>for i in range(5): calc=[i,i+1,i+4,i+5] </code></pre> <p>After calc I want to add this row to myarray. Therfore after 1st iteration myarray would be 1X4, after 2nd iteration it would be 2X4 etc. I tried numpy.concatenate. It simply adds to same row ie I get 1X4 then 1X8. I tried vstack as well but since myarray is initially [] it gives error "all the input array dimensions except for the concatenation axis must match exactly"</p>
<p>It looks like you need a multi-dimensional (nested) list:</p> <pre><code>calc = [[0, 1, 4, 5]]
for i in range(1, 5):
    calc.append([i, i+1, i+4, i+5])
</code></pre> <p>This will yield the following list of rows:</p> <pre><code>calc = [[0, 1, 4, 5],
        [1, 2, 5, 6],
        [2, 3, 6, 7],
        [3, 4, 7, 8],
        [4, 5, 8, 9]]
</code></pre> <p>To access the individual rows of calc you can index it like this:</p> <pre><code>calc[0]  # returns [0, 1, 4, 5]
calc[1]  # returns [1, 2, 5, 6]
</code></pre>
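<p>If you need a NumPy array at the end (as in the original question), a common sketch is to collect the rows in a plain list first and convert once, which avoids the empty-array <code>vstack</code> problem:</p> <pre><code>import numpy as np

rows = []
for i in range(5):
    rows.append([i, i + 1, i + 4, i + 5])

myarray = np.array(rows)   # shape (5, 4)
</code></pre>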
python|arrays|numpy
1
1,163
64,276,836
Splitting a column with list of tuples into a seperate columns
<p>I'm working on a dataframe which has a shopping_cart column in the following format: the first element of the tuple is the item ordered, and the second element is the quantity ordered for such item: for example:[('Candle Inferno', 1), ('iAssist Line', 2)] implies 'Candle Inferno' was bought once and 'iAssist Line' was bought twice.</p> <pre><code>[('Candle Inferno', 1), ('iAssist Line', 2)] [('Olivia x460', 1), ('Candle Inferno', 1),('Alcon 10', 2)] [('Lucent 330S', 1), ('pearTV', 1), ('Universe Note', 2), ('Olivia x460', 2)] </code></pre> <p>I have the following code in place which is intended to separate the items and quantity for example:</p> <pre><code>item_1 qty_item_1 item_2 qty_item_2 Alcon 10 1 Candle Inferno 2 </code></pre> <p>Code:</p> <pre><code>item_1=[] item_2=[] item_3=[] import re for i in range(0,1): item1=str(list_of_items[i].split(',')[0].split(',')) item2=str(list_of_items[i].split(',')[2].split(',')) item3=str(list_of_items[i].split(',')[4].split(',')) item_1.append((re.sub(r'[^\w]',' ',item1).strip())) item_2.append((re.sub(r'[^\w]',' ',item2).strip())) item_3.append((re.sub(r'[^\w]',' ',item3).strip())) </code></pre> <p>The above works to split item names: first item(item_1),second item(item_2), however, this is too repetitive. Even if I try to create a function, the function throws an error and also fails if the number of tuples are more than 3 or 4</p> <p>Is there any better way to solve this?</p>
<pre><code>def pretty_printer(l):
    header = &quot;|&quot;.join([&quot;item_%d\tquantity_%d&quot; % (index+1, index+1) for index, _ in enumerate(l)])
    items = &quot;|&quot;.join([&quot;%s\t%d&quot; % (i[0], i[1]) for i in l])
    print(header)
    print(items)

l1 = [('Candle Inferno', 1), ('iAssist Line', 2)]
l2 = [('Olivia x460', 1), ('Candle Inferno', 1), ('Alcon 10', 2)]
l3 = [('Lucent 330S', 1), ('pearTV', 1), ('Universe Note', 2), ('Olivia x460', 2)]

pretty_printer(l1)
pretty_printer(l2)
pretty_printer(l3)
</code></pre> <p>Output:</p> <pre><code>item_1 quantity_1|item_2 quantity_2
Candle Inferno 1|iAssist Line 2
item_1 quantity_1|item_2 quantity_2|item_3 quantity_3
Olivia x460 1|Candle Inferno 1|Alcon 10 2
item_1 quantity_1|item_2 quantity_2|item_3 quantity_3|item_4 quantity_4
Lucent 330S 1|pearTV 1|Universe Note 2|Olivia x460 2
</code></pre>
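<p>If the carts actually live in a DataFrame column, a sketch that spreads each cart into <code>item_N</code>/<code>qty_item_N</code> columns (the column name <code>shopping_cart</code> is an assumption; <code>l1</code>, <code>l2</code>, <code>l3</code> are the lists defined above):</p> <pre><code>import pandas as pd

df = pd.DataFrame({'shopping_cart': [l1, l2, l3]})

def expand_cart(cart):
    # turn one list of (item, quantity) tuples into numbered columns
    row = {}
    for i, (item, qty) in enumerate(cart, start=1):
        row['item_%d' % i] = item
        row['qty_item_%d' % i] = qty
    return pd.Series(row)

result = pd.concat([df, df['shopping_cart'].apply(expand_cart)], axis=1)
print(result)
</code></pre>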
python|regex|pandas|dataframe
0
1,164
64,547,861
How to split string and get only one word in python
<p>I have a string similar to this:</p> <pre><code>HELLO TEST PACKAGE PARIS1 PROJECT
</code></pre> <p>I got this string by selecting a row and column:</p> <pre><code>project_name = df[col_pro].values[row_pro]
</code></pre> <p>I want to get only &quot;PARIS&quot; and put it into a new column called 'location_id'.</p> <p>I tried to split the string, but it was hard for me to get only &quot;PARIS&quot;.</p> <p>How can I do this with Python? Thank you.</p>
<p>I would use Named groups. Say you have <code>df</code>;</p> <pre><code> text 0 HELLO TEST PACKAGE PARIS1 PROJECT </code></pre> <p>Using Named Groups</p> <pre><code>df.text.str.extract(r'(?P&lt;location_id&gt;PARIS)') </code></pre>
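<p>If the city is not always literally <code>PARIS</code>, a slightly more general sketch that captures the letters of the token ending in a digit and joins the result back (the column name <code>text</code> is assumed, as above):</p> <pre><code>df = df.join(df['text'].str.extract(r'(?P&lt;location_id&gt;[A-Za-z]+)\d'))
</code></pre>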
python|pandas
1
1,165
64,569,141
How to sum with Null Values in group by statement using agg function in python
<p>I have a dataframe which looks like:</p> <pre><code>A B C a 100 200 a NA 100 a 200 NA a 100 100 b 200 200 b 100 200 b 200 100 b 200 100 </code></pre> <p>I use the aggregate function on column B and column C as:</p> <pre><code>ag=data.groupby(['A']).agg({'B':'sum','C':'sum'}).reset_index() Output: A B C a NULL NULL b 700 600 Expected Output: A B C a 400 400 b 700 600 </code></pre> <p><code>How can I modify my aggregate function so that NULL values are ignored?</code></p>
<p>Maybe you already thought about this and it is not possible in your problem, but you can replace the NA values with 0 in the dataframe before this operation. If you don't want to change the original dataframe, this leaves it untouched because <code>replace</code> returns a copy:</p> <pre><code>ag=data.replace(np.nan,0).groupby(['A']).agg({'B':'sum','C':'sum'}).reset_index()
</code></pre>
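<p>Also worth checking: <code>sum</code> inside <code>agg</code> already skips real <code>NaN</code> values, so if you still get NULL results the &quot;NA&quot; entries may actually be plain strings. In that case a sketch to coerce them to numbers first:</p> <pre><code>data['B'] = pd.to_numeric(data['B'], errors='coerce')
data['C'] = pd.to_numeric(data['C'], errors='coerce')
ag = data.groupby('A', as_index=False)[['B', 'C']].sum()
</code></pre>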
python-3.x|pandas|aggregate
0
1,166
47,986,269
Preprocess the input data slow down the input pipeline when using Tensorflow Dataset API to read TFRecords file
<p>I am using Tensorflow Dataset API to read TFRecords files, but the GPU usage is still low (10%). I reckon the cause is that I preprocess the data before they are fed into the <code>sess.run()</code>. Here is my code below.<br> 1. Create a dataset from 3 separate files. </p> <pre><code>tf.reset_default_graph() # The content of TFRecords files is that each row is an array. Calculate total rows. n_total_row = sum(1 for _ in tf.python_io.tf_record_iterator(epd)) def get_epd_dataset(filename): dataset = tf.data.TFRecordDataset(filename) def _parse_function(example_proto): keys_to_features = {'data':tf.VarLenFeature(tf.int64)} parsed_features = tf.parse_single_example(example_proto, keys_to_features) return tf.sparse_tensor_to_dense(parsed_features['data']) # Parse the record into tensors. dataset = dataset.map(_parse_function) return dataset # There are 3 essential files comprising input data. It reads 3 seperate # files "epd", "y_id", "x_feat" into 3 separate dataset respectively, and # uses `Dataset.zip()` to combine these 3 separate files into 1 dataset. epd_ds = get_epd_dataset(epd) n_lexicon, id_ds = get_id_dataset(y_id) feat_ds = get_feat_dataset(x_feat) data_ds = tf.data.Dataset.zip((feat_ds, epd_ds, id_ds)) # Shuffle the dataset data_ds = data_ds.shuffle(buffer_size=n_total_row, reshuffle_each_iteration=True) # Repeat the input indefinitly data_ds = data_ds.repeat(epoch) # Generate batches data_ds = data_ds.batch(1) # Create a one-shot iterator iterator = data_ds.make_one_shot_iterator() data_iter = iterator.get_next() </code></pre> <p>2. Build a Tensorflow graph. </p> <pre><code>n_input = DIM*(LEFT+1+RIGHT) n_classes = n_lexicon mlp = MultiLayerPerceptron.MultiLayerPerceptron(DIM*(LEFT+1+RIGHT), n_lexicon) # tf Graph input X = tf.placeholder("float", [None, n_input]) Y = tf.placeholder("float", [None, n_classes]) logits = mlp.multilayer_perceptron(X, dropout_mode) loss_op = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=Y), name='loss_op') optimizer = tf.train.AdamOptimizer(learning_rate=lr) train_op = optimizer.minimize(loss_op, name='train_op') </code></pre> <p>3. Generate data from <code>data_iter</code> and run TF session. </p> <pre><code>sess = tf.Session() # Initialization sess.run(tf.global_variables_initializer()) for e in range(1, epoch+1): while True: try: # Get data from dataset iterator tmp = sess.run([data_iter])[0] # a,b,c are a row from 3 serapate files. a = tmp[0].flatten() b = tmp[1].flatten() c = tmp[2].flatten() # I believe this step slows down my input pipeline. x_train, y_train = _data_generate(mlp, b, d, c) _, c = sess.run([train_op, loss_op], feed_dict={X: x_train, Y: y_train}) except tf.errors.OutOfRangeError: break sess.close() </code></pre> <p>My code reaches about 10~15% of GPU usage. I think the cause is that <code>_data_generate()</code> consumes too much time on processing numpy array. But I don't know how to improve my pipeline. Here are my questions. </p> <ol> <li>According to <a href="https://www.tensorflow.org/performance/performance_models" rel="nofollow noreferrer">Tensorflow performance guide</a> and <a href="https://www.tensorflow.org/programmers_guide/datasets" rel="nofollow noreferrer">Importing Data</a>, I think using Dataset API and TFRecords files is my best option to solve this low-GPU-usage problem. Or should I use python multithread to feed data into a buffer first and then feed data to <code>sess.run()</code>. 
I didn't choose the latter solution because <a href="https://www.tensorflow.org/performance/performance_models" rel="nofollow noreferrer">this website</a> mentions that</li> </ol> <blockquote> <p>We found that using tf.FIFOQueue and tf.train.queue_runner could not saturate multiple current generation GPUs when using large inputs and processing with higher samples per second,</p> </blockquote> <ol start="2"> <li><p>I think that putting <code>_data_generate()</code> in <code>_parse_function()</code> may solve this problem, because Tensorflow would handle the preprocessing instead of Python. But I don't know how to do this, since <code>_data_generate()</code> needs 3 rows from 3 separate files. Does anyone know how to do this?</p></li> <li><p>Are there other methods that could solve my low-GPU-usage problem?</p></li> </ol> <p>Thank you.</p>
<p>Can you share the code of the <code>_data_generate</code> function? I can't see what it does.</p> <p>As you pointed out, performance is likely lost because of RAM &lt;-&gt; GPU memory swaps and mixing TensorFlow ops with pythonic ones.</p> <p>Instead of running the iterator <code>data_iter</code> yourself with <code>sess.run()</code>, doing numpy operations and then the training step, pass <code>data_iter</code> as input to your neural network graph - it should replace the placeholders (just make a function that constructs the graph using <code>data_iter</code> as a parameter).</p> <p>Also, try to enable parallelism wherever it is possible/needed. Here:</p> <pre><code>...
return tf.sparse_tensor_to_dense(parsed_features['data'])
    # Parse the record into tensors.
    dataset = dataset.map(_parse_function)
    return dataset
</code></pre> <p>You should use:</p> <pre><code>dataset.map(_parse_function, num_parallel_calls=&lt;MORE THAN ONE&gt;)
</code></pre> <p>Where <code>&lt;MORE THAN ONE&gt;</code> is an integer bigger than one (the argument is called <code>num_parallel_calls</code> in <code>tf.data</code>; <code>num_threads</code> was the name in the older <code>tf.contrib.data</code> API). In your case I would start with 8 and check whether the GPU reaches 100%.</p> <p>Check this out and tell me if it helps.</p>
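<p>A rough, untested sketch of what "pass <code>data_iter</code> as input to the graph" could look like in your code. It assumes the per-row work done in <code>_data_generate</code> has been moved into the dataset's <code>map</code> stage so the pipeline already yields training-ready (features, labels) batches:</p> <pre><code># instead of the placeholders X, Y:
x_batch, y_batch = iterator.get_next()          # tensors, not numpy arrays

logits = mlp.multilayer_perceptron(x_batch, dropout_mode)
loss_op = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_batch))
train_op = tf.train.AdamOptimizer(learning_rate=lr).minimize(loss_op)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    while True:
        try:
            _, c = sess.run([train_op, loss_op])   # no feed_dict needed
        except tf.errors.OutOfRangeError:
            break
</code></pre>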
python|performance|tensorflow|tfrecord
1
1,167
47,741,873
Unable to install Numpy on AWS EC2 Python 3.6
<p>I have an Amazon EC2 instance running Amazon Linux and a virtual environment with python 3.6.</p> <p>I can't seem to install Numpy :</p> <pre><code>(testenv) [ec2-user@ip-xxx-xx-xx-xx venv]$ pip3 install numpy Collecting numpy Using cached numpy-1.13.3-cp36-cp36m-manylinux1_x86_64.whl Installing collected packages: numpy Successfully installed numpy-1.13.3 (testenv) [ec2-user@ip-xxx-xx-xx-xx venv]$ python Python 3.6.2 (default, Nov 2 2017, 19:34:31) [GCC 4.8.5 20150623 (Red Hat 4.8.5-11)] on linux Type "help", "copyright", "credits" or "license" for more information. &gt;&gt;&gt; import numpy Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; ModuleNotFoundError: No module named 'numpy' </code></pre> <p>I also did a <code>sudo python36 -m pip install numpy</code> it did not work.</p>
<p>I got the same problem and found a solution that worked for me on this website: <a href="https://samsblogofthings.wordpress.com/2016/07/17/installing-numpy-on-your-amazon-ec2-instance-for-a-particular-python-version/" rel="nofollow noreferrer">https://samsblogofthings.wordpress.com/2016/07/17/installing-numpy-on-your-amazon-ec2-instance-for-a-particular-python-version/</a></p> <p>Basically, do:</p> <p><code>alias sudo='sudo env PATH=$PATH'</code></p> <p>and then: <code>sudo pip3 install numpy</code></p>
python|numpy|amazon-ec2
1
1,168
47,861,085
using recently created attributes in pandas dataframe to create new attribute
<p>I'm looking for the equivalent of R's mutate, which allows you to reference defined variables immediately after creating them <em>within the same mutate call</em>. </p> <pre><code>new_df &lt;- old_df %&gt;% mutate(new_col = ifelse(something, 0, 1), newer_col = ifelse(new_col == 0, 'yay', 'nay')) </code></pre> <p>Looking for the equivalent in python pandas. </p> <p>if I create the following dataframe, I was wondering if there is a way to use <code>.assign</code> to do the same thing?</p> <pre><code>dic = {'names': ['jeff', 'alice', 'steph', 'john'], 'numbers':[4, 6, 5, 7]} df = pd.DataFrame(dic) df = df.assign(less_than_6 = np.where(df.numbers &lt; 6, 100, 0), pass_fail = np.where(df.less_than_6 == 100, 'pass', 'fail')) </code></pre> <p>The alternative I can think of is..</p> <pre><code>df['less_than_6'] = np.where(df.numbers &lt; 6, 100, 0) df['pass_fail'] = np.where(df.less_than_6 == 100, 'pass', 'fail') </code></pre> <p>but was wondering if there is a way to do it in the same call?</p>
<p>Using dict in <code>assign</code> </p> <pre><code>df.assign(**{'less_than_6' :lambda x : np.where(x['numbers'] &lt; 6, 100, 0)}).assign(**{'pass_fail':lambda x : np.where(x['less_than_6'] == 100, 'pass', 'fail')}) Out[202]: names numbers less_than_6 pass_fail 0 jeff 4 100 pass 1 alice 6 0 fail 2 steph 5 100 pass 3 john 7 0 fail </code></pre>
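<p>As an aside, on Python 3.6+ with pandas 0.23+ <code>assign</code> evaluates its keyword arguments in order, so later lambdas can already see columns created by earlier ones and the whole thing fits in a single call:</p> <pre><code>df = df.assign(
    less_than_6=lambda x: np.where(x['numbers'] &lt; 6, 100, 0),
    pass_fail=lambda x: np.where(x['less_than_6'] == 100, 'pass', 'fail'),
)
</code></pre>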
python|r|pandas|dataframe|dplyr
2
1,169
47,971,809
Pandas Map creating NaNs
<p>My intention is to replace labels. I found out about using a dictionary and map it to the dataframe. To that end, I first extracted the necessary fields and created a dictionary which I then fed to the map function. </p> <p>My programme is as follows:</p> <pre><code>factor_name = 'Help in household' df = pd.read_csv('dat.csv') labels = pd.read_csv('labels.csv') fact_df = labels.loc[labels['Column'] == factor_name] fact_dict = dict(zip(fact_df['Level'], fact_df['Rename'])) print df.index.to_series().map(fact_dict) </code></pre> <p>My labels.csv is as follows:</p> <pre><code>Column,Name,Level,Rename Help in household,Every day,4,Every day Help in household,Never,1,Never Help in household,Once a month,2,Once a month Help in household,Once a week,3,Once a week State,AN,AN,Andaman &amp; Nicobar State,AP,AP,Andhra Pradesh State,AR,AR,Arunachal Pradesh State,BR,BR,Bihar State,CG,CG,Chattisgarh State,CH,CH,Chandigarh State,DD,DD,Daman &amp; Diu State,DL,DL,Delhi State,DN,DN,Dadra &amp; Nagar Haveli State,GA,GA,Goa State,GJ,GJ,Gujarat State,HP,HP,Himachal Pradesh State,HR,HR,Haryana State,JH,JH,Jharkhand State,JK,JK,Jammu &amp; Kashmir State,KA,KA,Karnataka State,KL,KL,Kerala State,MG,MG,Meghalaya State,MH,MH,Maharashtra State,MN,MN,Manipur State,MP,MP,Madhya Pradesh State,MZ,MZ,Mizoram State,NG,NG,Nagaland State,OR,OR,Orissa State,PB,PB,Punjab State,PY,PY,Pondicherry State,RJ,RJ,Rajasthan State,SK,SK,Sikkim State,TN,TN,Tamil Nadu State,TR,TR,Tripura State,UK,UK,Uttarakhand State,UP,UP,Uttar Pradesh State,WB,WB,West Bengal </code></pre> <p>My dat.csv is as follows:</p> <pre><code>Id,Help in household,Maths,Reading,Science,Social 11011001001,4,20.37,,27.78, 11011001002,3,12.96,,38.18, 11011001003,4,27.78,70,, 11011001004,4,,56.67,,36 11011001005,1,,,14.55,8.33 11011001006,4,,23.33,,30 11011001007,4,40.74,70,, 11011001008,3,,26.67,,22.92 </code></pre> <p>Intended result is as follows:</p> <pre><code>4 Every day 1 Never 2 Once a month 3 Once a week </code></pre> <p>The mapping fails. The result always causes NaNs to appear which I do not want. Can anyone tell me why?</p>
<p>Try this:</p> <pre><code>In [140]: df['Help in household'] \ .astype(str) \ .map(labels.loc[labels['Column']=='Help in household',['Level','Rename']] .set_index('Level')['Rename']) Out[140]: 0 Every day 1 Once a week 2 Every day 3 Every day 4 Never 5 Every day 6 Every day 7 Once a week Name: Help in household, dtype: object </code></pre> <p>You may also consider using <code>merge</code>:</p> <pre><code>In [147]: df.assign(Level=df['Help in household'].astype(str)) \ .merge(labels.loc[labels['Column']=='Help in household',['Level','Rename']], on='Level') Out[147]: Id Help in household Maths Reading Science Social Level Rename 0 11011001001 4 20.37 NaN 27.78 NaN 4 Every day 1 11011001003 4 27.78 70.00 NaN NaN 4 Every day 2 11011001004 4 NaN 56.67 NaN 36.00 4 Every day 3 11011001006 4 NaN 23.33 NaN 30.00 4 Every day 4 11011001007 4 40.74 70.00 NaN NaN 4 Every day 5 11011001002 3 12.96 NaN 38.18 NaN 3 Once a week 6 11011001008 3 NaN 26.67 NaN 22.92 3 Once a week 7 11011001005 1 NaN NaN 14.55 8.33 1 Never </code></pre>
python|pandas|csv
1
1,170
47,873,380
Recursive integration with array - Stefan Boltzmann law
<p>I'm trying to plot the Stefan-Boltzmann law by integrating Planck's law. When I set a single temperature, say T=3000, the code computes the integral fine. However, when I make T an array like np.array([310,3000,5800,15000]), the code gives me errors. The attached image is the plot I am trying to reproduce. If anyone has insight into this problem, I would really appreciate it. Thank you in advance.</p> <pre><code>import matplotlib.pyplot as plt
import numpy as np

h = 6.626e-34
c = 2.9979e+8
k = 1.38e-23
T=np.array([310,3000,5800,15000])

from scipy.integrate import quad

def integrand(wav):
    return (2.0*3.14*h*c**2)/ ( ((wav*1e3*1e-9)**5) * (np.exp(h*c/(wav*1e3*1e-9*k*T)) - 1.0) )*1e-6

power, err = quad(integrand, 0.01, 100)
print(power)
</code></pre> <p><a href="https://i.stack.imgur.com/KXlmz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KXlmz.png" alt="Stefan Boltzmann law by Methamatica"></a></p>
<p>You would need to do the integration for each temperature separately. </p> <pre><code>import matplotlib.pyplot as plt import numpy as np from scipy.integrate import quad h = 6.626e-34 c = 2.9979e+8 k = 1.38e-23 temps=np.linspace(300,15000) def integrand(wav,T): return (2.0*3.14*h*c**2)/ ( ((wav*1e3*1e-9)**5) * (np.exp(h*c/(wav*1e3*1e-9*k*T)) - 1.0) )*1e-6 p = lambda T: quad(integrand, 0.1, 100, args=(T,))[0] powers = list(map(p, temps)) plt.plot(temps, powers) plt.xlabel("Temperature [K]") plt.ylabel("Power") plt.show() </code></pre> <p><a href="https://i.stack.imgur.com/z4z14.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/z4z14.png" alt="enter image description here"></a></p>
numpy|matplotlib|scipy
1
1,171
49,074,599
Writing a nested list into CSV (Python)
<p>I have a list that looks like this:</p> <pre><code>hello = [(('case', 'iphone'), 91524), (('adapter', 'iphone'), 12233), (('battery', 'smartphone'), 88884)] </code></pre> <p>And I am simply trying to write it to a csv file, looking like this:</p> <pre><code>keyword 1 keyword 2 frequency case iphone 91524 adapter iphone 12233 battery smartphone 88884 </code></pre> <p>I can't figure my way around it. I couldn't transform the list into a DataFrame, either. I tried to apply some code suggested here <a href="https://stackoverflow.com/questions/14037540/writing-a-python-list-of-lists-to-a-csv-file">Writing a Python list of lists to a csv file</a> without any success.</p>
<p>Pandas is convenient for this:</p> <pre><code>import pandas as pd hello = [(('case', 'iphone'), 91524), (('adapter', 'iphone'), 12233), (('battery', 'smartphone'), 88884)] df = pd.DataFrame([[i[0][0], i[0][1], i[1]] for i in hello], columns=['keyword 1', 'keyword 2', 'frequency']) # keyword 1 keyword 2 frequency # 0 case iphone 91524 # 1 adapter iphone 12233 # 2 battery smartphone 88884 df.to_csv('file.csv', index=False) </code></pre>
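<p>If you'd rather avoid the pandas dependency, a plain <code>csv</code>-module version of the same idea:</p> <pre><code>import csv

hello = [(('case', 'iphone'), 91524), (('adapter', 'iphone'), 12233), (('battery', 'smartphone'), 88884)]

with open('file.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['keyword 1', 'keyword 2', 'frequency'])
    for (kw1, kw2), freq in hello:
        writer.writerow([kw1, kw2, freq])
</code></pre>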
python|list|pandas|csv
4
1,172
49,018,263
map with named function vs identical lambda function providing different responses, pandas
<p>I am trying to apply a simple function to extract the month from a string column in a pandas dataframe, where the string is of the form m/d/yyyy.</p> <p>The dataframe is called data, the date column is called transaction date, and my new proposed month column I wish to call transaction month.</p> <p>The below works just fine:</p> <pre><code>data['transaction month']=data['transaction date'].map(lambda x: x[0:x.index('/')])
</code></pre> <p>However, if I try to do the same thing with a named function, it just returns a column where every value is <code>None</code>:</p> <pre><code>def extract_month_from_date(date):
    return date[0:date.index('/')]

data['transaction month 2']=data['transaction date'].map(extract_month_from_date)
</code></pre> <p>I've stared at the code long enough that I think I'm going crazy. What's wrong with the named-function version?</p>
<p>You can extract the month via <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.dt.month.html" rel="nofollow noreferrer"><code>pd.Series.dt.month</code></a>:</p> <pre><code>import pandas as pd df = pd.DataFrame({'date': ['8/2/2018']}) df['date'] = pd.to_datetime(df['date']) df['month'] = df['date'].dt.month # date month # 0 2018-08-02 8 </code></pre>
python|pandas
0
1,173
70,064,642
Setting the index for a pandas dataframe Python
<p>The code below creates a bunch of data tables form the dictionary below. I am trying to add <code>Values</code> as the index parameter for all the data tables but they all the table values turn into <code>nan</code>. How will I be able to change it so that the Indexes <code>0 to 2</code> are replaced with <code>First, Second, Third</code>.</p> <p>Dictionary:</p> <pre><code>Outcomes = { 'Values':{ 'First': { 'option 1': np.array([12,345,5412]), 'option 2': np.array([2315,32,1]), 'option 3': {'row 1': np.array([232,3,1]), 'row 2': np.array([3,4,5]), 'row 3': np.array([15,6,12])} } } } </code></pre> <p>Code:</p> <pre><code>import pandas as pd import numpy as np Values = ['First', 'Second', 'Third'] def get_nested_df(dic, concat_key=&quot;&quot;, df_dic=dict()): rows = {k:v for k,v in dic.items() if not isinstance(v, dict)} if rows: df_dic.update({concat_key: pd.DataFrame.from_dict(rows)}) for k,v in dic.items(): if isinstance(v, dict): get_nested_df(v, f&quot;{concat_key} {k}&quot;, df_dic) return df_dic df_dic = get_nested_df(Outcomes) for k,v in df_dic.items(): #print(f&quot;{k}\n{v}\n&quot;) display(pd.DataFrame(v, index=Values).style.set_caption(k).set_table_styles([{ 'selector': 'caption', 'props': [ ('color', 'red'), ('font-size', '16px'), ('text-align', 'center') ] }])) </code></pre> <p>Output:</p> <p><a href="https://i.stack.imgur.com/wTjp9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wTjp9.png" alt="enter image description here" /></a></p>
<p>Consider using the following code instead; note that the styled object has to be the one you display, otherwise the caption and table styles are built and then thrown away:</p> <pre><code>for k,v in df_dic.items(): #print(f&quot;{k}\n{v}\n&quot;) v.index = pd.Index(Values) styled = v.style.set_caption(k).set_table_styles([{ 'selector': 'caption', 'props': [ ('color', 'red'), ('font-size', '16px'), ('text-align', 'center') ] }]) display(styled) </code></pre>
python|pandas|database|function|dictionary
0
1,174
70,144,451
compare column value with another value in a dataframe (weather data forecast)
<p>I need to compare my column value (113 839 values) with the mean-value(rainfall) of a category (Location)(44 values). If it is higher than my mean value it should be replaced by the mean value. My foreach does not work:</p> <pre><code>df_rainfall = pd.DataFrame(weather_train_data_total.groupby(['Location'])['Rainfall'].mean()) for column in weather_train_data_total[['Location']]: result = weather_train_data_total[column] print(result) if result.equals(df_rainfall['Location']): result = df_rainfall['Rainfall'] </code></pre> <p><a href="https://i.stack.imgur.com/75hlL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/75hlL.png" alt="enter image description here" /></a></p>
<p>Without data, it's always tricky to help but you can try to adapt this:</p> <pre><code># calculate and assign the average value for each group df[&quot;mean_val&quot;] = df.groupby(&quot;Location&quot;)[&quot;Rainfall&quot;].transform(&quot;mean&quot;) # identify rows in which the value is above the average relevant_rows = df[&quot;mean_val&quot;] &lt; df[&quot;Rainfall&quot;] # replace these values with their corresponding average df.loc[relevant_rows, [&quot;Rainfall&quot;]] = df.loc[relevant_rows, [&quot;mean_val&quot;]][&quot;mean_val&quot;] df </code></pre>
python|pandas|dataframe|numpy|foreach
0
1,175
70,168,752
could not convert string to float: '2,550,000,000'
<p>I'm trying to fit a module to my dataframe but im getting <code>could not convert string to float: '2,550,000,000'</code> error. please take a look at my codes below:</p> <pre><code>import tensorflow as tf import pandas as pd import matplotlib.pyplot as plt from sklearn.compose import make_column_transformer from sklearn.preprocessing import MinMaxScaler, OneHotEncoder from sklearn.model_selection import train_test_split houseprice = pd.read_csv('houseprice.csv') houseprice = houseprice.drop(&quot;Price&quot;, axis=1) print(houseprice) </code></pre> <p>the outcome of the <code>Print(houseprice)</code> is this: <a href="https://i.stack.imgur.com/XrYHS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XrYHS.png" alt="enter image description here" /></a></p> <p>here is the rest of my code that i'm getting the error in this part</p> <pre><code># creating X and y (test set and train set) ct = make_column_transformer( (MinMaxScaler(), [&quot;Area&quot;, &quot;Room&quot;]), (OneHotEncoder(handle_unknown=&quot;ignore&quot;), [&quot;Parking&quot;, &quot;Warehouse&quot;, &quot;Elevator&quot;, &quot;Address&quot;]) ) X = houseprice.drop(&quot;Price(USD)&quot;, axis=1) y = houseprice[&quot;Price(USD)&quot;] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42) ct.fit(X_train) </code></pre> <p>and here is a picture of my error (im trying to compile it in google colab but im getting this error in vscode too): <a href="https://i.stack.imgur.com/QDIZX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QDIZX.png" alt="enter image description here" /></a></p> <p>I would appreciate if someone can help me</p>
<p>You can specify the thousands separator when you read the file like this:</p> <pre><code>houseprice = pd.read_csv('houseprice.csv', thousands=',') </code></pre>
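<p>If the column has already been loaded as strings, it could also be cleaned up afterwards. A small sketch, assuming the column is named <code>Price(USD)</code> as in your dataframe:</p> <pre><code>houseprice['Price(USD)'] = (houseprice['Price(USD)'].astype(str) .str.replace(',', '', regex=False) .astype(float)) </code></pre>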
python|pandas
3
1,176
70,320,221
Read the data from a text file and reshape the data in python using pandas
<p>I need to convert the following text file into csv format using Python pandas. I have a dataset in the following format. It is a text file and doesn't have header.</p> <p><a href="https://i.stack.imgur.com/sz3o9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sz3o9.png" alt="Input text file" /></a></p> <p>And I need it in a format like this:</p> <pre><code>SourceFile RowNo ColNo Value InputFile.txt 1 1 H1000 InputFile.txt 1 2 Sample_ID InputFile.txt 1 3 MGA_E InputFile.txt 1 4 MGA_N InputFile.txt 2 1 H1001 InputFile.txt 2 2 Method InputFile.txt 2 6 AR101 InputFile.txt 3 1 H1002 InputFile.txt 3 2 Units InputFile.txt 3 3 metres InputFile.txt 3 4 metres InputFile.txt 4 1 H1003 InputFile.txt 4 2 LLD InputFile.txt 4 6 0.01 InputFile.txt 4 7 5 InputFile.txt 5 1 D InputFile.txt 5 2 DAL011 InputFile.txt 5 3 446500 InputFile.txt 5 4 6644000 InputFile.txt 5 5 L InputFile.txt 5 6 9.13 InputFile.txt 6 1 D InputFile.txt 6 2 DAL020 InputFile.txt 6 3 462800 InputFile.txt 6 4 6653400 InputFile.txt 6 5 L InputFile.txt 6 6 8.6 InputFile.txt 7 1 EOF </code></pre> <p>Here is what I have tried:</p> <pre><code>if 'txt' in str(path_txt): all_dfs = pd.read_csv(str(path_txt), dtype=str, sep='\t', header=None) # Reading file else: print ('Unknow file', str(path_txt)) for (i, c) in enumerate(all_dfs.columns): ds = ds.append(pd.DataFrame({ 'SourceFile': path_txt.name, 'RowNo': range(1, len(all_dfs) + 1), 'ColNo': i + 1, 'Value': all_dfs[[str(c)]].values[:, 0], })) ds['Value'].replace('', np.nan, inplace=True) ds.dropna(subset=['Value'], inplace=True) ds.to_csv('Data.csv', index=False) </code></pre> <ol> <li><p>SourceFile(name of the txt file)</p> </li> <li><p>RowNumber(which row, Value field is coming)</p> </li> <li><p>ColumnNumber(which column, value field is coming)</p> </li> <li><p>Value(actual data)</p> </li> </ol> <p><strong>Error: KeyError: &quot;None of [Index(['0'], dtype='object')] are in the [columns]&quot;</strong></p>
<p>This code will do exactly what you want, in a simpler (and faster) way:</p> <pre><code>with open('your_file.txt') as f: text = f.read().strip() lines = [[('InputFile.txt', line_no + 1, col_no + 1, cell) for col_no, cell in enumerate(re.split('\t', l.strip())) if cell != ''] for line_no, l in enumerate(re.split(r'[\r\n]+', text))] l = [] for line in lines: l.extend(line) df = pd.DataFrame(l, columns=['SourceFile', 'RowNo', 'ColNo', 'Value']) </code></pre> <p>Output:</p> <pre><code>&gt;&gt;&gt; df SourceFile RowNo ColNo Value 0 InputFile.txt 1 1 H1000 1 InputFile.txt 1 2 Sample_Id 2 InputFile.txt 1 3 MGA_E 3 InputFile.txt 1 4 MGA_N 4 InputFile.txt 2 1 H1001 5 InputFile.txt 2 2 Method 6 InputFile.txt 2 6 AR101 7 InputFile.txt 3 1 H1002 8 InputFile.txt 3 2 Units 9 InputFile.txt 3 3 metres 10 InputFile.txt 3 4 metres 11 InputFile.txt 4 1 H1003 12 InputFile.txt 4 2 LLD 13 InputFile.txt 4 6 0.01 14 InputFile.txt 4 7 5 15 InputFile.txt 5 1 D 16 InputFile.txt 5 2 DAL011 17 InputFile.txt 5 3 446500 18 InputFile.txt 5 4 6644000 19 InputFile.txt 5 5 L 20 InputFile.txt 5 6 9.13 21 InputFile.txt 6 1 D 22 InputFile.txt 6 2 DAL020 23 InputFile.txt 6 3 462800 24 InputFile.txt 6 4 6653400 25 InputFile.txt 6 5 L 26 InputFile.txt 6 6 8.6 27 InputFile.txt 7 1 EOF </code></pre> <p>I verified that it produces your expected output dataframe perfectly.</p>
python|pandas|text-files
0
1,177
56,141,648
AttributeError: module 'numpy' has no attribute 'testing' when importing sklearn library
<p>I've imported numpy together with sklearn library but I got an Error <code>AttributeError: module 'numpy' has no attribute 'testing'</code></p> <p>If I removed sklearn library from my code, it could run well. </p> <p>the code is just like this:</p> <pre><code>import numpy as np from kumparanian import ds from sklearn.feature_extraction.text import TfidfVectorizer, TfidfTransformer, CountVectorizer, HashingVectorizer </code></pre> <p>Traceback:</p> <pre><code>File "&lt;ipython-input-37-76f2395d81c0&gt;", line 1, in &lt;module&gt; runfile('C:/Users/LENOVO/Downloads/ds_assessment_v2/model.py', wdir='C:/Users/LENOVO/Downloads/ds_assessment_v2') File "C:\Users\LENOVO\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 705, in runfile execfile(filename, namespace) File "C:\Users\LENOVO\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 102, in execfile exec(compile(f.read(), filename, 'exec'), namespace) File "C:/Users/LENOVO/Downloads/ds_assessment_v2/model.py", line 41, in &lt;module&gt; from sklearn.feature_extraction.text import TfidfVectorizer, TfidfTransformer, CountVectorizer, HashingVectorizer File "C:\Users\LENOVO\AppData\Roaming\Python\Python36\site-packages\sklearn\__init__.py", line 76, in &lt;module&gt; from .base import clone File "C:\Users\LENOVO\AppData\Roaming\Python\Python36\site-packages\sklearn\base.py", line 16, in &lt;module&gt; from .utils import _IS_32BIT File "C:\Users\LENOVO\AppData\Roaming\Python\Python36\site-packages\sklearn\utils\__init__.py", line 13, in &lt;module&gt; from scipy.sparse import issparse File "C:\Users\LENOVO\Anaconda3\lib\site-packages\scipy\sparse\__init__.py", line 228, in &lt;module&gt; from .base import * File "C:\Users\LENOVO\Anaconda3\lib\site-packages\scipy\sparse\base.py", line 9, in &lt;module&gt; from scipy._lib._numpy_compat import broadcast_to File "C:\Users\LENOVO\Anaconda3\lib\site-packages\scipy\_lib\_numpy_compat.py", line 17, in &lt;module&gt; _assert_warns = np.testing.assert_warns AttributeError: module 'numpy' has no attribute 'testing' </code></pre> <p>Every suggestion is really appreciated.</p>
<p>Try adding an explicit import of the testing submodule and using it directly, for example:</p> <pre><code>import numpy.testing as npt npt.assert_array_almost_equal(answer1, answer2) </code></pre>
python|numpy|scikit-learn
0
1,178
55,680,603
pandas filter on DatetimeIndex by excluding date range
<p>I currently have a <code>pandas.DataFrame</code> which has a <code>pandas.DatetimeIndex</code> and a set of values.</p> <p>I would like to <strong>exclude</strong> all the dates in a given <code>pandas.date_range</code> from this <code>pandas.DataFrame</code>.</p> <p>Example code:</p> <pre><code>dates = pd.date_range(start='04/01/2012', end='04/01/2019', freq='MS') df = pd.DataFrame(data=[100]*len(dates),index=dates,columns=["val"]) exclusion_dates = pd.date_range(start='04/01/2012', end='04/01/2019', freq=pd.offsets.DateOffset(months=12)) </code></pre> <p><strong>My attempt:</strong></p> <pre><code>df.loc[~exclusion_dates,:] </code></pre> <p>Ideally this would lead to <code>df</code> containing all dates <strong>except</strong> for <code>1st April YYYY</code></p> <p>However, this leads to the below error:</p> <blockquote> <p>TypeError: bad operand type for unary ~: 'DatetimeIndex'</p> </blockquote> <p>I looked at the below thread, however could not find anything: <a href="https://stackoverflow.com/questions/22898824/filtering-pandas-dataframes-on-dates">Filtering Pandas DataFrames on dates</a></p>
<p>Use <code>isin()</code>:</p> <pre><code>df.loc[~df.index.isin(exclusion_dates)] val 2012-02-01 100 2012-03-01 100 &lt;-- April excluded 2012-05-01 100 2012-06-01 100 2012-07-01 100 2012-08-01 100 2012-09-01 100 2012-10-01 100 2012-11-01 100 2012-12-01 100 2013-01-01 100 2013-02-01 100 2013-03-01 100 &lt;-- April excluded 2013-05-01 100 ... </code></pre> <p>Note: The default format treats your date strings as mm/dd/yyyy. So use: </p> <pre><code>pd.date_range(start='04/01/2012', end='04/01/2019', ...) </code></pre>
python|pandas|dataframe
3
1,179
65,033,492
Apply multiple functions to GroupBy object in a specific order
<p>I have a dataframe <code>df</code> with at column <code>date</code> which consists of date.</p> <p>If I want to calculate the maximum difference between the dates within each group, is that doable (without having to re-group and without using <code>.apply</code>)? If I do</p> <pre class="lang-py prettyprint-override"><code> df id | date | ---+------- 1 | 2020-01-20 1 | 2020-01-25 2 | 2020-02-03 2 | 2020-02-04 max_diff_for_each_id = df.groupby(&quot;id&quot;).diff(1).max() max_diff_for_each_id id -- 1 5 </code></pre> <p>that of course give the maximum difference between all groups, where I want</p> <pre class="lang-py prettyprint-override"><code>id -- 1 5 2 1 </code></pre> <p>I know I can just re-group <code>max_diff_for_each_id</code> but I think that</p> <pre class="lang-py prettyprint-override"><code>max_diff_for_each_id = df.groupby(&quot;id&quot;).diff(1).groupby(&quot;id&quot;).max() </code></pre> <p>is not really &quot;pretty&quot; and say you have multiple functions to apply, there is a ton of overhead by having to re-group all the time</p>
<blockquote> <p>is that doable (without having to re-group and without using .apply)</p> </blockquote> <p>I think generally not; but if there are only 2 values per group, or the data follows some other special pattern, there are alternatives.</p> <pre><code>#if always 2 values per id in order df1 = df.groupby(&quot;id&quot;)['date'].agg(['min','max']) max_diff_for_each_id = df1['max'].sub(df1['min']).dt.days </code></pre> <p>Or:</p> <pre><code>#if always 2 values per id df2 = df.groupby(&quot;id&quot;)['date'].agg(['first','last']) max_diff_for_each_id = df2['last'].sub(df2['first']).dt.days </code></pre> <p>One idea is to convert <code>id</code> to the index, but <code>max(level=0)</code> is only a hidden <code>.groupby(level=0).max()</code>, so this is a bit of a trick solution (in my opinion):</p> <pre><code>max_diff_for_each_id = df.set_index('id').groupby(&quot;id&quot;)['date'].diff().max(level=0).dt.days </code></pre> <hr /> <p>It is also possible to chain multiple <code>groupby</code> calls like:</p> <pre><code>max_diff_for_each_id = df.groupby(&quot;id&quot;)['date'].diff(1).groupby(df[&quot;id&quot;]).max().dt.days </code></pre> <p>Or create custom functions like:</p> <pre><code>max_diff_for_each_id = df.groupby(&quot;id&quot;)['date'].apply(lambda x: x.diff().max()).dt.days max_diff_for_each_id = df.groupby(&quot;id&quot;)['date'].agg(lambda x: x.diff().max()).dt.days </code></pre> <hr /> <pre><code>print (max_diff_for_each_id) id 1 5 2 1 dtype: int64 </code></pre>
python|pandas|pandas-groupby
2
1,180
64,690,655
Plus one in calculating area of rectangle
<pre><code>areas = (end_x - start_x + 1) * (end_y - start_y + 1) </code></pre> <p>Above is what is used to calculate the area of a rectangle for non-max-suppression in the two links below. Why is there a need for the plus one?</p> <p><a href="https://github.com/amusi/Non-Maximum-Suppression/blob/master/nms.py" rel="nofollow noreferrer">https://github.com/amusi/Non-Maximum-Suppression/blob/master/nms.py</a> <a href="https://www.pyimagesearch.com/2014/11/17/non-maximum-suppression-object-detection-python/" rel="nofollow noreferrer">https://www.pyimagesearch.com/2014/11/17/non-maximum-suppression-object-detection-python/</a></p>
<p>I guess the plus one is just used to get the exact area. For example, if the width begins at pixel 2 and ends at pixel 4, the exact width is 3 pixels (pixels 2, 3 and 4), and 3 equals 4 - 2 + 1.</p> <p>But in my opinion, it's not essential to worry about that. Just make sure you calculate every area using the same convention.</p>
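<p>A quick illustration with made-up coordinates (not taken from either link), treating the box corners as inclusive pixel indices:</p> <pre><code>import numpy as np # hypothetical boxes as [start_x, start_y, end_x, end_y], corners inclusive boxes = np.array([[2, 2, 4, 4], # covers pixels 2, 3, 4 in each direction: 3 x 3 = 9 [0, 0, 9, 4]]) # covers 10 x 5 = 50 pixels start_x, start_y, end_x, end_y = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3] areas_inclusive = (end_x - start_x + 1) * (end_y - start_y + 1) # pixel-index convention areas_plain = (end_x - start_x) * (end_y - start_y) # continuous-coordinate convention print(areas_inclusive) # [ 9 50] print(areas_plain) # [ 4 36] </code></pre> <p>Either convention works for non-max-suppression, as long as the overlap computation uses the same one.</p>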
python|numpy|tensorflow|tensorflow2.0|non-maximum-suppression
1
1,181
64,731,513
How to make X number of random sets of 3 from pandas column?
<p>I have a dataframe column that looks like this (roughly 200 rows):</p> <pre><code>col1 a b c d e f </code></pre> <p>I want to create a new dataframe with one column and 15 sets of 3 random combinations of the items in the pandas column. for example:</p> <p>new_df</p> <pre><code>combinations: (a,b,c) (a,c,d) (a,d,c) (b,a,d) (d,a,c) (a,d,f) (e,a,f) (a,f,e) (b,e,f) (f,b,e) (c,b,e) (b,e,a) (a,e,f) (e,f,a) </code></pre> <p>Currently the code I have creates a combination of every possible combination and runs out of memory when I try to append the results to another dataframe:</p> <pre><code>import pandas as pd from itertools import permutations df = pd.read_csv('') combo = df['col1'].tolist() perm = permutations(combo,3) combinations = pd.DataFrame(columns=['combinations']) list_ = [] for i in list(perm): combinations['combinations'] = i list_.append(i) </code></pre> <p>How do I stop the sets of random combinations to stop at any X number of set or in this case 15 combinations of 3?</p>
<p>While not quite as elegant as the previous answers, if you truly want to create a random sampling of values, not just the first ones, you could also do something along the lines of the following:</p> <pre><code>import random as rnd import pandas as pd def newFrame(df: pd.DataFrame, srccol: int, cmbs: int, rows: int) -&gt; pd.DataFrame: il = df[srccol].values.tolist() nw_df = pd.DataFrame() data = [] for r in range(rows): rd = [] for ri in range(cmbs): rd.append(rnd.choice(il)) data.append(tuple(rd)) nw_df['Combinations'] = data return nw_df </code></pre> <p>When passed a df as shown in your example, in the form of:</p> <pre><code>new_df = newFrame(df, 0, 3, 15) </code></pre> <p>it produces:</p> <pre><code> Combinations 0 (a, f, e) 1 (a, d, f) 2 (b, c, d) 3 (a, a, d) 4 (f, b, c) 5 (e, b, b) 6 (e, e, d) 7 (c, f, f) 8 (f, e, b) 9 (d, c, e) </code></pre>
python-3.x|pandas
1
1,182
65,029,458
Insert a row in a specific cell using pandas?
<p>Is there a way to start inserting rows from a specific cell using pandas? I attach an example for better understanding, The red mark is where I want to insert the row:</p> <pre><code>header header header header header header DATA DATA xxxxxx empty empty empty DATA DATA xxxxxx empty empty empty DATA DATA xxxxxx empty empty empty DATA DATA xxxxxx empty empty empty DATA DATA xxxxxx empty empty empty DATA DATA xxxxxx empty empty empty </code></pre> <p>What I need is basically to open an existing csv with the structure of the example and enter the missing rows in the positions where the &quot;xxxxxx&quot; are.</p> <p>It is important to mention that I need to enter a list in the entire row, therefore the empty values ​​should be filled with the content of the list, including and from &quot;xxxxxx&quot;</p> <p>Maybe there is a more efficient way to do the csv, but the way I am working the information this is the most comfortable.</p>
<p>I'm going off the example table above, starting with a csv without the column of 'x's.</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd df = pd.read_csv('example.csv') new_col = ['xxxx']*len(df) df['header_insert'] = new_col </code></pre> <p>This will insert a new column in the right most position. If you want the columns in a specific order you'll just create a new dataframe that references the existing column headers.</p> <p>For example</p> <pre class="lang-py prettyprint-override"><code>list(df.columns) #['header', 'header.1', 'header.2', 'header.3', 'header.4', 'header_insert'] </code></pre> <p>To reorder</p> <pre class="lang-py prettyprint-override"><code>new_df = df[['header', 'header.1', 'header_insert', 'header.2', 'header.3', 'header.4']] </code></pre>
python|pandas|csv
0
1,183
40,023,026
Pandas Split 9GB CSV into 2 5GB CSVs
<p>I have a 9GB CSV and need to split it into 2 5GB CSVs. I started out doing this:</p> <pre><code>for i, chunk in enumerate(pd.read_csv('csv_big_file2.csv',chunksize=100000)): chunk.drop('Unnamed: 0',axis=1,inplace=True) chunk.to_csv('chunk{}.csv'.format(i),index=False) </code></pre> <p>What I need to do is somehow tell pandas to write the chunk to a CSV until that CSV reaches a size of 6,250,000,000 (or a filesize of 5GB) then start a new CSV file with the rest of the data (without starting again from the beginning of the data from the big CSV file).</p> <p>Can this be done?</p> <p>Thanks in advance!</p>
<p>Solution is a little messy. But this should split the data based on the ~6 billion row threshold you mentioned. </p> <pre><code>import pandas as pd from __future__ import division numrows = 6250000000 #number of rows threshold to be 5 GB count = 0 #keep track of chunks chunkrows = 100000 #read 100k rows at a time df = pd.read_csv('csv_big_file2.csv', iterator=True, chunksize=chunkrows) for chunk in df: #for each 100k rows if count &lt;= numrows/chunkrows: #if 5GB threshold has not been reached outname = "csv_big_file2_1stHalf.csv" else: outname = "csv_big_file2_2ndHalf.csv" #append each output to same csv, using no header chunk.to_csv(outname, mode='a', header=None, index=None) count+=1 </code></pre>
python|python-3.x|csv|pandas
3
1,184
39,611,122
Python cv2 TypeError: src is not a numpy array, neither a scalar
<p>The following code gives the error: ""TypeError: src is not a numpy array, neither a scalar"" by using cv2.The image is defined as Grey level image with photoimage. The image is displayed correctly and I don't understand why it doesn't work.</p> <pre><code> #Photoimage self.imgTk=ImageTk.PhotoImage(img) #label the image label = Tk.Label(self.canvasright,image=self.imgTk) label.image = self.imgTk # keep a reference! label.pack() #img.save('img_gif','gif') #imgTk=ImageTk.PhotoImage(img)#('img_gif') #PhotoImage(master = self.canvasright, width = WIDTH, height = HEIGHT) #self.canvasright.image=self.imgTk #self.canvasright.create_image(0,0,image=label,anchor=Tk.NW) #self.canvasright.pack() def find_balls(self): imggray=cv2.cvtColor(self.imgTk, cv2.COLOR_BGR2GRAY) self.circles = cv2.HoughCircles(imggray,cv.CV_HOUGH_GRADIENT,1,20, param1=50,param2=30,minRadius=0,maxRadius=200) print self.circles </code></pre>
<p>For OpenCV you need to pass the image as a numpy array.</p> <p>In your function <code>def find_balls(self):</code> do the following.</p> <p>Add a line immediately inside the function:</p> <pre><code>def find_balls(self): #Add this Line temp_image = cv2.imread(self.imgTk) #This will read the image and convert it to a numpy array #Now replace the first parameter(self.imgTk to temp_image) in the next line imggray=cv2.cvtColor(temp_image, cv2.COLOR_BGR2GRAY) self.circles = cv2.HoughCircles(imggray,cv.CV_HOUGH_GRADIENT,1,20, param1=50,param2=30,minRadius=0,maxRadius=200) print self.circles </code></pre>
python|arrays|numpy
0
1,185
39,439,054
How do I get the index of a column by name?
<p>Given a <code>DataFrame</code></p> <pre><code>&gt;&gt;&gt; df x y z 0 1 a 7 1 2 b 5 2 3 c 7 </code></pre> <p>I would like to find the index of the column by name, e.g., <code>x</code> -> 0, <code>z</code> -> 2, &amp;c.</p> <p>I can do</p> <pre><code>&gt;&gt;&gt; list(df.columns).index('y') 1 </code></pre> <p>but it seems backwards (the <code>pandas.indexes.base.Index</code> class should probably be able to do it without circling back to list).</p>
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Index.get_loc.html" rel="nofollow"><code>Index.get_loc</code></a>:</p> <pre><code>print (df.columns.get_loc('z')) 2 </code></pre> <p>Another solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Index.searchsorted.html" rel="nofollow"><code>Index.searchsorted</code></a>:</p> <pre><code>print (df.columns.searchsorted('z')) 2 </code></pre> <p><strong>Timings</strong>:</p> <pre><code>In [86]: %timeit (df.columns.get_loc('z')) The slowest run took 13.42 times longer than the fastest. This could mean that an intermediate result is being cached. 100000 loops, best of 3: 1.99 µs per loop In [87]: %timeit (df.columns.searchsorted('z')) The slowest run took 10.46 times longer than the fastest. This could mean that an intermediate result is being cached. 100000 loops, best of 3: 4.48 µs per loop </code></pre>
python|python-2.7|pandas
1
1,186
39,581,300
Python ImageIO Gif Set Delay Between Frames
<p>I am using ImageIO: <a href="https://imageio.readthedocs.io/en/latest/userapi.html" rel="noreferrer">https://imageio.readthedocs.io/en/latest/userapi.html</a> , and I want to know how to set delay between frames in a gif.</p> <p>Here are the relevant parts of my code.</p> <pre><code>import imageio . . . imageio.mimsave(args.output + '.gif', ARR_ARR) </code></pre> <p>where <code>ARR_ARR</code> is an array of <code>numpy uint8</code> 2d array of couplets.</p> <p>To be clear, I have no problem writing the gif. I cannot, however, find any clarification on being able to write the amount of delay between frames.</p> <p>So, for example, I have frames 0 ... 9</p> <p>They always play at the same rate. I would like to be able to control the number of milliseconds or whatever unit between frames being played.</p>
<p>Found it using <code>imageio.help("GIF")</code>: you would pass in something like</p> <p><code>imageio.mimsave(args.output + '.gif', ARR_ARR, fps=$FRAMESPERSECOND)</code></p> <p>And that seems to work.</p>
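<p>Depending on the imageio version, the delay can also be given directly per frame; a hedged sketch (in the imageio v2 API <code>duration</code> is seconds per frame, while the v3 API expects milliseconds):</p> <pre><code>import imageio # roughly 200 ms between frames with the v2-style API imageio.mimsave(args.output + '.gif', ARR_ARR, duration=0.2) </code></pre>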
python|image|numpy|gif
7
1,187
39,710,903
pd.read_html() imports a list rather than a dataframe
<p>I used <code>pd.read_html()</code> to import a table from a webpage but instead of structuring the data as a dataframe Python imported it as a list. How can I import the data as a dataframe? Thank you!</p> <p>The code is the following:</p> <pre><code>import pandas as pd import html5lib url = 'http://www.fdic.gov/bank/individual/failed/banklist.html' dfs = pd.read_html(url) type(dfs) Out[1]: list </code></pre>
<p><a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_html.html" rel="noreferrer"><code>.read_html()</code></a> produces a <em>list of dataframes</em> (there could be multiple tables in an HTML source), get the desired one by index. In your case, there is a single dataframe:</p> <pre><code>dfs = pd.read_html(url) df = dfs[0] print(df) </code></pre> <p>Note that, if there are no <code>table</code>s in the HTML source, it would return an error and would never produce an empty list.</p>
python|html|pandas
20
1,188
39,819,090
For loop to evaluate accuracy doesn't execute
<p>So I've the following numpy arrays.</p> <ul> <li>X validation set, X_val: (47151, 32, 32, 1)</li> <li>y validation set (labels), y_val_dummy: (47151, 5, 10) </li> <li>y validation prediction set, y_pred: (47151, 5, 10)</li> </ul> <p>When I run the code, it seems to take forever. Can someone suggest why? I believe it's a code efficiency problem. I can't seem to complete the process.</p> <pre><code>y_pred_list = model.predict(X_val) correct_preds = 0 # Iterate over sample dimension for i in range(X_val.shape[0]): pred_list_i = [y_pred_array[i] for y_pred in y_pred_array] val_list_i = [y_val_dummy[i] for y_val in y_val_dummy] matching_preds = [pred.argmax(-1) == val.argmax(-1) for pred, val in zip(pred_list_i, val_list_i)] correct_preds = int(np.all(matching_preds)) total_acc = correct_preds / float(x_val.shape[0]) </code></pre>
<p>Your main problem is that you're generating a massive number of very large lists for no real reason.</p> <pre><code>for i in range(X_val.shape[0]): # this line generates a 47151 x 5 x 10 array every time pred_list_i = [y_pred_array[i] for y_pred in y_pred_array] </code></pre> <p>What's happening is that iterating over an nd numpy array iterates over the slowest varying index (i.e. the leftmost), so every list comprehension is operating on 47K entries.</p> <p>Marginally better would be </p> <pre><code>for i in range(X_val.shape[0]): pred_list_i = [y_pred for y_pred in y_pred_array[i]] val_list_i = [y_val for y_val in y_val_dummy[i]] matching_preds = [pred.argmax(-1) == val.argmax(-1) for pred, val in zip(pred_list_i, val_list_i)] correct_preds = int(np.all(matching_preds)) </code></pre> <p>But you're still copying a lot of arrays for no real purpose. The following code should do the same, without the useless copying.</p> <pre><code>correct_preds = 0.0 for pred, val in zip(y_pred_array, y_val_dummy): correct_preds += all(p.argmax(-1) == v.argmax(-1) for p, v in zip(pred, val)) total_accuracy = correct_preds / X_val.shape[0] </code></pre> <p>This assumes that your criterion for a correct prediction is the right one. You can probably avoid the explicit loop entirely with a couple of calls to <code>np.argmax</code>, but you'll have to work that out on your own.</p>
python|numpy
0
1,189
44,110,138
Python: numpy and scipy minimize: setting an array element with a sequence minimize
<p>I'm trying to minimize the function, but get an </p> <blockquote> <p><strong>ValueError: setting an array element with a sequence.</strong></p> </blockquote> <p>in the following code:</p> <pre><code>import numpy as np from scipy import optimize as opt def f(x): return np.sin(x / 5.) * np.exp(x / 10.) + 5 * np.exp( -x / 2.) aprox_0 = np.array([range(1, 10), range(11, 20), range(21, 30)]) min_0 = opt.minimize(f, aprox_0[0]) </code></pre> <p>Can anyone help?</p>
<p>Check the dimensions of both <code>x</code> inside the function <code>f()</code> and of its return value during the minimization. Once you find the mismatch, the function <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.flatten.html" rel="nofollow noreferrer">flatten()</a> will probably help.</p>
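<p>A minimal diagnostic sketch along those lines, using the arrays from the question:</p> <pre><code>import numpy as np from scipy import optimize as opt def f(x): x = np.asarray(x).flatten() # flatten the incoming array, as suggested above return np.sin(x / 5.) * np.exp(x / 10.) + 5 * np.exp(-x / 2.) aprox_0 = np.array([range(1, 10), range(11, 20), range(21, 30)]) x0 = aprox_0[0] print('x0 shape:', np.shape(x0)) # (9,) print('f(x0) shape:', np.shape(f(x0))) # (9,) -- an array, not a scalar </code></pre> <p><code>opt.minimize</code> expects the objective to return a single scalar, so if the second print shows an array shape instead of <code>()</code>, that mismatch is what produces the ValueError; the function (or an aggregation of its return value) has to yield one number per call before minimizing.</p>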
python|arrays|numpy|scipy|minimize
0
1,190
69,482,493
How can I implement the __getitem__ method in PyTorch for the sine function
<p>I am starting with PyTorch and I am trying to create a network that predicts the sine of x. I tried to create a Dataset like this:</p> <pre><code> class SinusDataset(Dataset): def __init__(self, size: int = 1000): self.size = size def __len__(self): return self.size def __getitem__(self, idx: int)-&gt;Tensor: if idx &gt; self.size: raise ValueError return idx, math.sin(idx) </code></pre> <p>I do not think that is the proper way to implement this. How should I implement the <code>__getitem__</code> method?</p>
<p>You could initialize your inputs and labels on init and save them as tensors. Then, in your <code>__getitem__</code> function, pick instances from those two using the provided <code>idx</code> integer. Something like:</p> <pre><code>class SinusDataset(Dataset): def __init__(self, size: int = 1000): self.x = torch.linspace(0, 1, size) self.y = torch.sin(self.x) def __len__(self) -&gt; int: return len(self.x) def __getitem__(self, idx: int): return self.x[idx][None], self.y[idx][None] </code></pre> <p>Then you can use the dataset by wrapping it in a <a href="https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader" rel="nofollow noreferrer"><code>torch.utils.data.DataLoader</code></a>:</p> <pre><code>&gt;&gt;&gt; dl = DataLoader(SinusDataset(100), batch_size=4, shuffle=True) &gt;&gt;&gt; for x, y in dl: ... print(x, y) ... break tensor([0.2452, 0.6116, 0.0791, 0.6667]) tensor([0.2428, 0.5742, 0.0790, 0.6184]) </code></pre> <hr /> <p>In this case it would be more appropriate to inherit from <a href="https://pytorch.org/docs/stable/data.html#torch.utils.data.TensorDataset" rel="nofollow noreferrer"><code>torch.utils.data.TensorDataset</code></a> directly. This comes with both <code>__len__</code> and <code>__getitem__</code> implemented for you (see <a href="https://github.com/pytorch/pytorch/blob/master/torch/utils/data/dataset.py#L246" rel="nofollow noreferrer"><em>source</em></a>):</p> <pre><code>class SinusDataset(TensorDataset): def __init__(self, size: int = 1000): x = torch.linspace(0, 1, size)[:,None] y = torch.sin(x) super().__init__(x, y) </code></pre> <p>This is slightly more advanced but it is considered best practice to inherit from the closest built-in <code>torch.utils.data.Dataset</code> class instead of writing the same methods yourself.</p> <hr /> <p>Inference example:</p> <pre><code>&gt;&gt;&gt; model = nn.Sequential(nn.Linear(1, 4), nn.ReLU(), nn.Linear(4, 1)) &gt;&gt;&gt; x, y = next(iter(dl)) &gt;&gt;&gt; model(x) tensor([[-0.0640], [ 0.1461], [-0.0882], [ 0.2259]], grad_fn=&lt;AddmmBackward&gt;) </code></pre>
python|pytorch
3
1,191
69,323,196
Environment issues with running Anaconda Python in VS Code
<p>I am trying to learn Python and debug code for the first time in VS Code (latest edition). I have anaconda running and the code I have runs fine by itself but now I need to know how to update the code and debug it for the first time.</p> <p>I keep getting the following error related to NumPy:</p> <blockquote> <p>Exception has occurred: ImportError</p> <p>IMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE!</p> <p>Importing the numpy C-extensions failed. This error can happen for many reasons, often due to issues with your setup or how NumPy was installed.</p> <p>We have compiled some common reasons and troubleshooting tips at:</p> <p><a href="https://numpy.org/devdocs/user/troubleshooting-importerror.html" rel="nofollow noreferrer">https://numpy.org/devdocs/user/troubleshooting-importerror.html</a></p> <p>Please note and check the following:</p> <ul> <li>The Python version is: Python3.7 from &quot;C:\Miniconda2\envs\myproject_flask\python.exe&quot;</li> <li>The NumPy version is: &quot;1.18.5&quot;</li> </ul> <p>and make sure that they are the versions you expect. Please carefully study the documentation linked above for further help.</p> <p>Original error was: DLL load failed: The specified module could not be found.</p> </blockquote> <p>In the Miniconda path above the Python.exe is version 3.7.7. I tried to install NumPy like so in myproject directory:</p> <blockquote> <p>(myproject_flask) c:\MyProject\source\MyProject.Flask&gt;conda install numpy=1.18.5</p> </blockquote> <p>I still get the same error when I go to <code>F5</code> to debug and run to a breakpoint.</p> <p>Need help with my environment.</p> <p>I need to use VS Code in my Windows environment with Anaconda.</p>
<p>You should launch VS Code from Anaconda Navigator so that the environment is initialized.</p>
python|numpy|visual-studio-code|anaconda|python-3.7
0
1,192
38,319,646
TensorFlow and preparing data for MS COCO
<p>I can't quite figure out how to prepare the data to use with the MS COCO dataset. I'm currently saving all of the data in <code>TFRecord</code>s. For each record, I need to save the jpeg data as well as all of the annotations. For each image, there can be up to ~20 annotations and for each of those annotations, there can be multiple polygons in a python list. </p> <p>For example, I iterate over all the segmentations and later on save it in a <code>TFRecord</code>. </p> <pre><code>obj = { 'annotation/' + str(imgNb) + '/seg/' + str(_key): _float_feature(segmentations[_key]) for _key in range(len(segmentations))} </code></pre> <p>The problem in doing this is that I end up with variable length <code>TFRecord</code>s. When I want to call <code>parse_single_example</code>, I need to send a feature_map, but I don't know the exact number of annotations. The <code>feature_map</code> would need to be pretty big, assuming a worst-case scenario. </p> <p>I also tried saving it with JSON files, but I still need to use <code>parse_single_example</code> to parse the JSON and so I still have the initial problem. </p> <p>So the question: Should I create a really big <code>feature_map</code> and at every training step I check which ones are empty, or should I try to process the annotations' data before saving it in <code>TFRecord</code>s (transform it into images, this would make for some pretty big files, but could be doable I guess)? Or is there a better way?</p> <p>Any help or insight is appreciated! Thanks!</p>
<p>You might be better off either (1) putting all annotations as the same feature or (2) always putting all features in all examples, but leaving empty values for the absent ones.</p>
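<p>For option (1), a rough sketch of what that could look like with the TF 1.x API (the feature name <code>'img/seg'</code> and the dummy polygon values are only illustrations, not taken from your code):</p> <pre><code>import tensorflow as tf # Writing side: flatten all polygons of one image into a single # variable-length float feature, whatever their number. segmentations = [[0.1, 0.2, 0.3, 0.4], [0.5, 0.6]] # dummy polygons of different lengths flat = [v for poly in segmentations for v in poly] example = tf.train.Example(features=tf.train.Features(feature={ 'img/seg': tf.train.Feature(float_list=tf.train.FloatList(value=flat)), })) record_bytes = example.SerializeToString() # Reading side: a VarLenFeature accepts a different number of values per record, # so the feature_map keeps one fixed entry no matter how many annotations exist. feature_map = {'img/seg': tf.VarLenFeature(tf.float32)} parsed = tf.parse_single_example(record_bytes, feature_map) # parsed['img/seg'] is a SparseTensor </code></pre> <p>You would still need to store the polygon lengths (for example as a second int64 feature) to split the flat list back into individual annotations.</p>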
python|json|tensorflow
0
1,193
38,337,918
Plot pie chart and table of pandas dataframe
<p>I have to plot pie-chart and a table side by side using matplotlib.</p> <p>For drawing the pie-chart, I use the below code:</p> <pre><code>import matplotlib.pyplot as plt df1.EventLogs.value_counts(sort=False).plot.pie() plt.show() </code></pre> <p>For drawing a table, I use the below code:</p> <pre><code>%%chart table --fields MachineName --data df_result2 </code></pre> <p>df_result2 is a table with the list of MachineName's in it.</p> <p>Not sure whether we can place both pie chart and table side by side. Any help would be appreciated.</p>
<p>Look at the code:</p> <pre><code>import pandas as pd import matplotlib.pyplot as plt from pandas.tools.plotting import table # sample data raw_data = {'officer_name': ['Jason', 'Molly', 'Tina', 'Jake', 'Amy'], 'jan_arrests': [4, 24, 31, 2, 3], 'feb_arrests': [25, 94, 57, 62, 70], 'march_arrests': [5, 43, 23, 23, 51]} df = pd.DataFrame(raw_data, columns = ['officer_name', 'jan_arrests', 'feb_arrests', 'march_arrests']) df['total_arrests'] = df['jan_arrests'] + df['feb_arrests'] + df['march_arrests'] plt.figure(figsize=(16,8)) # plot chart ax1 = plt.subplot(121, aspect='equal') df.plot(kind='pie', y = 'total_arrests', ax=ax1, autopct='%1.1f%%', startangle=90, shadow=False, labels=df['officer_name'], legend = False, fontsize=14) # plot table ax2 = plt.subplot(122) plt.axis('off') tbl = table(ax2, df, loc='center') tbl.auto_set_font_size(False) tbl.set_fontsize(14) plt.show() </code></pre> <p><a href="https://i.stack.imgur.com/JAaaP.png" rel="noreferrer"><img src="https://i.stack.imgur.com/JAaaP.png" alt="enter image description here"></a></p>
python|pandas|matplotlib
44
1,194
38,494,300
Flatten/ravel/collapse 3-dimensional xr.DataArray (Xarray) into 2 dimensions along an axis?
<p>I have a dataset where I'm storing replicates for different classes/subtypes (not sure what to call it) and then attributes for each one. Essentially, there are 5 subtype/classes, 4 replicates for each subtype/class, and 100 attributes that are measured. </p> <p><strong>Is there a method like <code>np.ravel</code> or <code>np.flatten</code> that can merge 2 dimensions using <code>Xarray</code>?</strong> </p> <p>In this, I want to merge dims <code>subtype</code> and <code>replicates</code> so I have a 2D array (or <code>pd.DataFrame</code> with <code>attributes vs. subtype/replicates</code>. </p> <p>It wouldn't need to have the format "coord_1 | coord_2" or anything. It would be useful if it kept the original coord names. Maybe there's something like <code>groupby</code> that could do this? <code>Groupby</code> always confuses me so if it's something native to <code>xarray</code> that would be awesome. </p> <pre><code>import xarray as xr import numpy as np # Set up xr.DataArray dims = (5,4,100) DA_data = xr.DataArray(np.random.random(dims), dims=["subtype","replicates","attributes"]) DA_data.coords["subtype"] = ["subtype_%d"%_ for _ in range(dims[0])] DA_data.coords["replicates"] = ["rep_%d"%_ for _ in range(dims[1])] DA_data.coords["attributes"] = ["attr_%d"%_ for _ in range(dims[2])] # DA_data.coords # Coordinates: # * subtype (subtype) &lt;U9 'subtype_0' 'subtype_1' 'subtype_2' ... # * replicates (replicates) &lt;U5 'rep_0' 'rep_1' 'rep_2' 'rep_3' # * attributes (attributes) &lt;U7 'attr_0' 'attr_1' 'attr_2' 'attr_3' ... # DA_data.dims # ('subtype', 'replicates', 'attributes') # Naive way to collapse the replicate dimension into the subtype dimension desired_columns = list() for subtype in DA_data.coords["subtype"]: for replicate in DA_data.coords["replicates"]: desired_columns.append(str(subtype.values) + "|" + str(replicate.values)) desired_columns # ['subtype_0|rep_0', # 'subtype_0|rep_1', # 'subtype_0|rep_2', # 'subtype_0|rep_3', # 'subtype_1|rep_0', # 'subtype_1|rep_1', # 'subtype_1|rep_2', # 'subtype_1|rep_3', # 'subtype_2|rep_0', # 'subtype_2|rep_1', # 'subtype_2|rep_2', # 'subtype_2|rep_3', # 'subtype_3|rep_0', # 'subtype_3|rep_1', # 'subtype_3|rep_2', # 'subtype_3|rep_3', # 'subtype_4|rep_0', # 'subtype_4|rep_1', # 'subtype_4|rep_2', # 'subtype_4|rep_3'] </code></pre>
<p>Yes, this is exactly what the <code>.stack</code> is for:</p> <pre><code>In [33]: stacked = DA_data.stack(desired=['subtype', 'replicates']) In [34]: stacked Out[34]: &lt;xarray.DataArray (attributes: 100, desired: 20)&gt; array([[ 0.54020268, 0.14914837, 0.83398895, ..., 0.25986503, 0.62520466, 0.08617668], [ 0.47021735, 0.10627027, 0.66666478, ..., 0.84392176, 0.64461418, 0.4444864 ], [ 0.4065543 , 0.59817851, 0.65033094, ..., 0.01747058, 0.94414244, 0.31467342], ..., [ 0.23724934, 0.61742922, 0.97563316, ..., 0.62966631, 0.89513904, 0.20139552], [ 0.21157447, 0.43868899, 0.77488211, ..., 0.98285015, 0.24367352, 0.8061804 ], [ 0.21518079, 0.234854 , 0.18294781, ..., 0.64679141, 0.49678393, 0.32215219]]) Coordinates: * attributes (attributes) |S7 'attr_0' 'attr_1' 'attr_2' 'attr_3' ... * desired (desired) object ('subtype_0', 'rep_0') ... </code></pre> <p>The resulting stacked coordinate is a <code>pandas.MultiIndex</code>, whose values are given by tuples:</p> <pre><code>In [35]: stacked['desired'].values Out[35]: array([('subtype_0', 'rep_0'), ('subtype_0', 'rep_1'), ('subtype_0', 'rep_2'), ('subtype_0', 'rep_3'), ('subtype_1', 'rep_0'), ('subtype_1', 'rep_1'), ('subtype_1', 'rep_2'), ('subtype_1', 'rep_3'), ('subtype_2', 'rep_0'), ('subtype_2', 'rep_1'), ('subtype_2', 'rep_2'), ('subtype_2', 'rep_3'), ('subtype_3', 'rep_0'), ('subtype_3', 'rep_1'), ('subtype_3', 'rep_2'), ('subtype_3', 'rep_3'), ('subtype_4', 'rep_0'), ('subtype_4', 'rep_1'), ('subtype_4', 'rep_2'), ('subtype_4', 'rep_3')], dtype=object) </code></pre>
python|arrays|pandas|multidimensional-array|python-xarray
5
1,195
66,324,928
In a series with mixed data types, how to transform occasional lists and dicts into strings?
<p>I'm trying to clean up a json file with all my own Telegram messages from a certain chat where I receive notifications from a bot. Although the messages are pretty clean in the application, in the json file they get a bit messy. For example, the one-line Telegram message below...</p> <pre><code>Xparty P-Q D-21-01-30-20-12 (USDT_UNI): deal_284174394: Base order executed. Price: 21.03739122 USDT. Size: 127.06584296 USDT (6.04 UNI) </code></pre> <p>in the json file becomes:</p> <pre><code>['Xparty P-Q D-', {'type': 'phone', 'text': '21-01-30-20-12'}, ' (USDT_UNI): deal_284174394: Base order executed. Price: 21.03739122 USDT. Size: 127.06584296 USDT (6.04 UNI)'] </code></pre> <p>So I'm trying to clean up the messages in the json file to be able to work with them. Since all that noise follows certain patterns, I'm trying to add the patterns into a list and then replace them like so:</p> <pre><code>noise = [&quot;, {'type': 'phone', 'text': &quot;, &quot;, {'type': 'hashtag', 'text': &quot;, &quot;[&quot;, &quot;]&quot;, &quot;}, &quot;] df[&quot;messages&quot;].str.replace('|'.join(noise), '', regex=False) </code></pre> <p>But my first problem is that in fact some of that noise happens because SOME of such messages are recorded as LISTS, others as DICTS, and only some are strings. So I believe I have to transform everything into strings first.</p> <p>I was hoping a simply <code>df['messages'].apply(' '.join)</code> would do the trick, but since not all entries are LISTS, it's not working.</p> <p>So my question is: how could I convert all Lists and Dicts from a certain series into strings given that there are different data types in the series? (hopefully, without having to fall back on loops!)</p>
<p>This should do the trick:</p> <pre><code>import json df['messages'] = df['messages'].apply(lambda x: json.dumps(x)) </code></pre>
json|python-3.x|pandas
1
1,196
66,234,450
Sorting an array along its last dimension, and undoing the sorting
<p>In my current project, I have a D-dimensional array. For the sake of exposition, we can assume D=2, but the code should work with arbitrarily high dimensions. I need to run some operations on this matrix when it is sorted according to its last dimension, and subsequently reverse the sorting on the matrix.</p> <p>The first part of sorting the matrix is relatively simple:</p> <pre><code>import numpy as np D = 2 matrix = np.random.uniform(low=0.,high=1.,size=tuple([5]*D)) matrix_sorted = np.sort(matrix,axis=-1) </code></pre> <p>This code snippet sorts the matrix according to the last dimension, but does not remember how the array was sorted, and consequently does not allow me to revert the sorting. Alternatively, I could get the sorted indices with the following line:</p> <pre><code>sorted_indices = np.argsort(matrix,axis=-1) </code></pre> <p>Unfortunately, these indices do not seem to be very useful. I am not sure how I can use these sorted indices to (a) sort the matrix, and (b) undo the sorting in the case for general D. A simple approach would be to create a for-loop over all rows for the <code>D=2</code> case (in this case, we sorted across the columns), but since I want the code to work for arbitrary dimensions, hard-coding nested for-loops is not really an option.</p> <p>Do you have any elegant suggestions on how I could tackle this issue?</p>
<p>So yes, after following <a href="https://stackoverflow.com/users/3874623/mark-m">Mark M</a>'s suggestion, and reading up on some other StackOverflow answers continuing from there, the answer seems to be as follows:</p> <pre><code>import numpy as np # Create the initial random matrix D = 2 matrix = np.random.uniform(low=0.,high=1.,size=tuple([5]*D)) # Get the sorting indices sorting = np.argsort(matrix,axis=-1) # Get the indices for unsorting the matrix reverse_sorting = np.argsort(sorting,axis=-1) # Sort the initial matrix matrix_sorted = np.take_along_axis(matrix, sorting, axis=-1) # Undo the sorting matrix_unsorted = np.take_along_axis(matrix_sorted, reverse_sorting, axis=-1) </code></pre> <p>The trick consists of two steps: <code>np.take_along_axis</code> allows us to sort arbitrarily-dimensional matrices according to the indices we get from <code>np.argsort</code>, and sorting the indices gives us the set of indices required to undo the sorting again with <code>np.take_along_axis</code>. I can do the desired complex operations between the penultimate and ultimate steps. Perfect!</p>
python|arrays|python-3.x|numpy|sorting
0
1,197
52,870,248
How to remove columns where Multiindex levels equal NaN (no value) from a dataframe
<p>I am trying to remove columns from a dataframe with a MultiIndex, as for some of the columns my levels equal <code>NaN</code> (null). I tried to use dropna() but it works only for rows I assume: <a href="https://i.stack.imgur.com/nRpvg.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nRpvg.jpg" alt="enter image description here"></a></p> <p>The dataframe is called 'test'. When I do <code>test.dropna()</code>, it does not work. I have 15 levels with IDs:</p> <pre><code>names=['ID14', 'ID13', 'ID12', 'ID11', 'ID10', 'ID9', 'ID8', 'ID7', 'ID6', 'ID5', 'ID4', 'ID3', 'ID2', 'ID1', 'ID0']) </code></pre> <p>Would you have any suggestions on how to remove whole columns with null values in the MultiIndex (all 15 levels) as well as the corresponding rows?</p> <p>Thank you in advance! (I am a beginner)</p>
<p>Try resetting the index, dropping the NaNs and then setting the index again, like this:</p> <pre><code>old_idx = df.index.names my_new_df = df.reset_index().dropna().set_index(old_idx) </code></pre> <p>That should be able to solve it; hope it helps.</p>
python|pandas
2
1,198
58,255,759
80 Gb file - Creating a data frame that submits data based upon a list of counties
<p>I am working with an 80 Gb data set in Python. The data has 30 columns and ~180,000,000 rows. </p> <p>I am using the chunk size parameter in <code>pd.read_csv</code> to read the data in chunks where I then iterate through the data to create a dictionary of the counties with their associated frequency.</p> <p>This is where I am stuck. Once I have the list of counties, I want to iterate through the chunks row-by-row again summing the values of 2 - 3 other columns associated with each county and place it into a new DataFrame. This would roughly be 4 cols and 3000 rows which is more manageable for my computer. </p> <p>I really don't know how to do this, this is my first time working with a large data set in python. </p> <pre><code>import pandas as pd from collections import defaultdict df_chunk = pd.read_csv('file.tsv', sep='\t', chunksize=8000000) county_dict = defaultdict(int) for chunk in df_chunk: for county in chunk['COUNTY']: county_dict[county] += 1 for chunk in df_chunk: for row in chunk: # I don't know where to go from here </code></pre> <p>I expect to be able to make a DataFrame with a column of all the counties, a column for total sales of product "1" per county, another column for sales of product per county, and then more columns of the same as needed.</p>
<h1>The idea</h1> <p>I was not sure whether you have data for different <strong>counties</strong> (e.g. in UK or USA) or <strong>countries</strong> (in the world), so I decided to have data concerning <strong>countries</strong>.</p> <p>The idea is to:</p> <ul> <li>Group data from each chunk by country.</li> <li>Generate a partial result for this chunk, as a <em>DataFrame</em> with: <ul> <li>Sums of each column of interest (per country).</li> <li>Number of rows per country.</li> </ul></li> <li>To perform concatenation of partial results (in a moment), each partial result should contain the chunk number, as an additional index level.</li> <li>Concatenate partial results vertically (due to the additional index level, each row has different index).</li> <li>The final result (total sums and row counts) can be computed as sum of the above result, grouped by country (discarding the chunk number).</li> </ul> <h1>Test data</h1> <p>The source CSV file contains country names and 2 columns to sum (<em>Tab</em> separated):</p> <pre><code>Country Amount_1 Amount_2 Austria 41 46 Belgium 30 50 Austria 45 44 Denmark 31 42 Finland 42 32 Austria 10 12 France 74 54 Germany 81 65 France 40 20 Italy 54 42 France 51 16 Norway 14 33 Italy 12 33 France 21 30 </code></pre> <p>For the test purpose I assumed chunk size of just <strong>5</strong> rows:</p> <pre><code>chunksize = 5 </code></pre> <h1>Solution</h1> <p>The main processing loop (and preparatory steps) are as follows:</p> <pre><code>df_chunk = pd.read_csv('Input.csv', sep='\t', chunksize=chunksize) chunkPartRes = [] # Partial results from each chunk chunkNo = 0 for chunk in df_chunk: chunkNo += 1 gr = chunk.groupby('Country') # Sum the desired columns and size of each group res = gr.agg(Amount_1=('Amount_1', sum), Amount_2=('Amount_2', sum))\ .join(gr.size().rename('Count')) # Add top index level (chunk No), then append chunkPartRes.append(pd.concat([res], keys=[chunkNo], names=['ChunkNo'])) </code></pre> <p>To concatenate the above partial results into a single <em>DataFrame</em>, but still with separate results from each chunk, run:</p> <pre><code>chunkRes = pd.concat(chunkPartRes) </code></pre> <p>For my test data, the result is:</p> <pre><code> Amount_1 Amount_2 Count ChunkNo Country 1 Austria 86 90 2 Belgium 30 50 1 Denmark 31 42 1 Finland 42 32 1 2 Austria 10 12 1 France 114 74 2 Germany 81 65 1 Italy 54 42 1 3 France 72 46 2 Italy 12 33 1 Norway 14 33 1 </code></pre> <p>And to generate the final result, summing data from all chunks, but keeping separation by countries, run:</p> <pre><code>res = chunkRes.groupby(level=1).sum() </code></pre> <p>The result is:</p> <pre><code> Amount_1 Amount_2 Count Country Austria 96 102 3 Belgium 30 50 1 Denmark 31 42 1 Finland 42 32 1 France 186 120 4 Germany 81 65 1 Italy 66 75 2 Norway 14 33 1 </code></pre> <h1>To sum up</h1> <p>Even if we look only on how numbers of rows per country are computed, this solution is more "pandasonic" and elegant, than usage of <em>defaultdict</em> and incrementation in a loop processing each row.</p> <p>Grouping and counting of rows per group works significantly quicker than a loop operating on rows.</p>
python|pandas|dataframe|bigdata
1
1,199
58,247,405
Using static rnn getting TypeError: Cannot convert value None to a TensorFlow DType
<p>First some of my code:</p> <pre><code>... fc_1 = layers.Dense(256, activation='relu')(drop_reshape) bi_LSTM_2 = layers.Lambda(buildGruLayer)(fc_1) ... def buildGruLayer(inputs): gru_cells = [] gru_cells.append(tf.contrib.rnn.GRUCell(256)) gru_cells.append(tf.contrib.rnn.GRUCell(128)) gru_layers = tf.keras.layers.StackedRNNCells(gru_cells) inputs = tf.unstack(inputs, axis=1) outputs, _ = tf.contrib.rnn.static_rnn( gru_layers, inputs, dtype='float32') return outputs </code></pre> <p>Error I am getting when running static_rnn is:</p> <pre><code>raise TypeError("Cannot convert value %r to a TensorFlow DType." % type_value) TypeError: Cannot convert value None to a TensorFlow DType. </code></pre> <p>The shape that comes into the Layer in the shape (64,238,256).</p> <p>Anyone has a clue what the problem could be. I already googled the error but couldn't find anything. Any help is much appreciated.</p>
<p>If anyone still needs a solution to this: it's because you need to specify the dtype for the GRUCell, e.g. <code>tf.float32</code>.</p> <p>Its default is None, which according to the documentation falls back to the first dimension of your input data (i.e. the batch dimension, which in TensorFlow is a ? or None).</p> <p>Check the dtype argument here: <a href="https://www.tensorflow.org/api_docs/python/tf/compat/v1/nn/rnn_cell/GRUCell" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/compat/v1/nn/rnn_cell/GRUCell</a></p>
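<p>A minimal sketch of that fix, reusing the function from the question (assuming the TF 1.x <code>tf.contrib</code> API is available in your setup):</p> <pre><code>import tensorflow as tf def buildGruLayer(inputs): gru_cells = [ tf.contrib.rnn.GRUCell(256, dtype=tf.float32), # explicit dtype instead of the default None tf.contrib.rnn.GRUCell(128, dtype=tf.float32), ] gru_layers = tf.keras.layers.StackedRNNCells(gru_cells) inputs = tf.unstack(inputs, axis=1) outputs, _ = tf.contrib.rnn.static_rnn(gru_layers, inputs, dtype=tf.float32) return outputs </code></pre>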
tensorflow|keras|tf.keras
0