Unnamed: 0: int64 (0 to 378k)
id: int64 (49.9k to 73.8M)
title: stringlengths 15 to 150
question: stringlengths 37 to 64.2k
answer: stringlengths 37 to 44.1k
tags: stringlengths 5 to 106
score: int64 (-10 to 5.87k)
800
71,460,499
Append new data into existing excel file pandas python
<p>Hello everyone. I'm attempting to add new data (columns and values) to an already existing Excel spreadsheet. I have Order_Sheet.xlsx saved with data such as:</p> <pre><code> Item: Quantity: Price: disposable cups 7000 $0.04 </code></pre> <p>and I want to add this info from spreadsheet_1.xlsx:</p> <pre><code> Order Number Location Date 0 A-21-897274 Ohio 07/01/2022 </code></pre> <p>to the existing sheet Order_Sheet.xlsx instead of creating a new file, so that it would look like:</p> <pre><code> Item: Quantity: Price: Order Number: Location: Date: disposable cups 7000 $0.04 A-21-897274 Ohio 07/01/2022 </code></pre> <p>Is there an easy way to append new data to an existing Excel file, or possibly combine two Excel files?</p>
<p>Working only with pandas 1.4+. The following code assumes that the order of the row are the same between the first and the second write. It also assumes that you exactly know the number of existing columns.</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd df2 = pd.DataFrame({&quot;c&quot;: [3, 5], &quot;d&quot;: [8, 9]}) df3 = pd.DataFrame({&quot;c&quot;: [9, 10], &quot;d&quot;: [-1, -9]}) df4 = pd.DataFrame({&quot;a&quot;: [1, 2], &quot;b&quot;: [3, 4]}) with pd.ExcelWriter('./Order_List.xlsx', mode='w') as writer: df2.to_excel(writer, index=False) with pd.ExcelWriter('./Order_List.xlsx', mode=&quot;a&quot;, if_sheet_exists=&quot;overlay&quot;) as writer: df3.to_excel(writer, startrow=3, header=False, index=False) df4.to_excel(writer, startrow=0, startcol=2, header=True, index=False) </code></pre> <p>Link to the <a href="https://pandas.pydata.org/pandas-docs/version/1.4/reference/api/pandas.ExcelWriter.html" rel="nofollow noreferrer">1.4 documentation</a>.</p>
python|pandas|dataframe|export-to-excel
1
801
71,609,482
Drop the value that matches the string from list in python
<p>I have a list that contains strings. I want to drop out the ones that have specific strings using python.</p> <p>For example:</p> <pre><code>my_sample_list = [&quot;I am practicing Python&quot;, &quot;I am practicing Java&quot;, &quot;I am practicing SQL&quot;] </code></pre> <p>I want to drop out the element that contains <code>&quot;SQL&quot;</code> and I will be left with:</p> <pre><code>my_new_sample_list = [&quot;I am practicing Python&quot;, &quot;I am practicing Java&quot;] </code></pre> <p>How can I do that in python, please? Any help would be appreciated.</p>
<p>Turn both lists into sets, take their intersection, and convert back to a list:</p> <pre><code>list(set(my_sample_list).intersection(set(my_new_sample_list))) ['I am practicing Java', 'I am practicing Python'] </code></pre>
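<p>Note that the intersection approach assumes the filtered list already exists. As a minimal alternative sketch, assuming the goal is simply to drop every element that contains the substring <code>&quot;SQL&quot;</code>, a list comprehension does this directly:</p> <pre><code># Sketch: keep only the elements that do not contain "SQL"
my_sample_list = ["I am practicing Python", "I am practicing Java", "I am practicing SQL"]
my_new_sample_list = [s for s in my_sample_list if "SQL" not in s]
print(my_new_sample_list)  # ['I am practicing Python', 'I am practicing Java']
</code></pre>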
python|pandas|string|list
1
802
71,576,520
Merging two dataframes that share a date column
<p>Lets say i have two dataframes (df_aapl) and (df_csco) and i wanna merge them based on the date(They both span the same timeframe and share the same dates).</p> <p>I've tried pd.merge function but it duplicates the other columns. Like this...</p> <p>(These are made up numbers)</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: center;">Date</th> <th style="text-align: center;">Symbol_x</th> <th style="text-align: center;">Price_x</th> <th style="text-align: center;">Symbol_y</th> <th style="text-align: center;">Price_y</th> </tr> </thead> <tbody> <tr> <td style="text-align: center;">2016-12-23</td> <td style="text-align: center;">AAPL</td> <td style="text-align: center;">100</td> <td style="text-align: center;">CSCO</td> <td style="text-align: center;">20</td> </tr> <tr> <td style="text-align: center;">2016-12-24</td> <td style="text-align: center;">AAPL</td> <td style="text-align: center;">120</td> <td style="text-align: center;">CSCO</td> <td style="text-align: center;">40</td> </tr> <tr> <td style="text-align: center;">2016-12-25</td> <td style="text-align: center;">AAPL</td> <td style="text-align: center;">130</td> <td style="text-align: center;">CSCO</td> <td style="text-align: center;">50</td> </tr> <tr> <td style="text-align: center;">2016-12-26</td> <td style="text-align: center;">AAPL</td> <td style="text-align: center;">140</td> <td style="text-align: center;">CSCO</td> <td style="text-align: center;">60</td> </tr> </tbody> </table> </div> <p>I want to merge them to look like this</p> <p>[Example of Dataframe desired]</p> <p><img src="https://i.stack.imgur.com/05qdX.png" alt="1" /></p>
<pre><code>import pandas as pd df_aapl = pd.read_excel(r&quot;Desktop\book2.xlsx&quot;,sheet_name = 'Sheet1') df_csco = pd.read_excel(r&quot;Desktop\book2.xlsx&quot;,sheet_name = 'Sheet2') ## Assuming both df_aapl,df_csco data frames have same number of columns with same names. data = pd.concat([df_aapl,df_csco],axis=0) data.head(10) Date Symbol Price 0 2016-12-23 AAPL 100 1 2016-12-23 AAPL 120 2 2016-12-23 AAPL 130 3 2016-12-23 AAPL 140 0 2016-12-23 CSCO 20 1 2016-12-23 CSCO 40 2 2016-12-23 CSCO 50 3 2016-12-23 CSCO 60 ## Assuming sum of price of each symbol needed by date. This can be changed as per requirement data.groupby(['Date','Symbol'])['Price'].sum().reset_index() Date Symbol Price 0 2016-12-23 AAPL 490 1 2016-12-23 CSCO 170 </code></pre>
python|pandas
0
803
71,590,615
mathematical row operation on a dataframe
<p>I have a very large labelled dictionary array dataframe, df of dimension (9 by 4500) with index <code>[1,2,3,...,4500]</code>. I intend to carry out the following respective mathematical row operation element-by-element on the dataframe:</p> <pre><code>[ 0. 0.00000771 0.00006065 ... 79.96962749 79.96969808 79.96976853] [0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.] [0. 0. 0. ... 0. 0. 0.] </code></pre> <h1>For mu1, considering the 1st row:</h1> <pre><code>= [ 0.*0 0.00000771*1 0.00006065*2 ... 79.96962749*4497 79.96969808*4498 79.96976853*4499] # 'n' is the index number </code></pre> <h1>For mu2, considering the 1st row:</h1> <pre><code>= [ 0.*(0)**2 0.00000771*(1)**2 0.00006065*(2)**2 ... 79.96962749*(4497)**2 79.96969808* (4498)**2 79.96976853*(4499)**2] # 'n' is the index number </code></pre> <p>Here is the code I implemented but do not get what I expected:</p> <pre><code>##calculating mu1: n=10 for x in range(1,n): mu1=df.apply(lambda x: x if x.name in [1,2,3,...,4500] else x,axis =1) mu1 ## calculating mu2: n=10 for x in range(1,n): mu2=df.apply(lambda x:(x**2) if x.name in [1,2,3,...,4500] else x,axis =1) mu2 </code></pre> <p>I presume my code might be wrong.</p>
<p>I can suggest first converting your dataframe to numpy array for ease of handling (It's my own preference)</p> <pre><code>array = np.array(df) mu1, mu2 = [],[] #mu1 and mu2 calculation for i in array: mu1.append([number * index for index, number in enumerate(i)]) mu2.append([number * index ** 2 for index, number in enumerate(i)]) </code></pre> <p>Or even easier, just do vector multiplication:</p> <pre><code>index = np.array(range(4500)) mu1 = array * index mu2 = array * index ** 2 </code></pre>
python|arrays|pandas|dataframe|numpy-ndarray
1
804
69,848,550
Plot a bar chart with Seaborn library and group by function
<p>The requirement is to plot a bar chart using the seaborn library, showing total sales (<code>Customer_Value</code>) by <code>Last_region</code>. Here is my code:</p> <pre><code>import pandas as pd import seaborn as sns import matplotlib.pyplot as plt customer = pd.read_csv('D:\PythonTraining\Customer.csv') df = customer.groupby('Last_region')['Customer_Value'].sum() df sns.barplot(x = df.Last_region, y = df.Customer_Value) plt.show </code></pre> <p>I get an error:</p> <blockquote> <p>AttributeError: 'Series' object has no attribute 'Last_region'.</p> </blockquote> <p>How can I correct it? I believe that after the groupby, the attribute can't be referenced.</p>
<p>The issue is that <code>Last_region</code> becomes the index when you group on it. Also note that <code>df</code> here is most likely a Series, not a DataFrame, in which case <code>Customer_Value</code> would also not be a column.</p> <ul> <li><p>Either use <code>x=df.index</code> and <code>y=df.values</code></p> <pre><code>sns.barplot(x=df.index, y=df.values) </code></pre> </li> <li><p>Or use <code>data=df.reset_index()</code> (now it's guaranteed to be a DataFrame with those columns)</p> <pre><code>sns.barplot(data=df.reset_index(), x='Last_region', y='Customer_Value') </code></pre> </li> </ul> <hr /> <p>Alternatively, as Asish commented, you can change <code>df</code> so that <code>Last_region</code> is not the index:</p> <ul> <li><p>Either set <code>as_index=False</code> while grouping</p> <pre><code>df = customer.groupby('Last_region', as_index=False)['Customer_Value'].sum() </code></pre> </li> <li><p>Or <code>reset_index()</code> after grouping</p> <pre><code>df = customer.groupby('Last_region')['Customer_Value'].sum().reset_index() </code></pre> </li> </ul>
python|pandas|dataframe|seaborn
2
805
72,230,205
ValueError when fitting my model. (ValueError: could not broadcast input array from shape (224,224,3) into shape (224,224,3,3))
<p>I am new to machine learning and I am using kaggle's notebook to code. I am making a classification model with multiple categories. I used efficientnet to make my model's architecture but the issue happens with every other model I've tried. The images to be classified are divided in train and val folders in the dataset. In those folders they are in their respective class's folder.</p> <p>The code runs fine till the fit_generator, it gives me a valueError &quot;ValueError: could not broadcast input array from shape (224,224,3) into shape (224,224,3,3)&quot;</p> <p>I have attached the full code, the dataset and an image of the error message.</p> <p>I have no idea what is wrong in the code or the data? Please help me and thank you for reading this question and I apologize if there is any more context missing.</p> <pre><code>#!pip install -U efficientnet import pandas as pd import numpy as np import efficientnet.tfkeras as efn # Convolutional Neural Network architecture import IPython.display as ipd import librosa.display import matplotlib.pyplot as plt from efficientnet.keras import preprocess_input from keras.callbacks import ModelCheckpoint, ReduceLROnPlateau from sklearn.utils import class_weight from tensorflow.keras.layers import Dense, Dropout, Flatten from tensorflow.keras.models import Model from tensorflow.keras.optimizers import Adam from tensorflow.keras.preprocessing.image import ImageDataGenerator import os IM_SIZE = (224, 224, 3) train=pd.read_csv(&quot;../input/birdclef-2022/train_metadata.csv&quot;) BIRDS = os.listdir(&quot;../input/mel-split-mark17/mel_spectrogram/train&quot;) BATCH_SIZE = 16 train_datagen = ImageDataGenerator( preprocessing_function=preprocess_input, width_shift_range=0.2, height_shift_range=0.2, shear_range=0.2, zoom_range=0.1, fill_mode=&quot;nearest&quot;, ) train_batches = train_datagen.flow_from_directory( &quot;../input/mel-split-mark17/mel_spectrogram/train&quot;, classes=BIRDS, target_size=IM_SIZE, class_mode=&quot;categorical&quot;, shuffle=True, batch_size=BATCH_SIZE, ) valid_datagen = ImageDataGenerator(preprocessing_function=preprocess_input) valid_batches = valid_datagen.flow_from_directory( &quot;../input/mel-split-mark17/mel_spectrogram/val&quot;, classes=BIRDS, target_size=IM_SIZE, class_mode=&quot;categorical&quot;, shuffle=False, batch_size=BATCH_SIZE, ) # Define CNN's architecture net = efn.EfficientNetB3( include_top=False, weights=&quot;imagenet&quot;, input_tensor=None, input_shape=IM_SIZE ) x = net.output x = Flatten()(x) x = Dropout(0.5)(x) output_layer = Dense(len(BIRDS), activation=&quot;softmax&quot;, name=&quot;softmax&quot;)(x) net_final = Model(inputs=net.input, outputs=output_layer) net_final.compile( optimizer=Adam(), loss=&quot;categorical_crossentropy&quot;, metrics=[&quot;accuracy&quot;] ) print(net_final.summary()) # Estimate class weights for unbalanced dataset class_weights = class_weight.compute_class_weight( class_weight = &quot;balanced&quot;, classes= np.unique(train_batches.classes), y=train_batches.classes ) # Define callbacks ModelCheck = ModelCheckpoint( &quot;models/efficientnet_checkpoint.h5&quot;, monitor=&quot;val_loss&quot;, verbose=0, save_best_only=True, save_weights_only=True, mode=&quot;auto&quot;, period=1, ) ReduceLR = ReduceLROnPlateau(monitor=&quot;val_loss&quot;, factor=0.2, patience=5, min_lr=3e-4) # Train the model net_final.fit_generator( train_batches, validation_data=valid_batches, epochs=30, steps_per_epoch=1596, class_weight=class_weights, callbacks=[ModelCheck, ReduceLR], ) </code></pre> 
<p><a href="https://i.stack.imgur.com/LNjOS.png" rel="nofollow noreferrer">I get this error when I run the code</a></p> <p><a href="https://www.kaggle.com/datasets/bluetriad/mel-split-mark17" rel="nofollow noreferrer">https://www.kaggle.com/datasets/bluetriad/mel-split-mark17</a></p>
<p>I think the problem is in the <strong>.flow_from_directory</strong> method. The image shape passed to that method should not include the image channels; you can specify that you are working with 3 channels by setting the additional parameter <code>color_mode</code> to <code>&quot;rgb&quot;</code>.</p>
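<p>A minimal sketch of that change, reusing the generator and variables from the question (the path, <code>BIRDS</code> and <code>BATCH_SIZE</code> are the question's own); <code>target_size</code> takes only <code>(height, width)</code>, and <code>color_mode</code> selects the number of channels:</p> <pre><code># Sketch: target_size excludes the channel axis; color_mode="rgb" requests 3 channels.
train_batches = train_datagen.flow_from_directory(
    "../input/mel-split-mark17/mel_spectrogram/train",
    classes=BIRDS,
    target_size=(224, 224),   # (height, width) only, no channel dimension
    color_mode="rgb",
    class_mode="categorical",
    shuffle=True,
    batch_size=BATCH_SIZE,
)
</code></pre> <p>The same change would apply to the validation generator.</p>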
python|pandas|tensorflow|machine-learning|keras
0
806
50,630,821
cannot instantiate an Xception model in Keras
<p>I'm running Keras in an NVIDIA Docker container on a multi-GPU machine. I'd like to instantiate a fairly standard model (Xception), but I keep getting weird errors. MRE:</p> <pre><code>import tensorflow as tf from keras.applications import Xception height = 299 width = 299 num_classes = 1000 # Instantiate model model = Xception(weights=None, input_shape=(height, width, 3), classes=num_classes) </code></pre> <p>I get the error:</p> <pre><code>Traceback (most recent call last): File "basic_test.py", line 9, in &lt;module&gt; model = Xception(weights=None, input_shape=(height, width, 3), classes=num_classes) File "/usr/local/lib/python2.7/dist-packages/keras/applications/xception.py", line 235, in Xception x = Dense(classes, activation='softmax', name='predictions')(x) File "/usr/local/lib/python2.7/dist-packages/keras/engine/topology.py", line 619, in __call__ output = self.call(inputs, **kwargs) File "/usr/local/lib/python2.7/dist-packages/keras/layers/core.py", line 881, in call output = self.activation(output) File "/usr/local/lib/python2.7/dist-packages/keras/activations.py", line 29, in softmax return K.softmax(x) File "/usr/local/lib/python2.7/dist-packages/keras/backend/tensorflow_backend.py", line 2963, in softmax return tf.nn.softmax(x, axis=axis) TypeError: softmax() got an unexpected keyword argument 'axis' </code></pre> <p>Versions for Python, Keras &amp; Tensorflow:</p> <pre><code>python -c 'import keras; import tensorflow; import sys; print(sys.version, 'keras.__version__', 'tensorflow.__version__')' Using TensorFlow backend. ('2.7.12 (default, Nov 20 2017, 18:23:56) \n[GCC 5.4.0 20160609]', '2.1.6', '1.4.0') </code></pre>
<p>It seems it's a known issue with Keras and TensorFlow <code>1.4</code>, as mentioned <a href="https://github.com/keras-team/keras/issues/9621" rel="nofollow noreferrer">here</a>. You may want to update both to the latest version to resolve this issue.</p>
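<p>A minimal sketch of the upgrade, assuming a pip-based environment; newer TensorFlow releases accept the <code>axis</code> keyword that this Keras version passes to <code>tf.nn.softmax</code>:</p> <pre><code># Sketch: upgrade both packages so their versions are compatible
pip install --upgrade tensorflow keras
</code></pre>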
python|tensorflow|keras
2
807
50,399,123
Extract values from csv file while going through a list using a for loop
<p>I've run into an issue when trying to extract values (in order to count them) from a .csv file while using a for loop to go through a list to try and find the correct values.</p> <p>The .csv file is structured as follows:</p> <pre><code>word,pleasantness,activation,imagery a,2.0000,1.3846,1.0 abandon,1.0000,2.3750,2.4 abandoned,1.1429,2.1000,3.0 abandonment,1.0000,2.0000,1.4 etc... </code></pre> <p>The first column contains a list of ~9000 words and the 3 others columns contain values that are of linguistic relevance to that specific word.</p> <p>I used pandas to create a dataframe:</p> <pre><code>df = pd.read_csv("dictionary.csv", sep=',') </code></pre> <p>I've also got a text files which I've turned into a list:</p> <pre><code>read_file = open(textfile) data = read_file.read().split() </code></pre> <p>Now, my goal is to have the program go through each word in the list and every time one of those words is encountered in the first column of the .csv file it will add its values to the existing variables. And so on until it's reached the end of the list.</p> <pre><code>count = 0 pleasantness = 0 activation = 0 imagery = 0 for w in data: count = count + 1 if w in df.word: pleasantness = pleasantness + df.pleasantness activation = activation + df.activation imagery = imagery + df.imagery print(count, pleasantness, activation, imagery) </code></pre> <p>This is the best I've been able to come up with and it clearly doesn't work; by the end of it the variables are all still 0.</p> <p>Does anyone have a clue as to how to do this? It naturally doesn't have to be done using something similar to this approach; I merely care about getting the results.</p>
<p>IIUC, given you have a <code>.csv</code> such as:</p> <pre><code>z = StringIO("""word,pleasantness,activation,imagery a,2.0000,1.3846,1.0 abandon,1.0000,2.3750,2.4 abandoned,1.1429,2.1000,3.0 abandonment,1.0000,2.0000,1.4""") df = pd.read_csv(z) </code></pre> <p>which yields </p> <pre><code>&gt;&gt;&gt; df word pleasantness activation imagery 0 a 2.0000 1.3846 1.0 1 abandon 1.0000 2.3750 2.4 2 abandoned 1.1429 2.1000 3.0 3 abandonment 1.0000 2.0000 1.4 </code></pre> <p>and a text such as </p> <pre><code>text = ("Lorem abandon ipsum dolor sit amet abandonment , consectetur adipiscing elit. abandon Maecenas consequat accumsan lacus. Duis justo nunc, mattis non ante a, convallis luctus eros. Sed sed urna sed magna auctor sagittis eu id magna. Maecenas leo nunc, tincidunt ut sagittis quis, porttitor sit amet ligula. Nunc faucibus ante ac blandit porta") data = np.array(text.split()) </code></pre> <p>which yields</p> <pre><code>&gt;&gt;&gt; data ['Lorem' 'abandon' 'ipsum' 'dolor' 'sit' 'amet' 'abandonment' ',' 'consectetur' 'adipiscing' 'elit.' 'abandon' 'Maecenas' 'consequat' 'accumsan' 'lacus.' 'Duis' 'justo' 'nunc,' 'mattis' 'non' 'ante' 'a,' 'convallis' 'luctus' 'eros.' 'Sed' 'sed' 'urna' 'sed' 'magna' 'auctor' 'sagittis' 'eu' 'id' 'magna.' 'Maecenas' 'leo' 'nunc,' 'tincidunt'. 'ut' 'sagittis' 'quis,' 'porttitor' 'sit' 'amet' 'ligula.' 'Nunc' 'faucibus' 'ante' 'ac' 'blandit' 'porta'] </code></pre> <p>You can use <code>numpy.isin</code> and <code>collections.Counter</code> to be auxiliaries in the processing:</p> <pre><code>&gt;&gt;&gt; d = Counter(data[np.isin(data, df.word)]) &gt;&gt;&gt; d Counter({'abandon': 2, 'abandonment': 1}) </code></pre> <p>and run through the counted values</p> <pre><code>pleasantness, activation, imagery = (0,0,0) for k,v in d.items(): values = df.loc[df.word == k] pleasantness += values["pleasantness"].item()*v activation += values["activation"].item()*v imagery += values["imagery"].item()*v </code></pre> <p>Which would yield, for this text,</p> <pre><code>print(pleasantness, activation, imagery) 3.0 6.75 6.2 </code></pre> <p>Your total count would simply be</p> <pre><code>print(sum(d.values())) 3 </code></pre> <p>If you want to avoid the looping through the <code>Counter</code>, you can build a new data frame, such as </p> <pre><code>ndf = pd.merge(pd.DataFrame(dict(d), index=[0]).T, df.set_index("word"), left_index=True, right_index=True) </code></pre> <p>which is </p> <pre><code>&gt;&gt;&gt; ndf count pleasantness activation imagery abandon 2 1.0 2.375 2.4 abandonment 1 1.0 2.000 1.4 </code></pre> <p>and multiply <code>count</code> through the rest of the rows</p> <pre><code>ndf.apply(lambda k: k[0]*k[1:], 1) </code></pre> <p>to get</p> <pre><code> pleasantness activation imagery abandon 2.0 4.75 4.8 abandonment 1.0 2.00 1.4 </code></pre> <p>Now you can just play with pandas bulit-in functions, such as <code>.sum()</code></p> <pre><code>pleasantness 3.00 activation 6.75 imagery 6.20 dtype: float64 </code></pre>
python|python-3.x|pandas
2
808
45,717,484
Numpy Array random mutation
<p>I'm coding my first genetic algorithm in Python. I particularly care about the optimization and population scalability.</p> <pre><code>import numpy as np population = np.random.randint(-1, 2, size=(10,10)) </code></pre> <p>Here I make a [10,10] array, with random number between -1 and 1.<br> And now I want to perform a specific mutation ( mutation rate depends on the specimens fitness ) for each specimen of my array. </p> <p>For example, I have:</p> <pre><code>print population [[ 0 0 1 1 -1 1 1 0 1 0] [ 0 1 -1 -1 0 1 -1 -1 0 -1] [ 0 0 0 0 0 0 0 0 0 0] [ 0 0 0 0 0 0 0 0 0 0] [ 0 0 0 0 0 0 0 0 0 0] [ 0 1 1 0 0 0 1 1 0 1] [ 1 -1 0 0 1 0 -1 1 1 0] [ 1 -1 1 -1 0 -1 0 0 1 1] [ 0 0 0 0 0 0 0 0 0 0] [ 0 1 1 0 0 0 1 -1 1 0]] </code></pre> <p>I want to perform the mutation of this array with a specific mutation rate for each sub-array in population. I try this but the optimization is not perfect and I need to perform a different mutation for each sub-array (each sub-array is a specimen) in the population (the main array, "population").</p> <pre><code>population[0][numpy.random.randint(0, len(population[0]), size=10/2)] = np.random.randint(-1, 2, size=10/2) </code></pre> <p>I'm looking for a way to apply something like a mutation mask on all the main-array. Something like that:</p> <pre><code> population[array element select with the mutation rate] = random_array[with the same size] </code></pre> <p>I think it's the best way (because we only to an array selection and after we replace this selection with the random array), but I don't know how to perform this. And if you have other solution I am on it ^^.</p>
<p>Let's say you have an array <code>fitness</code> with the fitness of each specimen, with size <code>len(population)</code>. Let's also say you have a function <code>fitness_mutation_prob</code> that, for a given fitness, gives you the mutation probability for each of the elements in the specimen. For example, if the values of <code>fitness</code> range from 0 to 1, <code>fitness_mutation_prob(fitness)</code> could be something like <code>(1 - fitness)</code>, or <code>np.square(1 - fitness)</code>, or whatever. You can then do:</p> <pre><code>r = np.random.random(size=population.shape) mut_probs = fitness_mutation_prob(fitness) m = r &lt; mut_probs[:, np.newaxis] population[m] = np.random.randint(-1, 2, size=np.count_nonzero(m)) </code></pre>
python|numpy|genetic-algorithm|mutation
1
809
45,708,443
TensorFlow - object detection module, error appears when trying to use protoc
<p>I'm having problems with <code>protoc</code>; the command doesn't work on Windows.</p> <p>Using this line:</p> <pre><code>protoc --proto_path=./object_detection/protos --python_out=c:\testmomo ./object_detection/protos/anchor_generator.proto </code></pre> <p>I get these errors:</p> <pre><code>object_detection/protos/grid_anchor_generator.proto: File not found. object_detection/protos/ssd_anchor_generator.proto: File not found. anchor_generator.proto: Import "object_detection/protos/grid_anchor_generator.proto" was not found or had errors. anchor_generator.proto: Import "object_detection/protos/ssd_anchor_generator.proto" was not found or had errors. anchor_generator.proto:12:5: "GridAnchorGenerator" is not defined. anchor_generator.proto:13:5: "SsdAnchorGenerator" is not defined. </code></pre> <p>What is the problem?</p>
<p>I was trying different things and figured out where the problem was.</p> <p>Make sure you're doing it this way:</p> <pre><code># From models/ protoc object_detection/protos/*.proto --python_out=. </code></pre> <p>whereas I was trying to do it like this:</p> <pre><code># from object_detection/ protoc protos/*.proto --python_out=. </code></pre> <p>which gives the same errors as yours.</p> <p>Check that you're in the right place (directory).</p>
python|tensorflow|deep-learning|object-detection|protoc
15
810
62,543,843
cannot import torchaudio: 'No audio backend is available.'
<pre><code>import torchaudio </code></pre> <p>When I just try to import torchaudio in PyCharm, I get this warning:</p> <pre><code>61: UserWarning: No audio backend is available. warnings.warn('No audio backend is available.') </code></pre>
<p>You need to install an audio file I/O backend: on Linux it's <code>Sox</code>, on Windows it's <code>SoundFile</code>.<br></p> <p>To check whether you have one set, run <code>str(torchaudio.get_audio_backend())</code>; if the result is 'None', install a backend.</p> <p>SoundFile for Windows: <code>pip install PySoundFile</code></p> <p>Sox for Linux: <code>pip install sox</code></p> <p><a href="https://pytorch.org/audio/backend.html" rel="noreferrer">Check out the PyTorch Audio Backend docs here</a>.</p>
python-3.x|pytorch|torch
24
811
73,572,260
Model is not learning/training pytorch
<p>Here is my training loop</p> <pre><code>def train(model, train_dl, valid_dl, loss_fn, optimizer, scheduler, acc_fn, epochs=50): start = time.time() model.cuda() train_loss, valid_loss = [], [] train_acc, valid_acc = [], [] best_acc = 0.0 for epoch in range(epochs): print('Epoch {}/{}'.format(epoch, epochs - 1)) print('-' * 10) time_epoch_start = time.time() for phase in ['train', 'valid']: if phase == 'train': model.train(True) # Set trainind mode = true dataloader = train_dl else: model.train(False) # Set model to evaluate mode dataloader = valid_dl running_loss = 0.0 running_acc = 0.0 step = 0 for x, y, _ in dataloader: x = x.cuda() y = y.cuda() step += 1 if phase == 'train': optimizer.zero_grad() outputs = model(x) loss = loss_fn(outputs, y) loss.backward() optimizer.step() if scheduler is not None: scheduler.step() else: with torch.no_grad(): outputs = model(x) loss = loss_fn(outputs, y.long()) acc = acc_fn(outputs, y) running_acc += acc * dataloader.batch_size running_loss += loss * dataloader.batch_size if step % 100 == 0: print('Current step: {} Loss: {} Acc: {} AllocMem (Mb): {}'.format(step, loss, acc, torch.cuda.memory_allocated()/1024/1024)) epoch_loss = running_loss / len(dataloader.dataset) epoch_acc = running_acc / len(dataloader.dataset) train_loss.append(epoch_loss) if phase=='train' else valid_loss.append(epoch_loss) train_acc.append(epoch_acc) if phase=='train' else valid_acc.append(epoch_acc) if phase=='train': train_loss_print = epoch_loss train_acc_print = epoch_acc else: valid_loss_print = epoch_loss valid_acc_print = epoch_acc time_epoch = time.time() - time_epoch_start print('Epoch {}/{} - TRAIN Loss: {:.4f} TRAIN Acc: {:.4f} - VAL. Loss: {:.4f} VAL. Acc: {:.4f} ({:.4f} seconds - {:.4f} Mb)'.format(epoch, epochs - 1, train_loss_print, train_acc_print, valid_loss_print, valid_acc_print, time_epoch, torch.cuda.memory_allocated()/1024/1024)) time_elapsed = time.time() - start print('Training complete in {:.0f}m {:.0f}s'.format(time_elapsed // 60, time_elapsed % 60)) return model, train_loss, valid_loss, train_acc, valid_acc </code></pre> <p>Here I define the optimizer and learning rate</p> <pre><code>opt = torch.optim.Adam(modelo.parameters(), lr=hp_lr, betas=(0.9, 0.999), eps=1e-08, weight_decay=0, amsgrad=False) lr_scheduler = lr_scheduler.StepLR(opt, step_size=10, gamma=0.1) modelo_trained, train_loss, valid_loss, train_acc, valid_acc = train(modelo, train_dl, valid_dl, loss_fn, opt, lr_scheduler, acc_metric, epochs=num_epochs) </code></pre> <p>However, when training, here is my training loss [Graphic][1] [1]: https://i.stack.imgur.com/3Y1wL.png</p> <p>The repeated pattern in loss seems to indicate that the model <strong>is not training</strong> and that their is an issue with the training loop in particular, because even an incorrect model would still see variance in the loss over time.</p>
<p>Welcome to Stack Overflow!</p> <p>You are calling <code>model.cuda()</code> inside the training function, but supplying the optimizer from outside. It smells like the optimizer was initialized on the wrong parameters.</p> <p>PS: let me know whether that makes sense and/or works.</p>
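<p>A minimal sketch of that reordering, using the variable names from the question: move the model to the GPU first and only then build the optimizer and scheduler on its parameters (and drop the <code>model.cuda()</code> call from inside <code>train</code>):</p> <pre><code># Sketch: parameters are moved to the GPU before the optimizer is created on them
modelo.cuda()
opt = torch.optim.Adam(modelo.parameters(), lr=hp_lr, betas=(0.9, 0.999),
                       eps=1e-08, weight_decay=0, amsgrad=False)
lr_scheduler = torch.optim.lr_scheduler.StepLR(opt, step_size=10, gamma=0.1)
</code></pre>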
python|image|pytorch
0
812
73,840,379
Pandas date conversion: unconverted data remains
<p>In Pandas (Jupyter) I have a column with dates in string format:</p> <pre><code>koncerti.Date.values[:20] array(['15 September 2010', '16 September 2010', '18 September 2010', '20 September 2010', '21 September 2010', '23 September 2010', '24 September 2010', '26 September 2010', '28 September 2010', '30 September 2010', '1 October 2010', '3 October 2010', '5 October 2010', '6 October 2010', '8 October 2010', '10 October 2010', '12 October 2010', '13 October 2010', '15 October 2010', '17 October 2010'], dtype=object) </code></pre> <p>I try to convert them to date format with the following statement:</p> <pre><code>koncerti.Date = pd.to_datetime(koncerti.Date, format='%d %B %Y') </code></pre> <p>Unfortunately, it produces the following error: ValueError: <strong>unconverted data remains: [31]</strong></p> <p>What does this error mean?</p>
<p>The error means that some of the strings contain characters left over after the given format has been matched; here the leftover text is <code>[31]</code>, most likely a footnote marker that was scraped along with the date. Solution:</p> <p><code>koncerti.Date = pd.to_datetime(koncerti.Date, format='%d %B %Y', exact=False)</code></p> <p>The additional parameter <code>exact=False</code> was needed: it allows the format to match only part of each string, so the trailing characters are ignored.</p>
python|pandas|datetime
0
813
71,143,488
How to convert two ECG measurements to HR with frequency 20 Hz
<p>I have data from two ECG sensors sampled at a frequency of 50 Hz. I want to convert this to an HR signal with a frequency of 20 Hz. I have tried a solution with heartpy, but I can't get good values for HR at low frequency. Does anybody have an example of how I can implement this in Python?</p> <p>The data looks like this: <a href="https://i.stack.imgur.com/do74b.png" rel="nofollow noreferrer">Pandas dataframe of signal input</a></p>
<p>I have made an example in python using neurokit and their <a href="https://github.com/neuropsychology/NeuroKit/blob/master/docs/examples/heartbeats.ipynb" rel="nofollow noreferrer">example</a> for heartbeats and simply taking the mean of the two ECG examples. Afterwards, I use a simple downsampling function to get the signal to 20 hz.</p> <p>Code example:</p> <pre><code> ecg = df[['ECG_1', 'ECG_2']].mean(axis=1).to_numpy() # Automatically process the (raw) ECG signal ecg_signals, info = nk.ecg_process(ecg, sampling_rate=sample_rate) # plot = nk.ecg_plot(ecg_signals, sampling_rate=sample_rate) df.insert(loc=len(df.columns)-1, column='HR (bpm)', value=(ecg_signals['ECG_Rate'])) # Remove ECG signals df = df.drop(columns=['ECG_1', 'ECG_2']) # Downsampling of the whole dataframe: df_20hz = pd.DataFrame(columns=df.columns) ds_rate = 50/20 df_20hz = df.groupby(np.arange(len(df))//ds_rate).mean() # Remove overlapping classes: label that is not a whole number df_20hz = df_20hz[df_20hz.activity_id % 1 == 0] return df_20hz </code></pre>
python|pandas|medical|downsampling
0
814
71,201,140
tensorflow.keras.Model inherit
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers class KerasSupervisedModelWrapper(keras.Model): def __init__(self, batch_size, **kwargs): super().__init__() self.batch_size = batch_size def summary(self, input_shape): # temporary fix for a bug x = layers.Input(shape=input_shape) model = keras.Model(inputs=[x], outputs=self.call(x)) return model.summary() class ExampleModel(KerasSupervisedModelWrapper): def __init__(self, batch_size): super().__init__(batch_size) self.conv1 = layers.Conv2D(32, kernel_size=(3, 3), activation='relu') def call(self, x): x = self.conv1(x) return x model = MyModel(15) model.summary([28, 28, 1]) </code></pre> <p>output:</p> <pre><code>Model: &quot;model_1&quot; _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_2 (InputLayer) [(None, 28, 28, 1)] 0 conv2d_2 (Conv2D) (None, 26, 26, 32) 320 ================================================================= Total params: 320 Trainable params: 320 Non-trainable params: 0 _________________________________________________________________ </code></pre> <p>I'm writting a wrapper for keras model to pre-define some useful method and variables as above.<br /> And I'd like to modify the wrapper to get some layers to compose model as the <code>keras.Sequential</code> does.<br /> Therefore, I added <code>Sequential</code> method that assigns new <code>call</code> method as below.</p> <pre class="lang-py prettyprint-override"><code>class KerasSupervisedModelWrapper(keras.Model): ...(continue)... @staticmethod def Sequential(layers, **kwargs): model = KerasSupervisedModelWrapper(**kwargs) pipe = keras.Sequential(layers) def call(self, x): return pipe(x) model.call = call return model </code></pre> <p>However, it seems not working as I intended. Instead, it shows below error message.</p> <pre class="lang-py prettyprint-override"><code>model = KerasSupervisedModelWrapper.Sequential([ layers.Conv2D(32, kernel_size=(3, 3), activation=&quot;relu&quot;) ], batch_size=15) model.summary((28, 28, 1)) --------------------------------------------------------------------------- TypeError Traceback (most recent call last) /tmp/ipykernel_91471/2826773946.py in &lt;module&gt; 1 # model.build((None, 28, 28, 1)) 2 # model.compile('adam', loss=keras.losses.SparseCategoricalCrossentropy(), metrics=['accuracy']) ----&gt; 3 model.summary((28, 28, 1)) /tmp/ipykernel_91471/3696340317.py in summary(self, input_shape) 10 def summary(self, input_shape): # temporary fix for a bug 11 x = layers.Input(shape=input_shape) ---&gt; 12 model = keras.Model(inputs=[x], outputs=self.call(x)) 13 return model.summary() 14 TypeError: call() missing 1 required positional argument: 'x' </code></pre> <p>What can I do for the wrapper to get <code>keras.Sequential</code> model while usuing other properties?</p>
<p>You could try something like this:</p> <pre><code>import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers class KerasSupervisedModelWrapper(keras.Model): def __init__(self, batch_size, **kwargs): super().__init__() self.batch_size = batch_size def summary(self, input_shape): # temporary fix for a bug x = layers.Input(shape=input_shape) model = keras.Model(inputs=[x], outputs=self.call(x)) return model.summary() @staticmethod def Sequential(layers, **kwargs): model = KerasSupervisedModelWrapper(**kwargs) pipe = keras.Sequential(layers) model.call = pipe return model class ExampleModel(KerasSupervisedModelWrapper): def __init__(self, batch_size): super().__init__(batch_size) self.conv1 = layers.Conv2D(32, kernel_size=(3, 3), activation='relu') def call(self, x): x = self.conv1(x) return x model = ExampleModel(15) model.summary([28, 28, 1]) model = KerasSupervisedModelWrapper.Sequential([ layers.Conv2D(32, kernel_size=(3, 3), activation=&quot;relu&quot;) ], batch_size=15) model.summary((28, 28, 1)) print(model(tf.random.normal((1, 28, 28, 1))).shape) </code></pre> <pre><code>Model: &quot;model_9&quot; _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_14 (InputLayer) [(None, 28, 28, 1)] 0 conv2d_17 (Conv2D) (None, 26, 26, 32) 320 ================================================================= Total params: 320 Trainable params: 320 Non-trainable params: 0 _________________________________________________________________ Model: &quot;model_10&quot; _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_15 (InputLayer) [(None, 28, 28, 1)] 0 sequential_8 (Sequential) (None, 26, 26, 32) 320 ================================================================= Total params: 320 Trainable params: 320 Non-trainable params: 0 _________________________________________________________________ (1, 26, 26, 32) </code></pre>
python|tensorflow|keras
2
815
71,216,162
Is it possible to train except for a specific area when implementing cycleGAN?
<p>Recently, I have been conducting research on creating fake images using cycleGAN. The figure below is an example of a cycleGAN result, in which all areas of the image are changed. However, I want to change only a specific area. Is it possible to change everything except the red-marked part (the disease region) using cycleGAN?</p> <p><a href="https://i.stack.imgur.com/ipKQR.jpg" rel="nofollow noreferrer">enter image description here</a></p>
<p>I think what you describe is possible, but you would need to alter the cycleGAN pipeline. To focus the representational capacity of the network on everything other than the red-marked part, you could replace that region in the generated image with the same region from the original image; this would enforce a focus on the other regions. In terms of the cycle-consistency pipeline linked below, I'm referring to pasting the original image region back in after e.g. Generator A2B.</p> <p><a href="https://miro.medium.com/max/1400/1*tN351F_-GGgTwk1gBir9JA.jpeg" rel="nofollow noreferrer">Cycle consistency loss pipeline</a></p>
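<p>A minimal sketch of the compositing step described above, with hypothetical tensor names; <code>mask</code> is assumed to be 1 inside the red-marked (disease) region and 0 elsewhere:</p> <pre><code># Sketch: keep the marked region from the real image and take the rest
# from the generator output before the cycle-consistency losses are computed.
fake_B = generator_A2B(real_A)
fake_B = mask * real_A + (1.0 - mask) * fake_B
</code></pre>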
tensorflow|keras|generative-adversarial-network
0
816
60,542,475
Confirm that TF2 is using my GPU when training
<p>I am wondering if there is a way to confirm that my TF model is training on my GPU after I stored the training data on it as advised in the TF tutorial. Here is a short code example:</p> <pre class="lang-py prettyprint-override"><code>import tensorflow as tf print('Num GPUs Available:', len(tf.config.experimental.list_physical_devices('GPU'))) # load data on GPU with tf.device('/GPU:0'): mnist = tf.keras.datasets.mnist (x_train, y_train), (x_test, y_test) = mnist.load_data() x_train, x_test = x_train / 255.0, x_test / 255.0 # define, compile and train the model model = tf.keras.models.Sequential([tf.keras.layers.Dense(1)]) model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['acc']) model.fit(x_train, y_train, batch_size=32, epochs=5) </code></pre>
<p>There are a couple of ways to check for a GPU in Tensorflow 2.x. Essentially, if a GPU is available, then the model will be run on it (unless it's kept busy by, e.g., another instance of TF that locked it). The placement will also be visible in the log files and can be confirmed with e.g. <code>nvidia-smi</code>.</p> <p>In the code below, I will assume <code>tensorflow</code> is imported as <code>tf</code> (per convention and your code).</p> <h3>To check what devices are available, run:</h3> <pre class="lang-py prettyprint-override"><code>tf.config.experimental.list_physical_devices() </code></pre> <p>Here's my output:</p> <blockquote> <p>[PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU'), PhysicalDevice(name='/physical_device:XLA_CPU:0', device_type='XLA_CPU'), PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU'), PhysicalDevice(name='/physical_device:XLA_GPU:0', device_type='XLA_GPU')]</p> </blockquote> <p>To check if there is any GPU on the system:</p> <pre class="lang-py prettyprint-override"><code>is_gpu = len(tf.config.experimental.list_physical_devices('GPU')) &gt; 0 </code></pre> <p>From Tensorflow 2.1, this functionality has been migrated out of experimental and you can use <code>tf.config.list_physical_devices()</code> in the same manner, i.e.</p> <pre class="lang-py prettyprint-override"><code>is_gpu = len(tf.config.list_physical_devices('GPU')) &gt; 0 </code></pre> <p>At some point the experimental variant will be deprecated.</p> <p>Last but not least, if your tensorflow was built without CUDA (it's a non-GPU version), <code>list_physical_devices('GPU')</code> will return an empty list (so <code>is_gpu</code> will be <code>False</code>), even if your system physically has a GPU.</p> <h3>"Is it automatic once the gpu is recognized by TF?"</h3> <p>Yes. To quote the <a href="https://www.tensorflow.org/guide/gpu" rel="noreferrer">TF docs</a>:</p> <blockquote> <p>Note: Use tf.config.experimental.list_physical_devices('GPU') to confirm that TensorFlow is using the GPU.</p> </blockquote> <p>If it is recognised, it will be used during training. If you'd like to be dead sure, you can ask for more explicit logging:</p> <pre class="lang-py prettyprint-override"><code>tf.debugging.set_log_device_placement(True) </code></pre>
python|python-3.x|tensorflow
7
817
60,591,479
How to fill multidimensional array using an equation in python
<p>I am new to Python and I would like to fill a NumPy multidimensional array using an equation. In Fortran I can use the index of the array to fill it up; is this possible in Python? Say I have an equation a = i*j, where i and j are the row and column positions respectively. If I have an n by n array, it would be filled with the results of the equation, so the first value would be 1 since a = 1*1, and so on.</p>
<p>You could use <code>numpy</code> <a href="https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html" rel="nofollow noreferrer">broadcasting</a> to get this result:</p> <pre class="lang-py prettyprint-override"><code>i = np.arange(3)[:, np.newaxis] # (3, 1) # array([0, 1, 2]) j = np.arange(4)[np.newaxis, :] # (1, 4) # array([0, 1, 2, 3]) arr = i * j # (3 ,4) # array([[0, 0, 0, 0], # [0, 1, 2, 3], # [0, 2, 4, 6]]) </code></pre> <p>You can perform most calculations you could want with the index arrays <code>i</code> [<code>n x 1</code>] and <code>j</code> [<code>1 x m</code>] and the result will be always be [<code>n x m</code>]</p> <pre class="lang-py prettyprint-override"><code>arr = np.sin(i)**2 + np.cos(i)**2 + (j-i) # array([[ 1., 2., 3., 4.], # [ 0., 1., 2., 3.], # [-1., 0., 1., 2.]]) </code></pre> <p>You could also use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.meshgrid.html" rel="nofollow noreferrer"><code>np.meshgrid()</code></a> to explicitly repeat the indices along the other dimension to get full 2d arrays for <code>i</code> and <code>j</code>:</p> <pre class="lang-py prettyprint-override"><code>i, j = np.meshgrid(np.arange(3), np.arange(4), indexing='ij') # i # array([[0, 1, 2, 3], # [0, 1, 2, 3], # [0, 1, 2, 3]]) # j # array([[0, 0, 0, 0], # [1, 1, 1, 1], # [2, 2, 2, 2]]) arr = i * j # array([[0, 0, 0, 0], # [0, 1, 2, 3], # [0, 2, 4, 6]]) </code></pre> <p>This is a good visualisation of what happens during broadcasting automatically.</p> <p>Note the indexing argument <code>ij</code> of <code>np.meshgrid()</code> for matrix indexing; from the docs: </p> <blockquote> <p>Giving the string ‘ij’ returns a meshgrid with matrix indexing, while ‘xy’ returns a meshgrid with Cartesian indexing. In the 2-D case with inputs of length M and N, the outputs are of shape (N, M) for ‘xy’ indexing and (M, N) for ‘ij’ indexing.</p> </blockquote>
python|arrays|numpy
3
818
72,500,942
Error when trying to use df.merge: "You are trying to merge on object and int64 columns"
<p>I'm currently trying to write a program that takes a chemical compound's identifier (something called the CID number) and then gives back the compound's properties by using the pubchempy documentation.</p> <p>However, I keep getting an error when I try to merge the data values that I get from pubchempy to the initial database.</p> <p>This is the code I've written for now:</p> <pre><code>import pandas as pd import pubchempy import numpy as np df = pd.read_csv(&quot;Data.tsv.txt&quot;, sep=&quot;\t&quot;) from pubchempy import get_properties df['CID'] = df['CID'].astype(str).apply(lambda x: x.replace('.0','')) df['CID'] = df['CID'].astype(str).apply(lambda x: x.replace('0','')) df = df.drop(df[df.CID=='nan'].index) df = df.drop(labels='reference', axis=1) df = df.drop(labels='group', axis=1) df = df.drop(labels='comments', axis=1) df = df.drop(labels='compound_name', axis=1) props = ['HBondDonorCount', 'RotatableBondCount', 'MolecularWeight', 'HBondAcceptorCount'] df2 = pd.DataFrame(get_properties(identifier=df.CID.to_list(), properties=props)) df = df.merge(df2) print(df) </code></pre> <p>However, I get an error message that says,</p> <pre class="lang-none prettyprint-override"><code>ValueError: You are trying to merge on object and int64 columns. If you wish to proceed you should use pd.concat </code></pre> <p>Does anyone know how to fix this?</p> <hr /> <p>Few lines of text file (datafile):</p> <pre><code>NO. compound_name IUPAC_name SMILES CID Inchi threshold reference group comments 1 sulphasalazine 2-hydroxy-5-[[4-(pyridin-2-ylsulfamoyl)phenyl]diazenyl]benzoic acid O=C(O)c1cc(N=Nc2ccc(S(=O)(=O)Nc3ccccn3)cc2)ccc1O 5339 InChI=1S/C18H14N4O5S/c23-16-9-6-13(11-15(16)18(24)25)21-20-12-4-7-14(8-5-12)28(26,27)22-17-3-1-2-10-19-17/h1-11,23H,(H,19,22)(H,24,25) R2|R2|R25|R46| A 2 moxalactam 7-[[2-carboxy-2-(4-hydroxyphenyl)acetyl]amino]-7-methoxy-3-[(1-methyltetrazol-5-yl)sulfanylmethyl]-8-oxo-5-oxa-1-azabicyclo[4.2.0]oct-2-ene-2-carboxylic acid COC1(NC(=O)C(C(=O)O)c2ccc(O)cc2)C(=O)N2C(C(=O)O)=C(CSc3nnnn3C)COC21 3889 InChI=1S/C20H20N6O9S/c1-25-19(22-23-24-25)36-8-10-7-35-18-20(34-2,17(33)26(18)13(10)16(31)32)21-14(28)12(15(29)30)9-3-5-11(27)6-4-9/h3-6,12,18,27H,7-8H2,1-2H3,(H,21,28)(H,29,30)(H,31,32) R25| A 3 clioquinol 5-chloro-7-iodoquinolin-8-ol Oc1c(I)cc(Cl)c2cccnc12 2788 InChI=1S/C9H5ClINO/c10-6-4-7(11)9(13)8-5(6)2-1-3-12-8/h1-4,13H R18|R26|R27| A </code></pre> <hr /> <p>Few lines of <strong>df2</strong> output:</p> <pre><code> CID MolecularWeight HBondDonorCount HBondAcceptorCount RotatableBondCount 0 5339 398.4 3 9 6 1 3889 520.5 4 13 9 2 2788 305.50 1 2 0 3 1422517 440.5 0 8 4 4 18595497 461.5 5 10 3 </code></pre>
<p>It seems you want to merge the two dataframes on the <code>CID</code> column. The <code>CID</code> column of <code>df2</code> is of type <code>int</code>; you need to change it to object (string) to match the type of <code>CID</code> in <code>df</code>:</p> <pre class="lang-py prettyprint-override"><code>df = df.merge(df2.astype({'CID': str}), on='CID') </code></pre>
python|pandas|dataframe|pubchem
0
819
72,624,204
expanding mean include conditions
<p>I have two dataframe, df1 and df2, the shape is the same</p> <p>now I'd like to calculate expanding mean of df2 along columns from a certain column for each row. however, I'd like to add condition by df1&gt;0 so only include certain columns in calculating mean.</p> <p>Here is what I have in my mind. I think I need to get column index which meet the condition. Not sure how to do it?</p> <pre><code>column_index=df1.iloc[:,2:]&gt;0 </code></pre> <p>The below only starts from column 2, not sure how to exclude the first 2 columns</p> <pre><code>df2.loc[:,column_index]=df2.iloc[:,column_index].expanding(axis=1).mean() </code></pre> <p>Here is the df1 dictionary:</p> <pre><code> Wellname Padname 2021-10-01 2021-10-02 2021-10-03 2021-10-04 0 19A77 77 7.482527 7.482464 7.481077 7.481595 1 18A75 75 8.539606 8.538932 0.000000 8.537870 2 19A74 74 0.000000 8.436184 8.436184 8.436478 3 6A75 75 0.000000 0.000000 0.000000 0.000000 4 11A74 74 0.000000 0.000000 0.000000 0.000000 {'Wellname': {0: '19A77', 1: '18A75', 2: '19A74', 3: '6A75', 4: '11A74'}, 'Padname': {0: '77', 1: '75', 2: '74', 3: '75', 4: '74'}, datetime.date(2021, 10, 1): {0: 7.482527, 1: 8.539606, 2: 0.0, 3: 0.0, 4: 0.0}, datetime.date(2021, 10, 2): {0: 7.482464, 1: 8.538932, 2: 8.436184, 3: 0.0, 4: 0.0}, datetime.date(2021, 10, 3): {0: 7.4810767, 1: 0.0, 2: 8.436184, 3: 0.0, 4: 0.0}, datetime.date(2021, 10, 4): {0: 7.4815946, 1: 8.53787, 2: 8.436478, 3: 0.0, 4: 0.0}} </code></pre> <p>Here is df2 dictionary:</p> <pre><code>Wellname Padname 2021-10-01 2021-10-02 2021-10-03 2021-10-04 0 19A77 77 0.030476 0.031979 0.030553 0.031249 1 18A75 75 0.012853 0.013348 0.014725 0.013044 2 19A74 74 0.000000 0.008568 0.008858 0.009324 3 6A75 75 0.000000 0.000000 0.000000 0.000000 4 11A74 74 0.000000 0.000000 0.000000 0.00000 {'Wellname': {0: '19A77', 1: '18A75', 2: '19A74', 3: '6A75', 4: '11A74'}, 'Padname': {0: '77', 1: '75', 2: '74', 3: '75', 4: '74'}, datetime.date(2021, 10, 1): {0: 0.03047586123578023, 1: 0.012852623962781664, 2: 0.0, 3: 0.0, 4: 0.0}, datetime.date(2021, 10, 2): {0: 0.03197930324012337, 1: 0.013347760497657282, 2: 0.0085677628558828, 3: 0.0, 4: 0.0}, datetime.date(2021, 10, 3): {0: 0.030553271599108457, 1: 0.01472520929639739, 2: 0.008857505630343878, 3: 0.0, 4: 0.0}, datetime.date(2021, 10, 4): {0: 0.0312492523780255, 1: 0.013043631856251387, 2: 0.009324431738911416, 3: 0.0, 4: 0.0}} </code></pre> <p>Here is my first try and I still got an error</p> <pre><code>mask=df1.iloc[:,2:]&gt;0 mask 2021-10-01 2021-10-02 2021-10-03 2021-10-04 0 True True True True 1 True True False True 2 False True True True 3 False False False False 4 False False False False </code></pre> <p>after getting boolean matrix, then I did below</p> <pre><code>df3=df2.iloc[:,2:] b=df3.loc[mask] --------------------------------------------------------------------------- ValueError Traceback (most recent call last) C:\Temp\1\ipykernel_624\3951860798.py in &lt;module&gt; ----&gt; 1 b=df3.loc[mask] 2 b C:\Anaconda\envs\dash_tf\lib\site-packages\pandas\core\indexing.py in __getitem__(self, key) 929 930 maybe_callable = com.apply_if_callable(key, self.obj) --&gt; 931 return self._getitem_axis(maybe_callable, axis=axis) 932 933 def _is_scalar_access(self, key: tuple): C:\Anaconda\envs\dash_tf\lib\site-packages\pandas\core\indexing.py in _getitem_axis(self, key, axis) 1149 1150 if hasattr(key, &quot;ndim&quot;) and key.ndim &gt; 1: -&gt; 1151 raise ValueError(&quot;Cannot index with multidimensional key&quot;) 1152 1153 return self._getitem_iterable(key, axis=axis) 
ValueError: Cannot index with multidimensional key </code></pre> <p>Thanks</p>
<p>Here's a solution using the <code>mask</code> you have provided.</p> <pre><code>df2[mask].expanding(axis=1).mean() 2021-10-01 2021-10-02 2021-10-03 2021-10-04 0 0.030476 0.031228 0.031003 0.031064 1 0.012853 0.013100 0.013100 0.013081 2 NaN 0.008568 0.008713 0.008917 3 NaN NaN NaN NaN 4 NaN NaN NaN NaN </code></pre>
python|pandas
1
820
72,615,652
Improving iteration through data frames
<p>Trying to find some insides in two data frames,know that loops are not solution in pandas and using two sheets with 15k rows each. How can I improve the speed on the code that follows? Its not possible to use merge because after matching the condition row from err need to be removed in order to don't be matched again. It takes 140 min with 16k rows.</p> <pre><code>import pandas as pd scr = pd.DataFrame({ 'Product':['10101.A', '10101.A', '10101.A', '10147.A', '10147.A', '10147.A', '10147.A','10147.A'], 'Source Handling Unit':['7000000051481339', '7000000051481342', '7000000051722237','7000000051530150','7000000051530152', '7000000051530157', '7000000051546193', '7000000051761150'], 'Available Qty BUoM':[1,1,1,1,1,1,1,1], 'Confirmation Date':['10-5-2022', '10-5-2022', '9-5-2022', '6-5-2022', '6-5-2022', '6-5-2022', '6-5-2022', '11-5-2022'] }) err = pd.DataFrame({ 'Posting Date':['4-5-2022','6-5-2022','11-5-2022','11-5-2022','11-5-2022','11-5-2022','11-5-2022','11-5-2022','11-5-2022','13-5-2022','15-5-2022','16-5-2022','25-5-2022'], 'Product':['10101.A', '10147.A', '10101.A', '10101.A', '10101.A', '10101.A', '10101.A', '10101.A', '10101.A', '10101.A', '10101.A', '10101.A', '10147.A'], 'Reason':['L400', 'CCIV', 'UPLD', 'UPLD', 'UPLD', 'UPLD', 'UPLD', 'UPLD', 'UPLD', 'UPLD', 'UPLD', 'L400', 'L400'], 'Activity Area':['A970', 'D300', 'A990', 'A990', 'A990', 'A990', 'A990', 'A990','A990', 'A990', 'A990','A970','A970'], 'Difference Quantity':[1, 5, -1, -1, -1, -1, -1, -1, -1, 1, -1, 1, 1] }) #Creating filter for scr filt_scr_col = ['Product', 'Source Handling Unit', 'Available Qty BUoM', 'Confirmation Date'] #Applying filter scr = scr[filt_scr_col] #Creating filter for err filt_post_col = ['Posting Date', 'Product', 'Reason', 'Activity Area', 'Difference Quantity'] #Applying filter err = err[filt_post_col] #Replace empty characters with underscore scr.columns = scr.columns.str.replace(' ', '_') err.columns = err.columns.str.replace(' ', '_') #Creating filter to extract A450 rows from postings filt = err['Activity_Area'] == 'A450' #Assign A450 rows to new dataframe a450 a450 = err.loc[filt] #.groupby 'Posting_Date', 'Product','Reason' but when I pass an argument as_index = False doent aggregate Products returns relevant object #.query gets Difference_Quantity &gt; 0 this evaluates and refers to exact row a450 = ( a450.groupby(['Posting_Date', 'Product','Reason'], as_index = False, sort = False).sum() .query('Difference_Quantity &gt; 0') ) #Creating filter to remove all &lt; 0 filt = err['Difference_Quantity'] &gt; 0 #Applying filter to dataframe err = err.loc[filt] #Removing column 'Activity_Area' from err don't need it and columns will match when appent with a450 err = err.drop(columns='Activity_Area') #Concat err and a450 err = pd.concat([err, a450], ignore_index= True) scr['Confirmation_Date'] = pd.to_datetime(scr['Confirmation_Date'], format = &quot;%d-%m-%Y&quot;) err['Posting_Date'] = pd.to_datetime(err['Posting_Date'], format = &quot;%d-%m-%Y&quot;) scr = scr.sort_values(by='Product', ascending=True) err = err.sort_values(by='Posting_Date', ascending=True) scr['Reason'] = None match = [] for i, row in scr.iterrows(): for e, erow in err.iterrows(): if (row['Product'] == erow['Product']) &amp; (erow['Posting_Date'] &gt;= row['Confirmation_Date']) &amp; (erow['Difference_Quantity'] - row['Available_Qty_BUoM'] &gt;= 0): row['Reason'] = err['Reason'][e] err['Difference_Quantity'][e]-= row['Available_Qty_BUoM'] row_to_dict = row.to_dict() match.append(row_to_dict) break report = 
pd.DataFrame(match) report = report[['Product', 'Source_Handling_Unit', 'Available_Qty_BUoM', 'Confirmation_Date', 'Reason']] #Pandas think that report['Source_Handling_Unit'] has floats and love to round them report = report.astype({'Source_Handling_Unit': str}) report </code></pre>
<p>Setup:</p> <pre><code>scr = pd.DataFrame({'id' : ['10101.A', '10101.A', '10101.A'],'date' : ['10-5-2022', '10-5-2022', '9-5-2022'], 'qty': [1, 1, 1]}) err = pd.DataFrame({'id' : ['10101.A', '10101.A', '10101.A'], 'date' : ['4-5-2022', '13-5-2022', '16-5-2022'],'qty': [1, 1, 1], 'r':['a', 'b', 'c']}) scr['date'] = pd.to_datetime(scr['date'], format = &quot;%d-%m-%Y&quot;) err['date'] = pd.to_datetime(err['date'], format = &quot;%d-%m-%Y&quot;) </code></pre> <p>Doing:</p> <pre><code>df = scr.merge(err, on='id', suffixes=[None, '_er']).drop_duplicates() report = df[df['date_er'].ge(df['date']) &amp; df['qty_er'].sub(df['qty']).ge(0)] print(report[['id', 'date', 'qty', 'r']]) </code></pre> <p>Output:</p> <pre><code> id date qty r 1 10101.A 2022-05-10 1 b 2 10101.A 2022-05-10 1 c 7 10101.A 2022-05-09 1 b 8 10101.A 2022-05-09 1 c </code></pre> <hr /> <p>Another approach would be a sql-like one, this may be more efficient on a large dataset:</p> <pre><code>from pandasql import sqldf pysqldf = lambda q: sqldf(q, globals()) query = ''' select distinct scr.id, scr.date, scr.qty, r from scr join err on scr.id == err.id and err.date &gt;= scr.date and err.qty - scr.qty &gt;= 0 ''' df = pysqldf(query) df.date = pd.to_datetime(df.date) print(df) ... id date qty r 0 10101.A 2022-05-10 1 b 1 10101.A 2022-05-10 1 c 2 10101.A 2022-05-09 1 b 3 10101.A 2022-05-09 1 c </code></pre>
python|pandas|nested-loops
2
821
59,695,402
Iterate over list elements in Pandas dataframe column and match with values in a different dataframe
<p>I have two dataframes, I want to iterate over the elements in each list in the Companies column and match it with the company names in my second dataframe only if the date from the first dataframe occurs after the date of the second dataframe. I want two columns for the name matches and two columns for the date matches returned. </p> <pre><code>df = pd.DataFrame(columns=['Customer','Companies', 'Date']) df = df.append({'Customer':'Gold', 'Companies':['Gold Ltd', 'Gold X', 'Gold De'], 'Date':'2019-01-07'}, ignore_index=True) df = df.append({'Customer':'Micro', 'Companies':['Microf', 'Micro Inc', 'Micre'], 'Date':'2019-02-10'}, ignore_index=True) Customer Companies Date 0 Gold [Gold Ltd, Gold X, Gold De] 2019-01-07 1 Micro [Microf, Micro Inc, Micre] 2019-02-10 df2 = pd.DataFrame(columns=['Companies', 'Date']) df2 = df2.append({'Companies':'Gold Ltd', 'Date':'2019-01-01'}, ignore_index=True) df2 = df2.append({'Companies':'Gold X', 'Date':'2020-01-07'}, ignore_index=True) df2 = df2.append({'Companies': 'Gold De', 'Date':'2018-07-07'}, ignore_index=True) df2 = df2.append({'Companies':'Microf', 'Date':'2019-02-18'}, ignore_index=True) df2 = df2.append({'Companies':'Micro Inc', 'Date':'2017-09-27'}, ignore_index=True) df2 = df2.append({'Companies':'Micre', 'Date':'2018-12-11'}, ignore_index=True) Companies Date 0 Gold Ltd 2019-01-01 1 Gold X 2020-01-07 2 Gold De 2018-07-07 3 Microf 2019-02-18 4 Micro Inc 2017-09-27 5 Micre 2018-12-11 def match_it(d1, d2): for companies in d1['Companies']: for company in companies: if d2['Companies'].str.contains(company).any(): mask = d1.Companies.apply(lambda x: company in x) dff = d1[mask] date1 = datetime.strptime(dff['Date'].values[0], '%Y-%m-%d').date() date2 = datetime.strptime(d2[d2['Companies']==company]['Date'].values[0], '%Y-%m-%d').date() if date2 &lt; date1: print(d2[d2['Companies']==company]) new_row = pd.Series([d2[d2['Companies']==company]['Date'], d2[d2['Companies']==company]['Companies']]) return new_row </code></pre> <p>Desired Output:</p> <pre><code>Customer Companies Date Name_1 Date_1 Name_2 Date_2 Gold [Gold Ltd, Gold X, Gold De] 2019-01-07 Gold Ltd 2019-01-01 Gold De 2018-07-07 Micro [Microf, Micro Inc, Micre] 2019-02-10 Micro Inc 2017-09-27 Micre 2018-12-11 </code></pre>
<p>Start with the more pandasonic way of converting the <em>Date</em> columns in both DataFrames from <em>string</em> to <em>datetime</em>:</p> <pre><code>df.Date = pd.to_datetime(df.Date) df2.Date = pd.to_datetime(df2.Date) </code></pre> <p>Then proceed as follows:</p> <pre><code>df3 = df.explode('Companies') df3 = df3.merge(df2, on='Companies', suffixes=('_x', '')) df3 = df3[df3.Date_x &gt; df3.Date].drop(columns='Date_x') df3.rename(columns={'Companies': 'Name'}, inplace=True) df3['idx'] = df3.groupby('Customer').cumcount() df3 = df3.pivot(index='Customer',columns='idx') df3 = df3.swaplevel(axis=1) df3 = df3.sort_index(axis=1, ascending=[True, False]) cols = [] for i in range(1, df3.columns.size // 2 + 1): cols.extend(['Name_' + str(i), 'Date_' + str(i)]) df3.columns = cols result = df.merge(df3, how='left', left_on='Customer', right_index=True) </code></pre> <p>The result is just as you want.</p> <p>To understand the details, run each instruction separately and print the result. It is better to see the result for yourself than to read a description.</p> <p>Caution: <em>Explode</em> is a relatively new function, added in <em>Pandas</em> version <em>0.25</em>. If you have an older version of <em>Pandas</em>, start by upgrading it.</p> <h1>Edit following the comment as of 03:25:19Z</h1> <p><em>df1</em> can have more columns.</p> <p>To test it, I added an <em>Xxx</em> column to <em>df1</em>. The only change required in this case is to block these additional columns from copying to <em>df3</em>. To do this, the first instruction should be appended with:</p> <pre><code>.drop(columns=['Xxx']) </code></pre> <p>(in the general case, replace <em>'Xxx'</em> with the actual list of additional columns).</p> <p>To check the case of a different number of output columns, I changed the <em>Date</em> for the <em>Gold X</em> company in <em>df2</em> to <em>2019-01-06</em>, so that this company will also be included in the output.</p> <p>For your data, with the above changes, the result is:</p> <pre><code> Customer Companies Date Xxx Name_1 Date_1 Name_2 Date_2 Name_3 Date_3 0 Gold [Gold Ltd, Gold X, Gold De] 2019-01-07 Xxx1 Gold Ltd 2019-01-01 Gold X 2019-01-06 Gold De 2018-07-07 1 Micro [Microf, Micro Inc, Micre] 2019-02-10 Xxx2 Micro Inc 2017-09-27 Micre 2018-12-11 NaN NaT </code></pre> <p>So, as you can see:</p> <ul> <li>The result also contains the added column (<em>Xxx</em>).</li> <li>The output also contains the <em>Name_3</em> and <em>Date_3</em> columns.</li> <li>As only 2 matches were found for the second row from <em>df1</em>, these columns contain <em>NaN</em> and <em>NaT</em> (the <em>Pandas</em> counterparts of <em>None</em>) there.</li> </ul>
python|pandas
1
822
32,281,529
What is the correct way to mix feature sparse matrices with sklearn?
<p>The other day I was dealing with a machine learning task that required to extract several types of feature matrices. I save this feature matrices as numpy arrays in disk in order to later use them in some estimator (this was a classification task). After all, when I wanted to use all the features I just concatenated the matrices in order to have a big feature matrix. When I obtained this big feature matrix I presented it to an estimator.</p> <p>I do not know if this is the correct way to work with a feature matrix that has a lot of patterns (counts) in it. <strong>What other approaches should I use to mix correctly several types of features?</strong>. However, looking through the documentation I found <a href="http://scikit-learn.org/stable/modules/generated/sklearn.pipeline.FeatureUnion.html#sklearn.pipeline.FeatureUnion" rel="nofollow">FeatureUnion</a> that seems to do this task.</p> <p>For example, Let's say I would like to create a big feature matrix of 3 vectorizer approaches <code>TfidfVectorizer</code>, <code>CountVectorizer</code> and <code>HashingVectorizer</code> This is what I tried following the <a href="http://scikit-learn.org/dev/auto_examples/feature_stacker.html#example-feature-stacker-py" rel="nofollow">documentation example</a>:</p> <pre><code>#Read the .csv file import pandas as pd df = pd.read_csv('file.csv', header=0, sep=',', names=['id', 'text', 'labels']) #vectorizer 1 from sklearn.feature_extraction.text import TfidfVectorizer tfidf_vect = TfidfVectorizer(use_idf=True, smooth_idf=True, sublinear_tf=False, ngram_range=(2,2)) #vectorizer 2 from sklearn.feature_extraction.text import CountVectorizer bow = CountVectorizer(ngram_range=(2,2)) #vectorizer 3 from sklearn.feature_extraction.text import HashingVectorizer hash_vect = HashingVectorizer(ngram_range=(2,2)) #Combine the above vectorizers in one single feature matrix: from sklearn.pipeline import FeatureUnion combined_features = FeatureUnion([("tfidf_vect", tfidf_vect), ("bow", bow), ("hash",hash_vect)]) X_combined_features = combined_features.fit_transform(df['text'].values) y = df['labels'].values #Check the matrix print X_combined_features.toarray() </code></pre> <p>Then:</p> <pre><code>[[ 0. 0. 0. ..., 0. 0. 0.] [ 0. 0. 0. ..., 0. 0. 0.] [ 0. 0. 0. ..., 0. 0. 0.] ..., [ 0. 0. 0. ..., 0. 0. 0.] [ 0. 0. 0. ..., 0. 0. 0.] [ 0. 0. 0. ..., 0. 0. 0.]] </code></pre> <p>Split the data:</p> <pre><code>from sklearn import cross_validation X_train, X_test, y_train, y_test = cross_validation.train_test_split(X_combined_features,y, test_size=0.33) </code></pre> <p>So I have a few questions: <strong>Is this the right approach to mix several feature extractors in order to yield a big feature matrix?</strong> and <strong>assume I create my own "vectorizers" and they return sparse matrices, how can I use correctly the FeatureUnion interface to mix them with the above 3 features?</strong>.</p> <p><strong>update</strong></p> <p>Let's say that I have a matrix like this:</p> <p>Matrix A (<code>(152, 33)</code>)</p> <pre><code>[[ 0. 0. 0. ..., 0. 0. 0.] [ 0. 0. 0. ..., 0. 0. 0.] [ 0. 0. 0. ..., 0. 0. 0.] ..., [ 0. 0. 0. ..., 0. 0. 0.] [ 0. 0. 0. ..., 0. 0. 0.] [ 0. 0. 0. ..., 0. 0. 
0.]] </code></pre> <p>Then with my vectorizer that returns a <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.asarray.html" rel="nofollow">numpy array</a> I get this feature matrix:</p> <p>Matrix B (<code>(152, 10)</code>)</p> <pre><code>[[4210 228 25 ..., 0 0 0] [4490 180 96 ..., 10 4 6] [4795 139 8 ..., 0 0 1] ..., [1475 58 3 ..., 0 0 0] [4668 256 25 ..., 0 0 0] [1955 111 10 ..., 0 0 0]] </code></pre> <p>Matrix C (<code>(152, 46)</code>)</p> <pre><code>[[ 0 0 0 ..., 0 0 0] [ 0 0 0 ..., 0 0 17] [ 0 0 0 ..., 0 0 0] ..., [ 0 0 0 ..., 0 0 0] [ 0 0 0 ..., 0 0 0] [ 0 0 0 ..., 0 0 0]] </code></pre> <p>How can I merge A, B and C correctly with <code>numpy.hstack</code>,<code>scipy.sparse.hstack</code> or <code>FeatureUnion</code>? . Do you guys think this is a correct pipeline-approach to follow for any machine learning task?</p>
<blockquote> <p>Is this the right approach to mix several feature extractors in order to yield a big feature matrix?</p> </blockquote> <p>In terms of correctness of the result, your approach is right, since <code>FeatureUnion</code> runs each individual transformer on the input data and concatenates the resulting matrices horizontally. However, it's not the only way, and which way is better in terms of efficiency will depend on your use case (more on this later).</p> <blockquote> <p>Assume I create my own "vectorizers" and they return sparse matrices, how can I use correctly the FeatureUnion interface to mix them with the above 3 features?</p> </blockquote> <p>Using <code>FeatureUnion</code>, you simply need to append your new transformer to the transformer list:</p> <pre><code>custom_vect = YourCustomVectorizer() combined_features = FeatureUnion([("tfidf_vect", tfidf_vect), ("bow", bow), ("hash", hash_vect), ("custom", custom_vect)]) </code></pre> <p>However, if your input data and most of the transformers are fixed (e.g. when you're experimenting with the inclusion of a new transformer), the above approach will lead to a lot of re-computation. In that case, an alternative approach is to pre-compute and store the intermediate results of the transformers (matrices or sparse matrices), and concatenate them manually using <code>numpy.hstack</code> or <code>scipy.sparse.hstack</code> when needed.</p> <p>If your input data is always changing but the list of transformers is fixed, <code>FeatureUnion</code> offers more convenience. Another advantage of it is that it has the option of <code>n_jobs</code>, which helps you parallelize the fitting process.</p> <hr> <p>Side note: It seems a little bit strange to mix a hashing vectorizer with the other vectorizers, since a hashing vectorizer is typically used when you cannot afford to use the exact versions.</p>
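<p>For the update with the concrete <code>A</code>, <code>B</code> and <code>C</code> matrices, a minimal sketch of the manual route could look like the following. It assumes <code>A</code> is a scipy sparse matrix while <code>B</code> and <code>C</code> are dense NumPy arrays; the random matrices below are just placeholders with the shapes from the update:</p>
<pre><code>import numpy as np
from scipy import sparse

# Placeholders standing in for the three feature blocks from the update
A = sparse.random(152, 33, density=0.1, format='csr')   # e.g. a vectorizer output (sparse)
B = np.random.randint(0, 100, size=(152, 10))            # dense custom features
C = np.random.randint(0, 20, size=(152, 46))             # dense custom features

# scipy.sparse.hstack keeps the combined matrix sparse, which most
# scikit-learn estimators accept directly
X_combined = sparse.hstack([A, sparse.csr_matrix(B), sparse.csr_matrix(C)]).tocsr()
print(X_combined.shape)  # (152, 89)
</code></pre>
<p>If all three blocks were dense you could use <code>numpy.hstack</code> instead, but converting the dense blocks to sparse as above avoids densifying the (usually much wider) vectorizer output.</p>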
python|numpy|pandas|scikit-learn
6
823
32,216,630
Get Maximum Value from Dataframe
<p>I'm running the following code to get the maximum value in a dataframe. It works fine.</p> <pre><code> p_max_shot1_15_CH8 = corrected_shot1_data[['CH 8 [psi]']][0.0119:0.0122].max() </code></pre> <p>I would like to use the max value for math, but it is not a value but another dataframe</p> <pre><code> CH 8 [psi] 1.419032 dtype: float64 </code></pre> <p>How do I get just the maximum value with no index?</p>
<p>The double brackets <code>[['CH 8 [psi]']]</code> select a one-column DataFrame, so <code>.max()</code> returns a one-element Series rather than a number. Select the column with single brackets (a Series) and <code>.max()</code> gives you the scalar directly:</p> <pre><code>p_max_shot1_15_CH8 = corrected_shot1_data['CH 8 [psi]'][0.0119:0.0122].max() </code></pre> <p>Alternatively, keep the double brackets and pull the number out of the resulting Series with <code>.values</code>:</p> <pre><code>p_max_shot1_15_CH8 = corrected_shot1_data[['CH 8 [psi]']][0.0119:0.0122].max().values[0] </code></pre>
pandas|ipython-notebook
0
824
32,483,772
How to index numpy array on subset of array of bools that is smaller than numpy array's dimensions?
<p>My question is inspired by another one: <a href="https://stackoverflow.com/questions/32481491/intersection-of-2d-and-1d-numpy-array/32483377#32483377">Intersection of 2d and 1d Numpy array</a> I am looking for a succinct solution that does not use <code>in1d</code></p> <p>The setup is this. I have a <code>numpy array</code> of <code>bools</code> telling me which values of <code>numpy array A</code> I should set equal to 0, called <code>listed_array</code>. However, I want to ignore the information in the first 3 columns of <code>listed_array</code> and only set A to zero as indicated in the other columns of listed_array.</p> <p>I know the following is incorrect:</p> <pre><code>A[listed_array[:, 3:]] = 0 </code></pre> <p>I also know I can pad this subset of <code>listed_array</code> with a call to <code>hstack</code>, and this will yield correct output, but <strong>is there something more succinct?</strong></p>
<p>If I understand the question, this should do it:</p> <pre><code>A[:, 3:][listed_array[:, 3:]] = 0 </code></pre> <p>which is a concise version of</p> <pre><code>mask3 = listed_array[:, 3:] A3 = A[:, 3:] # This slice is a *view* of A, so changing A3 changes A. A3[mask3] = 0 </code></pre>
python|numpy
1
825
40,426,118
How to create bounding boxes around the ROIs using TensorFlow
<p>I'm using inception v3 and tensorflow to identify some objects within the image. However, it just create a list of possible objects and I need it to inform their position in the image.</p> <p>I'm following the flowers tutorial: <a href="https://www.tensorflow.org/versions/r0.9/how_tos/image_retraining/index.html" rel="noreferrer">https://www.tensorflow.org/versions/r0.9/how_tos/image_retraining/index.html</a></p> <blockquote> <p>bazel-bin/tensorflow/examples/image_retraining/retrain --image_dir ~/flower_photos</p> </blockquote>
<p>Inception is a classification network, not a localization network.</p> <p>You need another architecture to predict the bounding boxes, like <a href="https://people.eecs.berkeley.edu/~rbg/papers/r-cnn-cvpr.pdf" rel="noreferrer">R-CNN</a> and its newer (and faster) variants (Fast R-CNN, Faster R-CNN).</p> <p>Optionally, if you want to use Inception and you have a training set annotated with class and bounding box coordinates, you can add a regression head to Inception and make the network learn to regress the bounding box coordinates. It's the same idea as transfer learning, but you just use the last convolutional layer output as a feature extractor and train this new head to regress 4 coordinates + 1 class for every bounding box in your training set.</p>
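<p>For illustration only, here is a rough sketch of that regression-head idea. It is not the retraining script from the question: it assumes a much newer TensorFlow with <code>tf.keras</code> available, predicts a single box per image for simplicity, and the number of classes and the loss choices are placeholders:</p>
<pre><code>import tensorflow as tf

# Frozen pretrained backbone used purely as a feature extractor
backbone = tf.keras.applications.InceptionV3(include_top=False, pooling='avg',
                                             input_shape=(299, 299, 3))
backbone.trainable = False

features = backbone.output
# Regression head: 4 bounding-box coordinates per image (one box per image here)
bbox = tf.keras.layers.Dense(4, name='bbox')(features)
# Classification head; n_classes is an assumed placeholder
n_classes = 5
cls = tf.keras.layers.Dense(n_classes, activation='softmax', name='cls')(features)

model = tf.keras.Model(inputs=backbone.input, outputs=[bbox, cls])
model.compile(optimizer='adam',
              loss={'bbox': 'mse', 'cls': 'sparse_categorical_crossentropy'})
</code></pre>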
python|neural-network|tensorflow|artificial-intelligence
6
826
40,683,430
Training letter images to a neural network with full-batch training
<p>According to <a href="https://iamtrask.github.io/2015/07/12/basic-python-network/" rel="nofollow noreferrer">this tutorial</a>(Pure Python with NumPy), I want to build a simple(at simplest level for learning purpose) neural network(Perceptron) that can train to recognize "A" letter. In this tutorial, in the proposed example, they build a network that can learn "AND" logical operator. In this case, we have some inputs(4*3 Matrix) and one output(4*1 Matrix):</p> <p><a href="https://i.stack.imgur.com/63dzQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/63dzQ.png" alt="enter image description here"></a></p> <p>Each time we subtract output matrix with input matrix and calculate the error and updating rate and so on.</p> <p>Now I want to give an image as an input, in this case, What will be my output? How can I define that image is an "A" letter? one solution is define "1" as "A" letter and "0" for "non-A" , But if my output is a scalar, How can I subtract it with hidden layer and calculate error and update weights? This tutorial uses "full-batch" training and multiply whole input matrix with weight matrix. I want to do with this method. The final destination is designing a neural net that can recognize "A" letter in the simplest form. I have no idea how to do this.</p>
<p>First off: it's great that you're trying to understand neural networks by programming them from scratch, instead of starting off with some complex library. Let me try to clear things up: your understanding here:</p> <blockquote> <p>Each time we subtract output matrix with input matrix and calculate the error and updating rate and so on.</p> </blockquote> <p>is not really correct. In your example, the input matrix <code>X</code> is what you present to the input of your neural network. The output <code>Y</code> is what you want the network to do for <code>X</code>: the first element <code>Y[0]</code> is the desired output for the first row of <code>X</code>, and so on. We often call this the "target vector". Now to calculate the loss function (i.e. the error) we compare the output of the network (<code>L2</code> in the linked example code) to the target vector <code>Y</code>. In words, we compare what we <em>want</em> the network to do (<code>Y</code>) to what it <em>really</em> does (<code>L2</code>). Then we take one step in a direction that brings the output closer to <code>Y</code>.</p> <p>Now, if you want to use an image as the input, you should think of each pixel in the image as one input variable. Previously, we had two input variables: A and B, for which we wanted to calculate the term X = A ∧ B.</p> <p><strong>Example</strong>:</p> <p>If we take an 8-by-8 pixel image, we have 8*8=64 input variables. Thus, our input matrix <code>X</code> should be a matrix with 65 columns (the 64 pixels of the image + 1 input as bias term, which is constantly =1) and one row per training example you have. E.g. if you have one image of each of the 26 letters, the matrix will contain 26 rows. </p> <p>The output (target) vector <code>Y</code> should have as many elements as <code>X</code> has rows, i.e. 26 in the previous example. Each element in <code>Y</code> is 1 if the corresponding input row is an A, and 0 if it is another letter. In our example, <code>Y[0]</code> would be 1, <code>Y[1:]</code> would be 0.</p> <p>Now, you can use the same code as before: the output <code>L2</code> will be a vector containing the network's predictions, which you can then compare to <code>Y</code> as before. </p> <p><strong>tl;dr</strong> The key idea is to forget that an image is 2D, and store each input image as a vector.</p>
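<p>To make those shapes concrete, here is a small sketch. The random pixel values below are placeholders for your flattened letter images, and the 26-letter, 8-by-8 setup is just the example from above:</p>
<pre><code>import numpy as np

n_letters, n_pixels = 26, 8 * 8                 # one example image per letter, 8x8 pixels
images = np.random.rand(n_letters, n_pixels)    # stand-in for your flattened letter images

# Input matrix: 64 pixel columns + 1 constant bias column = 65 columns
X = np.hstack([images, np.ones((n_letters, 1))])

# Target vector: 1 for the "A" example, 0 for every other letter
Y = np.zeros((n_letters, 1))
Y[0] = 1

print(X.shape, Y.shape)   # (26, 65) (26, 1)
</code></pre>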
python|numpy|machine-learning|neural-network|artificial-intelligence
2
827
40,359,666
Pandas Dataframe groupby: produce multiple output columns
<p>I have the following code: </p> <pre><code>def func(x): return (1, 2, 3) df.groupby[col].aggregate(func) </code></pre> <p>How to make three columns as the result of one aggregation function? I also tried returning np.array, pd.Series, but it doesn't help.</p>
<p>In <code>func()</code> you have to return a DataFrame, and I believe you should use <code>apply()</code> instead of <code>aggregate()</code>, for example like so:</p> <pre><code>def func(x): return pd.DataFrame([1,2,3]).T df.groupby(col).apply(func) </code></pre>
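<p>If the goal is three separate, named output columns per group rather than a nested DataFrame, returning a <code>pd.Series</code> from the applied function is another option. The column names and values below are made up for illustration:</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'col': ['a', 'a', 'b'], 'x': [1, 2, 3]})

def func(g):
    # g is the sub-DataFrame of one group; return one labelled value per output column
    return pd.Series([g['x'].sum(), g['x'].mean(), len(g)],
                     index=['total', 'mean', 'count'])

print(df.groupby('col').apply(func))
#      total  mean  count
# col
# a      3.0   1.5    2.0
# b      3.0   3.0    1.0
</code></pre>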
python|pandas|numpy|dataframe
3
828
61,921,978
Python Pandas Column to Minutes
<p>I've subtracted two datetimes from each other, like so:</p> <p><code>df['Time Difference'] = df['Time 1'] - df['Time 2']</code></p> <p>resulting in a timedelta object. I need the total number of minutes from this object, but I can't for the life of me figure it out. Currently, the "Time Difference" column looks like this:</p> <pre><code>1 0 days 00:01:00.000000000 2 0 days 00:04:00.000000000 3 0 days 00:03:00.000000000 4 0 days 00:01:00.000000000 5 0 days 00:03:00.000000000 </code></pre> <p>I've tried dividing by a numpy timedelta (which seems to be the most common suggestion) as well as by pandas timedelta, as well as a few other things. Operations such as <code>df['Time Difference'].seconds</code>, or <code>.seconds()</code>, or <code>.total_seconds</code>, (all suggestions I've seen for this), all give errors. I'm really at a loss for what to do here. I need this in minutes in order to make graphs in matplotlib, and I'm kind of stuck until I figure this out, so any suggestions are very much appreciated. Thanks!</p>
<p>use <code>dt.total_seconds()</code> and divide by 60 to get the minutes:</p> <pre><code>import pandas as pd df = pd.DataFrame({'td': pd.to_timedelta(['0 days 00:01:00.000000000', '0 days 00:04:00.000000000', '0 days 00:03:00.000000000', '1 days 00:01:00.000000000', '0 days 00:03:00.000000000'])}) df['delta_min'] = df['td'].dt.total_seconds() / 60 # df['delta_min'] # 0 1.0 # 1 4.0 # 2 3.0 # 3 1441.0 # 4 3.0 </code></pre>
python|pandas|data-science|timedelta
0
829
61,824,375
Replacing null value in Python with next available value by group
<pre><code>df = pd.DataFrame({ 'group': [1,1,1,2,2,2], 'value': [None,None,'A',None,'B',None] }) </code></pre> <p>I would like to replace missing values by the first next non missing value by group. The desired result is:</p> <pre><code>df = pd.DataFrame({ 'group': [1,1,1,2,2,2], 'value': ['A','A','A','B','B',None] }) </code></pre>
<p>The Easiest way as @Erfan mention using backfill method <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.DataFrameGroupBy.bfill.html" rel="nofollow noreferrer"><code>DataFrameGroupBy.bfill</code></a>.</p> <h2>Solution 1)</h2> <pre><code>&gt;&gt;&gt; df['value'] = df.groupby('group')['value'].bfill() &gt;&gt;&gt; df group value 0 1 A 1 1 A 2 1 A 3 2 B 4 2 B 5 2 NaN </code></pre> <h2>Solution 2)</h2> <p><a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.DataFrameGroupBy.bfill.html" rel="nofollow noreferrer"><code>DataFrameGroupBy.bfill</code></a> with <code>limit</code> parameter works perfectly as well here. From the pandas Documentation which nicely briefs the <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/missing_data.html" rel="nofollow noreferrer"><code>Limit the amount of filling</code></a> is worth to read. as per the doc <code>If we only want consecutive gaps filled up to a certain number of data points, we can use the limit keyword</code>.</p> <pre><code>&gt;&gt;&gt; df['value'] = df.groupby(['group']).bfill(limit=2) # &gt;&gt;&gt; df['value'] = df.groupby('group').bfill(limit=2) &gt;&gt;&gt; df group value 0 1 A 1 1 A 2 1 A 3 2 B 4 2 B 5 2 NaN </code></pre> <h2>Solution 3)</h2> <p>With <code>groupby()</code> we can also combine <code>fillna()</code> with <code>bfill()</code> along with limit parameter.</p> <pre><code>&gt;&gt;&gt; df.groupby('group').fillna(method='bfill',limit=2) value 0 A 1 A 2 A 3 B 4 B 5 None </code></pre> <h2>Solution 4)</h2> <p>Other way around using <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.transform.html" rel="nofollow noreferrer"><code>DataFrame.transform</code></a> function to fill the <code>value</code> column after group by with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.DataFrameGroupBy.bfill.html" rel="nofollow noreferrer"><code>DataFrameGroupBy.bfill</code></a>.</p> <pre><code>&gt;&gt;&gt; df['value'] = df.groupby('group')['value'].transform(lambda v: v.bfill()) &gt;&gt;&gt; df group value 0 1 A 1 1 A 2 1 A 3 2 B 4 2 B 5 2 None </code></pre> <h2>Solution 5)</h2> <p>You can use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.set_index.html" rel="nofollow noreferrer"><code>DataFrame.set_index</code></a> to add the <code>group</code> column to the index, making it unique, and do a simple <code>bfill()</code> via <code>groupby()</code>, then you can use reset index to its original state.</p> <pre><code>&gt;&gt;&gt; df.set_index('group', append=True).groupby(level=1).bfill().reset_index(level=1) group value 0 1 A 1 1 A 2 1 A 3 2 B 4 2 B 5 2 NaN </code></pre> <h2>Solution 6)</h2> <p>In case strictly not going for <code>groupby()</code> then below would be the easiest ..</p> <pre><code>&gt;&gt;&gt; df['value'] = df['value'].bfill() &gt;&gt;&gt; df group value 0 1 A 1 1 A 2 1 A 3 2 B 4 2 B 5 2 None </code></pre>
python|pandas|pandas-groupby
0
830
57,869,559
numpy: Different results between diff and gradient for finite differences
<p>I want to calculate the numerical derivative of two arrays <code>a</code> and <code>b</code>.</p> <p>If I do </p> <pre><code>c = diff(a) / diff(b) </code></pre> <p>I get what I want, but I lose the edge (the last point) so <code>c.shape ~= a.shape</code>.</p> <p>If I do</p> <pre><code>c = gradient(a, b) </code></pre> <p>then <code>c.shape = a.shape</code>, but I get a completely different result.</p> <p>I have read how gradient is calculated in numpy and I guess it does a completely different thing, although I don't quite understand the difference yet. But is there a way or another function to calculate the differential which also gives the values at the edges?</p> <p>And why is the result so different between <code>gradient</code> and <code>diff</code>?</p>
<p>These functions, although related, do different actions.</p> <p><code>np.diff</code> simply takes the differences of matrix slices along a given axis, and when used for the <code>n</code>-th difference it returns a matrix smaller by <code>n</code> along the given axis (what you observed in the <code>n=1</code> case). Please see: <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.diff.html" rel="nofollow noreferrer">https://docs.scipy.org/doc/numpy/reference/generated/numpy.diff.html</a></p> <p><code>np.gradient</code> produces a set of gradients of an array along all its dimensions while preserving its shape: <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.gradient.html" rel="nofollow noreferrer">https://docs.scipy.org/doc/numpy/reference/generated/numpy.gradient.html</a>. Please also observe that <code>np.gradient</code> should be executed on one input array; your second argument <code>b</code> does not make sense here (it was interpreted as the first non-keyword argument from <code>*varargs</code>, which is meant to describe spacings between the values of the first argument), hence the results that don't match your intuition.</p> <p>I would simply use <code>c = diff(a) / diff(b)</code> and append values to <code>c</code> if you really need to have <code>c.shape</code> match <code>a.shape</code>. For instance, you might append zeros if you expect the gradient to vanish close to the edges of your window.</p>
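<p>A small sketch of that padding idea — here repeating the last slope rather than appending a zero, just to keep the example generic; the arrays below are made up:</p>
<pre><code>import numpy as np

b = np.linspace(0.0, 1.0, 11)    # stand-in for your x values
a = b ** 2                       # stand-in for your y values

c = np.diff(a) / np.diff(b)      # one element shorter than a
c = np.append(c, c[-1])          # repeat the last slope so c.shape == a.shape

# In recent NumPy (1.13+), gradient can also take the coordinate array directly:
g = np.gradient(a, b)            # central differences, same shape as a
print(c.shape, g.shape)          # (11,) (11,)
</code></pre>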
numpy|numpy-ndarray
4
831
58,076,848
How to copy the current row and the next row value in a new dataframe using python?
<p>The df looks like below:</p> <pre><code>A B C 1 8 23 2 8 22 3 8 45 4 9 45 5 6 12 6 8 10 7 11 12 8 9 67 </code></pre> <p>I want to create a new df with every occurrence of 8 in 'B' together with the row that immediately follows it.</p> <p>The new df should look like below:</p> <pre><code>A B C 1 8 23 2 8 22 3 8 45 4 9 45 6 8 10 7 11 12 </code></pre>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#boolean-indexing" rel="nofollow noreferrer"><code>boolean indexing</code></a> with compared by shifted values with <code>|</code> for bitwise <code>OR</code>:</p> <pre><code>df = df[df.B.shift().eq(8) | df.B.eq(8)] print (df) A B C 0 1 8 23 1 2 8 22 2 3 8 45 3 4 9 45 5 6 8 10 6 7 11 12 </code></pre>
python|pandas|dataframe
5
832
57,894,304
showing of angle between two vector in python in degrees
<p>I wrote the following program in Python:</p> <pre><code>import math import numpy as np u = np.array([2,2]) v = np.array([0,3]) # calculate manually product = np.dot(u,v) normu = np.linalg.norm(u) normv = np.linalg.norm(v) cost = product / (normu * normv) </code></pre> <p>What I want is to show the angle in degrees: for instance, if it is equal to pi/4, I want to show it as 45 degrees. How can I do that? Thanks in advance. I know there are several functions, like np.rad2deg() or np.deg2rad(), but none of them was useful, or maybe I am not using them correctly. Please help me.</p>
<p>The <code>cost</code> you have is not the angle. It is the cosine of the angle. You need to take the inverse cosine THEN convert radians to degrees.</p> <pre><code>print(np.rad2deg(np.arccos(cost))) #45.000000000000007 </code></pre>
python|numpy
1
833
57,925,014
Filtering rows on multiple string conditions at the same column
<p>I want to filter a dataframe on multiple conditions. Let's say I have one column called 'detail', i want to get a dataframe where the 'detail' column values match the following:</p> <pre><code>detail = unidecode.unidecode(str(row['detail']).lower()) </code></pre> <p>So now I have all <code>detail</code> rows unidecoded and to lowercase, then i want to extract the rows that start with some substring like:</p> <pre class="lang-py prettyprint-override"><code>detail.startswith('bomb') </code></pre> <p>And finally also take the rows where another integer column equals 100.</p> <p>I tried to do this but obviously it doesn't work:</p> <pre><code>llista_dfs['df_bombes'] = df_filtratge[df_filtratge['detail'].str.lower().startswith('bomb') or df_filtratge['family']==100] </code></pre> <p>This line above is what I would like to execute but I'm not sure which is the syntax to be able to achieve this in a single line of code (if that's possible).</p> <p>That's an example of what the code should do:</p> <p><strong>Initial table:</strong></p> <pre><code> detail family 0 bòmba 90 1 boMbá 87 2 someword 100 3 someotherword 65 4 Bombá 90 </code></pre> <p><strong>Result table:</strong></p> <pre><code> detail family 0 bòmba 90 1 boMbá 87 2 someword 100 4 Bombá 90 </code></pre>
<p>Actually @user3483203's comment is the right solution as to filter in pandas you use <code>&amp;</code> and <code>|</code> instead of <code>and</code> and <code>or</code>. In any case in case you want to get rid of <code>unidecode</code> you might use this solution:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd txt="""0 bòmba 90 1 boMbá 87 2 someword 100 3 someotherword 65 4 Bombá 90""" df = [list(filter(lambda x: x!='', t.split(' ')))[1:] for t in txt.split("\n")] df = pd.DataFrame(df, columns=["details", 'family']) df["family"] = df["family"].astype(int) cond1 = df["details"].str.normalize('NFKD')\ .str.encode('ascii', errors='ignore')\ .str.decode('utf-8')\ .str.lower()\ .str.startswith('bomba') cond2 = df["family"]==100 df[cond1 | cond2] </code></pre>
python|pandas|dataframe
1
834
58,112,952
Filter column in pandas and convert to float
<p>I have a pandas dataframe, which contains some pretty infiltered data</p> <pre><code>df['Q53'] OUTPUT: 0 Hvor mange timer træner din virksomhed medarbe... 3 NaN 4 NaN 5 NaN 6 2 7 NaN 8 10 9 NaN 10 50 11 NaN 12 ? 13 ? 14 8 15 NaN 16 2 17 0 18 1 19 1 20 5 21 7x3 timer 22 NaN 23 8 timer 24 NaN 25 0 26 8 27 NaN </code></pre> <p>the issue now, is that i want to just have the integers left in the column, and then cast them as a float, so i can do some data visualization with the column. </p> <p>I was wondering if i could do some standardized filtering, but i could not get get it to work.</p> <p>Is there an operation, where i can filter out all <code>NaN</code> and <code>String</code> values, and just be left with a value, that could be casted into a <code>float</code> or <code>int</code></p>
<p>You can check <code>str(x).isdigit()</code> for each value and use the resulting boolean mask to keep only the rows that hold plain integers:</p> <pre><code>df[df['Q53'].apply(lambda x: str(x).isdigit())] </code></pre>
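<p>To finish the job described in the question — dropping everything that is not a plain integer and casting what is left to <code>float</code> — a short sketch with made-up data in the question's <code>Q53</code> column:</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'Q53': ['2', None, '?', '10', '7x3 timer', '8 timer', '0']})

mask = df['Q53'].apply(lambda x: str(x).isdigit())
clean = df.loc[mask, 'Q53'].astype(float)
print(clean)
# 0     2.0
# 3    10.0
# 6     0.0
# Name: Q53, dtype: float64
</code></pre>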
python|pandas|dataframe
2
835
34,039,290
How to loop through a dataframe, create a new column and append values to it in python
<p>I have the following problem. I have a dataframe with several columns, one of those contains strings as values. I want to loop through this column, change those values and save the changed values in a new column. </p> <p>The code I have written so far looks like this:</p> <pre><code>def get_classes(x): for index, string in df['column'].iteritems(): listi = string.split(',') Classes=[] for value in listi: count=listi.count(value) if count &gt;= 3: Classes.append(value) Unique=(',').join(sorted(list(set(Classes)))) df['NewColumn']=Unique End.apply(get_classes) </code></pre> <p>It loops through the rows of <code>df['column']</code>, splitting the string at each <code>,</code>(creating a list called listi) and creates an empty <code>list</code> called classes. It then counts each value in listi and appends it to Classes if it occures at least three times in the list. The finished list is then <code>sorted</code> and <code>set()</code>, so that all objects in the list are unique, and finally joined at comma to a string again. Then I want to append this unique list of value in a new column, at the same index position as the row value the changed value is derived from. As example:</p> <pre><code>df column NewColumn 0 A,A,A,C A 1 C,B,C,C C 2 B,B,B,B B </code></pre> <p>My code seems to work fine when I do <code>print Unique</code> instead of <code>df['NewColumn']=Unique</code>, as it then prints all the transformed values. If I execute the code like in my example however, the <code>NewColumn</code> of the dataframe is completely filled with the same value, which seems to correspond to the original value of the last row in the df. Can someone explain to me what the problem here is? </p>
<p>You can use the powerful <code>Counter</code> from <code>collections</code>:</p> <pre><code>from collections import Counter foo = lambda x: ','.join(sorted([k for k,v in Counter(x).items() if v&gt;=3])) df['new'] = df['column'].str.split(',').map(foo) #In [33]: df #Out[33]: # column NewColumn new #0 A,A,A,C A A #1 C,B,C,C C C #2 B,B,B,B B B </code></pre>
python|for-loop|pandas|dataframe
2
836
34,046,048
Debugging nans in the backward pass
<p>I'm trying to debug a somewhat complicated and non-canonical NN architecture. Computing the forward pass is fine and is giving me the expected results, but when I try to optimize using Adam or any of the standard optimizers, even after one iteration with a very small learning rate I get nans everywhere. I'm trying to localize them and was wondering if there's a way to catch the first occurrence of a nan and detect in which op it arose? I tried <code>tf.add_check_numerics_ops()</code> but it doesn't appear to be doing anything, or perhaps I'm using it incorrectly.</p>
<p>Debugging NaNs can be tricky, especially if you have a large network. <a href="https://www.tensorflow.org/api_docs/python/tf/compat/v1/add_check_numerics_ops" rel="nofollow noreferrer"><code>tf.add_check_numerics_ops()</code></a> adds ops to the graph that assert that each floating point tensor in the graph does not contain any NaN values, but does not run these checks by default. Instead it returns an op that you can run periodically, or on every step, as follows:</p> <pre class="lang-py prettyprint-override"><code>train_op = ... check_op = tf.add_check_numerics_ops() sess = tf.Session() sess.run([train_op, check_op]) # Runs training and checks for NaNs </code></pre>
tensorflow
24
837
36,891,977
Diff of two Dataframes
<p>I need to compare two dataframes of different size row-wise and print out non matching rows. Lets take the following two:</p> <pre><code>df1 = DataFrame({ 'Buyer': ['Carl', 'Carl', 'Carl'], 'Quantity': [18, 3, 5, ]}) df2 = DataFrame({ 'Buyer': ['Carl', 'Mark', 'Carl', 'Carl'], 'Quantity': [2, 1, 18, 5]}) </code></pre> <p>What is the most efficient way to row-wise over df2 and print out rows not in df1 e.g.</p> <pre><code>Buyer Quantity Carl 2 Mark 1 </code></pre> <p>Important: I do not want to have row:</p> <pre><code>Buyer Quantity Carl 3 </code></pre> <p>Included in the diff:</p> <p>I have already tried: <a href="https://stackoverflow.com/questions/32972856/comparing-two-dataframes-of-different-length-row-by-row-and-adding-columns-for-e">Comparing two dataframes of different length row by row and adding columns for each row with equal value</a> and <a href="https://stackoverflow.com/questions/17095101/outputting-difference-in-two-pandas-dataframes-side-by-side-highlighting-the-d">Compare two DataFrames and output their differences side-by-side</a></p> <p>But these do not match with my problem.</p>
<p><a href="http://pandas.pydata.org/pandas-docs/version/0.18.0/generated/pandas.DataFrame.merge.html" rel="noreferrer"><code>merge</code></a> the 2 dfs using method 'outer' and pass param <code>indicator=True</code> this will tell you whether the rows are present in both/left only/right only, you can then filter the merged df after:</p> <pre><code>In [22]: merged = df1.merge(df2, indicator=True, how='outer') merged[merged['_merge'] == 'right_only'] Out[22]: Buyer Quantity _merge 3 Carl 2 right_only 4 Mark 1 right_only </code></pre>
python|pandas|dataframe|diff
130
838
36,763,771
dataframe to long format
<p>I have the following df:</p> <pre><code>tz.head() state 2004 2005 2006 2007 2008 2009 2010 2011 2012 2013 2014 2015 0 AL 5.7 4.5 4.0 4.0 5.7 11.0 10.5 9.6 8.0 7.2 6.8 6.1 1 AK 7.5 6.9 6.6 6.3 6.7 7.7 7.9 7.6 7.1 6.9 6.9 6.5 2 AZ 5.0 4.7 4.2 3.9 6.2 9.9 10.4 9.5 8.3 7.7 6.8 6.1 3 AR 5.7 5.2 5.2 5.3 5.5 7.8 8.2 8.3 7.6 7.3 6.1 5.2 4 CA 6.2 5.4 4.9 5.4 7.3 11.2 12.2 11.7 10.4 8.9 7.5 6.2 </code></pre> <p>I would like to change it so that looks like this:</p> <pre><code>year state unemployment 2004 AL 5.7 2005 AL 4.5 2006 AL 4.0 2007 AL 4.0 2008 AL 5.7 2009 AL 11.0 2010 AL 10.5 2011 AL 9.6 2012 AL 8.0 2013 AL 7.2 2014 AL 6.8 2015 AL 6.1 2004 AK 7.5 2005 AK 6.9 2006 AK 6.6 2007 AK 6.3 2008 AK 6.7 2009 AK 7.7 2010 AK 7.9 2011 AK 7.6 2012 AK 7.1 2013 AK 6.9 2014 AK 6.9 2015 AK 6.5 </code></pre> <p>The reason is that I have a df that is similarly shaped and I need to merge the two dfs. I have recently had similar df shaping issues that I have been unable to find simple quick solutions to with python. Does anyone know how to solve this kind of problem?</p>
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.melt.html" rel="noreferrer"><code>melt</code></a>:</p> <pre><code>print pd.melt(df,id_vars=['state'],var_name='year', value_name='unemployment') state year unemployment 0 AL 2004 5.7 1 AK 2004 7.5 2 AZ 2004 5.0 3 AR 2004 5.7 4 CA 2004 6.2 5 AL 2005 4.5 6 AK 2005 6.9 7 AZ 2005 4.7 8 AR 2005 5.2 9 CA 2005 5.4 10 AL 2006 4.0 11 AK 2006 6.6 12 AZ 2006 4.2 13 AR 2006 5.2 14 CA 2006 4.9 15 AL 2007 4.0 16 AK 2007 6.3 17 AZ 2007 3.9 18 AR 2007 5.3 19 CA 2007 5.4 20 AL 2008 5.7 21 AK 2008 6.7 22 AZ 2008 6.2 23 AR 2008 5.5 24 CA 2008 7.3 25 AL 2009 11.0 26 AK 2009 7.7 27 AZ 2009 9.9 28 AR 2009 7.8 29 CA 2009 11.2 30 AL 2010 10.5 31 AK 2010 7.9 32 AZ 2010 10.4 33 AR 2010 8.2 34 CA 2010 12.2 35 AL 2011 9.6 36 AK 2011 7.6 37 AZ 2011 9.5 38 AR 2011 8.3 39 CA 2011 11.7 40 AL 2012 8.0 41 AK 2012 7.1 42 AZ 2012 8.3 43 AR 2012 7.6 44 CA 2012 10.4 45 AL 2013 7.2 46 AK 2013 6.9 47 AZ 2013 7.7 48 AR 2013 7.3 49 CA 2013 8.9 50 AL 2014 6.8 51 AK 2014 6.9 52 AZ 2014 6.8 53 AR 2014 6.1 54 CA 2014 7.5 55 AL 2015 6.1 56 AK 2015 6.5 57 AZ 2015 6.1 58 AR 2015 5.2 59 CA 2015 6.2 </code></pre>
python|pandas
13
839
36,960,086
GroupBy - How to extract seconds from DateTime with diff()
<p>I have the following dataframe:</p> <pre><code>In [372]: df_2 Out[372]: A ID3 DATETIME 0 B-028 b76cd912ff 2014-10-08 13:43:27 1 B-054 4a57ed0b02 2014-10-08 14:26:19 2 B-076 1a682034f8 2014-10-08 14:29:01 3 B-023 b76cd912ff 2014-10-08 18:39:34 4 B-023 f88g8d7sds 2014-10-08 18:40:18 5 B-033 b76cd912ff 2014-10-08 18:44:30 6 B-032 b76cd912ff 2014-10-08 18:46:00 7 B-037 b76cd912ff 2014-10-08 18:52:15 8 B-046 db959faf02 2014-10-08 18:59:59 9 B-053 b76cd912ff 2014-10-08 19:17:48 10 B-065 b76cd912ff 2014-10-08 19:21:38 </code></pre> <p>And I want to find the difference between different entries - grouped by <code>'ID3'</code>.</p> <p>I am trying to use <code>transform()</code> on a <code>GroupBy</code> like this:</p> <pre><code>In [379]: df_2['diff'] = df_2.sort_values(by='DATETIME').groupby('ID3')['DATETIME'].transform(lambda x: x.diff()); df_2['diff'] Out[379]: 0 NaT 1 NaT 2 NaT 3 1970-01-01 04:56:07 4 NaT 5 1970-01-01 00:04:56 6 1970-01-01 00:01:30 7 1970-01-01 00:06:15 8 NaT 9 1970-01-01 00:25:33 10 1970-01-01 00:03:50 Name: diff, dtype: datetime64[ns] </code></pre> <p>I have also tried with <code>x.diff().astype(int)</code> for <code>lambda</code>, with the exact same result.</p> <p>Datatype of both <code>'DATETIME'</code> and <code>'diff'</code> is: <code>datetime64[ns]</code></p> <p>What I am trying to achieve is have <code>diff</code> represented in seconds instead of some time in relation to Epoch time.</p> <p>I have figured out that I can convert <code>df_2['diff']</code> to <code>TimeDelta</code> and then extract seconds in one chained call at this point, like this:</p> <pre><code>In [405]: df_2['diff'] = pd.to_timedelta(df_2['diff']).map(lambda x: x.total_seconds()); df_2['diff'] Out[407]: 0 NaN 1 NaN 2 NaN 3 17767.0 4 NaN 5 296.0 6 90.0 7 375.0 8 NaN 9 1533.0 10 230.0 Name: diff, dtype: float64 </code></pre> <p>Is there a way to achieve this (seconds as values for <code>df_2['diff']</code>) in one step in the <code>transform</code> instead of having to take a couple of steps in the process?</p> <p>Finally, I have already tried making conversion to <code>TimeDelta</code> in <code>transform</code> without any success.</p> <p>Thanks for the help!</p>
<p><strong>UPDATE:</strong> <code>transform()</code> from <code>class NDFrameGroupBy(GroupBy)</code> doesn't seem to do downcasting and works as expected: </p> <pre><code>In [220]: (df_2[['ID3','DATETIME']] .....: .sort_values(by='DATETIME') .....: .groupby('ID3') .....: .transform(lambda x: x.diff().dt.total_seconds()) .....: ) Out[220]: DATETIME 0 NaN 1 NaN 2 NaN 3 17767.0 4 NaN 5 296.0 6 90.0 7 375.0 8 NaN 9 1533.0 10 230.0 </code></pre> <p>the <code>transform()</code> from <code>class SeriesGroupBy(GroupBy)</code> tries to do the following:</p> <pre><code>result = _possibly_downcast_to_dtype(result, dtype) </code></pre> <p>which could (i'm not sure) cause your problem</p> <p><strong>OLD answer:</strong></p> <p>try this:</p> <pre><code>In [168]: df_2.sort_values(by='DATETIME').groupby('ID3')['DATETIME'].diff().dt.total_seconds() Out[168]: 0 NaN 1 NaN 2 NaN 3 17767.0 4 NaN 5 296.0 6 90.0 7 375.0 8 NaN 9 1533.0 10 230.0 dtype: float64 </code></pre>
python|python-3.x|pandas|dataframe
4
840
54,752,287
Get input (filenames) from tensorflow dataset iterators
<p>I am using tensorflow datasets to train a model. A list of filenames is taken by the dataset to read them during the session, and I would like to get the filename together with the image. In more detail, I have something like this:</p> <pre><code>filenames = tf.constant(["/var/data/image1.jpg", "/var/data/image2.jpg", ...]) labels = tf.constant([0, 37, ...]) dataset = tf.data.Dataset.from_tensor_slices((filenames, labels)) dataset.shuffle() def _parse_function(filename, label): image_string = tf.read_file(filename) image_decoded = tf.image.decode_jpeg(image_string) image_resized = tf.image.resize_images(image_decoded, [28, 28]) return image_resized, label dataset = dataset.map(_parse_function) iterator = dataset.make_one_shot_iterator() X, Y = iterator.get_next() sess = tf.Session() sess.run(iterator.initializer) while True: sess.run(X) #Here I want the element from filenames being used for X </code></pre> <p>I thought that this information could be included in the <code>iterator</code>, but I could not find it. </p>
<p>You just need to keep the filename along with the image data in the dataset:</p> <pre><code>filenames = tf.constant(["/var/data/image1.jpg", "/var/data/image2.jpg", ...]) labels = tf.constant([0, 37, ...]) dataset = tf.data.Dataset.from_tensor_slices((filenames, labels)) dataset.shuffle() def _parse_function(filename, label): image_string = tf.read_file(filename) image_decoded = tf.image.decode_jpeg(image_string) image_resized = tf.image.resize_images(image_decoded, [28, 28]) return filename, image_resized, label dataset = dataset.map(_parse_function) iterator = dataset.make_one_shot_iterator() F, X, Y = iterator.get_next() sess = tf.Session() while True: sess.run([F, X]) </code></pre>
python|tensorflow|iterator|tensorflow-datasets
5
841
54,824,768
RNN model (GRU) of word2vec to regression not learning
<p>I am converting Keras code into PyTorch because I am more familiar with the latter than the former. However, I found that it is not learning (or only barely).</p> <p>Below I have provided almost all of my PyTorch code, including the initialisation code so that you can try it out yourself. The only thing you would need to provide yourself, is the word embeddings (I'm sure you can find many word2vec models online). The first input file should be a file with tokenised text, the second input file should be a file with floating-point numbers, one per line. Because I have provided all the code, this question may seem huge and too broad. However, my question is specific enough I think: what is wrong in my model or training loop that causes my model to not or barely improve. (See below for results.)</p> <p>I have tried to provide many comments where applicable, and I have provided the shape transformations as well so you do not <em>have</em> to run the code to see what is going on. The data prep methods are not important to inspect.</p> <p>The most important parts are the forward method of the <code>RegressorNet</code>, and the training loop of <code>RegressionNN</code> (admittedly, these names were badly chosen). I think the mistake is there somewhere.</p> <pre><code>from pathlib import Path import time import numpy as np import torch from torch import nn, optim from torch.utils.data import DataLoader import gensim from scipy.stats import pearsonr from LazyTextDataset import LazyTextDataset class RegressorNet(nn.Module): def __init__(self, hidden_dim, embeddings=None, drop_prob=0.0): super(RegressorNet, self).__init__() self.hidden_dim = hidden_dim self.drop_prob = drop_prob # Load pretrained w2v model, but freeze it: don't retrain it. self.word_embeddings = nn.Embedding.from_pretrained(embeddings) self.word_embeddings.weight.requires_grad = False self.w2v_rnode = nn.GRU(embeddings.size(1), hidden_dim, bidirectional=True, dropout=drop_prob) self.dropout = nn.Dropout(drop_prob) self.linear = nn.Linear(hidden_dim * 2, 1) # LeakyReLU rather than ReLU so that we don't get stuck in a dead nodes self.lrelu = nn.LeakyReLU() def forward(self, batch_size, sentence_input): # shape sizes for: # * batch_size 128 # * embeddings of dim 146 # * hidden dim of 200 # * sentence length of 20 # sentence_input: torch.Size([128, 20]) # Get word2vec vector representation embeds = self.word_embeddings(sentence_input) # embeds: torch.Size([128, 20, 146]) # embeds.view(-1, batch_size, embeds.size(2)): torch.Size([20, 128, 146]) # Input vectors into GRU, only keep track of output w2v_out, _ = self.w2v_rnode(embeds.view(-1, batch_size, embeds.size(2))) # w2v_out = torch.Size([20, 128, 400]) # Leaky ReLU it w2v_out = self.lrelu(w2v_out) # Dropout some nodes if self.drop_prob &gt; 0: w2v_out = self.dropout(w2v_out) # w2v_out: torch.Size([20, 128, 400 # w2v_out[-1, :, :]: torch.Size([128, 400]) # Only use the last output of a sequence! 
Supposedly that cell outputs the final information regression = self.linear(w2v_out[-1, :, :]) regression: torch.Size([128, 1]) return regression class RegressionRNN: def __init__(self, train_files=None, test_files=None, dev_files=None): print('Using torch ' + torch.__version__) self.datasets, self.dataloaders = RegressionRNN._set_data_loaders(train_files, test_files, dev_files) self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') self.model = self.w2v_vocab = self.criterion = self.optimizer = self.scheduler = None @staticmethod def _set_data_loaders(train_files, test_files, dev_files): # labels must be the last input file datasets = { 'train': LazyTextDataset(train_files) if train_files is not None else None, 'test': LazyTextDataset(test_files) if test_files is not None else None, 'valid': LazyTextDataset(dev_files) if dev_files is not None else None } dataloaders = { 'train': DataLoader(datasets['train'], batch_size=128, shuffle=True, num_workers=4) if train_files is not None else None, 'test': DataLoader(datasets['test'], batch_size=128, num_workers=4) if test_files is not None else None, 'valid': DataLoader(datasets['valid'], batch_size=128, num_workers=4) if dev_files is not None else None } return datasets, dataloaders @staticmethod def prepare_lines(data, split_on=None, cast_to=None, min_size=None, pad_str=None, max_size=None, to_numpy=False, list_internal=False): """ Converts the string input (line) to an applicable format. """ out = [] for line in data: line = line.strip() if split_on: line = line.split(split_on) line = list(filter(None, line)) else: line = [line] if cast_to is not None: line = [cast_to(l) for l in line] if min_size is not None and len(line) &lt; min_size: # pad line up to a number of tokens line += (min_size - len(line)) * ['@pad@'] elif max_size and len(line) &gt; max_size: line = line[:max_size] if list_internal: line = [[item] for item in line] if to_numpy: line = np.array(line) out.append(line) if to_numpy: out = np.array(out) return out def prepare_w2v(self, data): idxs = [] for seq in data: tok_idxs = [] for word in seq: # For every word, get its index in the w2v model. # If it doesn't exist, use @unk@ (available in the model). 
try: tok_idxs.append(self.w2v_vocab[word].index) except KeyError: tok_idxs.append(self.w2v_vocab['@unk@'].index) idxs.append(tok_idxs) idxs = torch.tensor(idxs, dtype=torch.long) return idxs def train(self, epochs=10): valid_loss_min = np.Inf train_losses, valid_losses = [], [] for epoch in range(1, epochs + 1): epoch_start = time.time() train_loss, train_results = self._train_valid('train') valid_loss, valid_results = self._train_valid('valid') # Calculate Pearson correlation between prediction and target try: train_pearson = pearsonr(train_results['predictions'], train_results['targets']) except FloatingPointError: train_pearson = "Could not calculate Pearsonr" try: valid_pearson = pearsonr(valid_results['predictions'], valid_results['targets']) except FloatingPointError: valid_pearson = "Could not calculate Pearsonr" # calculate average losses train_loss = np.mean(train_loss) valid_loss = np.mean(valid_loss) train_losses.append(train_loss) valid_losses.append(valid_loss) # print training/validation statistics print(f'----------\n' f'Epoch {epoch} - completed in {(time.time() - epoch_start):.0f} seconds\n' f'Training Loss: {train_loss:.6f}\t Pearson: {train_pearson}\n' f'Validation loss: {valid_loss:.6f}\t Pearson: {valid_pearson}') # validation loss has decreased if valid_loss &lt;= valid_loss_min and train_loss &gt; valid_loss: print(f'!! Validation loss decreased ({valid_loss_min:.6f} --&gt; {valid_loss:.6f}). Saving model ...') valid_loss_min = valid_loss if train_loss &lt;= valid_loss: print('!! Training loss is lte validation loss. Might be overfitting!') # Optimise with scheduler if self.scheduler is not None: self.scheduler.step(valid_loss) print('Done training...') def _train_valid(self, do): """ Do training or validating. """ if do not in ('train', 'valid'): raise ValueError("Use 'train' or 'valid' for 'do'.") results = {'predictions': np.array([]), 'targets': np.array([])} losses = np.array([]) self.model = self.model.to(self.device) if do == 'train': self.model.train() torch.set_grad_enabled(True) else: self.model.eval() torch.set_grad_enabled(False) for batch_idx, data in enumerate(self.dataloaders[do], 1): # 1. Data prep sentence = data[0] target = data[-1] curr_batch_size = target.size(0) # Returns list of tokens, possibly padded @pad@ sentence = self.prepare_lines(sentence, split_on=' ', min_size=20, max_size=20) # Converts tokens into w2v IDs as a Tensor sent_w2v_idxs = self.prepare_w2v(sentence) # Converts output to Tensor of floats target = torch.Tensor(self.prepare_lines(target, cast_to=float)) # Move input to device sent_w2v_idxs, target = sent_w2v_idxs.to(self.device), target.to(self.device) # 2. Predictions pred = self.model(curr_batch_size, sentence_input=sent_w2v_idxs) loss = self.criterion(pred, target) # 3. Optimise during training if do == 'train': self.optimizer.zero_grad() loss.backward() self.optimizer.step() # 4. 
Save results pred = pred.detach().cpu().numpy() target = target.cpu().numpy() results['predictions'] = np.append(results['predictions'], pred, axis=None) results['targets'] = np.append(results['targets'], target, axis=None) losses = np.append(losses, float(loss)) torch.set_grad_enabled(True) return losses, results if __name__ == '__main__': HIDDEN_DIM = 200 # Load embeddings from pretrained gensim model embed_p = Path('path-to.w2v_model').resolve() w2v_model = gensim.models.KeyedVectors.load_word2vec_format(str(embed_p)) # add a padding token with only zeros w2v_model.add(['@pad@'], [np.zeros(w2v_model.vectors.shape[1])]) embed_weights = torch.FloatTensor(w2v_model.vectors) # Text files are used as input. Every line is one datapoint. # *.tok.low.*: tokenized (space-separated) sentences # *.cross: one floating point number per line, which we are trying to predict regr = RegressionRNN(train_files=(r'train.tok.low.en', r'train.cross'), dev_files=(r'dev.tok.low.en', r'dev.cross'), test_files=(r'test.tok.low.en', r'test.cross')) regr.w2v_vocab = w2v_model.vocab regr.model = RegressorNet(HIDDEN_DIM, embed_weights, drop_prob=0.2) regr.criterion = nn.MSELoss() regr.optimizer = optim.Adam(list(regr.model.parameters())[0:], lr=0.001) regr.scheduler = optim.lr_scheduler.ReduceLROnPlateau(regr.optimizer, 'min', factor=0.1, patience=5, verbose=True) regr.train(epochs=100) </code></pre> <p>For the LazyTextDataset, you can refer to the class below.</p> <pre><code>from torch.utils.data import Dataset import linecache class LazyTextDataset(Dataset): def __init__(self, paths): # labels are in the last path self.paths, self.labels_path = paths[:-1], paths[-1] with open(self.labels_path, encoding='utf-8') as fhin: lines = 0 for line in fhin: if line.strip() != '': lines += 1 self.num_entries = lines def __getitem__(self, idx): data = [linecache.getline(p, idx + 1) for p in self.paths] label = linecache.getline(self.labels_path, idx + 1) return (*data, label) def __len__(self): return self.num_entries </code></pre> <p>As I wrote before, I am trying to convert a Keras model to PyTorch. The original Keras code does not use an embedding layer, and uses pre-built word2vec vectors per sentence as input. In the model below, there is no embedding layer. The Keras summary looks like this (I don't have access to the base model setup).</p> <hr> <pre><code>Layer (type) Output Shape Param # Connected to ==================================================================================================== bidirectional_1 (Bidirectional) (200, 400) 417600 ____________________________________________________________________________________________________ dropout_1 (Dropout) (200, 800) 0 merge_1[0][0] ____________________________________________________________________________________________________ dense_1 (Dense) (200, 1) 801 dropout_1[0][0] ==================================================================================================== </code></pre> <p>The issue is that with identical input, the Keras model <em>works</em> and gets a +0.5 Pearson correlation between predicted and actual labels. The PyTorch model above, though, does not seem to work at all. 
To give you an idea, here is the loss (mean squared error) and Pearson (correlation coefficient, p-value) after the first epoch:</p> <pre><code>Epoch 1 - completed in 11 seconds Training Loss: 1.684495 Pearson: (-0.0006077809280690612, 0.8173368901481127) Validation loss: 1.708228 Pearson: (0.017794288315261794, 0.4264098054188664) </code></pre> <p>And after the 100th epoch:</p> <pre><code>Epoch 100 - completed in 11 seconds Training Loss: 1.660194 Pearson: (0.0020315421756790806, 0.4400929436716754) Validation loss: 1.704910 Pearson: (-0.017288118524826892, 0.4396865964324158) </code></pre> <p>The loss is plotted below (when you look at the Y-axis, you can see the improvements are minimal).</p> <p><a href="https://i.stack.imgur.com/nf0z1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nf0z1.png" alt="loss plot"></a></p> <p>A final indicator that something may be wrong, is that for my 140K lines of input, each epoch only takes 10 seconds on my GTX 1080TI. I feel that his is not much and I would guess that the optimisation is not working/running. I cannot figure out why, though. To issue will probably be in my train loop or the model itself, but I cannot find it.</p> <p>Again, something must be going wrong because: - the Keras model <em>does</em> perform well; - the training speed is 'too fast' for 140K sentences - almost no improvemnts after training.</p> <p>What am I missing? The issue is more than likely present in the training loop or in the network structure.</p>
<p><strong>TL;DR</strong>: Use <code>permute</code> instead of <code>view</code> when swapping axes, see the end of answer to get an intuition about the difference.</p> <h1>About RegressorNet (neural network model)</h1> <ol> <li><p>No need to freeze embedding layer if you are using <code>from_pretrained</code>. As <a href="https://pytorch.org/docs/0.4.0/nn.html#torch.nn.Embedding.from_pretrained" rel="noreferrer">documentation</a> states, it <strong>does not</strong> use gradient updates.</p></li> <li><p>This part:</p> <pre><code>self.w2v_rnode = nn.GRU(embeddings.size(1), hidden_dim, bidirectional=True, dropout=drop_prob) </code></pre> <p>and especially <code>dropout</code> without providable <code>num_layers</code> is totally pointless (as no dropout can be specified with shallow one layer network).</p></li> <li><p><strong>BUG AND MAIN ISSUE</strong>: in your <code>forward</code> function you are using <code>view</code> instead of <code>permute</code>, here:</p> <pre><code>w2v_out, _ = self.w2v_rnode(embeds.view(-1, batch_size, embeds.size(2))) </code></pre> <p>See <a href="https://stackoverflow.com/questions/51143206/difference-between-tensor-permute-and-tensor-view-in-pytorch">this answer</a> and appropriate documentation for each of those functions and try to use this line instead:</p> <pre><code>w2v_out, _ = self.w2v_rnode(embeds.permute(1, 0, 2)) </code></pre> <p>You may consider using <code>batch_first=True</code> argument during <code>w2v_rnode</code> creation, you won't have to permute indices that way.</p></li> <li><p>Check documentation of <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.GRU" rel="noreferrer">torch.nn.GRU</a>, you are after <strong>last step of the sequence</strong>, not after all of the sequences you have there, so you should be after:</p> <pre><code>_, last_hidden = self.w2v_rnode(embeds.permute(1, 0, 2)) </code></pre> <p>but I think this part is fine otherwise. </p></li> </ol> <h1>Data preparation</h1> <p>No offence, but <code>prepare_lines</code> is <strong>very unreadable</strong> and seems pretty hard to maintain as well, not to say spotting an eventual bug (I suppose it lies in here).</p> <p>First of all, it seems like you are padding manually. <strong>Please don't do it that way</strong>, use <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.utils.rnn.pad_sequence" rel="noreferrer">torch.nn.pad_sequence</a> to work with batches!</p> <p>In essence, first you encode each word in every sentence as index pointing into embedding (as you seem to do in <code>prepare_w2v</code>), after that you use <code>torch.nn.pad_sequence</code> and <code>torch.nn.pack_padded_sequence</code> <strong>or</strong> <code>torch.nn.pack_sequence</code> if the lines are already sorted by length.</p> <h1>Proper batching</h1> <p>This part is <strong>very important</strong> and it seems you are not doing that at all (and likely this is the second error in your implementation).</p> <p>PyTorch's RNN cells take inputs <strong>not as padded tensors</strong>, but as <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.utils.rnn.PackedSequence" rel="noreferrer">torch.nn.PackedSequence</a> objects. 
This is an efficient object storing indices which specify the <strong>unpadded</strong> length of each sequence.</p> <p>See more information on the topic <a href="https://stackoverflow.com/questions/51030782/why-do-we-pack-the-sequences-in-pytorch">here</a>, <a href="https://gist.github.com/Tushar-N/dfca335e370a2bc3bc79876e6270099e" rel="noreferrer">here</a> and in many other blog posts throughout the web.</p> <p>The first sequence in the batch <strong>has to be the longest</strong>, and all others have to be provided in descending order of length. What follows is:</p> <ol> <li>You have to sort your batch each time by sequence length <strong>and sort your targets</strong> in an analogous way, <strong>OR</strong></li> <li>Sort your batch, push it through the network and <strong>unsort</strong> it afterwards to match it with your targets.</li> </ol> <p>Either is fine, it's your call what seems to be more intuitive for you. What I like to do is more or less the following, hope it helps:</p> <ol> <li>Create unique indices for each word and map each sentence appropriately (you've already done it).</li> <li>Create a regular <code>torch.utils.data.Dataset</code> object returning a single sentence for each <code>__getitem__</code>, where it is returned as a tuple consisting of features (<code>torch.Tensor</code>) and a label (single value); it seems like you're doing it as well.</li> <li>Create a custom <code>collate_fn</code> for use with <a href="https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader" rel="noreferrer">torch.utils.data.DataLoader</a>, which is responsible for sorting and padding each batch in this scenario (+ it returns the lengths of each sentence to be passed into the neural network).</li> <li>Using the <strong>sorted and padded features</strong> and <strong>their lengths</strong> I'm using <code>torch.nn.pack_sequence</code> inside the neural network's <code>forward</code> method (<strong>do it after embedding!</strong>) to push it through the RNN layer.</li> <li>Depending on the use-case I unpack them using <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.utils.rnn.pad_packed_sequence" rel="noreferrer">torch.nn.pad_packed_sequence</a>. In your case, you only care about the last hidden state, hence <strong>you don't have to do that</strong>. 
If you were using all of the hidden outputs (like is the case with, say, attention networks), you would add this part.</li> </ol> <p>When it comes to the third point, here is a sample implementation of <code>collate_fn</code>, you should get the idea:</p> <pre><code>import torch


def length_sort(features):
    # Get length of each sentence in batch
    sentences_lengths = torch.tensor(list(map(len, features)))
    # Get indices which sort the sentences based on descending length
    _, sorter = sentences_lengths.sort(descending=True)
    # Pad batch as you have the lengths and sorter saved already
    padded_features = torch.nn.utils.rnn.pad_sequence(features, batch_first=True)
    return padded_features, sentences_lengths, sorter


def pad_collate_fn(batch):
    # DataLoader returns batches like that unluckily, check it on your own
    features, labels = (
        [element[0] for element in batch],
        [element[1] for element in batch],
    )
    padded_features, sentences_lengths, sorter = length_sort(features)
    # Sort features and labels by length accordingly
    sorted_padded_features, sorted_labels = (
        padded_features[sorter],
        torch.tensor(labels)[sorter],
    )
    # Note: the lengths have to be reordered with the same sorter as well
    return sorted_padded_features, sorted_labels, sentences_lengths[sorter]
</code></pre> <p>Use this as <code>collate_fn</code> in your <code>DataLoaders</code> and you should be just about fine (maybe with minor adjustments, so it's essential you understand the idea standing behind it).</p> <h1>Other possible problems and tips</h1> <ul> <li><p><strong>Training loop</strong>: great place for a lot of small errors, you may want to minimize those by using <a href="https://github.com/pytorch/ignite" rel="noreferrer">PyTorch Ignite</a>. I am having an unbelievably hard time going through your Tensorflow-like-Estimator-like-API-like training loop (e.g. <code>self.model = self.w2v_vocab = self.criterion = self.optimizer = self.scheduler = None</code> this). Please, don't do it this way, separate each task (data creation, data loading, data preparation, model setup, training loop, logging) into its own respective module. All in all there is a reason why PyTorch/Keras is more readable and sanity-preserving than Tensorflow.</p></li> <li><p><strong>Make the first row of your embedding equal to a vector containing zeros</strong>: By default, <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.functional.embedding" rel="noreferrer">torch.nn.functional.embedding</a> expects the first row to be used for padding. Hence you should start your unique indexing for each word at 1 <strong>or</strong> specify an argument <code>padding_idx</code> with a different value (though I highly discourage this approach, confusing at best).</p></li> </ul> <p><strong>I hope this answer helps you at least a little bit, if something is unclear post a comment below and I'll try to explain it from a different perspective/in more detail.</strong></p> <h2>Some final comments</h2> <p>This code <strong>is not reproducible</strong>, nor is the question specific. We don't have the data you are using, we don't have your word vectors, the random seed is not fixed, etc.</p> <p>PS. 
One last thing: Check your performance on a <strong>really small subset</strong> of your data (say 96 examples); if it does not converge, it is very likely you indeed have a bug in your code.</p> <p>About the times: they are probably off (due to not sorting and not padding I suppose), usually Keras and PyTorch's times are quite similar (if I understood this part of your question as intended) for correct and efficient implementations.</p> <h1>Permute vs view vs reshape explanation</h1> <p>This simple example shows the differences between <code>permute()</code> and <code>view()</code>. The first one swaps axes, while the second does not change the memory layout, just chunks the array into the desired shape (if possible).</p> <pre><code>import torch

a = torch.tensor([[1, 2], [3, 4], [5, 6]])
print(a)
print(a.permute(1, 0))
print(a.view(2, 3))
</code></pre> <p>And the output would be:</p> <pre><code>tensor([[1, 2],
        [3, 4],
        [5, 6]])
tensor([[1, 3, 5],
        [2, 4, 6]])
tensor([[1, 2, 3],
        [4, 5, 6]])
</code></pre> <p><code>reshape</code> is almost like <code>view</code>; it was added for those coming from <code>numpy</code>, so it's easier and more natural for them, but it has one important difference:</p> <ul> <li><code>view</code> <strong>never copies data</strong> and works only on contiguous memory (so after a permutation like the one above your data may not be contiguous, hence access to it might be slower)</li> <li><code>reshape</code> <strong>can copy data if needed</strong>, so it would work for non-contiguous arrays as well.</li> </ul>
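<p>As a small, hedged illustration of that last point (using the same tensor <code>a</code> as above): once you permute, the tensor is no longer contiguous, so <code>view</code> refuses to flatten it while <code>reshape</code> (or <code>contiguous().view()</code>) still works:</p> <pre><code>import torch

a = torch.tensor([[1, 2], [3, 4], [5, 6]])
b = a.permute(1, 0)              # axes swapped, memory no longer contiguous

# b.view(6) would raise a RuntimeError here because of the non-contiguous memory
print(b.reshape(6))              # tensor([1, 3, 5, 2, 4, 6]) - copies if needed
print(b.contiguous().view(6))    # equivalent: make it contiguous first, then view
</code></pre>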
python|tensorflow|machine-learning|keras|pytorch
8
842
54,930,568
Weird "too many indices for array" error in python
<p>Let's create a large np array 'a' with 10,000 entries</p> <pre><code>import numpy as np a = np.arange(0, 10000) </code></pre> <p>Let's slice the array with 'n' indices 0->9, 1->10, 2->11, etc.</p> <pre><code>n = 32 b = list(map(lambda x:np.arange(x, x+10), np.arange(0, n))) c = a[b] </code></pre> <p>The weird thing that I am getting, is that if n is smaller than 32, I get an error "IndexError: too many indices for array". If n is bigger or equal than 32, then the code works perfectly. The error occurs regardless of the size of the initial array, or the size of the individual slices, but always with number 32. Note that if n == 1, the code works.</p> <p>Any idea on what is causing this? Thank you.</p>
<p>Your <code>b</code> is a list of arrays:</p> <pre><code>In [84]: b = list(map(lambda x:np.arange(x, x+10), np.arange(0, 5))) In [85]: b Out[85]: [array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]), array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]), array([ 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]), array([ 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]), array([ 4, 5, 6, 7, 8, 9, 10, 11, 12, 13])] </code></pre> <p>When used as an index:</p> <pre><code>In [86]: np.arange(1000)[b] /usr/local/bin/ipython3:1: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result. #!/usr/bin/python3 --------------------------------------------------------------- IndexError: too many indices for array </code></pre> <p><code>A[1,2,3]</code> is the same as <code>A[(1,2,3)]</code> - that is, the comma separated indices are a tuple, which is then passed on to the indexing function. Or to put it another way, a multidimensional index should be a tuple (that includes ones with slices).</p> <p>Up to now <code>numpy</code> has been a bit sloppy, and allowed us to use a list of indices in the same way. The warning tells us that the developers are in the process of tightening up those restrictions.</p> <p>The error means it is trying to interpret each array in your list as the index for a separate dimension. An array can have at most 32 dimensions. Evidently for the longer list it doesn't try to treat it as a tuple, and instead creates a 2d array for indexing.</p> <p>There are various ways we can use your <code>b</code> to index a 1d array:</p> <pre><code>In [87]: np.arange(1000)[np.hstack(b)] Out[87]: array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13]) In [89]: np.arange(1000)[np.array(b)] # or np.vstack(b) Out[89]: array([[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9], [ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10], [ 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [ 3, 4, 5, 6, 7, 8, 9, 10, 11, 12], [ 4, 5, 6, 7, 8, 9, 10, 11, 12, 13]]) In [90]: np.arange(1000)[b,] # 1d tuple containing b Out[90]: array([[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9], [ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10], [ 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [ 3, 4, 5, 6, 7, 8, 9, 10, 11, 12], [ 4, 5, 6, 7, 8, 9, 10, 11, 12, 13]]) </code></pre> <p>Note that if <code>b</code> is a ragged list - one or more of the arrays is shorter, only the <code>hstack</code> version works.</p>
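<p>Applied to the setup from the question, a minimal sketch: wrapping the list in an array up front avoids the 32-dimension interpretation entirely, so it works for any <code>n</code>:</p> <pre><code>import numpy as np

a = np.arange(0, 10000)
n = 5                      # any n works now, not just n &gt;= 32
b = list(map(lambda x: np.arange(x, x + 10), np.arange(0, n)))

c = a[np.array(b)]         # shape (n, 10), no FutureWarning
print(c.shape)
</code></pre>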
python|numpy|numpy-ndarray|index-error
2
843
55,083,787
No module named 'object_detection'
<p>I downloaded the Tensorflow object_detection API. I was able to run the tutorial and see the results.</p> <p>However, when I want to train on my own data, I get an error at this code:</p> <pre><code>python3 train.py --logtostderr --train_dir=training/ --pipeline_config_path=training/ssd_mobilenet_v1_pets.config
</code></pre> <p>The error comes out as below:</p> <blockquote> <p>Traceback (most recent call last): File "train.py", line 49, in from object_detection.builders import dataset_builder ModuleNotFoundError: No module named 'object_detection'</p> </blockquote> <p>Here is the code snippet from train.py:</p> <pre><code>import functools
import json
import os
import tensorflow as tf

from object_detection.builders import dataset_builder
from object_detection.builders import graph_rewriter_builder
from object_detection.builders import model_builder
from object_detection.legacy import trainer
from object_detection.utils import config_util
</code></pre> <p><strong>Info:</strong></p> <p>I'm using Tensorflow 1.10 and Windows 10.</p> <p><strong>Note</strong></p> <p>I ran this code, however it didn't work for me.</p> <blockquote> <p>set PYTHONPATH=$PYTHONPATH:<code>pwd</code>:<code>pwd</code>/slim</p> </blockquote>
<p>You can try the following steps. Change to the object detection directory, activate your virtualenv and then do the following:</p> <pre><code>export PYTHONPATH=$PYTHONPATH:/home/&lt;username&gt;/&lt;path&gt;/models/research
export PYTHONPATH=$PYTHONPATH:/home/&lt;username&gt;/&lt;path&gt;/models
export PYTHONPATH=$PYTHONPATH:/home/&lt;username&gt;/&lt;path&gt;/models/research/slim
PATH=$PATH:$PYTHONPATH

cd ..    # (Make sure you are now in the research directory)
python setup.py build
python setup.py install
</code></pre> <p>Now change to the <code>object_detection</code> directory and try the <code>train.py</code> command again. Hope this helps you out. Let me know in case you face any issues.</p>
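<p>As a quick sanity check (assuming the steps above completed without errors), you can verify that the module is importable before re-running <code>train.py</code>:</p> <pre><code>python -c &quot;from object_detection.builders import dataset_builder; print('object_detection is importable')&quot;
</code></pre>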
python|tensorflow|object-detection|object-detection-api
1
844
49,762,795
Finding the mode of a series consisting of list elements in Pandas
<p>I am working with a <code>pd.Series</code> where each entry is a list. I would like to find the mode of the series, that is, the most common list in this series. I have tried using both <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.value_counts.html" rel="nofollow noreferrer"><code>pandas.Series.value_counts</code></a> and <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.mode.html" rel="nofollow noreferrer"><code>pandas.Series.mode</code></a>. However, both of these approaches lead to the following exception being raised:</p> <blockquote> <p>TypeError: unhashable type: 'list'</p> </blockquote> <p>Here is a simple example of such a series:</p> <pre><code>pd.Series([[1,2,3], [4,5,6], [1,2,3]]) </code></pre> <p>I am looking for a function that will return <code>[1,2,3]</code>.</p>
<p>You need to convert to <code>tuple</code>, then use <code>mode</code>:</p> <pre><code>pd.Series([[1,2,3], [4,5,6], [1,2,3]]).apply(tuple).mode().apply(list)
Out[192]:
0    [1, 2, 3]
dtype: object
</code></pre> <p>A slight improvement:</p> <pre><code>list(pd.Series([[1,2,3], [4,5,6], [1,2,3]]).apply(tuple).mode().iloc[0])
Out[210]: [1, 2, 3]
</code></pre> <p>Since using two <code>apply</code> calls is ugly:</p> <pre><code>s=pd.Series([[1,2,3], [4,5,6], [1,2,3]])
s[s.astype(str)==s.astype(str).mode()[0]].iloc[0]
Out[205]: [1, 2, 3]
</code></pre>
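<p>For completeness, a plain-Python sketch with <code>collections.Counter</code> avoids the string round-trip entirely:</p> <pre><code>from collections import Counter
import pandas as pd

s = pd.Series([[1,2,3], [4,5,6], [1,2,3]])
most_common = Counter(map(tuple, s)).most_common(1)[0][0]   # (1, 2, 3)
print(list(most_common))                                    # [1, 2, 3]
</code></pre>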
python|list|pandas|dataframe|series
5
845
28,363,447
What are the advantages of using numpy.identity over numpy.eye?
<p>Having looked over the man pages for <code>numpy</code>'s <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.eye.html" rel="noreferrer"><code>eye</code></a> and <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.identity.html" rel="noreferrer"><code>identity</code></a>, I'd assumed that <code>identity</code> was a special case of <code>eye</code>, since it has fewer options (e.g. <code>eye</code> can fill shifted diagonals, <code>identity</code> cannot), but could plausibly run more quickly. However, this isn't the case on either small or large arrays:</p> <pre><code>&gt;&gt;&gt; np.identity(3) array([[ 1., 0., 0.], [ 0., 1., 0.], [ 0., 0., 1.]]) &gt;&gt;&gt; np.eye(3) array([[ 1., 0., 0.], [ 0., 1., 0.], [ 0., 0., 1.]]) &gt;&gt;&gt; timeit.timeit(&quot;import numpy; numpy.identity(3)&quot;, number = 10000) 0.05699801445007324 &gt;&gt;&gt; timeit.timeit(&quot;import numpy; numpy.eye(3)&quot;, number = 10000) 0.03787708282470703 &gt;&gt;&gt; timeit.timeit(&quot;import numpy&quot;, number = 10000) 0.00960087776184082 &gt;&gt;&gt; timeit.timeit(&quot;import numpy; numpy.identity(1000)&quot;, number = 10000) 11.379066944122314 &gt;&gt;&gt; timeit.timeit(&quot;import numpy; numpy.eye(1000)&quot;, number = 10000) 11.247124910354614 </code></pre> <p>What, then, is the advantage of using <code>identity</code> over <code>eye</code>?</p>
<p><code>identity</code> just calls <code>eye</code> so there is no difference in how the arrays are constructed. Here's the code for <a href="https://github.com/numpy/numpy/blob/v1.9.1/numpy/core/numeric.py#L2125" rel="noreferrer"><code>identity</code></a>:</p> <pre><code>def identity(n, dtype=None):
    from numpy import eye
    return eye(n, dtype=dtype)
</code></pre> <p>As you say, the main difference is that with <code>eye</code> the diagonal may be offset, whereas <code>identity</code> only fills the main diagonal.</p> <p>Since the identity matrix is such a common construct in mathematics, it seems the main advantage of using <code>identity</code> is for its name alone.</p>
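<p>For completeness, a small example of the extra flexibility <code>eye</code> offers (shifted diagonal, non-square shape) that <code>identity</code> cannot express:</p> <pre><code>import numpy as np

print(np.eye(3, k=1))   # ones on the first superdiagonal
print(np.eye(3, 4))     # 3x4 matrix with ones on the main diagonal
</code></pre>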
python|arrays|performance|numpy
82
846
28,314,870
Python - multiply a list by a scalar
<p>Refer to the question mentioned on this link <a href="https://stackoverflow.com/questions/8194959/in-python-how-will-you-multiply-individual-elements-of-an-array-with-a-floating">In Python how will you multiply individual elements of a list with a floating point or integer number?</a></p> <p>I use <code>import numpy as np</code>; then I multiply a list with a floating-point number and store the updated values in the same list. However, the new values look like [[array([ 10.])] .....</p> <p>Now I want to remove the array wrapper and use the list with plain integer values.</p>
<p>you want to turn the <code>np.array()</code> back to a list?</p> <pre><code>import numpy as np P=2.45 S=[22, 33, 45.6, 21.6, 51.8] SP = P*np.array(S) SP_LIST =list(SP) </code></pre> <p>As the post you link to also contains:</p> <pre><code>[x * P for x in S] </code></pre> <p>returns a list directly</p>
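<p>One small detail worth knowing (a side note, not required for the answer above): <code>list(SP)</code> keeps NumPy scalar types in the list, while <code>SP.tolist()</code> converts them back to native Python numbers, which is usually what you want once you leave NumPy again:</p> <pre><code>import numpy as np

P = 2.45
S = [22, 33, 45.6, 21.6, 51.8]
SP = P * np.array(S)

print(type(list(SP)[0]))      # &lt;class 'numpy.float64'&gt;
print(type(SP.tolist()[0]))   # &lt;class 'float'&gt;
</code></pre>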
python|numpy
1
847
73,216,116
Reference dict variables for data manipulation purposes
<p>I have successfully iterated through multiple directories to create a dictionary of lists (excel files) of DataFrames (sheets). However, <strong>a) how would I read in specific worksheets that match 1-2 list values? and exclude all other worksheets so I don't read in unnecessary amount of data in memory.</strong></p> <pre><code>sheet_list = [&quot;Total Residents&quot;, &quot;Total (excluding Non-Residents)&quot;, &quot;Individuals&quot;, &quot;Corporations&quot;, &quot;Other&quot;] sheet_list2 = [&quot;City1&quot;, &quot;City2&quot;, &quot;City3&quot;, &quot;City4&quot;, &quot;City5&quot;, &quot;City6&quot;] </code></pre> <p>and b) <strong>how to best reference dict object values?</strong> For example, currently my list <code>df_list</code> has 33 elements (dicts), with each dict having 14-30 keys (worksheets), and most having 360 cols x 40 rows of data. I need to be able to select specific columns/rows by column index value using the list and dict keys. However, how would I know if my lists and dict objects have been read-in in the correct order, without possibly adding in an additional key/reference ID?</p> <p>For example, if my files are named: <code>1515CC, 2525CC, 3535CC, 1515DD, 2525DD, 3535DD</code>, where 1515CC values in the Total Residents sheet should equal 1515DD City1 sheet and I need to cross-check and validate to make sure they are equal by splicing the &quot;N&quot; column or 9th column from the two sheets and comparing.</p> <pre><code># Create list and iterate through select directories to get files file_list = [] excludes = [&quot;graphs&quot;, &quot;archive&quot;] for root, directories, files in os.walk(root_path, topdown=True): directories[:] = [d for d in directories if d not in excludes] for filename in files: if fnmatch.fnmatch(filename, &quot;0*.xlsx&quot;): file_list.append(os.path.join(root,filename)) df_list = [pd.read_excel(files, sheet_name=None, skiprows=16, nrows=360, usecols=&quot;E:AR&quot;) for files in file_list] </code></pre>
<p>Following @srinath's recommendation, I decided to append the root link with the filename, like so <code>file_list.append(os.path.join(root,filename))</code>. This change has been made in my question, and the title has been revised to reflect the change in status. Thank you to everyone and @srinath.</p>
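<p>For part (a) of the question (reading only the worksheets whose names appear in the two lists), a rough sketch along these lines should work; the <code>sheet_list</code>/<code>sheet_list2</code> and <code>file_list</code> names are taken from the question, and the read parameters are just the ones already used there:</p> <pre><code>import pandas as pd

wanted = set(sheet_list) | set(sheet_list2)

df_list = []
for f in file_list:
    xl = pd.ExcelFile(f)
    sheets = [s for s in xl.sheet_names if s in wanted]
    # sheet_name accepts a list and returns a dict of DataFrames, one per sheet
    df_list.append(pd.read_excel(xl, sheet_name=sheets,
                                 skiprows=16, nrows=360, usecols=&quot;E:AR&quot;))
</code></pre>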
python|pandas|list|loops|dictionary
0
848
30,931,830
Combining different elements of array in Numpy while doing vector operations
<p>I have a function <code>f</code> which I am using to evolve a Numpy array <code>z</code> repeatedly. Thus my code looks like this:</p> <pre><code>for t in range(100):
    z = f(z)
</code></pre> <p>However, now I want to combine elements of the array while evolving. For example, in simple Python, I would like to do something like this:</p> <pre><code>N = len(z)
for t in range(100):
    for i in range(len(z)):
        z_new[i] = f(z[i]) + z[(i-1)%N] + z[(i+1)%N]
    z = z_new
</code></pre> <p>How can I achieve the same thing with Numpy vector operations so that I don't have to compromise the great speed that Numpy gives me?</p> <p>Thanks in advance</p>
<p>You can roll the data <em>back and forth</em> to achieve the same result. </p> <pre><code>Z = f(z) Z = np.roll(Z, 1) + z Z = np.roll(Z, -2) + z z = np.roll(Z, 1) </code></pre> <hr> <p>I had also first thought about slicing but went with <code>np.roll</code> when I found it.</p> <p>Prompted by @hpaulj's comment I came up with a slice solution:</p> <pre><code>q = np.array([1,2,3,4,5]) Q = f(q) # operate on the middle Q[1:] += q[:-1] Q[:-1] += q[1:] # operate on the ends Q[0] += q[-1] Q[-1] += q[0] q = Q.copy() </code></pre>
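<p>A quick sanity check (with a placeholder <code>f</code>, here just squaring) that the rolled version matches the explicit loop from the question:</p> <pre><code>import numpy as np

f = lambda x: x**2                  # placeholder update function
z = np.arange(1.0, 6.0)
N = len(z)

# explicit loop from the question
z_loop = np.array([f(z)[i] + z[(i-1) % N] + z[(i+1) % N] for i in range(N)])

# vectorized roll version
Z = f(z)
Z = np.roll(Z, 1) + z
Z = np.roll(Z, -2) + z
z_roll = np.roll(Z, 1)

print(np.allclose(z_loop, z_roll))  # True
</code></pre>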
python|arrays|numpy
2
849
67,418,422
How to skip the lines of an excel file loaded to a Pandas dataframe if data types are wrong (checking types)
<p>I have just coded this:</p> <pre><code>import os import pandas as pd files = os.listdir(path) #AllData = pd.DataFrame() for f in files: info = pd.read_excel(f, &quot;File&quot;) info.fillna(0) try: info['Country'] = info['Country'].astype('str') except ValueError: continue try: info['Name'] = info['Name'].astype('str') except ValueError: continue try: info['Age'] = info['Age'].astype('int') except ValueError as error: continue writer = pd.ExcelWriter(&quot;Output.xlsx&quot;) info.to_excel(writer, &quot;Sheet 1&quot;) writer.save() </code></pre> <p>It reads some excel files, selects a sheet named &quot;File&quot; and put all its data in a dataframe. Once it is done, it returns all the records.</p> <p>What I want is to check the types of all the values of each column, and to skip the line in the reading source if the type is not the one I want for this column. Finally I want to record in the output the data that fits the types I want.</p> <p>I tried to use <code>astype</code> but that's not working as expected.</p> <p>Thus, read source - check astype - if not astype - skip line and keep running the code.</p>
<p>I first have to say that <em>type checking</em> and <em>type casting</em> are 2 different things.</p> <p>Pandas' <code>astype</code> is used for <em>type casting</em> (it will &quot;convert&quot; a type to another type, it will not check whether a value is of a certain type).</p> <p>But if what you want is to not keep the rows that can't be cast to a numeric type, you can do it like this:</p> <pre><code>info['Age'] = pd.to_numeric(info['Age'], errors='coerce')
info = info.dropna()
</code></pre> <p>Note that you don't have to use a try-except block here. Here, we use <code>to_numeric</code> because we can pass <code>errors='coerce'</code>, so that if it can't be cast, the value will be <code>NaN</code>, and then we use <code>dropna()</code> in order to remove rows containing <code>NaN</code>s.</p> <h2>Update about type checking:</h2> <p>Here I'll add some information you asked about in the comments on how to check types in pandas dataframes:</p> <ul> <li>How to get the types inferred by pandas for each column?</li> <li>How to check the types of all values of the whole dataframe?</li> <li>Some useful functions for type checking</li> <li>Ways to check types in Python</li> </ul> <p><strong>How to get the types inferred by pandas for each column?</strong></p> <p><code>columns_dtypes = df.dtypes</code></p> <p>It will output something like this:</p> <pre><code>Country    object
Name       object
Age         int64
dtype: object
</code></pre> <p>Note that if your column &quot;Age&quot; contains some <code>NaN</code> values the <code>dtype</code> could be <code>float64</code>.</p> <p>And when a column contains strings, the <code>dtype</code> will be <code>object</code> when you load your Excel file into a dataframe like in your example. See below for how to check if an object is a Python string (type <code>str</code>).</p> <p>Pandas documentation listing all dtypes: <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/basics.html?highlight=basics#dtypes" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/user_guide/basics.html?highlight=basics#dtypes</a></p> <p>Other useful information about Pandas dtypes: <a href="https://stackoverflow.com/questions/29245848/what-are-all-the-dtypes-that-pandas-recognizes">what are all the dtypes that pandas recognizes?</a></p> <p><strong>How to check the types of all values of the whole dataframe?</strong></p> <p>There are numerous ways of doing this.</p> <p>Here is one way. I choose this code because it's clear and simple:</p> <pre><code># Iterate over all the columns
for (column_name, column_data) in info.iteritems():
    print(&quot;column_name: &quot;, column_name)
    print(&quot;column_data: &quot;, column_data.values)
    # Iterate over all the values of this column
    for column_value in column_data.values:
        # print the value and its type
        print(column_value, type(column_value))
        # So here you can check the type and do something with that
        # For example, log the error to a log file
</code></pre> <p><strong>Some useful functions for type checking:</strong></p> <p>How to test if <code>object</code> (as returned by <code>df.dtypes</code> like in the output above) is a string? <code>isinstance(object_to_test, str)</code> See: <a href="https://stackoverflow.com/questions/1303243/how-to-find-out-if-a-python-object-is-a-string">How to find out if a Python object is a string?</a></p> <p>Now, if you have a column that contains strings (like &quot;hello&quot;, &quot;world&quot;, etc.) 
and some of these strings are <code>int</code>, and you want to check if these stings represent a number, or a <code>int</code> you can use these functions:</p> <p>How to check if a string is an <code>int</code>?</p> <pre><code>def str_is_int(s): try: int(s) return True except ValueError: return False </code></pre> <p>How to check if a string is an number?</p> <pre><code>def str_is_number(s): try: float(s) return True except ValueError: return False </code></pre> <p>Python's strings have a method <code>isdigit()</code>, but it can't be used to check for int or number, because it will fail with <code>one = &quot;+1&quot;</code> or <code>minus_one = &quot;-1&quot;</code>.</p> <p><strong>And finally, here are 2 common ways to check &quot;types&quot; in Python:</strong></p> <pre><code>object_to_test = 1 print( type(object_to_test) is int) print( type(object_to_test) in (int, float) ) # Check is is one of those types print( isinstance(object_to_test, int) ) </code></pre> <p><code>isinstance(object_to_test, str)</code> will return <code>True</code> if <code>object_to_test</code> is of type <code>str</code> OR any sublass of <code>str</code>.</p> <p><code>type(object_to_test) is str</code> will return <code>True</code> if <code>object_to_test</code> is ONLY of type <code>str</code> (excluding any subclass of <code>str</code>)</p> <p>There is also a libray called <code>pandas-stubs</code> that could be useful for type safety: <a href="https://github.com/VirtusLab/pandas-stubs" rel="nofollow noreferrer">https://github.com/VirtusLab/pandas-stubs</a>.</p>
python|python-3.x|pandas|dataframe
1
850
67,253,963
What does "TypeError: 'generator' object does not support item assignment" mean?
<p>When I try to run the following code I get an error called: <code>TypeError: 'generator' object does not support item assignment</code>. How can I fix this?</p> <pre><code>import os, glob import pandas as pd import re import sys path = r'C:\Users\Nicole\02_Datenverarbeitung und Analyse\Input' all_files = glob.glob(os.path.join(path, &quot;scrapeddata*.csv&quot;)) for files in all_files: basic_file_name = files.replace(path, '') date = basic_file_name[13:22] df_from_each_file = (pd.read_csv(f, sep=',', encoding='iso-8859-1', error_bad_lines=False, warn_bad_lines=False) for f in all_files) df_from_each_file['date'] = 'date' df_merged = pd.concat(df_from_each_file, ignore_index=True) df_merged </code></pre>
<p>Try this:</p> <pre class="lang-py prettyprint-override"><code>... # Use brackets instead of parenthesis df_from_each_file = [ pd.read_csv( f, sep=&quot;,&quot;, encoding=&quot;iso-8859-1&quot;, error_bad_lines=False, warn_bad_lines=False ) for f in all_files ] # Assign new column to each df for df in df_from_each_file: df[&quot;date&quot;] = &quot;date&quot; # Concat dataframes df_merged = pd.concat(df_from_each_file, ignore_index=True) </code></pre>
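<p>For context, a tiny illustration of where the error message itself comes from: the round-bracket comprehension builds a generator, which cannot be indexed or assigned to, unlike a list:</p> <pre><code>gen = (x * 2 for x in range(3))   # generator expression
lst = [x * 2 for x in range(3)]   # list comprehension

lst[0] = 99                       # fine
# gen[0] = 99                     # TypeError: 'generator' object does not support item assignment
</code></pre>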
python|pandas|date
0
851
34,842,544
count of unique occurrences of a value pandas python
<p>So I have an extremely simple dataframe:</p> <pre><code>values
1
1
1
2
2
</code></pre> <p>I want to add a new column and for each row assign the number of times its value occurs, so the table would look like:</p> <pre><code>values  unique_sum
1       3
1       3
1       3
2       2
2       2
</code></pre> <p>I have seen some examples in R, but for Python and pandas I have not come across anything and am stuck. I can list the value counts using <code>.value_counts()</code> and I have tried <code>groupby</code> routines but cannot fathom it.</p>
<p>Just use <code>map</code> to map your column onto its <code>value_counts</code>:</p> <pre><code>&gt;&gt;&gt; x A 0 1 1 1 2 1 3 2 4 2 &gt;&gt;&gt; x['unique'] = x.A.map(x.A.value_counts()) &gt;&gt;&gt; x A unique 0 1 3 1 1 3 2 1 3 3 2 2 4 2 2 </code></pre> <p>(I named the column <code>A</code> instead of <code>values</code>. <code>values</code> is not a great choice for a column name, because DataFrames have a special attribute called <code>values</code>, which prevents you from getting the column with <code>x.values</code> --- you'd have to use <code>x['values']</code> instead.)</p>
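<p>An equivalent (arguably more explicit) variant using <code>groupby</code> with <code>transform</code>, applied to the same frame <code>x</code>:</p> <pre><code>x['unique'] = x.groupby('A')['A'].transform('count')
</code></pre>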
python|pandas
2
852
60,116,795
Create pd series based on conditions on df1, and reporting values from df2 or df3
<p>First post here. I'm new to Python, but have made a lot of progress leveraging the answers posted here to others' questions. Unfortunately I'm having trouble with what seems to be an easy task. I have 3 pandas series, indexed on dates:</p> <pre><code>df1 = {'signal': [0,0,1,1,0,0,1]}  #binary trading signal
df2 = {'SPX': [5,0,5,1,0,5,2]}     #S&amp;P 500 returns
df3 = {'UST': [-1,1,1,0,1,-1,0]}   #10yr Treasury returns
</code></pre> <p>I am trying to create a new series df4 that will represent the return profile of the trading signal. If the signal = 1, get the df3 value on that day, else give me the df2 value (which is for all the zeros).</p> <p>I've found plenty of posts regarding this topic, which seems very simple, but have struggled to make them work. I tried a simple if statement...</p> <pre><code>df4 = df1
if df1 == 1:
    df4.replace(1, df3)
else:
    df4.replace(0, df2)
</code></pre> <p>But I get ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). If I add df1.any(), no change is made.</p> <p>I've also tried and failed to use other solutions...</p> <pre><code>df4 = df1.apply(lambda x: df2 if x == 0 else df3, axis=1)
df4 = df1.loc[df1 == 1, df3] == df2
df4 = df1.select([df1 &gt; 0], [df3], default=df2)
</code></pre> <p>One thing I'm concerned about is that if I replace all the 1s in df4 with a return from df3 and at some point it just so happens the value is a 0... then if I do a second replace for all the 0s in df4, I may place a 0 that should be left alone.</p> <p>Any help to educate me on the most efficient way to do this is very much appreciated.</p>
<p>use <code>Series.where()</code>, specify the column names.</p> <p>see <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.where.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.where.html</a></p> <pre><code>&gt;&gt;&gt; df3.where(df1.signal == 1, other=df2.SPX, axis=0) UST 0 5 1 0 2 1 3 0 4 0 5 5 6 0 </code></pre>
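<p>If you prefer a plainly named result column, an equivalent sketch with <code>numpy.where</code> (assuming, as above, that df1/df2/df3 are DataFrames sharing the same date index):</p> <pre><code>import numpy as np
import pandas as pd

df4 = pd.Series(np.where(df1['signal'] == 1, df3['UST'], df2['SPX']),
                index=df1.index, name='strategy_return')
</code></pre>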
python|pandas|dataframe
1
853
65,279,392
Convert Python Array to String in List [Pandas]
<p><strong>I want to convert pandas data frame from this:</strong></p> <pre><code> label 0 ['hello', 'world'] 1 ['just','string'] </code></pre> <p><strong>To this:</strong></p> <pre><code>0 hello world 1 just string </code></pre> <p><strong>But, my output like this:</strong></p> <pre><code>0 [ ' h e l l o ' , ' w o r l d ' ] 1 [ ' j u s t ' , ' s t r i n g ' ] </code></pre> <p><strong>This is my code:</strong></p> <pre><code>import pandas as pd data=pd.read_csv('testing.csv',nrows=578) temp = [] comment = [] for i in data['comment_clean']: temp = ' '.join(i) comment.append(temp) df = pd.DataFrame(comment, columns=['comment_clean_append']) print (df) </code></pre> <p>this is my dataframe <a href="https://i.stack.imgur.com/kYxpd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kYxpd.png" alt="enter image description here" /></a></p>
<p>You can use <code>str.join</code></p> <pre><code>import pandas as pd df = pd.DataFrame({&quot;col&quot;: [[&quot;Hello&quot;,&quot;world&quot;], [&quot;just&quot;,&quot;string&quot;]]}) df[&quot;col&quot;] = df[&quot;col&quot;].str.join(&quot; &quot;) </code></pre> <p>df is:</p> <pre><code>col 0 Hello world 1 just string </code></pre>
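<p>One caveat for the data in the question: after <code>read_csv</code> the cells are usually plain strings that merely <em>look</em> like lists (which is exactly why joining iterates character by character). If that is the case, parse them first, e.g. with <code>ast.literal_eval</code>:</p> <pre><code>import ast
import pandas as pd

data = pd.read_csv('testing.csv', nrows=578)
data['comment_clean'] = data['comment_clean'].apply(ast.literal_eval).str.join(' ')
</code></pre>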
python|arrays|pandas|csv
0
854
65,255,166
Interesting results with duplicate columns in pandas.DataFrame
<p>Can anyone help to explain why I get errors in some actions and not others when there is a duplicate column in a <code>pandas.DataFrame</code>.</p> <p><strong>Minimal, Reproducible Example</strong></p> <pre><code>import pandas as pd df = pd.DataFrame(columns=['a', 'b', 'b']) </code></pre> <p>If I try and insert a list into <code>column 'a'</code> I get an error about dimension mis-match:</p> <pre><code>df.loc[:, 'a'] = list(range(5)) Traceback (most recent call last): ... ValueError: cannot copy sequence with size 5 to array axis with dimension 0 </code></pre> <p>Similar with <code>'b'</code>:</p> <pre><code>df.loc[:, 'b'] = list(range(5)) Traceback (most recent call last): ... ValueError: could not broadcast input array from shape (5) into shape (0,2) </code></pre> <p>However if I insert into an entirely new column, I don't get an error, unless I insert into <code>'a'</code> or <code>'b'</code>:</p> <pre><code>df.loc[:, 'c'] = list(range(5)) print(df) a b b c 0 NaN NaN NaN 0 1 NaN NaN NaN 1 2 NaN NaN NaN 2 3 NaN NaN NaN 3 4 NaN NaN NaN 4 df.loc[:, 'a'] = list(range(5)) Traceback (most recent call last): ... ValueError: Buffer has wrong number of dimensions (expected 1, got 0) </code></pre> <p>All of these errors disappear if I remove the duplicate column <code>'b'</code></p> <hr /> <p><strong>Additional information</strong></p> <p><code>pandas==1.0.2</code></p>
<p>Why use loc and not just:</p> <pre><code>df['a'] = list(range(5)) </code></pre> <p>This gives no error and seems to produce what you need:</p> <pre><code>a b b 0 NaN NaN 1 NaN NaN 2 NaN NaN 3 NaN NaN 4 NaN NaN </code></pre> <p>same for creating column c:</p> <pre><code>df['c'] = list(range(5)) </code></pre>
python|pandas|dataframe
1
855
50,127,527
How to save training history on every epoch in Keras?
<p>I can't keep my PC running all day long, and for this I need to save training history after every epoch. For example, I have trained my model for 100 epochs in one day, and on the next day, I want to train it for another 50 epochs. I need to generate the loss vs epoch and accuracy vs epoch graphs for the whole 150 epochs. I am using <code>fit_generator</code> method. Is there any way to save the training history after every epoch (most probably using <code>Callback</code>)? I know how to save the training history after the training has ended. I am using Tensorflow backend.</p>
<p>Keras has the CSVLogger callback which appears to do exactly what you need; from the <a href="https://keras.io/api/callbacks/csv_logger/" rel="noreferrer">documentation</a>:</p> <blockquote> <p>Callback that streams epoch results to a CSV file.</p> </blockquote> <p>It has an append parameter for adding to the file. Again, from the documentation:</p> <blockquote> <p><strong>append</strong>: Boolean. True: append if file exists (useful for continuing training). False: overwrite existing file</p> </blockquote> <pre><code>from keras.callbacks import CSVLogger csv_logger = CSVLogger(&quot;model_history_log.csv&quot;, append=True) model.fit_generator(...,callbacks=[csv_logger]) </code></pre>
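<p>Since the motivation is resuming training on another day, the logger is usually combined with a <code>ModelCheckpoint</code> callback so the weights are saved alongside the history; a minimal sketch (file names are placeholders):</p> <pre><code>from keras.callbacks import CSVLogger, ModelCheckpoint

callbacks = [
    CSVLogger(&quot;model_history_log.csv&quot;, append=True),
    ModelCheckpoint(&quot;model_checkpoint.h5&quot;, save_best_only=False),
]
model.fit_generator(..., callbacks=callbacks)
</code></pre>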
python|tensorflow|keras
17
856
50,223,197
Converting Matlab code into Python - FFT
<p>I need to convert a piece of MATLAB code to Python and I'm bad at both. The code in MATLAB uses <code>fft</code> and <code>fftshift</code>. I tried to use NumPy in Python. The code runs but when I compare the outcome they are not matching. I appreciate your help.</p> <p>Here is the MATLAB code: </p> <pre class="lang-matlab prettyprint-override"><code>h(1,1:Modes_number) = -1i*S; hfft = fft(h); hft0 = fftshift(hfft); </code></pre> <p>and here is the Python code which I wrote:</p> <pre class="lang-py prettyprint-override"><code>h = np.zeros((1,self.cfg.Modes_number+1),dtype=complex) for i in range(0, self.cfg.Modes_number+1): h[0,i] = -1j*S; hfft = np.fft.fft(h) hft0 = np.fft.fftshift(hfft) </code></pre> <p>Here is the values for <code>S</code> and <code>Modes_number</code>:</p> <pre><code>S = 12.5022214424; Modes_number = 200; </code></pre> <p>Here is also an example of the results I get in MATLAB and Python:</p> <pre><code>MATLAB: hfft(1,1) ans = 1.1857e-13 - 2.5129e+03i Python: hfft[0] 0. -2.52544873e+03j </code></pre> <p>Cheers.</p>
<p>The error in your Python code is that you define <code>h</code> to be of size <code>Modes_number+1</code>, which is one more than the size in the MATLAB code. The first value in <code>hfft</code> is the sum of all input values. In MATLAB this is <code>-1j*S*200 = -2500.4j</code>, and in your Python code this is <code>-1j*S*201 = -2512.9j</code>. These are the values that you are seeing.</p> <p>This bit of Python code produces the same as your bit of MATLAB code, up to numerical precision (I see some values like <code>-1.68388521e-15 +6.55829989e-15j</code> in Python, which are forced to 0 by MATLAB's algorithms). I am creating <code>h</code> as a one-dimensional vector, rather than a 2D array with one dimension of size 1.</p> <pre><code>import numpy as np S = 12.5022214424 Modes_number = 200 h = np.zeros(Modes_number,dtype=complex) for i in range(0,Modes_number): h[i] = -1j*S; hfft = np.fft.fft(h) hft0 = np.fft.fftshift(hfft) </code></pre> <p>Python:</p> <pre><code>&gt;&gt;&gt; hfft[0] -2500.4442884800001j </code></pre> <p>MATLAB:</p> <pre><code>&gt;&gt; hfft(1) ans = 0.000000000000000e+00 - 2.500444288480000e+03i` </code></pre>
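<p>As a side note (not the cause of the mismatch), the element-wise loop is not needed at all; the constant vector can be built directly:</p> <pre><code>import numpy as np

S = 12.5022214424
Modes_number = 200

h = np.full(Modes_number, -1j*S, dtype=complex)
hfft = np.fft.fft(h)
hft0 = np.fft.fftshift(hfft)
print(hfft[0])   # -2500.4442884800001j
</code></pre>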
python|matlab|numpy|fft
1
857
63,992,639
Pandas to_sql - append vs replace
<p>I'm trying to understand how to modify the to_sql function to my needs. Here's the dataframe <code>df_interface</code>:</p> <pre><code>| YEAR | QUARTER | USER_ACCOUNT | BYTES | USER_CODE | |------|---------|--------------|---------------|-----------| | 2020 | 2 | SHtte34 | 7392577516389 | 2320885 | | 2020 | 2 | Exel2 | 441712306685 | 348995 | </code></pre> <p>I'm trying to insert this into a table <code>USER_USAGE</code> (via oracle+cx and SQLAlchemy). The contents of this table before the insert are:</p> <pre><code>| YEAR | QUARTER | USER_ACCOUNT | BYTES | USER_CODE | |------|---------|--------------|---------------|-----------| | 2020 | 1 | SHtte34 | 34560 | 2320885 | | 2020 | 1 | Exel2 | 5478290 | 348995 | </code></pre> <p>I would like a new row inserted only in the case of a new quarter AND account. Basically I would like this after the insert:</p> <pre><code>| YEAR | QUARTER | USER_ACCOUNT | BYTES | USER_CODE | |------|---------|--------------|---------------|-----------| | 2020 | 1 | SHtte34 | 7392577516389 | 2320885 | | 2020 | 1 | Exel2 | 5478290 | 348995 | | 2020 | 2 | SHtte34 | 7392577516389 | 2320885 | | 2020 | 2 | Exel2 | 441712306685 | 348995 | </code></pre> <p>Here's the code with &quot;replace&quot;:</p> <pre><code>conn = create_engine('oracle+cx_oracle://{}:{}@{}/?service_name={}'.format(s_uid,s_pwd,s_db,s_service)) df_interface.to_sql('USER_USAGE', conn, if_exists='replace',dtype={'USER_ACCOUNT': types.String(df_interface.USER_ACCOUNT.str.len().max()),'USER_CODE': types.String(df_interface.USER_CODE.str.len().max())},index=False) </code></pre> <p>This seems to be removing the previous quarter(1) values as well. Output after replace:</p> <pre><code> | YEAR | QUARTER | USER_ACCOUNT | BYTES | USER_CODE | |------|---------|--------------|---------------|-----------| | 2020 | 2 | SHtte34 | 7392577516389 | 2320885 | | 2020 | 2 | Exel2 | 441712306685 | 348995 | </code></pre> <p>Append is closer to what I want to see, however if I accidentally run the program twice, I'm seeing duplicated rows:</p> <pre><code>| YEAR | QUARTER | USER_ACCOUNT | BYTES | USER_CODE | |------|---------|--------------|---------------|-----------| | 2020 | 1 | SHtte34 | 7392577516389 | 2320885 | | 2020 | 1 | Exel2 | 5478290 | 348995 | | 2020 | 2 | SHtte34 | 7392577516389 | 2320885 | | 2020 | 2 | Exel2 | 441712306685 | 348995 | | 2020 | 1 | SHtte34 | 7392577516389 | 2320885 | | 2020 | 1 | Exel2 | 5478290 | 348995 | | 2020 | 2 | SHtte34 | 7392577516389 | 2320885 | | 2020 | 2 | Exel2 | 441712306685 | 348995 | </code></pre> <p>How do I use the &quot;append&quot; but also prevent duplicates from getting created in case of an inadvertent run?</p>
<p>The <code>if_exists</code> argument refers to the table as a whole, not individual rows within the table. <code>if_exists=&quot;replace&quot;</code> means &quot;if the table exists then drop it and create a new one with the rows in the DataFrame&quot;, whereas <code>if_exists=&quot;append&quot;</code> means &quot;append the DataFrame rows to the existing table&quot;.</p> <p>If potentially you only want to insert some (or none) of the rows into an existing table then you can't use <code>to_sql</code> to insert them directly. Instead, you could:</p> <p>• Create a temporary table (e.g., <code>USER_USAGE_TEMP</code>) with the same structure as the main <code>USER_USAGE</code> table.</p> <p>• Use <code>to_sql</code> to upload the DataFrame to the temp table (with <code>if_exists=&quot;append&quot;</code>).</p> <p>• Execute an INSERT statement like</p> <pre class="lang-sql prettyprint-override"><code>INSERT INTO USER_USAGE (YEAR, QUARTER, USER_ACCOUNT, BYTES, USER_CODE)
SELECT YEAR, QUARTER, USER_ACCOUNT, BYTES, USER_CODE
FROM USER_USAGE_TEMP
WHERE NOT EXISTS (
    SELECT * FROM USER_USAGE UU
    WHERE UU.YEAR = USER_USAGE_TEMP.YEAR
      AND UU.QUARTER = USER_USAGE_TEMP.QUARTER
)
</code></pre>
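<p>A rough sketch of how that could look with the engine from the question (table and column names as in the question; I've also added <code>USER_ACCOUNT</code> to the duplicate check, since the requirement is &quot;new quarter AND account&quot;; treat this as a starting point rather than a drop-in solution):</p> <pre><code>from sqlalchemy import text

# 1) upload the DataFrame into the temporary table
df_interface.to_sql('USER_USAGE_TEMP', conn, if_exists='replace', index=False)

# 2) insert only the rows that are not already present
insert_sql = text(&quot;&quot;&quot;
    INSERT INTO USER_USAGE (YEAR, QUARTER, USER_ACCOUNT, BYTES, USER_CODE)
    SELECT YEAR, QUARTER, USER_ACCOUNT, BYTES, USER_CODE
    FROM USER_USAGE_TEMP T
    WHERE NOT EXISTS (
        SELECT * FROM USER_USAGE UU
        WHERE UU.YEAR = T.YEAR
          AND UU.QUARTER = T.QUARTER
          AND UU.USER_ACCOUNT = T.USER_ACCOUNT
    )
&quot;&quot;&quot;)

with conn.begin() as connection:
    connection.execute(insert_sql)
</code></pre>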
python|pandas|oracle|sqlalchemy
2
858
64,014,323
Pandas: Only the last row is appearing
<p>While the definitions of both tissue and tube can be seen using the print function, only tube shows up in pandas.</p> <pre><code>from bs4 import BeautifulSoup as bs import re from requests import get import requests import numpy as np import pandas as pd def get_soup(url): soup = bs(requests.get(url).content, &quot;html.parser&quot;) return soup.select(&quot;#MainTxt&quot;)[0].select('.ds-single')[0].text.strip() def lookup(word): base_url = &quot;http://www.thefreedictionary.com/&quot; query_url = base_url + word return get_soup(query_url) Terms = [&quot;Tissues&quot;, &quot;tube&quot;] for i in Terms: m = lookup(i) r = print(&quot;Word:&quot; + &quot; &quot; + i + &quot; | &quot; + m) data = {'Word':[i],'Definition':[m]} print(&quot;\n&quot;) # Create DataFrame df = pd.DataFrame(data) df </code></pre> <p><a href="https://i.stack.imgur.com/KiLzH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KiLzH.png" alt="enter image description here" /></a></p>
<p>In the following line</p> <pre class="lang-py prettyprint-override"><code>data = {'Word':[i],'Definition':[m]}
</code></pre> <p>you are overwriting your dictionary, i.e. the <code>data</code> variable; because of that your dataframe contains only one row. You can instead create two empty lists for Word and Definition, keeping the respective terms as keys of the dictionary, and then append the values to them.</p> <p>The modified code would look something like this:</p> <pre class="lang-py prettyprint-override"><code>data = {'Word': [], 'Definition': []}

for i in Terms:
    m = lookup(i)
    r = print(&quot;Word:&quot; + &quot; &quot; + i + &quot; | &quot; + m)
    data['Word'].append(i)
    data['Definition'].append(m)
    print(&quot;\n&quot;)
</code></pre>
python|pandas|google-colaboratory
2
859
64,022,213
Python pandas fill missing value (NaN) based on condition of another column
<p>I have figured out how to fill the NaN values with the previous cell by using <code>df.fillna(method='ffill')</code>.</p> <p>However, I am not sure how to base it on a condition that if the country name differs from the country name in its previous cell, then the total case cell value should be 0, otherwise replace <code>NaN</code> with the previous cell's total case value.</p>
<p>Simply using <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.groupby.html" rel="nofollow noreferrer"><code>groupby</code></a> with <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.fillna.html" rel="nofollow noreferrer"><code>fillna</code></a> will give the desired result. <code>columns</code> here are all the columns you want to apply the missing value logic to.</p> <pre><code>columns = ['total_cases', 'total_deaths', ...]
df[columns] = df.groupby('location')[columns].fillna(method='ffill').fillna(0)
</code></pre> <p>Note that you need to apply <code>fillna</code> twice, once with forward fill and once with a constant 0, to fill all nan values. This is to make sure that any nans that start in a new group are filled with 0.</p>
python|pandas
1
860
47,006,726
Panda in Python skips computation on first line
<p>I have a file with the following content:</p> <pre><code>0.08300343840033242
0.5721455830484666
0.46518116038504165
</code></pre> <p>I ran the following script on it:</p> <pre><code>import pandas as pd
import csv

df = pd.read_csv('circle1.csv')
df1 = df**2
print df1
</code></pre> <p>The problem in the output is that pandas skips the computation on the first line but squares the rest of the numbers:</p> <pre><code>C:\Python27\python.exe C:/Users/Deepak/PycharmProjects/Submission/balnk.py
   0.08300343840033242
0             0.327351
1             0.216394
</code></pre> <p>What is causing this trouble and what can be done to resolve it?</p>
<p>You're not loading your data properly, it appears your CSV has no headers. In which case, specify <code>header=None</code>.</p> <pre><code>df = pd.read_csv('circle1.csv', header=None, names=['Value']) df Value 0 0.083003 1 0.572146 2 0.465181 </code></pre> <hr> <pre><code>df ** 2 Value 0 0.006889 1 0.327351 2 0.216393 </code></pre>
python|pandas|csv
0
861
46,953,310
Importing Excel into Panda Dataframe
<p>The following is only the beginning of a Coursera assignment on Data Science. I hope this is not too trivial, but I am lost on this and could not find an answer. I am asked to import an Excel file into a pandas dataframe and to manipulate it afterwards. The file can be found here: <a href="http://unstats.un.org/unsd/environment/excel_file_tables/2013/Energy%20Indicators.xls" rel="nofollow noreferrer">http://unstats.un.org/unsd/environment/excel_file_tables/2013/Energy%20Indicators.xls</a></p> <p>What makes it difficult for me is that a) there is an 'overhead' of 17 lines and a footer, b) the first two columns are empty, and c) the index column has no header name.</p> <p>After hours of searching and reading I came up with this useless line:</p> <pre><code>energy=pd.read_excel('Energy Indicators.xls',
                     sheetname='Energy',
                     header=16,
                     skiprows=[17],
                     skipfooter=38,
                     skipcolumns=2
                     )
</code></pre> <p>This seems to produce a multiindex dataframe, though the command energy.head() returns nothing.</p> <p>I have two questions:</p> <ol> <li>What did I do wrong? Up to this exercise I thought I understood the dataframe, but now I am totally clueless and lost :-((</li> <li>How do I have to tackle this? What do I have to do to get this Excel data into a dataframe with the index consisting of the countries?</li> </ol> <p>Thanks.</p>
<p>I think you need to add these parameters:</p> <ul> <li><code>index_col</code> to convert a column to the index</li> <li><code>usecols</code> - parse columns by position</li> <li>change the header position to <code>15</code></li> </ul> <hr /> <pre><code>energy=pd.read_excel('Energy Indicators.xls',
                     sheet_name='Energy',
                     skiprows=[17],
                     skipfooter=38,
                     header=15,
                     index_col=[0],
                     usecols=[2,3,4,5]
                     )

print (energy.head())
                Energy Supply  Energy Supply per capita  \
Afghanistan               321                        10
Albania                   102                        35
Algeria                  1959                        51
American Samoa            ...                       ...
Andorra                     9                       121

                Renewable Electricity Production
Afghanistan                            78.669280
Albania                               100.000000
Algeria                                 0.551010
American Samoa                          0.641026
Andorra                                88.695650
</code></pre>
python|excel|pandas|dataframe|import
3
862
38,877,766
Converting pandas dataframe into list of tuples with index
<p>I'm currently trying to convert a pandas dataframe into a list of tuples. However I'm having difficulties getting the Index (which is the Date) for the values in the tuple as well. My first step was going here, but they do not add any index to the tuple.</p> <p><a href="https://stackoverflow.com/questions/9758450/pandas-convert-dataframe-to-array-of-tuples">Pandas convert dataframe to array of tuples</a></p> <p>My only problem is accessing the index for each row in the numpy array. I have one solution shown below, but it uses an additional counter <code>indexCounter</code> and it looks sloppy. I feel like there should be a more elegant solution to retrieving an index from a particular numpy array.</p> <pre><code>def get_Quandl_daily_data(ticker, start, end):
    prices = []
    symbol = format_ticker(ticker)
    try:
        data = quandl.get("WIKI/" + symbol, start_date=start, end_date=end)
    except Exception, e:
        print "Could not download QUANDL data: %s" % e

    subset = data[['Open','High','Low','Close','Adj. Close','Volume']]

    indexCounter = 0
    for row in subset.values:
        dateIndex = subset.index.values[indexCounter]
        tup = (dateIndex, "%.4f" % row[0], "%.4f" % row[1], "%.4f" % row[2],
               "%.4f" % row[3], "%.4f" % row[4], row[5])
        prices.append(tup)
        indexCounter += 1
</code></pre> <p>Thanks in advance for any help!</p>
<p>You can iterate over the result of <code>to_records(index=True)</code>.</p> <p>Say you start with this:</p> <pre><code>In [6]: df = pd.DataFrame({'a': range(3, 7), 'b': range(1, 5), 'c': range(2, 6)}).set_index('a') In [7]: df Out[7]: b c a 3 1 2 4 2 3 5 3 4 6 4 5 </code></pre> <p>then this works, except that it does not include the index (<code>a</code>):</p> <pre><code>In [8]: [tuple(x) for x in df.to_records(index=False)] Out[8]: [(1, 2), (2, 3), (3, 4), (4, 5)] </code></pre> <p>However, if you pass <code>index=True</code>, then it does what you want:</p> <pre><code>In [9]: [tuple(x) for x in df.to_records(index=True)] Out[9]: [(3, 1, 2), (4, 2, 3), (5, 3, 4), (6, 4, 5)] </code></pre>
python|pandas|numpy|tuples
10
863
38,619,143
Convert Python sequence to NumPy array, filling missing values
<p>The implicit conversion of a Python sequence of <em>variable-length</em> lists into a NumPy array causes the array to be of type <em>object</em>.</p> <pre><code>v = [[1], [1, 2]]
np.array(v)
&gt;&gt;&gt; array([[1], [1, 2]], dtype=object)
</code></pre> <p>Trying to force another type will cause an exception:</p> <pre><code>np.array(v, dtype=np.int32)
ValueError: setting an array element with a sequence.
</code></pre> <p>What is the most efficient way to get a dense NumPy array of type int32, by filling the "missing" values with a given placeholder?</p> <p>From my sample sequence <code>v</code>, I would like to get something like this, if 0 is the placeholder:</p> <pre><code>array([[1, 0],
       [1, 2]], dtype=int32)
</code></pre>
<p>You can use <a href="https://docs.python.org/3.4/library/itertools.html#itertools.zip_longest">itertools.zip_longest</a>:</p> <pre><code>import itertools np.array(list(itertools.zip_longest(*v, fillvalue=0))).T Out: array([[1, 0], [1, 2]]) </code></pre> <p>Note: For Python 2, it is <a href="https://docs.python.org/2/library/itertools.html#itertools.izip_longest">itertools.izip_longest</a>.</p>
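<p>An alternative sketch that skips the transpose and gives you the <code>int32</code> dtype directly: preallocate the padded array and fill it row by row:</p> <pre><code>import numpy as np

v = [[1], [1, 2]]
out = np.zeros((len(v), max(map(len, v))), dtype=np.int32)   # 0 as the placeholder
for i, row in enumerate(v):
    out[i, :len(row)] = row

print(out)
# [[1 0]
#  [1 2]]
</code></pre>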
python|arrays|numpy|sequence|variable-length-array
34
864
63,126,775
header and skiprows difference in pandas unclear
<p>Can anyone please elaborate, with a good example, on the difference between header and skiprows in the syntax of pd.read_excel(&quot;name&quot;, header=number, skiprows=number)?</p>
<p>You can follow <a href="https://towardsdatascience.com/import-csv-files-as-pandas-dataframe-with-skiprows-skipfooter-usecols-index-col-and-header-fbf67a2f92a" rel="nofollow noreferrer">this article</a>, which explains the difference between the parameters <code>header</code> and <code>skiprows</code> with examples from the olympic dataset, which can be downloaded <a href="https://github.com/rashida048/Datasets/blob/master/olympics.csv" rel="nofollow noreferrer">here</a>.</p> <p>To summarize: the default behavior for <code>pd.read()</code> is to read in all of the rows, which in the case of this dataset, includes an unnecessary first row of row numbers.</p> <pre><code>import pandas as pd df = pd.read_csv('olympics.csv') df.head() </code></pre> <hr /> <pre><code> 0 1 2 3 4 ... 11 12 13 14 15 0 NaN № Summer 01 ! 02 ! 03 ! ... № Games 01 ! 02 ! 03 ! Combined total 1 Afghanistan (AFG) 13 0 0 2 ... 13 0 0 2 2 2 Algeria (ALG) 12 5 2 8 ... 15 5 2 8 15 3 Argentina (ARG) 23 18 24 28 ... 41 18 24 28 70 4 Armenia (ARM) 5 1 2 9 ... 11 1 2 9 12 </code></pre> <p>However the parameter <code>skiprows</code> allows you to delete one or more rows when you read in the .csv file:</p> <pre><code>df1 = pd.read_csv('olympics.csv', skiprows = 1) df1.head() </code></pre> <hr /> <pre><code> Unnamed: 0 № Summer 01 ! 02 ! ... 01 !.2 02 !.2 03 !.2 Combined total 0 Afghanistan (AFG) 13 0 0 ... 0 0 2 2 1 Algeria (ALG) 12 5 2 ... 5 2 8 15 2 Argentina (ARG) 23 18 24 ... 18 24 28 70 3 Armenia (ARM) 5 1 2 ... 1 2 9 12 4 Australasia (ANZ) [ANZ] 2 3 4 ... 3 4 5 12 </code></pre> <p>And if you want to skip a bunch of different rows, you can do the following (notice the missing countries):</p> <pre><code>df2 = pd.read_csv('olympics.csv', skiprows = [0, 2, 3]) df2.head() </code></pre> <hr /> <pre><code> Unnamed: 0 № Summer 01 ! 02 ! ... 01 !.2 02 !.2 03 !.2 Combined total 0 Argentina (ARG) 23 18 24 ... 18 24 28 70 1 Armenia (ARM) 5 1 2 ... 1 2 9 12 2 Australasia (ANZ) [ANZ] 2 3 4 ... 3 4 5 12 3 Australia (AUS) [AUS] [Z] 25 139 152 ... 144 155 181 480 4 Austria (AUT) 26 18 33 ... 77 111 116 304 </code></pre> <p>The <code>header</code> parameter tells you where to start reading in the .csv, which in the following case, does the same thing as <code>skiprows = 1</code>:</p> <pre><code># this gives the same result as df1 = pd.read_csv(‘olympics.csv’, skiprows = 1) df4 = pd.read_csv('olympics.csv', header = 1) df4.head() </code></pre> <hr /> <pre><code> Unnamed: 0 № Summer 01 ! 02 ! ... 01 !.2 02 !.2 03 !.2 Combined total 0 Afghanistan (AFG) 13 0 0 ... 0 0 2 2 1 Algeria (ALG) 12 5 2 ... 5 2 8 15 2 Argentina (ARG) 23 18 24 ... 18 24 28 70 3 Armenia (ARM) 5 1 2 ... 1 2 9 12 4 Australasia (ANZ) [ANZ] 2 3 4 ... 3 4 5 12 </code></pre> <p>However you cannot use the header parameter to skip a bunch of different rows. You would not be able to replicate df2 using the header parameter. Hopefully this clears things up.</p>
python|excel|pandas
3
865
63,317,748
calculate mean of cells from different dataframes
<p>I want to calculate the mean of multiple cells from different dataframes. I have calculated the correlation between variables with <code>df.corr()</code> and I have to do this another 9 times and calculate the mean of the correlation of each variable.</p> <p>For example, the first dataframe with correlations I got as a result could be this:</p> <pre><code>     a     b     c
  __________________
a    1     0.2   0.3
b    0.2   1     0.4
c    0.3   0.4   1
</code></pre> <p>The second correlation dataframe could be this:</p> <pre><code>     a     b     c
  __________________
a    1     0.3   0.2
b    0.3   1     0.4
c    0.2   0.4   1
</code></pre> <p>And I would like to obtain a final dataframe with the mean of each one of the cells considering all the dataframes.</p> <pre><code>df_result
     a     b     c
  __________________
a    1     0.25  0.25
b    0.25  1     0.4
c    0.25  0.4   1
</code></pre>
<p>It's pretty straightforward, you could just do:</p> <pre class="lang-py prettyprint-override"><code>(df1.corr() + df2.corr()) / 2
</code></pre> <p>as the two dataframes have the same columns.</p>
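<p>With ten (or any number of) dataframes the same idea generalizes; <code>list_of_dfs</code> below is a placeholder name for your collection of dataframes:</p> <pre><code>corrs = [df.corr() for df in list_of_dfs]
mean_corr = sum(corrs) / len(corrs)
</code></pre>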
python-3.x|pandas|dataframe
1
866
63,021,958
How can l extract a section of the pandas dataframe like marked in the picture below?
<p><img src="https://i.stack.imgur.com/xiGWo.jpg" alt="Click here to open the marked image" /></p> <p>I am trying to extract a section (matrix) of the numbers in a pandas dataframe, as marked in the picture embedded above.<br /> Can anyone assist me? I want to perform analytics based on this section (matrix) of a bigger data frame. Thank you in advance!!</p>
<p>You can use the .iloc[] function to select the rows and columns you want.</p> <pre><code>dataframe.iloc[5:15,6:15]
</code></pre> <p>This should select rows 5-14 and columns 6-14. Not sure if the numbers are correct but I think this method is what you were looking for.</p> <p>edit: changed .loc[] to .iloc[] because we're using index values, and cleaned it up a bit</p> <p>Here is the code to iterate over the whole dataframe:</p> <pre><code># df = big data frame
shape = (10,10)   # shape of the matrix to be analyzed, here 10x10
step = 1          # step size, iterate over every number
# or step = 10    # step size, iterate block by block
# keep in mind, iterating by block will leave some data out at the end of the rows and columns
# you can set step = shape if you are working with a matrix that isn't square,
# just be sure to change step in the code below to step[0] and step[1] respectively

for row in range(0, len(df[0]) - shape[0]+1, step):              # number of rows of big dataframe - number of rows of matrix to be analyzed
    for col in range(0, len(df.iloc[0,:]) - shape[1]+1, step):   # number of columns of big dataframe - number of columns of matrix to be analyzed
        matrix = df.iloc[row:shape[0]+row, col:shape[1]+col]     # slice out matrix and set it equal to 'matrix'
        # analyze matrix here
</code></pre> <p>This is basically the same as @dafmedinama said, I just added more commenting and simplified specifying the shape of the matrix, as well as included a step variable if you don't want to iterate over every single number every time you move the matrix.</p>
python|pandas
2
867
63,094,112
Return Pandas entry in specific format?
<p>Right now, I'm searching through a pandas dataframe for entries that match a certain username. It's returning stuff like this: <code>{&quot;username&quot;:{&quot;0&quot;:&quot;user&quot;,&quot;1&quot;:&quot;user&quot;,&quot;2&quot;:&quot;user&quot;},&quot;title&quot;:{&quot;0&quot;:&quot;Title&quot;,&quot;1&quot;:&quot;asdfasdfasdf&quot;,&quot;2&quot;:&quot;Bob&quot;},&quot;start&quot;:{&quot;0&quot;:&quot;2020-07-10&quot;,&quot;1&quot;:&quot;2020-07-25&quot;,&quot;2&quot;:&quot;2020-07-10&quot;},&quot;end&quot;:{&quot;0&quot;:&quot;2020-08-01&quot;,&quot;1&quot;:&quot;2020-07-25&quot;,&quot;2&quot;:&quot;2020-07-11&quot;},&quot;startTime&quot;:{&quot;0&quot;:&quot;2020-07-25T14:24&quot;,&quot;1&quot;:&quot;2020-07-25T14:25&quot;,&quot;2&quot;:&quot;2020-07-25T19:29&quot;},&quot;endTime&quot;:{&quot;0&quot;:&quot;2020-07-31T14:24&quot;,&quot;1&quot;:&quot;2020-07-25T14:25&quot;,&quot;2&quot;:&quot;2020-07-25T14:32&quot;},&quot;color&quot;:{&quot;0&quot;:&quot;#000000&quot;,&quot;1&quot;:&quot;#000000&quot;,&quot;2&quot;:&quot;#ff0000&quot;}}</code></p> <p>Is there a way to return values from the pandas dataframe in another format, such as this? <code>{username: user, Title: asdsdfs, startTime: 2020-07-25T14:24}, {username: user, Title: asdsdfs, startTime: 2020-07-25T14:24}</code> Sorry if this is a really obvious question, I'm doing this for a school related activity and I need the output in this format for another program of ours to work.</p>
<p>Here is how you can use a nested dictionary comprehension:</p> <pre><code>d = {&quot;username&quot;:{&quot;0&quot;:&quot;user&quot;,&quot;1&quot;:&quot;user&quot;,&quot;2&quot;:&quot;user&quot;}, &quot;title&quot;:{&quot;0&quot;:&quot;Title&quot;,&quot;1&quot;:&quot;asdfasdfasdf&quot;,&quot;2&quot;:&quot;Bob&quot;}, &quot;start&quot;:{&quot;0&quot;:&quot;2020-07-10&quot;,&quot;1&quot;:&quot;2020-07-25&quot;,&quot;2&quot;:&quot;2020-07-10&quot;}, &quot;end&quot;:{&quot;0&quot;:&quot;2020-08-01&quot;,&quot;1&quot;:&quot;2020-07-25&quot;,&quot;2&quot;:&quot;2020-07-11&quot;}, &quot;startTime&quot;:{&quot;0&quot;:&quot;2020-07-25T14:24&quot;,&quot;1&quot;:&quot;2020-07-25T14:25&quot;,&quot;2&quot;:&quot;2020-07-25T19:29&quot;}, &quot;endTime&quot;:{&quot;0&quot;:&quot;2020-07-31T14:24&quot;,&quot;1&quot;:&quot;2020-07-25T14:25&quot;,&quot;2&quot;:&quot;2020-07-25T14:32&quot;}, &quot;color&quot;:{&quot;0&quot;:&quot;#000000&quot;,&quot;1&quot;:&quot;#000000&quot;,&quot;2&quot;:&quot;#ff0000&quot;}} c = [{k:v for k,v in zip(d,i)} for i in zip(*[d[a].values() for a in d])] print(c) </code></pre> <p>Output:</p> <pre><code>[{'username': 'user', 'title': 'Title', 'start': '2020-07-10', 'end': '2020-08-01', 'startTime': '2020-07-25T14:24', 'endTime': '2020-07-31T14:24', 'color': '#000000'}, {'username': 'user', 'title': 'asdfasdfasdf', 'start': '2020-07-25', 'end': '2020-07-25', 'startTime': '2020-07-25T14:25', 'endTime': '2020-07-25T14:25', 'color': '#000000'}, {'username': 'user', 'title': 'Bob', 'start': '2020-07-10', 'end': '2020-07-11', 'startTime': '2020-07-25T19:29', 'endTime': '2020-07-25T14:32', 'color': '#ff0000'}] </code></pre>
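<p>If the rows are still in a DataFrame (as in the question) rather than already dumped to this nested dictionary, pandas can produce that list-of-records shape directly. A short sketch, assuming the filtered frame is called <code>df</code>:</p> <pre><code>df[['username', 'title', 'startTime']].to_dict(orient='records') </code></pre>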
python|pandas
0
868
67,936,385
Unable to import 'pandas_profiling' module
<p>I have installed 'pandas_profiling' through <code>conda install -c conda-forge pandas-profiling</code> in the base environment. I could see through the <code>conda list</code> that pandas_profiling has been installed correctly (snapshot attached), <a href="https://i.stack.imgur.com/gIwj3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gIwj3.png" alt="enter image description here" /></a></p> <p>When I try to <code>import pandas_profiling</code> I receive ModuleNotFoundError</p> <pre><code>import pandas_profiling Traceback (most recent call last): File &quot;&lt;ipython-input-4-60d2bac64bfc&gt;&quot;, line 1, in &lt;module&gt; import pandas_profiling ModuleNotFoundError: No module named 'pandas_profiling' </code></pre> <p>Update: output of <code>import sys; print(sys.path); print(sys.prefix)</code></p> <pre><code>['/home/user1/miniconda3/lib/python38.zip', '/home/user1/miniconda3/lib/python3.8', '/home/user1/miniconda3/lib/python3.8/lib-dynload', '', '/home/user1/miniconda3/lib/python3.8/site-packages', '/home/user1/miniconda3/lib/python3.8/site-packages/IPython/extensions', '/home/user1/.ipython'] /home/user1/miniconda3 </code></pre>
<p>Occasionally you will encounter this error if you import a package from the current notebook. It is important to ensure that the pip version is associated with the current Python kernel. That way, the installed packages can be used in the current notebook.</p> <p>As detailed <a href="https://jakevdp.github.io/blog/2017/12/05/installing-python-packages-from-jupyter/" rel="nofollow noreferrer">here</a>, the shell environment and the Python executable are disconnected.</p> <p>This should work for you</p> <pre><code>import sys !{sys.executable} -m pip install pandas-profiling </code></pre>
python|conda|pandas-profiling
0
869
31,742,495
Pandas DataFrame get substrings from column
<p>I have a column named "KL" with for example:</p> <pre><code>sem_0405M4209F2057_1.000 sem_A_0103M5836F4798_1.000 </code></pre> <p>Now I want to extract the four digits after "M" and the four digits after "F". But with <code>df["KL"].str.extract</code> I can't get it to work.</p> <p>Locations of M and F vary, thus just using the slice <code>[9:13]</code> won't work for the complete column.</p>
<p>If you want to use <code>str.extract</code>, here's how:</p> <pre><code>&gt;&gt;&gt; df['KL'].str.extract(r'M(?P&lt;M&gt;[0-9]{4})F(?P&lt;F&gt;[0-9]{4})') M F 0 4209 2057 1 5836 4798 </code></pre> <p>Here, <code>M(?P&lt;M&gt;[0-9]{4})</code> matches the character <code>'M'</code> and then captures 4 digits following it (the <code>[0-9]{4}</code> part). This is put in the column <code>M</code> (specified with <code>?P&lt;M&gt;</code> inside the capturing group). The same thing is done for <code>F</code>.</p>
python|pandas|dataframe
1
870
31,948,243
Pandas (0.16.2) Show 3 Rows of Dataframe
<p>I'm trying to limit the output of the pandas dataframe to the first 3 rows. However I get a summary of all 500000 data points. When I run this without specifying "Time [s]" as the index it works properly and I only get 3 rows of data. I'm running Pandas 0.16.2 and Python 3.</p> <pre><code>%matplotlib inline import pandas as pd from sys import platform as _platform import matplotlib.pyplot as plt pd.set_option('display.mpl_style', 'default') # Make the graphs a bit prettier plt.rcParams['figure.figsize'] = (15, 5) shot1 = "C:\\Users\ELEC-LAB_1\Documents\GitHub\Black Powder\BlackPowder_Shot-1.csv" shot1_data = pd.read_csv(shot1, parse_dates=['Time [s]'], index_col='Time [s]') # Read the data using the time as the index. shot1_data[:3] </code></pre> <p>Time [s] CH 1 [psi] CH 2 [psi] CH 3 [psi] CH 4 [psi] CH 5 [psi] CH 6 [psi] CH 7 [psi] CH 8 [psi] CH 9 [psi] CH 10 [psi] CH 16 [V]</p> <p>-0.200000 -0.018311 -0.054932 -0.012207 -0.054932 -0.006104 -0.048828 -0.030518 -0.018311 0.030518 -0.018311 0.011597 </p> <p>-0.199998 0.006104 0.109863 0.048828 0.048828 -0.018311 -0.054932 0.042725 0.054932 -0.042725 0.024414 0.010986 </p> <p>-0.199996 0.012207 -0.042725 0.061035 -0.097656 0.067139 0.006104 -0.054932 -0.067139 -0.097656 -0.134277 0.010986 </p> <p>-0.199994 0.012207 -0.006104 -0.079346 0.036621 -0.036621 0.042725 0.006104 0.067139 0.012207 -0.042725 0.011597 </p> <p>-0.199992 0.006104 0.067139 0.091553 0.091553 0.024414 0.012207 0.097656 -0.030518 -0.024414 0.061035 0.010986 </p> <p>-0.199990 0.036621 0.006104 0.061035 0.109863 0.073242 0.067139 0.109863 -0.054932 0.158691 0.000000 0.011597 </p> <p>500000 rows × 11 columns</p>
<p>You're trying to slice a df which has a <code>datetimeindex</code> using integer values that are not valid which is why you get the full df.</p> <p>Example:</p> <pre><code>In [34]: df = pd.DataFrame(index=pd.date_range(start=dt.datetime(2015,1,1), end=dt.datetime(2015,1,10))) df[:3] Out[34]: Empty DataFrame Columns: [] Index: [2015-01-01 00:00:00, 2015-01-02 00:00:00, 2015-01-03 00:00:00, 2015-01-04 00:00:00, 2015-01-05 00:00:00, 2015-01-06 00:00:00, 2015-01-07 00:00:00] </code></pre> <p>If you used <code>head</code> or <code>iloc[:3]</code> then you get the desired result:</p> <pre><code>In [35]: df.head(3) Out[35]: Empty DataFrame Columns: [] Index: [2015-01-01 00:00:00, 2015-01-02 00:00:00, 2015-01-03 00:00:00] In [36]: df.iloc[:3] Out[36]: Empty DataFrame Columns: [] Index: [2015-01-01 00:00:00, 2015-01-02 00:00:00, 2015-01-03 00:00:00] </code></pre> <p>That is why when you don't pass param <code>index_col='Time [s]'</code> to <code>read_csv</code> your line of code works as a default <code>int64</code> index is created for you.</p>
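<p>If label-based selection is what you actually want on a <code>DatetimeIndex</code>, <code>.loc</code> with date strings works and includes both endpoints, e.g. for the frame above:</p> <pre><code>df.loc['2015-01-01':'2015-01-03'] </code></pre>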
pandas|ipython-notebook
1
871
41,560,053
Python reshaping array in a certain order using np.reshape
<p>I have an array (a) that is the shape <code>(1800,144)</code> where <code>a[0:900,:]</code> are all real numbers and the second half of the array <code>a[900:1800,:]</code> are all zeros. I want to take the second half of the array and put it next to the first half horizontally and push them together so that the new array shape (a) will be <code>(900,288)</code> and the array, a, will look like this:</p> <pre><code>[[1,2,3,......,0,0,0], [1,2,3,......,0,0,0], ... ] </code></pre> <p>if that makes sense.</p> <p>when I try to use <code>np.reshape(a,(900,288))</code> it doesn't exactly do what I want. It makes the array all real numbers from <code>a[0:450,:]</code> and zeros from <code>a[450:900,:]</code>. I want all of the zeros to be tacked onto the second dimension so that from <code>a[0:900,0:144]</code> is all real numbers and <code>a[0:900,144:288]</code> are all zeros.</p> <p>Is there an easy way to do this?</p>
<p>sorry, this is too big for a comment, so I will post it here. If you have a long array and you need to split it and reassemble it, there are other methods that can accomplish this. This example shows how to assemble an equally sized sequence of numbers into a single array.</p> <pre><code>a = np.arange(100) &gt;&gt;&gt; b = np.split(a,10) &gt;&gt;&gt; c = np.c_[b] &gt;&gt;&gt; c array([[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9], [10, 11, 12, 13, 14, 15, 16, 17, 18, 19], [20, 21, 22, 23, 24, 25, 26, 27, 28, 29], [30, 31, 32, 33, 34, 35, 36, 37, 38, 39], [40, 41, 42, 43, 44, 45, 46, 47, 48, 49], [50, 51, 52, 53, 54, 55, 56, 57, 58, 59], [60, 61, 62, 63, 64, 65, 66, 67, 68, 69], [70, 71, 72, 73, 74, 75, 76, 77, 78, 79], [80, 81, 82, 83, 84, 85, 86, 87, 88, 89], [90, 91, 92, 93, 94, 95, 96, 97, 98, 99]]) </code></pre> <p>so you can split a sequence easily and reassemble it easily. You could reorder the sequence of stacking if you want. Perhaps that is easier to show in this sequence.</p> <pre><code>d = np.r_[b[5:],b[:5]].ravel() &gt;&gt;&gt; d array([50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49]) </code></pre> <p>This example simply shows that you can take the last five split sequences and throw them into the front of the pile. It shouldn't take long to figure out that if you have a series of values, even of unequal length, you can place them in a list and reassemble them using np.c_ and np.r_ convenience functions (np.c_ would normally expect equal sized arrays). </p> <p>So not a solution to your specific case perhaps, but some suggestions on how to reassemble samples in various ways.</p>
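<p>For the specific shapes in the question, where the bottom half of a (1800, 144) array should sit to the right of the top half, a plain horizontal stack gives the (900, 288) result. A minimal sketch, assuming the array is called <code>a</code>:</p> <pre><code>top, bottom = a[:900, :], a[900:, :]
result = np.hstack((top, bottom))  # shape (900, 288); the zero block fills columns 144:288
</code></pre>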
python|arrays|numpy|reshape
1
872
41,561,011
Reset the index for a pandas DataFrame created from a groupby or pivot?
<p>I have data that contains prices, volumes and other data about various financial securities. My input data looks like the following:</p> <pre><code>import numpy as np import pandas prices = np.random.rand(15) * 100 volumes = np.random.randint(15, size=15) * 10 idx = pandas.Series([2007, 2007, 2007, 2007, 2007, 2008, 2008, 2008, 2008, 2008, 2009, 2009, 2009, 2009, 2009], name='year') df = pandas.DataFrame.from_items([('price', prices), ('volume', volumes)]) df.index = idx # BELOW IS AN EXMPLE OF WHAT INPUT MIGHT LOOK LIKE # IT WON'T BE EXACT BECAUSE OF THE USE OF RANDOM # price volume # year # 2007 0.121002 30 # 2007 15.256424 70 # 2007 44.479590 50 # 2007 29.096013 0 # 2007 21.424690 0 # 2008 23.019548 40 # 2008 90.011295 0 # 2008 88.487664 30 # 2008 51.609119 70 # 2008 4.265726 80 # 2009 34.402065 140 # 2009 10.259064 100 # 2009 47.024574 110 # 2009 57.614977 140 # 2009 54.718016 50 </code></pre> <p>I want to produce a data frame that looks like:</p> <pre><code>year 2007 2008 2009 0 0.121002 23.019548 34.402065 1 15.256424 90.011295 10.259064 2 44.479590 88.487664 47.024574 3 29.096013 51.609119 57.614977 4 21.424690 4.265726 54.718016 </code></pre> <p>I know of one way to produce the output above using groupby:</p> <pre><code>df = df.reset_index() grouper = df.groupby('year') df2 = None for group, data in grouper: series = data['price'].copy() series.index = range(len(series)) series.name = group df2 = pandas.DataFrame(series) if df2 is None else pandas.concat([df2, series], axis=1) </code></pre> <p>And I also know that you can do pivot to get a DataFrame that has NaNs for the missing indices on the pivot:</p> <pre><code># df = df.reset_index() df.pivot(columns='year', values='price') # Output # year 2007 2008 2009 # 0 0.121002 NaN NaN # 1 15.256424 NaN NaN # 2 44.479590 NaN NaN # 3 29.096013 NaN NaN # 4 21.424690 NaN NaN # 5 NaN 23.019548 NaN # 6 NaN 90.011295 NaN # 7 NaN 88.487664 NaN # 8 NaN 51.609119 NaN # 9 NaN 4.265726 NaN # 10 NaN NaN 34.402065 # 11 NaN NaN 10.259064 # 12 NaN NaN 47.024574 # 13 NaN NaN 57.614977 # 14 NaN NaN 54.718016 </code></pre> <p>My question is the following:</p> <p>Is there a way that I can create my output DataFrame in the groupby without creating the series, or is there a way I can re-index my input DataFrame so that I get the desired output using pivot?</p>
<p>You need to label each year 0-4. To do this, use the <code>cumcount</code> after grouping. Then you can pivot correctly using that new column as the index.</p> <pre><code>df['year_count'] = df.groupby(level='year').cumcount() df.reset_index().pivot(index='year_count', columns='year', values='price') year 2007 2008 2009 year_count 0 61.682275 32.729113 54.859700 1 44.231296 4.453897 45.325802 2 65.850231 82.023960 28.325119 3 29.098607 86.046499 71.329594 4 67.864723 43.499762 19.255214 </code></pre>
python|pandas
3
873
41,348,587
Pandas groupby and then select one row
<p>I have a pandas dataframe where I have to group by some columns. Most groups in the group by only have one row, but a few have more than one row. For each of these, I only want to keep the row with the earliest date. I've tried both the <code>agg</code> and <code>filter</code> functions, but they don't seem to do what I need.</p> <pre><code>def first(df): if len(df) &gt; 1: return df.ix[df['date'].idxmin()] else: return df df.groupby(['id', 'period', 'type']).agg(first) </code></pre>
<p>Sort by date and then just grab the first row.</p> <pre><code>df.sort_values('date').groupby(['id', 'period', 'type']).first() </code></pre>
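<p>If you would rather not sort the whole frame, another option is to keep the row with the earliest date per group via <code>idxmin</code> (a sketch, assuming the <code>date</code> column is already a datetime dtype):</p> <pre><code>df.loc[df.groupby(['id', 'period', 'type'])['date'].idxmin()] </code></pre>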
python|pandas
16
874
41,446,914
Unsupported operand type for unicode error in Python
<p>I have a pandas dataframe in the below format:</p> <pre><code> Timestamp Clientip 2015-07-22T02:40:06.499174Z 106.51.235.133 2015-07-22T02:40:06.632589Z 115.250.16.146 </code></pre> <p>To sessionize the above data, I grouped it based on clientip and then created a session number field.</p> <pre><code> dfgrouped = testdf.groupby(['clientip']) testdf['session_number'] = dfgrouped['timestamp'].apply(lambda s: (s - s.shift(1) &gt; pd.Timedelta("15 min")).fillna(0).cumsum(skipna=False)) </code></pre> <p>When I run the second command, I get the error "unsupported operand type(s) for -: 'unicode' and 'unicode'"</p> <p>Not sure what is going wrong here. Any help would be appreciated.</p>
<p>Try this:</p> <pre><code>In [162]: df Out[162]: Timestamp Clientip 0 2015-07-22T02:40:06.499174Z 106.51.235.133 1 2015-07-22T02:50:06.000000Z 106.51.235.133 2 2015-07-22T02:40:06.632589Z 115.250.16.146 3 2015-07-22T03:30:16.111111Z 115.250.16.146 In [163]: df.Timestamp = pd.to_datetime(df.Timestamp, errors='coerce') In [164]: df Out[164]: Timestamp Clientip 0 2015-07-22 02:40:06.499174 106.51.235.133 1 2015-07-22 02:50:06.000000 106.51.235.133 2 2015-07-22 02:40:06.632589 115.250.16.146 3 2015-07-22 03:30:16.111111 115.250.16.146 In [165]: df.groupby('Clientip')['Timestamp'].apply(lambda s: (s - s.shift(1) &gt; pd.Timedelta("15 min")).fillna(0).cumsum(skipna=False)) Out[165]: 0 0 1 0 2 0 3 1 Name: Timestamp, dtype: int32 </code></pre>
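<p>The root cause is that the timestamps were read in as strings (hence the 'unicode' operands), so there is nothing numeric to subtract. Applied to the frame in the question, the only extra step is the conversion (assuming the column is named <code>timestamp</code> as in the groupby code):</p> <pre><code>testdf['timestamp'] = pd.to_datetime(testdf['timestamp']) </code></pre>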
python|pandas
2
875
61,238,095
Pandas adding extra zeros to decimal value
<p>I'm importing a file into pandas dataframe but the dataframe is not retaining original values as is, instead its adding extra zero to some float columns. </p> <p>example original value in the file is 23.84 but when i import it into the dataframe it has a value of 23.8400</p> <p>How to fix this? or is there a way to import original values to the dataframe as is in the text file. </p>
<p>For anyone who encounters the same problem, I'm adding the solution I found. Pandas <code>read_csv</code> has a <code>dtype</code> parameter through which we can tell pandas to read columns as strings, so the data is read in as-is instead of being interpreted by pandas' own logic.</p> <pre><code>df1 = pd.read_csv('file_location', sep = ',', dtype = {'col1' : 'str', 'col2' : 'str'}) </code></pre> <p>I had too many columns, so I first created a dictionary with all the columns as keys and 'str' as their values and passed this dictionary to the dtype argument.</p>
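<p>If every column should be kept as text, passing a single type instead of a per-column dictionary is a shorter equivalent:</p> <pre><code>df1 = pd.read_csv('file_location', sep=',', dtype=str) </code></pre>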
python|pandas
1
876
68,862,658
Trying to find FP,FN,TP,TN but i'm having some errors
<p>I'm trying to find FP,FN,TP,TN values but it gives me this error:</p> <pre><code>AttributeError: 'function' object has no attribute 'sum' </code></pre> <p>Here is that part my code:</p> <pre><code>FP = confusion_matrix.sum(axis=0) - np.diag(confusion_matrix) &lt;-- Error in this line FN = confusion_matrix.sum(axis=1) - np.diag(confusion_matrix) TP = np.diag(confusion_matrix) TN = confusion_matrix.sum() - (FP + FN + TP) TPR = TP/(TP+FN) TNR = TN/(TN+FP) PPV = TP/(TP+FP) NPV = TN/(TN+FN) FPR = FP/(FP+TN) FNR = FN/(TP+FN) FDR = FP/(TP+FP) ACC = (TP+TN)/(TP+FP+FN+TN) </code></pre>
<p>You're getting this error because <code>confusion_matrix</code> is a function, and you're trying to call the <code>sum</code> function on it.</p> <p>If you're using <code>confusion_matrix</code> from <a href="https://scikit-learn.org/stable/modules/generated/sklearn.metrics.confusion_matrix.html" rel="nofollow noreferrer">scikit-learn</a>, in the simple binary case you can get FP, FN, TP, &amp; TN like this:</p> <pre><code>tn, fp, fn, tp = confusion_matrix([0, 1, 0, 1], [1, 1, 1, 0]).ravel() </code></pre> <p>Otherwise, you'll want to call it on your actual and predicted y before you calculate positives and negatives.</p> <pre><code>cm = confusion_matrix(y_true, y_pred) # compute FP, FN, TP, &amp; TN here on cm </code></pre>
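<p>For the multiclass case, the formulas from the question work unchanged once they are applied to the returned array rather than to the function itself. A minimal sketch, where <code>y_true</code> and <code>y_pred</code> are placeholder names for your label arrays:</p> <pre><code>import numpy as np
from sklearn.metrics import confusion_matrix

cm = confusion_matrix(y_true, y_pred)  # the array, not the function
FP = cm.sum(axis=0) - np.diag(cm)
FN = cm.sum(axis=1) - np.diag(cm)
TP = np.diag(cm)
TN = cm.sum() - (FP + FN + TP)
</code></pre>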
python|pandas|numpy|confusion-matrix
3
877
68,534,141
AutoEncoder Resulting In (61,61,3) instead of (64,64,3)
<p>I am trying to build a convolutional autoencoder. Here is my architecture.</p> <pre><code> def MainEncoder(): inp = Input(shape=(64,64,3)) x = Conv2D(256,2)(inp) x = MaxPool2D()(x) x = Conv2D(128,2)(x) x = Flatten()(x) encoded = Dense(100,activation=&quot;relu&quot;)(x) encoder= Model(inp,encoded) return encoder def Decoder(): enc = Input(shape=(100,)) y = Dense(128)(enc) y = Dense(768)(y) y= Reshape((16,16,3))(y) y= Conv2DTranspose(128,(1,1),(2,2),padding='same')(y) y= Conv2DTranspose(128,(1,1),(2,2),padding='same')(y) decoded1 = Conv2D(3,1,padding=&quot;same&quot;)(y) decoder = Model(enc,decoded1) return decoder encoder= MainEncoder() decoderA = Decoder() decoderB = Decoder() print(encoder.summary()) print(decoderA.summary()) print(decoderB.summary()) input() #decoder= Model(encoded_input,decoded1) #print(decoder.summary()) Inp = Input(shape=(64,64,3)) Inp2 = Input(shape=(64,64,3)) AutoEncoder1 = Model(Inp,decoderA(encoder(Inp))) AutoEncoder2 = Model(Inp2,decoderB(encoder(Inp2))) AutoEncoder1.summary() AutoEncoder2.summary() print(ot[0].shape) input() AutoEncoder1.compile(optimizer='adam',loss='mse') AutoEncoder2.compile(optimizer='adam',loss='mse') AutoEncoder1.fit(ot,ot,16,100) AutoEncoder2.fit(kt,kt,16,100) encoder.save(path+'encoder') decoderA.save(path+'obama') decoderB.save(path+'kimmel') </code></pre> <p>The outputs of all models and shapes of all images are 64,64,3 according to the summary. However whenever I try to add the accuracy metric or just test out the auto encoder it always results in and image of size 61,61,3. I don't really know how to fix this. Any help would be appreciated.</p> <p>Here is the test code</p> <pre><code> from numpy.core.shape_base import block import tensorflow as tf from tensorflow.keras.layers import * from tensorflow.keras.models import * import pickle import numpy as np import matplotlib.pyplot as plt path = 'youtube_stuff2/' ot = pickle.load(open(path+'oi.pickle','rb')) kt = pickle.load(open(path+'ki.pickle','rb')) ot = ot/255.0 kt = kt/255.0 encoder = load_model(path+'encoder') obama = load_model(path+&quot;obama&quot;) kimmel = load_model(path+&quot;kimmel&quot;) print(ot[0].shape) ott = np.array([ot[0]]) print(ott.shape) thing = encoder.predict(ott) image = obama.predict(thing) print(image.shape) #plt.imshow(ott[0]) plt.imshow(image[0]) plt.show() </code></pre> <p>The variable image has shape (61,61,3)</p>
<p>When using Convolution, you need to be aware that pixels on the edge of the image will not be kept.</p> <p>If you want them to be of similar shapes, you can add the keyword &quot;padding&quot; and set its value to &quot;same&quot; when defining your Conv2D.</p> <p>Here's what it's probably going to look like :</p> <pre class="lang-py prettyprint-override"><code>def MainEncoder(): inp = Input(shape=(64,64,3)) x = Conv2D(256,2, padding=&quot;same&quot;)(inp) x = MaxPool2D()(x) x = Conv2D(128,2, padding=&quot;same&quot;)(x) x = Flatten()(x) encoded = Dense(100,activation=&quot;relu&quot;)(x) encoder= Model(inp,encoded) return encoder </code></pre> <p>This padding will create what is effectively a black border on the outside of your image when the convolution is going through.</p> <p>I hope I was useful</p>
python|tensorflow|autoencoder
0
878
36,599,346
make the values in a column being rownames
<p>I want to <strong>convert the dataframe</strong> </p> <p><a href="https://i.stack.imgur.com/8BjmB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8BjmB.png" alt="orginal dataframe"></a></p> <p><strong>into</strong> </p> <p><a href="https://i.stack.imgur.com/MAsfr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MAsfr.png" alt="new dataframe"></a></p> <p>The column 'col' (values are duplicated) in the original dataframe being the rownames of the new dataframe and 'index' column being the INDEX of the new dataframe, and 'data' column being the data of the new dataframe.</p> <p>It is like two-dimensional vlookup.</p> <p>Your patience and help will be greatly appreciated:) </p>
<pre><code>data.pivot('index', 'col', 'data') </code></pre>
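<p>Depending on your pandas version, <code>pivot</code> may require keyword arguments; the explicit form works everywhere:</p> <pre><code>data.pivot(index='index', columns='col', values='data') </code></pre>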
python|pandas|dataframe
1
879
36,411,276
Python Pandas dataframe: Collect values of a column
<p>I have the following data frame:</p> <pre><code> var_1 var_2 item_list 0 0 1 [beer, apple, pear, rice] 1 0 1 [egg, banana, oil, pear] 2 0 1 [beer, noodle] 3 1 0 [tomato, milk] 4 1 0 [apple] </code></pre> <p>Is it possible to collect all items in the item_list using data-frame apply function? The output should be something like <code>[beer, apple, pear, rice, egg, banana, oil, pear, ...]</code> without duplicates in the list. </p> <p>Or I have to iterate cell by cell to collect all values in to one list? </p>
<p>I think you can <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.apply.html" rel="nofollow"><code>apply</code></a> <code>Series</code>, <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.stack.html" rel="nofollow"><code>stack</code></a> and convert <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.tolist.html" rel="nofollow"><code>tolist</code></a>:</p> <pre><code>print df['item_list'].apply(pd.Series).stack().tolist() ['beer', 'apple', 'pear', 'rice', 'egg', 'banana', 'oil', 'pear', 'beer', 'noodle', 'tomato', 'milk', 'apple'] </code></pre> <p>If you need remove duplicates use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.drop_duplicates.html" rel="nofollow"><code>drop_duplicates</code></a> or <code>set</code>:</p> <pre><code>print df['item_list'].apply(pd.Series).stack().drop_duplicates().tolist() ['beer', 'apple', 'pear', 'rice', 'egg', 'banana', 'oil', 'noodle', 'tomato', 'milk'] print list(set(df['item_list'].apply(pd.Series).stack().tolist())) ['tomato', 'oil', 'apple', 'pear', 'milk', 'beer', 'noodle', 'rice', 'egg', 'banana'] </code></pre> <p>EDIT:</p> <p>If you need remove duplicates in each row first:</p> <pre><code>print df['item_list'].apply(lambda x: pd.Series(list(set(x)))).stack().drop_duplicates().tolist() </code></pre>
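<p>On newer pandas versions (0.25 and later), <code>Series.explode</code> is a simpler alternative for flattening the lists before deduplicating:</p> <pre><code>df['item_list'].explode().drop_duplicates().tolist() </code></pre>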
python|pandas|dataframe
3
880
53,068,443
I am having trouble installing Tensorflow-gpu into my anaconda virtual environment
<p>Every time I try and re-install Cuda it fails and whenever I try and import Tensorflow with the current set up I have it will pip install but it will not import and will instead return:</p> <blockquote> <blockquote> <blockquote> <p>import tensorflow as tf Traceback (most recent call last): File "C:\Users\MBUSI\Anaconda3\pkgs\conda-4.4.10-py36_0\Library\bin\conda\envs\tensorflow2\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 58, in from tensorflow.python.pywrap_tensorflow_internal import * File "C:\Users\MBUSI\Anaconda3\pkgs\conda-4.4.10-py36_0\Library\bin\conda\envs\tensorflow2\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 28, in _pywrap_tensorflow_internal = swig_import_helper() File "C:\Users\MBUSI\Anaconda3\pkgs\conda-4.4.10-py36_0\Library\bin\conda\envs\tensorflow2\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description) File "C:\Users\MBUSI\Anaconda3\pkgs\conda-4.4.10-py36_0\Library\bin\conda\envs\tensorflow2\lib\imp.py", line 243, in load_module return load_dynamic(name, filename, file) File "C:\Users\MBUSI\Anaconda3\pkgs\conda-4.4.10-py36_0\Library\bin\conda\envs\tensorflow2\lib\imp.py", line 343, in load_dynamic return _load(spec) ImportError: DLL load failed: The specified module could not be found.</p> </blockquote> </blockquote> </blockquote> <p>During handling of the above exception, another exception occurred:</p> <p>Traceback (most recent call last): File "", line 1, in File "C:\Users\MBUSI\Anaconda3\pkgs\conda-4.4.10-py36_0\Library\bin\conda\envs\tensorflow2\lib\site-packages\tensorflow__init__.py", line 22, in from tensorflow.python import pywrap_tensorflow # pylint: disable=unused-import File "C:\Users\MBUSI\Anaconda3\pkgs\conda-4.4.10-py36_0\Library\bin\conda\envs\tensorflow2\lib\site-packages\tensorflow\python__init__.py", line 49, in from tensorflow.python import pywrap_tensorflow File "C:\Users\MBUSI\Anaconda3\pkgs\conda-4.4.10-py36_0\Library\bin\conda\envs\tensorflow2\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 74, in raise ImportError(msg) ImportError: Traceback (most recent call last): File "C:\Users\MBUSI\Anaconda3\pkgs\conda-4.4.10-py36_0\Library\bin\conda\envs\tensorflow2\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 58, in from tensorflow.python.pywrap_tensorflow_internal import * File "C:\Users\MBUSI\Anaconda3\pkgs\conda-4.4.10-py36_0\Library\bin\conda\envs\tensorflow2\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 28, in _pywrap_tensorflow_internal = swig_import_helper() File "C:\Users\MBUSI\Anaconda3\pkgs\conda-4.4.10-py36_0\Library\bin\conda\envs\tensorflow2\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description) File "C:\Users\MBUSI\Anaconda3\pkgs\conda-4.4.10-py36_0\Library\bin\conda\envs\tensorflow2\lib\imp.py", line 243, in load_module return load_dynamic(name, filename, file) File "C:\Users\MBUSI\Anaconda3\pkgs\conda-4.4.10-py36_0\Library\bin\conda\envs\tensorflow2\lib\imp.py", line 343, in load_dynamic return _load(spec) ImportError: DLL load failed: The specified module could not be found.</p> <p>Failed to load the native TensorFlow runtime.</p> <p>See <a href="https://www.tensorflow.org/install/install_sources#common_installation_problems" rel="nofollow 
noreferrer">https://www.tensorflow.org/install/install_sources#common_installation_problems</a></p>
<p>This is basically a DLL load error. Answers to it have already been given in these past posts: <a href="https://stackoverflow.com/questions/20201868/importerror-dll-load-failed-the-specified-module-could-not-be-found">DLL failed-1</a>, <a href="https://github.com/pytorch/pytorch/issues/9263" rel="nofollow noreferrer">DLL load failed during Pytorch</a>, <a href="https://github.com/tensorflow/tensorflow/issues/10033" rel="nofollow noreferrer">DLL failure 3</a></p>
python|tensorflow|anaconda|conda
0
881
65,744,701
Python Pandas Groupby to count unique records in a single column
<p>I have a df having a single column containing rows of repeating data. I want to display a pivot table of unique values of that column along with their count. I know it would be some sort of groupby however I could not get it to work, please help.</p> <p><a href="https://i.stack.imgur.com/FYAH7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FYAH7.png" alt="My df" /></a>. <a href="https://i.stack.imgur.com/IE3ZA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IE3ZA.png" alt="Desired output" /></a></p>
<p>Try:</p> <pre><code>df.groupby(&quot;PdDistrict&quot;).size() </code></pre>
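<p>Equivalently, <code>value_counts</code> on the single column gives the same counts, sorted in descending order (using the column name from the screenshot):</p> <pre><code>df['PdDistrict'].value_counts() </code></pre>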
python|pandas|matplotlib
1
882
65,817,029
How to expand series into dataframe
<p>Suppose we have a dataframe like:</p> <pre class="lang-py prettyprint-override"><code>data = pd.DataFrame({'num': [1,2,3], 'tags': [['toto','tata','titi'], ['one','two','three'], ['he','she','us']]}) </code></pre> <p>data</p> <pre><code> num tags 0 1 [toto, tata, titi] 1 2 [one, two, three] 2 3 [he, she, us] </code></pre> <p>I don't understand why data.tags.apply(pd.Series) can expand data.tags into its own dataframe</p> <pre><code>data.tags.apply(pd.Series) 0 1 2 0 toto tata titi 1 one two three 2 he she us </code></pre> <p>and DataFrame can not!</p> <pre><code>data.tags.apply(pd.DataFrame) 0 0 0 toto 1 tata 2 titi 1 0 0 one 1 two 2 three 2 0 0 he 1 she 2 us Name: tags, dtype: object </code></pre> <p>How it's work?</p>
<p>You can think of <code>apply()</code> as going rowwise through column <code>tags</code> in this case. So applying <code>pd.Series</code> is going to make each list a &quot;horizontal&quot; series. Then since you have a &quot;vertical&quot; column of these &quot;horizontal&quot; series you are left with a single dataframe. Conversely, applying <code>pd.DataFrame</code> will make each list a &quot;vertical&quot; dataframe so a &quot;vertical&quot; column of &quot;vertical&quot; dataframes will give you a series of dataframes.</p> <p>Hopefully thinking of it in terms of &quot;horizontal&quot; and &quot;vertical&quot; helps a bit with the intuition of what is happening.</p>
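<p>A quick way to see the difference is to call each constructor on a single list from the column (a small illustration, not code from the question):</p> <pre><code>pd.Series(['toto', 'tata', 'titi'])     # a &quot;horizontal&quot; row: index 0, 1, 2 become the result's columns
pd.DataFrame(['toto', 'tata', 'titi'])  # a &quot;vertical&quot; 3x1 frame, so apply() returns a Series of DataFrames
</code></pre>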
python|pandas|dataframe|series|expand
0
883
65,489,827
How to make a similar custom generator in Keras for a CNN that takes multiple images as inputs?
<p>I am working on DR detection using CNNs on Google Colab. The CNN that I have designed has <strong>3 inputs for 3 different grayscale images</strong> of each eye (one original, one with extracted blood vessels, and one with extracted exudates). The code for the CNN is as follows:</p> <pre><code>#Custom CNN Model from keras.layers import Input, Conv2D, MaxPooling2D, Dropout, BatchNormalization, concatenate from keras.models import Sequential, Model def cnn_model(): input_1 = Input(shape=(224,224,1)) input_2 = Input(shape=(224,224,1)) input_3 = Input(shape=(224,224,1)) b1 = Conv2D(32, (3, 3), activation='relu')(input_1) b1 = Conv2D(16, (3, 1), activation='relu')(b1) b1 = Conv2D(8, (1, 3), activation='relu')(b1) b1 = BatchNormalization()(b1) b1 = MaxPooling2D(pool_size=(2, 2))(b1) b1 = Dropout(0.25)(b1) b1 = Flatten()(b1) b2 = Conv2D(8, (3, 1), activation='relu')(input_2) b2 = Conv2D(4, (1, 3), activation='relu')(b2) b2 = BatchNormalization()(b2) b2 = MaxPooling2D(pool_size=(2, 2), strides=(2, 2))(b2) b2 = Dropout(0.25)(b2) b2 = Flatten()(b2) b3 = Conv2D(8, (3, 1), activation='relu')(input_3) b3 = Conv2D(4, (1, 3), activation='relu')(b3) b3 = BatchNormalization()(b3) b3 = MaxPooling2D(pool_size=(2, 2), strides=(2, 2))(b3) b3 = Dropout(0.25)(b3) b3 = Flatten()(b3) concatenated = concatenate([b1, b2, b3]) fc = Dense(units=64, activation='relu')(concatenated) fc = Dense(units=32, activation='relu')(fc) fc = Dense(units=16, activation='relu')(fc) op = Dense(units=5, activation='softmax')(fc) final = Model(inputs=[input_1, input_2, input_3], outputs=[op]) final.compile( optimizer = 'adam', loss = 'categorical_crossentropy', metrics = ['accuracy']) #define optimizer and loss functions as well as required metrics final.summary() return final </code></pre> <p>The block-diagram of the proposed architecture (with 3 branches) is shown below:</p> <p><a href="https://i.stack.imgur.com/xFQui.png" rel="nofollow noreferrer">proposed-cnn-architecture</a></p> <p>Since using the default <em>ImageDataGenerator</em> provided by Keras causes Colab to crash, probably because of very large number of images (~35,000 images of high resolution), I have written my own Custom Data Generator that uses the <strong>numpy arrays of only the filenames of the images</strong> in one batch at a time, rather than the actual images themselves, and thus preventing the crashing of Colab (that's what I believe), the code for which is as follows:</p> <pre><code>class Custom_Generator(keras.utils.Sequence) : #constructor for initializing class instance def __init__(self, image_filenames, labels, batch_size) : self.image_filenames = image_filenames self.labels = labels self.batch_size = batch_size #num of batches def __len__(self) : return (np.ceil(len(self.image_filenames) / float(self.batch_size))).astype(np.int) #function that delivers the batches to the model def __getitem__(self, idx) : #creating each batch for image filenames as well as labels on the basis of its index batch_x = self.image_filenames[idx * self.batch_size : (idx+1) * self.batch_size] batch_y = self.labels[idx * self.batch_size : (idx+1) * self.batch_size] img_list = [] for file_name in batch_x: orig_img = cv2.imread(&quot;../resized_train_cropped/&quot; + str(file_name)) blood_ves, exu = preprocess_image(orig_img) #resizing all images to 224*224 blood_ves = cv2.resize(blood_ves, (224, 224), interpolation = cv2.INTER_AREA) exu = cv2.resize(exu, (224, 224), interpolation = cv2.INTER_AREA) orig_img = cv2.resize(orig_img, (224, 224), interpolation = cv2.INTER_AREA) 
orig_img = cv2.cvtColor(orig_img, cv2.COLOR_BGR2GRAY) #normalizing all pixel values blood_ves = blood_ves / 255.0 exu = exu / 255.0 orig_img = orig_img / 255.0 #temporary list to hold all 3 images temp_list = [] temp_list.append(orig_img) temp_list.append(blood_ves) temp_list.append(exu) #converting temp_list to a numpy array temp_list = np.array(temp_list) img_list.append(temp_list) #returning the resized images of a batch and their respective labels as numpy arrays return np.array(img_list), np.array(batch_y) </code></pre> <p>The code for implementing the model:</p> <pre><code>model = cnn_model() #Preparing for saving the model at each checkpoint checkpoint_path = &quot;cp_cust.ckpt&quot; #save model after every 10 batches cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_path, verbose=1, save_weights_only=True, save_freq=10) train_len = X_data_bal.shape[0] with tf.device('/device:GPU:0'): history = model.fit_generator(generator=training_batch_generator, steps_per_epoch = int(train_len // batch_size), epochs = 10, verbose = 1, callbacks=[cp_callback]) </code></pre> <p>The same data generator had earlier worked for single input images, but now throws the following list of errors:</p> <pre><code>/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:1844: UserWarning: `Model.fit_generator` is deprecated and will be removed in a future version. Please use `Model.fit`, which supports generators. warnings.warn('`Model.fit_generator` is deprecated and ' --------------------------------------------------------------------------- ValueError Traceback (most recent call last) &lt;ipython-input-40-caf54c85826b&gt; in &lt;module&gt;() 4 epochs = 10, 5 verbose = 1, ----&gt; 6 callbacks=[cp_callback]) 18 frames /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py in fit_generator(self, generator, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, validation_freq, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch) 1859 use_multiprocessing=use_multiprocessing, 1860 shuffle=shuffle, -&gt; 1861 initial_epoch=initial_epoch) 1862 1863 def evaluate_generator(self, /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing) 1062 use_multiprocessing=use_multiprocessing, 1063 model=self, -&gt; 1064 steps_per_execution=self._steps_per_execution) 1065 1066 # Container that configures and calls `tf.keras.Callback`s. 
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/data_adapter.py in __init__(self, x, y, sample_weight, batch_size, steps_per_epoch, initial_epoch, epochs, shuffle, class_weight, max_queue_size, workers, use_multiprocessing, model, steps_per_execution) 1110 use_multiprocessing=use_multiprocessing, 1111 distribution_strategy=ds_context.get_strategy(), -&gt; 1112 model=model) 1113 1114 strategy = ds_context.get_strategy() /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/data_adapter.py in __init__(self, x, y, sample_weights, shuffle, workers, use_multiprocessing, max_queue_size, model, **kwargs) 907 max_queue_size=max_queue_size, 908 model=model, --&gt; 909 **kwargs) 910 911 @staticmethod /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/data_adapter.py in __init__(self, x, y, sample_weights, workers, use_multiprocessing, max_queue_size, model, **kwargs) 779 peek, x = self._peek_and_restore(x) 780 peek = self._standardize_batch(peek) --&gt; 781 peek = _process_tensorlike(peek) 782 783 # Need to build the Model on concrete input shapes. /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/data_adapter.py in _process_tensorlike(inputs) 1014 return x 1015 -&gt; 1016 inputs = nest.map_structure(_convert_numpy_and_scipy, inputs) 1017 return nest.list_to_tuple(inputs) 1018 /usr/local/lib/python3.6/dist-packages/tensorflow/python/util/nest.py in map_structure(func, *structure, **kwargs) 657 658 return pack_sequence_as( --&gt; 659 structure[0], [func(*x) for x in entries], 660 expand_composites=expand_composites) 661 /usr/local/lib/python3.6/dist-packages/tensorflow/python/util/nest.py in &lt;listcomp&gt;(.0) 657 658 return pack_sequence_as( --&gt; 659 structure[0], [func(*x) for x in entries], 660 expand_composites=expand_composites) 661 /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/data_adapter.py in _convert_numpy_and_scipy(x) 1009 if issubclass(x.dtype.type, np.floating): 1010 dtype = backend.floatx() -&gt; 1011 return ops.convert_to_tensor_v2_with_dispatch(x, dtype=dtype) 1012 elif scipy_sparse and scipy_sparse.issparse(x): 1013 return _scipy_sparse_to_sparse_tensor(x) /usr/local/lib/python3.6/dist-packages/tensorflow/python/util/dispatch.py in wrapper(*args, **kwargs) 199 &quot;&quot;&quot;Call target, and fall back on dispatchers if there is a TypeError.&quot;&quot;&quot; 200 try: --&gt; 201 return target(*args, **kwargs) 202 except (TypeError, ValueError): 203 # Note: convert_to_eager_tensor currently raises a ValueError, not a /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py in convert_to_tensor_v2_with_dispatch(value, dtype, dtype_hint, name) 1403 1404 return convert_to_tensor_v2( -&gt; 1405 value, dtype=dtype, dtype_hint=dtype_hint, name=name) 1406 1407 /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py in convert_to_tensor_v2(value, dtype, dtype_hint, name) 1413 name=name, 1414 preferred_dtype=dtype_hint, -&gt; 1415 as_ref=False) 1416 1417 /usr/local/lib/python3.6/dist-packages/tensorflow/python/profiler/trace.py in wrapped(*args, **kwargs) 161 with Trace(trace_name, **trace_kwargs): 162 return func(*args, **kwargs) --&gt; 163 return func(*args, **kwargs) 164 165 return wrapped /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py in convert_to_tensor(value, dtype, name, as_ref, preferred_dtype, dtype_hint, ctx, accepted_result_types) 1538 1539 if ret is None: -&gt; 1540 ret = conversion_func(value, dtype=dtype, 
name=name, as_ref=as_ref) 1541 1542 if ret is NotImplemented: /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/tensor_conversion_registry.py in _default_conversion_function(***failed resolving arguments***) 50 def _default_conversion_function(value, dtype, name, as_ref): 51 del as_ref # Unused. ---&gt; 52 return constant_op.constant(value, dtype, name=name) 53 54 /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/constant_op.py in constant(value, dtype, shape, name) 263 264 return _constant_impl(value, dtype, shape, name, verify_shape=False, --&gt; 265 allow_broadcast=True) 266 267 /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/constant_op.py in _constant_impl(value, dtype, shape, name, verify_shape, allow_broadcast) 274 with trace.Trace(&quot;tf.constant&quot;): 275 return _constant_eager_impl(ctx, value, dtype, shape, verify_shape) --&gt; 276 return _constant_eager_impl(ctx, value, dtype, shape, verify_shape) 277 278 g = ops.get_default_graph() /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/constant_op.py in _constant_eager_impl(ctx, value, dtype, shape, verify_shape) 299 def _constant_eager_impl(ctx, value, dtype, shape, verify_shape): 300 &quot;&quot;&quot;Implementation of eager constant.&quot;&quot;&quot; --&gt; 301 t = convert_to_eager_tensor(value, ctx, dtype) 302 if shape is None: 303 return t /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/constant_op.py in convert_to_eager_tensor(value, ctx, dtype) 96 dtype = dtypes.as_dtype(dtype).as_datatype_enum 97 ctx.ensure_initialized() ---&gt; 98 return ops.EagerTensor(value, ctx.device_name, dtype) 99 100 ValueError: Failed to convert a NumPy array to a Tensor (Unsupported object type numpy.ndarray). </code></pre> <p>Any kind of help would be highly appreciated. Thanks a lot!</p> <p>Edit 1:</p> <p>I had used ImageDataGenerator for batches of only a single input image (batch size of 256) as given below. I had used <em>Flow</em> method on numpy arrays of resized images (224, 224, 3) to instantiate the generators. However, it always gave an error of &quot;Session Crashed&quot; or &quot;Google Drive Error&quot; after 5-10 mins. 
So, I used the Custom Generator with same batch size and it worked:</p> <pre><code>#loading the numpy arrays of images and labels X_data = np.load(r'Data\X_data.npy') y_data = np.load(r'Data\y_data.npy') #creating training, cross-validation, and testing sets X_train, X_new, y_train, y_new = ms.train_test_split(X_data, y_data, test_size = 0.3, random_state = 0) X_crossval, X_test, y_crossval, y_test = ms.train_test_split(X_new, y_new, test_size = 0.5, random_state = 0) #training and testing data generators (No Augmentation initially) train_datagen = ImageDataGenerator(rescale = 1./255) test_datagen = ImageDataGenerator(rescale=1./255) train_generator = train_datagen.flow(X_train, y_train, batch_size = 256) val_generator = test_datagen.flow(X_crossval, y_crossval, batch_size = 256) test_generator = test_datagen.flow(X_test, y_test, batch_size = 256) model = cnn_model() #training the model history = model.fit_generator(train_generator, steps_per_epoch = len(X_train) // 256, epochs = 10, validation_data = val_generator) </code></pre> <p>The CNN was based on Inception-v3, with only a single input image:</p> <pre><code>#inception v3 CNN model #importing required libraries from keras.models import Sequential from keras.models import Model from keras.callbacks import ModelCheckpoint, LearningRateScheduler, EarlyStopping, ReduceLROnPlateau, TensorBoard from keras import optimizers, losses, activations, models from keras.layers import Convolution2D, Dense, Input, Flatten, Dropout, MaxPooling2D, BatchNormalization, GlobalAveragePooling2D, Concatenate from keras import applications def cnn_model(): #initializing base model to build upon base_model = applications.InceptionV3(weights='imagenet', include_top=False, input_shape=(224, 224, 3)) #freezing the weights of base model base_model.trainable = False add_model = Sequential() add_model.add(base_model) #Adding layers on Base Model add_model.add(GlobalAveragePooling2D()) #average pooling to reduce dims until 1 add_model.add(Dropout(0.5)) #dropping nodes for regularization add_model.add(Dense(5, activation='softmax')) #5 output neurons model = add_model #compiling the new model model.compile(loss='categorical_crossentropy', optimizer=optimizers.SGD(lr=1e-4, momentum=0.9), metrics=['accuracy']) model.summary() return model </code></pre> <p>Basically, I had used a model based on Inception-v3, just to observe the benefits of transfer learning. However, now I am working on a custom model that can take 3 input images.</p>
<p>You say the ImageDataGenerator causes Colab to crash. Please show the code you used for this. Did you use .flow_from_directory? If so, what is the setting for the batch_size? If this is set to too large a value it may use too much memory, especially if you have 3 generators running. The fact that you have 35,000 images should not be a factor if you are retrieving the images in batches.</p>
python|tensorflow|keras|conv-neural-network|google-colaboratory
0
884
65,501,679
Simple calculation on table. Please help me to make my code more effective
<p>Please help me to make my code more effective. This is my df:</p> <pre><code>df = pd.DataFrame([['A', 80], ['A', 64], ['A', 55], ['B', 56], ['B', 89], ['B', 73], ['C', 78], ['C', 100], ['C', 150], ['C', 76], ['C', 87]], columns=['Well', 'GR']) </code></pre> <pre><code>Well GR A 80 A 64 A 55 B 56 B 89 B 73 C 78 C 100 C 150 C 76 C 87 </code></pre> <p>Please help me to find the Vshale. Vshale on each well = GR - GR(min) / GR(max) - GR(min). This is my desired result:</p> <pre><code> Well GR Vshale A 80 1 A 64 0.36 A 55 0 B 56 0 B 89 1 B 73 0.515151515 C 78 0.027027027 C 100 0.324324324 C 150 1 C 76 0 C 87 0.148648649 </code></pre> <p>This code is work for me, but, I should create a new column that consists of GRMax and GRMin and merge it into my previous df. I am looking for a more effective way without adding GRmin and GRmax on my original df. Thank you.</p> <pre><code>df1 = df.groupby(['Well']).agg({'GR': ['min', 'max']}).reset_index() df1.columns = list(map(''.join, df1.columns.values)) df2 = pd.merge (df, df1, on = 'Well', how = 'left') df2['Vshale'] = (df2['GR'] - df2['GRmin'])/(df2['GRmax'] - df2['GRmin']) </code></pre>
<p>A one-line solution with the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.transform.html" rel="nofollow noreferrer">transform</a> method:</p> <pre><code>df['Vshale'] = df.groupby('Well').transform(lambda x: (x - np.min(x))/(np.max(x) - np.min(x))) </code></pre>
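<p>An equivalent form that selects the <code>GR</code> column explicitly (useful if the frame later gains more numeric columns):</p> <pre><code>df['Vshale'] = df.groupby('Well')['GR'].transform(lambda x: (x - x.min()) / (x.max() - x.min())) </code></pre>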
python|pandas|pandas-groupby
1
885
21,093,729
Dot product of csr_matrix causes segmentation fault
<p>I have two (scipy) CSR sparse matrices:</p> <pre><code>A (12414693, 235470) B (235470, 48063) </code></pre> <p>Performing:</p> <pre><code>A.dot(B) </code></pre> <p>causes a segmentation fault.</p> <p>What am I doing wrong?</p> <p><strong>EDIT</strong></p> <p>I've submitted a bug to the scipy developer community: <a href="https://github.com/scipy/scipy/issues/3212" rel="nofollow">https://github.com/scipy/scipy/issues/3212</a></p>
<p>Your problem is very likely being caused by an overflow of an index stored in an <code>int32</code>, caused by the result of your dot product having more than 2^31 non-zero entries. Try the following...</p> <pre><code>&gt;&gt;&gt; import scipy.sparse &gt;&gt;&gt; c = np.empty_like(A.indptr) &gt;&gt;&gt; scipy.sparse.sparsetools.csr_matmat_pass1(A.shape[0], B.shape[1], A.indptr, A.indices, B.indptr, B.indices, c) &gt;&gt;&gt; np.all(np.diff(c) &gt;= 0) </code></pre> <p>With your data, The array <code>c</code> is a vector of <code>12414693 + 1</code> items, holding the accumulated number of non-zero entries per row in the product of your two matrices, i.e. it is what <code>C.indptr</code> will be if <code>C = A.dot(B)</code> finishes successfully. It is of type <code>np.int32</code>, even on 64 bit platforms, which is not good. If your sparse matrix is too large, there will be an overflow, that last line will return <code>False</code>, and the arrays to store the result of the matrix product will be instantiated with a wrong size (the last item of <code>c</code>, which is very likely to be a negative number if an overflow did happen). If that's the case, then yes, file a bug report...</p>
python|numpy|scipy|sparse-matrix
4
886
63,734,091
Python Dataframe select last n rows based on present value condition
<p>I have a dataframe. I want to select last n (=2) rows if present value is <code>True</code>.</p> <p>My code:</p> <pre><code>df = pd.DataFrame({'A':[10,20,30,40,50,60],'B':[False,False,True,False,True,False]}) A B 0 10 False 1 20 False 2 30 True # Here, I should select 30,20 3 40 False 4 50 True # Here, I should select 50,40 5 60 False cl_id = df.columns.tolist().index('B') ### cl_id for index number of the column for using in .iloc op = [df['A'].iloc[x+1-n:x+1,cl_id] if all(df['B'].iloc[x]) for x in np.arange(2,len(df))] </code></pre> <p>The code gave error saying <code>invalid syntax</code> I want to select last 2 values in column A if column B value is True My expected output:</p> <pre><code>opdf = A B 1 20 False 2 30 True # Here, I should select 30,20 3 40 False 4 50 True # Here, I should select 50,40 </code></pre>
<p>Let us try <code>limit</code> with <code>bfill</code></p> <pre><code>n = 2 df[df.B.where(df.B).bfill(limit=n-1)==1] Out[95]: A B 1 20 False 2 30 True 3 40 False 4 50 True </code></pre>
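<p>For the fixed <code>n = 2</code> case, an equivalent mask can be built with <code>shift</code> (the <code>fill_value</code> argument needs pandas 0.24+):</p> <pre><code>df[df.B | df.B.shift(-1, fill_value=False)] </code></pre>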
python|pandas|numpy
3
887
63,364,332
Reformat pivot table output for table
<p>I have a pivot table called <code>pivot</code> that I have created using:</p> <pre><code>pivot = MonthyData.pivot_table(index=['year'],columns=MonthyData['month'], values=['total_pos'], aggfunc='sum') pivot = pivot.rename(columns=lambda x: look_up.get(f'{str(x).zfill(2)}', x)) pivotdf = pivot.reset_index() pivotdf=pivotdf.fillna(0).astype(int) pivotdf=pivotdf.replace(0, '-') </code></pre> <p>and looks like:</p> <pre><code> year total_pos \ month Jan Feb Mar Apr May Jun Jul 0 2005 - - - - - - - 1 2006 176119 346592 158928 73999 -45773 115140 163590 2 2007 -96906 9942 -8859 161790 -62723 70319 45462 </code></pre> <p>I need to format the table for a report. How can I remove the <code>year</code> and <code>total_pos</code> and <code>month</code> index column so the data-frame looks like this?</p> <pre><code> Jan Feb Mar Apr May Jun Jul 2005 - - - - - - - 2006 176119 346592 158928 73999 -45773 115140 163590 2007 -96906 9942 -8859 161790 -62723 70319 45462 </code></pre>
<p>As @QuangHoang pointed out in the comments, it helps to provide the names of the columns when applying <code>.pivot_table()</code> to the DataFrame, so it can find the columns to pivot on. Once you have that, you almost have the table you need; you just need to remove the names of the index and columns:</p> <pre><code>MonthyData = pd.DataFrame({'year':np.random.choice([2005,2006,2007],100), 'month':np.random.choice(['Jan','Feb','Mar'],100), 'total_pos':np.random.uniform(0,1,100)}) MonthyData['month'] = pd.Categorical(MonthyData['month'],categories=['Jan','Feb','Mar']) pivotdf = MonthyData.pivot_table(index='year',columns='month', values='total_pos', aggfunc='sum') pivotdf.columns.name = None pivotdf.index.name=None pivotdf Jan Feb Mar 2005 7.039743 4.314391 3.827450 2006 6.679975 6.091983 3.706505 2007 8.847631 4.412842 3.931816 </code></pre>
python|pandas
0
888
21,483,959
How can I get 'USDJPY' (currency rates) with pandas and yahoo finance?
<p>I am learning and using the pandas and python.</p> <p>Today, I am trying to make a fx rate table, but I got a trouble with getting the pricess of 'USDJPY'.</p> <p>When I get a prices of 'EUR/USD', i code like this.</p> <pre><code>eur = web.DataReader('EURUSD=X','yahoo')['Adj Close'] </code></pre> <p>it works.</p> <p>But when I wrote </p> <pre><code>jpy = web.DataReader('USDJPY=X','yahoo')['Adj Close'] </code></pre> <p>the error message comes like this:</p> <blockquote> <p>--------------------------------------------------------------------------- IOError Traceback (most recent call last) in () ----> 1 jpy = web.DataReader('USDJPY=X','yahoo')['Adj Close']</p> <p>C:\Anaconda\lib\site-packages\pandas\io\data.pyc in DataReader(name, data_source, start, end, retry_count, pause) 70 return get_data_yahoo(symbols=name, start=start, end=end, 71 adjust_price=False, chunksize=25, ---> 72 retry_count=retry_count, pause=pause) 73 elif data_source == "google": 74 return get_data_google(symbols=name, start=start, end=end,</p> <p>C:\Anaconda\lib\site-packages\pandas\io\data.pyc in get_data_yahoo(symbols, start, end, retry_count, pause, adjust_price, ret_index, chunksize, name) 388 """ 389 return _get_data_from(symbols, start, end, retry_count, pause, --> 390 adjust_price, ret_index, chunksize, 'yahoo', name) 391 392 </p> <p>C:\Anaconda\lib\site-packages\pandas\io\data.pyc in _get_data_from(symbols, start, end, retry_count, pause, adjust_price, ret_index, chunksize, source, name) 334 # If a single symbol, (e.g., 'GOOG') 335 if isinstance(symbols, (basestring, int)): --> 336 hist_data = src_fn(symbols, start, end, retry_count, pause) 337 # Or multiple symbols, (e.g., ['GOOG', 'AAPL', 'MSFT']) 338 elif isinstance(symbols, DataFrame):</p> <p>C:\Anaconda\lib\site-packages\pandas\io\data.pyc in _get_hist_yahoo(sym, start, end, retry_count, pause) 188 '&amp;g=d' + 189 '&amp;ignore=.csv') --> 190 return _retry_read_url(url, retry_count, pause, 'Yahoo!') 191 192 </p> <p>C:\Anaconda\lib\site-packages\pandas\io\data.pyc in _retry_read_url(url, retry_count, pause, name) 167 168 raise IOError("after %d tries, %s did not " --> 169 "return a 200 for url %r" % (retry_count, name, url)) 170 171 </p> <p>IOError: after 3 tries, Yahoo! did not return a 200 for url '<a href="http://ichart.yahoo.com/table.csv?s=USDJPY=X&amp;a=0&amp;b=1&amp;c=2010&amp;d=1&amp;e=1&amp;f=2014&amp;g=d&amp;ignore=.csv" rel="noreferrer">http://ichart.yahoo.com/table.csv?s=USDJPY=X&amp;a=0&amp;b=1&amp;c=2010&amp;d=1&amp;e=1&amp;f=2014&amp;g=d&amp;ignore=.csv</a>'</p> </blockquote> <p>Other currencies like 'GBPUSD' also have same problem.</p> <p>Can you solve this problem?</p> <p>Do you have any idea of getting 'USDJPY' from yahoo or google???</p>
<p>Yahoo Finance doesn't provide historical data on exchange rates (i.e. there's no "Historical Prices" link in the top left of the page like there would be for stocks, indices, etc...)</p> <p>You can use FRED (Federal Reserve Bank of St. Louis data) to get these exchange rates...</p> <pre><code>import pandas.io.data as web jpy = web.DataReader('DEXJPUS', 'fred') </code></pre> <p>UPDATE: this has moved to the pandas-datareader package</p> <pre><code>from pandas_datareader import data jpy = data.DataReader('DEXJPUS', 'fred') </code></pre> <p>or the more direct way...</p> <pre><code>jpy = web.get_data_fred('DEXJPUS') </code></pre> <p>A list of all of the exchange rates that FRED has daily data for can be found here: <a href="http://research.stlouisfed.org/fred2/categories/94" rel="nofollow noreferrer">http://research.stlouisfed.org/fred2/categories/94</a></p>
python|ios|pandas|currency|yahoo-finance
15
889
24,680,318
Bokeh datetime axis should show all yyyy, mm, dd, hh, mm, ss
<p>I create my datetimeindex via</p> <pre><code>datetimes = pd.to_datetime(SeriesOfUnixtimeStamps, unit="s")
line(x=datetimes, y=y, x_axis_type="datetime", ...)
</code></pre> <p>Depending on how much I zoom in or out, the x-axis only shows, let's say, <code>:07:03</code> instead of <code>2014-06-12 12:07:03</code>. I want to show the whole date, not only numbers. It would be nice if one could also show it in multiple rows below the x-axis like</p> <pre><code>yyyy-mm-dd hh:mm:ss
</code></pre> <p>I thought I could apply a list of strings, but it does not work either, because it is <code>not in the ColumnDataSource</code>. If I zoom in deeper, the numbers get even less meaningful. Then it might say <code>03</code>, but 03 of what? At which minute, which hour? Is there a solution for this?</p>
<p>Unfortunately at the moment we don't have this capability, although I have opened an <a href="https://github.com/ContinuumIO/bokeh/issues/813" rel="nofollow noreferrer">issue</a> over on our GitHub page so you can track its progress.</p> <p>The fix may be relatively simple, but we're in the midst of a SciPy release for 0.5 at the moment. We will probably get this merged in for the point release within a few weeks.</p>
python|pandas|bokeh
1
890
29,876,541
re.match takes a long time to finish
<p>I am new to Python and have written the following code, which runs very slowly.</p> <p>I have debugged the code and found out it is the last <code>re.match()</code> that is causing the code to run very slowly. Even though the previous match does the same kind of match against the same DataFrame, it comes back quickly.</p> <p>Here is the code:</p> <pre><code>My_Cells = pd.read_csv('SomeFile',index_col = 'Gene/Cell Line(row)').T

My_Cells_Others = pd.DataFrame(index=My_Cells.index,columns=[col for col in My_Cells if re.match('.*\sCN$|.*\sMUT$|^bladder$|^blood$|^bone$|^breast$|^CNS$|^GI tract$|^kidney$|^lung$|^other$|^ovary$|^pancreas$|^skin$|^soft tissue$|^thyroid$|^upper aerodigestive$|^uterus$',col)])
My_Cells_Genes = pd.DataFrame(index=My_Cells.index,columns=[col for col in My_Cells if re.match('.*\sCN$|.*\sMUT$|^bladder$|^blood$|^bone$|^breast$|^CNS$|^GI tract$|^kidney$|^lung$|^other$|^ovary$|^pancreas$|^skin$|^soft tissue$|^thyroid$|^upper aerodigestive$|^uterus$',col) is None ])

for col in My_Cells.columns:
    if re.match('.*\sCN$|.*\sMUT$|^bladder$|^blood$|^bone$|^breast$|^CNS$|^GI tract$|^kidney$|^lung$|^other$|^ovary$|^pancreas$|^skin$|^soft tissue$|^thyroid$|^upper aerodigestive$|^uterus$',col):
        My_Cells_Others [col] = pd.DataFrame(My_Cells[col])
    if re.match('.*\sCN$|.*\sMUT$|^bladder$|^blood$|^bone$|^breast$|^CNS$|^GI tract$|^kidney$|^lung$|^other$|^ovary$|^pancreas$|^skin$|^soft tissue$|^thyroid$|^upper aerodigestive$|^uterus$',col) is None:
        My_Cells_Genes [col] = pd.DataFrame(My_Cells[col])
</code></pre> <p>I do not think the problem is related to regular expressions. The code below still runs slowly.</p> <pre><code>for col in My_Cells_Others.columns:
    if (col in lst) or col.endswith(' CN') or col.endswith(' MUT'):
        My_Cells_Others [col] = My_Cells[col]

for col in My_Cells_Genes.columns:
    if not ((col in lst) or col.endswith(' CN') or col.endswith(' MUT')):
        My_Cells_Genes [col] = My_Cells[col]
</code></pre>
<p>"Poorly" designed regular expressions can be unnecessarily slow.</p> <p>My guess is that <code>.*\sCN</code> and <code>.*\sMUT</code>, combined with a big string that does <strong>not</strong> match, make it that slow, since they force the regex engine to try all possible ways of matching before giving up.</p> <hr> <p>As @jedwards said, you can replace this piece of code</p> <pre><code>if re.match('.*\sCN$|.*\sMUT$|^bladder$|^blood$|^bone$|^breast$|^CNS$|^GI tract$|^kidney$|^lung$|^other$|^ovary$|^pancreas$|^skin$|^soft tissue$|^thyroid$|^upper aerodigestive$|^uterus$',col):
    My_Cells_Others [col] = pd.DataFrame(My_Cells[col])
</code></pre> <p>with: </p> <pre><code>lst = ['bladder', 'blood', 'bone', 'breast', 'CNS', 'GI tract', 'kidney', 'lung', 'other', 'ovary', 'pancreas', 'skin', 'soft tissue', 'thyroid', 'upper aerodigestive', 'uterus']

if (col in lst) or col.endswith(' CN') or col.endswith(' MUT'):
    # Do stuff
</code></pre> <hr> <p>Alternatively, if you want to use <code>re</code> for some reason, moving <code>.*\sCN</code> and <code>.*\sMUT</code> to the end of the regex <em>might</em> help, depending on your data, since the engine will not be forced to check those expensive alternatives unless really necessary.</p>
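<p>If the regex route is kept anyway, compiling the pattern once outside the loop also avoids re-parsing it on every call. A rough sketch, assuming <code>My_Cells</code> is the DataFrame from the question (the cheap exact-name alternatives come first and the <code>.*\s...</code> suffix checks last):</p> <pre><code>import re
import pandas as pd

# Compile once, outside the loop.
pattern = re.compile(
    r'^(bladder|blood|bone|breast|CNS|GI tract|kidney|lung|other|ovary|'
    r'pancreas|skin|soft tissue|thyroid|upper aerodigestive|uterus)$'
    r'|.*\sCN$|.*\sMUT$'
)

other_cols = [col for col in My_Cells.columns if pattern.match(col)]
gene_cols  = [col for col in My_Cells.columns if not pattern.match(col)]

# Plain column selection builds both frames in one shot, without a Python loop.
My_Cells_Others = My_Cells[other_cols].copy()
My_Cells_Genes  = My_Cells[gene_cols].copy()
</code></pre>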
python|regex|pandas
0
891
53,770,530
Color mapping of data on a date vs time plot
<p>I am trying to plot 3 variables x, y, z on a 2d plot, with x (date) on the x axis, y (time) on the y axis and z (temperature) mapped with a colorscale. I have the three variables available within a pandas DataFrame and created an extra column with the date number so that matplotlib can work with it.</p> <pre><code>import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.dates as mdates

data = pd.DataFrame()  # placeholder: in practice this already holds the Date, Time and Tgrad columns
data['datenum'] = mdates.date2num(data['Date'])
</code></pre> <p>Example: </p> <pre><code>        Date Time     Tgrad   datenum
0 2016-08-01   00 -0.841203  736177.0
1 2016-08-01   01 -0.629176  736177.0
2 2016-08-01   02 -0.623608  736177.0
3 2016-08-01   03 -0.615145  736177.0
4 2016-08-01   04 -0.726949  736177.0
5 2016-08-01   05 -0.788864  736177.0
6 2016-08-01   06 -0.794655  736177.0
7 2016-08-01   07 -0.775724  736177.0
8 2016-08-01   08 -0.677951  736177.0
</code></pre> <p>I have been trying to follow these suggestions: </p> <p><a href="https://stackoverflow.com/questions/39727040/matplotlib-2d-plot-from-x-y-z-values">matplotlib 2D plot from x,y,z values</a> <a href="https://stackoverflow.com/questions/23139595/dates-in-the-xaxis-for-a-matplotlib-plot-with-imshow">Dates in the xaxis for a matplotlib plot with imshow</a></p> <p>But I have not been successful, I think due to the wrong shape of my input data. I have tried something like this: </p> <pre><code>fig, ax = plt.subplots()
ax.imshow(data['Tgrad'], extent = [min(data['datenum']), max(data['datenum']), min(data['Time']), max(data['Time'])], cmap="autumn", aspect = "auto")
ax.xaxis_date()
</code></pre> <p>But I get a ValueError:</p> <pre><code>ValueError: setting an array element with a sequence
</code></pre> <p>Is it necessary to have the data as a numpy array or any other type? And how can I map the data once I have it in a different format?</p> <p>Thank you for helping. Vroni</p>
<p><code>imshow</code> requires a 2d array as input. You'll need to reshape your data into a 2d array: <code>Date</code> x <code>Time</code>, with <code>Tgrad</code> as the values. Pandas makes this fairly easy with <code>pivot</code>. It does require that you have nicely spaced data points, i.e., a grid-like data set (the same <code>Time</code> values for each <code>Date</code>). The posts you linked would be useful if your data points were scattered irregularly in 2d space rather than lying on a grid. Also, there's no need to convert to a numpy array, as matplotlib can handle dataframes.</p> <pre><code>C = data.pivot(index='Time', columns='Date', values='Tgrad')

fig, ax = plt.subplots()
ax.imshow(C)
</code></pre>
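<p>If readable date/time tick labels are wanted instead of bare integer positions, the pivot table's own index and columns can be reused. A rough sketch under the same assumptions as above (grid-like data, <code>C</code> from the <code>pivot</code> call); with many dates you would probably want to thin the ticks out:</p> <pre><code>import matplotlib.pyplot as plt

C = data.pivot(index='Time', columns='Date', values='Tgrad')

fig, ax = plt.subplots()
im = ax.imshow(C, cmap='autumn', aspect='auto')

# Label the axes with the pivot's own labels instead of integer positions.
ax.set_xticks(range(len(C.columns)))
ax.set_xticklabels([str(d) for d in C.columns], rotation=90)
ax.set_yticks(range(len(C.index)))
ax.set_yticklabels(C.index)

fig.colorbar(im, ax=ax, label='Tgrad')
plt.tight_layout()
plt.show()
</code></pre>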
python|pandas|imshow
2
892
53,668,128
Counting the number of rows that fulfill a condition and assigning to each row a number from 1 to nRows pandas
<p>So again, I have another question related to this: I'm processing a DataFrame, which looks like the following:</p> <p><a href="https://i.stack.imgur.com/vvObO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vvObO.png" alt="enter image description here"></a></p> <p>The thing is that now I want to add an additional column, called 'position', in which, according to the contributor_id and the number of edits, the number of the corresponding row appears. This time, I don't want the row count to start until the value in nEdits is greater than 0, and this number must be reset to 1 when the contributor_id changes:</p> <pre><code>    contributor_id   timestamp  nEdits  Position
0                8  2018-01-01       1         1
1                8  2018-02-01       1         2
2                8  2018-03-01       1         3
3                8  2018-04-01       1         4
4                8  2018-05-01       1         5
5                8  2018-06-01       1         6
6                8  2018-07-01       1         7
7                8  2018-08-01       1         8
8         26424341  2018-01-01       0         0
9         26424341  2018-02-01       0         0
10        26424341  2018-03-01      11         1
11        26424341  2018-04-01      34         2
12        26424341  2018-05-01      42         3
13        26424341  2018-06-01      46         4
14        26424341  2018-07-01      50         5
15        26424341  2018-08-01      54         6
16        26870381  2018-01-01     465         1
17        26870381  2018-02-01     566         2
18        26870381  2018-03-01     601         3
</code></pre> <p>The idea I got from some answers to compute the <code>position</code> column is to do: <code>df.groupby("contributor_id").position.cumsum()</code> <strong>But I don't know how to include the condition that nEdits must be greater than 0 in order to restart the count.</strong></p>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.cumcount.html" rel="nofollow noreferrer"><code>GroupBy.cumcount</code></a> by the <code>contributor_id</code> column together with a helper <code>Series</code> that distinguishes multiple runs of <code>0</code> within the same group:</p> <pre><code>m = df['nEdits'] == 0
df['Position1'] = np.where(m, 0, df.groupby([m.ne(m.shift()).cumsum(), 'contributor_id']).cumcount() + 1)
print (df)
    contributor_id   timestamp  nEdits  Position  Position1
0                8  2018-01-01       1         1          1
1                8  2018-02-01       1         2          2
2                8  2018-03-01       1         3          3
3                8  2018-04-01       1         4          4
4                8  2018-05-01       1         5          5
5                8  2018-06-01       1         6          6
6                8  2018-07-01       1         7          7
7                8  2018-08-01       1         8          8
8         26424341  2018-01-01       0         0          0
9         26424341  2018-02-01       0         0          0
10        26424341  2018-03-01      11         1          1
11        26424341  2018-04-01      34         2          2
12        26424341  2018-05-01       0         3          0 &lt;- added 0 for more general data
13        26424341  2018-06-01      46         4          1
14        26424341  2018-07-01      50         5          2
15        26424341  2018-08-01      54         6          3
16        26870381  2018-01-01     465         1          1
17        26870381  2018-02-01     566         2          2
18        26870381  2018-03-01     601         3          3
</code></pre> <p><strong>Detail</strong>:</p> <pre><code>print (m.ne(m.shift()).cumsum())
0     1
1     1
2     1
3     1
4     1
5     1
6     1
7     1
8     2
9     2
10    3
11    3
12    4
13    5
14    5
15    5
16    5
17    5
18    5
Name: nEdits, dtype: int32
</code></pre> <p>Check difference:</p> <pre><code>m = df['nEdits'] == 0
df['Position1'] = np.where(m, 0, df.groupby([m, 'contributor_id']).cumcount() + 1)
print (df)
    contributor_id   timestamp  nEdits  Position  Position1
0                8  2018-01-01       1         1          1
1                8  2018-02-01       1         2          2
2                8  2018-03-01       1         3          3
3                8  2018-04-01       1         4          4
4                8  2018-05-01       1         5          5
5                8  2018-06-01       1         6          6
6                8  2018-07-01       1         7          7
7                8  2018-08-01       1         8          8
8         26424341  2018-01-01       0         0          0
9         26424341  2018-02-01       0         0          0
10        26424341  2018-03-01      11         1          1
11        26424341  2018-04-01      34         2          2
12        26424341  2018-05-01       0         3          0
13        26424341  2018-06-01      46         4          3 &lt;-not new group
14        26424341  2018-07-01      50         5          4 &lt;-not new group
15        26424341  2018-08-01      54         6          5 &lt;-not new group
16        26870381  2018-01-01     465         1          1
17        26870381  2018-02-01     566         2          2
18        26870381  2018-03-01     601         3          3
</code></pre>
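<p>For anyone unfamiliar with the <code>m.ne(m.shift()).cumsum()</code> idiom used above, here is a tiny, self-contained illustration (with made-up numbers) of how it labels consecutive runs of equal values; this is the same trick that lets the counter restart after each block of zeros:</p> <pre><code>import pandas as pd

s = pd.Series([1, 1, 0, 0, 5, 6, 0, 3])
m = s == 0

# True wherever the boolean flag changes from the previous row; the cumulative
# sum then turns those change points into an increasing run id.
run_id = m.ne(m.shift()).cumsum()

print(pd.DataFrame({'s': s, 'is_zero': m, 'run_id': run_id}))
# run_id comes out as: 1 1 2 2 3 3 4 5
</code></pre>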
python|pandas|dataframe|pandas-groupby
0
893
53,671,498
Interpolate array to constant density
<p>I have been going in circles with this apparently simple issue for hours and I can't seem to find the answer.</p> <p>The setup is straightforward: given an array of floats, interpolate extra points so that the resulting interpolated data is distributed with a constant (or approximately constant) density.</p> <p>The standard interpolation works, but the density of the interpolated points is not constant at all (right plot):</p> <p><a href="https://i.stack.imgur.com/N0on9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/N0on9.png" alt="enter image description here"></a></p> <p>I must be missing something obvious here because I'm sure this issue is not that complicated, and I've been struggling for too long now.</p> <p>Any help is much appreciated.</p> <hr> <pre><code>import numpy as np import matplotlib.pyplot as plt data = np.array([13.826, 13.608, 13.163, 13.034, 12.672, 12.126, 11.585, 11.192, 10.609, 10.082, 9.67 , 9.261, 9.175, 8.869, 8.408, 7.868, 7.317, 6.827, 6.52 , 6.375, 5.968, 5.601, 5.271, 5.242, 4.961, 4.888, 4.661, 4.395, 4.376, 4.286, 4.105, 4.019, 3.845, 3.785, 3.601, 3.371, 3.226, 3.156, 2.984, 2.96 , 2.931, 2.786, 2.757, 2.62 , 2.554, 2.473, 2.464, 2.451, 2.309, 2.196, 2.15 , 2.061, 1.987, 1.907, 1.825, 1.803, 1.721, 1.62 , 1.595, 1.57 , 1.462, 1.346, 1.334, 1.208, 1.09 , 1.033, 0.94 , 0.874, 0.852, 0.857, 0.872, 0.884, 0.889, 0.888, 0.9 , 0.856, 0.756, 0.652, 0.567, 0.495, 0.432, 0.378, 0.331, 0.293, 0.264, 0.244, 0.232, 0.228, 0.228, 0.231, 0.239, 0.248, 0.261, 0.278, 0.308, 0.357, 0.417, 0.495, 0.575, 0.59 , 0.544, 0.465, 0.355, 0.246, 0.138, 0.032, -0.032, -0.075, -0.139, -0.179, -0.28 , -0.38 , -0.471, -0.565, -0.671, -0.772, -0.872, -0.974, -1.069, -1.164, -1.257, -1.169, -1.131, -1.084, -1.016, -0.936, -0.846, -0.748, -0.647, -0.546, -0.444, -0.348, -0.274, -0.159, -0.05 , 0.091, 0.145, 0.236, 0.318, 0.239, 0.105, -0.036, -0.168, -0.303, -0.304, -0.429, -0.571, -0.685, -0.704, -0.806, -0.849, -0.865, -0.835, -0.823, -0.892, -0.928, -0.978, -1.077, -1.156, -1.065, -1.153, -1.244, -1.332, -1.426, -1.523, -1.623, -1.722, -1.819, -1.918, -2.03 , -2.135, -2.233, -2.33 , -2.423, -2.516, -2.609, -2.7 , -2.791, -2.88 , -2.948, -3.913]) # Number of points to interpolate N = 1000 t = np.linspace(0, 1, N) xp = np.linspace(0, 1, data.size) # Interpolated data d_interp = np.interp(t, xp, data) plt.subplot(121) plt.scatter(t, d_interp, label='Interpolated points') plt.scatter(xp, data, s=4, label='Original data') plt.legend() plt.subplot(122) plt.hist(d_interp, 25, label='Density of interpolated data') plt.legend() plt.show() </code></pre>
<p>You want to sample y points instead of x points. Sampling y points is easy:</p> <pre><code>d_interp = np.linspace(data.min(), data.max(), N)
</code></pre> <p>Now you want to interpolate (y, x) instead of (x, y). You could try this, but it won't work:</p> <pre><code>t = np.interp(d_interp, data, xp)
</code></pre> <p>The problem comes when interpolating the corresponding x points: <code>np.interp</code> expects your <code>x</code> to be monotonically increasing. In fact, by choosing a random <code>y</code> value in your chart (say <code>y=0</code>), you have 2 or 3 corresponding x points. Therefore, for each <code>y</code>, you don't know which <code>x</code> to choose.</p> <p>The solution I suggest is, given your function is mostly decreasing, to filter your sample set by using a mask:</p> <pre><code>mask = np.append(True, np.diff(data) &lt; 0) # first point, plus all monotonically decreasing y points
d_interp = np.linspace(data.min(), data.max(), N)
t = np.interp(d_interp, data[mask][::-1], xp[mask][::-1])
</code></pre> <p><a href="https://i.stack.imgur.com/LX7KE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LX7KE.png" alt="output"></a></p>
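<p>To visualise the result in the same way as the plots in the question, something like the following could be appended; it simply mirrors the question's own plotting code, reusing the names from the snippet above (and since <code>d_interp</code> is built with <code>linspace</code>, its histogram is flat by construction):</p> <pre><code>import matplotlib.pyplot as plt

plt.subplot(121)
plt.scatter(t, d_interp, label='Resampled points')
plt.scatter(xp, data, s=4, label='Original data')
plt.legend()

plt.subplot(122)
plt.hist(d_interp, 25, label='Density of resampled data')
plt.legend()
plt.show()
</code></pre>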
python|numpy|interpolation
0
894
19,963,253
using math function on an arange
<p>I have a function that I want to apply to an arange: </p> <pre><code>import math
from numpy import arange

x = arange(7.0,39.0,0.0001)
fx = math.exp(-2.0 / (-14.4 + 19.33 * x - 0.057 * pow(x,2)))
</code></pre> <p>The resulting error is as follows:</p> <pre><code>TypeError: only length-1 arrays can be converted to Python scalars
</code></pre> <p>I am using Python 2.7.</p> <p>This pythonic approach seems like it should work, but it does not. What do I need to do to make <code>fx</code> contain the corresponding f(x) values, according to the equation? </p> <p>Thanks.</p>
<p>Use Numpy's <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.exp.html" rel="nofollow"><code>exp</code></a> instead of <code>math</code>'s:</p> <pre><code>&gt;&gt;&gt; from numpy import arange, exp
&gt;&gt;&gt; x = arange(7.0,39.0,0.0001)
&gt;&gt;&gt; fx = exp(-2.0 / (-14.4 + 19.33 * x - 0.057 * pow(x,2)))
&gt;&gt;&gt; fx
array([ 0.98321018,  0.98321044,  0.98321071, ...,  0.99694082,
        0.99694082,  0.99694083])
</code></pre> <p>Numpy's version plays nice with Numpy ndarrays such as <code>x</code>. It also has Numpy's performance benefits, which in this case are an order of magnitude compared to the <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.vectorize.html" rel="nofollow"><code>vectorize</code></a>d <code>math.exp</code> solution:</p> <pre><code># built-in Numpy function
In [5]: timeit exp(-2.0 / (-14.4 + 19.33 * x - 0.057 * pow(x,2)))
100 loops, best of 3: 10.1 ms per loop

# vectorized math.exp function
In [6]: fx = np.vectorize(lambda y: math.exp(-2.0 / (-14.4 + 19.33 * y - 0.057 * pow(y,2))))

In [7]: timeit fx(x)
1 loops, best of 3: 221 ms per loop
</code></pre>
python|math|python-2.7|numpy
6
895
72,094,230
How to concat all columns in a multiindex dataframe?
<p>I have a multiindex df that I'm trying to concat. The columns are:</p> <pre><code>a.columns MultiIndex([( 'Note', '507.3'), ( 'Note', '507.4'), ( 'Note', '507.5'), ( 'Note', '507.6'), ('Standard Deviation', '507.3'), ('Standard Deviation', '507.4'), ('Standard Deviation', '507.5'), ('Standard Deviation', '507.6'), ( 'Value', '507.3'), ( 'Value', '507.4'), ( 'Value', '507.5'), ( 'Value', '507.6')], names=[None, 'ESTS id']) </code></pre> <p>When I do</p> <pre><code>pd.concat([a['Note']['507.3'],a['Note']['507.4'],a['Note']['507.5']],axis=1) </code></pre> <p>I get the result I want for those 3 columns.</p> <p>But I can't figure out how to concat all columns without manually writing them out like that.</p> <p>I tried</p> <pre><code>pd.concat([a.columns],axis=1) TypeError: cannot concatenate object of type '&lt;class 'pandas.core.indexes.multi.MultiIndex'&gt;'; only Series and DataFrame objs are valid </code></pre> <pre><code>pd.concat(a[a.columns],axis=1) TypeError: first argument must be an iterable of pandas objects, you passed an object of type &quot;DataFrame&quot; </code></pre>
<p>Firstly, may I suggest that you will be more likely to get an answer that is helpful for you if you are clearer about what your expected output is.</p> <p>However, based on your statement that:</p> <p><code>pd.concat([a['Note']['507.3'],a['Note']['507.4'],a['Note']['507.5']], axis=1)</code></p> <p>achieved what you want for those three columns, I assume that your intention is to drop the first level of your MultiIndex column names, resulting in a DataFrame with the following columns:</p> <pre><code>Index(['507.3', '507.4', '507.5', '507.6', '507.3', '507.4', '507.5', '507.6','507.3', '507.4', '507.5', '507.6'], dtype='object')
</code></pre> <p>To achieve this, you can use <code>a.droplevel(level=0, axis=1)</code>. See the documentation for the <code>droplevel</code> method <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.droplevel.html" rel="nofollow noreferrer">here</a>.</p> <p><strong>Brief Explanation</strong></p> <ol> <li>The <code>level=0</code> tells pandas which level of the MultiIndex you would like to drop. If you are not familiar with the concept of levels in a MultiIndex, you may want to familiarise yourself with MultiIndex objects more generally, for example <a href="https://pandas.pydata.org/docs/user_guide/advanced.html" rel="nofollow noreferrer">here</a>. You can see the levels of a MultiIndex object using the <code>.levels</code> attribute. So in your example, <code>a.columns.levels[0]</code> is a regular Index object containing the unique outer values <code>['Note', 'Standard Deviation', 'Value']</code>, and <code>a.columns.levels[1]</code> contains the column names you are trying to keep.</li> <li>The <code>axis=1</code> keyword argument tells pandas you are referring to the columns. The default behaviour is to operate on the row index, i.e. <code>axis=0</code>.</li> </ol> <p><strong>Other Notes</strong></p> <p>Notice that you will be left with <em>non-unique</em> column names, which may cause unintended results later if you are unaware of this. If your only goal is to have columns which are not a MultiIndex, but instead an Index type, then a safer approach may be to do something like:</p> <p><code>a.columns = a.columns.to_flat_index()</code>.</p> <p>The <code>.to_flat_index()</code> method (from the documentation):</p> <blockquote> <p>Convert a MultiIndex to an Index of Tuples containing the level values.</p> </blockquote> <p>So the above snippet replaces the MultiIndex columns with Index type columns, where each value is a tuple. Therefore, to refer to an individual column you would need to use (as an example):</p> <p><code>a[('Note', '507.3')]</code></p> <p>Alternatively, if you prefer the convenience of column names which are strings, but want to maintain unique column names, you could do something like:</p> <p><code>a.columns = [f"{x}::{y}" for x, y in a.columns]</code></p> <p>resulting in <code>a.columns</code>:</p> <pre><code>Index(['Note::507.3', 'Note::507.4', 'Note::507.5', 'Note::507.6',
       'Standard Deviation::507.3', 'Standard Deviation::507.4',
       'Standard Deviation::507.5', 'Standard Deviation::507.6',
       'Value::507.3', 'Value::507.4', 'Value::507.5', 'Value::507.6'],
      dtype='object')
</code></pre> <p>Note you may replace the <code>::</code> with any other separator you like.</p>
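<p>A small, self-contained sketch of the three options above, using made-up data purely for illustration (note that <code>to_flat_index</code> needs a reasonably recent pandas, roughly 0.24+):</p> <pre><code>import pandas as pd

cols = pd.MultiIndex.from_product([['Note', 'Value'], ['507.3', '507.4']])
a = pd.DataFrame([[1, 2, 3, 4], [5, 6, 7, 8]], columns=cols)

# Option 1: drop the outer level (leaves duplicate column names).
flat_dup = a.droplevel(level=0, axis=1)

# Option 2: tuples as column names (unique, but not strings).
flat_tuples = a.copy()
flat_tuples.columns = a.columns.to_flat_index()

# Option 3: join the levels into unique strings.
flat_strings = a.copy()
flat_strings.columns = [f"{x}::{y}" for x, y in a.columns]

print(flat_dup.columns)
print(flat_tuples.columns)
print(flat_strings.columns)
</code></pre>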
python|pandas|dataframe
1
896
72,004,269
How to calculate minutes passed since given date
<p>I have a dataframe that has a column named <code>Created_Timestamp</code>. I want to see how many minutes passed since that given date. I want to know dwell time in minutes from the column <code>Created_Timestamp</code>.</p> <pre class="lang-none prettyprint-override"><code> Created_Timestamp Dwell Time 2022-04-25 00:33:33.482 842 min 2022-04-25 08:30:52.904 364 min 2022-04-25 12:11:04.624 144 min </code></pre> <p>Dwell Time will be from current time.</p> <pre><code>curtime = dt.now() df['Dwell Time'] = curtime - df['Created_Timestamp'] </code></pre> <p>This did not work as I intended. How can I do this correctly?</p>
<pre><code>import pandas as pd
from datetime import datetime

curtime = datetime.now()
qqq = pd.to_datetime(df['Created_Timestamp'])   # parse the timestamp column
aaa = pd.to_datetime(curtime)                   # current time as a pandas Timestamp
ttt = (aaa - qqq).dt.total_seconds() / 60       # elapsed time in minutes
df['Dwell Time'] = ttt
</code></pre> <p>Output</p> <pre><code>        Created_Timestamp Dwell Time   Dwell Time
0 2022-04-25 00:33:33.482    842 min  1463.694005
1 2022-04-25 08:30:52.904    364 min   986.370305
2 2022-04-25 12:11:04.624    144 min   766.174972
</code></pre>
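<p>The same calculation can also be written a bit more compactly as a single expression; this is just a sketch, assuming <code>Created_Timestamp</code> holds datetimes or parseable date strings:</p> <pre><code>import pandas as pd

ts = pd.to_datetime(df['Created_Timestamp'])   # no-op if the column is already datetime64
df['Dwell Time'] = (pd.Timestamp.now() - ts).dt.total_seconds() / 60
</code></pre>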
python|pandas|datetime|timedelta
1
897
71,923,159
How to get output_attentions of a pretrained Distilbert Model?
<p>I am using a pretrained DistilBert model:</p> <pre><code>from transformers import TFDistilBertModel,DistilBertConfig

dbert = 'distilbert-base-uncased'

config = DistilBertConfig(max_position_embeddings=256 , dropout=0.2,
                          attention_dropout=0.2,
                          output_hidden_states=True, output_attentions=True) #or true

dbert_model = TFDistilBertModel.from_pretrained(dbert, config)

input_ids_in = tf.keras.layers.Input(shape=(256,), name='input_id', dtype='int32')
input_masks_in = tf.keras.layers.Input(shape=(256,), name='attn_mask', dtype='int32')

outputs = dbert_model([input_ids_in, input_masks_in], output_attentions = 1)
</code></pre> <p>I am trying to get the output_attentions. But the output is of length 1 and is given as:</p> <blockquote> <p>TFBaseModelOutput([('last_hidden_state', &lt;KerasTensor: shape=(None, 256, 768) dtype=float32 (created by layer 'tf_distil_bert_model_6')&gt;)])</p> </blockquote> <p>I have given "output_attentions = True" in the config, and in the forward pass "output_attentions = 1" is specified. Can anyone let me know what I am doing wrong? EDIT: I have changed the default configuration value of <code>max_position_embeddings</code> from 512 to 256. If I change my model instantiation to</p> <pre><code>dbert_model = TFDistilBertModel.from_pretrained('distilbert-base-uncased',config=config)
</code></pre> <p>it gives me the following error:</p> <pre><code>ValueError: cannot reshape array of size 393216 into shape (256,768)
</code></pre> <p>768*512 being 393216, it might be related to the config.</p> <p>Any ideas?</p>
<p>I am posting the answer as @cronoik suggested: I modified the code to <code>dbert_model = TFDistilBertModel.from_pretrained('distilbert-base-uncased', config, output_attentions=True)</code>. This gave both the hidden states and the attentions in the output.</p>
python|tensorflow|tf.keras|huggingface-transformers|distilbert
0
898
16,652,663
Building NumPy on RedHat
<p>I installed a local version of Python 2.7 in my home directory (Linux RedHat) under ~/opt using the --prefix flag. More specifically, Python was placed in ~/opt/bin.</p> <p>Now, I want to install NumPy, but I am not really sure how I would achieve this. All I found in the INSTALL.txt and online documentation was the command to use the compiler. I tried gfortran, and it worked without any error message: <code>python setup.py build --fcompiler=gnu95</code></p> <p>However, I am not sure how to install it for my local version of Python. Also, I have to admit that I don't really understand how this whole approach works in general. E.g., what is <code>setup.py build</code> doing? Is it creating module files that I have to move to a specific folder?</p> <p>I hope someone can give me some help here, and I would also appreciate a few lines of information on how this approach works, or maybe some resources where I can read up on it (I didn't find anything on the NumPy pages).</p>
<p>Your local version of python should keep all of its files somewhere in <code>~/opt</code> (presumably). As long as this is the python installation that gets used when you issue the command</p> <pre><code>python setup.py build --fcompiler=gnu95
</code></pre> <p>you should be all set, because in the <code>sys</code> module there are a bunch of constants which the setup script uses to determine <em>where</em> to put the modules once they are built.</p> <p>So -- running <code>python setup.py build</code> issues all of the necessary commands to build the module (compiling the C/Fortran code into shared object libraries that python can load dynamically and copying the pure python code to create the proper directory structure). The module is actually built somewhere in the <code>build</code> subdirectory, which gets created during the process if it doesn't already exist. Once the library has been built (successfully), installing it should be as simple as:</p> <pre><code>python setup.py install
</code></pre> <p>(You might need to <code>sudo</code> if you don't have write privileges in the install directory).</p>
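<p>For a <code>--prefix</code>-style local install like the one described in the question, the key is simply to invoke the local interpreter explicitly. A hedged sketch of the full sequence (the exact paths are assumptions; adjust them to wherever your local Python actually lives):</p> <pre><code># Make sure the ~/opt interpreter is the one being invoked
~/opt/bin/python setup.py build --fcompiler=gnu95
~/opt/bin/python setup.py install

# Verify that the module landed in the local installation
~/opt/bin/python -c "import numpy; print(numpy.__version__); print(numpy.__file__)"
</code></pre>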
python|numpy|setup.py
1
899
16,992,422
For loop speed with Numpy
<p>I am trying to get this code running fast in Python; however, I am having trouble getting it to run anywhere near the speed it runs at in MATLAB. The problem seems to be this for loop, which takes about 2 seconds to run when the number "SRpixels" is approximately equal to 25000.</p> <p>I can't seem to find any way to trim this down any further, and I am looking for suggestions.</p> <p>The datatypes for the numpy arrays below are float32 for all except the **_Location[] arrays, which are uint32.</p> <pre><code>for j in range (0,SRpixels):
    #Skip data if outside valid range
    if (abs(SR_pointCloud[j,0]) &gt; SR_xMax or SR_pointCloud[j,2] &gt; SR_zMax or SR_pointCloud[j,2] &lt; 0):
        pass
    else:
        RIGrid1_Location[j,0] = np.floor(((SR_pointCloud[j,0] + xPosition + 5) - xGrid1Center) / gridSize)
        RIGrid1_Location[j,1] = np.floor(((SR_pointCloud[j,2] + yPosition) - yGrid1LowerBound) / gridSize)
        RIGrid1_Count[RIGrid1_Location[j,0],RIGrid1_Location[j,1]] += 1
        RIGrid1_Sum[RIGrid1_Location[j,0],RIGrid1_Location[j,1]] += SR_pointCloud[j,1]
        RIGrid1_SumofSquares[RIGrid1_Location[j,0],RIGrid1_Location[j,1]] += SR_pointCloud[j,1] * SR_pointCloud[j,1]

        RIGrid2_Location[j,0] = np.floor(((SR_pointCloud[j,0] + xPosition + 5) - xGrid2Center) / gridSize)
        RIGrid2_Location[j,1] = np.floor(((SR_pointCloud[j,2] + yPosition) - yGrid2LowerBound) / gridSize)
        RIGrid2_Count[RIGrid2_Location[j,0],RIGrid2_Location[j,1]] += 1
        RIGrid2_Sum[RIGrid2_Location[j,0],RIGrid2_Location[j,1]] += SR_pointCloud[j,1]
        RIGrid2_SumofSquares[RIGrid2_Location[j,0],RIGrid2_Location[j,1]] += SR_pointCloud[j,1] * SR_pointCloud[j,1]
</code></pre> <p>I did attempt to use Cython, where I replaced j with a <code>cdef int j</code> and compiled. There was no noticeable performance gain. Anyone have suggestions?</p>
<p>Vectorization is almost always the best way to speed up numpy code, and much of this seems vectorizable. To start, for example, the location arrays seem quite simple to do:</p> <pre><code># these are all of your j values inds = np.arange(0,SRpixels) # these are the j values you don't want to skip sel = np.invert((abs(SR_pointCloud[inds,0]) &gt; SR_xMax) | (SR_pointCloud[inds,2] &gt; SR_zMax) | (SR_pointCloud[inds,2] &lt; 0)) RIGrid1_Location[sel,0] = np.floor(((SR_pointCloud[sel,0] + xPosition + 5) - xGrid1Center) / gridSize) RIGrid1_Location[sel,1] = np.floor(((SR_pointCloud[sel,2] + yPosition) - yGrid1LowerBound) / gridSize) RIGrid2_Location[sel,0] = np.floor(((SR_pointCloud[sel,0] + xPosition + 5) - xGrid2Center) / gridSize) RIGrid2_Location[sel,1] = np.floor(((SR_pointCloud[sel,2] + yPosition) - yGrid2LowerBound) / gridSize) </code></pre> <p>This has no python loop.</p> <p>The rest are trickier and will depend upon what you are doing, but should also be vectorizable if you think about them in this way.</p> <p>If you <em>really</em> have something that can't be vectorized and must be done with a loop—I've only had this happen a few times—I'd suggest Weave over Cython. It's harder to use, but should give speeds comparable to C.</p>
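<p>The accumulation part (the counts, sums and sums of squares) can also be vectorized with <code>np.add.at</code>, which performs an unbuffered scatter-add and therefore handles repeated <code>(row, col)</code> pairs correctly, unlike a plain fancy-indexed <code>+=</code>. A rough sketch building on the <code>sel</code> mask above; it needs NumPy 1.8+ and is untested against the original data, so treat it as a starting point:</p> <pre><code>rows = RIGrid1_Location[sel, 0]
cols = RIGrid1_Location[sel, 1]
vals = SR_pointCloud[sel, 1]

# Unbuffered in-place addition: duplicate (row, col) pairs are accumulated.
np.add.at(RIGrid1_Count, (rows, cols), 1)
np.add.at(RIGrid1_Sum, (rows, cols), vals)
np.add.at(RIGrid1_SumofSquares, (rows, cols), vals * vals)
</code></pre>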
python|numpy
5