Dataset schema (column name, dtype, observed range or string length):
Unnamed: 0   int64    0 to 378k
id           int64    49.9k to 73.8M
title        string   length 15 to 150
question     string   length 37 to 64.2k
answer       string   length 37 to 44.1k
tags         string   length 5 to 106
score        int64    -10 to 5.87k
600
48,378,015
Compare values in single column
<p>I have a single column </p> <pre><code>df = pd.DataFrame({'Name': ['Harry', 'John', 'Peter', 'Stan', 'Petra'], 'Score': [10, 9, 5, 7, 8]}) </code></pre> <p>I can use itertools to get a matrix</p> <pre><code>for a, b in itertools.combinations(df['Score'], 2): print (a, b) </code></pre> <p>What is the best what way to return a matrix comparing each value with an x and y index. for instance:</p> <pre><code> Harry John Peter Stan Petra Harry - 1 5 3 2 John -1 - 4 2 1 etc.... </code></pre>
<p>You can use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.ufunc.outer.html" rel="nofollow noreferrer"><code>np.subtract.outer</code></a> with <code>DataFrame</code> constructor:</p> <pre><code>df = pd.DataFrame(np.subtract.outer(df['Score'], df['Score']), index=df['Name'], columns=df['Name']) </code></pre> <p>Or <code>broadcasting</code>:</p> <pre><code>a = df['Score'].values df = pd.DataFrame(a[:, None] - a, columns = df['Name'], index=df['Name']) print (df) Name Harry John Peter Stan Petra Name Harry 0 1 5 3 2 John -1 0 4 2 1 Peter -5 -4 0 -2 -3 Stan -3 -2 2 0 -1 Petra -2 -1 3 1 0 </code></pre>
python|pandas
1
601
48,081,927
Adding a new column based on values of another column in Pandas(python)
<p>Given a dataframe that looks something like</p> <pre><code>Vote A Vote B 1 4 3 2 1 5 </code></pre> <p>I want to add a new column named <code>Winner</code> that compares the value between two columns <code>Vote A</code> and <code>Vote B</code> and specify the winner.</p> <pre><code>Winner B A B </code></pre> <p>How can I do this in Pandas?</p>
<p>You could use</p> <pre><code>def winner(row): if row['Vote A'] &gt; row['Vote B']: return 'A' elif row['Vote A'] &lt; row['Vote B']: return 'B' else: return '' df['Winner'] = df[['Vote A','Vote B']].apply(winner, axis=1) </code></pre> <p>Which yields</p> <pre><code>Vote A Vote B Winner 1 4 B 3 2 A 1 5 B </code></pre>
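<p>A vectorized alternative (not part of the original answer; shown only as a sketch assuming the same <code>df</code>) replaces the row-wise <code>apply</code> with <code>np.select</code>, which is usually faster on large frames:</p> <pre><code>import numpy as np

# conditions are checked in order; rows matching neither get the default ''
df['Winner'] = np.select(
    [df['Vote A'].gt(df['Vote B']), df['Vote A'].lt(df['Vote B'])],
    ['A', 'B'],
    default='')
</code></pre>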
python|pandas
2
602
48,186,074
Renaming MultiIndex columns is not working
<p>I am trying to rename the columns of my data frame on the fly. The reason is I want to do something like</p> <pre><code>df.rename(..).plot() </code></pre> <p>This is how I attempt to do it:</p> <pre><code>import pandas as pd import numpy as np np.random.seed(42) cols = [(100,i) for i in range(1, 6)] cols_replace = ['Sensor ' + str(i) for i in range(1, len(cols)+1)] rename_dict = dict(zip(df.columns, cols_replace)) df = pd.DataFrame(np.random.rand(5, len(cols)), columns=pd.MultiIndex.from_tuples(cols)) print(df.rename(columns=rename_dict)) </code></pre> <p>However, for some reason this does not work as the resulting data frame still does not show the column names that I am seeking:</p> <pre><code> 100 1 2 3 4 5 0 0.374540 0.950714 0.731994 0.598658 0.156019 1 0.155995 0.058084 0.866176 0.601115 0.708073 2 0.020584 0.969910 0.832443 0.212339 0.181825 3 0.183405 0.304242 0.524756 0.431945 0.291229 4 0.611853 0.139494 0.292145 0.366362 0.456070 </code></pre> <p>Why is this not working as I'd expect and is there a way how I can achieve this?</p> <hr> <p>The content of <code>rename_dict</code> is:</p> <pre><code>{(100, 1): 'Sensor 1', (100, 2): 'Sensor 2', (100, 3): 'Sensor 3', (100, 4): 'Sensor 4', (100, 5): 'Sensor 5'} </code></pre>
<p>Try using <code>rename</code> with a <code>level</code> argument - </p> <pre><code>df = df.rename(columns='Sensor {}'.format , level=1) </code></pre> <p>Thanks to Zero for the shorthand improvement. Alternatively,</p> <pre><code>i = df.columns.levels[1] # OP's suggestion, for more flexibility! j = ['Sensor ' + str(x) for x in range(1, len(cols) + 1)] rename_dict = dict(zip(i, j)) df = df.rename(columns=rename_dict, level=1) </code></pre> <p></p> <pre><code>df 100 Sensor 1 Sensor 2 Sensor 3 Sensor 4 Sensor 5 0 0.374540 0.950714 0.731994 0.598658 0.156019 1 0.155995 0.058084 0.866176 0.601115 0.708073 2 0.020584 0.969910 0.832443 0.212339 0.181825 3 0.183405 0.304242 0.524756 0.431945 0.291229 4 0.611853 0.139494 0.292145 0.366362 0.456070 </code></pre> <p>Since you want to apply the renaming operation on the <em>first</em> level (rather than the zeroth), pass <code>level=1</code>.</p>
python|pandas|dataframe|rename|multi-index
2
603
48,402,780
Web Scraping a Forum Post in Python Using Beautiful soup and lxml, saving results to a pandas dataframe
<p>I was looking at a past stackoverflow post but I am having trouble to build on top of it.</p> <p>I want to get :</p> <ol> <li>users of who posted it in the form</li> <li>forum post content</li> <li>save it all to a dataframe</li> </ol> <p>Something like</p> <p>Dateframe</p> <pre><code>col1 col2 johnsmith I love cats janesmith I own 50 cats </code></pre> <p>Code trying to modify</p> <pre><code>import requests from bs4 import BeautifulSoup import lxml r = requests.get('http://www.catforum.com/forum/43-forum-fun/350938-count-one-billion-2016-a-120.html') soup = BeautifulSoup(r.text) for div in soup.select('[id^=post_message]'): print(div.get_text("\n", strip=True)) </code></pre>
<p>I only parsed the webpage the URL you included in the question.</p> <p>The <code>posts</code> list may need some data clean up by eliminating the new line, tabs, and etc.</p> <p>Code:</p> <pre><code>import requests, re from bs4 import BeautifulSoup import pandas as pd headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.0; WOW64; rv:24.0) Gecko/20100101 Firefox/24.0'} r = requests.get('http://www.catforum.com/forum/43-forum-fun/350938-count-one-billion-2016-a-120.html', headers=headers) soup = BeautifulSoup(r.text, 'html.parser') names = [name.text for name in soup.find_all('a', href=re.compile('^http://www.catforum.com/forum/members/[0-9]'), text=True)] posts = [post.text for post in soup.find_all('div', id=re.compile('^post_message_'))] df = pd.DataFrame(list(zip(names, posts))) print(df) </code></pre> <p>Output:</p> <pre><code> 0 1 0 bluemilk \r\n\t\t\t\r\n\t\t\t11301\n\nDid you get the c... 1 Mochas Mommy \r\n\t\t\t\r\n\t\t\t11302\nWell, I tidied, cle... 2 Mochas Mommy \r\n\t\t\t\r\n\t\t\t11303\nDaisy sounds like s... 3 DebS \r\n\t\t\t\r\n\t\t\t11304\n\nNo, Kurt, I haven... 4 Mochas Mommy \r\n\t\t\t\r\n\t\t\t11305\n\nI had a sore neck... 5 annegirl \r\n\t\t\t\r\n\t\t\t11306\nMM- Thanks for your... 6 Mochas Mommy \r\n\t\t\t\r\n\t\t\t11307\nWelcome back annieg... 7 spirite \r\n\t\t\t\r\n\t\t\t11308. Hi annegirl! None o... 8 DebS \r\n\t\t\t\r\n\t\t\t11309\n\nWelcome to you, a... 9 annegirl \r\n\t\t\t\r\n\t\t\t11310\nDebS and Spirite th... </code></pre>
python|pandas|beautifulsoup
1
604
48,725,520
Get dataframe values by down instead of across?
<pre><code> 0 2 0 -0.089329 -0.945867 1 -0.932132 0.017587 2 -0.016692 0.254161 3 -1.143704 1.193555 4 -0.077118 -0.862495 </code></pre> <p><code>df.values</code> gives</p> <pre><code>[[-0.089329, -0.945867], [-0.932132, 0.017587], ...] </code></pre> <p>But I want:</p> <pre><code>[[-0.089329, -0.932132, ...], [-0.945867, 0.017587, ...]] </code></pre>
<p>You need transpose numpy array or pandas DataFrame:</p> <pre><code>df.values.T </code></pre> <p>Or:</p> <pre><code>df.T.values </code></pre>
python|pandas
3
605
48,832,364
How to keep partitions after performing a group-by aggregation in dask
<p>In my application I perform an aggregation on a dask dataframe using groupby, ordered by a certain id. </p> <p>However I would like that the aggregation maintains the partition divisions, as I intend to perform joins with other dataframe identically partitioned.</p> <pre><code>import pandas as pd import numpy as np import dask.dataframe as dd df =pd.DataFrame(np.arange(16), columns=['my_data']) df.index.name = 'my_id' ddf = dd.from_pandas(df, npartitions=4) ddf.npartitions # 4 ddf.divisions # (0, 4, 8, 12, 15) aggregated = ddf.groupby('my_id').agg({'my_data': 'count'}) aggregated.divisions # (None, None) </code></pre> <p>Is there a way to accomplish that?</p>
<p>You probably can't maintain <em>the same</em> partitioning, because dask will need to aggregate counts between partitions. Your data will necessarily have to move around in ways that depend on the values of your data.</p> <p>If you're looking to ensure that your output has many partitions then you might choose to use the <code>split_out=</code> keyword to <code>agg</code></p>
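<p>A minimal sketch of the <code>split_out=</code> keyword, reusing the <code>ddf</code> from the question (the number of output partitions chosen here is arbitrary):</p> <pre><code># divisions will still be unknown; split_out only controls how many partitions the result has
aggregated = ddf.groupby('my_id').agg({'my_data': 'count'}, split_out=4)
print(aggregated.npartitions)  # 4 instead of the default single output partition
</code></pre>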
python|pandas|dataframe|distributed|dask
2
606
48,689,700
Combination of matrix elements giving non-zero value (PYTHON)
<p>I have to evaluate the following expression, given two quite large matrices A,B and a very complicated function F: <a href="https://i.stack.imgur.com/8k9rR.gif" rel="nofollow noreferrer">The mathematical expression</a></p> <p>I was thinking if there is an efficient way in order to first find those indices i,j that will give a non-zero element after the multiplication of the matrices, so that I avoid the quite slow 'for loops'.</p> <p><strong>Current working code</strong></p> <pre><code># Starting with 4 random matrices A = np.random.randint(0,2,size=(50,50)) B = np.random.randint(0,2,size=(50,50)) C = np.random.randint(0,2,size=(50,50)) D = np.random.randint(0,2,size=(50,50)) indices [] for i in range(A.shape[0]): for j in range(A.shape[0]): if A[i,j] != 0: for k in range(B.shape[1]): if B[j,k] != 0: for l in range(C.shape[1]): if A[i,j]*B[j,k]*C[k,l]*D[l,i]!=0: indices.append((i,j,k,l)) print indices </code></pre> <p>As you can see, in order to get the indices I need I have to use nested loops (= huge computational time). </p>
<p>My guess would be NO: you cannot avoid the for-loops. In order to find all the indices <code>ij</code> you need to loop through all the elements which defeats the purpose of this check. Therefore, you should go ahead and use simple array elementwise multiplication and dot product in <code>numpy</code> - it should be quite fast with for loops taken care by <code>numpy</code>.</p> <p>However, if you plan on using a Python loop then the answer is YES, you can avoid them by using <code>numpy</code>, using the following pseudo-code (=hand-waving):</p> <pre><code>i, j = np.indices((N, M)) # CAREFUL: you may need to swap i&lt;-&gt;j or N&lt;-&gt;M fs = F(i, j, z) # array of values of function F # for a given z over the index grid R = np.dot(A*fs, B) # summation over j # return R # if necessary do a summation over i: np.sum(R, axis=...) </code></pre> <p>If the issue is that computing <code>fs = F(i, j, z)</code> is a very slow operation, then you will have to identify elements of <code>A</code> that are zero using two loops built-in into <code>numpy</code> (so they are quite fast):</p> <pre><code>good = np.nonzero(A) # hidden double loop (for 2D data) fs = np.zeros_like(A) fs[good] = F(i[good], j[good], z) # compute F only where A != 0 </code></pre>
python|numpy|for-loop|matrix|indices
0
607
70,990,715
Gather data by year and also by industry
<p>I have this very large Dataframe containing statistics for various firms for years 1950 to 2020. I have been trying to divide the data first by year and then by industry code (4 digits). Both 'year' and 'industry_code' are columns from the Dataframe. I have created a dictionary in order to obtain data by year, but then I find myself stuck when trying to divide each key by industry, since all of my columns from my initial Dataframe find themselves in the 'value' part of the dictionary. Here is my starting code:</p> <pre><code>df= pd.read_csv('xyz') dictio = {} for year in df['year'].unique(): dictio[year] = df[ df['year'] == year ] </code></pre> <p>Could someone help me figure out a groupby / loc / if statement or other in order to complete the sampling by year and by industry? Thank you!</p>
<p>Try using dict comprehension + <code>groupby</code>:</p> <pre><code>dct = {key1: {key2: df2 for key2, df2 in df1.groupby('industry_code')} for key1, df1 in df.groupby('year')} </code></pre> <p>Now try accessing one sub-dataframe; the outer key is a year and the inner key is a 4-digit industry code (the code used below is only an illustration):</p> <pre><code>industry_year_df = dct[1994][5812] </code></pre>
pandas|dataframe|dictionary|if-statement|pandas-groupby
0
608
51,661,239
Error when checking target: expected dense_1 to have shape (1,) but got array with shape (256,)
<p>I am trying to learn tensorflow, and I was following a demo tutorial (<a href="https://www.tensorflow.org/tutorials/keras/basic_text_classification" rel="nofollow noreferrer">https://www.tensorflow.org/tutorials/keras/basic_text_classification</a>)</p> <p>The error report is telling me </p> <p>"Error when checking target: expected dense_1 to have shape (1,) but got array with shape (256,)"</p> <p>Can someone explain to me why this won't work?</p> <pre><code>train_data = keras.preprocessing.sequence.pad_sequences(train_data, value=word_index["&lt;PAD&gt;"], padding='post', maxlen=256) #max length test_data = keras.preprocessing.sequence.pad_sequences(test_data, value=word_index["&lt;PAD&gt;"], padding='post', maxlen=256) vocal_size = 10000 model = keras.Sequential() model.add(keras.layers.Embedding(vocal_size,16)) model.add(keras.layers.GlobalAveragePooling1D()) model.add(keras.layers.Dense(16,activation=tf.nn.relu)) model.add(keras.layers.Dense(1,activation=tf.nn.sigmoid)) model.compile(optimizer=tf.train.AdamOptimizer(), loss='binary_crossentropy', metrics=['accuracy']) x_val = train_data[:10000] partial_x_train = train_data[10000:] y_val = train_data[:10000] partial_y_train = train_data[10000:] history = model.fit(partial_x_train, partial_y_train, epochs=40, batch_size=512, validation_data=(x_val, y_val), verbose=1) </code></pre>
<p>The error is in these lines</p> <pre><code>y_val = train_data[:10000] partial_y_train = train_data[10000:] </code></pre> <p>But the tutorial says it should be</p> <pre><code>y_val = train_labels[:10000] partial_y_train = train_labels[10000:] </code></pre> <p><code>train_data</code> represents each written review, and <code>train_labels</code> represents whether the reviews are positive or negative. You want your model to learn when a written review is positive or negative.</p>
numpy|machine-learning|neural-network|keras
0
609
41,809,308
Use pandas with Spark
<p>I have a Noob Question on spark and pandas. I would like to use pandas, numpy etc.. with spark but when i import a lib i have an error. can you help me plz? This is my code</p> <pre><code>from pyspark import SparkContext, SQLContext from pyspark import SparkConf import pandas # Config conf = SparkConf().setAppName("Script") sc = SparkContext(conf=conf) log4j = sc._jvm.org.apache.log4j log4j.LogManager.getRootLogger().setLevel(log4j.Level.ERROR) sqlCtx = SQLContext(sc) # Importation of csv out of HDFS data_name = "file_on_hdfs.csv" data_textfile = sc.textFile(data_name) </code></pre> <p>This is the error:</p> <pre><code>ImportError: No module named pandas </code></pre> <p>How can i use pandas? It's not a local mode.</p>
<p>Spark has its own <a href="http://spark.apache.org/docs/latest/sql-programming-guide.html#datasets-and-dataframes" rel="noreferrer">Dataframe</a> object that can be created from RDDs.</p> <p>You can still use libraries such as numpy and pandas, but you must install them first in the Python environment that Spark runs; the <code>ImportError</code> simply means pandas is not installed there.</p>
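<p>A minimal sketch (assuming pandas has been installed, e.g. with <code>pip install pandas</code>, in the Python environment Spark uses) of how the two DataFrame types interoperate:</p> <pre><code>import pandas as pd

pdf = pd.DataFrame({'a': [1, 2, 3]})
sdf = sqlCtx.createDataFrame(pdf)  # pandas DataFrame to Spark DataFrame
back = sdf.toPandas()              # Spark DataFrame back to pandas, collected on the driver
</code></pre>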
python|pandas|pyspark|importerror
7
610
64,571,500
Pandas: Sort a Multiindex Dataframe's multi-level column with mixed datatypes
<p>Below is my dataframe:</p> <pre><code>In [2804]: df = pd.DataFrame({'A':[1,2,3,4,5,6], 'D':[{&quot;value&quot;: '126', &quot;perc&quot;: None, &quot;unit&quot;: None}, {&quot;value&quot;: 324, &quot;perc&quot;: None, &quot;unit&quot;: None}, {&quot;value&quot;: 'N/A', &quot;perc&quot;: None, &quot;unit&quot;: None}, {}, {&quot;value&quot;: '100', &quot;perc&quot;: None, &quot;unit&quot;: None}, np.nan]}) In [2794]: df.columns = pd.MultiIndex.from_product([df.columns, ['E']]) In [2807]: df Out[2807]: A D E E 0 1 {'value': '126', 'perc': None, 'unit': None} 1 2 {'value': 324, 'perc': None, 'unit': None} 2 3 {'value': 'N/A', 'perc': None, 'unit': None} 3 4 {} 4 5 {'value': '100', 'perc': None, 'unit': None} 5 6 NaN </code></pre> <p>I need to sort the multi-level column with index <code>(D,E)</code> in descending order based on <code>value</code> key from <code>dict</code>.</p> <p>As you can see <code>value</code> key can have values in mixed datatypes like <code>int, string</code> or empty like <code>{}</code>, or <code>NaN</code>.</p> <p><strong><code>N/A</code> and <code>Nan</code> values should always appear at last after sorting(both asc and desc).</strong></p> <p><strong>Expected output:</strong></p> <pre><code>In [2814]: df1 = pd.DataFrame({'A':[2,1,5,3,4,6], 'D':[{&quot;value&quot;: 324, &quot;perc&quot;: None, &quot;unit&quot;: None}, {&quot;value&quot;: '126', &quot;perc&quot;: None, &quot;unit&quot;: None}, {&quot;value&quot;: '100', &quot;perc&quot;: None, &quot;unit&quot;: None}, {&quot;value&quot;: 'N/A', &quot;perc&quot;: None, &quot;unit&quot;: None}, {},np.nan]}) In [2799]: df1.columns = pd.MultiIndex.from_product([df1.columns, ['E']]) In [2811]: df1 Out[2811]: A D E E 0 2 {'value': 324, 'perc': None, 'unit': None} 1 1 {'value': '126', 'perc': None, 'unit': None} 2 5 {'value': '100', 'perc': None, 'unit': None} 3 3 {'value': 'N/A', 'perc': None, 'unit': None} 4 4 {} 5 6 NaN </code></pre>
<p>Create helper column filled by numeric and sorting by this column:</p> <pre><code>df['tmp'] = pd.to_numeric(df[('D','E')].str.get('value'), errors='coerce') df1 = df.sort_values('tmp', ascending=False).drop('tmp', axis=1) print (df1) A D E E 1 2 {'value': 324, 'perc': None, 'unit': None} 0 1 {'value': '126', 'perc': None, 'unit': None} 4 5 {'value': '100', 'perc': None, 'unit': None} 2 3 {'value': 'N/A', 'perc': None, 'unit': None} 3 4 {} 5 6 NaN </code></pre> <hr /> <pre><code>df1 = df.sort_values('tmp').drop('tmp', axis=1) print (df1) A D E E 4 5 {'value': '100', 'perc': None, 'unit': None} 0 1 {'value': '126', 'perc': None, 'unit': None} 1 2 {'value': 324, 'perc': None, 'unit': None} 2 3 {'value': 'N/A', 'perc': None, 'unit': None} 3 4 {} 5 6 NaN </code></pre>
python|python-3.x|pandas|dataframe
1
611
64,423,343
AWS lambda task timed out issue with large data while processing data from S3 bucket
<p>I have a 120 MB data file in my S3 bucket, and I am loading it in Lambda with Python pandas and processing it. But after 15 minutes (the time set in the timeout option of the basic settings) it gives a "task timed out" error and stops the process. The same processing done locally in Sublime Text and the terminal takes only 2-3 minutes. What is the problem and how can I solve it? Thanks in advance</p>
<p>You should try to take a look at the resourcing used within your local machine if you believe that it takes a significantly less period of time. <a href="https://docs.aws.amazon.com/lambda/latest/dg/configuration-console.html" rel="nofollow noreferrer">Increasing the amount of memory</a> available to your Lambda can significantly improve performance in circumstances where it is being constrained, this will also increase the amount of CPU.</p> <p>If there are large volumes of data can this be moved into <a href="https://aws.amazon.com/efs/" rel="nofollow noreferrer">EFS</a>? Lambda can have an <a href="https://aws.amazon.com/blogs/compute/using-amazon-efs-for-aws-lambda-in-your-serverless-applications/" rel="nofollow noreferrer">EFS mount</a> attached and accessed as if it is local storage. By doing this you remove this process from your Lambda script and instead can process only.</p> <p>Finally if neither of the above result in cutting down the time it takes to execute, take a look at whether you can break up the Lambda into smaller Lambda functions and then orchestrate via <a href="https://aws.amazon.com/step-functions/" rel="nofollow noreferrer">Step Functions</a>. By doing this you can <a href="https://aws.amazon.com/getting-started/hands-on/create-a-serverless-workflow-step-functions-lambda/" rel="nofollow noreferrer">create a chained sequence of Lambda functions</a> that will perform the original operation of the single Lambda function.</p>
pandas|amazon-web-services|amazon-s3|aws-lambda|aws-lambda-layers
0
612
47,626,579
Creating new column with letters v from beginning to last row
<p>How do I add a new empty column with the letters 'v' from beginning to the last row.</p> <p>df1:</p> <pre><code> AM 0 MA 1 Ming 2 Mo </code></pre> <p>Desired output for df1:</p> <pre><code> AM C 0 MA v 1 Ming v 2 Mo v </code></pre> <p>I get error: AttributeError: 'DataFrame' object has no attribute 'Sample'</p> <p>When I try this:</p> <pre><code>df1["C"] = np.nan df1 df1["C"] = df1.Sample.str['v'] </code></pre>
<p>You can add a constant column by assigning a single scalar value to a new column name, for example:</p> <pre><code>d = {'one': pd.Series([1., 2., 3.], index=['a', 'b', 'c'])}
dd = pd.DataFrame(d)
print(dd)

   one
a  1.0
b  2.0
c  3.0

dd['df'] = 'test'
print(dd)

   one    df
a  1.0  test
b  2.0  test
c  3.0  test
</code></pre>
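<p>Applied to the frame in the question (a sketch, assuming <code>df1</code> is exactly as shown there), the whole task reduces to a single assignment:</p> <pre><code>df1['C'] = 'v'
</code></pre>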
python|python-3.x|pandas
0
613
47,841,405
How to manually create text summaries in TensorFlow?
<p>First of all, I already know <em>how to manually add float or image summaries</em>. I can construct a <code>tf.Summary</code> protobuf manually. But what about text summaries? I look at the definition for summary protobuf <a href="https://github.com/tensorflow/tensorflow/blob/r1.4/tensorflow/core/framework/summary.proto" rel="nofollow noreferrer">here</a>, but I don't find a "string" value option there.</p>
<p>TensorBoard's text plugin offers a <code>pb</code> method that lets you create text summaries outside of a TensorFlow environment. <a href="https://github.com/tensorflow/tensorboard/blob/master/tensorboard/plugins/text/summary.py#L74" rel="nofollow noreferrer">https://github.com/tensorflow/tensorboard/blob/master/tensorboard/plugins/text/summary.py#L74</a></p> <p>Example usage:</p> <pre><code>import tensorboard as tb text_summary_proto = tb.summary.pb('fooTag', 'text data') </code></pre>
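<p>To get the summary into an events file you still need a writer; a minimal sketch using the TF 1.x API (the log directory and step are arbitrary):</p> <pre><code>import tensorflow as tf

writer = tf.summary.FileWriter('./logs')
writer.add_summary(text_summary_proto, global_step=0)
writer.close()
</code></pre>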
tensorflow|tensorboard
2
614
47,895,225
Tensorflow Combining Two Models End to End
<p>In tensorflow it is fairly easy to load trained models back into tensorflow through the use of checkpoints. However, this use case seems oriented towards users that want to either run evaluation or additional training on a checkpointed model.</p> <p>What is the simplest way in tensorflow to load a pre-trained model and use it (without training) to produce results which will then be used in a new model? </p> <p>Right now the methods that seem most promising are tf.get_tensor_by_name() and tf.stop_gradient() in order to get the input and output tensors for the trained model loaded from tf.train.import_meta_graph(). </p> <p>What is the best practices setup for this sort of thing?</p>
<p>The most straightforward solution would be to freeze the pre-trained model variables using this function:</p> <pre><code>def freeze_graph(model_dir, output_node_names): """Extract the sub graph defined by the output nodes and convert all its variables into constant Args: model_dir: the root folder containing the checkpoint state file output_node_names: a string, containing all the output node's names, comma separated """ if not tf.gfile.Exists(model_dir): raise AssertionError( "Export directory doesn't exist") if not output_node_names: print("You need to supply the name of the output node") return -1 # We retrieve our checkpoint fullpath checkpoint = tf.train.get_checkpoint_state(model_dir) input_checkpoint = checkpoint.model_checkpoint_path # We precise the file fullname of our freezed graph absolute_model_dir = "/".join(input_checkpoint.split('/')[:-1]) # We clear devices to allow TensorFlow to control on which device it will load operations clear_devices = True # We start a session using a temporary fresh Graph with tf.Session(graph=tf.Graph()) as sess: # We import the meta graph in the current default Graph saver = tf.train.import_meta_graph(args.meta_graph_path, clear_devices=clear_devices) # We restore the weights saver.restore(sess, input_checkpoint) # We use a built-in TF helper to export variables to constants frozen_graph = tf.graph_util.convert_variables_to_constants( sess, # The session is used to retrieve the weights tf.get_default_graph().as_graph_def(), # The graph_def is used to retrieve the nodes output_node_names.split(",") # The output node names are used to select the usefull nodes ) return frozen_graph </code></pre> <p>Then you'd be able to build your new-model on top of the pre-trained model:</p> <pre><code># Get the frozen graph frozen_graph = freeze_graph(YOUR_MODEL_DIR, YOUR_OUTPUT_NODES) # Set the frozen graph as a default graph frozen_graph.as_default() # Get the output tensor from the pre-trained model pre_trained_model_result = frozen_graph.get_tensor_by_name(OUTPUT_TENSOR_NAME_OF_PRETRAINED_MODEL) # Let's say you want to get the pre trained model result's square root my_new_operation_results = tf.sqrt(pre_trained_model_result) </code></pre>
tensorflow
10
615
49,298,488
How to extract hour, minute and second from Series filled with datetime.time values
<p>Data:</p> <pre><code>0 09:30:38 1 13:40:27 2 18:05:24 3 04:58:08 4 09:00:09 </code></pre> <p>Essentially what I'd like to do is split this into three columns [hour, minute, second]</p> <p>I've tried the following code but none seem to be working:</p> <pre><code>train_sample.time.hour AttributeError: 'Series' object has no attribute 'hour' train_sample.time.dt.hour AttributeError: Can only use .dt accessor with datetimelike values pd.DatetimeIndex(train_sample.time).hour TypeError: &lt;class 'datetime.time'&gt; is not convertible to datetime </code></pre> <p>This seems so simple but I can't figure it out. Any help would be much appreciated. </p>
<p>Use list comprehension with extract attributes of <code>time</code>s:</p> <pre><code>import datetime as datetime df = pd.DataFrame({'time': [datetime.time(9, 30, 38), datetime.time(13, 40, 27), datetime.time(18, 5, 24), datetime.time(4, 58, 8), datetime.time(9, 0, 9)]}) print (df) time 0 09:30:38 1 13:40:27 2 18:05:24 3 04:58:08 4 09:00:09 df[['h','m','s']] = pd.DataFrame([(x.hour, x.minute, x.second) for x in df['time']]) </code></pre> <p>Or convert to <code>string</code>s, split and convert to <code>int</code>:</p> <pre><code>df[['h','m','s']] = df['time'].astype(str).str.split(':', expand=True).astype(int) print (df) time h m s 0 09:30:38 9 30 38 1 13:40:27 13 40 27 2 18:05:24 18 5 24 3 04:58:08 4 58 8 4 09:00:09 9 0 9 </code></pre>
python|pandas|datetime|series
10
616
49,131,972
Why was my dataframe column changed?
<p>My code</p> <pre><code>import pandas as pd import numpy as np series = pd.read_csv('o1.csv', header=0) s1 = series s2 = series s1['userID'] = series['userID'] + 5 s1['adID'] = series['adID'] + 3 s2['userID'] = s1['userID'] + 5 s2['adID'] = series['adID'] + 4 r1=series.append(s1) r2=r1.append(s2) print(r2) </code></pre> <p>I got something wrong,now columns are exactly the same. Output</p> <pre><code> userID gender adID rating 0 11 m 107 50 1 11 m 108 100 2 11 m 109 0 3 12 f 107 50 4 12 f 108 100 5 13 m 109 62 6 13 m 114 28 7 13 m 108 36 8 12 f 109 74 9 12 f 114 100 10 14 m 108 62 11 14 m 109 28 12 15 f 116 50 13 15 f 117 100 0 11 m 107 50 1 11 m 108 100 2 11 m 109 0 </code></pre> <p>I didn't want my series column to be changed. Why did it happened? How to change this? Do I need to use iloc?</p>
<p>IIUC need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.copy.html" rel="nofollow noreferrer"><code>copy</code></a> if need new object <code>DataFrame</code>:</p> <pre><code>s1 = series.copy() s2 = series.copy() </code></pre> <p><strong>Sample</strong>:</p> <pre><code>print (df) userID gender adID rating 0 11 m 107 50 1 11 m 108 100 2 11 m 109 0 s1 = df.copy() s2 = df.copy() s1['userID'] = df['userID'] + 5 s1['adID'] = df['adID'] + 3 s2['userID'] = s1['userID'] + 5 s2['adID'] = df['adID'] + 4 r1=df.append(s1) r2=r1.append(s2) print(r2) userID gender adID rating 0 11 m 107 50 1 11 m 108 100 2 11 m 109 0 0 16 m 110 50 1 16 m 111 100 2 16 m 112 0 0 21 m 111 50 1 21 m 112 100 2 21 m 113 0 </code></pre>
python|pandas
2
617
49,252,387
encounter error during deeplab v3+ training on Cityscapes Semantic Segmentation Dataset
<p>all,</p> <p>I start the training process using deeplab v3+ following this <a href="https://github.com/tensorflow/models/blob/master/research/deeplab/g3doc/cityscapes.md" rel="nofollow noreferrer">guide</a>. However, after step 1480, I got the error: </p> <pre><code>Error reported to Coordinator: Nan in summary histogram for: image_pooling/BatchNorm/moving_variance_2 </code></pre> <p>The detailed train log is <a href="https://gist.githubusercontent.com/amiltonwong/ef0befefdd7dd618983e94252057cb8c/raw/96855c4962da6be3c8ee63a0acaa29bd62d2167f/detail_log" rel="nofollow noreferrer">here</a></p> <p>Could someone suggest how to solve this issue? THX!</p>
<p>Based on the log, it seems that you are training with batch_size = 1, fine_tune_batch_norm = True (default value). Since you are fine-tuning batch norm during training, it is better to set batch size as large as possible (see <a href="https://github.com/tensorflow/models/blob/master/research/deeplab/train.py#L93" rel="noreferrer">comments</a> in train.py and Q5 in <a href="https://github.com/tensorflow/models/blob/master/research/deeplab/g3doc/faq.md" rel="noreferrer">FAQ</a>). If only limited GPU memory is available, you could fine-tune from the provided pre-trained checkpoint, set <strong>smaller learning rate</strong> and <strong>fine_tune_batch_norm = False</strong> (see <a href="https://github.com/tensorflow/models/blob/master/research/deeplab/g3doc/model_zoo.md" rel="noreferrer">model_zoo.md</a> for details). Note make sure the flag tf_initial_checkpoint has correct path to the desired pre-trained checkpoint.</p>
tensorflow|semantic-segmentation
7
618
70,068,792
how to conditionally remove rows from a pandas expanding window
<p>I have a series in which I want to take the cumulative median of all non-zero values, resulting in a series the same length as the original.</p> <p><code>my_series.expanding().median()</code> gives me a series the same length as <code>my_series</code> which is close to what I want, but before I take the median of each window I want to drop rows that equal zero from the window, or slice out non-zero values, or something else... whatever performs best.</p> <pre><code>a = [0, 1, 2, 0, 100, 1000] my_series = pd.Series(a) my_series.expanding().median() # returns: 0 0.0 1 0.5 2 1.0 3 0.5 4 1.0 5 1.5 dtype: float64 # desired output: # the median is only computed on values in each window that are greater than zero 0 nan 1 1.0 2 1.5 3 1.5 4 2.0 5 51.0 dtype: float64 </code></pre>
<p>You can replace 0 values with nan while calculating, so they won't be used in the median calculations.</p> <pre><code> my_series.replace(0, np.nan).expanding().median() </code></pre> <p>Output:</p> <pre><code>0 NaN 1 1.0 2 1.5 3 1.5 4 2.0 5 51.0 dtype: float64 </code></pre>
python|pandas
3
619
70,037,505
"ValueError: Columns must be same length as key" when filtering dataframe with isin(list)
<p>I am trying to filter a column in my dataframe based on values from a list, here is the snippet of my code where it's going wrong (replaced values for simplicity's sake)</p> <pre><code>import pandas as pd from pandas import Series df['Campaign']=df['Location'] campaign_list = ['a', 'b'] df['Campaign']=df[df['Campaign'].isin(campaign_list)] </code></pre> <p>here is an example of what the dataframe looks like before the problem code</p> <pre><code>Location Billed Amount TransactionID Campaign a Na x a b Na y b c Na z c d Na xx d e Na xy e f Na xz f </code></pre> <p>here is what my desired df should look like</p> <pre><code>Location Billed Amount TransactionID Campaign a NaN x a b NaN y b c NaN z NaN d NaN xx NaN e NaN xy NaN f NaN xz NaN </code></pre> <p>Here is the error I am receiving, which is strange because I ran this exact code yesterday and didn't have any issue. Is there something obvious here I'm just not seeing?</p> <pre><code>~\anaconda3\lib\site-packages\pandas\core\frame.py in __setitem__(self, key, value) 3600 self._setitem_array(key, value) 3601 elif isinstance(value, DataFrame): -&gt; 3602 self._set_item_frame_value(key, value) 3603 elif ( 3604 is_list_like(value) ~\anaconda3\lib\site-packages\pandas\core\frame.py in _set_item_frame_value(self, key, value) 3727 len_cols = 1 if is_scalar(cols) else len(cols) 3728 if len_cols != len(value.columns): -&gt; 3729 raise ValueError(&quot;Columns must be same length as key&quot;) 3730 3731 # align right-hand-side columns if self.columns ValueError: Columns must be same length as key </code></pre>
<p>Use <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.where.html?highlight=where#pandas.Series.where" rel="nofollow noreferrer"><code>Series.where</code></a></p> <pre><code>df['Campaign'] = df['Campaign'].where(lambda camp: camp.isin(campaign_list)) </code></pre> <p>or</p> <pre><code>df['Campaign'] = df['Campaign'].where(df['Campaign'].isin(campaign_list)) </code></pre> <p>Output:</p> <pre><code>&gt;&gt;&gt; df Location Billed Amount TransactionID Campaign 0 0 Na x a 1 1 Na y b 2 2 Na z NaN 3 3 Na xx NaN 4 4 Na xy NaN 5 5 Na xz NaN </code></pre>
python|pandas|list|dataframe|isin
1
620
56,099,314
What is a replacement for tf.losses.absolute_difference
<p>My question is about in <code>TF2.0</code>. There is no <code>tf.losses.absolute_difference()</code> function and also there is no <code>tf.losses.Reduction.MEAN</code> attribute.</p> <p>What should I use instead? Is there a list of deleted <code>TF</code> functions in <code>TF2</code> and perhaps their replacement. </p> <p>This is <code>TF1.x</code> code which does not run with <code>TF2</code>: </p> <pre class="lang-py prettyprint-override"><code>result = tf.losses.absolute_difference(a,b,reduction=tf.losses.Reduction.MEAN) </code></pre>
<p>You still can access this function via <code>tf.compat.v1</code>:</p> <pre class="lang-py prettyprint-override"><code>import tensorflow as tf labels = tf.constant([[0, 1], [1, 0], [0, 1]]) predictions = tf.constant([[0, 1], [0, 1], [1, 0]]) res = tf.compat.v1.losses.absolute_difference(labels, predictions, reduction=tf.compat.v1.losses.Reduction.MEAN) print(res.numpy()) # 0.6666667 </code></pre> <p>Or you could implement it yourself:</p> <pre class="lang-py prettyprint-override"><code>import tensorflow as tf from tensorflow.python.keras.utils import losses_utils def absolute_difference(labels, predictions, weights=1.0, reduction='mean'): if reduction == 'mean': reduction_fn = tf.reduce_mean elif reduction == 'sum': reduction_fn = tf.reduce_sum else: # You could add more reductions pass labels = tf.cast(labels, tf.float32) predictions = tf.cast(predictions, tf.float32) losses = tf.abs(tf.subtract(predictions, labels)) weights = tf.cast(tf.convert_to_tensor(weights), tf.float32) res = losses_utils.compute_weighted_loss(losses, weights, reduction=tf.keras.losses.Reduction.NONE) return reduction_fn(res, axis=None) res = absolute_difference(labels, predictions) print(res.numpy()) # 0.6666667 </code></pre>
python|tensorflow2.0
3
621
56,096,399
Creating model throws "AttributeError: 'Tensor' object has no attribute '_keras_history'"
<p>When I created a model using keras.models.Model(), I have the following error:</p> <blockquote> <p>AttributeError: 'Tensor' object has no attribute '_keras_history'</p> </blockquote> <p>I created 3 MLP in my model, and intents is a tensor with shape(6040,100).</p> <p>The code and full traceback like this:</p> <pre class="lang-py prettyprint-override"><code>def get_model(num_users,num_items,layers_1=[10], layers_2=[10],layers_3=[10],reg_layers_1=[0], reg_layers_2=[0],reg_layers_3=[0]): assert len(layers_1) == len(reg_layers_1) assert len(layers_2) == len(reg_layers_2) assert len(layers_3) == len(reg_layers_3) num_layer_1 = len(layers_1) num_layer_2 = len(layers_2) num_layer_3 = len(layers_3) intents = scaled_dotproduct_attention(user_bought_sorted[0],100) for i in range(1,len(user_bought_sorted)): temp = scaled_dotproduct_attention(user_bought_sorted[i],100) intents = concatenate([intents,temp],0) #intents = tf.reshape(intents,[-1,100]) user_input = Input(shape=(1,),dtype='int32',name='user_input') item_input = Input(shape=(1,),dtype='int32',name='item_input') mlp_embedding_user = Embedding(input_dim=num_users,output_dim=100,name='mlp_embedding_user', embeddings_initializer=initializers.random_normal(), embeddings_regularizer=l2(reg_layers_1[0]),input_length=1) mlp_embedding_item = Embedding(input_dim=num_items,output_dim=100,name='mlp_embedding_items', embeddings_initializer=initializers.random_normal(), embeddings_regularizer=l2(reg_layers_1[0]),input_length=1) attention_embedding_item = Embedding(input_dim=num_items,output_dim=100,name='attention_embedding_items', embeddings_initializer=initializers.random_normal(), embeddings_regularizer=l2(reg_layers_2[0]),input_length=1) #attention_embedding_intent = Embedding(input_dim=num_users,output_dim=100,name='attention_embedding_intent', # embeddings_initializer=initializers.random_normal(), # embeddings_regularizer=l2(reg_layers_2[0]),input_length=1) #MLP_1 mlp_user_latent = Flatten()(mlp_embedding_user(user_input)) mlp_item_latent = Flatten()(mlp_embedding_item(item_input)) mlp_vector = concatenate([mlp_user_latent,mlp_item_latent]) for idx in range(1,num_layer_1): layer_1 = Dense(layers_1[idx],kernel_regularizer=l2(reg_layers_1[idx]), activation='relu',name='layer_1%d' % idx) mlp_vector = layer_1(mlp_vector) #MLP_attention attention_item_latent = Flatten()(attention_embedding_item(item_input)) #attention_intent_latent = Reshape((100,))(intents) attention_vector = K.dot(attention_item_latent,K.transpose(intents)) for adx in range(1,num_layer_2): layer_2 = Dense(layers_2[adx],kernel_regularizer=l2(reg_layers_2[adx]), activation='relu',name='layer_2%d' % adx) attention_vector = layer_2(attention_vector) #MLP_intents intents_vector = concatenate([mlp_vector,attention_vector]) for ndx in range(num_layer_3): layer_3 = Dense(layers_3[ndx],kernel_regularizer=l2(reg_layers_3[ndx]), activation='relu',name='layer_3%d' % ndx) intents_vector = layer_3(intents_vector) prediction = Dense(1,activation='sigmoid',kernel_initializer=initializers.lecun_normal(), name='prediction')(intents_vector) model = Model(inputs=[user_input, item_input], outputs=prediction) return model model = get_model(num_users,num_items,layers_1=[64,32,16,8], layers_2=[64,32],layers_3=[64,32,16], reg_layers_1=[0,0,0,0],reg_layers_2=[0,0], reg_layers_3=[0,0,0]) </code></pre> <p>and the full traceback:</p> <pre><code>AttributeError Traceback (most recent call last) &lt;ipython-input-121-25ef0dd05d42&gt; in &lt;module&gt; 2 layers_2=[64,32],layers_3=[64,32,16], 3 
reg_layers_1=[0,0,0,0],reg_layers_2=[0,0], ----&gt; 4 reg_layers_3=[0,0,0]) &lt;ipython-input-120-d2a66d53e76f&gt; in get_model(num_users, num_items, layers_1, layers_2, layers_3, reg_layers_1, reg_layers_2, reg_layers_3) 62 name='prediction')(intents_vector) 63 ---&gt; 64 model = Model(inputs=[user_input, item_input], outputs=prediction) 65 66 return model ~/Desktop/100_server_venv/lib/python3.6/site-packages/keras/legacy/interfaces.py in wrapper(*args, **kwargs) 85 warnings.warn('Update your `' + object_name + 86 '` call to the Keras 2 API: ' + signature, stacklevel=2) ---&gt; 87 return func(*args, **kwargs) 88 wrapper._original_function = func 89 return wrapper ~/Desktop/100_server_venv/lib/python3.6/site-packages/keras/engine/topology.py in __init__(self, inputs, outputs, name) 1714 nodes_in_progress = set() 1715 for x in self.outputs: -&gt; 1716 build_map_of_graph(x, finished_nodes, nodes_in_progress) 1717 1718 for node in reversed(nodes_in_decreasing_depth): ~/Desktop/100_server_venv/lib/python3.6/site-packages/keras/engine/topology.py in build_map_of_graph(tensor, finished_nodes, nodes_in_progress, layer, node_index, tensor_index) 1704 tensor_index = node.tensor_indices[i] 1705 build_map_of_graph(x, finished_nodes, nodes_in_progress, -&gt; 1706 layer, node_index, tensor_index) 1707 1708 finished_nodes.add(node) ~/Desktop/100_server_venv/lib/python3.6/site-packages/keras/engine/topology.py in build_map_of_graph(tensor, finished_nodes, nodes_in_progress, layer, node_index, tensor_index) 1704 tensor_index = node.tensor_indices[i] 1705 build_map_of_graph(x, finished_nodes, nodes_in_progress, -&gt; 1706 layer, node_index, tensor_index) 1707 1708 finished_nodes.add(node) ~/Desktop/100_server_venv/lib/python3.6/site-packages/keras/engine/topology.py in build_map_of_graph(tensor, finished_nodes, nodes_in_progress, layer, node_index, tensor_index) 1704 tensor_index = node.tensor_indices[i] 1705 build_map_of_graph(x, finished_nodes, nodes_in_progress, -&gt; 1706 layer, node_index, tensor_index) 1707 1708 finished_nodes.add(node) ~/Desktop/100_server_venv/lib/python3.6/site-packages/keras/engine/topology.py in build_map_of_graph(tensor, finished_nodes, nodes_in_progress, layer, node_index, tensor_index) 1704 tensor_index = node.tensor_indices[i] 1705 build_map_of_graph(x, finished_nodes, nodes_in_progress, -&gt; 1706 layer, node_index, tensor_index) 1707 1708 finished_nodes.add(node) ~/Desktop/100_server_venv/lib/python3.6/site-packages/keras/engine/topology.py in build_map_of_graph(tensor, finished_nodes, nodes_in_progress, layer, node_index, tensor_index) 1704 tensor_index = node.tensor_indices[i] 1705 build_map_of_graph(x, finished_nodes, nodes_in_progress, -&gt; 1706 layer, node_index, tensor_index) 1707 1708 finished_nodes.add(node) ~/Desktop/100_server_venv/lib/python3.6/site-packages/keras/engine/topology.py in build_map_of_graph(tensor, finished_nodes, nodes_in_progress, layer, node_index, tensor_index) 1704 tensor_index = node.tensor_indices[i] 1705 build_map_of_graph(x, finished_nodes, nodes_in_progress, -&gt; 1706 layer, node_index, tensor_index) 1707 1708 finished_nodes.add(node) ~/Desktop/100_server_venv/lib/python3.6/site-packages/keras/engine/topology.py in build_map_of_graph(tensor, finished_nodes, nodes_in_progress, layer, node_index, tensor_index) 1675 """ 1676 if not layer or node_index is None or tensor_index is None: -&gt; 1677 layer, node_index, tensor_index = tensor._keras_history 1678 node = layer.inbound_nodes[node_index] 1679 AttributeError: 'Tensor' object has no 
attribute '_keras_history' </code></pre>
<p>You cannot use backend functions directly in Keras tensors, every operation in these tensors must be a layer. You need to wrap each custom operation in a Lambda layer and provide the appropriate inputs to the layer.</p>
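<p>For the <code>K.dot</code> call in the question, a minimal sketch of that wrapping (variable names are taken from the question; the same pattern applies to any other raw backend operation in the model):</p> <pre><code>from keras.layers import Lambda
import keras.backend as K

# the Lambda layer's output carries the _keras_history metadata the Model constructor needs
attention_vector = Lambda(
    lambda t: K.dot(t, K.transpose(intents))
)(attention_item_latent)
</code></pre>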
python|tensorflow|keras
0
622
56,371,432
Transpose all rows in one column of dataframe to multiple columns based on certain conditions
<p>I would like to convert one column of data to multiple columns in dataframe based on certain values/conditions.</p> <p>Please find the code to generate the input dataframe</p> <pre><code>df1 = pd.DataFrame({'VARIABLE':['studyid',1,'age_interview', 65,'Gender','1.Male', '2.Female', 'Ethnicity','1.Chinese','2.Indian','3.Malay']}) </code></pre> <p>The data looks like as shown below </p> <p><a href="https://i.stack.imgur.com/v5ht4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/v5ht4.png" alt="enter image description here"></a></p> <p>Please note that I may not know the column names in advance. But it usually follows this format. What I have shown above is a sample data and real data might have around 600-700 columns and data arranged in this fashion</p> <p>What I would like to do is convert values which start with non-digits(characters) as new columns in dataframe. It can be a new dataframe.</p> <p>I attempted to write a for loop but failed to due to the below error. Can you please help me achieve this outcome.</p> <pre><code>for i in range(3,len(df1)): #str(df1['VARIABLE'][i].contains('^\d')) if (df1['VARIABLE'][i].astype(str).contains('^\d') == True): </code></pre> <p>Through the above loop, I was trying to check whether first char is a digit, if yes, then retain it as a value (ex: 1,2,3 etc) and if it's a character (ex:gender, ethnicity etc), then create a new column. But guess this is an incorrect and lengthy approach</p> <p>For example, in the above example, the columns would be studyid,age_interview,Gender,Ethnicity.</p> <p>The final output would look like this</p> <p><a href="https://i.stack.imgur.com/9lQsq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9lQsq.png" alt="enter image description here"></a></p> <p>Can you please let me know if there is an elegant approach to do this? </p>
<p>Use <code>itertools.groupby</code> and then construct <code>pd.DataFrame</code>:</p> <pre><code>import pandas as pd import itertools l = ['studyid',1,'age_interview', 65,'Gender','1.Male', '2.Female', 'Ethnicity','1.Chinese','2.Indian','3.Malay'] l = list(map(str, l)) grouped = [list(g) for k, g in itertools.groupby(l, key=lambda x:x[0].isnumeric())] d = {k[0]: v for k,v in zip(grouped[::2],grouped[1::2])} pd.DataFrame.from_dict(d, orient='index').T </code></pre> <p>Output:</p> <pre><code> Gender studyid age_interview Ethnicity 0 1.Male 1 65 1.Chinese 1 2.Female None None 2.Indian 2 None None None 3.Malay </code></pre>
python|python-3.x|pandas|dataframe|transpose
1
623
55,977,204
Creating a new column based on three existing columns
<p>I have a data frame with three columns, <code>target_degrees</code>, <code>low_degrees</code>, and <code>high_degrees</code>. I would like to make a new column labeled success that checks to see if target_degrees is located between <code>low_degrees</code> and <code>high_degrees</code>.</p> <p>example dataframe:</p> <pre><code>target_degrees low_degrees high_degrees success 10 0 50 1 50 45 100 1 20 100 200 0 1 300 350 0 </code></pre> <p>I have tried using <code>np.where</code> in the following code but I am getting a syntax error.</p> <pre><code>df['success'] = np.where(df['target_degrees'] is in np.arange(df['low_degrees'], df['high_degrees']), 1, 0) </code></pre>
<p>Use multiple conditions:</p> <pre><code>df['success'] = np.where(((df['target_degrees'] &gt;= df['low_degrees']) &amp; (df['target_degrees']&lt;= df['high_degrees'])), 1, 0) </code></pre> <p><strong>output</strong>:</p> <pre><code> target_degrees low_degrees high_degrees success 0 10 0 50 1 1 50 45 100 1 2 20 100 200 0 3 1 300 350 0 </code></pre>
python|python-3.x|pandas|numpy|dataframe
1
624
55,675,170
pandas equivalent for excels 'file origin'
<p>I have csv files. When opened in Excel or pandas, the foreign letters turn into gibberish.</p> <p>In Excel, I go to</p> <p>Data --&gt; From Text --&gt; Specify File --&gt; Step 1 and change 'File Origin', and that solves the problem.</p> <p>How do I do this while importing the file into a dataframe?</p> <p><a href="https://i.stack.imgur.com/vOspn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vOspn.png" alt="enter image description here"></a></p>
<p>You could simply use the <code>encoding</code> parameter while reading the csv file, as follows:</p> <pre><code>df = pd.read_csv('filename.csv', encoding='SHIFT-JIS') </code></pre>
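<p>If the actual file origin is unknown (an assumption, since the question does not say which language the text is in), a small sketch that tries a few common encodings in turn:</p> <pre><code>import pandas as pd

# note: some encodings decode anything without raising, so always inspect the result
for enc in ('utf-8-sig', 'cp1252', 'shift_jis'):
    try:
        df = pd.read_csv('filename.csv', encoding=enc)
        break
    except UnicodeDecodeError:
        continue
</code></pre>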
excel|pandas
1
625
64,775,042
Group object based on attributes and combine rest of the columns in a list gives me unhashable type: 'list'
<p>I am having this object :</p> <pre><code>obj = [ {&quot;mode&quot;:1,&quot;items&quot;:[{&quot;id&quot;:1}],&quot;people&quot;:[{&quot;id&quot;:8888}],&quot;value&quot;:{&quot;v&quot;:1000}}, {&quot;mode&quot;:1,&quot;items&quot;:[{&quot;id&quot;:1}],&quot;people&quot;:[{&quot;id&quot;:8888}],&quot;value&quot;:{&quot;v&quot;:2000}}, {&quot;mode&quot;:1,&quot;items&quot;:[{&quot;id&quot;:1}],&quot;people&quot;:[{&quot;id&quot;:9999}],&quot;value&quot;:{&quot;v&quot;:1000}}, {&quot;mode&quot;:1,&quot;items&quot;:[{&quot;id&quot;:1}],&quot;people&quot;:[{&quot;id&quot;:9999}],&quot;value&quot;:{&quot;v&quot;:2000}} ] </code></pre> <p>and i want to group by <code>mode</code> and <code>items</code> and <code>value</code> and combine <code>people</code> values into a list.</p> <p>So the result I want to get is :</p> <pre><code>resObj = [ {&quot;mode&quot;:1,&quot;items&quot;:[{&quot;id&quot;:1}],&quot;people&quot;:[{&quot;id&quot;:8888},{&quot;id&quot;:9999}],&quot;value&quot;:{&quot;v&quot;:1000}} {&quot;mode&quot;:1,&quot;items&quot;:[{&quot;id&quot;:1}],&quot;people&quot;:[{&quot;id&quot;:8888},{&quot;id&quot;:9999}],&quot;value&quot;:{&quot;v&quot;:2000}} ] </code></pre> <p>When i do :</p> <pre><code>&gt;&gt;&gt; obj = [{&quot;mode&quot;:1,&quot;items&quot;:[{&quot;id&quot;:1}],&quot;people&quot;:[{&quot;id&quot;:8888}],&quot;value&quot;:{&quot;v&quot;:1000}},{&quot;mode&quot;:1,&quot;items&quot;:[{&quot;id&quot;:1}],&quot;people&quot;:[{&quot;id&quot;:8888}],&quot;value&quot;:{&quot;v&quot;:2000}},{&quot;mode&quot;:1,&quot;items&quot;:[{&quot;id&quot;:1}],&quot;people&quot;:[{&quot;id&quot;:9999}],&quot;value&quot;:{&quot;v&quot;:1000}},{&quot;mode&quot;:1,&quot;items&quot;:[{&quot;id&quot;:1}],&quot;people&quot;:[{&quot;id&quot;:9999}],&quot;value&quot;:{&quot;v&quot;:2000}}] &gt;&gt;&gt; import pandas as pd &gt;&gt;&gt; df = pd.DataFrame(obj) &gt;&gt;&gt; df.groupby(['items','mode','value'])['people'].apply(list) </code></pre> <p>i get <code>unhashable type: 'list'</code></p> <p>This is expected as <code>people</code> is a list, but how can I achieve what I want? Another problem is that &quot;items&quot; is also a list and I've been reading <code>groupby</code> doesn't work on unhashable types.</p> <p>is there a way to achieve the transformation I need?</p> <p>EDIT: I've also tried :</p> <pre><code>&gt;&gt;&gt; df['items']=df['items'].apply(lambda x: tuple(x)) &gt;&gt;&gt; df['people']=df['people'].apply(lambda x: tuple(x)) &gt;&gt;&gt; df.groupby(['items','mode','value'])['people'].apply(list) </code></pre> <p>but now I get unhashable type <code>dict</code>.</p>
<p>You can't group by columns that contain lists or dicts as they are not hashable. So in fact the <code>people</code> column is not the problem, but the columns <code>item</code> and <code>value</code> are. The easiest solution would be to convert them to strings so they can be used for grouping.</p> <p>This sample shows how this can be achieved:</p> <pre><code>df['_items'] = df['items'].apply(lambda item: &quot;,&quot;.join([str(x['id']) for x in item])) df['_value'] = df['value'].apply(lambda value: value['v']) print(df.groupby(['_items','mode','_value'])['people'].sum()) </code></pre> <p>Output:</p> <pre><code>_items mode _value 1 1 1000 [{'id': 8888}, {'id': 9999}] 2000 [{'id': 8888}, {'id': 9999}] Name: people, dtype: object </code></pre>
python|pandas
2
626
64,727,450
how do I reassign new values to python dataframe?
<p>i'm trying to convert prices like &quot; €565K &quot; to integers and I know that pandas dataframes require reassigning, so I wrote the code like this :</p> <pre><code>def to_int(cols): for column in cols: column = column.astype(str) for i in range(len(column)): if column[i][-1] == 'M': column[i] = float(column[i][1:-1])*1000000 elif column[i][-1] == 'K': column[i] = float(column[i][1:-1])*1000 </code></pre> <p>but it doesn't reassign the values.</p>
<p>You can try this using <code>apply</code></p> <pre><code>def to_int(x): if x[-1] == 'M': return float(x[1:-1])*1000000 elif x[-1] == 'K': return float(x[1:-1])*1000 df['col_with_price'] = df['col_with_price'].apply(to_int) print(df) </code></pre>
python|pandas
0
627
64,933,175
Maximum of calculated pandas column and 0
<p>I have a very simple problem (I guess) but don't find the right syntax to do it :</p> <p>The following Dataframe :</p> <pre><code> A B C 0 7 12 2 1 5 4 4 2 4 8 2 3 9 2 3 </code></pre> <p>I need to create a new column D equal for each row to max (0 ; A-B+C)</p> <p>I tried a np.maximum(df.A-df.B+df.C,0) but it doesn't match and give me the maximum value of the calculated column for each row (= 10 in the example).</p> <p>Finally, I would like to obtain the DF below :</p> <pre><code> A B C D 0 7 12 2 0 1 5 4 4 5 2 4 8 2 0 3 9 2 3 10 </code></pre> <p>Any help appreciated Thanks</p>
<p>Let us try</p> <pre><code>df['D'] = df.eval('A-B+C').clip(lower=0) Out[256]: 0 0 1 5 2 0 3 10 dtype: int64 </code></pre>
pandas|dataframe|max
1
628
64,997,947
What is the difference between these backward training methods in Pytorch?
<p>I am a 3-month DL freshman who is doing small NLP projects with Pytorch.<br> Recently I am trying to reappear a GAN network introduced by a paper, using my own text data, to generate some specific kinds of question sentences. <br></p> <blockquote> <p>Here is some background... If you have no time or interest about it, just kindly read the following question is OK.<br> As that paper says, the generator is firstly trained normally with normal question data to make that the output at least looks like a real question. Then by using an auxiliary classifier's result (of classifying the outputs), the generator is trained again to just generate the specific (several unique categories) questions.<br> However, as the paper do not reveal its code, I have to do the code all myself. I have these three training thoughts, but I do not know their differences, could you kindly tell me about it? <br> If they have almost the same effect, could you tell me which is more recommended in Pytorch's grammar? Thank you very much!</p> </blockquote> <p>Suppose the discriminator loss to generator is loss_G_D, the classifier loss to generator is loss_G_C, and loss_G_D and loss_G_C has the same shape, i.e. [batch_size, loss value], then what is the difference?<br> 1.</p> <pre><code>optimizer.zero_grad() loss_G_D = loss_func1(discriminator(generated_data)) loss_G_C = loss_func2(classifier(generated_data)) loss = loss_G+loss_C loss.backward() optimizer.step() </code></pre> <ol start="2"> <li></li> </ol> <pre><code>optimizer.zero_grad() loss_G_D = loss_func1(discriminator(generated_data)) loss_G_D.backward() loss_G_C = loss_func2(classifier(generated_data)) loss_G_C.backward() optimizer.step() </code></pre> <ol start="3"> <li></li> </ol> <pre><code>optimizer.zero_grad() loss_G_D = loss_func1(discriminator(generated_data)) loss_G_D.backward() optimizer.step() optimizer.zero_grad() loss_G_C = loss_func2(classifier(generated_data)) loss_G_C.backward() optimizer.step() </code></pre> <p>Additional info: I observed that the classifier's classification loss is always very big compared with generator's loss, like -300 vs 3. So maybe the third one is better?</p>
<p>First of all:</p> <p><code>loss.backward()</code> backpropagates the error and assigns a gradient for every parameter along the way that has <code>requires_grad=True</code>.</p> <p><code>optimizer.step()</code> updates the model parameters using their stored gradients</p> <p><code>optimizer.zero_grad()</code> sets the gradients to 0, so that you can backpropagate your loss and update your model parameters for each batch without interfering with other batches.</p> <p><code>1</code> and <code>2</code> are quite similar, but if your model uses batch statistics or you have an adaptive optimizer they will probably perform differently. However, for instance, if your model doesn't use batch statistics and you have a plain old SGD optimizer, they will produce the same result, even though <code>1</code> would be faster since you do the backprop only once.</p> <p><code>3</code> is a completely different case, since you update your model parameters with <code>loss_G_D.backward()</code> and <code>optimizer.step()</code> before processing and backpropagating <code>loss_G_C</code>.</p> <p>Given all of these, it's up to you which one to choose depending on your application.</p>
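<p>Restating option 1 from the question with the role of each call annotated (a sketch only; it assumes a single optimizer over the generator's parameters and that both loss terms are already reduced to scalars):</p> <pre><code>optimizer.zero_grad()        # clear gradients left over from the previous update
loss = loss_G_D + loss_G_C   # combine the two loss terms into one value
loss.backward()              # a single backward pass accumulates gradients from both terms
optimizer.step()             # one parameter update using the summed gradients
</code></pre>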
machine-learning|deep-learning|pytorch
0
629
40,014,645
Is there a theano operation equivalent to numpy "broadcast_to" method?
<p>Since I need to repeat over a specific axis, I want to avoid unnecessary memory reallocation as much as possible.</p> <p>For example, given a numpy array <code>A</code> of shape (3, 4, 5), I want to create a view named <code>B</code> of shape (3, 4, 100, 5) on the original <code>A</code>. The 3rd axis of <code>A</code> is repeated 100 times. </p> <p>In numpy, this can be achieved like this:</p> <pre><code> B=numpy.repeat(A.reshape((3, 4, 1, 5)), repeats=100, axis=2) </code></pre> <p>or:</p> <pre><code> B=numpy.broadcast_to(A.reshape((3, 4, 1, 5)), repeats=100, axis=2) </code></pre> <p>The former allocates a new memory and then do some copy stuff, while the latter just create a view over <code>A</code> without extra memory reallocation. This can be identified by the method described in the answer <a href="https://stackoverflow.com/questions/34637875/size-of-numpy-strided-array-broadcast-array-in-memory">Size of numpy strided array/broadcast array in memory?</a> .</p> <p>In theano, however, the <code>theano.tensor.repeat</code> seems to be the only way, of course it's not preferable.</p> <p>I wonder if there is a `numpy.broadcast_to' like theano method can do this in an efficient way?</p>
<p>There is a nice method dimshuffle, which makes a theano variable broadcastable over some dimension</p> <pre><code>At = theano.tensor.tensor3() Bt = At.dimshuffle(0,1,'x',2) </code></pre> <p>Now you've got a tensor variable with shape (3,4,'x',5), where 'x' means any dimension you want to add.</p> <pre><code>Ct=theano.tensor.zeros((Bt.shape[0],Bt.shape[1],100,Bt.shape[3]))+Bt </code></pre> <p>Example</p> <pre><code>f=theano.function([At],[Bt,Ct]) A = np.random.random((3,4,5)).astype(np.float32) B,C=f(A) print B.shape print C.shape </code></pre> <p>(3, 4, 1, 5)</p> <p>(3, 4, 100, 5)</p> <p>Unless specified, it's better to work with variable Bt.</p>
numpy|theano|broadcast
0
630
39,973,243
Randomly select list from list of lists in python depending on weights
<p>I have a list of lists, where each list is associated with a score/weight. I want to produce a new list of lists by randomly selecting from the first one so that those with higher scores will appear more often. The line below works fine when <code>population</code> is just a normal list. But I want to have it for a list of lists.</p> <pre><code>population = [['a','b'],['b','a'],['c','b']] list_of_prob = [0.2, 0.2, 0.6] population = np.random.choice(population, 10, replace=True, p=list_of_prob) </code></pre> <p>This will give the output <code>ValueError: a must be 1-dimensional</code></p>
<p>Instead of passing the actual list, pass a list with indexes into the list.</p> <p><code>np.random.choice</code> already allows this, if you pass an int <code>n</code> then it works as if you passed <code>np.arange(n)</code>.</p> <p>So</p> <pre><code>choice_indices = np.random.choice(len(population), 10, replace=True, p=list_of_prob) choices = [population[i] for i in choice_indices] </code></pre>
python|list|numpy|random
9
631
44,266,139
Pandas dataframe left-merge with different dataframe sizes
<p>I have a toy stock predictor, and from time to time save results using dataframes. After the first result set I would like to append my first dataframe. Here is what I do: </p> <ol> <li>Create first dataframe using predicted results</li> <li>Sort descending to predicted performance</li> <li><p>Save to csv, without the index</p></li> <li><p>With new data, read out result csv and try left merge, goal is to append new predicted performance to the correct stock ticker</p></li> </ol> <p><code>df=pd.merge(df, df_new[['ticker', 'avgrd_app']], on='ticker', how='left')</code></p> <p>Those two dataframes have different amounts of columns. In the end it only appends the dataframes to another: </p> <pre><code>avgrd,avgrd_app,prediction1,prediction2,ticker -0.533520756811,,110.64654541,110.37853241,KIO -0.533520756811,,110.64654541,110.37853241,MMM -0.604610694122,,110.64654541,110.37853241,SRI [...] ,-0.212600450514,,,G5DN ,0.96378750992,,,G5N ,2.92757501984,,,DAL3 ,2.27297945023,,,WHF4 </code></pre> <p>So - how can I merge correctly?</p>
<p>From the sample result, the merge works as expected: the new data simply don't have numbers for all the tickers, so some of the predictions are missing. So what exactly do you want to achieve? If you only need stocks that have all the predictions, use an inner join instead.</p>
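<p>For illustration, a minimal sketch of the inner-join variant, assuming the same column names as in the question (with <code>how='left'</code> the old rows are kept and missing predictions show up as NaN, which matches the output shown):</p> <pre><code>import pandas as pd

# keep only tickers that are present in both result sets
df = pd.merge(df, df_new[['ticker', 'avgrd_app']], on='ticker', how='inner')
</code></pre>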
python|pandas|dataframe
0
632
44,325,179
Tensorflow/LSTM machanism: How to specify the previous output of first time step of LSTM cells
<p>Just started using TensorFlow to build LSTM networks for multiclass classification</p> <p>Given the structure shown below: <a href="https://i.stack.imgur.com/zChh2.png" rel="nofollow noreferrer">A RNN model</a> Let's Assume each node A represents TensorFlow BasicLSTMcell.</p> <p>According to some popular examples found online, the input for training is prepared as [batch_size, timeStep_size, feature_size]</p> <p>let's Assume timeStep_size = 5, feature_size = 2, num_class = 4, given one training set : (dummy data)</p> <pre><code>t = t0 t1 t2 t3 t4 x = [ [1] [2] [2] [5] [2] ] [ [2] [3] [3] [1] [2] ] y = [ [0] [1] [1] [0] [0] ] [ [1] [0] [0] [0] [0] ] [ [0] [0] [0] [0] [1] ] [ [0] [0] [0] [1] [0] ] </code></pre> <p>According to the popular usage:</p> <pre><code>... # 1-layer LSTM with n_hidden units. rnn_cell = rnn.BasicLSTMCell(n_hidden) # generate prediction outputs, states = rnn.static_rnn(rnn_cell, x, dtype=tf.float32) return tf.matmul(outputs[-1], weights['out']) + biases['out'] </code></pre> <p>It seems to me that the training of LSTM cell doesn't make use of all the five outputs of y (y at t0 - t3). Only y at time t4 is used for calculating the loss when compared to output[-1]. </p> <p>Question 1: is it the case that LSTM calculates/approximates y_t0 by itself, and feed into t1 to calculate y_t1, and so on... until it y_t4 is calculated?</p> <p>If this is the case, </p> <p>Question 2: what if y at t-1 is very important?</p> <p>Example: </p> <pre><code>t = t-1 t0 t1 t2 t3 t4 x = [ [1] [2] [2] [2] [2] [2]] [ [1] [2] [2] [2] [2] [2]] y = [ [0] [1] [1] [1] [1] [1]] [ [1] [0] [0] [0] [0] [0]] [ [0] [0] [0] [0] [0] [0]] [ [0] [0] [0] [0] [0] [0]] </code></pre> <p>VS:</p> <pre><code>t = t-1 t0 t1 t2 t3 t4 x = [ [3] [2] [2] [2] [2] [2]] [ [3] [2] [2] [2] [2] [2]] y = [ [0] [0] [0] [0] [0] [0]] [ [0] [0] [0] [0] [0] [0]] [ [1] [0] [0] [0] [0] [0]] [ [0] [1] [1] [1] [1] [1]] </code></pre> <p>Which means that even though the input features from t0 to t4 are same, the output y are different since the previous outputs (y_t-1) are different.</p> <p>Then how to deal with this kind of situation? how does TensorFlow set the output for t-1, when calculating the output at t0?</p> <p>I've thought about increasing the timeStep_Size, but the real case might be very large, so I'm a bit confused...</p> <p>Any pointers are highly appreciated!</p> <p>Thank You in advance.</p> <p>================= UPDATE ===============================</p> <p>Re: jdehesa, Thanks Again. </p> <p>Some additional background: my intention is to classify a long series of x, like below:</p> <pre><code>t = t0 t1 t2 t3 t4 t5 t6 t7 ... x = [ [3] [2] [2] [2] [2] [2] [1] [2] [2] [2] [2] [2] ...] [ [3] [2] [2] [2] [2] [2] [1] [2] [2] [2] [2] [2] ...] y = [ c3 c2 c2 c2 c2 c2 c1 c4 c4 c4 c4 c4 ...] Note: c1: class 1, c2: class2 c3: class 3, c4: class 4 </code></pre> <p>The main confusion behind this post is that there are some known rules for manual classification. Take the dummy data above for example, assume there are rules that</p> <ol> <li><p>if previous feature x is class 3 ([3, 3]), then all following [2, 2] will be class 2 until it reaches class 1.</p></li> <li><p>if previous x is class 1 ([1, 1]), then all following [2, 2] will be class 4 until it reaches class 3.</p></li> </ol> <p>In such case, if the LSTM only sees [5 by 2] feature vector (x) same as t1 to t4, the network will completely lost in wheather classify as class 2 or class 4. 
So what I mean is that not only do the features of the 5 time steps matter, so does the output/label of the previous time step.</p> <p>To restate the question: if the training set is now t1 to t5, then in addition to x [batch_size, t1:t5, 2], how do I involve the label/class y at t0 as well? </p> <p>Below is my response to your answer. </p> <p>Consider that I use a GRU instead of an LSTM, where the cell output and cell state are both represented by "h" as in <a href="https://i.stack.imgur.com/zChh2.png" rel="nofollow noreferrer">understanding LSTM</a>.</p> <ol> <li><p>About the initial_state parameter: I just found that dynamic_rnn and static_rnn take this parameter, as you pointed out :D. If I were to solve the problem mentioned just now, could I assign the previous class/label (y at t0) to the initial_state parameter before training, instead of using zero_state?</p></li> <li><p>I suddenly feel like I'm totally lost about the time span of LSTM memory. I have been thinking that the time span of memory is limited by timeStep_size only: if timeStep_size = 5, the network can only recall up to 4 steps back, since in every training step we only feed a [5 x 2] x feature vector. Please correct me if I'm wrong.</p></li> </ol> <p>Again, thank you so much.</p>
<p>LSTM cells, or RNN cells in general, have an internal state that gets updated after each time step is processed. Obviously, you cannot go infinitely back in time, so you gotta start at some point. The general convention is to begin with a cell state full of zeros; in fact, RNN cells in TensorFlow have a <code>zero_state</code> method that return this kind of state for each particular cell type and size. If you are not happy with that starting point (for example, because you have processed half a sequence and now you want to process the other half, picking up at the same state you were), you can pass an <code>initial_state</code> parameter to <a href="https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn" rel="nofollow noreferrer"><code>tf.nn.dynamic_rnn</code></a>.</p> <p>About the training, I'm not sure what is the most popular usage of LSTM cells, but that's entirely up to you. I work on a problem where I have a label per time sample, and so my output is the same size as the input. However, in many cases you just want a label for the whole sequence (e.g. "this sentence is positive/negative"), so you just look at the last output. All the previous inputs are of course important too anyway, because they define the last cell state that is used in combination with the last input to determine the final output. For example, if you take a sentence like "That's cool, man" and process it word by word, the last word "man" will probably not tell you much about whether it is a positive or negative sentence by itself, but at that point the cell is in a state where it is <em>pretty sure</em> it is a positive sentence (that is, it would take a clearly negative input afterwards to make it produce a "negative" output).</p> <p>I'm not sure what you mean about the t-1 thing... I mean if your input starts at t0 and you never saw t-1, there is nothing you can do about that (e.g. if you only got the input "really like this food" but turns out the whole original sentence was "not really like this food", you will just get it completely wrong). However, if you do have the input the network will learn to take it into account if it really is important. The whole point of LSTM cells is that they are able to <em>remember</em> things very far in the past (i.e. the effect of an input in the internal state can reach a very long time span).</p> <p><em>Update:</em></p> <p>About your additional comments.</p> <ul> <li><p>You can use whatever you want as input state, of course. However, even with GRU the internal state does not usually match the output label. Typically, you would use a sigmoid or softmax activation <em>after</em> the recurrent unit, which would then produce an output be comparable to the labels.</p></li> <li><p>About time spans. It is correct that using inputs with a small time step will constraint the ability of the recurrent units to learn long-term dependencies (to find patterns in longer sequences). As I mention, you can "emulate" a longer time step if you feed the last state of the recurrent units as initial state in the next run. But, whether you do that or not, it is not exact that the LSTM unit will just "not remember" things further back in the past. Even if you train with a time step of 5, if you then run the network with a sequence of size 100, the output for the last input will be (potentially) affected by all the 99 previous inputs; you simply will not be able to tell how much they affect, because that is a case that you did not have during training.</p></li> </ul>
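<p>As a rough TF 1.x sketch of the <code>initial_state</code> mechanism described above (the sizes and the GRU cell are assumptions for illustration, not taken from the question's code):</p> <pre><code>import tensorflow as tf

batch_size, time_steps, feature_size, n_hidden = 32, 5, 2, 64

x = tf.placeholder(tf.float32, [batch_size, time_steps, feature_size])
cell = tf.nn.rnn_cell.GRUCell(n_hidden)

# default starting point: an all-zero state
init_state = cell.zero_state(batch_size, tf.float32)

# outputs: [batch, time, n_hidden]; final_state can be fed back in as the
# initial_state of the next run to continue a longer sequence
outputs, final_state = tf.nn.dynamic_rnn(cell, x,
                                         initial_state=init_state,
                                         dtype=tf.float32)
</code></pre>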
tensorflow|deep-learning|lstm|recurrent-neural-network|gated-recurrent-unit
1
633
69,660,258
Split Dataframe dates into individual min max date ranges by group
<p>I have a dataframe which looks something like this:</p> <pre><code>S.No date origin dest journeytype 1 2021-10-21 FKG HYM OP 2 2021-10-21 FKG HYM PK 3 2021-10-21 HYM LDS OP 4 2021-10-22 FKG HYM OP 5 2021-10-22 FKG HYM PK 6 2021-10-22 HYM LDS OP 7 2021-10-23 FKG HYM OP 8 2021-10-24 AVM BLA OP 9 2021-10-24 AVM DBL OP 10 2021-10-27 AVM BLA OP </code></pre> <p>I need to split each individual origin, destination &amp; journeytype combination into its own start_date &amp; end_date columns.</p> <p>The output dataframe for the above input should look like:</p> <pre><code>start_date end_date origin dest journeytype 2021-10-21 2021-10-23 FKG HYM OP 2021-10-21 2021-10-22 FKG HYM PK 2021-10-21 2021-10-22 HYM LDS OP 2021-10-24 2021-10-24 AVM BLA OP 2021-10-24 2021-10-24 AVM DBL OP 2021-10-27 2021-10-27 AVM BLA OP </code></pre> <p>Also, if the dates for any group are non-continuous, they need to be shown as separate records in the result.</p>
<p>If consecutive dates should be grouped together, compare the day differences within each group, flag gaps greater than <code>1</code>, and take the cumulative sum to form a helper grouping key:</p> <pre><code>df['date'] = pd.to_datetime(df['date']) g = df.groupby(['origin','dest','journeytype'])['date'].diff().dt.days.gt(1).cumsum() df = (df.groupby(['origin','dest','journeytype', g], sort=False)['date'] .agg(start_date='min', end_date='max') .reset_index()) df = df[['start_date', 'end_date','origin', 'dest', 'journeytype']] print (df) start_date end_date origin dest journeytype 0 2021-10-21 2021-10-23 FKG HYM OP 1 2021-10-21 2021-10-22 FKG HYM PK 2 2021-10-21 2021-10-22 HYM LDS OP 3 2021-10-24 2021-10-24 AVM BLA OP 4 2021-10-24 2021-10-24 AVM DBL OP 5 2021-10-27 2021-10-27 AVM BLA OP </code></pre>
python|pandas|dataframe|group-by
1
634
69,620,035
TypeError: Dimension value must be integer or None in keras VQ-VAE example
<p>I tried <a href="https://keras.io/examples/generative/vq_vae/" rel="nofollow noreferrer">keras example on VQ-VAE</a> in colab and also in my environment. In both I encountered the same error:</p> <pre class="lang-py prettyprint-override"><code>--------------------------------------------------------------------------- TypeError Traceback (most recent call last) &lt;ipython-input-16-a6e2591462e2&gt; in &lt;module&gt;() 9 # Feed the whole array and retrieving the pixel value probabilities for the next 10 # pixel. ---&gt; 11 probs = sampler.predict(priors) 12 # Use the probabilities to pick pixel values and append the values to the priors. 13 priors[:, row, col] = probs[:, row, col] 9 frames /usr/local/lib/python3.7/dist-packages/keras/engine/training.py in predict(self, x, batch_size, verbose, steps, callbacks, max_queue_size, workers, use_multiprocessing) 1749 for step in data_handler.steps(): 1750 callbacks.on_predict_batch_begin(step) -&gt; 1751 tmp_batch_outputs = self.predict_function(iterator) 1752 if data_handler.should_sync: 1753 context.async_wait() /usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds) 883 884 with OptionalXlaContext(self._jit_compile): --&gt; 885 result = self._call(*args, **kwds) 886 887 new_tracing_count = self.experimental_get_tracing_count() /usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py in _call(self, *args, **kwds) 931 # This is the first call of __call__, so we have to initialize. 932 initializers = [] --&gt; 933 self._initialize(args, kwds, add_initializers_to=initializers) 934 finally: 935 # At this point we know that the initialization is complete (or less /usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py in _initialize(self, args, kwds, add_initializers_to) 758 self._concrete_stateful_fn = ( 759 self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access --&gt; 760 *args, **kwds)) 761 762 def invalid_creator_scope(*unused_args, **unused_kwds): /usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs) 3064 args, kwargs = None, None 3065 with self._lock: -&gt; 3066 graph_function, _ = self._maybe_define_function(args, kwargs) 3067 return graph_function 3068 /usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py in _maybe_define_function(self, args, kwargs) 3461 3462 self._function_cache.missed.add(call_context_key) -&gt; 3463 graph_function = self._create_graph_function(args, kwargs) 3464 self._function_cache.primary[cache_key] = graph_function 3465 /usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes) 3306 arg_names=arg_names, 3307 override_flat_arg_shapes=override_flat_arg_shapes, -&gt; 3308 capture_by_value=self._capture_by_value), 3309 self._function_attributes, 3310 function_spec=self.function_spec, /usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes, acd_record_initial_resource_uses) 1005 _, original_func = tf_decorator.unwrap(python_func) 1006 -&gt; 1007 func_outputs = python_func(*func_args, **func_kwargs) 1008 1009 # invariant: 
`func_outputs` contains only Tensors, CompositeTensors, /usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py in wrapped_fn(*args, **kwds) 666 # the function a weak reference to itself to avoid a reference cycle. 667 with OptionalXlaContext(compile_with_xla): --&gt; 668 out = weak_wrapped_fn().__wrapped__(*args, **kwds) 669 return out 670 /usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs) 992 except Exception as e: # pylint:disable=broad-except 993 if hasattr(e, &quot;ag_error_metadata&quot;): --&gt; 994 raise e.ag_error_metadata.to_exception(e) 995 else: 996 raise TypeError: in user code: /usr/local/lib/python3.7/dist-packages/keras/engine/training.py:1586 predict_function * return step_function(self, iterator) /usr/local/lib/python3.7/dist-packages/keras/engine/training.py:1576 step_function ** outputs = model.distribute_strategy.run(run_step, args=(data,)) /usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:1286 run return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs) /usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:2849 call_for_each_replica return self._call_for_each_replica(fn, args, kwargs) /usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:3632 _call_for_each_replica return fn(*args, **kwargs) /usr/local/lib/python3.7/dist-packages/keras/engine/training.py:1569 run_step ** outputs = model.predict_step(data) /usr/local/lib/python3.7/dist-packages/keras/engine/training.py:1537 predict_step return self(x, training=False) /usr/local/lib/python3.7/dist-packages/keras/engine/base_layer.py:1037 __call__ outputs = call_fn(inputs, *args, **kwargs) /usr/local/lib/python3.7/dist-packages/keras/engine/functional.py:415 call inputs, training=training, mask=mask) /usr/local/lib/python3.7/dist-packages/keras/engine/functional.py:550 _run_internal_graph outputs = node.layer(*args, **kwargs) /usr/local/lib/python3.7/dist-packages/keras/engine/base_layer.py:1044 __call__ self._set_save_spec(inputs, args, kwargs) /usr/local/lib/python3.7/dist-packages/tensorflow/python/training/tracking/base.py:530 _method_wrapper result = method(self, *args, **kwargs) /usr/local/lib/python3.7/dist-packages/keras/engine/base_layer.py:3049 _set_save_spec flat_specs = [tf_utils.get_tensor_spec(x) for x in flat_kwarg] /usr/local/lib/python3.7/dist-packages/keras/engine/base_layer.py:3049 &lt;listcomp&gt; flat_specs = [tf_utils.get_tensor_spec(x) for x in flat_kwarg] /usr/local/lib/python3.7/dist-packages/keras/utils/tf_utils.py:467 get_tensor_spec spec = tf.TensorSpec(shape=t.shape, dtype=t.dtype, name=name) /usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/tensor_spec.py:51 __init__ self._shape = tensor_shape.TensorShape(shape) /usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/tensor_shape.py:784 __init__ self._dims = [as_dimension(dims)] /usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/tensor_shape.py:729 as_dimension return Dimension(value) /usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/tensor_shape.py:209 __init__ .format(value, type(value))), None) &lt;string&gt;:3 raise_from TypeError: Dimension value must be integer or None or have an __index__ method, got value '&lt;attribute 'shape' of 'numpy.generic' objects&gt;' with type '&lt;class 'getset_descriptor'&gt;' </code></pre> <p>The error comes from this part of code:</p> <pre 
class="lang-py prettyprint-override"><code># Create a mini sampler model. inputs = layers.Input(shape=pixel_cnn.input_shape[1:]) x = pixel_cnn(inputs, training=False) dist = tfp.distributions.Categorical(logits=x) sampled = dist.sample() sampler = keras.Model(inputs, sampled) # Create an empty array of priors. batch = 10 priors = np.zeros(shape=(batch,) + (pixel_cnn.input_shape)[1:]) batch, rows, cols = priors.shape # Iterate over the priors because generation has to be done sequentially pixel by pixel. for row in range(rows): for col in range(cols): # Feed the whole array and retrieving the pixel value probabilities for the next # pixel. probs = sampler.predict(priors) # Use the probabilities to pick pixel values and append the values to the priors. priors[:, row, col] = probs[:, row, col] print(f&quot;Prior shape: {priors.shape}&quot;) </code></pre> <p>As I understand, the issue comes from the <code>priors</code> array dimension mismatch with pixel_cnn input. But the input is <code>KerasTensor(type_spec=TensorSpec(shape=(None, 32, 32), dtype=tf.float32, name='input_2'), name='input_2', description=&quot;created by layer 'input_2'&quot;)</code> and shape of <code>priors</code> is <code>(10, 32, 32)</code>, so there is no mismatch...</p> <p>I found those similar questions but still I have hard time understanding what is going on:</p> <p><a href="https://stackoverflow.com/questions/62205053/typeerror-dimension-value-must-be-integer-or-none-or-have-an-index-method">TypeError: Dimension value must be integer or None or have an <strong>index</strong> method, got TensorShape([None, 1])</a></p> <p><a href="https://stackoverflow.com/questions/67015528/typeerror-dimension-value-must-be-integer-or-none-or-have-an-index-method">TypeError: Dimension value must be integer or None or have an <strong>index</strong> method, got value 'TensorShape([None, 16])'</a></p> <p>Can someone please explain why this error exists and how to fix it?</p>
<p>Yes this is an unfortunate side-effect of not documenting package versions and jupyter environments. The tensorflow-probability package is still in a pre-1.0 phase, which causes things to shift underneath you, unless you pin package versions.</p> <p>In this case, I looked up which version Sayak Paul was likely to have used at the time of writing the article, which was <a href="https://pypi.org/project/tensorflow-probability/0.13.0" rel="nofollow noreferrer">0.13.0</a>. Using this specific version of tensorflow-probability made the errors go away.</p> <p>I think the error has to do with eager execution of a function that has no categorical data yet to sample from, but this is just a guess. Wrapping it in a tf.py_function didn't do it for me, downgrading to 0.13.0 did. None of the versions after 0.13.0 worked anymore on this particular example.</p> <p>EDIT: I updated the code at <a href="https://keras.io/examples/generative/vq_vae/" rel="nofollow noreferrer">https://keras.io/examples/generative/vq_vae/</a> to be used with newer versions of tensorflow-probability. There's no more need to pin the version to 0.13.0</p>
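<p>If it helps, a small sanity check along those lines; this assumes you pin the package with <code>pip install tensorflow-probability==0.13.0</code> before running the example:</p> <pre><code>import tensorflow_probability as tfp

# fail early if a different (incompatible) version is installed
assert tfp.__version__ == "0.13.0", tfp.__version__
</code></pre>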
python|python-3.x|tensorflow|keras
0
635
41,002,879
Image classification by small object
<p>I'm trying to participate in a challenge for classifying dashboard camera images (from a car), with the labels being traffic light red / green / non-existent. Traffic lights are a small part of the image, and no bounding box is supplied.</p> <p>I'm currently trying to fine-tune a model as suggested <a href="https://github.com/tensorflow/models/tree/master/slim#Tuning" rel="nofollow noreferrer">here</a> with the Inception net, but I'm getting 0.55-0.6 accuracy. I need to achieve 0.95+.</p> <p>I think that the network is not performing well because the traffic light occupies such a small portion of the image. </p> <p>How can I make better progress with this?</p>
<p>I suggest instead of using the entire image at once, take crops of the image with a sliding window with overlap. You need to label the crops as well.</p>
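<p>For illustration, a rough sketch of the sliding-window idea; the window size, stride, and how the crop predictions are used downstream are assumptions, not part of any specific API:</p> <pre><code>import numpy as np

def sliding_window_crops(image, window=128, stride=64):
    """Yield overlapping square crops of an H x W x C image."""
    h, w = image.shape[:2]
    for top in range(0, max(h - window, 0) + 1, stride):
        for left in range(0, max(w - window, 0) + 1, stride):
            yield image[top:top + window, left:left + window]

# each crop (resized to the network input size) gets its own label during
# training; at inference time the per-crop predictions are aggregated into
# an image-level red / green / non-existent decision
</code></pre>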
tensorflow|deep-learning|tf-slim
0
636
53,818,563
Cropping image in numpy array
<p>I want to crop RGB images such that the upper half part of the image is removed. After cropping I want to concatenate the image to a numpy array (here images). But I get the following error <code>ValueError: all the input array dimensions except for the concatenation axis must match exactly</code>. I tried multiple things but I don't seem to have luck with any of my attempts.</p> <p>My code looks like</p> <pre><code>images = np.zeros((1, 32, 64, 3)) image = get_image() # has shape 1, 64, 64, 3 # removing the first coordinate didn't change the error. images = np.concatenate([images, image[:, 32:63, :, :]], axis=0) </code></pre> <p>EDIT: The following modifications in <code>image[:, 32:63, :, :]</code> did not resolve the problem</p> <p>a) [:, 32:63, :, :] -> [32:63, :, :]</p> <p>b) [:, 32:63, :, :] -> [:][32:63][:][:]</p>
<p>You should do </p> <pre><code>images = np.zeros((1, 32, 64, 3)) image = get_image() # has shape 1, 64, 64, 3 # removing the first coordinate didn't change the error. images = np.concatenate([images, image[:, 32:, :, :]], axis=0) </code></pre> <p>This is because 32:63 leaves out the last element (32:64 would be possible too).</p>
python|image|numpy
1
637
54,069,863
Swap two rows in a numpy array in python
<p>How to swap xth and yth rows of the 2-D NumPy array? x &amp; y are inputs provided by the user. Lets say x = 0 &amp; y =2 , and the input array is as below:</p> <pre><code>a = [[4 3 1] [5 7 0] [9 9 3] [8 2 4]] Expected Output : [[9 9 3] [5 7 0] [4 3 1] [8 2 4]] </code></pre> <p>I tried multiple things, but did not get the expected result. this is what i tried:</p> <pre><code>a[x],a[y]= a[y],a[x] output i got is: [[9 9 3] [5 7 0] [9 9 3] [8 2 4]] </code></pre> <p>Please suggest what is wrong in my solution.</p>
<p>Put the index as a whole:</p> <pre><code>a[[x, y]] = a[[y, x]] </code></pre> <p>With your example:</p> <pre><code>a = np.array([[4,3,1], [5,7,0], [9,9,3], [8,2,4]]) a # array([[4, 3, 1], # [5, 7, 0], # [9, 9, 3], # [8, 2, 4]]) a[[0, 2]] = a[[2, 0]] a # array([[9, 9, 3], # [5, 7, 0], # [4, 3, 1], # [8, 2, 4]]) </code></pre>
python|arrays|numpy|swap
65
638
53,884,001
How do I fix dimension error in a simple Autoencoder?
<p>I am new to python and autoencoders. I just wanted to build a simple autoencoder to start with, but I keep getting this error:</p> <pre><code>ValueError: Error when checking target: expected conv2d_39 to have 4 dimensions, but got array with shape (32, 3) </code></pre> <p>Is there a better way to get my own data, besides the <code>flow_from_directory</code> method? I built the Autoencoder like <a href="https://www.datacamp.com/community/tutorials/autoencoder-keras-tutorial" rel="nofollow noreferrer">this</a>, but I took some layers away.</p> <p>I don't know, but am I feeding the autoencoder with the tuple generated from the <code>flow_from_directory</code> method? Is there a way to cast this tuple into a format the autoencoder accepts?</p> <pre><code>import numpy as np from keras.preprocessing.image import ImageDataGenerator from keras.models import Sequential, Model from keras.layers import Dropout, Flatten, Dense, Input, Conv2D, UpSampling2D, MaxPooling2D from keras.optimizers import RMSprop IMG_WIDTH, IMG_HEIGHT = 112, 112 input_img = Input(shape=(IMG_WIDTH, IMG_HEIGHT,3)) #encoder def encoder(input_img): # 1x112x112x3 conv1 = Conv2D(32,(3,3), activation='relu', padding='same') (input_img) # 32x112x112 pool1 = MaxPooling2D(pool_size=(2,2))(conv1) # 32x56x56 return pool1 #decoder def decoder(pool1): # 32x56x56 up1 = UpSampling2D((2,2))(pool1) # 32x112x112 decoded = Conv2D(1,(3,3),activation='sigmoid',padding='same')(up1) # 1x112x112 return decoded autoencoder = Model(input_img, decoder(encoder(input_img))) autoencoder.compile(loss='mean_squared_error', optimizer=RMSprop()) datagen = ImageDataGenerator(rescale=1./255) training_set = datagen.flow_from_directory( r'C:\Users\user\Desktop\dataset\train', target_size=(112,112), batch_size=32, class_mode='categorical') test_set = datagen.flow_from_directory( r'C:\Users\user\Desktop\dataset\validation', target_size=(112,112), batch_size=32, class_mode='categorical') history = autoencoder.fit_generator( training_set, steps_per_epoch=2790, epochs=5, validation_data=test_set, validation_steps=1145) </code></pre> <p>Here is the model summary:</p> <pre><code>_________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_14 (InputLayer) (None, 112, 112, 3) 0 _________________________________________________________________ conv2d_42 (Conv2D) (None, 112, 112, 32) 896 _________________________________________________________________ max_pooling2d_4 (MaxPooling2 (None, 56, 56, 32) 0 _________________________________________________________________ up_sampling2d_4 (UpSampling2 (None, 112, 112, 32) 0 _________________________________________________________________ conv2d_43 (Conv2D) (None, 112, 112, 1) 289 ================================================================= Total params: 1,185 Trainable params: 1,185 Non-trainable params: 0 _________________________________________________________________ </code></pre> <p>I am working with <code>512x496</code> OCT images.</p>
<p>Since you are building an autoencoder and therefore the output of the model must be the same as the input, there are two problems with your code:</p> <ol> <li><p>You must set the <code>class_mode</code> argument of generators to <code>'input'</code> to let the labels generated be the same as the generated inputs.</p></li> <li><p>The last layer must have 3 filters since the input image has 3 channels: <code>decoded = Conv2D(3, ...)</code>.</p></li> </ol>
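<p>Concretely, a sketch of those two changes, keeping the rest of the code from the question (<code>datagen</code>, <code>up1</code>, paths, etc.) unchanged:</p> <pre><code># 1) the generator should yield (input, input) pairs instead of class labels
training_set = datagen.flow_from_directory(
    r'C:\Users\user\Desktop\dataset\train',
    target_size=(112, 112),
    batch_size=32,
    class_mode='input')

# 2) the reconstruction needs as many channels as the input image (3 for RGB)
decoded = Conv2D(3, (3, 3), activation='sigmoid', padding='same')(up1)
</code></pre>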
python|tensorflow|machine-learning|keras|autoencoder
2
639
53,948,123
I am not able to build .so file
<p>I am trying to make an app for object detection using tensorflow and I am following the instructions as listed in this website: </p> <p><a href="https://www.skcript.com/svr/realtime-object-and-face-detection-in-android-using-tensorflow-object-detection-api/" rel="nofollow noreferrer">https://www.skcript.com/svr/realtime-object-and-face-detection-in-android-using-tensorflow-object-detection-api/</a></p> <p>But I have run into build errors.</p> <p>I am making an android application for detecting objects using tensorflow API and I have followed all steps as mentioned in the above link. I am using Windows 10 for coding, not any Linux distro. I tried building the app using bazel but there are build errors.</p> <p>Here's the command as instructed from the above website:</p> <pre><code>bazel build -c opt //tensorflow/contrib/android:libtensorflow_inference.so --crosstool_top=//external:android/crosstool --host_crosstool_top=@bazel_tools//tools/cpp:toolchain --cpu=armeabi-v7a </code></pre> <p>After running, it starts compiling and does 1069 processes, but after reaching 1068/1069 it displays the following:</p> <pre><code>ERROR: C:/sri/sritrain/tensorflow-master/tensorflow/contrib/android/BUILD:60:1: Linking of rule '//tensorflow/contrib/android:libtensorflow_inference.so' failed (Exit 1) external/androidndk/ndk/toolchains/arm-linux-androideabi-4.9/prebuilt/windows-x86_64/lib/gcc/arm-linux-androideabi/4.9.x/../../../../arm-linux-androideabi/bin\ld: fatal error: bazel-out/armeabi-v7a-opt/bin/tensorflow/core/kernels/libandroid_tensorflow_kernels.lo: pread failed: Invalid argument clang.exe: error: linker command failed with exit code 1 (use -v to see invocation) Target //tensorflow/contrib/android:libtensorflow_inference.so failed to build Use --verbose_failures to see the command lines of failed build steps. INFO: Elapsed time: 3148.512s, Critical Path: 443.26s INFO: 1045 processes: 1045 local. FAILED: Build did NOT complete successfully </code></pre> <p>I scoured through the internet and found a small modification so I typed:</p> <pre><code>bazel build -c opt //tensorflow/contrib/android:libtensorflow_inference.so --crosstool_top=//external:android/crosstool --host_crosstool_top=@bazel_tools//tools/cpp:toolchain --cpu=armeabi-v7a --cxxopt=-std=c++11 </code></pre> <p>However this returns an error even before the previous command did:</p> <pre><code>ERROR: C:/sri/sritrain/tensorflow/tensorflow/contrib/android/BUILD:60:1: Linking of rule '//tensorflow/contrib/android:libtensorflow_inference.so' failed (Exit 1) external/androidndk/ndk/toolchains/arm-linux-androideabi-4.9/prebuilt/windows-x86_64/lib/gcc/arm-linux-androideabi/4.9.x/../../../../arm-linux-androideabi/bin\ld: fatal error: bazel-out/armeabi-v7a-opt/bin/tensorflow/core/kernels/libandroid_tensorflow_kernels.lo: pread failed: Invalid argument clang.exe: error: linker command failed with exit code 1 (use -v to see invocation) Target //tensorflow/contrib/android:libtensorflow_inference.so failed to build Use --verbose_failures to see the command lines of failed build steps. INFO: Elapsed time: 2787.155s, Critical Path: 244.57s INFO: 795 processes: 795 local. FAILED: Build did NOT complete successfully </code></pre> <p>It is supposed to create a .so file on my computer but it doesn't.</p>
<p>I SOLVED IT! I found that the problem was that I was using the ndk-bundle from under Android Studio's folder, which was the latest NDK. I downloaded an older NDK version, android_ndk_r15c, and ran the command:</p> <pre><code>bazel build -c opt //tensorflow/contrib/android:libtensorflow_inference.so --crosstool_top=//external:android/crosstool --host_crosstool_top=@bazel_tools//tools/cpp:toolchain --cpu=armeabi-v7a --cxxopt=-std=c++11 </code></pre> <p>With that, the build completed successfully!</p>
android|python|windows|tensorflow|bazel
1
640
54,231,398
add new column based on the several rows which have the same date
<p>I have one data frame as below. At first it has three columns <code>('date','time','flag')</code>. I want to add one column based on the flag and date: when <code>flag=1</code> appears for the first time on a given day, that row's target is <code>1</code>, and the target of every other row on that day is <code>0</code>.</p> <pre><code> date time flag target 0 2017/4/10 10:00:00 0 0 1 2017/4/10 11:00:00 1 1 2 2017/4/10 12:00:00 0 0 3 2017/4/10 13:00:00 0 0 4 2017/4/10 14:00:00 0 0 5 2017/4/11 10:00:00 1 1 6 2017/4/11 11:00:00 0 0 7 2017/4/11 12:00:00 1 0 8 2017/4/11 13:00:00 1 0 9 2017/4/11 14:00:00 0 0 10 2017/4/12 10:00:00 0 0 11 2017/4/12 11:00:00 0 0 12 2017/4/12 12:00:00 0 0 13 2017/4/12 13:00:00 0 0 14 2017/4/12 14:00:00 0 0 15 2017/4/13 10:00:00 0 0 16 2017/4/13 11:00:00 1 1 17 2017/4/13 12:00:00 0 0 18 2017/4/13 13:00:00 1 0 19 2017/4/13 14:00:00 0 0 </code></pre>
<p>Compare <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.DataFrameGroupBy.cumsum.html" rel="nofollow noreferrer"><code>DataFrameGroupBy.cumsum</code></a> to <code>1</code>, chain it with the condition comparing <code>flag</code> to <code>1</code> using a <code>bitwise AND</code>, and convert to integer:</p> <pre><code>df['target1'] = (df.groupby('date')['flag'].cumsum().eq(1) &amp; df['flag'].eq(1)).astype(int) date time flag target target1 0 2017/4/10 10:00:00 0 0 0 1 2017/4/10 11:00:00 1 1 1 2 2017/4/10 12:00:00 0 0 0 3 2017/4/10 13:00:00 0 0 0 4 2017/4/10 14:00:00 0 0 0 5 2017/4/11 10:00:00 1 1 1 6 2017/4/11 11:00:00 0 0 0 7 2017/4/11 12:00:00 1 0 0 8 2017/4/11 13:00:00 1 0 0 9 2017/4/11 14:00:00 0 0 0 10 2017/4/12 10:00:00 0 0 0 11 2017/4/12 11:00:00 0 0 0 12 2017/4/12 12:00:00 0 0 0 13 2017/4/12 13:00:00 0 0 0 14 2017/4/12 14:00:00 0 0 0 15 2017/4/13 10:00:00 0 0 0 16 2017/4/13 11:00:00 1 1 1 17 2017/4/13 12:00:00 0 0 0 18 2017/4/13 13:00:00 1 0 0 19 2017/4/13 14:00:00 0 0 0 </code></pre> <p>Another solution:</p> <pre><code>df['target1'] = ((~df.loc[df['flag']==1, 'date'].duplicated()) .reindex(df.index, fill_value=False).astype(int)) </code></pre>
python|pandas
1
641
54,146,761
tensorflow: output layer with a single neuron, expected float output [0.0,1.0]
<p>I try to build a nn with an output layer consisting of a single neuron only. My input data contain 500 floats assigned to a "0" or "1". The final nn should output a "probability" value [0.0, 1.0]. Since I'm new to tensorflow, I have taken an MNIST example from Aurélien Géron's excellent book and modified it according to my needs. However, I'm stuck at a few points. Basically, he uses a "softmax" function at some point, which can't be correct for my example. Also, his evaluation function ("tf.nn.in_top_k") can't be right. Finally, I'm wondering if I need an activation function for the output layer ("sigmoid"?). Thanks a lot for your feedback!</p> <p>Here is my code:</p> <pre><code>import tensorflow as tf import numpy as np n_inputs = 500 n_hidden1 = 400 n_hidden2 = 300 n_outputs = 1 # import training, test and validation data... X_train,y_train = &lt;import my training data as "np.array" objects&gt; X_valid,y_valid = &lt;import my validation data as "np.array" objects&gt; X_test,y_test = &lt;import my testing data as "np.array" objects&gt; seed = 42 tf.reset_default_graph() tf.set_random_seed(seed) np.random.seed(seed) X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X") y = tf.placeholder(tf.float32, shape=(None), name="y") def neuron_layer(X, n_neurons, name, activation=None): with tf.name_scope(name): n_inputs = int(X.get_shape()[1]) stddev = 2 / np.sqrt(n_inputs) init = tf.truncated_normal((n_inputs, n_neurons), stddev=stddev) W = tf.Variable(init, name="kernel") b = tf.Variable(tf.zeros([n_neurons]), name="bias") Z = tf.matmul(X, W) + b if activation is not None: return activation(Z) else: return Z with tf.name_scope("dnn"): hidden1 = neuron_layer(X, n_hidden1, name="hidden1",activation=tf.nn.relu) hidden2 = neuron_layer(hidden1, n_hidden2, name="hidden2",activation=tf.nn.relu) # do I need an activation function here? logits = neuron_layer(hidden2, n_outputs, name="outputs") with tf.name_scope("loss"): # this is probably not correct - I should most likely use something like "sigmoid"... but how exactly do I do that? xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y,logits=logits) loss = tf.reduce_mean(xentropy, name="loss") learning_rate = 0.01 with tf.name_scope("train"): optimizer = tf.train.GradientDescentOptimizer(learning_rate) training_op = optimizer.minimize(loss) with tf.name_scope("eval"): # same thing here. what is the right function to be used here? correct = tf.nn.in_top_k(logits, y, 1) accuracy = tf.reduce_mean(tf.cast(correct, tf.float32)) init = tf.global_variables_initializer() saver = tf.train.Saver() n_epochs = 100 batch_size = 50 def shuffle_batch(X, y, batch_size): rnd_idx = np.random.permutation(len(X)) n_batches = len(X) // batch_size for batch_idx in np.array_split(rnd_idx, n_batches): X_batch, y_batch = X[batch_idx], y[batch_idx] yield X_batch, y_batch with tf.Session() as sess: init.run() for epoch in range(n_epochs): for X_batch, y_batch in shuffle_batch(X_train, y_train, batch_size): sess.run(training_op, feed_dict={X: X_batch, y: y_batch}) acc_batch = accuracy.eval(feed_dict={X: X_batch, y: y_batch}) acc_val = accuracy.eval(feed_dict={X: X_valid, y: y_valid}) print(epoch, "Batch accuracy:", acc_batch, "Val accuracy:", acc_val) save_path = saver.save(sess, "./my_model_final.ckpt") </code></pre> <hr> <p><strong>ADDITIONAL INFO:</strong></p> <p>Thanks a lot for your reply. Introducing the "sigmoid" function was a step in the right direction. But, there are still some problems:</p> <p>1.) 
When training the nn the accuracy is not very good:</p> <pre><code>(95, 'Batch accuracy:', 0.54, 'Val accuracy:', 0.558) (96, 'Batch accuracy:', 0.52, 'Val accuracy:', 0.558) (97, 'Batch accuracy:', 0.56, 'Val accuracy:', 0.558) (98, 'Batch accuracy:', 0.58, 'Val accuracy:', 0.558) (99, 'Batch accuracy:', 0.52, 'Val accuracy:', 0.558) </code></pre> <p>2.) It seems as if the returned results when testing the trained model are too low. The values are all between [0.0,0.3]:</p> <pre><code>('Predicted classes:', array([[0.2000685 ],[0.17176622],[0.14039296],[0.15600625],[0.15928227],[0.15543781],[0.1348885 ],[0.17185831],[0.170376],[0.17732298],[0.17864114],[0.16391528],[0.18579942],[0.12997991],[0.13886571],[0.24408364], [0.17308617],[0.16365634],[0.1782803 ],[0.11332873]], dtype=float32)) ('Actual classes: ', array([0., 0., 0., 1., 0., 0., 1., 1., 1., 1., 1., 1., 0., 0., 1., 1., 1.,1., 0., 0.])) </code></pre> <p>I guess, my validation function still isn't right:</p> <pre><code>with tf.name_scope("eval"): predicted = tf.nn.sigmoid(logits) correct_pred = tf.equal(tf.round(predicted), y) accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32)) </code></pre> <p>How must a correct validation function look like?</p> <p>Again, thanks a lot for your help!</p>
<ol> <li>Logits should not have an activation.</li> <li>Loss should be sigmoid capable of handling logits, <code>tf.nn.sigmoid_cross_entropy_with_logits</code> is the one.</li> <li>You can calculate accuracy by checking whether final logit is less than zero or more than zero. If the first case, classify it as 0, if second, classify as 1. I'm not sure whether there is a built-in in tf for this.</li> </ol>
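<p>Put together, a minimal sketch of those three points, assuming <code>y</code> holds 0/1 labels of shape <code>(batch,)</code> as in the question's placeholder:</p> <pre><code>with tf.name_scope("loss"):
    # flatten the (batch, 1) logits so they line up with the (batch,) labels
    logits_flat = tf.squeeze(logits, axis=1)
    # sigmoid cross entropy works directly on the un-activated logits
    xentropy = tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=logits_flat)
    loss = tf.reduce_mean(xentropy, name="loss")

with tf.name_scope("eval"):
    # logit &gt; 0 is equivalent to sigmoid(logit) &gt; 0.5, i.e. predict class 1
    predicted = tf.cast(logits_flat &gt; 0, tf.float32)
    accuracy = tf.reduce_mean(tf.cast(tf.equal(predicted, y), tf.float32))
</code></pre>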
python|tensorflow
0
642
65,987,202
Equations related to indexing with Numpy Python
<p>I am trying to make a function that calculates the difference between the previous array value and the current one. The first 2 numbers in the <code>Numbers</code> array are <code>52599, 52575</code>: the previous number is <code>52599</code> with the label <code>U</code> and the current number is <code>52575</code> with the label <code>L</code>. Since the current value has the <code>L</code> label, it will use the <code>L_val</code> equation <code>(current_L_pos - previous_U_pos)/current_L_pos*-100</code>. Plugging the numbers into the equation, <code>(52575 - 52599)/52575 * -100</code> equals <code>0.0456</code>. The calculations go on like this until the end of the <code>Numbers</code> and <code>Set</code> arrays, which have the same length. How would I be able to do this with the <code>np.where()</code> function and such?</p> <pre><code>Set = np.array(['U', 'L', 'U', 'L', 'U', 'L', 'U', 'L', 'U', 'L', 'U', 'L', 'U', 'L', 'U']) Numbers = np.array([52599, 52575, 53598, 336368, 336875, 337466, 338292, 356587, 357474, 357763, 358491, 358659, 359041, 360179, 360286]) L_val = (current_L_pos - previous_U_pos)/current_L_pos*-100 U_val = (previous_L_pos - current_U_pos)/current_U_pos*-100 </code></pre>
<p>I think you want the <a href="https://numpy.org/doc/stable/reference/generated/numpy.roll.html" rel="nofollow noreferrer">np.roll()</a> function. You need to be careful with the first and/or last entry, but as long as you handle those edge cases then:</p> <p><code>Numbers - np.roll(Numbers, -1)</code></p> <p>will give you an array of the current number minus the last number.</p> <p>Is this what you're looking for?</p>
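<p>For illustration, a minimal sketch with the wrap-around edge handled explicitly; the shift direction and the percentage formula are assumptions taken from the question, so adjust them to your needs:</p> <pre><code>import numpy as np

Numbers = np.array([52599, 52575, 53598, 336368])

# np.roll(Numbers, 1) places each element's predecessor at the same index
previous = np.roll(Numbers, 1)

# percentage difference as described in the question
diff = (Numbers - previous) / Numbers * -100

# the first entry has no real predecessor (roll wraps around), so mark it invalid
diff[0] = np.nan
print(diff)  # diff[1] is roughly 0.0456 for 52575 vs 52599
</code></pre>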
python|arrays|function|numpy|indexing
0
643
66,051,785
numpy get values in col2 given an array of values matching col1
<p>How can I extract values in Col1 whose Col0 matches any values in a numpy array. I have an np array A, idx. Get me all values in Col1 of array A, whose Col0 values are 1 or 4.</p> <pre><code>A = np.array([[1, 11], [2, 12], [3, 13], [4,14]]) idx = [1, 4] </code></pre> <p>I can get for 1 value like this.. but I don't know to get for an array of idx.</p> <pre><code>vals = A[np.where(A[:,0]==4),1] vals = A[np.where(A[:,0]==4),4] </code></pre> <p>a) how can I get the values of Col1 in A where Col0 values are 1 or 4 ( matching idx).</p> <p>expected result = [11,14]</p> <p>b) how can I get values of Col1 in A where row indices are 1,4 (matching idx)</p> <p>expected result = [12, 14]</p>
<p>1st part:</p> <pre><code>idx = [1, 4] A[np.isin(A[:,0], idx), 1] </code></pre> <hr /> <pre><code>array([11, 14]) </code></pre> <hr /> <p>2nd part:</p> <pre><code>idx = [1, 3] A[idx,1] </code></pre> <hr /> <pre><code>array([12, 14]) </code></pre>
python|arrays|numpy
1
644
52,765,294
Unable to compile Tensorflow from source on Ubuntu 18.04
<p>When I attempt to compile TensorFlow from source I get the following error. I'm using the GPU Docker image to run the build. Which in theory has all of the proper dependencies set up. </p> <p>The host machine(s) is Ubuntu 18.04. I'm receiving this issue on two different machines that both have the latest Nvidia drivers. One has a 1080 TI and the other has a 1060 nvidia card.</p> <p>Building a docker image based on: tensorflow/tensorflow:1.11.0-devel-gpu. </p> <p>I run it in interactive mode and run ./configure. </p> <p>This is the configuration:</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code>./configure WARNING: --batch mode is deprecated. Please instead explicitly shut down your Bazel server using the command "bazel shutdown". You have bazel 0.15.0 installed. Please specify the location of python. [Default is /usr/bin/python]: Found possible Python library paths: /usr/local/lib/python2.7/dist-packages /usr/lib/python2.7/dist-packages Please input the desired Python library path to use. Default is [/usr/local/lib/python2.7/dist-packages] Do you wish to build TensorFlow with jemalloc as malloc support? [Y/n]: y jemalloc as malloc support will be enabled for TensorFlow. Do you wish to build TensorFlow with Google Cloud Platform support? [Y/n]: y Google Cloud Platform support will be enabled for TensorFlow. Do you wish to build TensorFlow with Hadoop File System support? [Y/n]: y Hadoop File System support will be enabled for TensorFlow. Do you wish to build TensorFlow with Amazon AWS Platform support? [Y/n]: y Amazon AWS Platform support will be enabled for TensorFlow. Do you wish to build TensorFlow with Apache Kafka Platform support? [Y/n]: y Apache Kafka Platform support will be enabled for TensorFlow. Do you wish to build TensorFlow with XLA JIT support? [y/N]: y XLA JIT support will be enabled for TensorFlow. Do you wish to build TensorFlow with GDR support? [y/N]: y GDR support will be enabled for TensorFlow. Do you wish to build TensorFlow with VERBS support? [y/N]: y VERBS support will be enabled for TensorFlow. Do you wish to build TensorFlow with nGraph support? [y/N]: y nGraph support will be enabled for TensorFlow. Do you wish to build TensorFlow with OpenCL SYCL support? [y/N]: n No OpenCL SYCL support will be enabled for TensorFlow. Please specify the location where CUDA 9.0 toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]: Please specify the location where cuDNN 7 library is installed. Refer to README.md for more details. [Default is /usr/local/cuda]: Please specify the location where TensorRT is installed. [Default is /usr/lib/x86_64-linux-gnu]: Please specify the location where NCCL 2 library is installed. Refer to README.md for more details. [Default is /usr/local/cuda]: Do you want to use clang as CUDA compiler? [y/N]: n nvcc will be used as CUDA compiler. Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/gcc]: Do you wish to build TensorFlow with MPI support? [y/N]: n No MPI support will be enabled for TensorFlow. Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native]: Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]: n Not configuring the WORKSPACE for Android builds. Preconfigured Bazel build configs. 
You can use any of the below by adding "--config=&lt;&gt;" to your build command. See tools/bazel.rc for more details. --config=mkl # Build with MKL support. --config=monolithic # Config for mostly static monolithic build. Configuration finished root@17ed0ddbe2a4:/tensorflow# </code></pre> </div> </div> </p> <p>This is the error I get.</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code>ERROR: /root/.cache/bazel/_bazel_root/68a62076e91007a7908bc42a32e4cff9/external/ngraph_tf/BUILD.bazel:19:1: C++ compilation of rule '@ngraph_tf//:ngraph_tf' failed (Exit 1) In file included from external/org_tensorflow/tensorflow/core/framework/common_shape_fns.h:22:0, from external/org_tensorflow/tensorflow/core/framework/resource_mgr.h:24, from external/org_tensorflow/tensorflow/core/common_runtime/device.h:43, from external/org_tensorflow/tensorflow/core/common_runtime/device_set.h:23, from external/org_tensorflow/tensorflow/core/common_runtime/optimization_registry.h:25, from external/ngraph_tf/src/ngraph_encapsulate_pass.cc:23: external/org_tensorflow/tensorflow/core/util/tensor_format.h: In function 'tensorflow::TensorShape tensorflow::ShapeFromFormat(tensorflow::TensorFormat, tensorflow::int64, tensorflow::gtl::ArraySlice&lt;long long int&gt;, tensorflow::int64)': external/org_tensorflow/tensorflow/core/util/tensor_format.h:501:45: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] if (format == FORMAT_NHWC_VECT_W &amp;&amp; dim == spatial.size() - 1) { ^ external/ngraph_tf/src/ngraph_encapsulate_pass.cc: In member function 'tensorflow::Status ngraph_bridge::NGraphEncapsulatePass::EncapsulateFunctions(tensorflow::Graph*)': external/ngraph_tf/src/ngraph_encapsulate_pass.cc:393:17: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] if (i &lt; node-&gt;requested_inputs().size()) { ^ external/ngraph_tf/src/ngraph_encapsulate_pass.cc:414:42: error: 'class absl::string_view' has no member named 'ToString' cluster_idx, tensor_id.first.ToString(), tensor_id.second)); ^ In file included from external/org_tensorflow/tensorflow/core/platform/default/logging.h:24:0, from external/org_tensorflow/tensorflow/core/platform/logging.h:25, from external/org_tensorflow/tensorflow/core/lib/core/refcount.h:22, from external/org_tensorflow/tensorflow/core/platform/tensor_coding.h:21, from external/org_tensorflow/tensorflow/core/framework/resource_handle.h:19, from external/org_tensorflow/tensorflow/core/framework/allocator.h:24, from external/org_tensorflow/tensorflow/core/common_runtime/device.h:35, from external/org_tensorflow/tensorflow/core/common_runtime/device_set.h:23, from external/org_tensorflow/tensorflow/core/common_runtime/optimization_registry.h:25, from external/ngraph_tf/src/ngraph_encapsulate_pass.cc:23: external/org_tensorflow/tensorflow/core/util/tensor_format.h: In instantiation of 'T tensorflow::GetTensorDim(tensorflow::gtl::ArraySlice&lt;T&gt;, tensorflow::TensorFormat, char) [with T = long long int; tensorflow::gtl::ArraySlice&lt;T&gt; = absl::Span&lt;const long long int&gt;]': external/org_tensorflow/tensorflow/core/util/tensor_format.h:452:47: required from here external/org_tensorflow/tensorflow/core/util/tensor_format.h:420:29: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] CHECK(index &gt;= 0 &amp;&amp; index &lt; dimension_attributes.size()) ^ 
external/org_tensorflow/tensorflow/core/platform/macros.h:87:47: note: in definition of macro 'TF_PREDICT_FALSE' #define TF_PREDICT_FALSE(x) (__builtin_expect(x, 0)) ^ external/org_tensorflow/tensorflow/core/util/tensor_format.h:420:3: note: in expansion of macro 'CHECK' CHECK(index &gt;= 0 &amp;&amp; index &lt; dimension_attributes.size()) ^ external/org_tensorflow/tensorflow/core/util/tensor_format.h: In instantiation of 'T tensorflow::GetFilterDim(tensorflow::gtl::ArraySlice&lt;T&gt;, tensorflow::FilterTensorFormat, char) [with T = long long int; tensorflow::gtl::ArraySlice&lt;T&gt; = absl::Span&lt;const long long int&gt;]': external/org_tensorflow/tensorflow/core/util/tensor_format.h:461:54: required from here external/org_tensorflow/tensorflow/core/util/tensor_format.h:435:29: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] CHECK(index &gt;= 0 &amp;&amp; index &lt; dimension_attribute.size()) ^ external/org_tensorflow/tensorflow/core/platform/macros.h:87:47: note: in definition of macro 'TF_PREDICT_FALSE' #define TF_PREDICT_FALSE(x) (__builtin_expect(x, 0)) ^ external/org_tensorflow/tensorflow/core/util/tensor_format.h:435:3: note: in expansion of macro 'CHECK' CHECK(index &gt;= 0 &amp;&amp; index &lt; dimension_attribute.size()) ^ Target //tensorflow/tools/pip_package:build_pip_package failed to build</code></pre> </div> </div> </p> <p>I get the same problem with a 1.10 image too. Or trying to build either 1.10 or 1.11 from source directly on the host.</p>
<p>It seems it was solved in this commit <a href="https://github.com/tensorflow/tensorflow/commit/08af8cac22af4cc430e092b6218ca77736efb82c" rel="nofollow noreferrer">here</a>.</p> <p>Also check the thread where it was mentioned in the first place: <a href="https://github.com/tensorflow/tensorflow/issues/22583" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/issues/22583</a></p> <p>Just pull from master and try to rebuild again.</p>
tensorflow
2
645
46,342,959
Python - How to combine / concatenate / join pandas series variables ignoring the empty variable
<p>With the help of .loc method, I am identifying the values in a column in a Panda data frame based on values in the another column of the same data frame.</p> <p>The code snippet is given below for your reference :</p> <pre><code>var1 = output_df['Player'].loc[output_df['Team']=='India'].reset_index(drop=True) var2 = output_df['Player'].loc[output_df['Team']=='Australia'].reset_index(drop=True) var3 = output_df['Player'].loc[output_df['Team']=='Algeria'].reset_index(drop=True) </code></pre> <p><strong>Update</strong></p> <p><strong>There may be 'n' number of teams in my data frame but I want the top players from selective teams only. That's why I manually enter the team names in the code. And I may require top performer, 2nd top performer and so on. So cannot fetch values from a column in data frame by using join statement.</strong> </p> <p>Now I would be having 3 variables of type "pandas.core.series.Series"</p> <p>I already sorted this data frame in the order of descending based on another column called "Score"</p> <p>And my requirement is to fetch the top scoring player from each team and create an output variable combining all the player names with a ','.</p> <p>I tried with the below command to get the desired output :</p> <pre><code>Final = var1[0]+','+var2[0]+','+var3[0] </code></pre> <p>It is producing the expected output successfully but suppose if any of the variable is empty - For example consider my data frame does not have top scoring player from Algeria, var3 will be empty. Hence when I execute the previous command, it is getting ended up with "Out of bounds" error</p> <p>Is there any way to execute the previous command or is there any similar kind of command that has to ignore the null variable but combine the remaining variables together with a separator in between ?</p> <p><strong>Update</strong> </p> <p>The logic that i get here will be used for framing sentences from words based on their POS tags (noun, adjective, verb, so on). Var1 will be used for storing Nouns arranged in descending order based on some score. Var2 will be used for storing Adjectives arranged in same order as noun and so on...</p> <p><em>Finally while framing a string / sentence I would be using these variables to concatenate. Ex: top-performing-noun + top-performing-adjective + top-performing-verb. Second sentence will be formed by 2nd-top-performing-noun + 2nd-top-performing-adjective ..... Right now I do not have code snippet for the same. It is being framed from Team-Player code.</em></p> <p>Hope this update helps to understand the question more clearly**</p>
<p>I think you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html" rel="nofollow noreferrer"><code>concat</code></a> with <code>apply</code> for remove <code>NaN</code>s by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.dropna.html" rel="nofollow noreferrer"><code>dropna</code></a>:</p> <pre><code>var1 = pd.Series(list('abcd')) var2 = pd.Series(list('rftyru')) var3 = pd.Series(list('de')) print (pd.concat([var1, var2, var3], axis=1)) 0 1 2 0 a r d 1 b f e 2 c t NaN 3 d y NaN 4 NaN r NaN 5 NaN u NaN Final = (pd.concat([var1, var2, var3], axis=1) .apply(lambda x: ', '.join(x.dropna()), axis=1)) print (Final) 0 a, r, d 1 b, f, e 2 c, t 3 d, y 4 r 5 u dtype: object </code></pre> <p>But better is use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow noreferrer"><code>groupby</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_values.html" rel="nofollow noreferrer"><code>sort_values</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.head.html" rel="nofollow noreferrer"><code>GroupBy.head</code></a> for top e.g <code>2</code> players.</p> <p>For filtering <code>Teams</code> use <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow noreferrer"><code>boolean indexing</code></a>:</p> <pre><code>#a bit changed data from another solution df = pd.DataFrame([['Tim', 'India', 100], ['Bob', 'Australia', 50], ['John', 'Algeria', 123], ['Sarah', 'Algeria', 456], ['Jane', 'Australia', 9]], columns=["Player", "Team", "Score"]) df1 = df[df['Team'].isin(['Algeria','India','Australia'])] df1 = df1.sort_values('Score', ascending=False).groupby('Team').head(2) print (df1) Player Team Score 3 Sarah Algeria 456 2 John Algeria 123 0 Tim India 100 1 Bob Australia 50 4 Jane Australia 9 df1 = (df.sort_values('Score', ascending=False) .groupby('Team')['Player'] .apply(lambda x: ', '.join(x.head(2))) .reset_index()) print (df1) Team Player 0 Algeria Sarah, John 1 Australia Bob, Jane 2 India Tim </code></pre> <p>For second top use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.nth.html" rel="nofollow noreferrer"><code>GroupBy.nth</code></a>:</p> <pre><code>df1 = df.sort_values('Score', ascending=False).groupby('Team', as_index=False).nth(1) print (df1) Player Team Score 2 John Algeria 123 4 Jane Australia 9 </code></pre>
python|python-3.x|pandas
2
646
46,389,373
When running gym, sanity check returns attribute error for numpy __version__
<p>I am trying to get open AI gym working, but i am facing a very persistent error.<br/> When I run my programme (just the simple demo cartpole solver) I get this error. (The file "gperm.py" is the cartpole solver)</p> <pre><code>File "gperm.py", line 1, in &lt;module&gt; import gym File "/Users/sonyaferraro/Desktop/dpy/gym/__init__.py", line 48, in &lt;module&gt; sanity_check_dependencies() File "/Users/sonyaferraro/Desktop/dpy/gym/__init__.py", line 20, in sanity_check_dependencies if distutils.version.LooseVersion(numpy.__version__) &lt; distutils.version.LooseVersion('1.10.4'): </code></pre> <p>and finally prints:</p> <pre><code>AttributeError: module 'numpy' has no attribute '__version__' </code></pre> <p>This is strange because I did a full pip install of numpy and even tried to git clone it neither worked. I have checked to make sure I have no other files called numpy and everything seems to be in check. </p> <p>If anyone else is having this same problem or anyone has a solution it would be greatly appreciated. </p> <p>It also prints a "hint" prompting me to try: pip install -U numpy.</p> <pre><code> logger.warn("You have 'numpy' version %s installed, but 'gym' requires at least 1.10.4. HINT: upgrade via 'pip install -U numpy'.", numpy.__version__) </code></pre> <p>I do have a version of numpy>= 1.10.4 though so that shouldn't be popping up right?(cant remember exactly what version)</p> <p>Using the pip install -U numpy however returns an 'SNIMissingWarning', an 'InsequrePlatformWarning' and the following:</p> <pre><code>The directory '/Users/sonyaferraro/Library/Caches/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag. The directory '/Users/sonyaferraro/Library/Caches/pip' or its parent directory is not owned by the current user and caching wheels has been disabled. check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag. </code></pre> <p>I honestly have no idea what that is telling me to do as I do have permission for those directories. </p>
<p>Based on your terminal output, I think you're using MacOS with brew.</p> <p><code>brew link --overwrite numpy</code> seems to have fixed the issue for me.</p>
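<p>One way to confirm the fix afterwards (an illustrative check, not part of the original answer) is to ask Python which numpy it is actually importing and whether the attribute is back — a missing <code>__version__</code> often means a broken install or a stray local file shadowing the package:</p> <pre><code>import numpy
print(numpy.__file__)     # make sure this is the site-packages copy, not a local numpy.py
print(numpy.__version__)  # gym needs at least 1.10.4
</code></pre>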
python|numpy|openai-gym
2
647
58,588,604
Numpy array indexing unexpected behavior
<p>Consider the following:</p> <pre><code>import numpy as np X = np.ones((5,5)) print(X[:,0].shape) print(X[:,0:1].shape) </code></pre> <ul> <li><p><code>X[:,0].shape</code> returns <code>(5,)</code></p></li> <li><p><code>X[:,0:1].shape</code> returns <code>(5,1)</code>.</p></li> </ul> <p>In both cases the same column is selected (indexed) but <strong>why</strong> is this happening? What is the logic behind it?</p> <hr> <p>Exactly the same happens with <strong><code>X[:,-1:].shape</code> and <code>X[:,-1].shape</code></strong></p>
<p>This behaviour is explained by the fact that, as opposed to indexing with a slice, integer indexing with say <code>i</code>, will return the same values as a slice <code>i:i+1</code> but with the dimensionality of the returned object reduced by <code>1</code>. This is explained in the <a href="https://docs.scipy.org/doc/numpy-1.13.0/reference/arrays.indexing.html#integer-array-indexing" rel="nofollow noreferrer">docs</a>:</p> <blockquote> <p>In particular, a selection tuple with the p-th element an integer (and all other entries :) returns the corresponding sub-array with dimension N - 1</p> </blockquote> <hr> <p>We could write a simple subclass to take a closer look at how <code>np.ndarray</code> handles indexing, and see what the <code>__getitem__</code> dunder is receiving in each call:</p> <pre><code>class ndarray_getitem_print(np.ndarray): def __getitem__(self, t): print(t) return super().__getitem__(t) </code></pre> <p>Now let's instanciate <code>ndarray_getitem_print</code> and see what are the differences when indexing with a slice and an integer:</p> <pre><code>a = ndarray_getitem_print((5,5)) a[:,0:1] (slice(None, None, None), slice(0, 1, None)) (-5, -1) (-4, -1) (-3, -1) (-2, -1) (-1, -1) ndarray_getitem_print([[1.], [1.], [1.], [1.], [1.]]) </code></pre> <p>Whereas indexing along the second axis with a <code>0</code>, will be producing an output ndarray where each item has a one dimensional shape, i.e <code>(-k,)</code></p> <pre><code>a[:,0] (slice(None, None, None), 0) (-5,) (-4,) (-3,) (-2,) (-1,) ndarray_getitem_print([1., 1., 1., 1., 1.]) </code></pre>
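<p>As a small side note (an added illustration, not part of the original answer), if the goal is to select a single column while keeping the 2-D shape, indexing with a list or re-adding the axis both work:</p> <pre><code>import numpy as np

X = np.ones((5, 5))
print(X[:, [0]].shape)                # (5, 1) -- a list index keeps the axis
print(X[:, 0][:, np.newaxis].shape)   # (5, 1) -- re-insert the dropped axis
</code></pre>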
python|numpy
1
648
69,295,963
Color Negative Values on Matplotlib Bar Plots Differently
<p>I am trying to color the bar plots of the negative values differently. Any pointer to accomplish this is much appreciated. Thanks.</p> <pre><code>import matplotlib.pyplot as plt import numpy as np city=['a','b','c','d'] pos = np.arange(len(city)) Effort =[4, 3, -1.5, -3.5] plt.barh(pos,Effort,color='blue',edgecolor='black') plt.yticks(pos, city) plt.xlabel('DV', fontsize=16) plt.ylabel('Groups', fontsize=16) plt.title('ABCDEF',fontsize=20) plt.show() </code></pre> <p><a href="https://i.stack.imgur.com/QkTju.png" rel="nofollow noreferrer">ABCDEF matplotlib graph</a></p>
<p>Just color a second plot differently:</p> <pre class="lang-py prettyprint-override"><code>city = ['a', 'b', 'c', 'd'] pos = np.arange(len(city)) Effort = np.array([4, 3, -1.5, -3.5]) plt.barh(pos[Effort &gt;= 0], Effort[Effort &gt;= 0], color='blue', edgecolor='black') # positive values in blue plt.barh(pos[Effort &lt; 0], Effort[Effort &lt; 0], color='red', edgecolor='black') # negative values in red plt.yticks(pos, city) plt.xlabel('DV', fontsize=16) plt.ylabel('Groups', fontsize=16) plt.title('ABCDEF', fontsize=20) plt.show() </code></pre> <p>This results in:</p> <p><a href="https://i.stack.imgur.com/9qJTV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9qJTV.png" alt="enter image description here" /></a></p>
python|pandas|matplotlib|visualization
1
649
69,235,383
pytorch custom loss function nn.CrossEntropyLoss
<p>After studying autograd, I tried to write a loss function myself. Here is my loss:</p> <pre><code>def myCEE(outputs, targets):
    exp = torch.exp(outputs)
    A = torch.log(torch.sum(exp, dim=1))
    hadamard = F.one_hot(targets, num_classes=10).float() * outputs
    B = torch.sum(hadamard, dim=1)
    return torch.sum(A - B)
</code></pre> <p>and I compared it with torch.nn.CrossEntropyLoss.</p> <p>Here are the results:</p> <pre><code>for i, j in train_dl:
    inputs = i
    targets = j
    break

outputs = model(inputs)

myCEE(outputs,targets) : tensor(147.5397, grad_fn=&lt;SumBackward0&gt;)
loss_func = nn.CrossEntropyLoss(reduction='sum') : tensor(147.5397, grad_fn=&lt;NllLossBackward&gt;)
</code></pre> <p>The values were the same.</p> <p>I thought that, because those are different functions, the grad_fn would be different and it wouldn't cause any problems.</p> <p>But something happened!</p> <p>After 4 epochs, the loss values turned to <code>nan</code>.</p> <p>Contrary to <code>myCEE</code>, with nn.CrossEntropyLoss learning went well.</p> <p>So, I wonder if there is a problem with my function.</p> <p>After reading some posts about <code>nan</code> problems, I stacked more convolutions onto the model.</p> <p>As a result, 39-epoch training did not produce an error.</p> <p>Nevertheless, I'd like to know the difference between myCEE and nn.CrossEntropyLoss.</p>
<p><code>torch.nn.CrossEntropyLoss</code> is different from your implementation because it uses a trick to counter the unstable computation of the exponential when the logits are numerically large. Given the logits output <code>{l_1, ... l_j, ..., l_n}</code>, the softmax is defined as:</p> <pre><code>softmax(l_i) = exp(l_i) / sum_j(exp(l_j))
</code></pre> <p>The trick is to multiply both the numerator and denominator by <code>exp(-β)</code>:</p> <pre><code>softmax(l_i) = exp(l_i)*exp(-β) / [sum_j(exp(l_j))*exp(-β)]
             = exp(l_i-β) / sum_j(exp(l_j-β))
</code></pre> <p>Then the log-softmax comes down to:</p> <pre><code>logsoftmax(l_i) = l_i - β - log[sum_j(exp(l_j-β))]
</code></pre> <p>In practice <code>β</code> is chosen as the highest logit value <em>i.e.</em> <code>β = max_j(l_j)</code>.</p> <p>You can read more about it on this question: <a href="https://stackoverflow.com/questions/42599498/numercially-stable-softmax"><em>Numerically Stable Softmax</em></a>.</p>
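<p>For illustration, here is a minimal sketch (not the asker's code, just an assumed rewrite) showing how the same max-subtraction trick can be applied inside the custom loss so that it matches <code>nn.CrossEntropyLoss(reduction='sum')</code> without overflowing:</p> <pre><code>import torch

def stable_myCEE(outputs, targets):
    # beta = per-row maximum logit, subtracted before exponentiating
    beta = outputs.max(dim=1, keepdim=True).values
    log_sum_exp = beta.squeeze(1) + torch.log(torch.exp(outputs - beta).sum(dim=1))
    true_class_logit = outputs.gather(1, targets.unsqueeze(1)).squeeze(1)
    return torch.sum(log_sum_exp - true_class_logit)
</code></pre>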
python|pytorch|loss-function|cross-entropy
1
650
69,156,439
pandas dataframe groupby conditional count on multi-level column
<p>Let's say we have dataframe like this</p> <pre><code>np.random.seed(123) df = pd.DataFrame(np.random.randint(100,size=(4, 4)),columns = pd.MultiIndex.from_product([['exp0','exp1'],['rnd0','rnd1']],names=['experiments','rnd_runs'])) df['grp1','cat'] = ['A','A','B','B'] df['grp2','cat2'] = ['C','C','C','B'] experiments exp0 exp1 grp1 grp2 rnd_runs rnd0 rnd1 rnd0 rnd1 cat cat2 0 66 92 98 17 A C 1 83 57 86 97 A C 2 96 47 73 32 B C 3 46 96 25 83 B B </code></pre> <p>I want to <code>count</code> values in <code>('exp0', 'rdn0')</code> column with <code>groupby</code> <code>('grp1','cat')</code></p> <p>So I tried;</p> <pre><code>df['exp0_cnt','rdn0'] = df.groupby([('grp1','cat')])[('exp0', 'rdn')].apply(sum(x &gt; 50 for x in df[(('exp0', 'rdn'))])) </code></pre> <p>but getting the error</p> <blockquote> <p>TypeError: other must be a MultiIndex or a list of tuples</p> </blockquote> <p>Here are the similar posts and I think I'm doing multi-level column callings with <code>tuples</code>.</p> <p><a href="https://stackoverflow.com/questions/54409660/conditional-on-multi-header-pandas-dataframe">conditional on multi header pandas dataframe</a></p> <p><a href="https://stackoverflow.com/questions/48917614/pandas-dataframe-groupby-on-multiindex">pandas dataframe groupby on multiindex</a></p> <p><a href="https://stackoverflow.com/questions/66533883/better-way-for-creating-columns-in-a-multi-level-columns-pandas-dataframe">Better way for creating columns in a multi level columns pandas dataframe</a></p>
<p>The only ways to select MultiIndex columns from a groupby is with a <em>list</em> of tuples or a MultiIndex (as indicated by the Error Message):</p> <p>So, instead of <code>[('exp0', 'rdn')]</code> it needs to be <code>[[('exp0', 'rdn')]]</code>, then it just needs to be a valid column name like <code>('exp0', 'rnd0')</code>, for example.</p> <pre><code>df['exp0_cnt', 'rdn0'] = ( df.groupby([('grp1', 'cat')])[[('exp0', 'rnd0')]] # ^ need to use valid column name # ^ needs to be a list of tuples .transform(lambda x: x.gt(50).sum()) # Some function that works ) </code></pre> <p>*I've also changed the apply function, because it seems to be missing the <code>lambda</code> so I made a guess as to an equivalent:</p> <pre><code>.apply(sum(x &gt; 50 for x in df[(('exp0', 'rdn'))]) </code></pre> <p>To <code>transform</code> since it's being assigned back to the DataFrame:</p> <pre><code>.transform(lambda x: x.gt(50).sum()) </code></pre> <p><code>df</code>:</p> <pre><code>experiments exp0 exp1 grp1 grp2 exp0_cnt rnd_runs rnd0 rnd1 rnd0 rnd1 cat cat2 rdn0 0 66 92 98 17 A C 2 1 83 57 86 97 A C 2 # 2 values over 50 (in group) 2 96 47 73 32 B C 1 3 46 96 25 83 B B 1 # 1 values over 50 (in group) </code></pre> <hr /> <p><strong>Please Note</strong>: This means that a <code>SeriesGroupBy</code> cannot be created by selecting MultiIndex columns, only <code>DataFrameGroupBy</code> operations.</p> <pre><code>type(df.groupby([('grp1', 'cat')])[[('exp0', 'rnd0')]]) # &lt;class 'pandas.core.groupby.generic.DataFrameGroupBy'&gt; </code></pre> <p>This will exclude a few operations like <a href="https://pandas.pydata.org/docs/reference/api/pandas.core.groupby.SeriesGroupBy.unique.html" rel="nofollow noreferrer"><code>SeriesGroupBy.unique</code></a></p> <pre><code>df.groupby([('grp1', 'cat')])[[('exp0', 'rnd0')]].unique() </code></pre> <pre><code>AttributeError: 'DataFrameGroupBy' object has no attribute 'unique' </code></pre> <p>However, we can force a <code>SeriesGroupBy</code> by Selecting the Series from the DataFrame and grouping by the Series values directly:</p> <pre><code>df[('exp0', 'rnd0')].groupby(df[('grp1', 'cat')]).unique() # ^ select specific column ^ pass the Series to groupby directly </code></pre> <pre><code>(grp1, cat) A [66, 83] B [96, 46] Name: (exp0, rnd0), dtype: object </code></pre>
python|pandas|dataframe|pivot-table|multi-index
2
651
68,982,444
select_dtypes(include=['number']) does not select the first column
<p>I am reading an excel file like so:</p> <pre><code>df = pd.read_excel(r&quot;C:file.xlsx&quot;, sheet_name='first') </code></pre> <p><code>df</code></p> <pre><code>category 1 2 3 4 A 105 200 54 49 B 18 9 8 74 </code></pre> <pre><code># then I want to multiply the numbers by a 1000, so I use this df[df.select_dtypes(include=['number']).columns] *= 1000 </code></pre> <p><code>df</code></p> <pre><code>category 1 2 3 4 A 105 200000 54000 49000 B 18 9000 8000 74000 </code></pre> <p>But it ends up multiplying everything but the first column, then I check if it is <code>int</code>:</p> <pre><code>df[df.select_dtypes(include=['number']).columns] category 2 3 4 A 200 54 49 B 9 8 74 </code></pre> <p>But I am sure that all of the columns are type int:</p> <pre><code>for item in df.columns: print(item, type(item)) </code></pre> <pre><code>category &lt;class 'str'&gt; 1 &lt;class 'int'&gt; 2 &lt;class 'int'&gt; 3 &lt;class 'int'&gt; 4 &lt;class 'int'&gt; </code></pre> <p>So why is <code>df[df.select_dtypes(include=['number']).columns]</code> not selecting the first column?</p>
<p>The first column is filled with numbers saved as strings; check it with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.dtypes.html" rel="nofollow noreferrer"><code>DataFrame.dtypes</code></a>:</p> <pre><code>print (df.dtypes)
category    object
1           object
2            int64
3            int64
4            int64
dtype: object
</code></pre> <p>Your loop only checks the type of the column <em>labels</em>; to check the dtype of the column <em>values</em>, use:</p> <pre><code>for item in df.columns:
    print(item, df[item].dtype)
</code></pre>
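<p>If the goal is then to have <code>select_dtypes(include=['number'])</code> pick that column up, a possible follow-up (an illustrative sketch, assuming the strings are plain digits) is to convert the offending columns with <code>pd.to_numeric</code> before multiplying:</p> <pre><code>import pandas as pd

# convert every object column except the text column to numbers
for col in df.columns:
    if col != 'category' and df[col].dtype == object:
        df[col] = pd.to_numeric(df[col], errors='coerce')

df[df.select_dtypes(include=['number']).columns] *= 1000
</code></pre>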
python|pandas
1
652
44,724,228
Efficiently check rows that match certain values in Pandas DataFrame and add it to another dataframe
<p>I have a dataframe (called: data) which has a list of customers and their purchases - it looks like this:</p> <pre><code>ID  product
1   orange
1   banana
2   apple
2   orange
2   banana
3   banana
3   apple
4   apple
5   apple
5   orange
5   banana
</code></pre> <p>What I would like to do is generate a matrix where the indexes are the IDs of the customers and the columns are the products, and fill the matrix with 1 if the customer purchased the product or 0 if they didn't. The final matrix will look like this:</p> <p><a href="https://i.stack.imgur.com/LF0B8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LF0B8.png" alt="enter image description here"></a></p> <p>I have done it, but it took too long as I'm dealing with around 20,000 customers and more than 3,000 products (the estimated time to complete is around 4 days!).</p> <p>Here is my code:</p> <pre><code>df_matrix = pd.DataFrame(index = customers, columns = products)

for indx in df_matrix.index:
    for col in df_matrix.columns:
        if ((data['ID'] == indx) &amp; (data['product'] == col)).any() == True:
            df_matrix.loc[indx,col] = 1
</code></pre>
<p><code>pd.get_dummies</code> my friend</p> <p>have a look here <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.get_dummies.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/generated/pandas.get_dummies.html</a></p>
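<p>As a concrete sketch (based on the sample data in the question, not code from the original answer), either <code>pd.crosstab</code> or <code>get_dummies</code> followed by a groupby builds the 0/1 purchase matrix in one shot:</p> <pre><code>import pandas as pd

data = pd.DataFrame({'ID': [1, 1, 2, 2, 2, 3, 3, 4, 5, 5, 5],
                     'product': ['orange', 'banana', 'apple', 'orange', 'banana',
                                 'banana', 'apple', 'apple', 'apple', 'orange', 'banana']})

# option 1: crosstab of customer vs product, capped at 1
df_matrix = pd.crosstab(data['ID'], data['product']).clip(upper=1)

# option 2: one-hot encode the product column and aggregate per customer
df_matrix = pd.get_dummies(data.set_index('ID')['product']).groupby(level=0).max()
</code></pre>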
python|pandas|dataframe
3
653
44,791,932
Convert string tensor to lower case
<p>Is there any way to convert a string tensor to lower case, without evaluating in the session ? Some sort of <code>tf.string_to_lower</code> op ?</p> <p>More specifically, I am reading data from <code>tfrecords</code> files, so my data is made of tensors. I then want to use <code>tf.contrib.lookup.index_table_from_*</code> to lookup indices for words in the data, and I need this to be case-insensitive. Lowering the data before writing it to <code>tfrecords</code> is not an option, as it needs to be kept in original format. One option would be to store both original and lowered, but I'd like to avoid this if possible.</p>
<p>Here's an implementation with tensorflow ops:</p> <pre><code>def lowercase(s):
    # lookup tables for the upper-case and lower-case ASCII letters
    upchars = tf.constant([chr(i) for i in range(65, 91)], dtype=tf.string)
    lchars = tf.constant([chr(i) for i in range(97, 123)], dtype=tf.string)
    upcharslut = tf.contrib.lookup.index_table_from_tensor(mapping=upchars, num_oov_buckets=1, default_value=-1)
    splitchars = tf.string_split(tf.reshape(s, [-1]), delimiter="").values
    upcharinds = upcharslut.lookup(splitchars)
    return tf.reduce_join(tf.map_fn(lambda x: tf.cond(x[0] &gt; 25, lambda: x[1], lambda: lchars[x[0]]),
                                    (upcharinds, splitchars), dtype=tf.string))

if __name__ == "__main__":
    s = "komoDO DragoN "
    sess = tf.Session()
    x = lowercase(s)
    sess.run(tf.global_variables_initializer())
    sess.run(tf.tables_initializer())
    print(sess.run([x]))
</code></pre> <p>returns <code>[b'komodo dragon ']</code></p>
tensorflow
1
654
60,896,818
question about asterik in curve fitting code
<p>In the following example, which tries to curve fit a sigmoid function to data, I don't understand what the * in <code>*popt</code> on line 11 means.</p> <pre><code>from scipy.optimize import curve_fit
import numpy as np
import matplotlib.pyplot as plt
def sigmoid(x, Beta_1, Beta_2):
    y = 1 / (1 + np.exp(-Beta_1*(x-Beta_2)))
    return y
popt, pcov = curve_fit(sigmoid, xdata, ydata)
x = np.linspace(1960, 2015, 55)
x = x/max(x)
plt.figure(figsize=(8,5))
y = sigmoid(x, *popt)
plt.plot(xdata, ydata, 'ro', label='data')
plt.plot(x,y, linewidth=3.0, label='fit')
plt.legend(loc='best')
plt.ylabel('GDP')
plt.xlabel('Year')
plt.show()
</code></pre> <p>Thank you in advance.</p>
<p>The <code>curve_fit</code> method returns <code>popt</code> as a list of values, in this case, a list of 2 values (optimal values for the parameters).</p> <p>Adding the <code>*</code> before a list splits the list into its values each assigned to a parameter of the function.</p> <p>Example</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; # Sample list &gt;&gt;&gt; lst = [1, 2, 3] &gt;&gt;&gt; lst [1, 2, 3] &gt;&gt;&gt; # Creating a function that requires 3 parameters &gt;&gt;&gt; def add(x, y, z): ... return x + y + z ... &gt;&gt;&gt; add(*lst) 6 &gt;&gt;&gt; add(lst) Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; TypeError: add() missing 2 required positional arguments: 'y' and 'z' </code></pre>
numpy|scikit-learn|user-defined-functions
1
655
60,859,500
How to fix Python/Pandas too many indexers error?
<p>I have tried the run the code below in Python 3.7 to loop through every combination of data columns in the dataframe 'Rawdata' to create a subset of regression models using statsmodel library and returns the best one. The code does not throw up any errors until I run the last line: best_subset(X, Y). It returns : "IndexingError: Too many indexers".</p> <p>Any idea what's wrong/how to fix?</p> <p>Would be great if someone can help! Thanks</p> <pre class="lang-py prettyprint-override"><code>#Data Rawdata = pd.read_csv(r'C:\Users\Lucas\Documents\sample.csv') #Main code def best_subset(X, Y): n_features = X.shape[1] subsets = chain.from_iterable(combinations(range(n_features), k+1) for k in range(n_features)) best_score = -np.inf best_subset = None for subset in subsets: lin_reg = sm.OLS(Y, X.iloc[:, subset]).fit() score = lin_reg.rsquared_adj if score &gt; best_score: best_score, best_subset = score, subset return best_subset, best_score #Define data inputs and call code above X = Rawdata.iloc[:, 1:10] Y = Rawdata.iloc[:, 0] #To return best model best_subset(X, Y) </code></pre>
<p>Your looping variable subset can be a tuple of length n_features. If, for example, the subset is (0, 1), your regression reads as</p> <pre><code>lin_reg = sm.OLS(Y, X.iloc[:, (0, 1)]).fit() </code></pre> <p>Pandas does not know how to handle this (see <a href="https://stackoverflow.com/questions/30781037/too-many-indexers-with-dataframe-loc">here</a>). One solution is to convert the type of subset from tuple to a list:</p> <pre><code>for subset in subsets: subset = list(subset) lin_reg = sm.OLS(Y, X.iloc[:, subset]).fit() </code></pre>
python-3.x|pandas|python-3.7|statsmodels
0
656
71,742,794
Validation of a conditional rule in a column and duplication in another
<p>I have a database of products and I have to validate if the product ID repeats in a column and also validate if it's 'True' or 'False' in another column. Then set all to 'True' if at least one of the duplicated rows are 'True'.</p> <p>I found a way in this link: <a href="https://stackoverflow.com/questions/67164011/create-rule-for-sets-of-duplicates-in-a-pandas-dataframe">Create rule for sets of duplicates in a Pandas Dataframe</a> using the second answer, but it's spending too much time doing the process in my database, something like 8min.</p> <p>Does someone know how to do that in a faster way?</p> <p>Example:</p> <pre><code>ID Active 01 False 01 False 01 True 02 False 02 False 03 True 03 False 03 False </code></pre> <p>And it should be like this in the end:</p> <pre><code>ID Active 01 True 01 True 01 True 02 False 02 False 03 True 03 True 03 True </code></pre>
<p>You can conveniently use <code>max</code> in a <a href="https://pandas.pydata.org/docs/reference/api/pandas.core.groupby.DataFrameGroupBy.transform.html" rel="nofollow noreferrer"><code>groupby.transform</code></a>:</p> <pre><code>df['Active'] = df.groupby('ID')['Active'].transform('max') </code></pre> <p>Or <code>any</code> that is a bit faster:</p> <pre><code>df['Active'] = df.groupby('ID')['Active'].transform('any') </code></pre> <p>Output:</p> <pre><code> ID Active 0 1 True 1 1 True 2 1 True 3 2 False 4 2 False 5 3 True 6 3 True 7 3 True </code></pre>
python|pandas
2
657
71,688,024
How to calculate conditional aggregate measure for each row in dataframe?
<p>I have a table like this...</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Date</th> <th>PlayerId</th> <th>Goals</th> </tr> </thead> <tbody> <tr> <td>June 1</td> <td>A</td> <td>1</td> </tr> <tr> <td>June 14</td> <td>A</td> <td>1</td> </tr> <tr> <td>June 15</td> <td>B</td> <td>2</td> </tr> <tr> <td>June 28</td> <td>A</td> <td>1</td> </tr> <tr> <td>July 6th</td> <td>B</td> <td>0</td> </tr> <tr> <td>July 17th</td> <td>A</td> <td>1</td> </tr> </tbody> </table> </div> <p>I would like to calculate the number of goals a player scored in the 30 days prior (NOT 30 games). The final result should look like...</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Date</th> <th>PlayerId</th> <th>Goals</th> <th>Goals_Prev_30</th> </tr> </thead> <tbody> <tr> <td>June 1</td> <td>A</td> <td>1</td> <td>0</td> </tr> <tr> <td>June 14</td> <td>A</td> <td>1</td> <td>1</td> </tr> <tr> <td>June 15</td> <td>B</td> <td>2</td> <td>0</td> </tr> <tr> <td>June 28</td> <td>A</td> <td>1</td> <td>2</td> </tr> <tr> <td>July 6th</td> <td>B</td> <td>0</td> <td>2</td> </tr> <tr> <td>July 17th</td> <td>A</td> <td>1</td> <td>1</td> </tr> </tbody> </table> </div> <p>I created a for loop that identifies a single row in the dataframe, filters the dataframe by characteristics of that row, calculates the sum of goals in the filtered dataframe, and appends it to a list, which is finally assigned to the Goals_Prev_30 column. The code looks like...</p> <pre><code>goals_prev_30 = []

for i in range(len(df)):
    row = df.iloc[i]
    filtered_df = df[(df['Date'] &lt; row['Date']) &amp;
                     (df['Date'] &gt;= row['Date'] - pd.to_timedelta(30, unit='d')) &amp;
                     (df['PlayerId'] == row['PlayerId'])]
    total = filtered_df['Goals'].sum()
    goals_prev_30.append(total)

df['Goals_Prev_30'] = goals_prev_30
</code></pre> <p>This solution works, but it's slow. It processes around 30 rows a second, which is not viable because I have multiple similar measures and over 1.2M rows. That means it would take around 11hrs per measure to complete.</p> <p>How can this problem be solved in a more efficient manner?</p>
<p>I changed your solution to a custom function applied per group: a mask is created by broadcasting the dates against each other, and the values of the <code>Goals</code> column are summed per group where the mask matches:</p> <pre><code>#if necessary
#df['Date'] = pd.to_datetime(df['Date'], format='%B %d')

def f(x):
    d1 = x['Date']
    d2 = d1 - pd.to_timedelta(30, unit='d')
    a1 = d1.to_numpy()
    a2 = d2.to_numpy()
    m = (a1 &lt; a1[:, None]) &amp; (a1 &gt;= a2[:, None])
    x['Goals_Prev_30'] = np.where(m, x['Goals'], 0).sum(axis=1)
    return x

df = df.groupby('PlayerId').apply(f)
</code></pre> <hr /> <pre><code>print (df)
        Date PlayerId  Goals  Goals_Prev_30
0 1900-06-01        A      1              0
1 1900-06-14        A      1              1
2 1900-06-15        B      2              0
3 1900-06-28        A      1              2
4 1900-07-06        B      0              2
5 1900-07-17        A      1              1
</code></pre>
python|pandas|conditional-statements|aggregate
1
658
71,576,695
Parse a JSON column in a df and extract specific key value
<p>I have a pandas DataFrame containing one column with a nested JSON dict. I want to normalize the JSON column ('media') and extract the value for the key 'url' when it is present. The 'media' json payload has three types of possible media objects all included in the example data set. I need to extract from the 'MessageMediaWebPage' object, only.</p> <p>The typical error (although there is some variation) after using</p> <pre><code># Using JSON normalize function pd.json_normalize(df['media'], max_level=1) </code></pre> <blockquote> <p>AttributeError: 'str' object has no attribute 'values'</p> </blockquote> <p>The full error is listed below</p> <p>Data and code example in a JSON format. The example data set only consists of three records but is too large to post directly. The link is to my git:</p> <pre><code># The json is nested # The full data set consists of several columns and 40K + records. This is just a small slice. df = pd.read_json('https://raw.githubusercontent.com/whitedl/telegram/main/df_3.json', dtype = {'telegram_id':object}) df.info() pd.json_normalize(df['media'], max_level=1) </code></pre> <p>As background I have tried the following solutions:</p> <ul> <li><a href="https://stackoverflow.com/questions/61546983/how-to-parse-a-json-column-in-a-df-where-we-append-new-column-using-selected-key">how to parse a json column in a df where we append new column using selected keys</a></li> <li><a href="https://stackoverflow.com/questions/49671693/pandas-dataframe-normalize-one-json-column-and-merge-with-other-columns">pandas DataFrame: normalize one JSON column and merge with other columns</a></li> <li><a href="https://stackoverflow.com/questions/45383702/how-to-use-json-normalize-from-pandas">How to use json_normalize from Pandas</a></li> </ul>
<p>The problem is that the values in the media column are strings. You can apply <code>ast.literal_eval</code> to the media column to convert each value to a python dict before normalizing.</p> <pre class="lang-py prettyprint-override"><code>import ast

pd.json_normalize(df['media'].apply(ast.literal_eval), max_level=1)
</code></pre>
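<p>Going one step further (an illustrative sketch, not part of the original answer — the <code>'_'</code> discriminator and the <code>webpage.url</code> layout are assumptions about the Telegram payload), the url can then be pulled only from the <code>MessageMediaWebPage</code> objects:</p> <pre><code>import ast
import pandas as pd

media = df['media'].apply(ast.literal_eval)

def extract_url(m):
    # only MessageMediaWebPage objects are expected to carry a url
    if isinstance(m, dict) and m.get('_') == 'MessageMediaWebPage':
        return m.get('webpage', {}).get('url')
    return None

df['url'] = media.apply(extract_url)
</code></pre>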
python|json|pandas|dataframe|json-normalize
1
659
71,668,181
Add a column in a Pandas dataframe and input a certain name
<p>I am working with a dataset using Pandas (Python/Jupyter notebook).</p> <p>I want to search one column in the imported dataset and, if certain text appears in it, add a new column and put a particular name in it. For example:</p> <p>I want to search the following dataset so that if Tom appears in the set, a column named Team is added and Team 1 is entered at the end of the row where Tom is found; if Bill appears in the set, Team 2 is entered under the new column.</p> <pre><code>Name,         Age,  Sport,
Tom Smith,    29,   Rugby,
Billy John,   21,   Rugby,
Henry Clips,  25,   Rugby,
</code></pre>
<p>Jupyter notebook is just an interface which allows you to combine python lines with markup into something that is easy to read. As for your problem, it might help to be a little more descriptive.</p> <p>What are you using to create the table? Is it the pandas library? If it is, you could do something like</p> <pre><code>import numpy as np

df[&quot;Team&quot;] = np.nan  # add a new column called Team
df.loc[df[&quot;Name&quot;].str.contains(&quot;Tom&quot;), &quot;Team&quot;] = &quot;Team 1&quot;   # where the name contains Tom, make it Team 1
df.loc[df[&quot;Name&quot;].str.contains(&quot;Bill&quot;), &quot;Team&quot;] = &quot;Team 2&quot;  # and so on
</code></pre>
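<p>A vectorized alternative (a sketch, not part of the original answer) that assigns everything in one pass is <code>numpy.select</code>:</p> <pre><code>import numpy as np
import pandas as pd

conditions = [df[&quot;Name&quot;].str.contains(&quot;Tom&quot;), df[&quot;Name&quot;].str.contains(&quot;Bill&quot;)]
df[&quot;Team&quot;] = np.select(conditions, [&quot;Team 1&quot;, &quot;Team 2&quot;], default=&quot;&quot;)
</code></pre>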
python|pandas|dataframe|jupyter-notebook|jupyter
2
660
42,578,096
Low accuracy in Deep Neural Network with Tensorflow
<p>I am following the third Jupyter notebook on <a href="https://github.com/jdwittenauer/ipython-notebooks/blob/master/notebooks/tensorflow/Tensorflow-3-Regularization.ipynb" rel="nofollow noreferrer">Tensorflow examples</a>.</p> <p>Running problem 4, I tried to implement a function which builds automatically a number of hidden layers, without manually coding the configuration of each layer.</p> <p>However, the model runs providing very low accuracy (10%) so I thought that maybe such function could not be compatible with the graph builder of Tensorflow.</p> <p>My code is the following:</p> <pre><code>def hlayers(n_layers, n_nodes, i_size, a, r=0, keep_p=1): for i in range(n_layers): if i &gt; 0: i_size = n_nodes w = tf.Variable(tf.truncated_normal([i_size, n_nodes]), name=f'W{i}') b = tf.Variable(tf.zeros([n_nodes]), name=f'b{i}') pa = tf.nn.relu(tf.add(tf.matmul(a, w), b)) a = tf.nn.dropout(pa, keep_prob=keep_p, name=f'a{i}') r += tf.nn.l2_loss(w, name=f'r{i}') return a, r batch_size = 128 num_nodes = 1024 beta = 0.01 graph = tf.Graph() with graph.as_default(): # Input data. For the training data, we use a placeholder that will be fed # at run time with a training minibatch. tf_train_dataset = tf.placeholder( tf.float32, shape=(batch_size, image_size * image_size), name='Dataset') tf_train_labels = tf.placeholder( tf.float32, shape=(batch_size, num_labels), name='Labels') tf_valid_dataset = tf.constant(valid_dataset) tf_test_dataset = tf.constant(test_dataset) keep_p = tf.placeholder(tf.float32, name='KeepProb') # Hidden layers. a, r = hlayers( n_layers=3, n_nodes=num_nodes, i_size=image_size * image_size, a=tf_train_dataset, keep_p=keep_p) # Output layer. wo = tf.Variable(tf.truncated_normal([num_nodes, num_labels]), name='Wo') bo = tf.Variable(tf.zeros([num_labels]), name='bo') logits = tf.add(tf.matmul(a, wo), bo, name='Logits') loss = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits( labels=tf_train_labels, logits=logits)) # Regularizer. regularizers = tf.add(r, tf.nn.l2_loss(wo)) loss = tf.reduce_mean(loss + beta * regularizers, name='Loss') # Optimizer. optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss) # Predictions for the training, validation, and test data. train_prediction = tf.nn.softmax(logits) a, _ = hlayers( n_layers=3, n_nodes=num_nodes, i_size=image_size * image_size, a=tf_valid_dataset) valid_prediction = tf.nn.softmax(tf.add(tf.matmul(a, wo), bo)) a, _ = hlayers( n_layers=3, n_nodes=num_nodes, i_size=image_size * image_size, a=tf_test_dataset) test_prediction = tf.nn.softmax(tf.add(tf.matmul(a, wo), bo)) num_steps = 3001 with tf.Session(graph=graph) as session: tf.global_variables_initializer().run() print("Initialized") for step in range(num_steps): # Pick an offset within the training data, which has been randomized. # Note: we could use better randomization across epochs. offset = (step * batch_size) % (train_labels.shape[0] - batch_size) # Generate a minibatch. batch_data = train_dataset[offset:(offset + batch_size), :] batch_labels = train_labels[offset:(offset + batch_size), :] # Prepare a dictionary telling the session where to feed the minibatch. # The key of the dictionary is the placeholder node of the graph to be fed, # and the value is the numpy array to feed to it. 
feed_dict = { tf_train_dataset : batch_data, tf_train_labels : batch_labels, keep_p : 0.5} _, l, predictions = session.run( [optimizer, loss, train_prediction], feed_dict=feed_dict) if (step % 500 == 0): print("Minibatch loss at step %d: %f" % (step, l)) print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels)) print("Validation accuracy: %.1f%%" % accuracy( valid_prediction.eval(), valid_labels)) print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels)) </code></pre>
<p>The weight regularization term gets stronger as you add more layers, because the L2 penalty sums over the weights of every layer while <code>beta</code> stays the same. Therefore you could try to reduce the regularization (use a smaller <code>beta</code>) and see if the accuracy increases.</p>
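<p>As an illustrative tweak (reusing the variable names from the question's graph; only the regularization strength changes):</p> <pre><code>beta = 0.001  # was 0.01; with three hidden layers the L2 term sums over many more weights

# same loss construction as in the question, just with the smaller beta
regularizers = tf.add(r, tf.nn.l2_loss(wo))
loss = tf.reduce_mean(loss + beta * regularizers, name='Loss')
</code></pre>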
python|machine-learning|tensorflow|deep-learning
0
661
42,135,056
How to get only nonzero values from Sparse Tensor
<p>Utilizing TensorFlow's HashTable lookup implementation i get my SparseTensor back with the default value supplied. I'd like to clean that off and get a final SparseTensor without the default value.</p> <p>How can I clean that default value? I don't mind what the default value will be in order to make this happen. 0 is fine and so is -1.</p>
<p><code>tf.sparse_retain</code> should work:</p> <pre><code>def sparse_remove(sparse_tensor, remove_value=0.):
    return tf.sparse_retain(sparse_tensor, tf.not_equal(sparse_tensor.values, remove_value))
</code></pre> <p>As an example:</p> <pre><code>import tensorflow as tf

a = tf.SparseTensor(indices=[[1, 2], [2, 2]], values=[0., 1.], shape=[3, 3])

with tf.Session() as session:
    print(session.run([a, sparse_remove(a)]))
</code></pre> <p>Prints (I've reformatted it slightly):</p> <pre><code>[SparseTensorValue(indices=array([[1, 2], [2, 2]]), values=array([ 0.,  1.], dtype=float32), shape=array([3, 3])),
 SparseTensorValue(indices=array([[2, 2]]), values=array([ 1.], dtype=float32), shape=array([3, 3]))]
</code></pre>
tensorflow
0
662
42,514,206
TensorFlow : Enqueuing and dequeuing a queue from multiple threads
<p>The problem I am trying to solve is as follows : I have a list <code>trainimgs</code> of filenames. I have defined a </p> <ul> <li><code>tf.RandomShuffleQueue</code> with its <code>capacity=len(trainimgs)</code> and <code>min_after_dequeue=0</code>. </li> <li>This <code>tf.RandomShuffleQueue</code> is expected to be filled by <code>trainimgs</code> for a specified <code>epochlimit</code> number of times. </li> <li>A number of threads are expected to work in parallel. Each thread dequeues an element from the <code>tf.RandomShuffleQueue</code> and does some operations on it and enqueues it to another queue. I have got that part right.</li> <li>However once <code>1 epoch</code> of <code>trainimgs</code> have been processed and the <code>tf.RandomShuffleQueue</code> is empty, provided that the current epoch <code>e &lt; epochlimit</code>, the queue must again be filled up and the threads must work again.</li> </ul> <p>The good news is : I have got it working in a certain case (See <strong>PS</strong> at the end !!)</p> <p>The bad news is : I think that there is a better way of doing this.</p> <p>The method I am using to do this now is as follows (I have simplified the functions and have removed e image processing based preprocessing and subsequent enqueuing but the heart of the processing remains the same !!) : </p> <pre><code>with tf.Session() as sess: train_filename_queue = tf.RandomShuffleQueue(capacity=len(trainimgs), min_after_dequeue=0, dtypes=tf.string, seed=0) queue_size = train_filename_queue.size() trainimgtensor = tf.constant(trainimgs) close_queue = train_filename_queue.close() epoch = tf.Variable(initial_value=1, trainable=False, dtype=tf.int32) incrementepoch = tf.assign(epoch, epoch + 1, use_locking=True) supplyimages = train_filename_queue.enqueue_many(trainimgtensor) value = train_filename_queue.dequeue() init_op = tf.group(tf.global_variables_initializer(), tf.local_variables_initializer()) sess.run(init_op) coord = tf.train.Coordinator() tf.train.start_queue_runners(sess, coord) sess.run(supplyimages) lock = threading.Lock() threads = [threading.Thread(target=work, args=(coord, value, sess, epoch, incrementepoch, supplyimages, queue_size, lock, close_queue)) for i in range(200)] for t in threads: t.start() coord.join(threads) </code></pre> <p>The work function is as follows : </p> <pre><code>def work(coord, val, sess, epoch, incrementepoch, supplyimg, q, lock,\ close_op): while not coord.should_stop(): if sess.run(q) &gt; 0: filename, currepoch = sess.run([val, epoch]) filename = filename.decode(encoding='UTF-8') print(filename + ' ' + str(currepoch)) elif sess.run(epoch) &lt; 2: lock.acquire() try: if sess.run(q) == 0: print("The previous epoch = %d"%(sess.run(epoch))) sess.run([incrementepoch, supplyimg]) sz = sess.run(q) print("The new epoch = %d"%(sess.run(epoch))) print("The new queue size = %d"%(sz)) finally: lock.release() else: try: sess.run(close_op) except tf.errors.CancelledError: print('Queue already closed.') coord.request_stop() return None </code></pre> <p>So, although this works, I have a feeling that there is a better and cleaner way to achieve this. So, in a nutshell my questions are :</p> <ol> <li>Is there a simpler and cleaner way of achieving this task in TensorFlow ?</li> <li>Is there any problem with this code's logic ? I am not very experienced with multithreading scenarios, so any obvious faults which have skipped my attention would be very helpful to me.</li> </ol> <p>P.S : It seems that this code is not Perfect after all. 
When I ran with 1.2 million images and 200 threads, it ran. However when I run it for 10 images and 20 threads, it gives the following error :</p> <pre><code>CancelledError (see above for traceback): RandomShuffleQueue '_0_random_shuffle_queue' is closed. [[Node: random_shuffle_queue_EnqueueMany = QueueEnqueueManyV2[Tcomponents=[DT_STRING], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](random_shuffle_queue, Const)]] </code></pre> <p>I thought I got that covered by <code>except tf.errors.CancelledError</code>. What the hell is going on here ?</p>
<p>I finally found out the answer. The problem was that multiple threads were clashing together on various points in the <code>work()</code> function. The following <code>work()</code> function works perfectly.</p> <pre><code>def work(coord, val, sess, epoch, maxepochs, incrementepoch, supplyimg, q, lock, close_op): print('I am thread number %s'%(threading.current_thread().name)) print('I can see a queue with size %d'%(sess.run(q))) while not coord.should_stop(): lock.acquire() if sess.run(q) &gt; 0: filename, currepoch = sess.run([val, epoch]) filename = filename.decode(encoding='UTF-8') tid = threading.current_thread().name print(filename + ' ' + str(currepoch) + ' thread ' + str(tid)) elif sess.run(epoch) &lt; maxepochs: print('Thread %s has acquired the lock'%(threading.current_thread().name)) print("The previous epoch = %d"%(sess.run(epoch))) sess.run([incrementepoch, supplyimg]) sz = sess.run(q) print("The new epoch = %d"%(sess.run(epoch))) print("The new queue size = %d"%(sz)) else: coord.request_stop() lock.release() return None </code></pre>
tensorflow|python-multithreading
2
663
42,451,155
Pandas merge column after json_normalize
<p>I have a list of dicts in a single column but for each row, a different <em>post_id</em> in a separate column. I've gotten the dataframe I am looking for via <code>pd.concat(json_normalize(d) for d in data['comments'])</code> but I'd like to add another column to this from the original dataframe to attach the original <em>post_id</em>.</p> <p><strong>Original</strong></p> <pre><code>'post_id' 'comments' 123456 [{'from':'Bob','present':True}, {'from':'Jon', 'present':False}] </code></pre> <p><strong>Current Result</strong> (after <code>json_normalize</code>)</p> <pre><code>comments.from comments.present Bob True Jon False </code></pre> <p><strong>Desired Result</strong></p> <pre><code>comments.from comments.present post_id Bob True 123456 Jon False 123456 </code></pre> <p>Thanks for any help</p>
<p>Consider first outputting dataframe <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_json.html" rel="nofollow noreferrer"><code>to_json</code></a> then run <a href="http://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.io.json.json_normalize.html" rel="nofollow noreferrer"><code>json_normalize</code></a>:</p> <pre><code>import json from pandas import DataFrame from pandas.io.json import json_normalize df = DataFrame({'post_id':123456, 'comments': [{'from':'Bob','present':True}, {'from':'Jon', 'present':False}]}) df_json = df.to_json(orient='records') finaldf = json_normalize(json.loads(df_json), meta=['post_id']) print(finaldf) # comments.from comments.present post_id # 0 Bob True 123456 # 1 Jon False 123456 </code></pre>
python|pandas
4
664
42,542,790
pandas slice rows based on joint condition
<p>consider the below dataframe -df</p> <pre><code> one two three four five six seven eight 0 0.1 1.1 2.2 3.3 3.6 4.1 0.0 0.0 1 0.1 2.1 2.3 3.2 3.7 4.3 0.0 0.0 2 1.6 0.0 0.0 0.0 0.0 0.0 0.0 0.0 3 0.1 1.2 2.5 3.7 4.4 0.0 0.0 0.0 4 1.7 2.1 0.0 0.0 0.0 0.0 0.0 0.0 5 2.1 3.2 0.0 0.0 0.0 0.0 0.0 0.0 6 2.1 2.3 3.2 4.3 0.0 0.0 0.0 0.0 7 2.2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 8 0.1 1.8 0.0 0.0 0.0 0.0 0.0 0.0 9 1.6 0.0 0.0 0.0 0.0 0.0 0.0 0.0 </code></pre> <p>i want to select all rows where <strong>any</strong> columns value is '3.2' but at the same time the selected rows should not have values '0.1' or '1.2'</p> <p>I can able to get the first part with the below query</p> <pre><code>df[df.values == 3.2] </code></pre> <p>but cannot combine this with the second part of the query (the joint != condition)</p> <p>i also get the following error</p> <blockquote> <p>DeprecationWarning: elementwise != comparison failed; this will raise an error in the future.</p> </blockquote> <p>on the larger data set (but not on the smaller replica) when trying the below</p> <pre><code>df[df.values != [0.1,1.2]] </code></pre> <p>//edit:</p> <p>@pensen, here is the output, rows 1, 15, 27, 35 have values '0.1' though as per the condition they should have been filtered. </p> <pre><code>contains = df.eq(3.2).any(axis=1) not_contains = ~df.isin([0.1,1.2]).any(axis=1) print(df[contains &amp; not_contains]) 0 1 2 3 4 5 6 7 1 0.1 2.1 3.2 0.0 0.0 0.0 0.0 0.0 15 0.1 1.1 2.2 3.2 3.3 3.6 3.7 0.0 27 0.1 2.1 2.3 3.2 3.6 3.7 4.3 0.0 31 3.2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 35 0.1 1.7 2.1 3.2 3.6 3.7 4.3 0.0 </code></pre> <p>here is the original dataset from 0:36 rows to replicate the above output</p> <pre><code> 0 1 2 3 4 5 6 7 0 4.1 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1 0.1 2.1 3.2 0.0 0.0 0.0 0.0 0.0 2 0.1 2.4 2.5 0.0 0.0 0.0 0.0 0.0 3 2.4 0.0 0.0 0.0 0.0 0.0 0.0 0.0 4 4.4 0.0 0.0 0.0 0.0 0.0 0.0 0.0 5 1.1 0.0 0.0 0.0 0.0 0.0 0.0 0.0 6 0.1 2.1 4.1 0.0 0.0 0.0 0.0 0.0 7 4.4 0.0 0.0 0.0 0.0 0.0 0.0 0.0 8 1.7 0.0 0.0 0.0 0.0 0.0 0.0 0.0 9 2.2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 10 1.1 0.0 0.0 0.0 0.0 0.0 0.0 0.0 11 1.1 4.1 0.0 0.0 0.0 0.0 0.0 0.0 12 0.1 2.2 3.3 3.6 0.0 0.0 0.0 0.0 13 0.1 1.8 3.3 0.0 0.0 0.0 0.0 0.0 14 0.1 1.2 1.3 2.5 3.7 4.2 0.0 0.0 15 0.1 1.1 2.2 3.2 3.3 3.6 3.7 0.0 16 1.3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 17 1.3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 18 1.3 2.5 0.0 0.0 0.0 0.0 0.0 0.0 19 0.1 1.2 2.5 3.7 4.4 0.0 0.0 0.0 20 1.2 4.4 0.0 0.0 0.0 0.0 0.0 0.0 21 4.3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 22 1.1 0.0 0.0 0.0 0.0 0.0 0.0 0.0 23 0.1 2.2 2.4 2.5 3.7 0.0 0.0 0.0 24 0.1 2.4 4.3 0.0 0.0 0.0 0.0 0.0 25 1.7 0.0 0.0 0.0 0.0 0.0 0.0 0.0 26 0.1 1.1 4.1 0.0 0.0 0.0 0.0 0.0 27 0.1 2.1 2.3 3.2 3.6 3.7 4.3 0.0 28 1.4 2.2 3.6 4.1 0.0 0.0 0.0 0.0 29 1.8 0.0 0.0 0.0 0.0 0.0 0.0 0.0 30 1.2 4.4 0.0 0.0 0.0 0.0 0.0 0.0 31 3.2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 32 3.6 4.1 0.0 0.0 0.0 0.0 0.0 0.0 33 2.1 2.4 0.0 0.0 0.0 0.0 0.0 0.0 34 0.1 1.8 0.0 0.0 0.0 0.0 0.0 0.0 35 0.1 1.7 2.1 3.2 3.6 3.7 4.3 0.0 </code></pre> <p>here is the <a href="https://www.dropbox.com/s/qky9kuqzxotvkvn/eq_universe.h5?dl=0" rel="nofollow noreferrer">link</a> to the actual dataset</p>
<p>You can do the following in short:</p> <pre><code>df.eq(3.2).any(axis=1) &amp; ~df.isin([0.1, 1.2]).any(axis=1) </code></pre> <p>Or here more explicitly:</p> <pre><code>contains = df.eq(3.2).any(axis=1) not_contains = ~df.isin([0.1,1.2]).any(axis=1) print(df[contains &amp; not_contains]) one two three four five six seven eight 5 2.1 3.2 0.0 0.0 0.0 0.0 0.0 0.0 6 2.1 2.3 3.2 4.3 0.0 0.0 0.0 0.0 </code></pre>
python|pandas|numpy
4
665
70,002,979
Create return dataframe from price dataframe
<p>I try to figure out an efficient way to create a DataFrame of returns (ReturnTable) based on prices (PriceTable). Is there a more efficient way than just iterating with a for loop over the columns?</p> <p>Here I have a small example:</p> <pre><code>import pandas as pd PriceTable = pd.DataFrame({ 'Dates':['2021-01-01', '2021-01-02', '2021-01-03', '2021-01-04'], 'Stock1':[10, 12, 13.5, 11.5], 'Stock2':[5, 5.5, 'NaN', 'NaN'], 'Stock3':['NaN', 9, 9.5, 10.5], 'Stock4':[20, 20, 19.5, 15]}) </code></pre> <p>I try to get a ReturnTable which is just the division of the price at time t and t-1.</p> <pre><code>ReturnTable = pd.DataFrame({ 'Dates':['2021-01-01', '2021-01-02', '2021-01-03', '2021-01-04'], 'Stock1':[0, 1.2, 1.125, 0.852], 'Stock2':[0, 1.1, 'NaN', 'NaN'], 'Stock3':['NaN', 0, 1.056, 1.105], 'Stock4':[0, 1, 0.975, 0.769]}) </code></pre> <p>Thanks a lot!</p>
<p>You could use <code>shift</code> and a division:</p> <p><em>NB. <code>'NaN'</code> are strings, so you first need to convert to <code>float('nan')</code></em></p> <pre><code>PriceTable = PriceTable.replace('NaN', float('nan')) cols = PriceTable.select_dtypes('number').columns ReturnTable = PriceTable.copy() ReturnTable[cols] = PriceTable[cols]/PriceTable[cols].shift() </code></pre> <p>output:</p> <pre><code> Dates Stock1 Stock2 Stock3 Stock4 0 2021-01-01 NaN NaN NaN NaN 1 2021-01-02 1.2000 1.1000 NaN 1.0000 2 2021-01-03 1.1250 NaN 1.0556 0.9750 3 2021-01-04 0.8519 NaN 1.1053 0.7692 </code></pre>
python|pandas|dataframe
2
666
69,915,768
Bias grad in linear regression remains small compared to weight grad, and intercept is not properly learnt
<p>I have thrown together a dummy model to showcase linear regression in pytorch, but I find that my model is not properly learning. It's doing well when it comes to learning the slope, but the intercept is not really budging. Printing out the grads at every epoch tells me that, indeed, the grad is a lot smaller for the bias. Why is that? How can I remedy it, so the intercept is properly learnt?</p> <p>This is what happens (a set to 0 to illustrate):</p> <p><a href="https://i.stack.imgur.com/RwiXS.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RwiXS.gif" alt="enter image description here" /></a></p> <pre><code># Create some dummy data: we establish a linear relationship between x and y a = np.random.rand() b = np.random.rand() a=0 x = np.linspace(start=0, stop=100, num=100) y = a * x + b # Now let's create some noisy measurements noise = np.random.normal(size=100) y_noisy = a * x + b + noise # What's the overall error? mse_actual = np.sum(np.power(y-y_noisy,2))/len(y) # Visualize plt.scatter(x,y_noisy, label='Measurements', alpha=.7) plt.plot(x,y,'r', label='Underlying') plt.legend() plt.show() # Let's learn something! inputs = torch.from_numpy(x).type(torch.FloatTensor).unsqueeze(1) targets = torch.from_numpy(y_noisy).type(torch.FloatTensor).unsqueeze(1) # This is our model (one hidden node + bias) model = torch.nn.Linear(1,1) optimizer = torch.optim.SGD(model.parameters(),lr=1e-5) loss_function = torch.nn.MSELoss() # What does it predict right now? shuffled_inputs, preds = [], [] for input, target in zip(inputs,targets): pred = model(input) shuffled_inputs.append(input.detach().numpy()[0]) preds.append(pred.detach().numpy()[0]) # Visualize plt.scatter(x,y_noisy, color='blue', label='Measurements', alpha=.7) plt.plot(shuffled_inputs, preds, color='orange', label='Predictions', alpha=.7) plt.plot(x,y,'r', label='Underlying') plt.legend() plt.show() # Let's train! epochs = 100 a_s, b_s = [], [] for epoch in range(epochs): # Reset optimizer values optimizer.zero_grad() # Predict values using current model preds = model(inputs) # How far off are we? loss = loss_function(targets,preds) # Calculate the gradient loss.backward() # Update model optimizer.step() for p in model.parameters(): print('Grads:', p.grad) # New parameters a_s.append(list(model.parameters())[0].item()) b_s.append(list(model.parameters())[1].item()) print(f&quot;Epoch {epoch+1} -- loss = {loss}&quot;) </code></pre>
<p>It's a bit of a non-answer, but just use more epochs or add more datapoints. When you have 100 datapoints with noise as significant as you had (if you just plot the initial data it becomes obvious) the model will struggle with MSE as a loss.</p> <p>I can't see your image (work blocked imgur...) but I found it looked bad if you didn't adjust the axes on your matplotlib plot because it was so zoomed in on the x axis (when a=0), so I zoomed out of that too:</p> <pre><code># Create some dummy data: we establish a linear relationship between x and y a = np.random.rand() b = np.random.rand() a=0 N = 10000 x = np.linspace(start=0, stop=100, num=N) y = a * x + b # Now let's create some noisy measurements noise = np.random.normal(size=N)*0.1 y_noisy = a * x + b + noise # What's the overall error? mse_actual = np.sum(np.power(y-y_noisy,2))/len(y) # Visualize plt.figure() plt.scatter(x,y_noisy, label='Measurements', alpha=.7) plt.plot(x,y,'r', label='Underlying') plt.legend() plt.show() # Let's learn something! inputs = torch.from_numpy(x).type(torch.FloatTensor).unsqueeze(1) targets = torch.from_numpy(y_noisy).type(torch.FloatTensor).unsqueeze(1) # This is our model (one hidden node + bias) model = torch.nn.Linear(1,1) optimizer = torch.optim.SGD(model.parameters(),lr=1e-5) loss_function = torch.nn.MSELoss() # Let's train! epochs = 50000 a_s, b_s = [], [] for epoch in range(epochs): # Reset optimizer values optimizer.zero_grad() # Predict values using current model preds = model(inputs) # How far off are we? loss = loss_function(targets,preds) # Calculate the gradient loss.backward() # Update model optimizer.step() #for p in model.parameters(): # print('Grads:', p.grad) # New parameters a_s.append(list(model.parameters())[0].item()) b_s.append(list(model.parameters())[1].item()) print(f&quot;Epoch {epoch+1} -- loss = {loss}&quot;) # What does it predict right now? shuffled_inputs, preds = [], [] for input, target in zip(inputs,targets): pred = model(input) shuffled_inputs.append(input.detach().numpy()[0]) preds.append(pred.detach().numpy()[0]) plt.figure() plt.scatter(x,y_noisy, color='blue', label='Measurements', alpha=.7) plt.plot(shuffled_inputs, preds, color='orange', label='Predictions', alpha=.7) plt.plot(x,y,'r', label='Underlying') plt.axis([0,100,y.min()-1,y.max()+1]) plt.legend() plt.show() </code></pre> <p><a href="https://i.stack.imgur.com/B1RM9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/B1RM9.png" alt="predictions" /></a></p>
python-3.x|deep-learning|neural-network|pytorch
1
667
69,999,194
Reverse operation of torch.unique
<p>In pytorch , unique (with return_count is True) operation do like this</p> <pre><code>[1,1,2,2,3,3] =&gt; ([1,2,3],[2,2,2]) </code></pre> <p>Are there any reverse operations of torch.unique() ? i.e Given a unique list and its count , return the original list like</p> <pre><code>([1,2,3],[2,2,2]) = &gt; [1,1,2,2,3,3] </code></pre>
<p>You probably want <a href="https://pytorch.org/docs/stable/generated/torch.repeat_interleave.html" rel="nofollow noreferrer">torch.repeat_interleave()</a>. You can use it like this:</p> <pre class="lang-py prettyprint-override"><code> &gt;&gt;&gt; x = torch.tensor([1, 1, 2, 3, 3, 3]) &gt;&gt;&gt; v, c = torch.unique(x, return_counts=True) &gt;&gt;&gt; v, c (tensor([1, 2, 3]), tensor([2, 1, 3])) &gt;&gt;&gt; torch.repeat_interleave(v, c) tensor([1, 1, 2, 3, 3, 3]) </code></pre>
pytorch
0
668
69,669,956
python, Iterate through a dataframe and check if a value from a list exists in a column in that dataframe
<p>I have a dataframe that contains a column with different submarkets from a city. I need to iterate through that column and check if the value in each row matches any of the entries in the list. The result will then be added as a column to the original dataframe.</p> <pre><code>submarket_list = ['South Financial District', 'Financial District', 'South of Market']

submarket_check = []

for index, row in test_df.iterrows():
    for j in submarket_list:
        if row['Submarket'] == j:
            submarket_check.append(&quot;yes&quot;)
        else:
            submarket_check.append(&quot;No&quot;)

test_df['Submarket Check'] = submarket_check
test_df
</code></pre>
<pre><code>test_df[&quot;Submarket&quot;].isin(submarket_list) </code></pre> <p>will give you a column of booleans. It's all you need</p>
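<p>For instance (a sketch, not from the original answer), the boolean column can be attached directly, or mapped back to the &quot;yes&quot;/&quot;No&quot; labels the loop was building:</p> <pre><code>test_df[&quot;Submarket Check&quot;] = test_df[&quot;Submarket&quot;].isin(submarket_list)

# or, to keep the original yes/No labels
test_df[&quot;Submarket Check&quot;] = test_df[&quot;Submarket&quot;].isin(submarket_list).map({True: &quot;yes&quot;, False: &quot;No&quot;})
</code></pre>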
python|pandas|dataframe
2
669
72,289,714
How to validate Hugging Face organization token?
<p><code>/whoami-2</code> endpoint returns <code>Unauthorized</code> for organization tokens, the ones that start with <code>api_...</code>.</p> <pre><code>$ curl https://huggingface.co/api/whoami-2 -H &quot;Authorization: Bearer api_&lt;token&gt;&quot; &gt; { &quot;error&quot;: &quot;Unauthorized&quot; } </code></pre> <p>At the same time I can use the same token to get private models. Should I use some other endpoint to validate the tokens?</p>
<p>you are requesting to the wrong endpoint. It seems the endpoint is updated and I got a similar error with sending requests to the older endpoint (<code>whoami</code>).</p> <p>just send the request to <code>whoami-v2</code> like:</p> <pre><code>$ curl https://huggingface.co/api/whoami-v2 -H &quot;Authorization: Bearer ${token}&quot; &gt; {&quot;type&quot;: &quot;&quot;,&quot;name&quot;:&quot;sadra&quot;,&quot;fullname&quot;:&quot;sadra&quot;,&quot;email&quot;:&quot;&quot;,&quot;emailVerified&quot;:true,&quot;plan&quot;:&quot;&quot;,&quot;periodEnd&quot;:,&quot;avatarUrl&quot;:&quot;&quot;,&quot;orgs&quot;:[]} </code></pre> <p>NB: According to <a href="https://huggingface.co/docs/api-inference/quicktour" rel="nofollow noreferrer">docs</a>, it seems old tokens were <code>api_XXX</code> or <code>api_org_XXX</code>, while all new ones start with <code>hf_XXX</code>. So, maybe creating a new token might be helpful if you still face issue with new endpoint.</p> <p>So, same thing happens for organization tokens:</p> <pre><code>$ curl https://huggingface.co/api/whoami-v2 -H &quot;Authorization: Bearer api_org_XXX&quot; &gt; {&quot;type&quot;:&quot;org&quot;,&quot;name&quot;:&quot;testmy&quot;,&quot;fullname&quot;:&quot;testorg&quot;,&quot;email&quot;:null,&quot;plan&quot;:&quot;NO_PLAN&quot;,&quot;periodEnd&quot;:null,&quot;avatarUrl&quot;:&quot;https://www.gravatar.com/avatar/1bd0170cca6f638f0dd02c6a79e8c270?d=retro&amp;size=100&quot;} </code></pre>
curl|huggingface-transformers|huggingface
1
670
72,385,968
Detecting duplicates in pandas when a column contains lists
<p>Is there a reasonable way to detect duplicates in a Pandas dataframe when a column contains lists or numpy nd arrays, like the example below? I know I could convert the lists into strings, but the act of converting back and forth feels... wrong. Plus, lists seem more legible and convenient given <a href="https://www.online-python.com/jkprUEqnJi" rel="nofollow noreferrer">~how I got here (online code)</a> and where I'm going after.</p> <pre><code>import pandas as pd df = pd.DataFrame( { &quot;author&quot;: [&quot;Jefe98&quot;, &quot;Jefe98&quot;, &quot;Alex&quot;, &quot;Alex&quot;, &quot;Qbert&quot;], &quot;date&quot;: [1423112400, 1423112400, 1603112400, 1423115600, 1663526834], &quot;ingredients&quot;: [ [&quot;ingredA&quot;, &quot;ingredB&quot;, &quot;ingredC&quot;], [&quot;ingredA&quot;, &quot;ingredB&quot;, &quot;ingredC&quot;], [&quot;ingredA&quot;, &quot;ingredB&quot;, &quot;ingredD&quot;], [&quot;ingredA&quot;, &quot;ingredB&quot;, &quot;ingredD&quot;, &quot;ingredE&quot;], [&quot;ingredB&quot;, &quot;ingredC&quot;, &quot;ingredF&quot;], ], } ) # Traditional find duplicates # df[df.duplicated(keep=False)] # Avoiding pandas duplicated function (question 70596016 solution) i = [hash(tuple(i.values())) for i in df.to_dict(orient=&quot;records&quot;)] j = [i.count(k) &gt; 1 for k in i] df[j] </code></pre> <p>Both methods (the latter from <a href="https://stackoverflow.com/a/70596317/6417219">this alternative find duplicates answer</a>) result in</p> <blockquote> <p>TypeError: unhashable type: 'list'.</p> </blockquote> <p>They would work, of course, if the dataframe looked like this:</p> <pre><code>df = pd.DataFrame( { &quot;author&quot;: [&quot;Jefe98&quot;, &quot;Jefe98&quot;, &quot;Alex&quot;, &quot;Alex&quot;, &quot;Qbert&quot;], &quot;date&quot;: [1423112400, 1423112400, 1603112400, 1423115600, 1663526834], &quot;recipe&quot;: [ &quot;recipeC&quot;, &quot;recipeC&quot;, &quot;recipeD&quot;, &quot;recipeE&quot;, &quot;recipeF&quot;, ], } ) </code></pre> <p>Which made me wonder if something like integer encoding might be reasonable? It's not that different from converting to/from strings, but at least it's legible. Alternatively, suggestions for converting to a single string of ingredients per row directly from the starting dataframe in the <a href="https://www.online-python.com/jkprUEqnJi" rel="nofollow noreferrer">code link</a> above would be appreciated (i.e., avoiding lists altogether).</p>
<p>With <code>map</code> and <code>tuple</code>, applied to the <code>ingredients</code> column from the question (lists are unhashable, tuples are not):</p> <pre><code>out = df[df.assign(ingredients=df['ingredients'].map(tuple)).duplicated(keep=False)]

Out[295]:
   author        date                  ingredients
0  Jefe98  1423112400  [ingredA, ingredB, ingredC]
1  Jefe98  1423112400  [ingredA, ingredB, ingredC]
</code></pre>
python|pandas|dataframe|duplicates
1
671
50,256,918
Plot a pandas categorical Series with Seaborn barplot
<p>I would like to plot the result of the <code>values_counts()</code> method with <code>seaborn</code>, but when I do so, it only shows one of the variables.</p> <pre><code>df = pd.DataFrame({"A":['b','b','a','c','c','c'],"B":['a','a','a','c','b','d']}) counts = df.A.value_counts() sns.barplot(counts) </code></pre> <p><a href="https://i.stack.imgur.com/HEJwa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HEJwa.png" alt="Result of above code"></a></p> <p>I want a barplot showing heights of <code>'a' = 1, 'b' = 2, 'c' = 3</code></p> <p>I've tried renaming the index and passing in <code>x</code> and <code>y</code> parameters, but I cant' get it to work.</p>
<p>You can do this:</p> <pre><code># Sorting indices so it's easier to read counts.sort_index(inplace=True) sns.barplot(x = counts.index, y = counts) plt.ylabel('counts') </code></pre> <p><a href="https://i.stack.imgur.com/9V0wG.png" rel="noreferrer"><img src="https://i.stack.imgur.com/9V0wG.png" alt="enter image description here"></a></p> <p>Note that using <code>pandas.Series.plot</code> gives a <em>very</em> similar plot: <code>counts.plot('bar')</code> or <code>counts.plot.bar()</code></p>
python|pandas|seaborn
7
672
50,553,605
Extract rule path of data point through decision tree with sklearn python
<p>I'm using decision tree model and I want to extract the decision path for each data point in order to understand what caused the Y rather than to predict it. How can I do that? Couldn't find any documentation. </p>
<p>Here is an example using the <code>iris dataset</code>.</p> <pre><code>from sklearn.datasets import load_iris from sklearn import tree import graphviz iris = load_iris() clf = tree.DecisionTreeClassifier() clf = clf.fit(iris.data, iris.target) dot_data = tree.export_graphviz(clf, out_file=None, feature_names=iris.feature_names, class_names=iris.target_names, filled=True, rounded=True, special_characters=True) graph = graphviz.Source(dot_data) #this will create an iris.pdf file with the rule path graph.render(&quot;iris&quot;) </code></pre> <p><a href="https://i.stack.imgur.com/8OV3L.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8OV3L.png" alt="enter image description here" /></a></p> <hr /> <p>EDIT: the following code is from the sklearn documentation with some small changes to address your goal</p> <pre><code>import numpy as np from sklearn.model_selection import train_test_split from sklearn.datasets import load_iris from sklearn.tree import DecisionTreeClassifier iris = load_iris() X = iris.data y = iris.target X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0) estimator = DecisionTreeClassifier(max_leaf_nodes=3, random_state=0) estimator.fit(X_train, y_train) # The decision estimator has an attribute called tree_ which stores the entire # tree structure and allows access to low level attributes. The binary tree # tree_ is represented as a number of parallel arrays. The i-th element of each # array holds information about the node `i`. Node 0 is the tree's root. NOTE: # Some of the arrays only apply to either leaves or split nodes, resp. In this # case the values of nodes of the other type are arbitrary! # # Among those arrays, we have: # - left_child, id of the left child of the node # - right_child, id of the right child of the node # - feature, feature used for splitting the node # - threshold, threshold value at the node n_nodes = estimator.tree_.node_count children_left = estimator.tree_.children_left children_right = estimator.tree_.children_right feature = estimator.tree_.feature threshold = estimator.tree_.threshold # The tree structure can be traversed to compute various properties such # as the depth of each node and whether or not it is a leaf. node_depth = np.zeros(shape=n_nodes, dtype=np.int64) is_leaves = np.zeros(shape=n_nodes, dtype=bool) stack = [(0, -1)] # seed is the root node id and its parent depth while len(stack) &gt; 0: node_id, parent_depth = stack.pop() node_depth[node_id] = parent_depth + 1 # If we have a test node if (children_left[node_id] != children_right[node_id]): stack.append((children_left[node_id], parent_depth + 1)) stack.append((children_right[node_id], parent_depth + 1)) else: is_leaves[node_id] = True print(&quot;The binary tree structure has %s nodes and has &quot; &quot;the following tree structure:&quot; % n_nodes) for i in range(n_nodes): if is_leaves[i]: print(&quot;%snode=%s leaf node.&quot; % (node_depth[i] * &quot;\t&quot;, i)) else: print(&quot;%snode=%s test node: go to node %s if X[:, %s] &lt;= %s else to &quot; &quot;node %s.&quot; % (node_depth[i] * &quot;\t&quot;, i, children_left[i], feature[i], threshold[i], children_right[i], )) print() # First let's retrieve the decision path of each sample. The decision_path # method allows to retrieve the node indicator functions. A non zero element of # indicator matrix at the position (i, j) indicates that the sample i goes # through the node j. 
node_indicator = estimator.decision_path(X_test) # Similarly, we can also have the leaves ids reached by each sample. leave_id = estimator.apply(X_test) # Now, it's possible to get the tests that were used to predict a sample or # a group of samples. First, let's make it for the sample. # HERE IS WHAT YOU WANT sample_id = 0 node_index = node_indicator.indices[node_indicator.indptr[sample_id]: node_indicator.indptr[sample_id + 1]] print('Rules used to predict sample %s: ' % sample_id) for node_id in node_index: if leave_id[sample_id] == node_id: # &lt;-- changed != to == #continue # &lt;-- comment out print(&quot;leaf node {} reached, no decision here&quot;.format(leave_id[sample_id])) # &lt;-- else: # &lt; -- added else to iterate through decision nodes if (X_test[sample_id, feature[node_id]] &lt;= threshold[node_id]): threshold_sign = &quot;&lt;=&quot; else: threshold_sign = &quot;&gt;&quot; print(&quot;decision id node %s : (X[%s, %s] (= %s) %s %s)&quot; % (node_id, sample_id, feature[node_id], X_test[sample_id, feature[node_id]], # &lt;-- changed i to sample_id threshold_sign, threshold[node_id])) </code></pre> <hr /> <p>This will print at the end the following:</p> <pre><code>Rules used to predict sample 0: decision id node 0 : (X[0, 3] (= 2.4) &gt; 0.800000011920929) decision id node 2 : (X[0, 2] (= 5.1) &gt; 4.949999809265137) leaf node 4 reached, no decision here </code></pre> <hr />
python|decision-tree|sklearn-pandas
1
673
45,590,507
Install TensorFlow & Tensorboard from source
<p>I want to install Tensorflow (CPU)(py 3.6) for windows, my company uses a proxy, so i can't install through pip, i have to build it from source. I unzipped tensorflow/tensorboard/protobuf.tar.gz in my Anaconda3 folders.</p> <p>When i use the setup.py files, it occurs that i need tensorboard for installing tensorflow, and i need tensorflow for installing tensorboard.</p> <p>So i don't know how to proceed for installing Tensorflow without using dependencies from pypi.org.</p> <p>Thanks</p>
<p>You can use pip with a proxy. I was struggling with a company proxy too, and this was the solution for me: run a command prompt as administrator and type the following:</p> <pre><code>pip install --proxy http://username:password@proxy_url:port tensorflow
</code></pre> <p>(This will install the latest CPU version of TensorFlow.)</p> <p>This should work.</p>
python|tensorflow|tensorboard
1
674
62,641,280
generate binary list of minutes which indicates wheter or not said minute is in a given time range
<p>I have a list of ranges that looks like this:</p> <pre><code> [(Timestamp('2018-12-17 07:30:45', freq='S'), Timestamp('2018-12-17 07:32:45', freq='S')), (Timestamp('2018-12-03 07:14:12', freq='S'), Timestamp('2018-12-03 07:15:39', freq='S')), (Timestamp('2018-12-03 07:32:47', freq='S'), Timestamp('2018-12-03 07:34:10', freq='S')), (Timestamp('2018-12-03 08:00:36', freq='S'), Timestamp('2018-12-03 08:02:28', freq='S')), (Timestamp('2018-12-19 07:34:02', freq='S'), Timestamp('2018-12-19 07:34:19', freq='S')), (Timestamp('2018-12-19 07:33:26', freq='S'), Timestamp('2018-12-19 07:35:25', freq='S')), (Timestamp('2018-12-19 07:49:28', freq='S'), Timestamp('2018-12-19 07:49:44', freq='S')), (Timestamp('2018-12-19 07:49:08', freq='S'), Timestamp('2018-12-19 07:50:32', freq='S')), (Timestamp('2018-12-18 07:47:24', freq='S'), Timestamp('2018-12-18 07:48:56', freq='S')), (Timestamp('2018-12-13 07:56:24', freq='S'), Timestamp('2018-12-13 07:57:58', freq='S'))] </code></pre> <p>The list goes from December 2018 to April 2019. Now I would like to create a list of integer values whose length is equal the number of minutes between that timespam, where the integer is 0, where the time is outside any of the timeranges and 1 if it is within one. Basically for every minute of the timespam I want to be able discern wheter or not it is within any of the timeranges</p>
<p>For test purpose, I took a shorter set of date / time pairs:</p> <pre><code>arr = np.array([ ('2018-12-17 23:40:45', '2018-12-17 23:45:45'), ('2018-12-18 00:14:12', '2018-12-18 00:20:39'), ('2018-12-18 00:30:47', '2018-12-18 00:34:10')], dtype='datetime64') </code></pre> <p>It is much easier to use <em>Pandas</em> to do your task and the code is more readable.</p> <p>Start from conversion of <em>arr</em> into a <em>Pandas</em> DataFrame with 2 columns, <em>time start</em> and <em>time end</em>:</p> <pre><code>df = pd.DataFrame(arr, columns=['tStart', 'tEnd']) </code></pre> <p>Then generate an <em>IntervalIndex</em>:</p> <pre><code>iInd = pd.IntervalIndex.from_arrays(df.tStart, df.tEnd) </code></pre> <p>In the target version of code you will probably set the &quot;border dates / times&quot; of the result to <em>0:00</em> at the start date and <em>23:59</em> at the end date, but to keep the result as short as possible, I set them as time just before the first interval and just after the last interval (rounded to minutes):</p> <pre><code>t1 = df.tStart.min().floor('min') t2 = df.tEnd.max().ceil('min') </code></pre> <p>To create the result, I started with the &quot;list of minutes&quot; (a <em>DatetimeIndex</em>):</p> <pre><code>mInd = pd.date_range(t1, t2, freq='min') </code></pre> <p>And the final step is to create the actual result:</p> <pre><code>result = pd.Series([iInd.contains(x).any() for x in mInd], index=mInd, dtype=int) </code></pre> <p>It is a <em>Series</em> with:</p> <ul> <li>consecutive minutes as the index,</li> <li>either <em>0</em> or <em>1</em> as values.</li> </ul> <p>The result, for the assumed (shorter) list if intervals, is:</p> <pre><code>2018-12-17 23:40:00 0 2018-12-17 23:41:00 1 2018-12-17 23:42:00 1 2018-12-17 23:43:00 1 2018-12-17 23:44:00 1 2018-12-17 23:45:00 1 2018-12-17 23:46:00 0 2018-12-17 23:47:00 0 2018-12-17 23:48:00 0 2018-12-17 23:49:00 0 2018-12-17 23:50:00 0 2018-12-17 23:51:00 0 2018-12-17 23:52:00 0 2018-12-17 23:53:00 0 2018-12-17 23:54:00 0 2018-12-17 23:55:00 0 2018-12-17 23:56:00 0 2018-12-17 23:57:00 0 2018-12-17 23:58:00 0 2018-12-17 23:59:00 0 2018-12-18 00:00:00 0 2018-12-18 00:01:00 0 2018-12-18 00:02:00 0 2018-12-18 00:03:00 0 2018-12-18 00:04:00 0 2018-12-18 00:05:00 0 2018-12-18 00:06:00 0 2018-12-18 00:07:00 0 2018-12-18 00:08:00 0 2018-12-18 00:09:00 0 2018-12-18 00:10:00 0 2018-12-18 00:11:00 0 2018-12-18 00:12:00 0 2018-12-18 00:13:00 0 2018-12-18 00:14:00 0 2018-12-18 00:15:00 1 2018-12-18 00:16:00 1 2018-12-18 00:17:00 1 2018-12-18 00:18:00 1 2018-12-18 00:19:00 1 2018-12-18 00:20:00 1 2018-12-18 00:21:00 0 2018-12-18 00:22:00 0 2018-12-18 00:23:00 0 2018-12-18 00:24:00 0 2018-12-18 00:25:00 0 2018-12-18 00:26:00 0 2018-12-18 00:27:00 0 2018-12-18 00:28:00 0 2018-12-18 00:29:00 0 2018-12-18 00:30:00 0 2018-12-18 00:31:00 1 2018-12-18 00:32:00 1 2018-12-18 00:33:00 1 2018-12-18 00:34:00 1 2018-12-18 00:35:00 0 Freq: T, dtype: int32 </code></pre> <p>If you need, you can convert it to a <em>Numpy</em> array, but I think a more readable version is just as here, a <em>Series</em>.</p>
numpy
1
675
62,542,232
calculate age from dob and given date in pandas and make age as zero if dob is missing in pandas
<p>I have a data frame as shown below. df:</p> <pre><code>cust_id lead_date dob 1 2016-12-25 1989-12-20 2 2017-10-25 1980-09-20 3 2016-11-25 NaN 4 2014-04-25 1989-12-20 5 2019-12-21 NaN </code></pre> <p>From the above I would like to calculate age as difference of lead_date with dob in years.</p> <p>if dob is NaN then make age as 0.</p> <p>Expected output:</p> <pre><code>cust_id lead_date dob age 1 2016-12-25 1989-12-20 27 2 2017-10-25 1980-09-20 37 3 2016-11-25 NaN 0 4 2014-04-25 1989-12-20 25 5 2019-12-21 NaN 0 </code></pre>
<p>You can do:</p> <pre><code># convert to datetime type
df['lead_date'] = pd.to_datetime(df.lead_date)
df['dob'] = pd.to_datetime(df.dob)

# the year difference is NaN where dob is missing, so fill with 0 and cast to int
df['age'] = (df.lead_date.dt.year - df.dob.dt.year).fillna(0).astype(int)
</code></pre> <p>Output:</p> <pre><code>   cust_id  lead_date        dob  age
0        1 2016-12-25 1989-12-20   27
1        2 2017-10-25 1980-09-20   37
2        3 2016-11-25        NaT    0
3        4 2014-04-25 1989-12-20   25
4        5 2019-12-21        NaT    0
</code></pre>
pandas
1
676
62,636,267
How to aggregate goupby and discard the rows after appearing a certain value?
<p>Say I have a given dataframe as below</p> <pre><code>input = pd.DataFrame({&quot;id&quot;:[1,1,1,2,2,3,3,3,3,3], &quot;values&quot;:[&quot;l&quot;, &quot;m&quot;, &quot;c&quot;, &quot;l&quot;, &quot;l&quot;, &quot;l&quot;, &quot;l&quot;, &quot;c&quot;,&quot;c&quot;, &quot;c&quot;]}) </code></pre> <p>and I wanted to remove the extra transactions after &quot;c&quot; appear for an id. say for id 3, the 1st 2 values are &quot;l&quot; and after that all transactions are value c so I only want the 1st c.</p> <pre><code>output = pd.DataFrame({&quot;id&quot;:[1,1,1,2,2,3,3,3], &quot;values&quot;: [&quot;l&quot;, &quot;m&quot;, &quot;c&quot;, &quot;l&quot;, &quot;l&quot;, &quot;l&quot;, &quot;l&quot;, &quot;c&quot;]}) </code></pre> <p>I tried to do drop_duplicates on a group by but it is not working as per my expectation:</p> <blockquote> <p>input.groupby(&quot;id&quot;).drop_duplicates(&quot;values&quot;)</p> </blockquote>
<p>Create a <em>boolean mask</em> where <code>values</code> equals <code>c</code>, then use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.groupby.html" rel="nofollow noreferrer"><code>DataFrame.groupby</code></a> to group this mask on <code>id</code>, then transform it using <code>cumsum</code>, finally use this <code>mask</code> to filter the dataframe:</p> <pre><code># Here 'df' is your 'input' dataframe mask = df['values'].eq('c').groupby(df['id']).cumsum().gt(1) df1 = df[~mask] </code></pre> <p>Result:</p> <pre><code>print(df1) id values 0 1 l 1 1 m 2 1 c 3 2 l 4 2 l 5 3 l 6 3 l 7 3 c </code></pre>
python|pandas|pandas-groupby
2
677
62,545,134
How to use embedding models in tensorflow hub with LSTM layer?
<p>I'm learning tensorflow 2 working through the text classification with TF hub tutorial. It used an embedding module from TF hub. I was wondering if I could modify the model to include a LSTM layer. Here's what I've tried:</p> <pre><code>train_data, validation_data, test_data = tfds.load( name=&quot;imdb_reviews&quot;, split=('train[:60%]', 'train[60%:]', 'test'), as_supervised=True) embedding = &quot;https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim/1&quot; hub_layer = hub.KerasLayer(embedding, input_shape=[], dtype=tf.string, trainable=True) model = tf.keras.Sequential() model.add(hub_layer) model.add(tf.keras.layers.Embedding(10000, 50)) model.add(tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64))) model.add(tf.keras.layers.Dense(64, activation='relu')) model.add(tf.keras.layers.Dense(1)) model.summary() model.compile(optimizer='adam', loss=tf.keras.losses.BinaryCrossentropy(from_logits=True), metrics=['accuracy']) history = model.fit(train_data.shuffle(10000).batch(512), epochs=10, validation_data=validation_data.batch(512), verbose=1) results = model.evaluate(test_data.batch(512), verbose=2) for name, value in zip(model.metrics_names, results): print(&quot;%s: %.3f&quot; % (name, value)) </code></pre> <p>I don't know how to get the vocabulary size from the hub_layer. So I just put 10000 there. When run it, it throws this exception:</p> <pre><code>tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[480,1] = -6 is not in [0, 10000) [[node sequential/embedding/embedding_lookup (defined at .../learning/tensorflow/text_classify.py:36) ]] [Op:__inference_train_function_36284] Errors may have originated from an input operation. Input Source operations connected to node sequential/embedding/embedding_lookup: sequential/embedding/embedding_lookup/34017 (defined at Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/contextlib.py:112) Function call stack: train_function </code></pre> <p>I stuck here. My questions are:</p> <ol> <li><p>how should I use the embedding module from TF hub to feed an LSTM layer? it looks like embedding lookup has some issues with the setting.</p> </li> <li><p>how do I get the vocabulary size from the hub layer?</p> </li> </ol> <p>Thanks</p>
<p>I finally figured out the way to link pre-trained embeddings to an LSTM or other layers. I'll just post the steps here in case anyone finds them helpful.</p> <p>The embedding layer has to be the first layer in the model (the hub_layer plays the same role as an Embedding layer). The not-very-intuitive part is that any text input to the hub layer will be converted to only one vector of shape [embedding_dim]. You need to do sentence splitting and tokenization so that whatever goes into the model is a sequence in the form of an array of arrays, e.g. &quot;Let us prepare the data.&quot; should be converted to [[&quot;let&quot;],[&quot;us&quot;],[&quot;prepare&quot;], [&quot;the&quot;], [&quot;data&quot;]]. You will also need to pad the sequences if you are using batch mode.</p> <p>In addition, you will need to convert your target tokens to int if your training labels are strings. The input to the model is an array of strings with shape [batch, seq_length]; the hub embedding layer converts it to [batch, seq_length, embed_dim]. (If you add an LSTM or other RNN layer, the output from that layer is [batch, seq_length, rnn_units].) The output dense layer emits token indices instead of the actual text. The token-to-index mapping is stored in the downloaded tfhub directory as &quot;tokens.txt&quot;; you can load that file and convert text to the corresponding indices. Otherwise you cannot compute the loss.</p>
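<p>A minimal sketch of the resulting model (integer token ids feeding a trainable embedding and then a bidirectional LSTM), shown here with a plain Keras <code>Embedding</code> layer in place of the hub layer since the exact hub wiring depends on the module; the vocabulary size and layer widths below are illustrative assumptions, not values from the tutorial:</p> <pre><code>import tensorflow as tf

VOCAB_SIZE = 20000   # assumed: e.g. the number of entries in tokens.txt

# Inputs are integer token ids of shape [batch, seq_length]; the raw text has to
# be split, tokenized and padded beforehand (e.g. with the tokens.txt vocabulary).
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, 50),                # [batch, seq_length, 50]
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),  # [batch, 128]
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(1)
])
model.compile(optimizer='adam',
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
              metrics=['accuracy'])
</code></pre>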
tensorflow|keras|lstm
2
678
62,722,004
Roles of parameter servers and workers
<p>What exact role do <strong>parameter servers</strong> and <strong>workers</strong> have the during <strong>distributed training</strong> of neural networks? (e.g. in Distributed TensorFlow)</p> <p>Perhaps breaking it down as follows:</p> <ul> <li>During the forward pass</li> <li>During the backward pass</li> </ul> <p>For example:</p> <ul> <li>Are parameter servers <strong>only</strong> responsible for storing and providing variable values in an ACID store?</li> <li>Do <strong>different</strong> parameter servers manage different variables in the graph?</li> <li>Do parameter servers receive gradients themshelves (and thus adding them up)?</li> </ul>
<p><strong>Parameter Servers</strong> — This is actually the same kind of machine as a <code>worker</code>. Typically it's a <code>CPU</code> host where you store the <code>variables</code> needed by the <code>workers</code>. In my case this is where I defined the <code>weight variables</code> needed for my networks.</p> <p><strong>Workers</strong> — This is where we do most of our <strong>computation-intensive work</strong>.</p> <p><strong>In the forward pass</strong> — We take variables from the <code>parameter servers</code> and do something with them on our workers.</p> <p><strong>In the backward pass</strong> — We send the current state back to the <code>parameter servers</code>, which perform an update operation and give us the new weights to try out.</p> <p>Are parameter servers only responsible for storing and providing variable values in an ACID store? ==&gt; <strong>Yes</strong>, as per the <a href="https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/ParameterServerStrategy" rel="nofollow noreferrer">Tensorflow Documentation</a> and this <a href="https://medium.com/@alienrobot/understanding-distributed-tensorflow-2cdbd9881d9b" rel="nofollow noreferrer">Medium Article</a>.</p> <p>Do different parameter servers manage different variables in the graph? ==&gt; <strong>Yes</strong>, inferred from the statement,</p> <blockquote> <p>In addition, to that you can decide to have more than one parameter server for efficiency reasons. Using parameters the server can provide better network utilization, and it allows to scale models to more parallel machines. It is possible to allocate more than one parameter server.</p> </blockquote> <p>from <a href="https://hub.packtpub.com/distributed-tensorflow-multiple-gpu-server/#:%7E:text=Using%20parameters%20the%20server%20can,more%20than%20one%20parameter%20server." rel="nofollow noreferrer">this link</a>.</p> <p>Do <code>parameter servers</code> receive gradients themselves (and thus add them up)? ==&gt; <strong>No</strong>. AFAIK, they receive the updated <code>weights</code>, because computing the <code>gradients</code> and modifying the <code>weights</code> using the formula</p> <pre><code>W1 = W0 - Learning Rate * Gradients </code></pre> <p>happens in the <code>Workers</code>.</p>
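<p>To make the division of labour concrete, here is a toy single-process sketch of one parameter-server round trip in plain NumPy (purely illustrative: all names and numbers are made up, and this is not real distributed-TensorFlow code):</p> <pre><code>import numpy as np

W = np.zeros(3)   # parameters, conceptually living on the parameter server
lr = 0.1

def worker_step(W, x, y):
    # forward pass on the worker, using weights pulled from the PS
    pred = x @ W
    # backward pass on the worker: gradient of the squared error w.r.t. W
    return 2 * (pred - y) * x

for x, y in [(np.array([1., 0., 2.]), 3.0), (np.array([0., 1., 1.]), 1.0)]:
    grad = worker_step(W, x, y)   # computed on a worker
    W = W - lr * grad             # W1 = W0 - lr * gradients, applied at the PS
print(W)
</code></pre>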
tensorflow|deep-learning|tensorflow2.0|distributed-computing
1
679
62,635,974
Pandas concat multi level index dataframes and merge same name columns within same level
<p>I have two multi-level index dataframes. When I concat them, the same name columns become duplicated.</p> <p>df1</p> <pre><code>Column col1 col2 1 3 2 4 </code></pre> <p>I want to merge this with another df,</p> <p>df2</p> <pre><code>Column col3 5 6 </code></pre> <p>When I merge both using</p> <pre><code>pd.concat([df1, df2], axis=1) </code></pre> <p>The result comes:</p> <pre><code>Column Column col1 col2 col3 1 3 5 2 4 6 </code></pre> <p>What I want to get is:</p> <pre><code>Column col1 col2 col3 1 3 5 2 4 6 </code></pre> <p>Any help would be much appreciated. Thanks</p>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.sort_index.html" rel="nofollow noreferrer"><code>DataFrame.sort_index</code></a>:</p> <pre><code>pd.concat([df1, df2], axis=1).sort_index(axis=1) </code></pre> <p>EDIT:</p> <pre><code>print (df1) Column col5 col2 0 1 3 1 2 4 print (df2) Column col1 0 5 1 6 df = pd.concat([df1, df2], axis=1) c = df.columns.tolist() df = df.reindex(c[:1] + sorted(c[1:]), axis=1) print (df) Column col5 col1 col2 0 1 5 3 1 2 6 4 </code></pre> <p>EDIT1: Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.xs.html" rel="nofollow noreferrer"><code>DataFrame.xs</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.sort_index.html" rel="nofollow noreferrer"><code>DataFrame.sort_index</code></a>, add original non selected caolumns values by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Index.union.html" rel="nofollow noreferrer"><code>Index.union</code></a> and last change order by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.reindex.html" rel="nofollow noreferrer"><code>DataFrame.reindex</code></a>:</p> <pre><code>print (df) Column a col2 col1 col5 col1 col3 0 1 5 3 5 4 1 2 6 4 7 7 cols = (df.xs('Column', drop_level=False, axis=1, level=0) .sort_index(ascending=False, axis=1).columns) print (cols) MultiIndex([('Column', 'col5'), ('Column', 'col2'), ('Column', 'col1')], ) df = df.reindex(cols.union(df.columns, sort=False), axis=1) print (df) Column a col5 col2 col1 col1 col3 0 3 1 5 5 4 1 4 2 6 7 7 </code></pre>
python|pandas
1
680
54,613,578
Assigning "list of directory" array on numpy
<p>I tried assigning a list of directories on a numpy array, but somehow the array only stores the first letter, not the full address of strings.</p> <pre><code>lasdir=np.array(range(4), dtype=str).reshape(2,2) i=0 for root, dirs, files in os.walk(source_dir): for file in files: if (file.lower().endswith(".las")): lasdir[i,0]=file lasdir[i,1]=os.path.join(root, file) i=i+1 </code></pre> <p>Does anybody know why?</p>
<p>When using a <code>str</code> dtype, NumPy creates fixed-length strings (here of length 1, inferred from the initial values), so longer assignments get silently truncated. As suggested <a href="https://stackoverflow.com/a/14639568/9291575">in this answer</a>, you're better off using dtype <code>object</code>.</p> <p>So your first line becomes:</p> <pre><code>lasdir = np.empty((2,2), dtype=object) </code></pre>
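<p>To see why only the first letter survives, here is a small demonstration (the file name below is made up):</p> <pre><code>import numpy as np

a = np.array(range(4), dtype=str).reshape(2, 2)
print(a.dtype)        # &lt;U1 -- length-1 strings, inferred from '0'..'3'
a[0, 0] = 'file.las'  # silently truncated to fit the fixed length
print(a[0, 0])        # 'f'

b = np.empty((2, 2), dtype=object)
b[0, 0] = 'file.las'  # stored intact
print(b[0, 0])        # 'file.las'
</code></pre>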
python|numpy
0
681
73,618,309
Implement inverse of minmax scaler in numpy
<p>I want to implement inverse of min-max scaler in numpy instead of sklearn. Applying min max scale is easy</p> <pre><code>v_min, v_max = v.min(), v.max() new_min, new_max = 0, 1 v_p = (v - v_min)/(v_max - v_min)*(new_max - new_min) + new_min v_p.min(), v_p.max() </code></pre> <p>But once I got the scaled value, how can i go back to original one in numpy</p>
<p>Try Mathematic:</p> <pre><code>import numpy as np org_arr = np.array([ [2.0, 3.0], [2.5, 1.5], [0.5, 3.5] ]) # save min &amp; max min_val = org_arr.min(axis = 0) max_val = org_arr.max(axis = 0) scl_arr = (org_arr - min_val) / (max_val - min_val) print(scl_arr) # inverse of min-max scaler in numpy org_arr_2 = scl_arr*(max_val - min_val) + min_val print(org_arr_2) </code></pre> <p>Output:</p> <pre class="lang-none prettyprint-override"><code># scl_arr [[0.75 0.75] [1. 0. ] [0. 1. ]] # org_arr_2 [[2. 3. ] [2.5 1.5] [0.5 3.5]] </code></pre> <p>Check with <a href="https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MinMaxScaler.html" rel="nofollow noreferrer">sklearn.preprocessing.MinMaxScale</a></p> <pre><code>from sklearn.preprocessing import MinMaxScaler scaler = MinMaxScaler() scl_arr = scaler.fit_transform(org_arr) print(scl_arr) org_arr_2 = scaler.inverse_transform(scl_arr) print(org_arr_2) </code></pre> <p>Output:</p> <pre class="lang-none prettyprint-override"><code># scl_arr [[0.75 0.75] [1. 0. ] [0. 1. ]] # org_arr_2 [[2. 3. ] [2.5 1.5] [0.5 3.5]] </code></pre>
python|numpy
2
682
73,660,261
What is the most efficient way to make a method that is able to process single and multi dimensional arrays in python?
<p>I was using pytorch and realized that for a linear layer you could pass not only 1d tensors but multidmensional tensors as long as the last dimensions matched. <a href="https://stackoverflow.com/questions/58587057/multi-dimensional-inputs-in-pytorch-linear-method">Multi dimensional inputs in pytorch Linear method?</a></p> <p>I tried looping over each item, but is that what pytorch does? I'm having trouble thinking how you would program looping with more dimensions, maybe recursion but that seems messy. What is the most efficient way to implement this?</p>
<p><em>I tried looping over each item, but is that what pytorch does?</em></p> <p>The short answer is yes, loops are used, but it's more complicated than you probably think. If <code>input</code> is a 2D tensor (a matrix), then the output of a linear operation is computed as <code>input @ weight.T + bias</code> using an external BLAS library's GEMM operation. Otherwise it uses <code>torch.matmul(input, weight.T) + bias</code> which uses broadcast semantics to compute a batched version of the operation. Broadcasting is a semantic, not an implementation, so how the broadcasting is performed is going to be backend-dependent. Ultimately some form of looping combined with parallel processing will be used for most of these implementation.</p> <p>To go a little deeper, lets take a look at the PyTorch implementation of the linear layer. This quickly leads down some rabbit holes since PyTorch uses different backend libraries for performing linear algebra operations efficiently on the hardware available (libraries like oneAPI, Intel MKL, or MAGMA) but perhaps understanding some of the details can help.</p> <p>Starting at the C++ entrypoint to <code>nn.functional.linear</code>:</p> <pre class="lang-cpp prettyprint-override"><code>Tensor linear(const Tensor&amp; input, const Tensor&amp; weight, const Tensor&amp; bias) { if (input.is_mkldnn()) { return at::mkldnn_linear(input, weight, bias); } if (input.dim() == 2 &amp;&amp; bias.defined()) { // Fused op is marginally faster. return at::addmm(bias, input, weight.t()); } auto output = at::matmul(input, weight.t()); if (bias.defined()) { output.add_(bias); } return output; } </code></pre> <p>There are three cases here.</p> <ol> <li><p><code>input.is_mkldnn()</code>. This condition occurs if the input tensor is in the MKL-DNN format (<a href="https://pytorch.org/docs/stable/generated/torch.Tensor.to_mkldnn.html?highlight=to_mkldnn#torch.Tensor.to_mkldnn" rel="nofollow noreferrer"><code>Tensor.to_mkldnn</code></a>) and will make PyTorch use the <a href="https://github.com/pytorch/pytorch/blob/31b348411afa608639a2f7353060974c849829dd/aten/src/ATen/native/mkldnn/Linear.cpp#L42-L88" rel="nofollow noreferrer"><code>at::mkldnn_linear</code></a> function, which in turn makes calls to <a href="https://github.com/intel/ideep" rel="nofollow noreferrer">ideep</a>, which in turn makes calls to the <a href="https://github.com/oneapi-src/oneDNN" rel="nofollow noreferrer">oneDNN</a> library (previous known as Intel MKL-DNN), which ultimately <a href="https://github.com/oneapi-src/oneDNN/blob/master/src/cpu/cpu_inner_product_list.cpp#L44-L210" rel="nofollow noreferrer">selects</a> a specific general matrix-matrix multiplication (GEMM) routine dependent on platform and data types. The simplest implementation is the <a href="https://github.com/oneapi-src/oneDNN/blob/23679763e28ce85451b68c238b279f77173e7a05/src/cpu/ref_inner_product.cpp#L30-L95" rel="nofollow noreferrer">reference implementation</a>, and from that we can see that they use a parallel-for loop (note the anonymous function they use uses a quadruple nested for-loop). In practice the reference implementation probably isn't used, instead, you would probably be calling the <a href="https://github.com/oneapi-src/oneDNN/blob/master/src/cpu/x64/jit_brgemm_inner_product.cpp#L60-L469" rel="nofollow noreferrer">x86 optimized version</a> compiled with the <a href="https://github.com/herumi/xbyak" rel="nofollow noreferrer">Xbyak JIT assembler</a> to produce highly optimized code. 
I'm not going to pretend to follow all the details of the optimized code, but efficient GEMM is a heavily studied topic that I only have a passing knowledge of.</p> </li> <li><p><code>input.dim() == 2 &amp;&amp; bias.defined()</code>. This condition means that <code>input</code> is a 2D tensor (shape <code>[B,M]</code>) and <code>bias</code> is defined. In this case pytorch uses the <a href="https://pytorch.org/docs/stable/generated/torch.addmm.html#torch.addmm" rel="nofollow noreferrer"><code>addmm</code></a> function. This efficiently computes the output as <code>input @ weight.T + bias</code> where <code>@</code> is matrix multiplication. There are multiple implementations of <code>addmm</code> <a href="https://github.com/pytorch/pytorch/blob/ad44670fa1ce2dad7e2cdc3f90d27668e88e9548/aten/src/ATen/native/native_functions.yaml#L5938-L5955" rel="nofollow noreferrer">registered</a> in PyTorch depending on what types of tensors are being used. The dense-CPU specific version is <a href="https://github.com/pytorch/pytorch/blob/166dec74b5ce3968a53d4c0f616776d0a2bf4309/aten/src/ATen/native/LinearAlgebra.cpp#L1216-L1342" rel="nofollow noreferrer">here</a> which <a href="https://github.com/pytorch/pytorch/blob/efe2c0422d25a237e7df1c457d1bf77430f7bc2a/aten/src/ATen/native/CPUBlas.cpp#L186-L193" rel="nofollow noreferrer">eventually</a> makes calls to an external <a href="https://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms" rel="nofollow noreferrer">BLAS</a> library's GEMM subroutine. The backend used is likely <a href="https://www.intel.com/content/www/us/en/develop/documentation/onemkl-developer-reference-fortran/top/blas-and-sparse-blas-routines/blas-routines/blas-level-3-routines/gemm.html" rel="nofollow noreferrer">Intel MKL</a> but you can check using <code>print(torch.__config__.parallel_info())</code>. Whichever BLAS implementation is being used, its certainly a highly optimized implementation of matrix multiplication similar to the oneDNN implementation, probably using multi-threading and optimized compilation.</p> </li> <li><p>If neither of the previous two conditions are met then PyTorch uses the <a href="https://pytorch.org/docs/stable/generated/torch.matmul.html#torch.matmul" rel="nofollow noreferrer"><code>torch.matmul</code></a> function, which performs a broadcasted version of <code>input @ weight.T</code> where <code>input</code> is shape <code>[..., M]</code>. The result of this operation is a tensor of shape <code>[..., N]</code>. Similar to <code>addmm</code>, there are multiple implementations of this function depending on the tensor types but an external library will ultimately be used that uses parallelization and optimized matrix-multiplication subroutines. After the broadcasted matrix-multiplication a broadcasted <a href="https://pytorch.org/docs/stable/generated/torch.Tensor.add_.html?highlight=add_#torch.Tensor.add_" rel="nofollow noreferrer"><code>add_</code></a> operation is used to add the <code>bias</code> term (if <code>bias</code> is defined).</p> </li> </ol>
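<p>As a quick concrete check of the third case above, the batched linear operation can be reproduced directly with <code>matmul</code> plus a broadcasted add; the shapes below are arbitrary illustrative values:</p> <pre><code>import torch
import torch.nn.functional as F

x = torch.randn(4, 3, 8)   # [batch, seq, in_features], i.e. more than 2 dims
w = torch.randn(5, 8)      # [out_features, in_features]
b = torch.randn(5)

out_linear = F.linear(x, w, b)          # what nn.Linear does for N-D input
out_manual = torch.matmul(x, w.T) + b   # broadcasted matmul + broadcasted add

print(out_linear.shape)                        # torch.Size([4, 3, 5])
print(torch.allclose(out_linear, out_manual))  # True
</code></pre>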
python|arrays|pytorch
0
683
71,223,050
How do I find last index of true value in a dataframe when applying condition to each row in an efficient way in python?
<p>Let us say I have pandas dataframe having two columns, <em>previous</em> and <em>current</em>. We can assume that values are non-decreasing and <em>current</em> values are always greater than <em>previous</em> value.</p> <p>Now, for each element in <em>previous</em> column, I want to look up index of last value of <em>current</em> column which is less than this value. I then want to subtract that index from the this element's index and store that value in the new column, say <em>numIndexes</em></p> <p>working but inefficient code is as follows:</p> <pre><code>df = pd.DataFrame({'previous': [1,3,5,7,9,11,13,17],'current': [2,6,9,10,15,19,20,21]}) df['numIndexes']=1 for i in range(len(df)): x=df['previous'][i]&gt;df['current'] df['numIndexes'][i]=i-x[::-1].idxmax() OUTPUT previous current numIndexes 0 1 2 -7 1 3 6 1 2 5 9 2 3 7 10 2 4 9 15 3 5 11 19 2 6 13 20 3 7 17 21 3 </code></pre> <p>Ignore the first negative value.</p> <p>To explain my objective via example above, for 5th index, we have previous value as 11. Now in the current column, last index where current value is less than 11 is index 3. This gives me numIndexes for 5th row as 2 ( 5-3)</p> <p>For a large dataset, this method is extremely slow. Any help appreciated to speed up this logic.</p> <p>EDIT : The assumption of strictly increasing values is not correct. Values are non-decreasing. However, each previous value is strictly less than its corresponding current value</p>
<p>Since the values are non-decreasing, you can use numpy.broadcasting, <code>[:, None]</code>, to compare the current values with all previous values. We then take the sum and subtract 1 since counting starts at 0, giving us the index position of the last row with current value &lt; the previous value for all rows in the DataFrame.</p> <p>Then create your column, which is the index minus the index of this calculated row.</p> <pre><code>ilocs = (df['current'].to_numpy()[:, None] &lt; df['previous'].to_numpy()).sum(0) - 1 df['numIndexes'] = df.index - df.index[ilocs] previous current numIndexes 0 1 2 -7 1 3 6 1 2 5 9 2 3 7 10 2 4 9 15 3 5 11 19 2 6 13 20 3 7 17 21 3 </code></pre> <hr /> <p>The above is memory intensive. If that doesn't work you can use an <code>asof</code> merge to match on the last row with the current value &lt; previous value. We bring along the index so you can then perform the subtraction afterwards. I've left the additional columns showing the value it matched and the index it matched in for illustration -- drop them if you don't care.</p> <pre><code>import pandas as pd df = pd.merge_asof(df, df[['current']].reset_index(), left_on='previous', right_on='current', suffixes=['', '_match'], allow_exact_matches=False # Require strictly less than ) df['numIndexes'] = df.index - df['index'] previous current index current_match numIndexes 0 1 2 NaN NaN NaN 1 3 6 0.0 2.0 1.0 2 5 9 0.0 2.0 2.0 3 7 10 1.0 6.0 2.0 4 9 15 1.0 6.0 3.0 5 11 19 3.0 10.0 2.0 6 13 20 3.0 10.0 3.0 7 17 21 4.0 15.0 3.0 </code></pre>
python|pandas|dataframe
3
684
71,319,951
extract items from column with pandas
<p>I have a DataFrame with the following structure:</p> <pre><code> id year name genres 238 2022 Adventure [{&quot;revenue&quot;: 1463, &quot;name&quot;: &quot;culture clash&quot;, 'runtime': 150, 'vote_average': 7}] 239 2020 Comedy [] </code></pre> <p>But what I need is this structure</p> <pre><code> id year name revenue name runtime vote_average 238 2022 Adventure 1463 culture clash 150 7 239 2020 Comedy </code></pre> <p>Please note that i sometimes have empty array in column <strong>genres</strong></p> <p>i used this code</p> <pre><code>(df.join(pd.json_normalize(df['genres'], record_path='genres'), lsuffix='', rsuffix='_genres') </code></pre> <p>but it got me an error <code>TypeError: list indices must be integers or slices, not str</code></p> <p>Any solutions?</p>
<p>You could try: In case the <code>genres</code> columns contains strings do</p> <pre><code>df[&quot;genres&quot;] = df[&quot;genres&quot;].map(eval) </code></pre> <p>fist. Then:</p> <pre><code>df = pd.concat( [df[[&quot;id&quot;, &quot;year&quot;]], pd.DataFrame(obj[0] if obj else {} for obj in df[&quot;genres&quot;])], axis=&quot;columns&quot; ) </code></pre> <p>Result for the sample:</p> <pre><code> id year revenue name runtime vote_average 0 238 2022 1463.0 culture clash 150.0 7.0 1 239 2020 NaN NaN NaN NaN </code></pre> <p>If you don't want use <code>eval</code> you could try this</p> <pre><code>df[&quot;genres&quot;] = pd.read_json(&quot;[&quot; + &quot;, &quot;.join(df[&quot;genres&quot;]) + &quot;]&quot;) df = pd.concat( [df[[&quot;id&quot;, &quot;year&quot;]], pd.json_normalize(df[&quot;genres&quot;])], axis=&quot;columns&quot; ) </code></pre> <p>instead.</p>
python|pandas
1
685
52,064,112
installing tensorflow_transform and apache_beam on Datalab
<p>I'm going over these examples from the google-cloud Coursera courses, and although they worked until a few weeks ago, I can't install tf.transform or apache_beam on Datalab anymore.</p> <p><a href="https://github.com/GoogleCloudPlatform/training-data-analyst/blob/master/courses/machine_learning/feateng/tftransform.ipynb" rel="nofollow noreferrer">https://github.com/GoogleCloudPlatform/training-data-analyst/blob/master/courses/machine_learning/feateng/tftransform.ipynb</a></p> <p><a href="https://github.com/GoogleCloudPlatform/training-data-analyst/blob/master/courses/machine_learning/deepdive/06_structured/4_preproc_tft.ipynb" rel="nofollow noreferrer">https://github.com/GoogleCloudPlatform/training-data-analyst/blob/master/courses/machine_learning/deepdive/06_structured/4_preproc_tft.ipynb</a></p> <p>When installing tensorflow_transform I get the following errors:</p> <pre><code>%bash
pip install --upgrade --force tensorflow_transform==0.6.0
</code></pre> <pre><code>twisted 18.7.0 requires PyHamcrest&gt;=1.9.0, which is not installed.
datalab 1.1.3 has requirement six==1.10.0, but you'll have six 1.11.0 which is incompatible.
gapic-google-cloud-pubsub-v1 0.15.4 has requirement oauth2client&lt;4.0dev,&gt;=2.0.0, but you'll have oauth2client 4.1.2 which is incompatible.
proto-google-cloud-pubsub-v1 0.15.4 has requirement oauth2client&lt;4.0dev,&gt;=2.0.0, but you'll have oauth2client 4.1.2 which is incompatible.
apache-airflow 1.9.0 has requirement bleach==2.1.2, but you'll have bleach 1.5.0 which is incompatible.
apache-airflow 1.9.0 has requirement funcsigs==1.0.0, but you'll have funcsigs 1.0.2 which is incompatible.
google-cloud-monitoring 0.28.0 has requirement google-cloud-core&lt;0.29dev,&gt;=0.28.0, but you'll have google-cloud-core 0.25.0 which is incompatible.
proto-google-cloud-datastore-v1 0.90.4 has requirement oauth2client&lt;4.0dev,&gt;=2.0.0, but you'll have oauth2client 4.1.2 which is incompatible.
pandas-gbq 0.3.0 has requirement google-cloud-bigquery&gt;=0.28.0, but you'll have google-cloud-bigquery 0.25.0 which is incompatible.
googledatastore 7.0.1 has requirement httplib2&lt;0.10,&gt;=0.9.1, but you'll have httplib2 0.11.3 which is incompatible.
googledatastore 7.0.1 has requirement oauth2client&lt;4.0.0,&gt;=2.0.1, but you'll have oauth2client 4.1.2 which is incompatible.
Cannot uninstall 'dill'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.
</code></pre>
<p>The tensorflow version on my Datalab instance was 1.4. I had to run the following commands to update tensorflow to 1.10.1 and then install tensorflow_transform:</p> <pre><code>%bash
pip install --upgrade --force-reinstall pip==10.0.1
pip install tensorflow==1.10.1
pip install tensorflow_transform
</code></pre> <p>My environment:</p> <pre><code>apache-airflow==1.9.0
apache-beam==2.6.0
tensorflow==1.10.1
tensorflow-metadata==0.9.0
tensorflow-tensorboard==0.4.0rc3
tensorflow-transform==0.8.0
</code></pre>
tensorflow|google-cloud-platform|apache-beam|google-cloud-datalab|tensorflow-transform
2
686
52,408,443
Specified Trace Colors not looping through entire Dash chart
<p>I have a dataframe that looks like this where I am plotting voter registration for political parties across the 27 districts of New York between the years 2014-2018:</p> <p><a href="https://i.stack.imgur.com/Sb5JI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Sb5JI.png" alt="enter image description here"></a></p> <p>Using Dash, I wanted to specify the colors for my individual traces using the following code:</p> <pre><code>import dash import dash_core_components as dcc import dash_html_components as html import plotly.graph_objs as go import pandas as pd # Read in the data districts = pd.read_csv("https://github.com/thedatasleuth/New-York-Congressional-Districts/blob/master/districts.csv?raw=True") df = districts # Get a list of all the years years = districts['Year'].unique() # Create the app app = dash.Dash() # Populate the layout with HTML and graph components app.layout = html.Div([ html.H2("New York Congressional Districts"), html.Div( [ dcc.Dropdown( id="Year", options=[{ 'label': i, 'value': i } for i in years], value='All Years'), ], style={'width': '25%', 'display': 'inline-block'}), dcc.Graph(id='funnel-graph'), ]) # Add the callbacks to support the interactive componets @app.callback( dash.dependencies.Output('funnel-graph', 'figure'), [dash.dependencies.Input('Year', 'value')]) def update_graph(Year): if Year == "All Years": df_plot = df.copy() else: df_plot = df[df['Year'] == Year] trace1 = go.Bar(x=df_plot ['DISTRICT'], y=df_plot [('DEM')], name='DEM', marker=dict(color=['rgb(3,67,223)'])) trace2 = go.Bar(x=df_plot ['DISTRICT'], y=df_plot [('REP')], name='REP', marker=dict(color=['rgb(229,0,0)'])) trace3 = go.Bar(x=df_plot ['DISTRICT'], y=df_plot [('CON')], name='CON', marker=dict(color=['rgb(132,0,0)'])) trace4 = go.Bar(x=df_plot ['DISTRICT'], y=df_plot [('WOR')], name='WOR', marker=dict(color=['rgb(149,208,252)'])) trace5 = go.Bar(x=df_plot ['DISTRICT'], y=df_plot [('IND')], name='IND', marker=dict(color=['rgb(126,30,156)'])) trace6 = go.Bar(x=df_plot ['DISTRICT'], y=df_plot [('GRE')], name='GRE', marker=dict(color=['rgb(21,176,26)'])) trace7 = go.Bar(x=df_plot ['DISTRICT'], y=df_plot [('WEP')], name='WEP', marker=dict(color=['rgb(255,129,192)'])) trace8 = go.Bar(x=df_plot ['DISTRICT'], y=df_plot [('REF')], name='REF', marker=dict(color=['rgb(206,162,253)'])) trace9 = go.Bar(x=df_plot ['DISTRICT'], y=df_plot [('OTH')], name='OTH', marker=dict(color=['rgb(249,115,6)'])) trace10 = go.Bar(x=df_plot ['DISTRICT'], y=df_plot [('BLANK')], name='BLANK', marker=dict(color=['rgb(101,55,0)'])) return { 'data': [trace1, trace2, trace3, trace4, trace5, trace6, trace7, trace8, trace9, trace10], 'layout': go.Layout( title='{}'.format(Year), barmode='group') } if __name__ == '__main__': app.server.run(port=8000, host='127.0.0.1') </code></pre> <p>However, the colors I want are only showing up in the first district as opposed to all 27 districts as seen <a href="https://ny-districts-by-year.herokuapp.com/" rel="nofollow noreferrer">here</a>.</p>
<p>I took the color out of the list and passed it as a single string, like this:</p> <p><code>marker=dict(color='rgb(3,67,223)')</code></p> <p>When <code>color</code> is given as a list, Plotly assigns one entry per bar, so a one-element list only colors the first bar of each trace; a single color string is applied to every bar.</p>
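<p>For illustration, a minimal sketch of the two forms (the x/y values are made up):</p> <pre><code>import plotly.graph_objs as go

# a single color string: every bar in the trace gets this color
trace_all = go.Bar(x=['d1', 'd2', 'd3'], y=[4, 5, 6],
                   marker=dict(color='rgb(3,67,223)'))

# a list of colors: one entry per bar; a 1-element list colors only the first bar
trace_per_bar = go.Bar(x=['d1', 'd2', 'd3'], y=[4, 5, 6],
                       marker=dict(color=['rgb(3,67,223)', 'rgb(229,0,0)', 'rgb(132,0,0)']))
</code></pre>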
python|pandas|plotly-dash
0
687
52,323,907
Some modules can be imported in python previously but now can only be imported in ipython2
<p>Previously I installed pytorch,PIL,numpy... using pip. After that I installed python3. Thus ipython switched from python2 to python3. I have to use ipython2 to start python2 kernel. These modules still works well in ipython2, but when I run a python script using python, python2, python2.7, they all raise ImportError:</p> <blockquote> <p>ImportError: No module named PIL(numpy,torch...)</p> </blockquote> <p>When run this command: <code>sudo pip install numpy</code></p> <p>return: </p> <blockquote> <p>Requirement already satisfied: numpy in /usr/local/lib/python3.5/dist-packages (1.15.1)</p> </blockquote> <p>when running this command: <code>sudo pip2 install numpy</code></p> <p>return: Requirement already satisfied (use --upgrade to upgrade): numpy in /usr/lib/python2.7/dist-packages</p> <p>When I run: <code>python, import sys, sys.path</code></p> <p>it shows :</p> <blockquote> <p>['', '/home/szy/miniconda2/lib/python27.zip', '/home/szy/miniconda2/lib/python2.7', '/home/szy/miniconda2/lib/python2.7/plat-linux2', '/home/szy/miniconda2/lib/python2.7/lib-tk', '/home/szy/miniconda2/lib/python2.7/lib-old', '/home/szy/miniconda2/lib/python2.7/lib-dynload', '/home/szy/.local/lib/python2.7/site-packages', '/home/szy/miniconda2/lib/python2.7/site-packages']</p> </blockquote> <p>The location of numpy is not among them. and the sys.path in ipython2:</p> <blockquote> <p>['', '/usr/local/bin', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-x86_64-linux-gnu', '/usr/lib/python2.7/lib-tk', '/usr/lib/python2.7/lib-old', '/usr/lib/python2.7/lib-dynload', '/home/szy/.local/lib/python2.7/site-packages', '/usr/local/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages', '/usr/local/lib/python2.7/dist-packages/IPython/extensions', '/home/szy/.ipython']</p> </blockquote> <p>What's wrong? Previous I could run scripts with python and import these modules.</p>
<p>Make sure the python path that you gave in .bashrc is correct. It would also be good to try the same thing inside a conda environment, since your python environments are getting mixed up. For that you can follow the steps below.</p> <p>Create the environment, activate it and register a kernel using the following commands:</p> <pre><code>conda create -n test_env python=2.7
conda activate test_env
conda install ipykernel
ipython kernel install --name test_env --user
</code></pre> <p>Then install the required packages in the environment that you created and try to import them within that environment.</p>
python|linux|python-2.7|numpy
0
688
60,684,643
nearest member aditional atribute analysis
<p>I have following dataframe df(sample):</p> <pre><code> lat lon crs Band1 x y 0 41.855584 20.619156 b'' 1568.0 468388.198606 4.633812e+06 1 41.855584 20.622590 b'' 1562.0 468673.173031 4.633811e+06 2 41.855584 20.626023 b'' 1605.0 468958.147443 4.633810e+06 3 41.859017 20.612290 b'' 1598.0 467819.970900 4.634196e+06 4 41.859017 20.615723 b'' 1593.0 468104.930108 4.634195e+06 5 41.859017 20.619156 b'' 1600.0 468389.889303 4.634193e+06 6 41.859017 20.622590 b'' 1586.0 468674.848486 4.634192e+06 7 41.859017 20.626023 b'' 1577.0 468959.807656 4.634191e+06 8 41.859017 20.629456 b'' 1584.0 469244.766814 4.634190e+06 9 41.859017 20.632889 b'' 1598.0 469529.725959 4.634188e+06 </code></pre> <p>fields <code>x</code> and <code>y</code> are coordinates in xy plane, and <code>Band1</code> is point elevation ( in essence it is z coordinate ). Dataframe is rectangle grid with <code>x</code> and <code>y</code>as center grid coordinate and <code>Band1</code> as grid elevation.</p> <p>How can I detect which of grid cells is highest in <code>Band1</code> against neighboring cells?</p> <p>Expected output in this case is additional column in dataframe with boolean value defining that cell is highest in elevation <code>Band1</code> prior to neighboring 4 cells.</p> <p>I can easily get neigbouring grid distances and indices with:</p> <pre><code>X=df[['x','y']].to_numpy() nbrs = NearestNeighbors(n_neighbors=5, algorithm='ball_tree').fit(X) distances, indices = nbrs.kneighbors(X) </code></pre> <p>With Indices output:</p> <pre><code>array([[0, 1, 5, 6, 4], [1, 2, 0, 6, 7], [2, 1, 7, 8, 6], [3, 4, 5, 0, 6], [4, 5, 3, 0, 6], [5, 6, 4, 0, 1], [6, 7, 5, 1, 2], [7, 8, 6, 2, 1], [8, 9, 7, 2, 6], [9, 8, 7, 2, 6]], dtype=int64) </code></pre> <p>I can loop though dataframe and compare all members, but its resource consuming since i have 1M rows. Any help is appreciated.</p>
<p>IIUC, you can use <code>indices</code> to get the corresponding value in the column <code>Band1</code>, then use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.argmax.html" rel="nofollow noreferrer"><code>np.argmax</code></a> with the parameter axis set to 1 to get the position of the highest value per row. If the value is 0, then it means that the Band1 of this row is higher than the ones of the neighbors like:</p> <pre><code>df['local_high'] = np.argmax(df['Band1'].to_numpy()[indices], axis=1)==0 </code></pre> <p>and you get </p> <pre><code> lat lon crs Band1 x y local_high 0 41.855584 20.619156 b'' 1568.0 468388.198606 4633812.0 False 1 41.855584 20.622590 b'' 1562.0 468673.173031 4633811.0 False 2 41.855584 20.626023 b'' 1605.0 468958.147443 4633810.0 True 3 41.859017 20.612290 b'' 1598.0 467819.970900 4634196.0 False 4 41.859017 20.615723 b'' 1593.0 468104.930108 4634195.0 False 5 41.859017 20.619156 b'' 1600.0 468389.889303 4634193.0 True 6 41.859017 20.622590 b'' 1586.0 468674.848486 4634192.0 False 7 41.859017 20.626023 b'' 1577.0 468959.807656 4634191.0 False 8 41.859017 20.629456 b'' 1584.0 469244.766814 4634190.0 False 9 41.859017 20.632889 b'' 1598.0 469529.725959 4634188.0 False </code></pre>
python|pandas|scikit-learn|sklearn-pandas
0
689
60,356,541
pivot table in specific intervals pandas
<p>I have a one column dataframe which looks like this:</p> <pre><code>Neive Bayes 0 8.322087e-07 1 3.213342e-24 2 4.474122e-28 3 2.230054e-16 4 3.957606e-29 5 9.999992e-01 6 3.254807e-13 7 8.836033e-18 8 1.222642e-09 9 6.825381e-03 10 5.275194e-07 11 2.224289e-06 12 2.259303e-09 13 2.014053e-09 14 1.755933e-05 15 1.889681e-04 16 9.929193e-01 17 4.599619e-05 18 6.944654e-01 19 5.377576e-05 </code></pre> <p>I want to pivot it to wide format but with specific intervals. The first 9 rows should make up 9 columns of the first row, and continue this pattern until the final table has 9 columns and has 9 times fewer rows than now. How would I achieve this?</p>
<p>Using <code>pivot_table</code>:</p> <pre><code>df.pivot_table(columns=df.index % 9, index=df.index // 9, values='Neive Bayes') 0 1 2 3 4 \ 0 8.322087e-07 3.213342e-24 4.474122e-28 2.230054e-16 3.957606e-29 1 6.825381e-03 5.275194e-07 2.224289e-06 2.259303e-09 2.014053e-09 2 6.944654e-01 5.377576e-05 NaN NaN NaN 5 6 7 8 0 0.999999 3.254807e-13 8.836033e-18 1.222642e-09 1 0.000018 1.889681e-04 9.929193e-01 4.599619e-05 2 NaN NaN NaN NaN </code></pre>
python|python-3.x|pandas
2
690
60,480,686
pytorch model summary - forward func has more than one argument
<p>I am using torch summary </p> <pre><code>from torchsummary import summary </code></pre> <p>I want to pass more than one argument when printing the model summary, but the examples mentioned here: <a href="https://stackoverflow.com/a/56762410/5082406">Model summary in pytorch</a> taken only one argument. for e.g.:</p> <pre><code>model = Network().to(device) summary(model,(1,28,28)) </code></pre> <p>The reason is that the forward function takes two arguments as input, e.g.:</p> <pre><code>def forward(self, img1, img2): </code></pre> <p>How do I pass two arguments here?</p>
<p>You can use the example given here: <a href="https://github.com/sksq96/pytorch-summary#multiple-inputs" rel="noreferrer">pytorch summary multiple inputs</a></p> <pre><code>summary(model, [(1, 16, 16), (1, 28, 28)]) </code></pre>
python|pytorch
14
691
61,780,927
Plot the correct point on x-axis
<p>I'd like to put the correct value of the point on x-axis not use the approximated number. For example, using pandas and matplotlib. </p> <p><a href="https://i.stack.imgur.com/kkw9B.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kkw9B.png" alt="enter image description here"></a></p> <p>I would like to show these indexes:</p> <pre><code>index_plot = [4.4,7.6,11.0,14.2,17.4,20.8,24.0,27.1] </code></pre> <p>not </p> <pre><code>[0,5,10,15,20,25] </code></pre> <p>Is it possible to do?</p> <pre><code>plot_4.plot.line(logy=True, style=['+-','o-','.-','s-','x-'],grid=True,figsize=(10,6)).legend(title='algorithm', bbox_to_anchor=(1, 1), fontsize=12) plt.xlabel('Current', fontsize=14) plt.ylabel('Beats per minute (BPM)', fontsize=14) plt.show() </code></pre>
<p>You can do:</p> <pre><code># keep the Axes object; chaining .legend() would return a Legend, not the Axes
ax = plot_4.plot.line(logy=True, style=['+-','o-','.-','s-','x-'], grid=True, figsize=(10,6))
ax.legend(title='algorithm', bbox_to_anchor=(1, 1), fontsize=12)
ax.set_xticks(plot_4.index)
</code></pre>
python|pandas|matplotlib
2
692
61,801,500
Convert date/month/year string variable to include time
<p>I have the following data with some Date/Month/Year string values:</p> <pre><code>import pandas as pd d = {'date': ["31/03/2019", "12/05/2020"]} df = pd.DataFrame(data=d) print(df) date 0 31/03/2019 1 12/05/2020` </code></pre> <p>I would like the <code>date</code> variable to be printed like this:</p> <pre><code>new_d = {'date': ["2019-03-31T00:00:000Z", "2020-05-12T00:00:000Z"]} correct_df = pd.DataFrame(data=new_d) print(correct_df) </code></pre> <p>So the format is <code>"%Y-%m-%dT%H:%M:%S.%fZ"</code>. Does anyone know how I would go about achieving this? I have read numerous articles on how to strip date/month/year and reorder, but not how to include time. The inclusion of time here will always be 'T00:00:000Z' as the original value does not contain time. This is for aesthetic reasons only. </p> <p>I think this can be done with a function like below but it doesn't quite work for me:</p> <pre><code>def convert_to_json_date(incoming_string: str)-&gt; str: return incoming_string as ("%Y-%m-%dT%H:%M:%S.%fZ") </code></pre> <p>Thanks,</p>
<p>You can try:</p> <pre><code>pd.to_datetime(df['date'], dayfirst=True).dt.strftime("%Y-%m-%dT%H:%M:%S.%fZ") </code></pre> <p>Output:</p> <pre><code>0 2019-03-31T00:00:00.000000Z 1 2020-05-12T00:00:00.000000Z Name: date, dtype: object </code></pre>
python|pandas|datetime
0
693
61,703,249
I was trying to create a video out of a numpy array but i was getting an error
<p>I was trying to create a video from a NumPy array, but i was getting this error all the time:</p> <pre><code>cv2.error: OpenCV(4.2.0) c:\projects\opencv-python\opencv\modules\imgproc\src\color.simd_helpers.hpp:94: error: (-2:Unspecified error) in function '__thiscall cv::impl::`anonymous-namespace'::CvtHelper&lt;struct cv::impl::`anonymous namespace'::Set&lt;3,4,-1&gt;,struct cv::impl::A0xe227985e::Set&lt;3,4,-1&gt;,struct cv::impl::A0xe227985e::Set&lt;0,2,5&gt;,2&gt;::CvtHelper(const class cv::_InputArray &amp;,const class cv::_OutputArray &amp;,int)' &gt; Unsupported depth of input image: &gt; 'VDepth::contains(depth)' &gt; where &gt; 'depth' is 4 (CV_32S) </code></pre> <p>Link to the full code - <a href="https://drive.google.com/file/d/1qqat43eJw0Ql46TlFRiGMjqROs2QjCFj/view?usp=sharing" rel="nofollow noreferrer">https://drive.google.com/file/d/1qqat43eJw0Ql46TlFRiGMjqROs2QjCFj/view?usp=sharing</a></p> <p>Here is the structure of the code:</p> <pre><code>import cv2 , time , numpy frame = numpy.asarray(the long list of NumPy array, if you want, it is in the link above) fourcc = cv2.VideoWriter_fourcc(*'mp4v') out = cv2.VideoWriter('output.mp4v' , fourcc , 20.0 , (640 , 480)) out.write(frame) out.release() </code></pre> <p>If I added a frame = frame.astype(numpy.uint8) before it, then a video was created but it was actually a video of just 1 photo of the first frame!!!!</p> <p>Please help me out with this. It would really mean a lot. </p>
<p>I am only able to find a single RGB image in your link. Do you want to make a video with that single image of dimension 640x480x3? If you can ensure that the frame array here holds multiple RGB images, the following code should be enough (note that <code>cv2.VideoWriter</code> expects the frame size as <code>(width, height)</code>):</p> <pre><code>import cv2, numpy frame = numpy.asarray(the long list of NumPy array, if you want, it is in the link above) frame_height, frame_width = frame.shape[1], frame.shape[2] fourcc = cv2.VideoWriter_fourcc(*'XVID') writer = cv2.VideoWriter('output.avi', fourcc, 1, (frame_width, frame_height)) for i, img in enumerate(frame): print("writing frame # {}".format(i)) writer.write(img.astype('uint8')) writer.release() </code></pre>
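<p>A minimal, self-contained sketch that writes a short synthetic clip can also help verify the writer setup before plugging in the real frames; the frame size, codec and file name below are arbitrary choices, not taken from the question:</p> <pre><code>import cv2
import numpy as np

width, height, n_frames = 640, 480, 30
fourcc = cv2.VideoWriter_fourcc(*'XVID')
writer = cv2.VideoWriter('test_output.avi', fourcc, 10.0, (width, height))

for i in range(n_frames):
    # a solid frame whose brightness changes each step, so motion is visible
    img = np.full((height, width, 3), (i * 8) % 256, dtype=np.uint8)
    writer.write(img)

writer.release()
</code></pre>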
python|numpy|opencv|cv2
0
694
54,928,731
Pandas convert two separate columns into a single datetime column?
<p>I have two columns in a pandas dataframe that I want to convert into a single datetime column. The problem is that one of the columns is the week of the year and one is the actual year. It looks something like this:</p> <pre><code>WEEK_OF_YEAR | YEAR 1 2016 2 2016 ... 52 2016 1 2017 2 2017 3 2017 ... 52 2017 1 2018 </code></pre> <p>How can I make another column called "DATE" that is the datetime conversion of both columns together?</p>
<p>If converting a week of year is necessary, you also have to define the <a href="http://strftime.org/" rel="nofollow noreferrer">day of week with <code>%w</code></a>:</p> <blockquote> <p><strong>%w</strong> - Weekday as a decimal number, where 0 is Sunday and 6 is Saturday.</p> </blockquote> <pre><code># 0 selects the Sunday of each week s = df['WEEK_OF_YEAR'].astype(str) + '-0-' + df['YEAR'].astype(str) df['date'] = pd.to_datetime(s, format='%W-%w-%Y') print (df) WEEK_OF_YEAR YEAR date 0 1 2016 2016-01-10 1 2 2016 2016-01-17 2 52 2016 2017-01-01 3 1 2017 2017-01-08 4 2 2017 2017-01-15 5 3 2017 2017-01-22 6 52 2017 2017-12-31 7 1 2018 2018-01-07 </code></pre>
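<p>If the resulting dates should fall on a different weekday, for example the Monday that starts each <code>%W</code> week, the same pattern works with a different day digit. A small sketch; the choice of Monday here is just an illustration, not something from the question:</p> <pre><code>import pandas as pd

df = pd.DataFrame({'WEEK_OF_YEAR': [1, 2, 52, 1], 'YEAR': [2016, 2016, 2016, 2017]})

# %W counts Monday-started weeks, and %w=1 picks the Monday of that week
s = df['WEEK_OF_YEAR'].astype(str) + '-1-' + df['YEAR'].astype(str)
df['date'] = pd.to_datetime(s, format='%W-%w-%Y')
print(df)
</code></pre>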
python|pandas
3
695
54,798,223
Tensor conversion requested dtype int32 for Tensor with dtype int64 - while estimator.export_savedmodel
<p>Trying to export a model built using <a href="https://colab.research.google.com/github/google-research/bert/blob/master/predicting_movie_reviews_with_bert_on_tf_hub.ipynb" rel="nofollow noreferrer">https://colab.research.google.com/github/google-research/bert/blob/master/predicting_movie_reviews_with_bert_on_tf_hub.ipynb</a> with this:</p> <pre><code>def serving_input_fn(): with tf.variable_scope("bert_model"): feature_spec = { "input_ids": tf.FixedLenFeature([MAX_SEQ_LENGTH], tf.int64), "input_mask": tf.FixedLenFeature([MAX_SEQ_LENGTH], tf.int64), "segment_ids": tf.FixedLenFeature([MAX_SEQ_LENGTH], tf.int64), "label_ids": tf.FixedLenFeature([], tf.int64), } serialized_tf_example = tf.placeholder(dtype=tf.string, shape=[None], name='input_example_tensor') receiver_tensors = {'examples': serialized_tf_example} features = tf.parse_example(serialized_tf_example, feature_spec) return tf.estimator.export.ServingInputReceiver(features, receiver_tensors) MODEL_DIR = 'gs://{}/bert/models_servable/{}'.format(BUCKET,'bert') tf.gfile.MakeDirs(MODEL_DIR) estimator._export_to_tpu = False model_file = os.path.join(MODEL_DIR, "bert_model") path = estimator.export_savedmodel(model_file, serving_input_fn) print(path) </code></pre> <p>and it gives following error:</p> <pre><code>--------------------------------------------------------------------------- ValueError Traceback (most recent call last) &lt;ipython-input-106-aaf5ee490ed7&gt; in &lt;module&gt;() 21 model_file = os.path.join(MODEL_DIR, "bert_model") 22 print(model_file) ---&gt; 23 path = estimator.export_savedmodel(model_file, serving_input_fn) 24 print(path) /usr/local/lib/python3.6/dist-packages/tensorflow_estimator/python/estimator/estimator.py in export_savedmodel(self, export_dir_base, serving_input_receiver_fn, assets_extra, as_text, checkpoint_path, strip_default_attrs) 1643 as_text=as_text, 1644 checkpoint_path=checkpoint_path, -&gt; 1645 experimental_mode=model_fn_lib.ModeKeys.PREDICT) 1646 1647 def export_saved_model( /usr/local/lib/python3.6/dist-packages/tensorflow_estimator/python/estimator/estimator.py in export_saved_model(self, export_dir_base, serving_input_receiver_fn, assets_extra, as_text, checkpoint_path, experimental_mode) 721 assets_extra=assets_extra, 722 as_text=as_text, --&gt; 723 checkpoint_path=checkpoint_path) 724 725 def experimental_export_all_saved_models( /usr/local/lib/python3.6/dist-packages/tensorflow_estimator/python/estimator/estimator.py in experimental_export_all_saved_models(self, export_dir_base, input_receiver_fn_map, assets_extra, as_text, checkpoint_path) 825 self._add_meta_graph_for_mode( 826 builder, input_receiver_fn_map, checkpoint_path, --&gt; 827 save_variables, mode=model_fn_lib.ModeKeys.PREDICT) 828 save_variables = False 829 /usr/local/lib/python3.6/dist-packages/tensorflow_estimator/python/estimator/estimator.py in _add_meta_graph_for_mode(self, builder, input_receiver_fn_map, checkpoint_path, save_variables, mode, export_tags, check_variables) 895 labels=getattr(input_receiver, 'labels', None), 896 mode=mode, --&gt; 897 config=self.config) 898 899 export_outputs = model_fn_lib.export_outputs_for_mode( /usr/local/lib/python3.6/dist-packages/tensorflow_estimator/python/estimator/estimator.py in _call_model_fn(self, features, labels, mode, config) 1110 1111 logging.info('Calling model_fn.') -&gt; 1112 model_fn_results = self._model_fn(features=features, **kwargs) 1113 logging.info('Done calling model_fn.') 1114 &lt;ipython-input-90-119a3167bf33&gt; in model_fn(features, labels, mode, params) 
72 else: 73 (predicted_labels, log_probs) = create_model( ---&gt; 74 is_predicting, input_ids, input_mask, segment_ids, label_ids, num_labels) 75 76 predictions = { &lt;ipython-input-89-0f7bd7d1be35&gt; in create_model(is_predicting, input_ids, input_mask, segment_ids, labels, num_labels) 13 inputs=bert_inputs, 14 signature="tokens", ---&gt; 15 as_dict=True) 16 17 # Use "pooled_output" for classification tasks on an entire sentence. /usr/local/lib/python3.6/dist-packages/tensorflow_hub/module.py in __call__(self, inputs, _sentinel, signature, as_dict) 240 dict_inputs = _convert_dict_inputs( 241 inputs, self._spec.get_input_info_dict(signature=signature, --&gt; 242 tags=self._tags)) 243 244 dict_outputs = self._impl.create_apply_graph( /usr/local/lib/python3.6/dist-packages/tensorflow_hub/module.py in _convert_dict_inputs(inputs, tensor_info_map) 442 dict_inputs = _prepare_dict_inputs(inputs, tensor_info_map) 443 return tensor_info.convert_dict_to_compatible_tensor(dict_inputs, --&gt; 444 tensor_info_map) 445 446 /usr/local/lib/python3.6/dist-packages/tensorflow_hub/tensor_info.py in convert_dict_to_compatible_tensor(values, targets) 146 for key, value in sorted(values.items()): 147 result[key] = _convert_to_compatible_tensor( --&gt; 148 value, targets[key], error_prefix="Can't convert %r" % key) 149 return result 150 /usr/local/lib/python3.6/dist-packages/tensorflow_hub/tensor_info.py in _convert_to_compatible_tensor(value, target, error_prefix) 115 """ 116 try: --&gt; 117 tensor = tf.convert_to_tensor_or_indexed_slices(value, target.dtype) 118 except TypeError as e: 119 raise TypeError("%s: %s" % (error_prefix, e)) /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py in convert_to_tensor_or_indexed_slices(value, dtype, name) 1296 """ 1297 return internal_convert_to_tensor_or_indexed_slices( -&gt; 1298 value=value, dtype=dtype, name=name, as_ref=False) 1299 1300 /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py in internal_convert_to_tensor_or_indexed_slices(value, dtype, name, as_ref) 1330 raise ValueError( 1331 "Tensor conversion requested dtype %s for Tensor with dtype %s: %r" % -&gt; 1332 (dtypes.as_dtype(dtype).name, value.dtype.name, str(value))) 1333 return value 1334 else: ValueError: Tensor conversion requested dtype int32 for Tensor with dtype string: 'Tensor("bert_model/ParseExample/ParseExample:0", shape=(?, 128), dtype=string)' </code></pre> <p>The int32 coercing is for one or all of the features: input_ids, input_mask, segment_ids and label_ids. </p> <p>The model code only has an int32 conversion, which doesn't seem to be causing this</p> <pre><code>one_hot_labels = tf.one_hot(labels, depth=num_labels, dtype=tf.float32) predicted_labels = tf.squeeze(tf.argmax(log_probs, axis=-1, output_type=tf.int32)) </code></pre> <p>Where is the int32 coercing happening and how to fix? Thanks in advance!</p>
<p>The BERT module requires <code>input_ids</code>, <code>input_mask</code> and <code>segment_ids</code> to be of type <code>tf.int32</code>. To fix the error, cast them from <code>tf.int64</code> to <code>tf.int32</code> as below:</p> <pre><code>def create_model(is_predicting, input_ids, input_mask, segment_ids, labels, num_labels): """Creates a classification model.""" input_ids = tf.cast(input_ids, tf.int32) input_mask = tf.cast(input_mask, tf.int32) segment_ids = tf.cast(segment_ids, tf.int32) bert_module = hub.Module( BERT_MODEL_HUB, trainable=True) </code></pre>
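<p>An alternative, if you would rather not modify <code>create_model</code>, is to cast right after parsing inside <code>serving_input_fn</code>. This is only a sketch of the idea, not tested against the notebook:</p> <pre><code>def serving_input_fn():
    feature_spec = {
        "input_ids": tf.FixedLenFeature([MAX_SEQ_LENGTH], tf.int64),
        "input_mask": tf.FixedLenFeature([MAX_SEQ_LENGTH], tf.int64),
        "segment_ids": tf.FixedLenFeature([MAX_SEQ_LENGTH], tf.int64),
        "label_ids": tf.FixedLenFeature([], tf.int64),
    }
    serialized_tf_example = tf.placeholder(dtype=tf.string, shape=[None],
                                           name='input_example_tensor')
    receiver_tensors = {'examples': serialized_tf_example}
    features = tf.parse_example(serialized_tf_example, feature_spec)
    # cast the int64 features produced by parse_example down to int32
    for key in ("input_ids", "input_mask", "segment_ids", "label_ids"):
        features[key] = tf.cast(features[key], tf.int32)
    return tf.estimator.export.ServingInputReceiver(features, receiver_tensors)
</code></pre>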
python|tensorflow
7
696
55,012,046
Trouble with ignore_index and concat()
<p>I'm new to Python. I have 2 dataframes each with a single column. I want to join them together and keep the values based on their respective positions in each of the tables.</p> <p>My code looks something like this:</p> <pre><code>huh = pd.DataFrame(columns=['result'], data=['a','b','c','d']) huh2 = pd.DataFrame(columns=['result2'], data=['aa','bb','cc','dd']) huh2 = huh2.sort_values('result2', ascending=False) tmp = pd.concat([huh,huh2], ignore_index=True, axis=1) tmp </code></pre> <p>From the documentation it looks like the <code>ignore_index</code> flag and <code>axis=1</code> should be sufficient to achieve this but the results obviously disagree. </p> <p>Current Output:</p> <pre><code> 0 1 0 a aa 1 b bb 2 c cc 3 d dd </code></pre> <p>Desired Output:</p> <pre><code> result result2 0 a dd 1 b cc 2 c bb 3 d aa </code></pre>
<p>With <code>ignore_index=True</code>, <code>concat</code> only ignores the labels along the concatenation axis: when you concatenate horizontally (<code>axis=1</code>) the column names are ignored, and when you concatenate vertically the indexes are ignored. You cannot ignore both, so the row indexes of the two frames are still aligned here.</p> <p>In your case, I would recommend setting the index of "huh2" to be the same as that of "huh".</p> <pre><code>pd.concat([huh, huh2.set_index(huh.index)], axis=1) result result2 0 a dd 1 b cc 2 c bb 3 d aa </code></pre> <p>If you aren't dealing with custom indices, <code>reset_index</code> will suffice.</p> <pre><code>pd.concat([huh, huh2.reset_index(drop=True)], axis=1) result result2 0 a dd 1 b cc 2 c bb 3 d aa </code></pre>
python|pandas
3
697
55,027,737
gRPC Name Resolution Failure
<p>While trying to run TensorFlow Serving with Docker, I am getting the following error when issuing a client request using gRPC with the following code:</p> <pre><code>`python client.py --server=172.17.0.2/16:9000 --image=./test_images/image2.jpg debug_error_string = "{"created":"@1551888435.208113000","description":"Failed to create subchannel","file":"src/core/ext/filters/client_channel/client_channel.cc","file_line":2267,"referenced_errors":[{"created":"@1551888435.208109000","description":"Name resolution failure","file":"src/core/ext/filters/client_channel/request_routing.cc","file_line":165,"grpc_status":14}]}"` </code></pre> <p>Information about my environment:</p> <blockquote> <p>OS: macOS virtual env.: Anaconda 3 Python 3.6 gRPC/tools 1.19</p> </blockquote> <p>Would you please help me in resolving the issue? </p>
<p>This happens when the channel is in TRANSIENT_FAILURE and the load balancing policy can't find any ready backend to send the request to. </p> <p>Please file an issue on <a href="https://github.com/grpc/grpc/" rel="nofollow noreferrer">https://github.com/grpc/grpc/</a> detailing what you did, hopefully with more log/tracing context, so that we can better help you.</p>
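<p>To gather that log/tracing context, one option is to turn on gRPC's built-in tracing before the channel is created. A sketch of the idea: the environment variables are gRPC's standard debug switches, and the server address below is just a placeholder, not a recommendation:</p> <pre><code>import os

# set these before grpc is imported, since the C core reads them at load time
os.environ['GRPC_VERBOSITY'] = 'DEBUG'
os.environ['GRPC_TRACE'] = 'all'

import grpc

channel = grpc.insecure_channel('172.17.0.2:9000')  # placeholder host:port
try:
    grpc.channel_ready_future(channel).result(timeout=10)
    print('channel is ready')
except grpc.FutureTimeoutError:
    print('could not connect to the server')
</code></pre>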
docker|tensorflow-serving|grpc-python
1
698
49,641,753
how to match a character variable with a regex defined in another variable?
<p>Consider this simple example</p> <pre><code>import pandas as pd mydata = pd.DataFrame({'mystring' : ['heLLohelloy1', 'hAllohallo'], 'myregex' : ['hello.[0-9]', 'ulla']}) mydata Out[3]: myregex mystring 0 hello.[0-9] heLLohelloy1 1 ulla hAllohallo </code></pre> <p>I would like to create a variable <code>flag</code> that identifies the rows where <code>mystring</code> matches the regex in <code>myregex</code> for the same row. </p> <p>That is, in the example, only the first row <code>heLLohelloy1</code> matches the regex <code>hello.[0-9]</code>. Indeed, <code>hAllohallo</code> does not match the regex <code>ulla</code>.</p> <p>How can I do this as efficiently as possible in Pandas? Here we are talking about millions of observations (the data still fits into RAM).</p>
<p>You can use the <code>re</code> library and the <code>apply</code> function to do the following:</p> <pre><code>import re # apply function mydata['flag'] = mydata.apply(lambda row: bool(re.search(row['myregex'], row['mystring'])), axis=1) ### to convert bool to int - optional ### mydata['flag'] = mydata['flag'].astype(int) myregex mystring flag 0 hello.[0-9] heLLohelloy1 True 1 ulla hAllohallo False </code></pre>
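<p>Since the question mentions millions of rows, a possibly faster variant is to evaluate each distinct pattern once with the vectorised <code>str.contains</code>, looping only over the unique regexes rather than over every row. A sketch; the data below is just the question's two rows repeated to mimic the scale:</p> <pre><code>import pandas as pd

mydata = pd.DataFrame({'mystring': ['heLLohelloy1', 'hAllohallo'] * 500000,
                       'myregex': ['hello.[0-9]', 'ulla'] * 500000})

flag = pd.Series(False, index=mydata.index)
for pattern, idx in mydata.groupby('myregex').groups.items():
    # one vectorised pass per distinct pattern
    flag.loc[idx] = mydata.loc[idx, 'mystring'].str.contains(pattern, regex=True)
mydata['flag'] = flag
print(mydata.head())
</code></pre>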
python|regex|pandas
2
699
49,490,262
Combining graphs: is there a TensorFlow import_graph_def equivalent for C++?
<p>I need to extend exported models with a custom input and output layer. I have found out this can <em>easily</em> be done with:</p> <pre><code>with tf.Graph().as_default() as g1: # actual model in1 = tf.placeholder(tf.float32,name="input") ou1 = tf.add(in1,2.0,name="output") with tf.Graph().as_default() as g2: # model for the new output layer in2 = tf.placeholder(tf.float32,name="input") ou2 = tf.add(in2,2.0,name="output") gdef_1 = g1.as_graph_def() gdef_2 = g2.as_graph_def() with tf.Graph().as_default() as g_combined: #merge together x = tf.placeholder(tf.float32, name="actual_input") # the new input layer # Import gdef_1, which performs f(x). # "input:0" and "output:0" are the names of tensors in gdef_1. y, = tf.import_graph_def(gdef_1, input_map={"input:0": x}, return_elements=["output:0"]) # Import gdef_2, which performs g(y) z, = tf.import_graph_def(gdef_2, input_map={"input:0": y}, return_elements=["output:0"]) sess = tf.Session(graph=g_combined) print "result is: ", sess.run(z, {"actual_input:0":5}) #result is: 9 </code></pre> <p>this works fine. </p> <p>However instead of passing a dataset in arbitrary shape, I need to give a pointer as network input. The problem is, I can't think of any solution for this inside python (defining and passing a pointer), and when developing a network with the <code>C++ Api</code> I can't find an equivalent to the <code>tf.import_graph_def</code> function.</p> <p>Does this have a different name in C++ or is there an other way to merge two graphs/models in C++?</p> <p>Thanks for any advice</p>
<p>It is not as easy as in Python.</p> <p>You can load a <code>GraphDef</code> with something like this:</p> <pre><code>#include &lt;string&gt; #include &lt;tensorflow/core/framework/graph.pb.h&gt; #include &lt;tensorflow/core/platform/env.h&gt; tensorflow::GraphDef graph; std::string graphFileName = "..."; auto status = tensorflow::ReadBinaryProto( tensorflow::Env::Default(), graphFileName, &amp;graph); if (!status.ok()) { /* Error... */ } </code></pre> <p>Then you can use it to create a session:</p> <pre><code>#include &lt;tensorflow/core/public/session.h&gt; tensorflow::Session *newSession; auto status = tensorflow::NewSession(tensorflow::SessionOptions(), &amp;newSession); if (!status.ok()) { /* Error... */ } status = session-&gt;Create(graph); if (!status.ok()) { /* Error... */ } </code></pre> <p>Or to extend the graph of an existing one:</p> <pre><code>status = session-&gt;Extend(graph); if (!status.ok()) { /* Error... */ } </code></pre> <p>This way you can put several <code>GraphDef</code>s into the same graph. However, there are no additional facilities to extract particular nodes, nor to avoid names collisions - you have to find the nodes yourself and you have to ensure that the <code>GraphDef</code>s do not have conflicting op names. As an example, I use this function to find all the nodes with a name matching a given regular expression, sorted by name:</p> <pre><code>#include &lt;vector&gt; #include &lt;regex&gt; #include &lt;tensorflow/core/framework/node_def.pb.h&gt; std::vector&lt;const tensorflow::NodeDef *&gt; GetNodes(const tensorflow::GraphDef &amp;graph, const std::regex &amp;regex) { std::vector&lt;const tensorflow::NodeDef *&gt; nodes; for (const auto &amp;node : graph.node()) { if (std::regex_match(node.name(), regex)) { nodes.push_back(&amp;node); } } std::sort(nodes.begin(), nodes.end(), [](const tensorflow::NodeDef *lhs, const tensorflow::NodeDef *rhs) { return lhs-&gt;name() &lt; rhs-&gt;name(); }); return nodes; } </code></pre>
python|c++|pointers|tensorflow|merge
2