Column      Type           Min    Max
Unnamed: 0  int64          0      378k
id          int64          49.9k  73.8M
title       string length  15     150
question    string length  37     64.2k
answer      string length  37     44.1k
tags        string length  5      106
score       int64          -10    5.87k
1,000
64,317,235
How do I install researchpy on Jupyter Notebook 6.0.3
<p>I am trying to install <code>researchpy</code> with <code>pip install researchpy</code> or <code>pip3 install researchpy</code> on Jupyter but it gives the following error:</p> <pre><code>ModuleNotFoundError Traceback (most recent call last) ModuleNotFoundError: No module named 'researchpy' </code></pre> <p>Could you help me to install it?</p>
<p>Run the command below from a terminal (for example cmd.exe):</p> <pre><code>pip install researchpy </code></pre>
python|pandas|jupyter-notebook
1
1,001
64,278,022
Reversing the first two columns of a DataFrame and appending results
<p>I have a Dataframe that looks similar to the following:</p> <pre><code>value1 value2 value3 A B 1 C D 2 E F 3 </code></pre> <p>I want to create a DataFrame that looks something like this:</p> <pre><code>value1 value2 value3 A B 1 C D 2 E F 3 B A 1 D C 2 F E 3 </code></pre> <p>In other words, I want to switch around <code>value1</code> and <code>value2</code> while retaining the same <code>value3</code>, is there any way to do this?</p>
<p>I would use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.rename.html" rel="nofollow noreferrer"><code>rename</code></a> and <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.append.html" rel="nofollow noreferrer"><code>append</code></a> following way:</p> <pre><code>import pandas as pd df = pd.DataFrame({'value1':['A','C','E'],'value2':['B','D','F'],'value3':[1,2,3]}) df2 = df.append(df.rename(columns={'value1':'value2','value2':'value1'}), sort=False) print(df2) </code></pre> <p>Output:</p> <pre><code> value1 value2 value3 0 A B 1 1 C D 2 2 E F 3 0 B A 1 1 D C 2 2 F E 3 </code></pre>
python|pandas|dataframe
1
1,002
47,917,288
Tensorflow Dataset.from_generator blocks input?
<p>I want to build a project that requests will be put to a python Queue at any arbitrary time, and a set of tensorflow models consume those requests from the queue, and return their results immediately.</p> <p>The models are in different threads, different tf.Graph, but the structure and weight values are the same.</p> <p>Every model use tf.data.Dataset.from_generator to encapsule a python iterator which fetch request from the queue.</p> <p>The problem is, when there is more than one models, request might be blocked until future request come. From testing results, it seems the python iterator indeed got the request just at the time it was put in the queue, but no result came from the model. Moreover, there seems no request was discarded, but blocked by maybe the tf Dataset iterator.</p> <p>Here is my test code:</p> <pre><code># -*- coding: utf-8 -*- import tensorflow as tf import numpy as np import sys import random import time from queue import Queue from concurrent.futures import ThreadPoolExecutor thread_count=int(sys.argv[1]) request_queue=Queue(128) def data_iter(): while True: yield request_queue.get() def task(): with tf.Graph().as_default(): ds=tf.data.Dataset.from_generator(data_iter, (tf.int32), output_shapes=([1, 8])) sample=ds.make_one_shot_iterator().get_next() with tf.Session() as sess: coord=tf.train.Coordinator() threads=tf.train.start_queue_runners(sess=sess, coord=coord) while not coord.should_stop(): try: result=sess.run(sample) print(result) except: coord.request_stop() coord.join(threads) executor=ThreadPoolExecutor(thread_count) try: for i in range(thread_count): executor.submit(task) rand=random.Random() for i in range(100): request_queue.put(np.full((1, 8), i, 'int32')) time.sleep(1e-3)#to let the model get request from the request_queue t=rand.randint(5,10) print('round {}, request_queue size is about {}, sleeping {} secs...'.format(i, request_queue.qsize(), t)) time.sleep(t) finally: for i in range(thread_count): request_queue.put(None) executor.shutdown() </code></pre> <p>Environment: python 3.5.3, tensorflow 1.4.0</p> <p>testing result:</p> <ol> <li>Running with one model: <code>python tf_ds_test.py 1</code></li> </ol> <p>The result looks like:</p> <pre><code>round 0, request_queue size is about 1, sleeping 6 secs... 2017-12-21 10:42:24.924251: I C:\tf_jenkins\home\workspace\rel-win\M\windows\PY\35\tensorflow\core\platform\cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2 [[0 0 0 0 0 0 0 0]] [[1 1 1 1 1 1 1 1]] round 1, request_queue size is about 0, sleeping 6 secs... [[2 2 2 2 2 2 2 2]] round 2, request_queue size is about 0, sleeping 5 secs... [[3 3 3 3 3 3 3 3]] round 3, request_queue size is about 0, sleeping 7 secs... [[4 4 4 4 4 4 4 4]] round 4, request_queue size is about 0, sleeping 6 secs... [[5 5 5 5 5 5 5 5]] round 5, request_queue size is about 0, sleeping 7 secs... ... </code></pre> <p>Everything goes well.</p> <ol start="2"> <li>But when running with 32 models: <code>python tf_ds_test.py 32</code></li> </ol> <p>The result looks like:</p> <pre><code>2017-12-21 10:45:41.660251: I C:\tf_jenkins\home\workspace\rel-win\M\windows\PY\35\tensorflow\core\platform\cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2 round 0, request_queue size is about 1, sleeping 9 secs... [[0 0 0 0 0 0 0 0]] [[1 1 1 1 1 1 1 1]] round 1, request_queue size is about 0, sleeping 5 secs... 
round 2, request_queue size is about 0, sleeping 8 secs... round 3, request_queue size is about 0, sleeping 10 secs... [[4 4 4 4 4 4 4 4]] [[2 2 2 2 2 2 2 2]] [[3 3 3 3 3 3 3 3]] round 4, request_queue size is about 0, sleeping 8 secs... round 5, request_queue size is about 0, sleeping 6 secs... round 6, request_queue size is about 0, sleeping 10 secs... [[6 6 6 6 6 6 6 6]] [[5 5 5 5 5 5 5 5]] round 7, request_queue size is about 0, sleeping 9 secs... [[7 7 7 7 7 7 7 7]] round 8, request_queue size is about 0, sleeping 5 secs... round 9, request_queue size is about 0, sleeping 10 secs... round 10, request_queue size is about 0, sleeping 6 secs... round 11, request_queue size is about 0, sleeping 10 secs... [[8 8 8 8 8 8 8 8]] round 12, request_queue size is about 0, sleeping 8 secs... </code></pre> <p>The request blocked! The python iterator consumed the request immediately, but the model give no result until an arbitrary period maybe until the model got its next request.</p> <p>Does anybody has any idea? How can I let those model return result immediately?</p>
<p>Could you modify the loop that generates elements into the queue to:</p> <pre><code>for i in range(100): request_queue.put(np.full((1, 8), i, 'int32')) print('round {}, queue size {}'.format(i, request_queue.qsize())) </code></pre> <p>and share the output?</p> <p>I tried reproducing your issue (using the nightly build of TF), but things ran smoothly even with 1000 tasks and 10000 iterations of the loop.</p> <p>Could you also try this with the nightly build of TF?</p>
python|multithreading|tensorflow
0
1,003
47,984,900
Profiling Execution Time of ResNet
<p>I used the CIFAR-10 dataset to train and evaluate ResNet on an Intel i7 CPU. (The ResNet model is in Tensorflow: <a href="https://github.com/tensorflow/models/tree/master/official/resnet" rel="nofollow noreferrer">https://github.com/tensorflow/models/tree/master/official/resnet</a>)</p> <p>Now, I am interested in profiling the app, i.e. determining the execution time of the top functions. Analysis with the function sort_stats() gives information about the top function only. Moreover, using the profiling tool cProfile throws the following error:</p> <pre><code>python3 -m profile cifar10_main.py File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/platform/app.py", line 124, in run _sys.exit(main(argv)) TypeError: main() takes 0 positional arguments but 1 was given </code></pre> <p>It would be great if someone can help me out with collecting (nearly accurate, function-level or line-level) profiling information for ResNets. Thank you :)</p>
<p>I'd use <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/profiler/README.md" rel="nofollow noreferrer">tf.profiler</a>. Unless you're executing eagerly, the interesting performance issues (once the graph is built) will be in TensorFlow C++ code rather than Python.</p>
tensorflow|neural-network|profiling|conv-neural-network|resnet
0
1,004
47,743,832
Assign requires shapes of both tensors to match. lhs shape= [1024] rhs shape= [1200]
<p>I am new to TensorFlow and trying to create my own NMT based on the tutorial from <code>https://github.com/tensorflow/nmt/</code>.<br> I am experiencing an error upon restoring the trained model for inference:</p> <blockquote> <p>Assign requires shapes of both tensors to match. lhs shape= [1024] rhs shape= [1200]</p> </blockquote> <p>This is the code where I think it occurs:</p> <pre><code>def _build_decoder_(self, encoder_outputs, encoder_state): tgt_sos_id = tf.cast(self.output_vocab_table.lookup(tf.constant('&lt;SOS&gt;')), tf.int32) tgt_eos_id = tf.cast(self.output_vocab_table.lookup(tf.constant('&lt;EOS&gt;')), tf.int32) with tf.variable_scope('decoder', reuse=self.reuse): batch_size = tf.size(self.batched_input.source_lengths) decoder_cell = tf.nn.rnn_cell.BasicLSTMCell(self.num_units) source_lengths = self.batched_input.source_lengths attention_mechanism = tf.contrib.seq2seq.LuongAttention(self.num_units, encoder_outputs, memory_sequence_length=source_lengths) decoder_cell = tf.contrib.seq2seq.AttentionWrapper(decoder_cell, attention_mechanism, attention_layer_size=self.num_units / 2, alignment_history=( self.mode == tf.contrib.learn.ModeKeys.INFER)) initial_state = decoder_cell.zero_state(dtype=tf.float32, batch_size=batch_size) if self.mode != tf.contrib.learn.ModeKeys.INFER: target = self.batched_input.target target_lengths = self.batched_input.target_lengths embed_input = tf.nn.embedding_lookup(self.dec_embeddings, target) helper = tf.contrib.seq2seq.TrainingHelper(embed_input, target_lengths) decoder = tf.contrib.seq2seq.BasicDecoder(decoder_cell, helper=helper, initial_state=initial_state, ) max_decoder_length = tf.reduce_max(target_lengths) decoder_outputs, final_context_state, _ = tf.contrib.seq2seq.dynamic_decode(decoder=decoder, impute_finished=True, ) sample_id = decoder_outputs.sample_id # logits = decoder_outputs.rnn_output logits = self.projection_layer(decoder_outputs.rnn_output) else: start_tokens = tf.fill([batch_size], tgt_sos_id) end_token = tgt_eos_id helper = tf.contrib.seq2seq.GreedyEmbeddingHelper( self.dec_embeddings, start_tokens, end_token) decoder = tf.contrib.seq2seq.BasicDecoder(decoder_cell, helper, initial_state=initial_state, output_layer=self.projection_layer) max_encoder_length = tf.reduce_max(self.batched_input.source_lengths) maximum_iterations = tf.to_int32( tf.round(tf.to_float(max_encoder_length) * self.decoding_length_factor)) decoder_outputs, final_context_state, _ = tf.contrib.seq2seq.dynamic_decode(decoder=decoder, impute_finished=True, maximum_iterations=maximum_iterations) logits = decoder_outputs.rnn_output sample_id = decoder_outputs.sample_id return logits, sample_id, final_context_state </code></pre> <p>I think the problem here is that the <code>batch_size</code> is being saved when saving the model, however I cannot think of any other way to fix this.</p> <p>I am using this code to save and to restore:</p> <pre><code>self.saver = tf.train.Saver(tf.global_variables(), max_to_keep=5) infer_model.saver.restore(infer_sess, latest) </code></pre> <p>I also tried setting <code>reshape=True</code> but I still can't restore the model.</p>
<p>I fixed it now; it was just a human error: I had swapped the order of the <code>batch_size</code> and <code>num_units</code> arguments in another method.</p>
tensorflow
0
1,005
58,950,272
How to do multi-head learning
<p>I have about 5 models that work pretty well trained individually but I want to fuse them together in order to have one big model. I'm looking into it because one big model is more easy to update (in production) than many small model this is an image of what I want to achieve. <a href="https://i.stack.imgur.com/hyQdX.png" rel="noreferrer"><img src="https://i.stack.imgur.com/hyQdX.png" alt="enter image description here"></a></p> <p>my question are, is it ok to do it like this ? having one dataset per head model, how am I supposed to train the whole model ?</p>
<blockquote> <p>my question are, is it ok to do it like this</p> </blockquote> <p>Sure you can do that. This approach is called <a href="https://ruder.io/multi-task/" rel="nofollow noreferrer">multi-task learning</a>. Depending on your datasets and what you are trying to do, it will maybe even increase the performance. Microsoft used a <a href="https://github.com/namisan/mt-dnn" rel="nofollow noreferrer">multi-task model</a> to achieve some good results for the NLP Glue benchmark, but they also noted that you can increase the performance further by finetuning the joint model for each individual task.</p> <blockquote> <p>having one dataset per head model, how am I supposed to train the whole model?</p> </blockquote> <p>All you need is pytorch <a href="https://pytorch.org/docs/stable/nn.html#modulelist" rel="nofollow noreferrer">ModuleList</a>:</p> <pre class="lang-py prettyprint-override"><code>#please note this is just pseudocode and I'm not well versed with computer vision #therefore you need to check if resnet50 import is correct and look #for the imports of the task specific stuff from torch import nn from torchvision.models import resnet50 class MultiTaskModel(nn.Module): def __init__(self): #shared part self.resnet50 = resnet50() #task specific stuff self.tasks = nn.ModuleList() self.tasks.add_module('depth', Depth()) self.tasks.add_module('denseflow', Denseflow()) #... def forward(self, tasktag, ...): #shared part resnet_output = self.resnet50(...) #task specific parts if tasktag == 'depth': return self.tasks.depth(resnet_output) elif tasktag == 'denseflow': return self.tasks.denseflow(resnet_output) #... </code></pre>
machine-learning|deep-learning|pytorch
4
1,006
59,002,475
Jupyter notebook magic command - use %who DataFrame to get list of DataFrames?
<p>I can print all interactive variables, with some minimal formatting using <code>%who</code>.</p> <p>If I only want defined DataFrames, <code>%who DataFrame</code> works great. </p> <p><strong>Is there a way to send the output of <code>%who DataFrame</code> to a list?</strong> </p>
<p>I believe <a href="https://ipython.readthedocs.io/en/stable/interactive/magics.html#magic-who_ls" rel="nofollow noreferrer"><code>%who_ls</code></a> is what you're looking for: </p> <blockquote> <p>Return a sorted list of all interactive variables. <br><br> If arguments are given, only variables of types matching these arguments are returned.</p> </blockquote> <p>Example use - </p> <pre><code>In [1]: x, y, z = 1, 1, 1 In [2]: ints = %who_ls int In [3]: print(ints) ['x', 'y', 'z'] </code></pre>
python|pandas|jupyter-notebook
2
1,007
58,620,311
return missing dates Python
<p>I have a CSV file with 1600 dates and I'm trying to find all missing dates. For example:<br> 03-10-2019<br> 01-10-2019<br> 29-09-2019<br> 28-09-2019<br> should return: 02-10-2019, 30-09-2019.</p> <p>Here's what I've written:</p> <pre><code>with open('measurements.csv','r') as csvfile: df = pd.read_csv(csvfile, delimiter=',') timestamps = df['observation_time'] #Getting only the date for line in timestamps: date_str = line try: # convert string to time date = date_time_obj = datetime.datetime.strptime(date_str, '%Y-%m-%d %H:%M:%S') dates.append(date) except: print("Date parsing failed") dates = pd.DataFrame(dates,columns =['actual_date']) pd.date_range(start = dates.min(), end = dates.max()).difference(dates.index) </code></pre> <p>This returns the error:</p> <blockquote> <p>"Cannot convert input [actual_date 2018-09-17 22:00:00 dtype: datetime64[ns]] of type to Timestamp"</p> </blockquote>
<p>Idea is use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.asfreq.html" rel="nofollow noreferrer"><code>DataFrame.asfreq</code></a> for add all missing values to <code>DatetimeIndex</code>, so possible filter by <a href="http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#boolean-indexing" rel="nofollow noreferrer"><code>boolean indexing</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.isna.html" rel="nofollow noreferrer"><code>Series.isna</code></a>:</p> <pre><code>df['observation_time'] = pd.to_datetime(df['observation_time'], dayfirst=True) df1 = df.set_index(df['observation_time']).sort_index().asfreq('d') print (df1) observation_time observation_time 2019-09-28 2019-09-28 2019-09-29 2019-09-29 2019-09-30 NaT 2019-10-01 2019-10-01 2019-10-02 NaT 2019-10-03 2019-10-03 dates = df1.index[df1['observation_time'].isna()] print (dates ) DatetimeIndex(['2019-09-30', '2019-10-02'], dtype='datetime64[ns]', name='observation_time', freq=None) </code></pre>
python-3.x|pandas|dataframe|datetime
1
1,008
58,884,584
TensorFlow Serving Cluster Architecture
<p>Folks, I am writing an application which will produce recommendations based on ML model calls. The application will have different models, some of which should be called in sequence. A data scientist should be able to upload a model into the system. This means that the application should have logic to store model metadata as well as the address of a model server. A model server will be instantiated dynamically on a model upload event. I would like to use a cluster of TensorFlow Serving here, however I am stuck on a question of architecture. Is there a way to have something like a service registry for TensorFlow servers? What is the best way to build such a cluster of servers with different models?</p>
<p>I need some clarification on what you're trying to do. Is the feature vector for all the models the same? If not then it will be quite a bit harder to do this. Trained models are encapsulated in the SavedModel format. It sounds like you're trying to train an ensemble, but some of the models are frozen? You could certainly write a custom component to make an inference request as part of the input to Trainer, if that's what you need.</p> <p>UPDATE 1 From your comment below it sounds like what you might be looking for is a service mesh, such as Istio for example. That would help manage the connections between services running inside containers, and the connections between users and services. In this case tf.Serving instances running your models are the services, but the basic request-response pattern is the same. Does that help?</p>
tensorflow|tensorflow-serving|tfx
1
1,009
58,808,380
How to change pandas dataframe strings into integers?
<p>Here I have a csv file that I am attempting to turn into all integer values, but I am not sure how to do it. I have looked at other posts but they don't seem to be working.</p> <p>Here is my csv:</p> <pre><code>X1,X2,X3,X4,X5,X6,X7,X8,X9,PosNeg x,x,x,x,o,o,x,o,o,positive x,x,x,x,o,o,o,x,o,positive x,x,x,x,o,o,o,o,x,positive x,x,x,x,o,o,o,b,b,positive x,x,x,x,o,o,b,o,b,positive x,x,x,x,o,o,b,b,o,positive x,x,x,x,o,b,o,o,b,positive x,x,x,x,o,b,o,b,o,positive x,x,x,x,o,b,b,o,o,positive x,x,x,x,b,o,o,o,b,positive x,x,x,x,b,o,o,b,o,positive x,x,x,x,b,o,b,o,o,positive x,x,x,o,x,o,x,o,o,positive x,x,x,o,x,o,o,x,o,positive x,x,x,o,x,o,o,o,x,positive x,x,x,o,x,o,o,b,b,positive x,x,x,o,x,o,b,o,b,positive x,x,x,o,x,o,b,b,o,positive x,x,x,o,x,b,o,o,b,positive x,x,x,o,x,b,o,b,o,positive x,x,x,o,x,b,b,o,o,positive </code></pre> <p>I would like to transform it into something like this:</p> <pre><code>1,1,1,1,1,1,0,0,0,1 </code></pre> <p>Thank you.</p>
<p>For this you can use the replace() function of your pandas Dataframe to first replace all "x" values with 1 and then afterwards "o" with 0 as such:</p> <pre><code>&gt;&gt;&gt; df = pd.read_csv(r"&lt;PATH&gt;") &gt;&gt;&gt; df 1 2 3 0 x x o 1 x x o 2 o o x 3 x o x &gt;&gt;&gt; df = df.replace("x", 1) &gt;&gt;&gt; df 1 2 3 0 1 1 o 1 1 1 o 2 o o 1 3 1 o 1 &gt;&gt;&gt; df = df.replace("o", 0) &gt;&gt;&gt; df 1 2 3 0 1 1 0 1 1 1 0 2 0 0 1 3 1 0 1 </code></pre> <p><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.replace.html" rel="nofollow noreferrer">Pandas Documentation</a></p>
python|pandas|data-science
1
1,010
70,272,926
How do I split a dataframe column's values in pandas to get another column using python?
<p>I have created a dataframe like this from a list.</p> <pre><code> Name 0 Security Name % to Net Assets* DEBENtURES 0.04 1 Britannia Industries Ltd. EQUity &amp; RELAtED 96.83 2 HDFC Bank 6.98 3 ICICI 4.82 4 Infosys 4.37 </code></pre> <p>using <code>df = pd.DataFrame(list1, columns=['Name'])</code> I want to split <code>Name</code> in another such that the dataframe looks like this:</p> <pre><code> Name value 0 Security Name % to Net Assets* DEBENtURES 0.04 1 Britannia Industries Ltd. EQUity &amp; RELAtED 96.83 2 HDFC Bank 6.98 3 ICICI 4.82 4 Infosys 4.37 </code></pre> <p>I tried doing something like <code>df = pd.DataFrame(df.row.str.split(&quot;,&quot;, 1).tolist(), columns=[&quot;Security Name&quot;, &quot;Weights&quot;])</code> however it gave an error like <code>AttributeError: 'DataFrame' object has no attribute 'row'</code></p> <p>How exactly do I split a column such that I get another column? Please help</p>
<p>There is no column <code>row</code> and no separator comma, so use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.rsplit.html" rel="nofollow noreferrer"><code>Series.str.rsplit</code></a> for split from right with <code>n=1</code> for first space:</p> <pre><code>print (df.columns.tolist()) Name df[['Name','value']] = df['Name'].str.rsplit(n=1, expand=True) print (df) Name value 0 Security Name % to Net Assets* DEBENtURES 0.04 1 Britannia Industries Ltd. EQUity &amp; RELAtED 96.83 2 HDFC Bank 6.98 3 ICICI 4.82 4 Infosys 4.37 </code></pre> <p>If possible some numbers missing is possible use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.extract.html" rel="nofollow noreferrer"><code>Series.str.extract</code></a> with match numbers after last space in regex:</p> <pre><code>print (df) Name 0 Security Name % to Net Assets* DEBENtURES 0.04 1 Britannia Industries Ltd. EQUity &amp; RELAtED 96.83 2 HDFC Bank 6.98 3 ICICI 4.82 4 Infosys 4.37 5 new 6 aa 58 df = (df['Name'].str.extract('(?P&lt;Name&gt;.*)\s+(?P&lt;value&gt;\d+\.\d+|\d+)$') .fillna({'Name':df['Name']})) print (df) Name value 0 Security Name % to Net Assets* DEBENtURES 0.04 1 Britannia Industries Ltd. EQUity &amp; RELAtED 96.83 2 HDFC Bank 6.98 3 ICICI 4.82 4 Infosys 4.37 5 new NaN 6 aa 58 </code></pre>
python|pandas|dataframe
2
1,011
70,319,286
Question about Google Colab Transformer Tutorial
<p>I'm trying to follow the Tensorflow Transformer tutorial here:</p> <p><a href="https://github.com/tensorflow/text/blob/master/docs/tutorials/transformer.ipynb" rel="nofollow noreferrer">https://github.com/tensorflow/text/blob/master/docs/tutorials/transformer.ipynb</a></p> <p>In the tutorial, they reproduce the image of the Transformer model from the original &quot;Attention is All You Need&quot; paper. In the image the final layers of the Transformer model are a Dense layer followed by Softmax Activation. However in the code I only see something like this:</p> <p><code>self.final_layer = tf.keras.layers.Dense(target_vocab_size)</code></p> <p>where the Dense layer is defined. But I cannot find the Softmax Activation applied anywhere in the tutorial.</p> <p>What am I missing? Thanks in advance for your assistance.</p>
<p>Looking at the notebook more carefully, I see that the loss function is calculated as:</p> <pre><code>loss_object = tf.keras.losses.SparseCategoricalCrossentropy( from_logits=True, reduction='none') </code></pre> <p>As explained in the link below, setting <em>from_logits</em> to <em>True</em> ensures that the Softmax is applied during the loss calculation.</p> <p><a href="https://datascience.stackexchange.com/questions/73093/what-does-from-logits-true-do-in-sparsecategoricalcrossentropy-loss-function">https://datascience.stackexchange.com/questions/73093/what-does-from-logits-true-do-in-sparsecategoricalcrossentropy-loss-function</a></p> <p>So the Softmax activation does not need to be applied within the Dense layer of the Transformer model.</p>
tensorflow|google-colaboratory|transformer-model
0
1,012
70,041,590
What does the ^ symbol mean when passed into the .where() function?
<p>Could someone please explain what the <code>^^^</code> symbol does when passed into the <code>np.where</code> function?</p> <p><img src="https://i.stack.imgur.com/tOEje.jpg" alt="Video explaining np.where usage with Pandas dataframe" /></p> <p>Does it just represent the number of arguments that should be passed in?</p>
<blockquote> <p>Does it just represents the number of arguments that should be passed in?</p> </blockquote> <p>Yes.</p> <p>It's being used in this case to point to the first call to <code>np.where</code>.</p> <p>You can see for yourself:</p> <pre><code>$ python3 Python 3.9.7 (default, Oct 22 2021, 13:39:39) &gt;&gt;&gt; np.where(^^^) File &quot;&lt;stdin&gt;&quot;, line 1 np.where(^^^) ^ SyntaxError: invalid syntax </code></pre>
python|pandas|numpy
1
1,013
56,082,038
How to update a Pandas Panel without duplicates
<p>Currently i'm working on a Livetiming-Software for a motorsport-application. Therefore i have to crawl a Livetiming-Webpage and copy the Data to a big Dataframe. This Dataframe is the source of several diagramms i want to make. To keep my Dataframe up to date, i have to crawl the webpage very often. </p> <p>I can download the Data and save them as a Panda.Dataframe. But my Problem is step from the downloaded DataFrame to the Big Dataframe, that includes all the Data.</p> <pre><code>import pandas as pd import numpy as np df1= pd.DataFrame({'Pos':[1,2,3,4,5,6],'CLS':['V5','V5','V5','V4','V4','V4'], 'Nr.':['13','700','30','55','24','985'], 'Zeit':['1:30,000','1:45,000','1:50,000','1:25,333','1:13,366','1:17,000'], 'Laps':['1','1','1','1','1','1']}) df2= pd.DataFrame({'Pos':[1,2,3,4,5,6],'CLS':['V5','V5','V5','V4','V4','V4'], 'Nr.':['13','700','30','55','24','985'], 'Zeit':[np.nan,np.nan,np.nan,np.nan,np.nan,np.nan,], 'Laps':['2','2','2','2','2','2']}) df3= pd.DataFrame({'Pos':[1,2,3,4,5,6],'CLS':['V5','V5','V5','V4','V4','V4'], 'Nr.':['13','700','30','55','24','985'], 'Zeit':['1:31,000','1:41,000','1:51,000','1:21,333','1:11,366','1:11,000'], 'Laps':['2','2','2','2','2','2']}) df1.set_index(['CLS','Nr.','Laps'],inplace=True) df2.set_index(['CLS','Nr.','Laps'],inplace=True) df3.set_index(['CLS','Nr.','Laps'],inplace=True) </code></pre> <p>df1 shows a Dataframe from previous laps. df2 shows a Dataframe in the second lap. The Lap is not completed, so i have a nan. df3 shows a Dataframe after the second lap is completed.</p> <p>My target is to have just one row for each Lap per Car per Class. Either i have the problem, that i have duplicates with incomplete Laps or all date get overwritten.</p> <p>I hope that someone can help me with this problem.</p> <p>Thank you so far.</p> <p>MrCrunsh</p>
<p>If I understand your problem correctly, your issue is that you have overlapping data for the second lap: information while the lap is still in progress and information after it's over. If you want to put all the information for a given lap in one row, I'd suggest use multi-index columns or changing the column names to reflect the difference between measurements during and after laps.</p> <pre><code>df = pd.concat([df1, df3]) df = pd.concat([df, df2], axis=1, keys=['after', 'during']) </code></pre> <p>The result will look like this:</p> <pre><code> after during Pos Zeit Pos Zeit CLS Nr. Laps V4 24 1 5 1:13,366 NaN NaN 2 5 1:11,366 5.0 NaN 55 1 4 1:25,333 NaN NaN 2 4 1:21,333 4.0 NaN 985 1 6 1:17,000 NaN NaN 2 6 1:11,000 6.0 NaN V5 13 1 1 1:30,000 NaN NaN 2 1 1:31,000 1.0 NaN 30 1 3 1:50,000 NaN NaN 2 3 1:51,000 3.0 NaN 700 1 2 1:45,000 NaN NaN 2 2 1:41,000 2.0 NaN </code></pre>
python|pandas|dataframe
0
1,014
56,099,598
Binary-vectorize pandas DataFrame column
<p>In a fictional patients dataset one might encounter the following table:</p> <pre class="lang-py prettyprint-override"><code>pd.DataFrame({ "Patients": ["Luke", "Nigel", "Sarah"], "Disease": ["Cooties", "Dragon Pox", "Greycale &amp; Cooties"] }) </code></pre> <p>Which renders the following dataset:</p> <p><a href="https://i.stack.imgur.com/k8kay.png" rel="noreferrer"><img src="https://i.stack.imgur.com/k8kay.png" alt="Fictional diseases"></a></p> <p>Now, assuming that the rows with multiple illnesses use the same pattern (separation with a character, in this context a <code>&amp;</code>) and that there exists a complete list <code>diseases</code> of the illnesses, I've yet to find a simple solution to applying to these situations <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.get_dummies.html" rel="noreferrer"><code>pandas.get_dummies</code></a> one-hot encoder to obtain a binary vector for each patient.</p> <p>How can I obtain, in the simplest possible manner, the following binary vectorization from the initial DataFrame?</p> <pre class="lang-py prettyprint-override"><code>pd.DataFrame({ "Patients": ["Luke", "Nigel", "Sarah"], "Cooties":[1, 0, 1], "Dragon Pox":[0, 1, 0], "Greyscale":[0, 0, 1] }) </code></pre> <p><a href="https://i.stack.imgur.com/5ZRQJ.png" rel="noreferrer"><img src="https://i.stack.imgur.com/5ZRQJ.png" alt="Desired result"></a></p>
<p>You can use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.get_dummies.html#pandas-series-str-get-dummies" rel="nofollow noreferrer">Series.str.get_dummies</a> with right separator,</p> <pre><code>df.set_index('Patients')['Disease'].str.get_dummies(' &amp; ').reset_index() Patients Cooties Dragon Pox Greycale 0 Luke 1 0 0 1 Nigel 0 1 0 2 Sarah 1 0 1 </code></pre>
python|pandas|dataframe
6
1,015
56,027,645
Merging column values in a data frame in Pandas / Python
<p>I'm trying to merge the values of columns (Columns B and C) within the same dataframe. B and C sometimes have the same values. Some values in B are present in C while some values in C are present in B. The final results would show one column that is the combination of the two columns.</p> <h2>Initial data:</h2> <pre><code> A B C D Apple Canada '' RED Bananas '' Germany BLUE Carrot US US GREEN Dorito '' '' INDIGO </code></pre> <h2>Expected Data:</h2> <pre><code> A B C Apple Canada RED Bananas Germany BLUE Carrot US GREEN Dorito '' INDIGO </code></pre>
<p>IIUC</p> <pre><code>df['B']=df[['B','C']].replace("''",np.nan).bfill(1).loc[:,'B'] df=df.drop('C',1).rename(columns={'D':'C'}) df Out[102]: A B C 0 Apple Canada RED 1 Bananas Germany BLUE 2 Carrot US GREEN 3 Dorito NaN INDIGO </code></pre>
python|pandas
2
1,016
55,635,300
How to add column data as rows in an efficient manner?
<p>I have a dataframe, df, that looks something like this</p> <pre><code> col1 col2 A 2 2 B 4 1 C 0 0 D 1 1 E 2 2 </code></pre> <p>and would like to add two columns, so that for each row i, the new column col3 contains the value of df.loc[i-1,col1] and col4 contains the value of df.loc[i-2,col1].</p> <pre><code> col1 col2 col3 col4 A 2 2 Nan Nan B 4 1 2 Nan C 0 0 4 2 D 1 1 0 4 E 2 2 1 0 </code></pre> <p>As of now, I loop through the dataframe and "manually" add each value. Is there a smarter way to solve this problem than my approach?</p> <p>My brute-force solution (neglecting first 2 rows):</p> <pre><code>for i in range(2,df.shape[0]): for j in range(2): df.iloc[i,j+2] = df.iloc[i-1-j, j] </code></pre>
<p>with a <code>map</code> and <code>pd.concat</code></p> <pre><code>df.join( pd.concat( dict(enumerate(map(df.col1.shift, range(1, 3)), 3)), axis=1 ).add_prefix('col') ) col1 col2 col3 col4 A 2 2 NaN NaN B 4 1 2.0 NaN C 0 0 4.0 2.0 D 1 1 0.0 4.0 E 2 2 1.0 0.0 </code></pre>
python|python-3.x|pandas
2
1,017
55,617,581
Barplot comparing two columns
<p>I would like to draw a barplot graph that would compare the evolution of 2 variables of revenues on a monthly time-axis (12 months of invoices).</p> <p>I wanted to use sns.barplot, but can't use "hue" (cause the 2 variables aren't subcategories?). Is there another way, as simple as with hue? Can I "create" a hue?</p> <p>Here is a small sample of my data:</p> <p>(I did transform my table into a pivot table)</p> <p><code>[In]</code> </p> <pre><code>data_pivot['Revenue-Small-Seller-in'] = data_pivot["Small-Seller"] + data_pivot["Best-Seller"] + data_pivot["Medium-Seller"] data_pivot['Revenue-Not-Small-Seller-in'] = data_pivot["Best-Seller"] + data_pivot["Medium-Seller"] data_pivot </code></pre> <p><code>[Out]</code></p> <pre><code>InvoiceNo Month Year Revenue-Small-Seller-in Revenue-Not-Small-Seller-in 536365 12 2010 139.12 139.12 536366 12 2010 22.20 11.10 536367 12 2010 278.73 246.93 </code></pre> <p>(<a href="https://i.stack.imgur.com/JZP0S.png" rel="nofollow noreferrer">sorry for the ugly presentation of my data, see the picture to see the complete table (as there are multiple columns))</a></p>
<p>You can do:</p> <pre><code>render_df = data_pivot[data_pivot.columns[-2:]] fig, ax = plt.subplots(1,1) render_df.plot(kind='bar', ax=ax) ax.legend() plt.show() </code></pre> <p>Output:</p> <p><a href="https://i.stack.imgur.com/DzTkA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DzTkA.png" alt="enter image description here"></a></p> <p>Or <code>sns</code> style like you requested</p> <pre><code>render_df = data_pivot[data_pivot.columns[-2:]].stack().reset_index() sns.barplot('level_0', 0, hue='level_1', render_df) </code></pre> <p>here <code>render_df</code> after <code>stack()</code> is:</p> <pre><code>+---+---------+-----------------------------+--------+ | | level_0 | level_1 | 0 | +---+---------+-----------------------------+--------+ | 0 | 0 | Revenue-Small-Seller-in | 139.12 | | 1 | 0 | Revenue-Not-Small-Seller-in | 139.12 | | 2 | 1 | Revenue-Small-Seller-in | 22.20 | | 3 | 1 | Revenue-Not-Small-Seller-in | 11.10 | | 4 | 2 | Revenue-Small-Seller-in | 278.73 | | 5 | 2 | Revenue-Not-Small-Seller-in | 246.93 | +---+---------+-----------------------------+--------+ </code></pre> <p>and output:</p> <p><a href="https://i.stack.imgur.com/9nJ62.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9nJ62.png" alt="enter image description here"></a></p>
python|pandas|bar-chart|seaborn
2
1,018
55,977,126
Updating older Keras models with deprecation warnings
<p>I have an older Keras model file that works perfectly. When I try to load it in <code>tensorflow==1.13.1</code> however, I'm given a host of warnings:</p> <pre class="lang-py prettyprint-override"><code>import tensorflow as tf model = tf.keras.models.load_model("best.h5") </code></pre> <blockquote> <p>WARNING:tensorflow:From .pyenv/versions/3.6.0/lib/python3.6/site-packages/tensorflow/python/ops/resource_variable_ops.py:435: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version. Instructions for updating: Colocations handled automatically by placer.</p> <p>WARNING:tensorflow:From .pyenv/versions/3.6.0/lib/python3.6/site-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version. Instructions for updating: Use tf.cast instead.</p> </blockquote> <p>Assuming I don't want to retrain the model, how can I update it to not give these errors? If needed, the original network (a simple 1D covnet) is below:</p> <pre><code>M = keras.Sequential() M.add(Embedding(n_vocab, n_window, input_length=n_window)) M.add(Conv1D(128, 5, activation="relu")) M.add(MaxPooling1D(5)) M.add(Conv1D(128, 5, activation="relu")) M.add(MaxPooling1D(5)) M.add(Flatten()) M.add(Dense(128, activation="relu")) M.add(Dense(n_classes, activation="softmax")) </code></pre>
<p>These aren't errors; they relate to the internal Keras implementation in TensorFlow. There is not much you can do other than wait for <code>tf.keras</code> to update its implementation so that it no longer uses deprecated functions.</p>
python|tensorflow|keras
0
1,019
64,650,192
Using multiple filters on multiple columns of a numpy array - more efficient way?
<p>I have the following 2 arrays:</p> <pre><code>arr = np.array([[1, 2, 3, 4], [5, 6, 7, 8], [7, 5, 6, 3], [2, 4, 8, 9]]) ids = np.array([6, 5, 7, 8]) </code></pre> <p>Each row in the array <code>arr</code> describes a 4-digit id; there are no redundant ids - neither in their values nor their combination. So if <code>[1, 2, 3, 4]</code> exists, no other combination of these 4 digits can exist. This will be important in a sec.</p> <p>The array <code>ids</code> contains a 4-digit id, however the order might not be correct. Now I need to go through each row of <code>arr</code> and check whether this id exists. In this example <code>ids</code> matches the 2nd row from the top of <code>arr</code>, i.e. <code>arr[1,:]</code>.</p> <p>My current solution creates a filter for each column to check if the values of <code>ids</code> exist in any of the 4 columns. After that I use these filters on <code>arr</code>. This seems way too complicated.</p> <p>So I pretty much do this:</p> <pre><code> filter_1 = np.in1d(arr[:, 0], ids) filter_2 = np.in1d(arr[:, 1], ids) filter_3 = np.in1d(arr[:, 2], ids) filter_4 = np.in1d(arr[:, 3], ids) result = arr[filter_1 &amp; filter_2 &amp; filter_3 &amp; filter_4] </code></pre> <p>Does anyone know a simpler solution? Maybe using generators?</p>
<p>Use <a href="https://numpy.org/doc/stable/reference/generated/numpy.isin.html" rel="nofollow noreferrer"><code>np.isin</code></a> all across <code>arr</code> and <code>all</code>-reduce to get <code>result</code> -</p> <pre><code>In [15]: arr[np.isin(arr, ids).all(1)] Out[15]: array([[5, 6, 7, 8]]) </code></pre>
python-3.x|numpy|filter|generator
1
1,020
64,915,035
Aggregate based on value of a different column
<p>I would like to aggregate the sum of <code>source_bytes</code> if <code>destination_port</code> is <code>80</code> into a separate column called <code>source_bytes_port_80</code></p> <p>My dataframe</p> <pre><code>date | source_ip | destination_ip| source_bytes | destination_port 2020-11-13 13:57:51 | 192.168.1.1 | 10.0.0.1 | 5 | 80 2020-11-13 13:57:51 | 192.168.1.2 | 10.0.0.1 | 1 | 2200 2020-11-13 13:57:52 | 10.0.0.1 | 192.168.1.1 | 2 | 80 2020-11-13 13:59:53 | 192.168.1.1 | 192.168.1.2 | 3 | 443 2020-11-13 13:59:54 | 192.168.1.1 | 192.168.1.2 | 3 | 1100 </code></pre> <p>I was thinking of creating a separate function and then call it with <code>.agg({'source_bytes':[sum_of_port]})</code> but I am not sure how I can check the condition inside the function.</p>
<pre><code>df.groupby(&quot;destination_port&quot;)[&quot;source_bytes&quot;].sum() </code></pre> <p>This will give you the sum for each destination_port. Then add it back into the dataframe as you would like it.</p>
pandas
0
1,021
39,964,254
Python Updating Global variables
<p>Could anyone tell me what I am doing wrong in my code? How come I cannot update my global variable? To my understanding, if it is a global variable I can modify it anywhere.</p> <p>If numpy is creating a new array (when I use np.delete), what would be the best way to delete an element in a numpy array?</p> <pre><code>import numpy as np global a a = np.array(['a','b','c','D']) def hello(): a = np.delete(a, 1) print a hello() </code></pre>
<p>If you want to use a global variable in a function, you have to say it's global IN THAT FUNCTION:</p> <pre><code>import numpy as np a = np.array(['a','b','c','D']) def hello(): global a a = np.delete(a, 1) print a hello() </code></pre> <p>If you wouldn't use the line <code>global a</code> in your function, a new, local variable a would be created. So the keyword <code>global</code> isn't used to create global variable, but to avoid creating a local one that 'hides' an already existing global variable.</p>
python|numpy
8
1,022
40,196,995
Create a copy and not a reference of a NumPy array
<p>I'm trying to make a Python program with NumPy, but I ran into a problem:</p> <pre><code>width, height, pngData, metaData = png.Reader(file).asDirect() planeCount = metaData['planes'] print('Bildgroesse: ' + str(width) + 'x' + str(height) + ' Pixel') image_2d = np.vstack(list(map(np.uint8, pngData))) imageOriginal_3d = np.reshape(image_2d, (width, height, planeCount)) imageEdited_3d = imageOriginal_3d </code></pre> <p>This is my code to read in a PNG image. Now I want to edit <code>imageEdited_3d</code> but NOT <code>imageOriginal_3d</code>, like this:</p> <pre><code>imageEdited_3d[x,y,0] = 255 </code></pre> <p>But then the <code>imageOriginal_3d</code> variable has the same values as the <code>imageEdited_3d</code> one...</p> <p>Does anyone know how I can fix this, so that it doesn't only create a reference but makes a real copy? :/</p>
<p>You need to create the copy of the object. You may do it using <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.copy.html" rel="noreferrer"><code>numpy.copy()</code></a> since you are having <code>numpy</code> object. Hence, your initialisation should be like:</p> <pre><code>imageEdited_3d = imageOriginal_3d.copy() </code></pre> <p>Also there is <a href="https://docs.python.org/2/library/copy.html" rel="noreferrer"><code>copy</code></a> module for creating the <em>deep copy</em> OR, <em>shallow copy</em>. This works independent of object type. For example, your code using <code>copy</code> should be as: </p> <pre><code>from copy import copy, deepcopy # Creates shallow copy of object imageEdited_3d = copy(imageOriginal_3d) # Creates deep copy of object imageEdited_3d = deepcopy(imageOriginal_3d) </code></pre> <p><em>Description:</em></p> <blockquote> <p>A <strong>shallow copy</strong> constructs a new compound object and then (to the extent possible) inserts references into it to the objects found in the original.</p> <p>A <strong>deep copy</strong> constructs a new compound object and then, recursively, inserts copies into it of the objects found in the original.</p> </blockquote>
python|numpy|copy
12
1,023
69,628,951
Word2Vec Tensorflow tutorial weird output
<p>I'm trying out the Word2Vec tutorial from TensorFlow (see here: <a href="https://www.tensorflow.org/tutorials/text/word2vec" rel="nofollow noreferrer">https://www.tensorflow.org/tutorials/text/word2vec</a>)</p> <p>While all seems to work fine, the output is somewhat unexpected to me, especially the small cluster in the PCA. The 'closest' words in the embedding dimension also don't make much sense, especially compared to other examples.</p> <p>Am I doing something (trivially) wrong? Or is this expected?</p> <p>For completeness, I run this in the nvidia-docker image, but also found similar results running CPU only.</p> <p>Here is the projected embedding showing the cluster. <a href="https://i.stack.imgur.com/SnTCH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SnTCH.png" alt="enter image description here" /></a></p>
<p>There can be various reasons.</p> <p>One reason is that this is due to the so-called <a href="https://arxiv.org/pdf/1412.6568.pdf" rel="nofollow noreferrer">hubness problem</a> of embedding spaces, which is an artifact of the high-dimensional space. Some words end up close to a large part of the space and act as sort of hubs in the nearest neighbor search, so through these words, you can get quickly from everywhere to everywhere.</p> <p>Another reason might be that the model is just undertrained for this particular word. Word embeddings are typically trained on very large datasets, such that every word appears in sufficiently many contexts. If a word does not appear frequently enough or in too ambiguous contexts, then it also ends up to be similar to basically everything.</p>
tensorflow|pca|word2vec|embedding
1
1,024
69,655,075
cx_Oracle.NotSupportedError: Python value of type NAType not supported
<p>I am trying to insert data into an Oracle table, and in the process I am getting this error:</p> <pre><code>cx_Oracle.NotSupportedError: Python value of type NAType not supported </code></pre> <p>My script:</p> <pre><code>data = data_df.values.tolist() sql = &quot;insert into %s(%s) values(%s)&quot; %(table_name, cols, values) cursor.executemany(sql, data) </code></pre> <p>I have tried the solution given in the cx_Oracle documentation (<a href="https://cx-oracle.readthedocs.io/en/7.2.3/user_guide/sql_execution.html#inserting-nulls" rel="nofollow noreferrer">https://cx-oracle.readthedocs.io/en/7.2.3/user_guide/sql_execution.html#inserting-nulls</a>), but it returned this error:</p> <pre><code>cx_Oracle.DatabaseError: ORA-04043: object SDO_GEOMETRY does not exist </code></pre> <p>Because it's a production environment, I can't change anything in Oracle's settings. Is there any way I can insert null values into the Oracle table?</p>
<p>Here is a minimal example reproducing your problem</p> <pre><code>d = {'id': [1, pd.NA], 'col': [ pd.NA,'x' ]} df = pd.DataFrame(d) print(df.values.tolist()) cur.executemany(&quot;insert into tab(id, col) values (:1, :2)&quot;, df.values.tolist()) [[1, &lt;NA&gt;], [&lt;NA&gt;, 'x']] ... cx_Oracle.NotSupportedError: Python value of type NAType not supported. </code></pre> <p>Note that the table definition is as follows</p> <pre><code> create table tab(id int, col varchar2(10)); </code></pre> <p>The error appears while insering the NA in a <code>VARCHAR2</code> column - would it be a <em>number</em> column, you'll observe <code>TypeError: expecting number</code></p> <p>The solution is to remove the <code>NA</code> and to replace them with <code>None</code></p> <pre><code>df.loc[0,'col'] = None df.loc[1,'id'] = None </code></pre> <p>The data now looks as follows and the <code>insert</code> works fine</p> <pre><code>[[1, None], [None, 'x']] </code></pre>
python|pandas|oracle
2
1,025
69,549,094
No CUDA GPUs are available
<p>i get this error from the method during the model training process. i am using the google colab to run the code. the google colab dont have any GPU. Is their any other way i can make the code run without requiring cuda cpu.</p> <p>How can i fix this error ?</p> <pre><code>def train_model(model, train_loader, val_loader, epoch, loss_function, optimizer, path, early_stop): # GPU #device = torch.device(&quot;cuda:0&quot; if torch.cuda.is_available() else &quot;cpu&quot;) device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') # device = torch.device(&quot;cpu&quot;) device = torch.device(&quot;cuda&quot;) model = model.to(device) patience, eval_loss = 0, 0 # train for i in range(epoch): total_loss, count = 0, 0 y_pred = list() y_true = list() for idx, (x, y) in tqdm(enumerate(train_loader), total=len(train_loader)): x, y = x.to(device), y.to(device) u, m = model(x) predict = torch.sigmoid(torch.sum(u*m, 1)) y_pred.extend(predict.cpu().detach().numpy()) y_true.extend(y.cpu().detach().numpy()) loss = loss_function(predict, y.float()) optimizer.zero_grad() loss.backward() optimizer.step() total_loss += float(loss) count += 1 train_auc = roc_auc_score(np.array(y_true), np.array(y_pred)) torch.save(model, path.format(i+1)) print(&quot;Epoch %d train loss is %.3f and train auc is %.3f&quot; % (i+1, total_loss / count, train_auc)) # verify total_eval_loss = 0 model.eval() count_eval = 0 val_y_pred = list() val_true = list() for idx, (x, y) in tqdm(enumerate(val_loader), total=len(val_loader)): x, y = x.to(device), y.to(device) u, m = model(x) predict = torch.sigmoid(torch.sum(u*m, 1)) val_y_pred.extend(predict.cpu().detach().numpy()) val_true.extend(y.cpu().detach().numpy()) loss = loss_function(predict, y.float()) total_eval_loss += float(loss) count_eval += 1 val_auc = roc_auc_score(np.array(y_true), np.array(y_pred)) print(&quot;Epoch %d val loss is %.3fand train auc is %.3f&quot; % (i+1, total_eval_loss / count_eval, val_auc)) </code></pre>
<p>Just remove the line where you create your <code>torch.device()</code> and remove all the <code>.to(device)</code> calls where you use it. Then you don't need to write <code>.cpu().detach()</code> either; you can simply write <code>predict.numpy()</code>. When you write <code>device = torch.device(&quot;cuda&quot;)</code> you are creating a GPU device and then transferring your model and data onto the GPU device when it is not available. This is the reason for the error.</p>
python|deep-learning|pytorch
1
1,026
41,088,064
ConcatOp : Dimensions of inputs should match
<p>I'm developing a deep learning model with tensor flow and python:</p> <ul> <li>First, using CNN layers, get features.</li> <li>Second, reshaping the feature map, I want to use LSTM layer.</li> </ul> <p>However, a error with not-matching dimension...</p> <p>ConcatOp : Dimensions of inputs should match: <code>shape[0] = [71,48]</code> vs. <code>shape[1] = [1200,24]</code></p> <pre><code>W_conv1 = weight_variable([1,conv_size,1,12]) b_conv1 = bias_variable([12]) h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1)+ b_conv1) h_pool1 = max_pool_1xn(h_conv1) W_conv2 = weight_variable([1,conv_size,12,24]) b_conv2 = bias_variable([24]) h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2) h_pool2 = max_pool_1xn(h_conv2) W_conv3 = weight_variable([1,conv_size,24,48]) b_conv3 = bias_variable([48]) h_conv3 = tf.nn.relu(conv2d(h_pool2, W_conv3) + b_conv3) h_pool3 = max_pool_1xn(h_conv3) print(h_pool3.get_shape()) h3_rnn_input = tf.reshape(h_pool3, [-1,x_size/8,48]) num_layers = 1 lstm_size = 24 num_steps = 4 lstm_cell = tf.nn.rnn_cell.LSTMCell(lstm_size, initializer = tf.contrib.layers.xavier_initializer(uniform = False)) cell = tf.nn.rnn_cell.MultiRNNCell([lstm_cell]*num_layers) init_state = cell.zero_state(batch_size,tf.float32) cell_outputs = [] state = init_state with tf.variable_scope("RNN") as scope: for time_step in range(num_steps): if time_step &gt; 0: scope.reuse_variables() cell_output, state = cell(h3_rnn_input[:,time_step,:],state) ***** Error In here... </code></pre>
<p>When you feed input to the RNN cell, the batch sizes of the input tensor and the state tensor should be the same.</p> <p>The error message says <code>h3_rnn_input[:,time_step,:]</code> has a shape of <code>[71,48]</code> while <code>state</code> has a shape of <code>[1200,24]</code>.</p> <p>What you need to do is make the first dimensions (batch_size) the same.</p> <p>If the number 71 is not intended, check the convolution part; the stride/padding could be the issue.</p>
python|tensorflow|deep-learning|lstm
5
1,027
41,109,480
Calculating the size of a full outer join in pandas
<h2>tl;dr</h2> <p>My issue here is that I'm stuck at calculating how many rows to anticipate on each part of a full outer merge when using Pandas DataFrames as part of a combinatorics graph.</p> <h2>Questions (repeated below).</h2> <ol> <li>The ideal solution would be to not require the merge and to query <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Panel.html" rel="nofollow noreferrer"><code>panel</code></a> objects. Given that there isn't a query method on the <code>panel</code> is there a cleaner solution which would solve this problem without hitting the memory ceiling?</li> <li>If the answer to 2 is no, how can I calculate the size of the required merge table for each combination of sets <em>without carrying out the merge</em>? This might be a sub-optimal approach but in this instance it would be acceptable for the purpose of the application.</li> <li>Is Python the right language for this or should I be looking at a more statistical language such as <code>R</code> or write it at a lower level (<code>c</code>, <code>cython</code>) - <em>Databases are out of the question.</em></li> </ol> <h2>The problem</h2> <p>Recently I re-wrote the <a href="https://github.com/mproffitt/py-upset/tree/feature/ISSUE-7-Severe-Performance-Degradation/src/pyupset" rel="nofollow noreferrer">py-upset graphing library</a> to make it more efficient in terms of time when calculating combinations across DataFrames. I'm not looking for a review of this code, it works perfectly well in most instances and I'm happy with the approach. What I am looking for now is the answer to a very specific problem; uncovered when working with large data-sets.</p> <p>The approach I took with the re-write was to formulate an in-memory merge of all provided dataframes on a full outer join as seen on lines <a href="https://github.com/mproffitt/py-upset/blob/feature/ISSUE-7-Severe-Performance-Degradation/src/pyupset/resources.py#L480" rel="nofollow noreferrer">480 - 502 of <code>pyupset.resources</code></a></p> <pre><code> for index, key in enumerate(keys): frame = self._frames[key] frame.columns = [ '{0}_{1}'.format(column, key) if column not in self._unique_keys else column for column in self._frames[key].columns ] if index == 0: self._merge = frame else: suffixes = ( '_{0}'.format(keys[index-1]), '_{0}'.format(keys[index]), ) self._merge = self._merge.merge( frame, on=self._unique_keys, how='outer', copy=False, suffixes=suffixes ) </code></pre> <p>For small to medium dataframes using joins works incredibly well. In fact recent performance tests have shown that it'll handle 5 or 6 Data-Sets containing 10,000's of lines each in a less than a minute which is more than ample for the application structure I require.</p> <p>The problem now moves from time based to memory based.</p> <p>Given datasets of potentially 100s of thousands of records, the library very quickly runs out of memory even on a large server.</p> <p>To put this in perspective, my test machine for this application is an 8-core VMWare box with 128GiB RAM running Centos7.</p> <p>Given the following dataset sizes, when adding the 5th dataframe, memory usage spirals exponentially. 
This was pretty much anticipated but underlines the heart of the problem I am facing.</p> <pre><code> Rows | Dataframe ------------------------ 13963 | dataframe_one 48346 | dataframe_two 52356 | dataframe_three 337292 | dataframe_four 49936 | dataframe_five 24542 | dataframe_six 258093 | dataframe_seven 16337 | dataframe_eight </code></pre> <p>These are not "<em>small</em>" dataframes in terms of the number of rows although the column count for each is limited to one unique key + 4 non-unique columns. The size of each column in <code>pandas</code> is</p> <pre><code>column | type | unique -------------------------- X | object | Y id | int64 | N A | float64 | N B | float64 | N C | float64 | N </code></pre> <p>This merge can cause problems as memory is eaten up. Occasionally it aborts with a MemoryError (great, I can catch and handle those), other times the kernel takes over and simply kills the application before the system becomes unstable, and occasionally, the system just hangs and becomes unresponsive / unstable until finally the kernel kills the application and frees the memory.</p> <p><em>Sample output (memory sizes approximate):</em></p> <pre><code>[INFO] Creating merge table [INFO] Merging table dataframe_one [INFO] Data index length = 13963 # approx memory &lt;500MiB [INFO] Merging table dataframe_two [INFO] Data index length = 98165 # approx memory &lt;1.8GiB [INFO] Merging table dataframe_three [INFO] Data index length = 1296665 # approx memory &lt;3.0GiB [INFO] Merging table dataframe_four [INFO] Data index length = 244776542 # approx memory ~13GiB [INFO] Merging table dataframe_five Killed # &gt; 128GiB </code></pre> <p>When the merge table has been produced, it is queried in set combinations to produce graphs similar to <a href="https://github.com/mproffitt/py-upset/blob/feature/ISSUE-7-Severe-Performance-Degradation/tests/generated/extra_additional_pickle.png" rel="nofollow noreferrer">https://github.com/mproffitt/py-upset/blob/feature/ISSUE-7-Severe-Performance-Degradation/tests/generated/extra_additional_pickle.png</a></p> <p>The approach I am trying to build for solving the memory issue is to look at the sets being offered for merge, pre-determine how much memory the merge will require, then if that combination requires too much, split it into smaller combinations, calculate each of those separately, then put the final dataframe back together (divide and conquer).</p> <p>My issue here is that I'm stuck at calculating how many rows to anticipate on each part of the merge.</p> <h2>Questions (repeated from above)</h2> <ol> <li>The ideal solution would be to not require the merge and to query <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Panel.html" rel="nofollow noreferrer"><code>panel</code></a> objects. Given that there isn't a query method on the <code>panel</code> is there a cleaner solution which would solve this problem without hitting the memory ceiling?</li> <li>If the answer to 2 is no, how can I calculate the size of the required merge table for each combination of sets <em>without carrying out the merge</em>? This might be a sub-optimal approach but in this instance it would be acceptable for the purpose of the application.</li> <li>Is Python the right language for this or should I be looking at a more statistical language such as <code>R</code> or write it at a lower level (<code>c</code>, <code>cython</code>).</li> </ol> <p>Apologies for the lengthy question. 
I'm happy to provide more information if required.</p> <p>Can anybody shed some light on the best way to approach this?</p> <p>Thank you.</p>
<h1>Question 1.</h1> <p><a href="http://dask.pydata.org/en/latest/" rel="nofollow noreferrer">Dask</a> shows a lot of promise in being able to calculate the merge table "out of memory" by using hdf5 files as a temporary store. </p> <p>By using multi-processing to create the merges, dask also offers a performance increase over <code>pandas</code>. Unfortunately this is not carried through to the <code>query</code> method so performance gains made on the merge are lost on querying.</p> <p>It is still not a completely viable solution as dask may still run out of memory on large, complex merges.</p> <h1>Question 2.</h1> <p>Pre-calculating the size of the merge is entirely possible using the following method.</p> <ol> <li>Group each dataframe by a unique key and calculate the size.</li> <li>Create a set of key names for each dataframe. </li> <li>Create an intersection of sets from 2.</li> <li>Create a set difference for set 1 and for set 2</li> <li>To accommodate for <code>np.nan</code> stored in the unique key, select all NAN values. If one frame contains nan and the other doesn't, write the other as 1.</li> <li>for sets in the intersection, multiply the count from each <code>groupby('...').size()</code></li> <li>Add counts from the set differences</li> <li>Add a count of <code>np.nan</code> values</li> </ol> <p>In python this could be written as:</p> <pre><code>def merge_size(left_frame, right_frame, group_by): left_groups = left_frame.groupby(group_by).size() right_groups = right_frame.groupby(group_by).size() left_keys = set(left_groups.index) right_keys = set(right_groups.index) intersection = right_keys &amp; left_keys left_sub_right = left_keys - intersection right_sub_left = right_keys - intersection left_nan = len(left_frame.query('{0} != {0}'.format(group_by))) right_nan = len(right_frame.query('{0} != {0}'.format(group_by))) left_nan = 1 if left_nan == 0 and right_nan != 0 else left_nan right_nan = 1 if right_nan == 0 and left_nan != 0 else right_nan sizes = [(left_groups[group_name] * right_groups[group_name]) for group_name in intersection] sizes += [left_groups[group_name] for group_name in left_sub_right] sizes += [right_groups[group_name] for group_name in right_sub_left] sizes += [left_nan * right_nan] return sum(sizes) </code></pre> <h1>Question 3</h1> <p>This method is fairly heavy on calculating and would be better written in <code>Cython</code> for performance gains.</p>
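<p>For reference, here is a quick sanity check of the <code>merge_size</code> helper above on a pair of made-up toy frames (the frames and the <code>id</code> key are invented purely for illustration); the predicted row count agrees with the length of the actual outer merge:</p> <pre><code>import pandas as pd

# two small frames sharing a non-unique key column called 'id'
left = pd.DataFrame({'id': [1, 1, 2, 4], 'A': range(4)})
right = pd.DataFrame({'id': [1, 2, 2, 3], 'B': range(4)})

# predicted number of rows of the outer merge, without building it
predicted = merge_size(left, right, 'id')

# the real merge, built here only to verify the prediction
actual = len(left.merge(right, on='id', how='outer'))

print(predicted, actual)  # both report 6 for this example
</code></pre>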
python-3.x|pandas|merge|combinatorics
1
1,028
41,151,435
Pandas idiomatic way to custom fillna
<p>I have time series data in the following format, where a value indicates an accumulated amount since the past recording. What I want to do is "spread" that accumulated amount over the past periods containing NaN so that this input:</p> <pre><code>s = pd.Series([0, 0, np.nan, np.nan, 75, np.nan, np.nan, np.nan, np.nan, 50], pd.date_range(start="Jan 1 2016", end="Jan 10 2016", freq='D')) 2016-01-01 0.0 2016-01-02 0.0 2016-01-03 NaN 2016-01-04 NaN 2016-01-05 75.0 2016-01-06 NaN 2016-01-07 NaN 2016-01-08 NaN 2016-01-09 NaN 2016-01-10 50.0 </code></pre> <p>Becomes this output:</p> <pre><code>2016-01-01 0.0 2016-01-02 0.0 2016-01-03 25.0 2016-01-04 25.0 2016-01-05 25.0 2016-01-06 10.0 2016-01-07 10.0 2016-01-08 10.0 2016-01-09 10.0 2016-01-10 10.0 </code></pre> <p>Is there an idiomatic Pandas way to do this rather than just do a for loop over the data? I've tried a variety of things involving <code>fillna</code>, <code>dropna</code>, <code>isnull</code>, doing <code>shift</code> to check the next value, etc but I can't see how to put the pieces together.</p>
<p>This might work, for each chunk of missing values, create a group variable with <code>cumsum</code>(from the end of the series) and then perform a grouped average operation on each chunk:</p> <pre><code>s.groupby(s.notnull()[::-1].cumsum()[::-1]).transform(lambda g: g[-1]/g.size) #2016-01-01 0.0 #2016-01-02 0.0 #2016-01-03 25.0 #2016-01-04 25.0 #2016-01-05 25.0 #2016-01-06 10.0 #2016-01-07 10.0 #2016-01-08 10.0 #2016-01-09 10.0 #2016-01-10 10.0 #Freq: D, dtype: float64 </code></pre> <hr> <p>Or another option:</p> <pre><code>s.groupby(s.shift().notnull().cumsum()).transform(lambda g: g[-1]/g.size) #2016-01-01 0.0 #2016-01-02 0.0 #2016-01-03 25.0 #2016-01-04 25.0 #2016-01-05 25.0 #2016-01-06 10.0 #2016-01-07 10.0 #2016-01-08 10.0 #2016-01-09 10.0 #2016-01-10 10.0 #Freq: D, dtype: float64 </code></pre>
python|pandas
5
1,029
54,240,939
euclidean distance calculation using Python and Dask
<p>I'm attempting to identify elements in the euclidean distance matrix that fall under a certain threshold. I then take the positional arguments for this search and use them to compare elements in a second array (for the sake of demonstration this array is the first eigenvector of PCA, but the sort is the most relevant part for my question). The application needs to be applicable for an unknown number of observations, but should run effectively on several million. </p> <pre><code>import numpy as np
from scipy.spatial.distance import cdist

threshold = 10

data = np.random.uniform((1, 2, 3), 5000)

searchValues = np.where(cdist(data, data) &lt; threshold)
</code></pre> <p>My problem is twofold. </p> <p>Firstly the euclidean distance matrix quickly becomes too large for simply applying scipy.spatial.distance.cdist(). To solve this issue I apply the cdist function in batches over the dataset and implement the search iteratively.</p> <pre><code>cdist(data, data)
Traceback (most recent call last):
  File "C:\Users\tl928yx\AppData\Local\Continuum\anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 2862, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "&lt;ipython-input-10-fb93ae543712&gt;", line 1, in &lt;module&gt;
    cdist(data, data)
  File "C:\Users\tl928yx\AppData\Local\Continuum\anaconda3\lib\site-packages\scipy\spatial\distance.py", line 2142, in cdist
    dm = np.zeros((mA, mB), dtype=np.double)
MemoryError
</code></pre> <p>The second problem is a runtime issue that results from constructing the distance matrix iteratively. When I institute my iterative approach the runtime increases exponentially. This isn't unexpected due to the nature of the iterative approach.</p> <pre><code>import numpy as np
import dask.array as da
from scipy.spatial.distance import cdist
import itertools
import timeit

threshold = 10

data = np.random.uniform(1, 100, (200000,40)) #Build random data
data = da.asarray(data)

it = round(data.shape[0]/10000)
dataArrays = [data[i*10000:(i+1)*10000] for i in range(0, it)]

comparisons = itertools.combinations(dataArrays, 2)

start = timeit.default_timer()
searchvalues = []
for comparison in comparisons:
    searchvalues.append(np.where(cdist(comparison[0], comparison[1]) &lt; threshold))
time = timeit.default_timer() - start
print(time)
</code></pre> <p>Neither of these issues is unexpected due to the nature of the problem. To try and offset both problems I've tried using dask to implement both a large data framework in python, and insert parallelization in the batch process. However, this hasn't resulted in a significant improvement in the time calculation, and I have a pretty strict memory limitation with this iterative method in dask (requiring taking in batches of 1000 obs at a time).</p> <pre><code>from dask.diagnostics import ProgressBar
import dask.delayed
import dask.bag

@dask.delayed
def eucDist(comparison):
    return da.asarray(cdist(comparison[0], comparison[1]))

@dask.delayed
def findValues(euclideanMatrix):
    return np.where(euclideanMatrix &lt; threshold)

start = timeit.default_timer()
searchvalues = []
test = []
for comparison in comparisons:
    comp = dask.delayed(eucDist)(comparison)
    test.append(comp)

look = []
with ProgressBar():
    for element in test:
        look.append(dask.delayed(findValues)(element).compute())
</code></pre> <p>I'm hoping that I can parallelize the comparisons to increase my speed, but I'm not sure how to implement that in python. Any help with that, or any recommendations for how I can improve the initial comparison code, would be appreciated.</p>
<p>You can calculate the Euclidean distance in Dask by using <a href="https://dask-distance.readthedocs.io/en/latest/dask_distance.html#dask_distance.euclidean" rel="nofollow noreferrer"><code>dask_distance.euclidean(x,y)</code></a>.</p>
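<p>A minimal sketch of what that call might look like (the shapes and chunk sizes below are made up, and this assumes the <code>dask_distance</code> package is installed; it is modelled on <code>scipy.spatial.distance</code>, so for the full pairwise matrix in your batched code you would look for its <code>cdist</code>-style equivalent):</p> <pre><code>import dask.array as da
import dask_distance

# two 1-D observation vectors backed by chunked dask arrays
u = da.random.uniform(0, 1, (1000000,), chunks=100000)
v = da.random.uniform(0, 1, (1000000,), chunks=100000)

# build the Euclidean distance lazily, then evaluate it out of core
d = dask_distance.euclidean(u, v)
print(d.compute())
</code></pre>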
python|numpy|dask|euclidean-distance|dask-delayed
1
1,030
52,540,037
Create Image using Matplotlib imshow meshgrid and custom colors
<p>I am trying to create an image where the x axis is the width, and y axis is the height of the image. And where each point can be given a color based on a RBG mapping. From looking at imshow() from Matplotlib I guess I need to create a meshgrid on the form (NxMx3) where 3 is a tuple or something similar with the rbg colors. </p> <p>But so far I have not managed to understand how to do that. Lets say I have this example:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt from matplotlib.colors import LinearSegmentedColormap x_min = 1 x_max = 5 y_min = 1 y_max = 5 Nx = 5 #number of steps for x axis Ny = 5 #number of steps for y axis x = np.linspace(x_min, x_max, Nx) y = np.linspace(y_min, y_max, Ny) #Can then create a meshgrid using this to get the x and y axis system xx, yy = np.meshgrid(x, y) #imagine I have some funcion that does someting based on the x and y values def somefunc(x_value, y_value): #do something and return rbg based on that return x_value + y_value res = somefunc(xx, yy) cmap = LinearSegmentedColormap.from_list('mycmap', ['white', 'blue', 'black']) plt.figure(dpi=100) plt.imshow(res, cmap=cmap, interpolation='bilinear') plt.show() </code></pre> <p>And this creates a plot, but what would I have to do if my goal was to give spesific rbg values based on x and y values inside somefunc and make the resulting numpy array into a N x M x 3 array</p> <p>I tried to make the somefunc function return a tuple of rbg values to use (r, b g) but that does not seem to work</p>
<p>It will of course completely depend on what you want to do with the values you supply to the function. So let's assume you just want to put the x values as the red channel and the y values as the blue channel, this could look like</p> <pre><code>def somefunc(x_value, y_value): return np.dstack((x_value/5., np.zeros_like(x_value), y_value/5.)) </code></pre> <p>Complete example:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt x_min = 1 x_max = 5 y_min = 1 y_max = 5 Nx = 5 #number of steps for x axis Ny = 5 #number of steps for y axis x = np.linspace(x_min, x_max, Nx) y = np.linspace(y_min, y_max, Ny) #Can then create a meshgrid using this to get the x and y axis system xx, yy = np.meshgrid(x, y) #imagine I have some funcion that does someting based on the x and y values def somefunc(x_value, y_value): return np.dstack((x_value/5., np.zeros_like(x_value), y_value/5.)) res = somefunc(xx, yy) plt.figure(dpi=100) plt.imshow(res) plt.show() </code></pre> <p><a href="https://i.stack.imgur.com/QtGCX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QtGCX.png" alt="enter image description here"></a></p> <p>If you already have a (more complicated) function that returns an RGB tuple you may loop over the grid to fill an empty array with the values of the function.</p> <pre><code>#If you already have some function that returns an RGB tuple def somefunc(x_value, y_value): if x_value &gt; 2 and y_value &lt; 3: return np.array(((y_value+1)/4., (y_value+2)/5., 0.43)) elif x_value &lt;=2: return np.array((y_value/5., (x_value+3)/5., 0.0)) else: return np.array((x_value/5., (y_value+5)/10., 0.89)) # you may loop over the grid to fill a new array with those values res = np.zeros((xx.shape[0],xx.shape[1],3)) for i in range(xx.shape[0]): for j in range(xx.shape[1]): res[i,j,:] = somefunc(xx[i,j],yy[i,j]) plt.figure(dpi=100) plt.imshow(res) </code></pre> <p><a href="https://i.stack.imgur.com/J8IYi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/J8IYi.png" alt="enter image description here"></a></p>
python-3.x|numpy|matplotlib
1
1,031
58,497,010
How to setup tfserving with inception/mobilenet model for image classification?
<p>I'm unable to find the proper documentation to successfully serve the inception or mobilenet models and write a grpc client to connect to the server and perform image classification.</p> <p>Till now, I've successfully configured the tfserving image on CPU only. Unable to run it on my GPU.</p> <p>But, when I make a grpc client request, the request fails with the error.</p> <pre><code>grpc._channel._Rendezvous: &lt;_Rendezvous of RPC that terminated with: status = StatusCode.INVALID_ARGUMENT details = "Expects arg[0] to be float but string is provided" debug_error_string = "{"created":"@1571717090.210000000","description":"Error received from peer","file":"src/core/lib/surface/call.cc","file_line":1017,"grpc_message":"Expects arg[0] to be float but string is provided","grpc_status":3}" </code></pre> <p>I understand there is some issue in the request format but I couldn't find a proper documentation for the grpc client that can pin-point to correct direction.</p> <p>Here's the grpc client that I used for the request.</p> <pre><code>from __future__ import print_function import grpc import tensorflow as tf import time from tensorflow_serving.apis import predict_pb2 from tensorflow_serving.apis import prediction_service_pb2_grpc tf.app.flags.DEFINE_string('server', 'localhost:8505', 'PredictionService host:port') tf.app.flags.DEFINE_string('image', 'E:/Data/Docker/tf_serving/cat.jpg', '‪path to image') FLAGS = tf.app.flags.FLAGS def main(_): channel = grpc.insecure_channel(FLAGS.server) stub = prediction_service_pb2_grpc.PredictionServiceStub(channel) # Send request with open(FLAGS.image, 'rb') as f: # See prediction_service.proto for gRPC request/response details. data = f.read() request = predict_pb2.PredictRequest() request.model_spec.name = 'inception' request.model_spec.signature_name = '' request.inputs['image'].CopyFrom(tf.contrib.util.make_tensor_proto(data, shape=[1])) result = stub.Predict(request, 5.0) # 10 secs timeout print(result) print("Inception Client Passed") if __name__ == '__main__': tf.app.run() </code></pre>
<p>Like I understood, there are 2 issues in your question.</p> <p>A) Running tfserving on GPU.</p> <p>B) Making a successfully grpc client request.</p> <p>Let's start one-by-one.</p> <hr> <p><strong>Running tfserving on GPU</strong></p> <p>It is simple 2-step process.</p> <ol> <li><p>Pulling latest image from the <a href="https://hub.docker.com/r/tensorflow/serving" rel="nofollow noreferrer">official docker hub page</a>.</p> <pre><code>docker pull tensorflow/serving:latest-gpu </code></pre></li> </ol> <p><em>Please note the label <code>latest-gpu</code> in above pull request as it pulls image meant for GPU.</em></p> <ol start="2"> <li><p>Running the docker container.</p> <pre><code>sudo docker run -p 8502:8500 --mount type=bind,source=/my_model_dir,target=/models/inception --name tfserve_gpu -e MODEL_NAME=inception --gpus device=3 -t tensorflow/serving:latest-gpu </code></pre></li> </ol> <p><em>Please note, I've passed argument <code>--gpus device=3</code> to select the 3rd GPU device. Change it accordingly to select a different GPU device.</em></p> <p>Verify, if the container has been started by <code>docker ps</code> command.</p> <p>Also, verify if the gpu has been allocated for the tfserving docker by <code>nvidia-smi</code> command.</p> <p><strong>Output of nvidia-smi</strong></p> <p><a href="https://i.stack.imgur.com/SV9Cr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SV9Cr.png" alt="Output of nvidia-smi command"></a></p> <p>But here seems a little problem. The tfserving docker has consumed all of gpu device memory.</p> <p>To restrict gpu memory usage, use <code>per_process_gpu_memory_fraction</code> flag.</p> <pre><code>sudo docker run -p 8502:8500 --mount type=bind,source=/my_model_dir,target=/models/inception --name tfserve_gpu -e MODEL_NAME=inception --gpus device=3 -t tensorflow/serving:latest-gpu --per_process_gpu_memory_fraction=0.02 </code></pre> <p><strong>Output of nvidia-smi</strong></p> <p><a href="https://i.stack.imgur.com/IeZpc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IeZpc.png" alt="Output of nvidia-smi command"></a></p> <p>Now, we have successfully configured tfserving docker on GPU device with reasonable gpu memory usage. Lets jump to the second problem.</p> <hr> <p><strong>Making GRPC client request</strong></p> <p>There is issue in formatting of your grpc client request. 
The tfserving docker image doesn't takes image in binary format directly, instead you'll have to make a tensor for that image and then pass it to the server.</p> <p>Here's the code for making the grpc client request.</p> <pre><code>from __future__ import print_function import argparse import time import numpy as np from cv2 import imread import grpc from tensorflow.contrib.util import make_tensor_proto from tensorflow_serving.apis import predict_pb2 from tensorflow_serving.apis import prediction_service_pb2_grpc import tensorflow as tf def read_tensor_from_image_file(file_name, input_height=299, input_width=299, input_mean=0, input_std=255): input_name = "file_reader" output_name = "normalized" file_reader = tf.io.read_file(file_name, input_name) if file_name.endswith(".png"): image_reader = tf.image.decode_png( file_reader, channels=3, name="png_reader") elif file_name.endswith(".gif"): image_reader = tf.squeeze( tf.image.decode_gif(file_reader, name="gif_reader")) elif file_name.endswith(".bmp"): image_reader = tf.image.decode_bmp(file_reader, name="bmp_reader") else: image_reader = tf.image.decode_jpeg( file_reader, channels=3, name="jpeg_reader") float_caster = tf.cast(image_reader, tf.float32) dims_expander = tf.expand_dims(float_caster, 0) resized = tf.compat.v1.image.resize_bilinear(dims_expander, [input_height, input_width]) normalized = tf.divide(tf.subtract(resized, [input_mean]), [input_std]) sess = tf.Session(config=tf.ConfigProto(gpu_options=tf.GPUOptions(per_process_gpu_memory_fraction=0.01))) result = sess.run(normalized) return result def run(host, port, image, model, signature_name): # Preparing tensor from the image tensor = read_tensor_from_image_file(file_name='images/bird.jpg', input_height=224, input_width=224, input_mean=128, input_std=128) # Preparing the channel channel = grpc.insecure_channel('{host}:{port}'.format(host=host, port=port)) stub = prediction_service_pb2_grpc.PredictionServiceStub(channel) # Preparing grpc request request = predict_pb2.PredictRequest() request.model_spec.name = model request.model_spec.signature_name = signature_name request.inputs['image'].CopyFrom(make_tensor_proto(tensor, shape=[1, 224, 224, 3])) # Making predict request result = stub.Predict(request, 10.0) # Analysing result to get the prediction output. predictions = result.outputs['prediction'].float_val print("Predictions : ", predictions) if __name__ == '__main__': parser = argparse.ArgumentParser() parser.add_argument('--host', help='Tensorflow server host name', default='localhost', type=str) parser.add_argument('--port', help='Tensorflow server port number', default=8502, type=int) parser.add_argument('--image', help='input image', default='bird.jpg', type=str) parser.add_argument('--model', help='model name', default='inception', type=str) parser.add_argument('--signature_name', help='Signature name of saved TF model', default='serving_default', type=str) args = parser.parse_args() run(args.host, args.port, args.image, args.model, args.signature_name) </code></pre> <p>I'm not very sure whether this is the best way to make tfserving grpc client request (<strong><em>since tensorflow library is required at the client end to prepare the tensor</em></strong>) but it works for me.</p> <p>Suggestions are welcomed if any.</p>
tensorflow|grpc|tensorflow-serving|tensorflow2.0
1
1,032
58,466,415
How to reshape a tensorflow Dataset structure?
<p>I'm coding a Pix2Pix network, with my own load_input/real_image function, and I'm currently creating the dataset with tf.data.Dataset. The problem is that my dataset has the wrong shape:</p> <p>I've tried applying a few tf.data.experimemtal functions, none of them work as I want.</p> <pre class="lang-py prettyprint-override"><code> raw_data = [load_image_train(category) for category in SELECTED_CATEGORIES for _ in range(min(MAX_SAMPLES_PER_CATEGORY, category[1]))] train_dataset = tf.data.Dataset.from_tensor_slices(raw_data) train_dataset = train_dataset.cache().shuffle(BUFFER_SIZE) train_dataset = train_dataset.batch(1) </code></pre> <p>I have : &lt; BatchDataset shapes: (None, 2, 256, 256, 3), types: tf.float32></p> <p>I want : &lt; DatasetV1Adapter shapes: ((None, 256, 256, 3), (None, 256, 256, 3)), types: (tf.float32, tf.float32)></p>
<p>You can do it in two ways.</p> <p><strong>Option 1 (Preferred)</strong></p> <pre><code>raw_data1, raw_data2 = tf.unstack(raw_data, axis=1) train_dataset = tf.data.Dataset.from_tensor_slices((raw_data1, raw_data2)) </code></pre> <p><strong>Option 2</strong></p> <pre><code>def map_fn(data): return tf.unstack(data, axis=0) train_dataset = tf.data.Dataset.from_tensor_slices(raw_data) train_dataset = train_dataset.map(map_fn) </code></pre>
python|tensorflow|tensorflow-datasets
2
1,033
58,600,411
Convert list to dataframe
<p>I am running a loop that appends three fields. Predictfinal is a list, though it is not necessary that it should be a list.</p> <pre><code> predictfinal.append(y_hat_orig[0]) predictfinal.append(mape) predictfinal.append(length) </code></pre> <p>At the end, predictfinal returns a long list. But I really want to conform the list into a Dataframe, where each row is 3 columns. However the list does not designate between the 3 columns, it's just a long list with commas in between. Somehow I am trying to slice predictfinal into 3 columns and a Dataframe from currnet unstructured list - any help how?</p> <pre><code>predictfinal Out[88]: [1433.0459967608983, 1.6407741379111223, 23, 1433.6389125340916, 1.6474721044455922, 22, 1433.867408791692, 1.6756763089082383, 21, 1433.8484984008207, 1.6457581105556003, 20, 1433.6340460965778, 1.6380908467895527, 19, 1437.0294365907992, 1.6147672264908473, 18, 1439.7485102740507, 1.5010415925555876, 17, 1440.950406295299, 1.433891246672529, 16, 1434.837060644701, 1.5252803314930383, 15, 1434.9716303636983, 1.6125952442799232, 14, 1441.3153523102953, 3.2633984339696185, 13, 1435.6932462859334, 3.2703435261200497, 12, 1419.9057834496082, 1.9100005818319687, 11, 1426.0739741342488, 1.947684057178654, 10] </code></pre>
<p>Based on <a href="https://stackoverflow.com/a/48347320/6926444">https://stackoverflow.com/a/48347320/6926444</a></p> <p>We can achieve it by using <strong>zip()</strong> and <strong>iter()</strong>. The code below iterates three elements each time.</p> <pre><code>res = pd.DataFrame(list(zip(*([iter(data)] * 3))), columns=['a', 'b', 'c']) </code></pre> <p><strong>Result:</strong></p> <pre><code> a b c 0 1433.045997 1.640774 23 1 1433.638913 1.647472 22 2 1433.867409 1.675676 21 3 1433.848498 1.645758 20 4 1433.634046 1.638091 19 5 1437.029437 1.614767 18 6 1439.748510 1.501042 17 7 1440.950406 1.433891 16 8 1434.837061 1.525280 15 9 1434.971630 1.612595 14 10 1441.315352 3.263398 13 11 1435.693246 3.270344 12 12 1419.905783 1.910001 11 13 1426.073974 1.947684 10 </code></pre>
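<p>If you are comfortable pulling in NumPy, an equivalent sketch reshapes the flat list into rows of three before building the frame (the column names are just illustrative guesses for the three repeated fields):</p> <pre><code>import numpy as np
import pandas as pd

# reshape the flat list into rows of 3 values each
res = pd.DataFrame(np.array(predictfinal).reshape(-1, 3),
                   columns=['y_hat_orig', 'mape', 'length'])
print(res)
</code></pre> <p>Note that going through a NumPy array upcasts everything to float, so the <code>length</code> column comes back as floats rather than ints, unlike the <code>zip</code>/<code>iter</code> version above.</p>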
python|pandas
2
1,034
69,143,408
Revisit "How to find the position/index of a particular file in a directory?"
<p>I have a question from the following discussion:</p> <p><a href="https://stackoverflow.com/questions/40675412/how-to-find-the-position-index-of-a-particular-file-in-a-directory">How to find the position/index of a particular file in a directory?</a></p> <p>Suppose I have <em>three excel files</em> in a folder: <em>test_3d, test_3d1, test_3d2</em></p> <p>It says we can read the <strong>index of a file</strong> from the following codes</p> <pre><code>folder = r'C:\Users\Denny\Desktop\Work\test_read' files = os.listdir(folder) files.index('test_3d1.xlsx') &gt;&gt; 1 </code></pre> <p>Also, we can read the <strong>data of each file</strong> by</p> <pre><code>folder = r'C:\Users\Denny\Desktop\Work\test_read' files = os.listdir(folder) dfs = {} for file in files: if file.endswith('.xlsx'): dfs[file[:-5]] = pd.read_excel(os.path.join(folder,file), header = None, skiprows=[0], usecols = &quot;B:M&quot;) dfs['test_3d1'] </code></pre> <p><a href="https://i.stack.imgur.com/sSPVM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sSPVM.png" alt="enter image description here" /></a></p> <p>Also, we can show all its files by using</p> <pre><code>files &gt;&gt; ['test_3d.xlsx', 'test_3d1.xlsx', 'test_3d2.xlsx'] </code></pre> <p>My question now is how to get the data of each file <strong>not</strong> by its name</p> <pre><code>dfs['test_3d1'] </code></pre> <p>but by its <strong>index</strong>, for example</p> <pre><code>dfs['files[1]'] # I want to pick up the 2nd file 'text_3d1' from files. </code></pre> <p>However, it shows an error</p> <p><a href="https://i.stack.imgur.com/8yQYa.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8yQYa.jpg" alt="enter image description here" /></a></p> <p>How to fix this error?</p>
<p>If you wanted to look values in the dictionary as per the screenshot you posted, you could do: <code>dfs[files[1][:-5]]</code>. This gets the file at index 1 and then excludes the file extension as you've done in in the step to build the <code>dfs</code> dictionary.</p> <p>Optionally, you could use the <a href="https://docs.python.org/3/reference/datamodel.html#object.__missing__" rel="nofollow noreferrer"><code>__missing__</code> method</a> available for dictionaries to change the behaviour of how keys which aren't present in the dictionary are handled. Using the <a href="https://stackoverflow.com/a/6229253/1431750">recipe from this answer</a>, you could use a lookup on the <a href="https://docs.python.org/3/library/stdtypes.html#dict.values" rel="nofollow noreferrer">dictionary <code>values()</code></a> or modify the key to remove the file extension and then return the value for that key. So you can use it with <code>dfs[files[1]]</code> without needing to strip off the extension each time.</p> <pre class="lang-py prettyprint-override"><code>In [1]: class smart_dict(dict): ...: def __missing__(self, key): ...: if isinstance(key, int): # skip this if you don't plan ...: return list(self.values())[key] # to use ints directly ...: # or ...: # return self[list(self.keys())[key]] ...: if key.endswith('.xlsx'): ...: return self[key[:-5]] ...: raise KeyError(key) ...: In [2]: dfs = smart_dict() ...: dfs['a'] = 'A' ...: dfs['b'] = 'B' In [3]: dfs['a'] # normal usage Out[3]: 'A' In [4]: dfs[0] # index-based lookup Out[4]: 'A' In [5]: dfs[1] Out[5]: 'B' In [6]: dfs['a.xlsx'] # lookup with the filename Out[6]: 'A' In [7]: dfs['does not exist'] # still raises KeyError ... KeyError: 'does not exist' In [8]: dfs['nope.xlsx'] # also raises KeyError ... KeyError: 'nope' </code></pre> <p>Btw, there is an overhead to constantly doing <code>list(dict.values())[index]</code> lookups for integer keys since <code>values()</code> can't indexed directly. So avoid using that. The '.xlsx' extension-removal lookup is okay since it merely removes the extension and then directly uses that as a key.</p>
python|excel|pandas
2
1,035
69,189,717
Filtering for and replacing values in one Pandas DataFrame based on common columns of another DataFrame
<p>I have a question regarding Pandas and the correct indexing and replacing of values.</p> <p>I have 2 DataFrames, df1 and df2, with the same columns (Col1, Col2, Col3 and Col4).</p> <pre><code>df1 = pd.DataFrame([['A','b','x',1], ['A','b','y',2], ['A','c','z',3], ['B','b','x',4]], columns=['Col1', 'Col2', 'Col3', 'Col4']) df2 = pd.DataFrame([['A','b','y',0], ['B','b','x',0]], columns=['Col1','Col2','Col3','Col4']) df1 Col1 Col2 Col3 Col4 0 A b x 1 1 A b y 2 2 A c z 3 3 B b x 4 df2 Col1 Col2 Col3 Col4 0 A b y 0 1 B b x 0 </code></pre> <p>In <strong>df1</strong>, I would like to replace the values in <strong>Col4</strong> in the rows that match the values of the <strong>other columns</strong> (Col1, Col2 and Col3) in <strong>df2</strong> with another value (let's say 100).</p> <p>The resulting df1 would look like this:</p> <pre><code>df1 Col1 Col2 Col3 Col4 0 A b x 1 1 A b y 100 2 A c z 3 3 B b x 100 </code></pre> <p>I have tried with something like this:</p> <pre><code>columns = list(df1.columns) columns.remove('Col4') df1.loc[(df1[cols] == df2[cols].values).all(axis=1)]['Col4']=100 </code></pre> <p>But I am getting errors and I am not sure if this is even achieving what I want.</p>
<p>You could do an <code>isin</code> with the indices, and assign the value via boolean masking:</p> <pre class="lang-py prettyprint-override"><code> cols = ['Col1', 'Col2', 'Col3'] temp1 = df1.set_index(cols) temp2 = df2.set_index(cols) # get the booleans here booleans = temp1.index.isin(temp2.index) # this assigns 100 to only rows in Col4 # that are True df1.loc[booleans, 'Col4'] = 100 df1 Col1 Col2 Col3 Col4 0 A b x 1 1 A b y 100 2 A c z 3 3 B b x 100 </code></pre> <p>Alternatively, you could resolve it with <code>pd.merge</code> and the <code>indicator</code> parameter:</p> <pre class="lang-py prettyprint-override"><code>(df1.merge(df2, on = cols, how = 'left', indicator=True, suffixes = (None, '_y')) .assign(Col4 = lambda df: np.where(df._merge == 'both', 100, df.Col4)) .loc[:, df1.columns] ) Col1 Col2 Col3 Col4 0 A b x 1 1 A b y 100 2 A c z 3 3 B b x 100 </code></pre>
python|pandas|indexing
4
1,036
68,963,686
Get closest datetime index value from pd DataFrame
<p>I've got following DataFrame:</p> <pre><code> holdings 2021-08-28 04:10:14.130412+00:00 {'$USD': 158, 'Apple': 3} 2021-08-25 18:10:14.130412+00:00 {'$USD': 158, 'Apple': 3} </code></pre> <p>With holdings as column and datetimes as index.</p> <p>I got this by converting following dict to a DataFrame: (data is not consistent with previous example but it is the same format, so please ignore that)</p> <pre><code>{ datetime.datetime(2021, 8, 28, 4, 10, 15, 180064, tzinfo=datetime.timezone.utc): { &quot;$USD&quot;: &quot;158.1727087865&quot;, &quot;Apple&quot;: &quot;3&quot;, &quot;MSFT&quot;: &quot;3&quot;, }, datetime.datetime(2021, 8, 24, 4, 10, 15, 180064, tzinfo=datetime.timezone.utc): { &quot;$USD&quot;: &quot;158.1727087865&quot;, &quot;Apple&quot;: &quot;3&quot;, &quot;MSFT&quot;: &quot;3&quot;, } } </code></pre> <p>I transform the dict to a dataframe by:</p> <pre><code> holdings_dict = { key: {&quot;holdings&quot;: holdings_dict[key]} for key in holdings_dict.keys() } holdings_df = pd.DataFrame.from_dict( holdings_dict, orient=&quot;index&quot;, columns=[&quot;holdings&quot;] ).sort_index(axis=0) </code></pre> <p>Now I try to get the nearest index and value to a certain date, let's say 2021-08-25, which is stored as cur_datetime</p> <pre><code>holdings_df.index.get_loc( pd.to_datetime(cur_datetime), method=&quot;previous&quot; )[&quot;holdings&quot;] </code></pre> <p>But this gives an error</p> <pre><code>ValueError: Invalid fill method. Expecting pad (ffill), backfill (bfill) or nearest. Got previous </code></pre> <p>How can I get the value of the nearest datetime (in a query you would do this by LTE)</p>
<p>I have to say you have a somewhat unconventional way of working with pandas ;)</p> <p>Nevertheless, <code>get_loc</code> returns an integer position, so you need to use <code>iloc</code> to slice your row. Because you want the last available date (a less-than-or-equal lookup), use <code>method='pad'</code>:</p> <pre><code>holdings_df.iloc[holdings_df.index.get_loc(pd.to_datetime(cur_datetime), method='pad')]['holdings'] </code></pre> <p>output:</p> <pre><code>{'$USD': '158.1727087865', 'Apple': '3', 'MSFT': '3'} </code></pre>
python|pandas|dataframe
1
1,037
69,187,899
Find Last Available Date in Pandas Data Frame
<p>Suppose that I have a Pandas DataFrame as below:</p> <pre><code>+------------+-------+ | Date | Price | +------------+-------+ | 01/01/2021 | 10 | | 01/02/2021 | 20 | | 01/03/2021 | 30 | | 01/05/2021 | 40 | | 01/08/2021 | 20 | | 01/09/2021 | 10 | +------------+-------+ </code></pre> <p>The above data frame can be generated using code below:</p> <pre><code>df = pd.DataFrame({'Date': ['2021-01-01', '2021-01-02', '2021-01-03', '2021-01-05', '2021-01-08', '2021-01-09'], 'Price': [10, 20, 30, 40, 20, 10]}) df['Date'] = pd.to_datetime(df['Date']) </code></pre> <p>Now given a date stored in variable <code>end_date</code>. The first step is to find if the date exists in the data frame. It can be done using the below code:</p> <pre><code>if end_date in df.Date.values: pass else: # find last available date. </code></pre> <p>What would be the most elegant way to find the last available date in data frame.</p> <p>E.g. if <code>end_date = '2021-01-10'</code>. Since it does not exists in data frame I want <code>end_date</code> value to be set at <code>2021-01-09</code>. Similarly, if <code>end_date = 2021-01-07</code> I want <code>end_date</code> value to be set at <code>2021-01-05</code>.</p> <p>Alternatively if <code>end_date = 2021-01-08</code> <code>end_date</code> won't be overwritten and would remain as is i.e. <code>end_date = 2021-01-08</code>.</p>
<p>The other answers are assuming the dates are always in order in your dataframe.</p> <p>Since your dates are sortable, you can just use comparison operators (note that this will work even if you keep them as strings, as the format you are using is lexicographically sortable).</p> <p>To get the last available date, first filter out dates after <code>end_date</code> and then find the max:</p> <pre class="lang-py prettyprint-override"><code>end_date = df[df['Date'] &lt;= end_date]['Date'].max() </code></pre>
python|pandas|datetime
1
1,038
44,593,948
Using df.column.str.contains and updating a pandas dataframe column
<p>I have a pandas dataframe with two columns.</p> <pre><code>df= pd.DataFrame({"C": ['this is orange','this is apple','this is pear','this is plum','this is orange'], "D": [0,0,0,0,0]}) </code></pre> <p>I want to be able to read this C column and return in the D column the name of the fruit. So my thought process was using df.C.str.contains to determine if a certain string appears in each row of C and then D updates accordingly.The elements in C may be really long strings: ex. "This is apple which is red" but I only care if the word apple appears in the cell. I should note that i'm not tied to using str.contains but this seemed the most obvious path to me. Just not sure how I would apply it.</p> <p>The final dataframe will look like:</p> <pre><code>df= pd.DataFrame({"C": ['this is orange','this is apple','this is pear','this is plum','this is orange'], "D": ['orange','apple','pear','plum','grapefruit']}) </code></pre>
<p>Consider this dataframe</p> <pre><code>df= pd.DataFrame({"C": ['this is orange','this is apple which is red','this is pear','this is plum','this is orange'], "D": [0,0,0,0,0]}) C D 0 this is orange 0 1 this is apple which is red 0 2 this is pear 0 3 this is plum 0 4 this is orange 0 </code></pre> <p>You can use the following code to extract the fruit name ASSUMING the name of the fruit follows 'this is'</p> <pre><code>df['D'] = df.C.str.extract('this is ([A-Za-z]+)\s?.*?') </code></pre> <p>You get</p> <pre><code> C D 0 this is orange orange 1 this is apple which is red apple 2 this is pear pear 3 this is plum plum 4 this is orange orange </code></pre> <p>For the example dataset that you have posted, a simple split on space and extracting the last element works</p> <pre><code>df['D'] = df.C.str.split(' ').str[-1] </code></pre>
python|regex|pandas
1
1,039
44,400,860
Pandas Filter by string
<p>I need to filter my groups to show only the groups that contain a string in all the rows of a group.</p> <pre><code>Index A B C 0 A1 B5 T 1 A1 B2 T 2 A1 B2 F 3 A2 B5 T 4 A2 F5 T 5 A3 F4 T 6 A4 F4 F </code></pre> <p>Returns: </p> <pre><code>Index A B C 3 A2 B5 T 4 A2 F5 T 5 A3 F4 T </code></pre> <p>Tried: <code>df.groupby('A').apply(lambda x: x[x['C']==T])</code></p> <p>And as you may have known it returns: </p> <pre><code>Index A B C 0 A1 B5 T 1 A1 B2 T 3 A2 B5 T 4 A2 F5 T 5 A3 F4 T </code></pre> <p>When I change apply to filter I get an error.</p> <p>Help Please!</p>
<p><strong>Using <code>transform</code></strong><br> <em>Fastest solution that is also simple</em> </p> <pre><code>df[df.C.eq('T').groupby(df.A.values).transform('all')] A B C Index 3 A2 B5 T 4 A2 F5 T 5 A3 F4 T </code></pre> <hr> <p><strong>Using <code>crosstab</code></strong><br> <em>Shortest solution I could think of... but slow</em> </p> <pre><code>df[df.A.map(pd.crosstab(df.A, df.C).F.eq(0))] A B C Index 3 A2 B5 T 4 A2 F5 T 5 A3 F4 T </code></pre> <hr> <p><strong><code>project</code>/<em>kill</em></strong><br> <em>Very fast solution... but complicated</em> </p> <pre><code>f, u = pd.factorize(df.A.values) t = (df.C.values == 'T').astype(int) b0 = np.bincount(f * 2 + t) pad = np.zeros(2 * u.size - b0.size, dtype=int) b = np.append(b0, pad) df[~b.reshape(-1, 2)[:, 0].astype(bool)[f]] A B C Index 3 A2 B5 T 4 A2 F5 T 5 A3 F4 T </code></pre> <hr> <p><strong>Timing</strong> </p> <pre><code>%timeit df[df.C.eq('T').groupby(df.A.values).transform('all')] %timeit df[df.A.map(pd.crosstab(df.A, df.C).F.eq(0))] %timeit df.groupby('A').filter(lambda x: len(x[x.C=='T'])==len(x)) 1000 loops, best of 3: 1.67 ms per loop 100 loops, best of 3: 6.15 ms per loop 100 loops, best of 3: 3.05 ms per loop %%timeit f, u = pd.factorize(df.A.values) t = (df.C.values == 'T').astype(int) b0 = np.bincount(f * 2 + t) pad = np.zeros(2 * u.size - b0.size, dtype=int) b = np.append(b0, pad) df[~b.reshape(-1, 2)[:, 0].astype(bool)[f]] 1000 loops, best of 3: 279 µs per loop d1 = df.assign(mydummy=df['C']=='T') d1['mysum'] = d1.groupby('A').mydummy.transform('sum') d1['mycount'] = d1.groupby('A').mysum.transform('size') d1.loc[d1.mysum == d1.mycount, df.columns] 100 loops, best of 3: 3.68 ms per loop </code></pre>
python|pandas|numpy
2
1,040
44,661,035
Restore vgg16 network in tensorflow
<p>This one has been giving me a headache for quite some time now, even though it seems to be very basic.</p> <p>I have the vgg16 network downloaded as a .ckpt (from <a href="https://github.com/tensorflow/models/blob/master/slim/README.md#Pretrained" rel="nofollow noreferrer">https://github.com/tensorflow/models/blob/master/slim/README.md#Pretrained</a>)</p> <p>Now what I want to do is load, for example, the tensor of the first convolution layer of this network as an array in R. </p> <p>I tried</p> <p>restorer = tf$train$Saver()</p> <p>sess = tf$Session()</p> <p>restorer$restore(sess, "/home/beheerder/R/vgg_16.ckpt")</p> <p>But then I do not see any variables appearing in my environment.</p> <p>I'm working in R, but an answer in Python is OK as well, as I can probably translate it to R.</p>
<p>Saver takes the variables to restore in constructor. In other words, you have to create the variables before you can restore them. Here is the example from Saver's doc:</p> <pre><code>v1 = tf.Variable(..., name='v1') v2 = tf.Variable(..., name='v2') # Pass the variables as a dict: saver = tf.train.Saver({'v1': v1, 'v2': v2}) # Or pass them as a list. saver = tf.train.Saver([v1, v2]) </code></pre> <p>If you were to run the first line of your code in python you would get:</p> <pre><code>In [1]: import tensorflow as tf In [2]: saver = tf.train.Saver() --------------------------------------------------------------------------- ValueError Traceback (most recent call last) &lt;ipython-input-2-18da33d742f9&gt; in &lt;module&gt;() ----&gt; 1 saver = tf.train.Saver() /usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.pyc in __init__(self, var_list, reshape, sharded, max_to_keep, keep_checkpoint_every_n_hours, name, restore_sequentially, saver_def, builder, defer_build, allow_empty, write_version, pad_step_number) 1054 self._pad_step_number = pad_step_number 1055 if not defer_build: -&gt; 1056 self.build() 1057 if self.saver_def: 1058 self._check_saver_def() /usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.pyc in build(self) 1075 return 1076 else: -&gt; 1077 raise ValueError("No variables to save") 1078 self._is_empty = False 1079 self.saver_def = self._builder.build( ValueError: No variables to save </code></pre> <p>You can see how model variables are created before being restored in the 20 lines starting from <a href="https://github.com/tensorflow/models/blob/master/slim/train_image_classifier.py#L338" rel="nofollow noreferrer">https://github.com/tensorflow/models/blob/master/slim/train_image_classifier.py#L338</a></p> <p>This code gets executed if you make a call to train_image_classifier.py similar to the flower example in <a href="https://github.com/tensorflow/models/blob/master/slim/README.md#fine-tuning-a-model-from-an-existing-checkpoint" rel="nofollow noreferrer">https://github.com/tensorflow/models/blob/master/slim/README.md#fine-tuning-a-model-from-an-existing-checkpoint</a></p>
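<p>If all you want is the raw weight arrays rather than a restored graph, another route is to read the checkpoint file directly with a checkpoint reader, which does not require creating any variables first. This is only a hedged Python (TF 1.x) sketch; the exact name of the first convolution kernel is an assumption based on the slim VGG-16 variable scoping, which is why the sketch lists the stored names first:</p> <pre><code>import tensorflow as tf

# open the checkpoint file directly
reader = tf.train.NewCheckpointReader('/home/beheerder/R/vgg_16.ckpt')

# list every variable stored in the checkpoint, with its shape
for name, shape in reader.get_variable_to_shape_map().items():
    print(name, shape)

# assumed name of the first conv layer's kernel in the slim VGG-16 checkpoint
weights = reader.get_tensor('vgg_16/conv1/conv1_1/weights')  # returns a numpy array
print(weights.shape)
</code></pre> <p>From R with the tensorflow package the same calls should be reachable as <code>tf$train$NewCheckpointReader(...)</code>, since it mirrors the Python API.</p>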
r|tensorflow
0
1,041
44,494,221
How to store `pandas.DataFrame` in a PANDAS-LOADABLE binary format other than `pickle`
<p>I have a problem with saving <code>pandas.DataFrame</code> (1 440 000 000 rows).</p> <p>From what I can see in the API, the only available options to store (and then load) the array are either CSV or pickle.</p> <p>Saving in pickle format ends with a mysterious exception (<code>SystemError: error return without exception set</code>), while saving in CSV is a waste of space even if it is compressed (2-byte-long <code>np.float16</code> is much more efficient than ASCII-encoded value).</p> <p>How can I store my dataframe in a loadable, memory-efficient (including disk space) format?</p>
<p>I would guess that your data frame is too big. Pickle has some limits. You are much better off either saving in a database or using <em>to_hdf</em> (or lots of other IO routines; <em>to_msgpack</em> might work as well).</p> <p><a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_hdf.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_hdf.html</a></p>
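<p>For example, a minimal round trip through HDF5 might look like the following (this assumes PyTables is installed; the file name and key are arbitrary):</p> <pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.rand(1000, 3), columns=list('abc'))

# write the frame to a binary HDF5 store...
df.to_hdf('data.h5', 'df', mode='w')

# ...and read it back later
df2 = pd.read_hdf('data.h5', 'df')
</code></pre>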
python|python-2.7|python-3.x|pandas|dataframe
1
1,042
61,152,176
Joining time series by common date in Python (dataframe & series/list question)
<p>Noob here. PLEASE FORGIVE ABYSMAL FORMATTING as I am still learning. I am trying to create a time series (a dataframe, I think?) that consists of three columns. One is a date column, the next is an inventory column, and the last is a price column.</p> <p>I have pulled two separate series (date &amp; inventory; date &amp; price) and I want to meld the two series so that I can see three columns instead of two sets of two. This is my code.</p> <pre><code>import json import numpy as np import pandas as pd from urllib.error import URLError, HTTPError from urllib.request import urlopen class EIAgov(object): def __init__(self, token, series): ''' Purpose: Initialise the EIAgov class by requesting: - EIA token - id code(s) of the series to be downloaded Parameters: - token: string - series: string or list of strings ''' self.token = token self.series = series def __repr__(self): return str(self.series) def Raw(self, ser): # Construct url url = 'http://api.eia.gov/series/?api_key=' + self.token + '&amp;series_id=' + ser.upper() try: # URL request, URL opener, read content response = urlopen(url); raw_byte = response.read() raw_string = str(raw_byte, 'utf-8-sig') jso = json.loads(raw_string) return jso except HTTPError as e: print('HTTP error type.') print('Error code: ', e.code) except URLError as e: print('URL type error.') print('Reason: ', e.reason) def GetData(self): # Deal with the date series date_ = self.Raw(self.series[0]) date_series = date_['series'][0]['data'] endi = len(date_series) # or len(date_['series'][0]['data']) date = [] for i in range (endi): date.append(date_series[i][0]) # Create dataframe df = pd.DataFrame(data=date) df.columns = ['Date'] # Deal with data lenj = len(self.series) for j in range (lenj): data_ = self.Raw(self.series[j]) data_series = data_['series'][0]['data'] data = [] endk = len(date_series) for k in range (endk): data.append(data_series[k][1]) df[self.series[j]] = data return df if __name__ == '__main__': tok = 'mytoken' # Natural Gas - Weekly Storage # ngstor = ['NG.NW2_EPG0_SWO_R48_BCF.W'] # w/ several series at a time ['ELEC.REV.AL-ALL.M', 'ELEC.REV.AK-ALL.M', 'ELEC.REV.CA-ALL.M'] stordata = EIAgov(tok, ngstor) print(stordata.GetData()) # Natural Gas - Weekly Prices # ngpx = ['NG.RNGC1.W'] # w/ several series at a time ['ELEC.REV.AL-ALL.M', 'ELEC.REV.AK-ALL.M', 'ELEC.REV.CA-ALL.M'] pxdata = EIAgov(tok, ngpx) print(pxdata.GetData()) </code></pre> <p>Note that 'mytoken' needs to be replaced by an eia.gov API key. I can get this to successfully create an output of two lists...but then to get the lists merged I tried to add this at the end:</p> <pre><code>joined_frame = pd.concat([ngstor, ngpx], axis = 1, sort=False) print(joined_frame.GetData()) </code></pre> <p>But I get an error</p> <pre><code> ("TypeError: cannot concatenate object of type '&lt;class 'list'&gt;'; only Series and DataFrame objs are valid") </code></pre> <p>because apparently I don't know the difference between a list and a series. </p> <p>How do I merge these lists by date column? Thanks very much for any help. (Also feel free to advise why I am terrible at formatting code correctly in this post.)</p>
<p>If you want to manipulate them as DataFrames in the rest of your code, you can transform <code>ngstor</code> and <code>ngpx</code> into DataFrames as follows:</p> <pre><code>import pandas as pd # I create two lists that look like yours ngstor = [[1,2], ["2020-04-03", "2020-05-07"]] ngpx = [[3,4] , ["2020-04-03", "2020-05-07"]] # I transform them to DataFrames ngstor = pd.DataFrame({"value1": ngstor[0], "date_col": ngstor[1]}) ngpx = pd.DataFrame({"value2": ngpx[0], "date_col": ngpx[1]}) </code></pre> <p>Then you can either use <code>pandas.merge</code> or <code>pandas.concat</code> :</p> <pre><code># merge option joined_framed = pd.merge(ngstor, ngpx, on="date_col", how="outer") </code></pre> <pre><code># concat option ngstor = ngstor.set_index("date_col") ngpx = ngpx.set_index("date_col") joined_framed = pd.concat([ngstor, ngpx], axis=1, join="outer").reset_index() </code></pre> <p>The result will be:</p> <pre><code> date_col value1 value2 0 2020-04-03 1 3 1 2020-05-07 2 4 </code></pre>
python|pandas|list|join|series
1
1,043
60,777,165
Filter a multidimensional numpy array by column
<p>I have a multidimensional numpy array and I only want specific values in each column of the array. If the value does not match what I am filtering by, I want to delete the entire row. Code snippet:</p> <pre><code>array = ([4, 78.01, 65.00, 98.00],
         [5, 23.08, 87.68, 65.3],
         [6, 45.98, 56.54, 98.76],
         [7, 98.23, 26.65, 46.56])
</code></pre> <p>For example, in column 1 I would like numbers between 0-90, and in column 4 I would want values between 70-100. So my ideal output would be:</p> <pre><code> array = ([4, 78.01, 65.00, 98.00],
          [6, 45.98, 56.54, 98.76])
</code></pre> <p>Is there any way to do this?</p>
<p>You need to chain all conditions with <code>bitwise operators</code> and then perform boolean indexing:</p> <pre><code>array[(array[:,0] &gt; 0) &amp; (array[:,0] &lt; 100) &amp; (array[:,3] &gt; 90) &amp; (array[:,3] &lt; 100)]

array([[ 4.  , 78.01, 65.  , 98.  ],
       [ 6.  , 45.98, 56.54, 98.76]])
</code></pre>
python|arrays|numpy|multidimensional-array
1
1,044
71,540,057
numpy matrix row to csv
<p>in the code below, my intention is to copy the rows of this matrix to a csv file. I know that the csv function writer copies an array perfectly. But when the row comes from a matrix this doesn't seem to work. The csv file then looks like this <a href="https://i.stack.imgur.com/jtTI9.png" rel="nofollow noreferrer">'unwanted csv-file'</a>. The numbers of one row are together in 1 place of the csv file. Each one on a separate row though, which is what I desired. But I need the numbers of 1 row seperated with commas.</p> <p>This code is a simplification of the code needed for a simulation program. In which I want to copy the first row of a calculated matrix per time interval to a csv file. i in this example is replaced by time t and so each time the first row is sent to file1, second row to file2 and so on. But when I can solve the problem for this code, I can solve it for my simulation program as well.</p> <pre><code>import numpy as np import csv import pandas as pd V = np.matrix([[ 8.12500000e-03+4.12066060e-02j, -4.02435390e-18+9.30274988e-19j, -5.41422932e-18+4.03695160e-19j], [-7.21532153e-18+7.93138177e-19j, 8.12500000e-03+4.12066060e-02j, -6.50521303e-18+4.33680869e-18j], [-7.09036473e-18+5.11438288e-19j, -6.50521303e-18+4.33680869e-18j, 8.12500000e-03+4.12066060e-02j]]) path = 'C:/Users/Gebruiker/OneDrive/Documenten/' for i in range(0,3): if i ==0: with open(path + 'test12345678911.csv', mode='w', newline='') as voltage_file_a: voltage_writer_a = csv.writer(voltage_file_a, delimiter=';', quotechar='&quot;', quoting=csv.QUOTE_MINIMAL) row = np.array(abs(V[i,:])) voltage_writer_a.writerow(row) else: with open(path + 'test12345678911.csv', mode = 'a', newline = '') as voltage_file_a: voltage_writer_a = csv.writer(voltage_file_a, delimiter=';', quotechar='&quot;', quoting=csv.QUOTE_MINIMAL) row = np.array(abs(V[i,:])) voltage_writer_a.writerow(row) </code></pre> <p>I tried to solve this with the step 'row = np.array...' but this doesn't help. Has someone an idea how to convert the row of the matrix to the csv file.</p> <p>This code results in the following csv-file: <a href="https://i.stack.imgur.com/jtTI9.png" rel="nofollow noreferrer">unwanted csv-file</a></p>
<p>Some commands that may be useful for this:</p> <pre><code>np.matrix.ravel()
np.matrix.flatten()
np.matrix.tolist()
</code></pre> <p>In your example, replace <code>np.matrix</code> with your matrix <code>V</code>.</p>
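<p>Concretely, in the loop from the question the row taken from the matrix can be turned into a plain 1-D sequence before it is handed to <code>writerow</code>. A hedged sketch of just the changed lines (reusing <code>V</code>, <code>i</code> and <code>voltage_writer_a</code> from the question):</p> <pre><code>import numpy as np

# V[i, :] is still a 2-D (1 x n) np.matrix, and abs() keeps that shape.
# Converting to a plain ndarray and ravelling gives a flat 1-D row,
# so csv.writer writes one number per column instead of one big field.
row = np.asarray(abs(V[i, :])).ravel()
voltage_writer_a.writerow(row)

# equivalently, tolist() on the matrix returns a nested list; take its first row
row = abs(V[i, :]).tolist()[0]
voltage_writer_a.writerow(row)
</code></pre>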
arrays|matrix|multidimensional-array|numpy-ndarray|csvwriter
0
1,045
71,670,673
How to divide each value in column by the maximum value of a subset of that column
<p>I am trying to divide each row in a column by the maximum of a sub-list in the column where the sub-list if the column filtered by a category variable</p> <blockquote> <p>Is there a single line vector equation that creates col3? I have been trying to use groupby with transform(lambda x: x...) but can't seem to get the effect of maxif where it only takes the max of col2 where col1 = the rows with the same category as the row in col2 being divided.</p> </blockquote> <p>Sample input code:</p> <pre><code>import pandas as pd data = {'col1':['A', 'A', 'B', 'B'], 'col2':[1, 2, 3, 4]} df = pd.DataFrame(data) df </code></pre> <p>Desired output:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th><code>col1</code></th> <th><code>col2</code></th> <th><code>col3</code></th> <th><code>explanation</code></th> </tr> </thead> <tbody> <tr> <td><code>A</code></td> <td><code>1</code></td> <td><code>0.5</code></td> <td><code>e.g. 1/2</code></td> </tr> <tr> <td><code>A</code></td> <td><code>2</code></td> <td><code>1</code></td> <td><code>e.g. 2/2</code></td> </tr> <tr> <td><code>B</code></td> <td><code>3</code></td> <td><code>0.75</code></td> <td><code>e.g. 3/4</code></td> </tr> <tr> <td><code>B</code></td> <td><code>4</code></td> <td><code>1</code></td> <td><code>e.g. 4/4</code></td> </tr> </tbody> </table> </div>
<p>Sure:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; df['col2'] / df.groupby('col1')['col2'].transform(max) 0 0.50 1 1.00 2 0.75 3 1.00 </code></pre> <p>You could then assign that result to a new column of your choice.</p>
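<p>For completeness, assigning that straight back onto the frame reproduces the <code>col3</code> column from the question:</p> <pre><code>import pandas as pd

df = pd.DataFrame({'col1': ['A', 'A', 'B', 'B'], 'col2': [1, 2, 3, 4]})

# divide each value by the maximum of its own col1 group
df['col3'] = df['col2'] / df.groupby('col1')['col2'].transform('max')
print(df)
</code></pre>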
python|pandas|lambda|max
2
1,046
71,482,239
Diagonalizing a pandas DataFrame
<p>Consider the following pandas DataFrame:</p> <pre><code>import numpy as np import pandas as pd df_foo = pd.DataFrame([1,2,3]) </code></pre> <p>I believe I used to be able to diagonalize this DataFrame as follows (see e.g. this thread <a href="https://stackoverflow.com/questions/17408896/diagonalising-a-pandas-series">Diagonalising a Pandas series</a>)</p> <pre><code>df_foo_diag = pd.DataFrame(np.diag(df_foo), index=df_foo.index, columns = df_foo.index) </code></pre> <p>However, when I do this now, it seems that <code>np.diag(df_foo)</code> returns a 1 by 1 array containing the first value of the DataFrame. In other words, it seems like numpy extracts the diagonal, instead of constructing a diagonal array.</p> <p>How can I construct a diagonal DataFrame out of a 1-dimensional DataFrame?</p>
<p>Convert the one-column DataFrame to a <code>Series</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.squeeze.html" rel="nofollow noreferrer"><code>DataFrame.squeeze</code></a> and then your solution works well:</p> <pre><code>df_foo_diag = pd.DataFrame(np.diag(df_foo.squeeze()), index=df_foo.index, columns = df_foo.index)
print (df_foo_diag)

   0  1  2
0  1  0  0
1  0  2  0
2  0  0  3
</code></pre> <hr /> <pre><code>df_foo = pd.DataFrame([10,20,30])

df_foo_diag = pd.DataFrame(np.diag(df_foo.squeeze()), index=df_foo.index, columns = df_foo.index)
print (df_foo_diag)

    0   1   2
0  10   0   0
1   0  20   0
2   0   0  30
</code></pre>
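<p>As a side note, if you would rather avoid the squeeze entirely, <code>np.diagflat</code> flattens its input before building the diagonal, so it accepts the one-column DataFrame directly (a small sketch):</p> <pre><code>import numpy as np
import pandas as pd

df_foo = pd.DataFrame([10, 20, 30])

# diagflat flattens the (3, 1) input itself, so no squeeze is needed
df_foo_diag = pd.DataFrame(np.diagflat(df_foo),
                           index=df_foo.index, columns=df_foo.index)
print(df_foo_diag)
</code></pre>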
python|pandas|numpy
3
1,047
71,478,959
Removing dot symbol from string - Pandas Dataframe
<p>I'm trying to replace the '.' symbol with '':</p> <pre><code>excel_data_df['serialNumber'] = df2[['Serial number', 'Serial number.1']].agg(''.join, axis=1).replace(to_replace = '.', value = '', regex = True)
</code></pre> <p>My string: &quot;TF013168.&quot; Name: serialNumber, dtype: object (the number is saved as text in Excel).</p> <p>But as a result I get all the characters removed from the string. Is there another way to do it?</p>
<p>Escape <code>.</code> as <code>\.</code>, because <code>.</code> is a special regex character that matches any character, so the unescaped pattern removes everything:</p> <pre><code>excel_data_df['serialNumber'] = df2[['Serial number', 'Serial number.1']].agg(''.join, axis=1).replace(to_replace = '\.', value = '', regex = True)
</code></pre>
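<p>Alternatively, if you would rather not think about regex escaping at all, a literal (non-regex) replacement sketch, assuming a pandas version whose <code>Series.str.replace</code> accepts <code>regex=False</code>:</p> <pre><code>excel_data_df['serialNumber'] = (
    df2[['Serial number', 'Serial number.1']]
    .agg(''.join, axis=1)
    .str.replace('.', '', regex=False)  # plain string replacement, no regex
)
</code></pre>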
python-3.x|pandas
2
1,048
42,257,725
Spedup distance and summary computation between two HUGE multi-dimensional arrays in python
<p>I have only a year of experience with using python. I would like to find summary statistics based on two multi-dimensional arrays <code>DF_All</code> and <code>DF_On</code>. Both have <code>X</code>,<code>Y</code> values. A function is created that computes distance as <code>sqrt((X-X0)^2 + (Y-Y0)^2)</code> and generates summaries as shown in the code below. My question is: Is there any way to make this code run faster? I would prefer a native python method but other strategies (like <code>numba</code> are also welcomed).</p> <p>The example (toy) code below takes only 50 milliseconds to run on my windows-7 x64 desktop. But my <code>DF_All</code> has more than 10,000 rows and I need to do this calculation a huge number of times as well resulting in a huge execution time. </p> <pre><code>import numpy as np import pandas as pd import json, random # create data KY = ['ER','WD','DF'] DS = ['On','Off'] DF_All = pd.DataFrame({'KY': np.random.choice(KY,20,replace = True), 'DS': np.random.choice(DS,20,replace = True), 'X': random.sample(range(1,100),20), 'Y': random.sample(range(1,100),20)}) DF_On = DF_All[DF_All['DS']=='On'] # function def get_values(DF_All,X = list(DF_On['X'])[0],Y = list(DF_On['Y'])[0]): dist_vector = np.sqrt((DF_All['X'] - X)**2 + (DF_All['Y'] - Y)**2) # computes distance DF_All = DF_All[dist_vector&lt;35] # filters if distance is &lt; 35 # print(DF_All.shape) DS_summary = [sum(DF_All['DS']==x) for x in ['On','Off']] # get summary KY_summary = [sum(DF_All['KY']==x) for x in ['ER','WD','DF']] # get summary joined_summary = DS_summary + KY_summary # join two summary lists return(joined_summary) # return Array_On = DF_On.values.tolist() # convert to array then to list Values = [get_values(DF_All,ZZ[2],ZZ[3]) for ZZ in Array_On] # list comprehension to get DS and KY summary for all rows of Array_On list Array_Updated = [x + y for x,y in zip(Array_On,Values)] # appending the summary list to Array_On list Array_Updated = pd.DataFrame(Array_Updated) # converting to pandas dataframe print(Array_Updated) </code></pre>
<p>Here's an approach making use of <code>vectorization</code> by getting rid of the looping there -</p> <pre><code>from scipy.spatial.distance import cdist def get_values_vectorized(DF_All, Array_On): a = DF_All[['X','Y']].values b = np.array(Array_On)[:,2:].astype(int) v_mask = (cdist(b,a) &lt; 35).astype(int) DF_DS = DF_All.DS.values DS_sums = v_mask.dot(DF_DS[:,None] == ['On','Off']) DF_KY = DF_All.KY.values KY_sums = v_mask.dot(DF_KY[:,None] == ['ER','WD','DF']) return np.column_stack(( DS_sums, KY_sums )) </code></pre> <p>Using a bit less memory, a tweaked one -</p> <pre><code>def get_values_vectorized_v2(DF_All, Array_On): a = DF_All[['X','Y']].values b = np.array(Array_On)[:,2:].astype(int) v_mask = cdist(a,b) &lt; 35 DF_DS = DF_All.DS.values DS_sums = [((DF_DS==x)[:,None] &amp; v_mask).sum(0) for x in ['On','Off']] DF_KY = DF_All.KY.values KY_sums = [((DF_KY==x)[:,None] &amp; v_mask).sum(0) for x in ['ER','WD','DF']] out = np.column_stack(( np.column_stack(DS_sums), np.column_stack(KY_sums))) return out </code></pre> <p><strong>Runtime test -</strong></p> <p>Case #1 : Original sample size of <code>20</code> </p> <pre><code>In [417]: %timeit [get_values(DF_All,ZZ[2],ZZ[3]) for ZZ in Array_On] 100 loops, best of 3: 16.3 ms per loop In [418]: %timeit get_values_vectorized(DF_All, Array_On) 1000 loops, best of 3: 386 µs per loop </code></pre> <p>Case #2: Sample size of <code>2000</code> </p> <pre><code>In [420]: %timeit [get_values(DF_All,ZZ[2],ZZ[3]) for ZZ in Array_On] 1 loops, best of 3: 1.39 s per loop In [421]: %timeit get_values_vectorized(DF_All, Array_On) 100 loops, best of 3: 18 ms per loop </code></pre>
python|numpy
1
1,049
69,809,867
Custom Loss Function returning - InvalidArgumentError: The second input must be a scalar, but it has shape [64]
<p>I'm trying to use a modified version of <a href="https://stackoverflow.com/questions/69803718/keras-custom-loss-penalize-more-when-actual-and-prediction-are-on-opposite-sides/69807735#69807735">this custom loss</a> and I'm getting the error below</p> <pre><code>InvalidArgumentError: The second input must be a scalar, but it has shape [64] [[{{node gradient_tape/custom_loss/cond_1/StatelessIf/gradient_tape/custom_loss/weighted_loss/Mul/_30}}]] [Op:__inference_train_function_147002] Function call stack: train_function </code></pre> <p>This is the code</p> <pre><code>import time import numpy as np import tensorflow as tf from tensorflow.keras.losses import Loss from tensorflow.keras.models import Sequential, load_model from tensorflow.keras.layers import Dense, Dropout, LSTM, BatchNormalization, Flatten from tensorflow.compat.v1.keras.layers import CuDNNLSTM from tensorflow.keras.callbacks import TensorBoard, ModelCheckpoint def custom_loss(y_true, y_pred): mse = tf.keras.losses.MeanSquaredError() penalty = 10 # penalize the loss heavily if the actual and the prediction are on different sides of zero loss = tf.cond( tf.logical_or( (tf.logical_and(tf.greater(y_true, 0.0), tf.less(y_pred, 0.0))), (tf.logical_and(tf.less(y_true, 0.0), tf.greater(y_pred, 0.0))) ), lambda: mse(y_true, y_pred) * penalty, lambda: mse(y_true, y_pred) * penalty / 4) print(&quot;starting second condition&quot;) # add slightly more penalty if prediction overshoots actual in any direction loss = tf.cond( tf.logical_or( (tf.logical_and(tf.greater(y_true, 0.0), tf.greater(y_pred, y_true))), (tf.logical_and(tf.less(y_true, 0.0), tf.less(y_pred, y_true))) ), lambda: loss * penalty / 5, lambda: loss * penalty / 10) return loss EPOCHS = 25 BATCH_SIZE = 64 MODEL_NAME = f&quot;MODEL 01-{str(int(time.time())}&quot; model = Sequential() model.add(LSTM(128, input_shape=(train_x.shape[1:]), return_sequences=True)) model.add(Dropout(0.2)) model.add(BatchNormalization()) model.add(LSTM(128, input_shape=(train_x.shape[1:]), return_sequences=True)) model.add(Dropout(0.2)) model.add(BatchNormalization()) model.add(LSTM(128, input_shape=(train_x.shape[1:]))) model.add(Dropout(0.2)) model.add(BatchNormalization()) model.add(Flatten()) model.add(Dense(32, activation='relu')) model.add(Dropout(0.2)) model.add(BatchNormalization()) model.add(Dense(1)) opt = tf.keras.optimizers.Adam(learning_rate=1e-3, decay=1e-6) metric= tf.keras.metrics.MeanSquaredError() model.compile(loss=custom_loss, optimizer=opt, metrics=[metric]) val_metric = 'val_'+metric.name tensorboard = TensorBoard(log_dir=f'logs/{MODEL_NAME}') filepath = base_path+&quot;cryptodata/models/RNN_Final-{epoch:02d}-{val_mean_squared_error:.3f}-&quot;+str(int(time.time()))+&quot;.hd5&quot; checkpoint = ModelCheckpoint(filepath=filepath, monitor=val_metric, verbose=0, mode='max',metric=metric) train_x = np.random.randn(1588, 60, 34) train_y = np.random.rand(1588,) val_x = np.random.randn(85, 60, 34) val_y = np.random.randn(85,) history = model.fit(train_x, train_y, batch_size=BATCH_SIZE, epochs=100, validation_data=(val_x, val_y), callbacks=[checkpoint, tensorboard]) </code></pre> <p>I've tried casting the <code>y_true</code> and <code>y_pred</code> in the custom loss function like so <code>y_pred=tf.convert_to_tensor(y_pred); y_true = tf.cast(y_true, y_pred.dtype</code> but that didn't work. Also adding the print function showed that the function was called twice successfully but failed after that.</p> <p>I don't get the error when I use in-built loss functions.</p>
<p>The problem is that your <code>custom_loss</code> is returning a function rather than a scalar value. If you replace <code>tf.cond</code> with <code>tf.where</code> your code will work.</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import tensorflow as tf from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Dropout, LSTM, BatchNormalization, Flatten def custom_loss(y_true, y_pred): mse = tf.keras.losses.MeanSquaredError() penalty = 10 # penalize the loss heavily if the actual and the prediction are on different sides of zero loss = tf.where( condition=tf.logical_or((tf.logical_and(tf.greater(y_true, 0.0), tf.less(y_pred, 0.0))), (tf.logical_and(tf.less(y_true, 0.0), tf.greater(y_pred, 0.0)))), x=mse(y_true, y_pred) * penalty, y=mse(y_true, y_pred) * penalty / 4 ) # add slightly more penalty if prediction overshoots actual in any direction loss = tf.where( condition=tf.logical_or((tf.logical_and(tf.greater(y_true, 0.0), tf.greater(y_pred, y_true))), (tf.logical_and(tf.less(y_true, 0.0), tf.less(y_pred, y_true)))), x=loss * penalty / 5, y=loss * penalty / 10 ) return loss </code></pre> <pre><code>train_x = np.random.randn(1588, 60, 34) train_y = np.random.rand(1588, ) val_x = np.random.randn(85, 60, 34) val_y = np.random.randn(85, ) </code></pre> <pre><code>model = Sequential() model.add(LSTM(128, input_shape=(train_x.shape[1:]), return_sequences=True)) model.add(Dropout(0.2)) model.add(BatchNormalization()) model.add(LSTM(128, input_shape=(train_x.shape[1:]), return_sequences=True)) model.add(Dropout(0.2)) model.add(BatchNormalization()) model.add(LSTM(128, input_shape=(train_x.shape[1:]))) model.add(Dropout(0.2)) model.add(BatchNormalization()) model.add(Flatten()) model.add(Dense(32, activation='relu')) model.add(Dropout(0.2)) model.add(BatchNormalization()) model.add(Dense(1)) opt = tf.keras.optimizers.Adam(learning_rate=1e-3, decay=1e-6) model.compile(loss=custom_loss, optimizer=opt, metrics=['mse']) model.fit(train_x, train_y, batch_size=128, epochs=3, validation_data=(val_x, val_y)) </code></pre> <pre><code># Epoch 1/3 # 13/13 [==============================] - 8s 321ms/step - loss: 11.3129 - mse: 1.6341 - val_loss: 6.9313 - val_mse: 1.1116 # Epoch 2/3 # 13/13 [==============================] - 3s 234ms/step - loss: 7.3409 - mse: 1.0789 - val_loss: 7.2055 - val_mse: 1.1238 # Epoch 3/3 # 13/13 [==============================] - 3s 231ms/step - loss: 5.3962 - mse: 0.8513 - val_loss: 7.4492 - val_mse: 1.1512 model.predict(train_x) # array([[0.25150445], # [0.2647993 ], # [0.2405027 ], # ..., # [0.31251353], # [0.29376918], # [0.21620636]], dtype=float32) </code></pre>
python|tensorflow|keras|deep-learning|loss-function
2
1,050
69,886,845
How do 1x1 convolutions preserve learned features?
<p>Below, I use channels and feature maps interchangeably.</p> <p>I'm trying to better understand how 1x1 convolution works with multiple input channels and have yet to find a good explanation of this. Before getting into 1x1, I'd like to ensure my understanding of 2D vs 3D convolution. Let's look at a simplistic example of 2D convolution in Keras API:</p> <pre><code>i = Input(shape=(64,64,3)) x = Conv2D(filters=32,kernel_size=(3,3),padding='same',activation='relu') (i) </code></pre> <p>In the above example, the input image has 3 channels and the convolutional layer will produce 32 feature maps. Will the 2D convolutional layer apply 3 different kernels to each of the 3 input channels to generate each feature map? If so, this means the number of kernels used in each 2D convolutional operation = #input channels * #feature maps. In this case, 96 different kernels would be used to produce 32 feature maps.</p> <p>Now let's look at 3D convolution:</p> <pre><code>i = Input(shape=(1,64,64,3)) x = Conv3D(filters=32,kernel_size=(3,3,3),padding='same',activation='relu') (i) </code></pre> <p>In the above example, based on my current understanding, each kernel is convolved with all input channels simultaneously. Therefore, the # of kernels used in each 3D convolution operation = #input channels. In this case, 32 different kernels would be used to produce 32 feature maps.</p> <p>I understand the purpose of downsampling channels before computations with bigger kernels (3x3, 5x5, 7x7). I'm asking because I'm confused as to how 1x1 convolutions preserve learned features. Let's look at a 1x1 convolution:</p> <pre><code>i = Input(shape=(64,64,3)) x = Conv2D(filters=32,kernel_size=(3,3),padding='same',activation='relu') (i) x = Conv2D(filters=8,kernel_size=(1,1),padding='same',activation='relu') (x) </code></pre> <p>If my above understanding of 2D convolutions is correct, then the 1x1 convolutional layer will use 32 different kernels to generate each feature map. This operation would use a total of 256 kernels (32*8) to generate 8 feature maps. Each feature map computation essentially combines 32 pixels into one. How does this one pixel somehow retain all of the features from the previous 32 pixels?</p>
<p>A 1x1 convolution is a 2D convolution just with a &quot;kernel size&quot; of 1. Since there is no sense of spatial neighborhoods, like in a 3x3 kernel, how they are able to learn spatial features depends on the architecture.</p> <p>By the way, the difference in a 2D convolution and a 3D convolution is in the movement of the convolution. A 2D convolution correlates the filter along &quot;x and y&quot; and is learning (kernel x kernel x input_channel) parameters per output channel. A 3D convolution correlates along &quot;x, y, and z&quot; and is learning (kernel x kernel x kernel x input_channel) parameters per output channel. You could do a 3D convolution on an image with channels, but it doesn't really make sense because we already know the &quot;depth&quot; is correlated. 3D convolutions are generally used with geometric volumes, e.g. data from a CT scan.</p> <p>Maybe this link would be helpful <a href="https://medium.com/analytics-vidhya/talented-mr-1x1-comprehensive-look-at-1x1-convolution-in-deep-learning-f6b355825578" rel="nofollow noreferrer">https://medium.com/analytics-vidhya/talented-mr-1x1-comprehensive-look-at-1x1-convolution-in-deep-learning-f6b355825578</a></p>
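<p>To make the parameter counting above concrete, here is a small Keras sketch (the layer sizes are simply the ones from the question); the counts follow the (kernel x kernel x input_channels + 1) x output_channels rule, so the 1x1 layer learns one weight per input channel per output channel rather than anything spatial:</p>
<pre><code>import tensorflow as tf
from tensorflow.keras import layers

i = tf.keras.Input(shape=(64, 64, 3))
x = layers.Conv2D(32, (3, 3), padding='same', activation='relu')(i)  # (3*3*3 + 1)*32 = 896 params
y = layers.Conv2D(8, (1, 1), padding='same', activation='relu')(x)   # (1*1*32 + 1)*8 = 264 params

tf.keras.Model(i, y).summary()  # summary() reports the same 896 and 264 parameter counts
</code></pre>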
python|tensorflow|keras|deep-learning|conv-neural-network
0
1,051
69,797,187
Keras input Pandas dataframe
<p>I'm new to Keras and I want to fit my train data in an Excel file. My data has shape(1000, 5, 5), 1000 batches of data which are saved in 1000 spreadsheets, each sheet contain 5 columns and rows:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: center;">A</th> <th style="text-align: center;">B</th> <th style="text-align: center;">C</th> <th style="text-align: center;">D</th> <th style="text-align: center;">E</th> </tr> </thead> <tbody> <tr> <td style="text-align: center;">-</td> <td style="text-align: center;">-</td> <td style="text-align: center;">-</td> <td style="text-align: center;">-</td> <td style="text-align: center;">label</td> </tr> <tr> <td style="text-align: center;">-</td> <td style="text-align: center;">-</td> <td style="text-align: center;">-</td> <td style="text-align: center;">-</td> <td style="text-align: center;">label</td> </tr> <tr> <td style="text-align: center;">-</td> <td style="text-align: center;">-</td> <td style="text-align: center;">-</td> <td style="text-align: center;">-</td> <td style="text-align: center;">label</td> </tr> <tr> <td style="text-align: center;">-</td> <td style="text-align: center;">-</td> <td style="text-align: center;">-</td> <td style="text-align: center;">-</td> <td style="text-align: center;">label</td> </tr> <tr> <td style="text-align: center;">-</td> <td style="text-align: center;">-</td> <td style="text-align: center;">-</td> <td style="text-align: center;">-</td> <td style="text-align: center;">label</td> </tr> </tbody> </table> </div> <p>I want Column A, B, C to be training features and Column E to be label.</p> <pre><code>import pandas as pd import tensorflow as tf import multiprocessing df = pd.read_excel('File.xlsx', sheet_name=None) data_list = list(df.values()) def input_parser(x): Y = x.pop('E') features = ['A','B','C'] X = x[features] return X, Y dataset = tf.data.Dataset.from_tensor_slices(data_list) dataset = dataset.map(lambda x: tuple(tf.py_function(func=input_parser, inp=[x], Tout=[tf.float32,tf.int64])), num_parallel_calls=multiprocessing.cpu_count()) </code></pre> <p>and then I got an error:</p> <pre><code>ValueError: Can't convert non-rectangular Python sequence to Tensor. </code></pre> <p>Why do I get this error? How can I fit this data to my model?</p>
<p>Maybe try omitting your <code>map</code> function altogether and simply passing your data directly to <code>tf.data.Dataset.from_tensor_slices</code>:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd import tensorflow as tf import numpy as np spread_sheet1 = {'A': [1, 2, 1, 2, 9], 'B': [3, 4, 6, 1, 4], 'C': [3, 4, 3, 1, 4], 'D': [1, 2, 6, 1, 4], 'E': [0, 1, 1, 0, 1]} df1 = pd.DataFrame(data=spread_sheet1) spread_sheet2 = {'A': [1, 2, 1, 2, 4], 'B': [3, 5, 2, 1, 4], 'C': [9, 4, 1, 1, 4], 'D': [1, 5, 6, 1, 7], 'E': [1, 1, 1, 0, 1]} df2 = pd.DataFrame(data=spread_sheet2) features = ['A','B','C'] Y = np.stack([df1['E'].to_numpy(), df2['E'].to_numpy()]) Y = tf.convert_to_tensor(Y, dtype=tf.int32) X = np.stack([df1[features].to_numpy(), df2[features].to_numpy()]) X = tf.convert_to_tensor(X, dtype=tf.float32) dataset = tf.data.Dataset.from_tensor_slices((X, Y)) print('Shape of X --&gt; ', X.shape) for x, y in dataset: print(x, y) </code></pre> <pre><code>Shape of X --&gt; (2, 5, 3) tf.Tensor( [[1. 3. 3.] [2. 4. 4.] [1. 6. 3.] [2. 1. 1.] [9. 4. 4.]], shape=(5, 3), dtype=float32) tf.Tensor([0 1 1 0 1], shape=(5,), dtype=int32) tf.Tensor( [[1. 3. 9.] [2. 5. 4.] [1. 2. 1.] [2. 1. 1.] [4. 4. 4.]], shape=(5, 3), dtype=float32) tf.Tensor([1 1 1 0 1], shape=(5,), dtype=int32) </code></pre> <p>Reading from an excel file <code>file.xlsx</code> with multiple sheets can be done like this:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd import tensorflow as tf import multiprocessing df = pd.read_excel('file.xlsx', sheet_name=None) file_names = list(df.keys()) columns = ['A','B','C'] features = [] labels = [] for n in file_names: temp_df = df[n] features.append(temp_df[columns].to_numpy()) labels.append(temp_df['E'].to_numpy()) Y = tf.convert_to_tensor(np.stack(labels), dtype=tf.int32) X = tf.convert_to_tensor(np.stack(features), dtype=tf.float32) dataset = tf.data.Dataset.from_tensor_slices((X, Y)) print('Shape of X --&gt; ', X.shape) for x, y in dataset: print(x, y) </code></pre>
python|pandas|dataframe|tensorflow|keras
1
1,052
69,920,013
How to combine two dataframes on a column, with missing rows in one of them?
<p>I am trying to combine dataframe_A:</p> <pre><code>file 1 file 2 file 3 file 4 file 5 </code></pre> <p>with dataframe_B:</p> <pre><code>file 2 | some data | more data file 4 | other data | additional data file 5 | data | data data </code></pre> <p>along the file_name column, to end up with something like this:</p> <pre><code>file 1 | ~ | ~ file 2 | some data | more data file 3 | ~ | ~ file 4 | other data | additional data file 5 | data | data data </code></pre> <p>I want to end up with a dataframe the length of dataframe_A, with all the data from dataframe_B, and with blank/whatever in the intervening spaces.</p> <p>the joins and merges I have tried so far just end up with something that looks like dataframe_B, which is not what I want. What do I need to do?</p>
<p>Use <code>merge</code> with <code>how='left'</code> parameter:</p> <pre><code>&gt;&gt;&gt; dfA.merge(dfB, on='A', how='left').fillna('~') A B C 0 file 1 ~ ~ 1 file 2 some data more data 2 file 3 ~ ~ 3 file 4 other data additional data 4 file 5 data data data </code></pre> <p>I recommend reading our extended introduction: <a href="https://stackoverflow.com/q/53645882/15239951">Pandas Merging 101</a></p> <p>Setup:</p> <pre><code>dfA = pd.DataFrame({'A': ['file 1', 'file 2', 'file 3', 'file 4', 'file 5']}) dfB = pd.DataFrame({'A': ['file 2', 'file 4', 'file 5'], 'B': ['some data', 'other data', 'data'], 'C': ['more data', 'additional data', 'data data']}) </code></pre>
python|pandas|dataframe
2
1,053
69,974,884
Problem when modifying a copy of slice from DataFrame. (feat. numpy.where)
<p>Why does warning occur when modifying a copy of <code>Pandas.dataframe</code>? And why doesn't warning occur when modifying using <code>numpy.where</code>? (df = DataFrame Object)</p> <p>Warning Code</p> <pre><code>[186] df = input_df.copy() [187] df['trade_status'][df['trade_status'] == 'DONE'] = 'FILLED' --------------------------------------------------------------------------- C:\Practice\Report\src\service\ReportService.py:187: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame </code></pre> <p>No Warning Code</p> <pre><code>[186] df = input_df.copy() [187] df['trade_status'] = np.where(df['trade_status'] == 'DONE', 'FILLED', df['trade_status']) --------------------------------------------------------------------------- Clear </code></pre>
<p>The result of <code>df = input_df.copy()</code> is indeed a new DataFrame. On this point you are right.</p>
<p>But you don't operate on it directly. <code>df['trade_status'][df['trade_status'] == 'DONE']</code> is <em>chained indexing</em>: it first selects the column and then filters it, producing an intermediate object that may be either a view or a copy of <em>df</em>.</p>
<p>So when you attempt to assign a new value through it (<code>… = 'FILLED'</code>), pandas cannot guarantee that the assignment reaches the original DataFrame, and the <code>SettingWithCopyWarning</code> is raised. The <code>np.where</code> version assigns the whole column back in a single step, so there is no chained indexing and no warning.</p>
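<p>For completeness, a minimal sketch (reusing the <code>df</code>/<code>input_df</code> names from the question) of the single-step <code>.loc</code> assignment that avoids chained indexing altogether:</p>
<pre><code>df = input_df.copy()
# a single indexing operation, so pandas knows exactly which object is modified and no warning is raised
df.loc[df['trade_status'] == 'DONE', 'trade_status'] = 'FILLED'
</code></pre>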
python|dataframe|numpy
1
1,054
72,148,923
how to parse numpy array by line
<p>Use cv2 to process PNG image, I want some areas to be transparent. change point [0, 0, 0, 255] to [0, 0, 0, 0].</p> <p>for example,</p> <pre class="lang-py prettyprint-override"><code># a is ndarray(880, 1330, 4) a = [[[100, 90, 80, 255], [80, 10, 10, 255],], ..., [[0, 0, 0, 255], [0, 0, 0, 255], ..., ]] # i want b = [[100, 90, 80, 255], [80, 10, 10, 255],], ..., [[0, 0, 0, 0], [0, 0, 0, 0], ..., ]] </code></pre> <p>thanks.</p>
<p>You need to create a mask.</p> <p>Here is a simple example:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np # Create some data a = (np.random.rand(10, 10, 4)*255).astype(int) a[ :5, :5, :] = 0 a[:, :, 3] = 255 b = a.copy() </code></pre> <p>Now create a mask:</p> <pre class="lang-py prettyprint-override"><code>mask = (a[:,:,0] == 0) &amp; (a[:,:,1] == 0) &amp; (a[:,:,2] == 0) print(mask*1) array([[1, 1, 1, 1, 1, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]) </code></pre> <p>Now set the requisite values to 0</p> <pre class="lang-py prettyprint-override"><code>b[mask, 3] = 0 print(b) array([[[ 0, 0, 0, 0], [ 0, 0, 0, 0], [ 0, 0, 0, 0], [ 0, 0, 0, 0], [ 0, 0, 0, 0], [204, 156, 208, 255], [ 59, 220, 240, 255], [217, 175, 19, 255], &lt;. other rows..&gt; [235, 127, 178, 255], [168, 29, 119, 255], [ 25, 228, 112, 255], [110, 237, 39, 255], [164, 23, 191, 255], [169, 232, 5, 255], [164, 59, 206, 255], [ 52, 65, 60, 255]]]) ​ </code></pre>
python|numpy
1
1,055
72,230,762
How to completely reorganise a table using aggregate data from qualitative information
<p>I have a pandas dataframe which has the following layout:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Column</th> <th>data type</th> </tr> </thead> <tbody> <tr> <td>'Water-Binder'</td> <td>float</td> </tr> <tr> <td>'Fly Ash'</td> <td>float</td> </tr> <tr> <td>'Age'</td> <td>int</td> </tr> <tr> <td>'Strength %'</td> <td>float</td> </tr> </tbody> </table> </div> <p><a href="https://i.stack.imgur.com/hUl34.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hUl34.png" alt="Sample of the dataset" /></a></p> <p>The age column is qualitative, features records at 1, 3, 7, 14, and 28 days. I want to group the rows by water-binder and fly ash, calculate the mean Strength % for the groups at each age, resulting in a table looking something like this:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Column</th> <th>data type</th> </tr> </thead> <tbody> <tr> <td>'Water-Binder'</td> <td>float</td> </tr> <tr> <td>'Fly Ash'</td> <td>float</td> </tr> <tr> <td>'Mean Strength % 1 day'</td> <td>float</td> </tr> <tr> <td>'Mean Strength % 3 days'</td> <td>float</td> </tr> <tr> <td>'Mean Strength % 7 days'</td> <td>float</td> </tr> <tr> <td>'Mean Strength % 14 days'</td> <td>float</td> </tr> <tr> <td>'Mean Strength % 28 days'</td> <td>float</td> </tr> </tbody> </table> </div> <p>I've been trying to figure out how this could be done. This is the closest thing to something that works that I've managed to achieve:</p> <pre><code>age_strength_model = data[['Water-Binder', 'Fly Ash', 'Age', 'Strength %']].copy() ages = np.unique(age_strength_model['Age'].values) # create table investigating the relationship between fly ash, age, and compressive strength for a in ages: age_strength_model.query(f'`Fly Ash` == 0 &amp; Age == {a}').groupby(['Water-Binder'])['Strength %'].transform(lambda x: x.mean()) </code></pre> <p>However, that just shows me the values rather than organising them into a dataset and doesn't accommodate for grouping water-binder and fly ash together. How would I achieve the desired end result here?</p>
<p>Try to concatenate the 'Water-Binder', 'Fly Ash', 'Age' fields and then group:</p>
<pre><code>data = [
    [0.43, 0.0, 3, 26.446759],
    [0.43, 0.0, 7, 44.444444],
    [0.43, 0.0, 28, 100.00000],
    [0.43, 0.0, 3, 11.316173],
    [0.43, 0.0, 7, 37.493929]
]

df = pd.DataFrame(data, columns=['Water-Binder', 'Fly Ash', 'Age', 'Strength %'])
df['Water-Binder-Fly-Ash-Age'] = (df['Water-Binder'].astype(str) + '%'
                                  + df['Fly Ash'].astype(str) + '%'
                                  + df['Age'].astype(str))
df

   Water-Binder  Fly Ash  Age  Strength %  Water-Binder-Fly-Ash-Age
0          0.43      0.0    3   26.446759                0.43%0.0%3
1          0.43      0.0    7   44.444444                0.43%0.0%7
2          0.43      0.0   28  100.000000               0.43%0.0%28
3          0.43      0.0    3   11.316173                0.43%0.0%3
4          0.43      0.0    7   37.493929                0.43%0.0%7

# Creates a grouped df with the indices reset and 'Strength %' field renamed.
df_grouped = df.groupby(by='Water-Binder-Fly-Ash-Age').mean()['Strength %'].reset_index()
df_grouped.rename(columns={'Strength %': 'Mean Strength %'}, inplace=True)
df_grouped

  Water-Binder-Fly-Ash-Age  Mean Strength %
0              0.43%0.0%28       100.000000
1               0.43%0.0%3        18.881466
2               0.43%0.0%7        40.969186
</code></pre> <p>If you want to recover the 'Water-Binder', 'Fly Ash', 'Age' fields, just split the key of <code>df_grouped</code> (not of <code>df</code>, which still has five rows) on the '%' delimiter and 'expand' it:</p> <pre><code>df_grouped[['Water-Binder', 'Fly Ash', 'Age']] = df_grouped['Water-Binder-Fly-Ash-Age'].str.split('%', expand=True)
df_grouped

  Water-Binder-Fly-Ash-Age  Mean Strength %  Water-Binder  Fly Ash  Age
0              0.43%0.0%28       100.000000          0.43      0.0   28
1               0.43%0.0%3        18.881466          0.43      0.0    3
2               0.43%0.0%7        40.969186          0.43      0.0    7
</code></pre>
python|pandas|dataframe|aggregate
0
1,056
72,473,268
Is there a way to resample dataframe and apply customised function but with different frequence?
<p>I've defined a customised function using an array as argument. I've a DataFrame where the indexes are minutely timestamps. They look like:</p> <pre><code>2022-05-12 00:01:03 2022-05-12 00:03:17 2022-05-12 00:06:10 </code></pre> <p>What I want to do is resampling the data so I have a dataframe where the indexes are:</p> <pre><code>2022-05-12 00:02:00 2022-05-12 00:03:00 2022-05-12 00:04:00 2022-05-12 00:05:00 2022-05-12 00:06:00 2022-05-12 00:07:00 </code></pre> <p>I know I could use</p> <pre><code>df.resample('1min', label='right', closed='right').apply(lambda x: custom_f(x)) </code></pre> <p>The problem is my customised function would be applied on the last minute and I want it to be applied on the last 10 minutes for example. So the result at the row</p> <pre><code>2022-05-12 00:07:00 </code></pre> <p>would be computed on all the rows from the timestamp</p> <pre><code>2022-05-11 23:57:00 </code></pre> <p>What is the best way to do that ? I can do a for loop with 10 iterations and change the origin of the resample each time but creating and concatenating all the dataframes would be a mess. Any idea ?</p> <p>Thanks</p>
<p>You could use Pandas <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.rolling.html" rel="nofollow noreferrer">rolling</a> function. Since this function also accepts a time period as the moving window size, you could set the window parameter to the desired <code>10min</code> span.</p> <pre class="lang-py prettyprint-override"><code>def custom_f(x): return sum(x*2) df = df.resample('1min', label='right', closed='right').interpolate() df = df.rolling('10min').apply(lambda x: custom_f(x)) </code></pre>
python|pandas
0
1,057
50,532,497
Issue with columns in csv using pandas groupby
<p>I have these below columns in my csv . Usually all these columns have value like below and the code works smoothly .</p> <pre><code>dec list_namme list device Service Gate 12 food cookie 200.56.57.58 Shop 123 </code></pre> <p>Now I encountered issue, I got one csv file that has all these columns but there is no content for them. Here it looks like..</p> <pre><code>dec list_namme list device Service Gate </code></pre> <p>and once the code runs over it , it creates new csv with below columns that was not expected. I got new columns name as <strong>index</strong> and also , instead of 3(<code>device service Gate</code>) columns I am getting wrong 2. </p> <pre><code>index Gate </code></pre> <p>For the csv having contents I didnot faced any issue , even the columns are coming correctly.</p> <p>Below is the code. The code is :</p> <pre><code>if os.path.isfile(client_csv_file): df=pd.read_csv(csv_file) #Read CSV df['Gate']=df.Gate.astype(str) df = df.groupby(['device', 'Service'])['Gate'].apply(lambda x: ', '.join(set(x))).reset_index() df.to_csv(client_out_file, index=False) </code></pre> <p>Please help me in this code to fix this.</p>
<p>Performing a <code>groupby</code> on an empty dataframe is resulting in a dataframe without groupby-key columns.</p> <p>One solution is to test if your dataframe is empty before performing manipulations:</p> <pre><code>if os.path.isfile(client_csv_file): df = pd.read_csv(csv_file) if df.empty: df = df[['device', 'Service', 'Gate']] else: df['Gate'] = df.Gate.astype(str) df = df.groupby(['device', 'Service'])['Gate']\ .apply(lambda x: ', '.join(set(x))).reset_index() df.to_csv(client_out_file, index=False) </code></pre>
python|python-3.x|pandas|csv|pandas-groupby
1
1,058
50,270,283
RNN with many-to-one setup - which output to use
<p>I going through a series of machine learning examples that use RNNs for document classification (many-to-one). In most tutorials, the RNN output of the last time step is used, i.e., fed into one or more dense layers to map it to the number of classes (e.g., <a href="https://discuss.pytorch.org/t/example-of-many-to-one-lstm/1728/2" rel="nofollow noreferrer">[1]</a>, <a href="https://discuss.pytorch.org/t/lstm-for-many-to-one-multiclass-classification-problem/14268" rel="nofollow noreferrer">[2]</a>).</p> <p>However, I also came across some examples where, instead of the last output, the average of the outputs over all time steps is used (mean pooling?, e.g., <a href="https://stackoverflow.com/questions/35355528/how-to-implement-a-mean-pooling-layer-in-keras">[3]</a>). The dimensions of this averaged output are of course the same as for the last output. So computationally, both approaches just work the same.</p> <p>My questions is now, what is the intuition between the two different approaches. Due to recursive nature, the last output also reflects the output of the previous time steps. So why the idea of averaging the RNN outputs over all time steps. When to use what?</p>
<p><em>Pooling over time</em> is a specific technique that is used to extract the features from the input sequence. From <a href="https://stackoverflow.com/q/48549670/712995">this question</a>:</p> <blockquote> <p>The reason to do this, instead of "down-sampling" the sentence like in a CNN, is that in NLP the sentences naturally have different length in a corpus. This makes the feature maps different for different sentences, but we'd like to reduce the tensor to a fixed size to apply softmax or regression head in the end. As stated in the paper, it allows to capture the most important feature, one with the highest value for each feature map.</p> </blockquote> <p>It's important to note here that max-over-time (or average-over-time) is usually an <em>intermediate</em> layer. In particular, there can be several of them in a row or in parallel (with different window size). The end result produced by the network can still be either many-to-one or many-to-many (at least in theory).</p> <p>However, in most of the cases, there is a <em>single</em> output from the RNN. If the output must be a sequence, this output is usually fed into another RNN. So it all boils down to how exactly this single value is learned: take the last cell output or aggregate across the whole sequence or apply attention mechanism, etc.</p>
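<p>For concreteness, here is a minimal PyTorch sketch (hypothetical sizes) of the two many-to-one read-outs discussed above; with padded batches the mean should of course be masked, which is omitted here:</p>
<pre><code>import torch
import torch.nn as nn

rnn = nn.LSTM(input_size=128, hidden_size=64, batch_first=True)
head = nn.Linear(64, 5)                      # 5 classes, purely illustrative

x = torch.randn(8, 20, 128)                  # (batch, time steps, features)
outputs, _ = rnn(x)                          # (8, 20, 64): one output per time step

last = outputs[:, -1, :]                     # option 1: last time step only
pooled = outputs.mean(dim=1)                 # option 2: mean pooling over time

logits_last = head(last)                     # (8, 5)
logits_pooled = head(pooled)                 # (8, 5) -- the same head works for either read-out
</code></pre>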
python|machine-learning|recurrent-neural-network|pytorch|rnn
1
1,059
45,705,002
numpy.array with and without specifying dtype behaves strange
<p>I am completely puzzled by this.</p> <p>From the following</p> <pre><code>import numpy as np a = np.array([4, -9]) a[0] = 0.4 a </code></pre> <p>I <strong>expected output:</strong> <code>array([ 0.4, -9])</code>. But it gives me</p> <p><code>array([ 0, -9])</code>.</p> <p>But when I changed the <code>dtype</code> to <code>f</code> </p> <pre><code>a = np.array([4, -9], 'f') a[0] = 0.4 a </code></pre> <p>It gives me the expected out put of <code>array([ 0.40000001, -9. ], dtype=float32)</code></p> <p>The documentation for <code>numpy.array(object, dtype=None, copy=True, order='K', subok=False, ndmin=0)</code> says:</p> <blockquote> <p>dtype : data-type, optional The desired data-type for the array. If not given, then the type will be determined as the minimum type required to hold the objects in the sequence. This argument can only be used to ‘upcast’ the array. For downcasting, use the .astype(t) method.</p> </blockquote> <p>When I initialized the array it initialized the values to <code>integers</code> and so when I indexed the array with a <code>float</code> it only recognized the <code>integer</code> part of <code>0.4</code> and hence gave me <code>0</code>. This is how I understand it. Is this correct?. But I am still surprised by this behavior.</p> <p><strong>Question</strong>: What exactly is going on here?</p>
<p>The problem is that your array is of <code>dtype=np.int64</code>:</p> <pre><code>In [141]: a = np.array([4, -9]) In [142]: a.dtype Out[142]: dtype('int64') </code></pre> <p>This means that you can only store integers, and any floats are truncated before assignment is done. If you want to store floats and ints together, you should specify <code>dtype=object</code> first:</p> <pre><code>In [143]: a = np.array([4, -9], dtype=object) In [144]: a[0] = 0.4 In [145]: a Out[145]: array([0.4, -9], dtype=object) </code></pre> <p>As for the issue with <code>array([ 0.40000001, -9. ]</code>, <code>0.4</code>, as a floating point number does not have an exact representation in memory (only an approximate one), which accounts for the imprecision you see.</p>
python|arrays|numpy
3
1,060
45,285,663
Is it possible to join all the same terms into the same pandas dataframe column?
<p>I have the following large pandas dataframe, which is composed of several terms:</p> <pre><code>type name exp ------------------- feline tiger True feline cat False rodent rabbit True canine dog False feline puma True feline bobcat False </code></pre> <p>Is it possible to join all the terms in the <code>name</code> column that have the same type in the <code>type</code> column into the same cell?. For example:</p> <pre><code>type name exp ---------------------------------- feline tiger cat puma bobcat True rodent rabbit True canine dog False </code></pre>
<p>Here's one way.</p> <pre><code>In [797]: df.groupby('type', as_index=False).agg({'name': ' '.join, 'exp': 'max'}) Out[797]: type name exp 0 canine dog False 1 feline tiger cat puma bobcat True 2 rodent rabbit True </code></pre>
python|python-3.x|pandas|data-structures
2
1,061
45,288,297
Meaning and dimensions of tf.contrib.learn.DNNClassifier's extracted weights and biases
<p>I relatively new to tensorflow, but even with a lot of research I was unable to find a documentation of certain variable meanings.</p> <p>For my current project, I want to train a DNN with the help of tensorflow, and afterwards I want to extract the weight and bias matrices from it to use it in another application OUTSIDE tensorflow. For the first try, I set up a simple network with a [4, 10, 2] structure, which predicts a binary outcome.</p> <p>I used 3 real_valued_columns and a single sparse_column_with_keys (wrapped in an embedding_column) as features:</p> <pre><code>def build_estimator(optimizer=None, activation_fn=tf.sigmoid): """Build an estimator""" # Sparse base columns column_stay_point = tf.contrib.layers.sparse_column_with_keys( column_name='stay_point', keys=['no', 'yes']) # Continuous base columns column_heading = tf.contrib.layers.real_valued_column('heading') column_velocity = tf.contrib.layers.real_valued_column('velocity') column_acceleration = tf.contrib.layers.real_valued_column('acceleration') pedestrian_feature_columns = [column_heading, column_velocity, column_acceleration, tf.contrib.layers.embedding_column( column_stay_point, dimension=8, initializer=tf.truncated_normal_initializer)] # Create classifier estimator = tf.contrib.learn.DNNClassifier( hidden_units=[10], feature_columns=pedestrian_feature_columns, model_dir='./tmp/pedestrian_model', n_classes=2, optimizer=optimizer, activation_fn=activation_fn) return estimator </code></pre> <p>I called this function with default arguments and used estimator.fit(...) to train the DNN. Aside from some warnings concerning the deprecated 'scalar_summary' function, it ran successfully and produced reasonable results. I printed all variables of the model by using the following line:</p> <pre><code>var = {k: estimator.get_variable_value(k) for k in estimator.get_variable_names()) </code></pre> <p>I expected to get a weight matrices of size 10x4 and 2x10 as well as bias matrices of size 10x1 and 2x1. But I got the following:</p> <pre><code>'dnn/binary_logistic_head/dnn/learning_rate': 0.05 (actual value, scalar) 'dnn/input_from_feature_columns/stay_point_embedding/weights': 2x8 array 'dnn/hiddenlayer_0/weights/hiddenlayer_0/weights/part_0/Adagrad': 11x10 array 'dnn/input_from_feature_columns/stay_point_embedding/weights/int_embedding/weights/part_0/Adagrad': 2x8 array 'dnn/hiddenlayer_0/weights': 11x10 array 'dnn/logits/biases': 1x1' array 'dnn/logits/weights/nn/dnn/logits/weights/part_0/Adagrad': 10x1 array 'dnn/logits/weights': 10x1 array 'dnn/logits/biases/dnn/dnn/logits/biases/part_0/Adagrad': 1x1 array 'global_step': 5800, (actual value, scalar) 'dnn/hiddenlayer_0/biases': 1x10 array 'dnn/hiddenlayer_0/biases//hiddenlayer_0/biases/part_0/Adagrad': 1x10 array </code></pre> <p>Is there any documentation what these cryptic names mean and why do the matrices have these weird dimensions? Also, why are there references to the Adagrad optimizer despite never specifying it?</p> <p>Any help is highly appreciated!</p>
<p>The number of input nodes in your network is 11, not 4: 8 (embedding_column) + column_heading (1) + column_velocity (1) + column_acceleration (1) = 11.</p>
<p>And based on the variable names the output is a binary logistic node, so the number of output nodes is only one, not 2.</p>
<p>Below are the weights/biases you are interested in.</p>
<p><code>dnn/hiddenlayer_0/weights</code>: 11x10 array --&gt; the weights from the inputs to the hidden nodes</p>
<p><code>dnn/hiddenlayer_0/biases</code>: 1x10 array --&gt; biases of the hidden nodes</p>
<p><code>dnn/logits/weights</code>: 10x1 array --&gt; weights from the hidden nodes to the output node</p>
<p><code>dnn/logits/biases</code>: 1x1 array --&gt; bias of the output node.</p>
<p><strong>Why are there references to the Adagrad optimizer despite never specifying it?</strong><br> Adagrad is the default optimizer of <code>DNNClassifier</code> when none is passed, which is why its accumulator variables (the names ending in <code>/Adagrad</code>) show up alongside the weights.</p>
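<p>If the goal is to reuse these matrices outside TensorFlow, a rough sketch of the forward pass they imply (using the <code>var</code> dict from the question) looks like this; note that the exact ordering of the 11 input values and the 8 embedding values for <code>stay_point</code> (a lookup into the 2x8 <code>stay_point_embedding/weights</code> matrix) depends on how <code>input_from_feature_columns</code> concatenates the columns, so treat this strictly as an assumption to verify against the graph:</p>
<pre><code>import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W0 = var['dnn/hiddenlayer_0/weights']           # (11, 10)
b0 = np.ravel(var['dnn/hiddenlayer_0/biases'])  # (10,)
W1 = var['dnn/logits/weights']                  # (10, 1)
b1 = np.ravel(var['dnn/logits/biases'])         # (1,)

x = np.random.rand(1, 11)          # stand-in input: 3 continuous values + 8 embedding values
hidden = sigmoid(x @ W0 + b0)      # activation_fn=tf.sigmoid, as in the question
logit = hidden @ W1 + b1           # linear logits layer
prob_class_1 = sigmoid(logit)      # binary logistic head
</code></pre>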
tensorflow
1
1,062
62,710,706
Drop duplicate rows conditionally - Pecking order
<p>I have a df as below</p> <pre><code>A B_x B_y C 1 USD GBP, USD, EUR V1 1 USD V2 2 GBP GBP, USD, EUR V1 3 JPY GBP, USD, EUR V1 3 JPY V4 4 SEK GBP, USD, EUR v5 </code></pre> <p>The idea is if B_y contains B_x then the value of C must be chosen from that row else C must be chosen from the row wherever B_y is blank (Blank is a wildcard). I should end up with a unique row for every A (wherever a match can be found).</p> <p>For the above the result df should be</p> <pre><code>A B_x B_y C 1 USD GBP,USD,EUR V1 2 GBP GBP,USD,EUR V1 3 JPY v4 </code></pre> <p>For A = 4 no valid matches are found hence no entries in the output.</p> <p>My approach: I have tried the following.</p> <pre><code>df[~df.duplicated(['A', keep=False]) | df.apply(lambda x: x.B_x in x.B_y, axis=0) | df.apply(lambda x: x.B_y='', axis=0)] </code></pre> <p>Expectedly this matches rows which has both B_y with values 'GBP, USD, EUR' and row with the blank (wildcard).</p> <p>Thanks</p>
<p>You can use list comprehension to create the mask by <code>zip</code>:</p> <pre><code>mask = [x in y or not y for x, y in zip(df[&quot;B_x&quot;], df[&quot;B_y&quot;])] print (df.loc[mask].drop_duplicates(&quot;A&quot;, keep=&quot;first&quot;)) A B_x B_y C 0 1 USD GBP, USD, EUR V1 2 2 GBP GBP, USD, EUR V1 4 3 JPY V4 </code></pre>
pandas
2
1,063
62,787,638
Handling multiple arrays
<p>I have 6 arrays named , x1,......x6 that i read from 'npz' file. I need to perform some mathematical job on each array and stored that into 10 new arrays. I am doing it step by step in a very simple way. To read the file and store variables,</p> <pre><code>files = np.load(&quot;particle.npz&quot;) x1 = files['x1'] x2 = files ['x2'] x3 = files['x3'] x4 = files ['x4'] x5 = files['x5'] x6 = files ['x6'] </code></pre> <p>create another array from previous one,</p> <pre><code>pox1= x1[:,0] pox2= x2[:,0] pox3= x3[:,0] pox4= x4[:,0] pox5= x5[:,0] pox6= x6[:,0] </code></pre> <p>Then create some new arrays,</p> <pre><code>sq_diff_x1 = np.zeros(40002) sq_diff_x2 = np.zeros(40002) sq_diff_x3 = np.zeros(40002) sq_diff_x4 = np.zeros(40002) sq_diff_x5 = np.zeros(40002) sq_diff_x6 = np.zeros(40002) </code></pre> <p>And lastly perform calculation using for loop and store into new arrays,</p> <pre><code>for i in range (len(x1)-1): sq_diff_x1[i] = (pox1[i]-pox1[0])**2 sq_diff_x2[i] = (pox1[i]-pox1[0])**2 sq_diff_x3[i] = (pox1[i]-pox1[0])**2 sq_diff_x4[i] = (pox1[i]-pox1[0])**2 sq_diff_x5[i] = (pox1[i]-pox1[0])**2 sq_diff_x6[i] = (pox1[i]-pox1[0])**2 </code></pre> <p>The code is working fine but is there any other way where I can do it automatically by not assigning everything one by one? Because using my method is simple but will be very time consuming when I need work with 100 arrays.So something automated things are required.</p>
<pre><code>files = np.load(&quot;particle.npz&quot;) x_s = [files[key] for key in files.keys()] </code></pre> <p>Creating a list of arrays rather than individually named ones is the preferred method in Python.</p> <pre><code>pox_s = [x[:,0] for x in x_s] </code></pre> <p>Looks like the arrays are all the same size. So we can turn the list into an array:</p> <pre><code>pox_s = np.array(pox_s) </code></pre> <p>Or even</p> <pre><code>x_s = np.array(x_s) # (6, 40002, ?) pox_s = x_s[:,:,0] # (6, 40002) sq_diffs = (pox_s - pox_s[:,[0]])**2 # (6, 40002)-(6,1) </code></pre> <p>Without a small concrete example, I can't test this code. I think I've got the shapes right.</p>
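<p>Since the answer could not be tested against the real file, here is a quick synthetic check of the shapes it claims; the number of columns per array (3 here) is an assumption, because the question never states it:</p>
<pre><code>import numpy as np

# stand-in for the npz contents: six equally shaped arrays
files = {'x%d' % i: np.random.rand(40002, 3) for i in range(1, 7)}

x_s = np.array([files[key] for key in files])   # (6, 40002, 3)
pox_s = x_s[:, :, 0]                            # (6, 40002)
sq_diffs = (pox_s - pox_s[:, [0]])**2           # (6, 40002) - (6, 1) broadcasts row-wise

print(x_s.shape, pox_s.shape, sq_diffs.shape)
</code></pre>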
python|numpy
0
1,064
62,641,466
Show a Camera Activity which runs tflite on image frames in a flutter app
<p>I have an android app having a <strong>CameraActivity</strong> which runs a <strong>tflite classifier</strong> periodically on image frames from the preview stream. The implementation of the Camera and tflite works great in the Android part and gives a good FPS.</p> <p>I want to show this <strong>CameraActivity</strong> in my Flutter App as a Screen. The Flutter app has all the Frontend and UI part implemented already.</p> <p>I've already tried using the official Flutter <a href="https://pub.dev/packages/camera" rel="nofollow noreferrer">Camera</a> plugin to implement the same by using <strong>camera.startImageStream</strong> but was unable to get matching FPS and the Camera Preview lags when calling the tflite model asynchronously using <strong>methodChannel</strong>.</p> <p>I also came across <a href="https://api.flutter.dev/flutter/widgets/AndroidView-class.html" rel="nofollow noreferrer">AndroidView</a> which embeds an Android view in the Widget hierarchy but the docs say it is an expensive operation and should be avoided when a Flutter equivalent is possible.</p> <p>Is there a way to write a plugin for showing the <strong>CameraActivity</strong> (i.e. the UI) on the Flutter end similar to the way methodChannel is used for exchanging data between the Flutter and the Native codes. Or if there's another possible way of achieving this, please let me know.</p> <p>Thanks in advance!</p>
<pre><code>//Flutter side Future&lt;Null&gt; showNativeView() async { if (Platform.isAndroid) {//check if platform is android var methodChannel = MethodChannel(&quot;methodchannelname&quot;);//create a method channel name await methodChannel.invokeMethod('showNativeCameraView');//create method name } } Container( child: InkWell( onTap: () { showNativeView();//call to open native android activity }, child: Center( child: new Text(&quot;takeimage&quot;)))) //Android side //inside mainactivity on create var channel: MethodChannel? = null channel = MethodChannel(flutterView, &quot;methodchannelname&quot;); MethodChannel(flutterView, &quot;methodchannelname&quot;)//same as flutterside .setMethodCallHandler { call, result -&gt; if (call.method == &quot;showNativeCameraView&quot;) {//same methodname as flutterside val intent = Intent(this, ActivityCamera::class.java)//native camera activity code can be added in the android folder of the flutter application startActivity(intent) result.success(true) } else { result.notImplemented() } } </code></pre> <p>This link will help u further <a href="https://medium.com/@Chetan/flutter-communicate-with-android-activity-or-ios-viewcontroller-through-method-channel-c11704429cd0" rel="nofollow noreferrer">https://medium.com/@Chetan/flutter-communicate-with-android-activity-or-ios-viewcontroller-through-method-channel-c11704429cd0</a></p>
android|flutter|android-camera|tensorflow-lite|flutter-platform-channel
0
1,065
54,423,677
Hvplot/bokeh summed Bar chart from Pandas Dataframe
<p>I'm trying to print a "simple" Bar chart, using HVPlot and bokeh in jupyter notebook. Here is some simplified data:</p> <p>My Data originally looks like this: <a href="https://i.stack.imgur.com/NRXjg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NRXjg.png" alt=""></a></p> <p>My goal is to get a bar chart like That (Note it doesn't have to be stacked. The only importatnt thing are the Totals.):</p> <p><a href="https://i.stack.imgur.com/BWZWp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BWZWp.png" alt=""></a></p> <p>Since I couldn't figure out how to get a bar chart with the sum of certain columns, I used <code>pandas.melt</code> to model the Data to look like that: </p> <p><a href="https://i.stack.imgur.com/PJF54.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PJF54.png" alt=""></a></p> <p>With this Data I can plot it, but then the values aren't summed. Instead, there are multiple Bars behind each other. <a href="https://i.stack.imgur.com/PQfuk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PQfuk.png" alt=""></a></p> <p>Here is the code I used to test:</p> <pre><code> testd = {'Name': ['Item1', 'Item2','Item3','Item3'],'Filter': ['F1','F2','F1','F1'], 'Count': [1,5,2,1], 'CountCategory': ['CountA','CountB','CountA','CountD']} testdf = pd.DataFrame(data=testd) testdf.hvplot.bar('CountCategory','Count',groupby='Filter', rot=90, aggregator=np.sum) </code></pre> <p>It doesn't change anything if I omit the <code>aggregator=np.sum</code></p> <p>Does anyone know how to properly plot this? It doesn't have to use the "transposed" data since I'm only doing that because I have no idea how to plot the Original Data. And another question would be if there is a possibility </p>
<p>The <code>aggregator</code> is used by the datashade/rasterize operation to aggregate the data and indeed has no effect on bar plots. If you want to aggregate the data I recommend doing so using pandas methods. However in your case I don't think that's the issue, the main problem in implementing the plot you requested is that in holoviews the legend is generally linked to the styling, which means that you can't easily get the legend to display the filter <strong>and</strong> color each bar separately.</p> <p>You could do this and add the Filter as a hover column, which means you still have access to it:</p> <pre><code>testdf.hvplot.bar('CountCategory', 'Count', by='Name', stacked=True, rot=90, hover_cols=['Filter']) </code></pre> <p><a href="https://i.stack.imgur.com/ostwl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ostwl.png" alt="barplot1"></a></p> <p>I'll probably raise an issue in HoloViews to support a legend decoupled from the styling.</p>
python|pandas|bokeh|holoviews
1
1,066
54,477,327
How to avoid double quotes at the end of lines with .to_csv()
<p>I have .txt file as next:</p> <pre><code>Red Blue Green, 1354,8676,38 ------------------------------ Yellow Or,ange Black TU CD CL 0.26 0.265 0.25 ------------------------------- </code></pre> <p>I need to read it. For that purpose, I use <code>pd.read_csv()</code> and I avoid the last two lines with <code>skipfooter = 2</code> argument. So, I have:</p> <pre><code>path_file1 = pathlib.Path.cwd() / ("folder1") / ("file1.txt") df_folder1 = pd.read_csv(str(path_file1), engine = 'python', index_col = False, skipfooter = 2) </code></pre> <p>Then, I would like to paste all tha info in other file using <code>.to_csv()</code>:</p> <pre><code>path_file2 = pathlib.Path.cwd() / ("folder1") / ("file2.txt") df_folder2 = df_folder1.to_csv(str(path_file2), index = False, quoting=csv.QUOTE_NONE) </code></pre> <p>However, I end up with this result:</p> <pre><code>Red Blue Green, 1354,8676,38 ------------------------------,,, Yellow Or,ange Black,, TU CD CL,,, </code></pre> <p>So I cannot get rid of the commas at the end of each line and the empty line is omitted. Is it possible to achieve this using <code>to_csv()</code>? Should I use <code>csv.writer()</code> as in this thread? <a href="https://stackoverflow.com/questions/25056881/write-csv-file-with-double-quotes-for-particular-column-not-working">write csv file with double quotes for particular column not working</a> Is there any chance of specifying a lineterminator?</p>
<p>The problem you're facing is that CSV is a representation of tabular data.</p>
<blockquote>
<p>In computing, a comma-separated values file is a delimited text file that uses a comma to separate values. A CSV file stores tabular data in plain text. Each line of the file is a data record. Each record consists of one or more fields, separated by commas. The use of the comma as a field separator is the source of the name for this file format. <a href="https://en.wikipedia.org/wiki/Comma-separated_values" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Comma-separated_values</a></p>
</blockquote>
<p>The additional commas at the end of your lines represent empty fields in those columns. I'm not sure that what you ask is impossible, but I would argue that it would be out of spec.</p>
<p>Therefore, if what you really want is to remove the last two lines from that file, read it with</p>
<pre><code>with open('file') as f:
    lines = f.readlines()[:-2]   # everything except the last two lines
# process `lines` however you need
</code></pre>
<p>and write the result back to a file.</p>
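<p>A minimal sketch of that read/strip/write round trip, reusing the file names from the question (adjust the paths to your <code>folder1</code> layout); because the lines are copied verbatim, no padding commas are added:</p>
<pre><code>with open('file1.txt') as f:
    lines = f.readlines()[:-2]   # drop the two footer lines

with open('file2.txt', 'w') as f:
    f.writelines(lines)          # rows keep their original number of fields
</code></pre>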
python-3.x|pandas
1
1,067
54,338,108
Loading large dataframe to Vertica
<p>I have a rather large dataframe (500k+ rows) that I'm trying to load to Vertica. I have the following code working, but it is extremely slow. </p> <pre><code>#convert df to list format lists = output_final.values.tolist() #make insert string insert_qry = " INSERT INTO SCHEMA.TABLE(DATE,ID, SCORE) VALUES (%s,%s,%s) " # load into database for i in range(len(lists)): cur.execute(insert_qry, lists[i]) conn_info.commit() </code></pre> <p>I have seen a few posts talking about using COPY rather than EXECUTE to do this large of a load, but haven't found a good working example. </p>
<p>After a lot of trial and error... I found that the following worked for me.</p>
<pre><code>import io

# COPY statement (note the space before FROM)
copy_str = "COPY SCHEMA.TABLE(DATE, ID, SCORE) FROM STDIN DELIMITER ','"

# turn the df into a csv-like object
stream = io.StringIO()
output_final.to_csv(stream, sep=",", index=False, header=False)

# reset the position of the stream variable
stream.seek(0)

# load the data
with conn_info.cursor() as cursor:
    cursor.copy(copy_str, stream.getvalue())
conn_info.commit()
</code></pre>
python|pandas|vertica
4
1,068
54,315,043
Generate (n, 1, 2) arrays with np.tile
<p>I want to create n times (1,2) arrays and each array should have the same elements. First I generate n times 1 D array and then I use a loop to iterate over these elements and repeat each element to fill (n, 1,2) array. my code is the following:</p> <pre><code>import numpy as np def u_vec(): return np.array([np.random.rand(1)]) n=10 u1 = np.zeros(n) for i in range(n): u1[i] = u_vec() print(u1) def u_vec1(): u_vec = np.zeros((n, 2,1)) for i in range(len(u1)): u_vec[i] += np.tile(u1[i], (2,1)) return u_vec u = u_vec1() print(u) </code></pre> <p>the output that I get is</p> <pre><code>[0.4594466 0.80924903 0.3186138 0.03601917 0.9116031 0.68199505 0.78999837 0.33778259 0.97626521 0.84925156] [[[0.4594466 0.4594466]] [[0. 0. ]] [[0. 0. ]] [[0. 0. ]] [[0. 0. ]] [[0. 0. ]] [[0. 0. ]] [[0. 0. ]] [[0. 0. ]] [[0. 0. ]]] </code></pre> <p>I do not understand why only the first element is filled but the others are filled with zero. Could someone please help me? Thank you very much! the output that I would like to have</p> <pre><code>[[[0.4594466 0.4594466]] [[0.3186138 0.3186138]] [[ 0.03601917 0.03601917]] [[ 0.9116031 0.9116031 ]] [[0.68199505 0.68199505]] [[0.78999837 0.78999837]] [[0.33778259 0.33778259]] [[0.97626521 0.97626521]] [[0.84925156 0.84925156]]]] </code></pre>
<p>I see the problem. The problem is that your <code>return u_vec</code> statement is enclosed in the <code>for</code> loop. So only the first subarray is updated with the random values and the rest of <code>u_vec</code>remains 0 because you return <em>immediately after the first iteration</em> of the for loop. You should use</p> <pre><code>def u_vec1(): u_vec = np.zeros((n, 2,1)) for i in range(len(u1)): u_vec[i] += np.tile(u1[i], (2,1)) return u_vec # &lt;---- moved outside the for loop </code></pre> <p>Having solved this problem, you might also be interested in knowing an alternative solution using <code>repeat</code> and <code>reshape</code> to get the desired result as </p> <pre><code>import numpy as np n=10 u1 = np.random.rand(n) print(u1) u = np.repeat(u1,2).reshape((n,2,1)) print(u) </code></pre> <hr> <pre><code>[0.17106854 0.7346424 0.53370937 0.39838919 0.42247593 0.61545304 0.97014742 0.85912941 0.51137618 0.08148184] [[[0.17106854] [0.17106854]] [[0.7346424 ] [0.7346424 ]] [[0.53370937] [0.53370937]] [[0.39838919] [0.39838919]] [[0.42247593] [0.42247593]] [[0.61545304] [0.61545304]] [[0.97014742] [0.97014742]] [[0.85912941] [0.85912941]] [[0.51137618] [0.51137618]] [[0.08148184] [0.08148184]]] </code></pre>
python|numpy
1
1,069
73,646,375
Use PyTorch DistributedDataParallel with Hugging Face on Amazon SageMaker
<p>Even for single-instance training, PyTorch DistributedDataParallel (DDP) is generally recommended over PyTorch DataParallel (DP) because DP's strategy is less performant and it uses more memory on the default device. (Per <a href="https://discuss.pytorch.org/t/cuda-out-of-memory-error-when-using-multi-gpu/72333" rel="nofollow noreferrer">this PyTorch forums thread</a>)</p> <p>Hugging Face <a href="https://github.com/huggingface/transformers/tree/master/examples/pytorch#distributed-training-and-mixed-precision" rel="nofollow noreferrer">recommend</a> to run distributed training via the <code>python -m torch.distributed.launch</code> launcher, because their Trainer API supports DDP but will fall back to DP if you don't. (Per <a href="https://discuss.huggingface.co/t/multi-gpu-training/4021" rel="nofollow noreferrer">this HF forums thread</a>)</p> <p>I recently ran in to this problem: scaling a HF training job from <code>p3.8xlarge</code> to <code>p3.16xlarge</code> increased memory consumption on (I think) one of the GPUs to the point where I had to significantly reduce batch size to avoid CUDA Out of Memory errors - basically losing all scaling advantage.</p> <p>So the good news is for p3.16xl+ I can just <a href="https://huggingface.co/docs/sagemaker/train#distributed-training" rel="nofollow noreferrer">enable SageMaker Distributed Data Parallel</a> and the PyToch DLC will automatically <a href="https://github.com/aws/sagemaker-pytorch-training-toolkit/blob/88ca48a831bf4f099d4c57f3c18e0ff92fa2b48c/src/sagemaker_pytorch_container/training.py#L23" rel="nofollow noreferrer">launch via torch.distributed for me</a>.</p> <p>The bad news for use cases with smaller workloads or wanting to test before they scale up, is that SMDistributed <a href="https://docs.aws.amazon.com/sagemaker/latest/dg/data-parallel-intro.html#data-parallel-allreduce" rel="nofollow noreferrer">doesn't support all multi-GPU instance types</a>. No p3.8xl or g series, for example. I did try manually setting the <code>sagemaker_distributed_dataparallel_enabled</code> environment variable, but no joy.</p> <p>So how else can we launch HF Trainer scripts with PyTorch DDP on SageMaker?</p>
<p>Great question, thanks for asking! PyTorch DDP runs data parallel workers in multiple processes, that must be launched and managed by developers. DDP should be seen as a managed allreduce, more than a managed data-parallelism library, since it requires you to launch and manage the workers and even assigning resources to workers. In order to launch the DDP processes in a SageMaker Training job you have many options:</p> <ol> <li>If you do multi-GPU, single-machine, you can use <code>torch.multiprocessing.spawn</code>, as shown in <a href="https://pytorch.org/tutorials/intermediate/ddp_tutorial.html" rel="nofollow noreferrer">this official PyTorch demo</a> (that is <a href="https://pytorch.org/tutorials/intermediate/ddp_tutorial.html" rel="nofollow noreferrer">broken</a> by the way)</li> <li>If you do multi-GPU, single-machine, you can also use the <a href="https://docs.ray.io/en/latest/train/train.html" rel="nofollow noreferrer">Ray Train</a> library to launch those processes. I was able to use it in a Notebook, but not in the DLC yet (recent library that is a bit rough to learn and make work, see <a href="https://discuss.ray.io/u/lacruche/activity/topics" rel="nofollow noreferrer">all my issues here</a>). Ray Train should work on multi-node too.</li> <li>If you do multi-GPU, any-machine, you can use <code>torch.distributed.launch</code>, wrapped in a launcher script in shell or Python. Example here <a href="https://gitlab.aws.dev/cruchant/a2d2-segmentation/-/blob/main/3_2D-Seg-Audi-A2D2-Distributed-Training-DDP.ipynb" rel="nofollow noreferrer">https://gitlab.aws.dev/cruchant/a2d2-segmentation/-/blob/main/3_2D-Seg-Audi-A2D2-Distributed-Training-DDP.ipynb</a></li> <li>You can also launch those processes with the SageMaker MPI integration instead of <code>torch.distributed</code>. Unfortunately, we didn't create documentation for this, so no one uses it nor pitches it. But it looks cool, because it allows to run copies of your script directly in the EC2 machines without the need to invoke an intermediary PyTorch launcher. <a href="https://github.com/aws/amazon-sagemaker-examples/blob/master/training/distributed_training/mpi_on_sagemaker/intro/mpi_demo.ipynb" rel="nofollow noreferrer">Example here</a></li> </ol> <p>So for now, my recommendation would be to go the route (3), which is the closest to what the PyTorch community does, so provides easier development and debugging path.</p> <p><strong>Notes</strong>:</p> <ul> <li>PyTorch DDP evolves fast. In PT 1.10 <code>torch.distributed</code> is replaced by <code>torchrun</code>, and a <a href="https://discuss.pytorch.org/t/how-to-map-processes-to-gpu-in-ddp-and-how-to-launch-the-ddp-cluster/135253/4" rel="nofollow noreferrer">torchX</a> tool is being created to...simplify things!).</li> <li>Not having to manage that mess is a reason why SageMaker Distributed Data Parallel is a great value prop: you only need to edit your script, and the SM service handles process creation. Unfortunately, as you point out, SMDP being limited to P3 and P4 training jobs seriously limits its use.</li> <li>Below are important PT DDP concepts to understand to alter single-GPU code into multi-machine code <ul> <li>Unlike Apache Spark, which takes care of workload partitioning on your behalf, Pytorch distributed training requires the user to assign specific pieces of work to specific GPUs. In the following section, we assume that we train on GPU.</li> <li>In PyTorch DDP, each GPU runs a customized copy of you training code. 
A copy of the training code running on one GPU is generally called a <em>rank</em>, a <em>data parallel replica</em>, a <em>process</em>, a <em>worker</em>, but other names may exist.</li> <li>For PyTorch DDP to launch a training cluster on the MxN GPUs spread over your M machines, you must specify to PyTorch DDP the number of machines you have and the number of processes to launch per machine. This is respectively done by the parameters <code>--nnodes</code> and <code>--nproc_per_node</code> of the <code>torch.distributed.launch</code> utility. You must run <code>torch.distributed.launch</code> once on each node of the training cluster. You can run this command in parallel with multiple tools, for example with MPI or SageMaker Training as mentioned above. In order to establish the necessary handshakes and form a cluster, you must also specify in the <code>torch.distributed.launch</code> command <code>--node_rank</code>, which must take a unique machine ID between 0 and M-1 on each of the machines, and <code>--master_addr</code> and <code>--master_port</code> (optional if you run a single-machine cluster), which must be the same across all machines.</li> <li>In the <code>init_process_group</code> DDP initialization method running from within each data parallel replica script, you must specify the world size and replica ID, respectively with the <code>world_size</code> and <code>rank</code> parameters. Hence you must have a way to communicate to each script a unique ID, generally called the <em>global rank</em>. The global rank can help you personalize the work done by each GPU, for example saving a model just from one card, or running validation only on one card. In a cluster composed of 3 machines having 4 GPUs each, global ranks would range from 0 to 11. Within a machine, in order to assign DDP data parallel replicas to available GPUs, the script running in each replica must be assigned a GPU ID, unique within the machine it's running on. This is called the local rank and can be set as an argument by the PyTorch DDP <code>torch.distributed.launch</code>. In a cluster composed of 3 machines having 4 GPUs each, on each machine the DDP processes would have local ranks ranging from 0 to 3.</li> </ul> </li> </ul>
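<p>To make the notes above concrete, here is a minimal sketch of what each data parallel replica script could look like when launched with <code>torch.distributed.launch</code> as in route (3). The model and training loop are placeholders, not part of the original setup:</p> <pre><code>import argparse

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

parser = argparse.ArgumentParser()
# torch.distributed.launch passes --local_rank to every process it spawns
parser.add_argument("--local_rank", type=int, default=0)
args = parser.parse_args()

# bind this replica to one GPU of the machine it runs on
torch.cuda.set_device(args.local_rank)

# MASTER_ADDR, MASTER_PORT, RANK and WORLD_SIZE are set by the launcher,
# so the env:// init method can read world size and global rank from there
dist.init_process_group(backend="nccl", init_method="env://")

model = torch.nn.Linear(10, 2).cuda(args.local_rank)  # placeholder model
model = DDP(model, device_ids=[args.local_rank])

# ... training loop with a DistributedSampler-backed DataLoader goes here ...

if dist.get_rank() == 0:
    # personalize work by global rank, e.g. save the model from one replica only
    torch.save(model.module.state_dict(), "model.pt")
</code></pre>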
pytorch|amazon-sagemaker|huggingface-transformers
0
1,070
73,710,779
Generate Boolean Matrix of overlapping intervals
<p>I have a dataframe where two columns represent the start and end points of intervals on a real number line. I want to generate a third column as a list of the indices of rows which said row has any overlap with. I'm having difficulty creating a inequality boolean matrix for this natively in pandas. I assume logic like this <code>s1&lt;=e2 and e1&gt;=s2</code> will do the trick, but I don't know how to effectively broadcast it.</p> <p>As a toy example I'm hoping for a simple way to at least generate a 5x5 boolean matrix (with all True down the diagonal) given this dataframe:</p> <pre><code>import pandas as pd intervals_df = pd.DataFrame({&quot;Starts&quot;:[0,1,5,10,15,20],&quot;Ends&quot;:[4,2,9,14,19,24]}) Starts Ends 0 0 4 1 1 2 2 5 9 3 10 14 4 15 19 5 20 24 </code></pre>
<p>The condition for the two intervals <code>(s1,e1)</code> and <code>(s2,e2)</code> to intersect is <code>max(s1,s2) &lt;= min(e1,e2)</code>. So you can do a cross merge (this is the broadcast), calculate the condition, the pivot:</p> <pre><code>d = (intervals_df.reset_index() .merge(intervals_df.reset_index(), how='cross') .assign(cond=lambda x: x.filter(like='Starts').max(axis=1) &lt;= x.filter(like='Ends').min(axis=1)) .pivot('index_x', 'index_y', 'cond') ) </code></pre> <p>You would get:</p> <pre><code>index_y 0 1 2 3 4 5 index_x 0 True True False False False False 1 True True False False False False 2 False False True False False False 3 False False False True False False 4 False False False False True False 5 False False False False False True </code></pre> <p>Or you can make do with numpy's broadcasting:</p> <pre><code>starts = intervals_df[['Starts']].to_numpy() ends = intervals_df[['Ends']].to_numpy() np.maximum(starts, starts.T) &lt;= np.minimum(ends, ends.T) </code></pre> <p>Output:</p> <pre><code>array([[ True, True, False, False, False, False], [ True, True, False, False, False, False], [False, False, True, False, False, False], [False, False, False, True, False, False], [False, False, False, False, True, False], [False, False, False, False, False, True]]) </code></pre>
python|pandas
2
1,071
73,761,272
Python color detection from a point with opencv
<p>I'm trying to do a sorting machine for color detection with a camera and a raspberry pi. I have succeeded to some extent but not really. I am currently reading the color from the center pixel in BGR format and examining it that way. My question would be how can i read this out of a zone not just a point and make the detection more accurate.</p> <p>Here's my code:</p> <pre><code>import cv2 import time import pandas as pd index = [&quot;color&quot;, &quot;color_name&quot;, &quot;hex&quot;, &quot;R&quot;, &quot;G&quot;, &quot;B&quot;] csv = pd.read_csv('colors.csv', names=index, header=None) cap = cv2.VideoCapture(0) cap.set(cv2.CAP_PROP_FRAME_WIDTH, 800) cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 600) def get_color_name(R, G, B): minimum = 10000 for i in range(len(csv)): d = abs(R - int(csv.loc[i, &quot;R&quot;])) + abs(G - int(csv.loc[i, &quot;G&quot;])) + abs(B - int(csv.loc[i, &quot;B&quot;])) if d &lt;= minimum: minimum = d cname = csv.loc[i, &quot;color_name&quot;] return cname while True: _, frame = cap.read() height, width, _ = frame.shape cx = int(width / 2) cy = int(height / 2) pixel_center = frame[cy, cx] b = int(pixel_center[0]) g = int(pixel_center[1]) r = int(pixel_center[2]) txt = get_color_name(r, g, b) print(txt) print(r, g, b) cv2.circle(frame, (cx, cy), 5, (255, 0, 0), 3) cv2.imshow(&quot;Frame&quot;, frame) key = cv2.waitKey(1) if key == 27: break cap.release() cv2.destroyAllWindows() </code></pre> <p>Can anyone help?</p>
<p>Why not try something like this:</p> <p>The image you captured is a 2D array; slice a small rectangle out of its center and read the points it contains. Then you can average them or take the median, it is up to you.</p> <pre><code># capture a video frame here
height, width, _ = frame.shape
zone_width = 15
zone_height = 15
frame_center = (frame.shape[0] / 2, frame.shape[1] / 2)
y1 = int(frame_center[0] - zone_height)
y2 = int(frame_center[0] + zone_height)
x1 = int(frame_center[1] - zone_width)
x2 = int(frame_center[1] + zone_width)
selection = frame[y1:y2, x1:x2]

# average the BGR values over the whole zone (OpenCV frames are BGR)
b, g, r = selection.reshape(-1, 3).mean(axis=0)
</code></pre>
python|numpy|opencv|raspberry-pi3
0
1,072
52,210,375
How can you get the most recent business day in Python?
<p>How can you get the most recent business day in python? </p> <p>E.g., if today is a business day, I'd like to get today as a datetime object, but if today is a Sunday, I'd like to get Friday as a datetime object, presuming Friday is the most recent business day.</p> <p>Thanks,</p> <p>Jack</p>
<p>You can use:</p> <pre><code>import pandas as pd pd.datetime.today() - pd.tseries.offsets.BDay(0) </code></pre> <p><strong>Update</strong></p> <pre><code>today = pd.datetime(2018,9,2) np.where((today - pd.tseries.offsets.BDay(0)) &gt; today, (today - pd.tseries.offsets.BDay(1)), (today - pd.tseries.offsets.BDay(0))) [output] Timestamp('2018-08-31 00:00:00') </code></pre>
python|pandas|datetime|python-datetime
8
1,073
52,241,853
numPy gives nan while reading a negative number from a file
<p>I tried to read the contents of a 3x3 matrix stored in a text file having the following values.</p> <pre><code>−30 10 20 10 40 −50 20 −50 −10 </code></pre> <p>I used <code>numpy.genfromtxt</code> as follows but when it gave <code>nan</code> in place of negative data values.</p> <pre><code>data = np.genfromtxt('file.txt',delimiter=' ') print(data) print(type(data[0][0])) </code></pre> <p>But the datatype of the negative value still shows <code>float64</code>:</p> <p><a href="https://i.stack.imgur.com/mUoWM.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mUoWM.jpg" alt="The screenshot of code and output."></a></p> <p>I also tried reading the values as a list and then converting to numpy array but that also didn't work.</p> <p>Is there any other way I can read negative values as a numpy array from a file input?</p>
<p>You've got U+2212 MINUS SIGN characters in your file, not the U+002D HYPHEN-MINUS people usually think of as the minus sign character. <code>genfromtxt</code> doesn't handle that character. Replace the minus sign characters with hyphen-minuses before trying to parse the data.</p>
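<p>For example, a minimal sketch of that replacement (assuming the file is named <code>file.txt</code> as in the question and is UTF-8 encoded):</p> <pre><code>import io

import numpy as np

with open('file.txt', encoding='utf-8') as f:
    # swap U+2212 MINUS SIGN for the ordinary U+002D hyphen-minus
    text = f.read().replace('\u2212', '-')

data = np.genfromtxt(io.StringIO(text), delimiter=' ')
print(data)
</code></pre>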
python|numpy|genfromtxt
4
1,074
60,759,415
How can I plot the following pandas data set with three columns using matplotlib?
<p>I am able to plot the data set below when it only has the last two columns (the GDP per year and the population value) but I want to learn how to plot it to also include the year.</p> <p><code>suicides_gdp = suicides_russia.groupby(["year", " gdp_for_year ($) "])["suicides_no"].sum()</code><br /> <code>suicides_gdp</code></p> <p><code>year gdp_for_year ($)</code><br /> <code>1989 506,500,173,960 37921</code><br /> <code>1990 516,814,274,022 39028</code><br /> <code>1991 517,962,962,963 39281</code><br /> <code>1992 460,290,556,901 45923</code><br /> <code>1993 435,083,713,851 55846</code><br /> <code>1994 395,077,301,248 61420</code><br /> <code>1995 395,531,066,563 60548</code><br /> <code>1996 391,719,993,757 57511</code><br /> <code>1997 404,926,534,140 54746</code><br /> <code>1998 270,953,116,950 51518</code><br /> <code>1999 195,905,767,669 56974</code><br /> <code>2000 259,708,496,267 56619</code><br /> <code>2001 306,602,673,980 56958</code><br /> <code>2002 345,110,438,692 55024</code><br /> <code>2003 430,347,770,732 51445</code><br /> <code>2004 591,016,690,743 49096</code><br /> <code>2005 764,017,107,992 45802</code><br /> <code>2006 989,930,542,279 42614</code><br /> <code>2007 1,299,705,247,686 41149</code><br /> <code>2008 1,660,844,408,500 38211</code><br /> <code>2009 1,222,643,696,992 37408</code><br /> <code>2010 1,524,916,112,079 33356</code><br /> <code>2011 2,051,661,732,060 31038</code><br /> <code>2012 2,210,256,976,945 29643</code><br /> <code>2013 2,297,128,039,058 28690</code><br /> <code>2014 2,063,662,665,172 26541</code><br /> <code>2015 1,368,400,705,491 25432</code><br /></p> <p>I tried <code>plt.plot(suicides_gdp.index, suicides_gdp.values)</code> and <code>plt.barh(x="suicides_no", y=["year", " gdp_for_year ($) "], width=5)</code> but I get the following errors respectively:</p> <p><code>ValueError: setting an array element with a sequence.</code> for the line plot and <code>TypeError: bar() got multiple values for keyword argument 'x'</code> for the horizontal bar chart.</p> <p>How can I plot the following data set using either a line plot or bar chart? </p>
<p>I would plot <code>bar</code>, instead of <code>barh</code>. Also, since the two columns have different scales, it's best to plot them in twin axes:</p> <pre><code>suicides_gdp = suicides_gdp.reset_index() fig, ax = plt.subplots(figsize=(12,6)) ax2 = ax.twinx() ax2.bar(suicides_gdp['year'], suicides_gdp['suicides_no'], color='C1', alpha=0.5) ax.plot(suicides_gdp['year'], suicides_gdp['gdp_for_year ($)'], zorder=100) plt.show() </code></pre> <p>Output</p> <p><a href="https://i.stack.imgur.com/hDRiw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hDRiw.png" alt="enter image description here"></a></p>
pandas|dataframe|matplotlib|plot|multiple-columns
2
1,075
60,649,722
keras.get_session().graph is not working in tensorflow2.x
<p>I could get the graph of a keras model with the code below in tensorflow1.x:</p> <pre><code>from tensorflow.python.keras import backend as K

model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dense(10)
])
...
graph = K.get_session().graph
graph_def = graph.as_graph_def()
print(graph_def)
</code></pre> <p><a href="https://i.stack.imgur.com/f8Ztn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/f8Ztn.png" alt="result in tensorflow1.x"></a></p> <p>However, when I change the tensorflow version to 2.x, it does not work. I get the result shown in the picture below in tensorflow2.1.</p> <p><a href="https://i.stack.imgur.com/zWbnu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zWbnu.png" alt="result in tensorflow2.1"></a></p> <p>How can I get the graph of a keras model in tensorflow2.x?</p>
<p>Tensorflow migration guide would be the place where you can start -</p> <p><a href="https://www.tensorflow.org/guide/migrate" rel="nofollow noreferrer">Migrate your TensorFlow 1 code to TensorFlow 2</a></p>
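<p>As a rough sketch (not taken from the guide, so treat the exact calls as an assumption): in TF2 you can trace the Keras model with <code>tf.function</code> and read the <code>GraphDef</code> from the resulting concrete function, which plays the role of <code>K.get_session().graph</code>:</p> <pre><code>import tensorflow as tf

# trace the model's forward pass; the input spec matches the (28, 28) model above
run_model = tf.function(lambda x: model(x))
concrete_func = run_model.get_concrete_function(
    tf.TensorSpec([None, 28, 28], tf.float32))

graph_def = concrete_func.graph.as_graph_def()
print(graph_def)
</code></pre>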
tensorflow|keras
0
1,076
60,553,804
reading csv files with pandas fills dataframe with NaNs
<p>I have a csv output file from a datalogger that I want to bring into Python.<br> <a href="https://i.stack.imgur.com/Y8u9D.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Y8u9D.jpg" alt="temp_table"></a></p> <p>Here is some of the data in csv:</p> <pre><code>"Name:,Data Instr INSTR 3/5/2020 11:51:59" "Owner:,lab1" "Comments:," "Acquisition Date:,3/5/2020 11:51:59 AM" "&amp;Instrument:,34970A,Address:,ASRL11::INSTR,Modules:,1,Slot3:,34901A" "Total Channels:,4" "Channel,Name,Function,Range,Resolution,AdvSettings,Scale,Gain,Offset,Label,Test,Low,High,HWAlarm" "316,PCB_CTR,Temp (Type K),None,C,Temp (Type K)#1#0.016#Auto#0.001#C#Internal#0#false,False,1,0,C,High Only,0,105,Alarm 1" "317,Q24,Temp (Type K),None,C,Temp (Type K)#1#0.016#Auto#0.001#C#Internal#0#false,False,1,0,C,High Only,0,105,Alarm 1" "318,Q25,Temp (Type K),None,C,Temp (Type K)#1#0.016#Auto#0.001#C#Internal#0#false,False,1,0,C,High Only,0,105,Alarm 1" "319,Q18,Temp (Type K),None,C,Temp (Type K)#1#0.016#Auto#0.001#C#Internal#0#false,False,1,0,C,High Only,0,105,Alarm 1" "Scan Control:,Start Action:,Immediately,Stop Action:,User Terminated" "Scan,Time,316 &lt;PCB_CTR&gt; (C),Alarm 316,317 &lt;Q24&gt; (C),Alarm 317,318 &lt;Q25&gt; (C),Alarm 318,319 &lt;Q18&gt; (C),Alarm 319" "1,3/5/2020 11:51:59:168,30.471,0,29.241,0,29.165,0,33.302,0" "2,3/5/2020 11:52:01:152,32.197,0,30.634,0,30.564,0,34.819,0" "3,3/5/2020 11:52:03:152,33.795,0,32.019,0,31.879,0,36.848,0" </code></pre> <p>I'm trying to use pandas to do it. when I try reading it into python using </p> <pre><code>x[i]=pd.read_csv(file_name,usecols=[2,4,6,8],skiprows= 13,encoding = "ISO-8859-1") </code></pre> <p>I get:</p> <pre><code>print(x[i]) Unnamed: 2 Unnamed: 4 Unnamed: 6 Unnamed: 8 0 NaN NaN NaN NaN 1 NaN NaN NaN NaN 2 NaN NaN NaN NaN 3 NaN NaN NaN NaN 4 NaN NaN NaN NaN .. ... ... ... ... 518 NaN NaN NaN NaN 519 NaN NaN NaN NaN </code></pre> <p>Thank you.</p> <p><strong>EDIT</strong> The NaNs were due to the csv files actually being saved as unicode text out of Excel. I thought I had saved them as CSV correctly, but maybe I missed it. After saving the files as CSVs (again, ;-)), now that I got rid of the NaNs. However, everything now imports as a single column, even when I explicitly add "delimiter =',' " to the pd.read_csv statement:</p> <pre><code> x[i]=pd.read_csv(path+filename[i], delimiter= ',', skiprows=12) print(x[i]) Scan,Time,316 &lt;PCB_CTR&gt; (C),Alarm 316,317 &lt;Q24&gt; (C),Alarm 317,318 &lt;Q25&gt; (C),Alarm 318,319 &lt;Q18&gt; (C),Alarm 319 0 1,3/5/2020 10:03:21:164,46.334,0,43.755,0,45.7... 1 2,3/5/2020 10:03:22:149,46.997,0,44.262,0,46.3... 2 3,3/5/2020 10:03:23:149,47.615,0,44.671,0,46.9... 3 4,3/5/2020 10:03:24:149,48.267,0,45.229,0,47.6... 4 5,3/5/2020 10:03:25:149,48.861,0,45.711,0,48.1... .. ... 922 923,3/5/2020 10:18:43:149,97.59,0,88.915,0,91.... 923 924,3/5/2020 10:18:44:149,96.879,0,88.514,0,91... 924 925,3/5/2020 10:18:45:149,96.027,0,87.984,0,90... 925 926,3/5/2020 10:18:46:149,95.168,0,87.333,0,89... 926 927,3/5/2020 10:18:47:149,94.385,0,86.8,0,89.1... 
[927 rows x 1 columns] </code></pre> <p><strong>EDIT 2</strong> So, the reason why it is all importing as 1 column is because the csv file looks like this:</p> <pre><code>"Name:,Data Instr INSTR 3/5/2020 10:03:21" "Owner:,lab1" "Comments:," "Acquisition Date:,3/5/2020 10:03:21 AM" "&amp;Instrument:,34970A,Address:,ASRL11::INSTR,Modules:,1,Slot3:,34901A" "Total Channels:,4" "Channel,Name,Function,Range,Resolution,AdvSettings,Scale,Gain,Offset,Label,Test,Low,High,HWAlarm" "316,PCB_CTR,Temp (Type K),None,C,Temp (Type K)#1#0.016#Auto#0.001#C#Internal#0#false,False,1,0,C,High Only,0,105,Alarm 1" "317,Q24,Temp (Type K),None,C,Temp (Type K)#1#0.016#Auto#0.001#C#Internal#0#false,False,1,0,C,High Only,0,105,Alarm 1" "318,Q25,Temp (Type K),None,C,Temp (Type K)#1#0.016#Auto#0.001#C#Internal#0#false,False,1,0,C,High Only,0,105,Alarm 1" "319,Q18,Temp (Type K),None,C,Temp (Type K)#1#0.016#Auto#0.001#C#Internal#0#false,False,1,0,C,High Only,0,105,Alarm 1" "Scan Control:,Start Action:,Immediately,Stop Action:,User Terminated" "Scan,Time,316 &lt;PCB_CTR&gt; (C),Alarm 316,317 &lt;Q24&gt; (C),Alarm 317,318 &lt;Q25&gt; (C),Alarm 318,319 &lt;Q18&gt; (C),Alarm 319" "1,3/5/2020 10:03:21:164,46.334,0,43.755,0,45.706,0,49.129,0" "2,3/5/2020 10:03:22:149,46.997,0,44.262,0,46.35,0,49.773,0" "3,3/5/2020 10:03:23:149,47.615,0,44.671,0,46.974,0,50.402,0" "4,3/5/2020 10:03:24:149,48.267,0,45.229,0,47.628,0,50.879,0" "5,3/5/2020 10:03:25:149,48.861,0,45.711,0,48.164,0,51.495,0" "6,3/5/2020 10:03:26:149,49.455,0,46.323,0,48.783,0,51.9,0" "7,3/5/2020 10:03:27:149,50.014,0,46.796,0,49.351,0,52.334,0" "8,3/5/2020 10:03:28:149,50.586,0,47.237,0,49.845,0,52.959,0" </code></pre> <p>where every row is in quotes. I didn't notice it initially when I first posted the question above....</p> <p>I'm not sure why the quotes were there in the first place, and I'm not sure why Pandas cares anyway, if it is to use comma as a delimiter. Note that the quotes won't show up in Excel --you'd need to open the file in Notepad to see these.</p> <p>Once I removed the quotes, all is good. Still, I don't understand why even if pandas interprets the row as a string, it doesn't separate the items based on the comma delimiter....</p>
<pre><code># skip the metadata lines at the top of the logger export
Pet_Data = pd.read_csv('D:/data.csv', skiprows=14)

# keep only the temperature columns, selected by position
Pet_Data = Pet_Data.iloc[:, [2, 4, 6, 8]]
</code></pre>
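<p>Note that, per the question's second edit, every line of the original export is wrapped in double quotes, so standard CSV quoting makes pandas treat each whole line as a single field, which is why everything landed in one column. A sketch of a workaround (the file path follows the answer above and the 12 skipped metadata lines follow the question's edit) is to strip the wrapping quotes before parsing:</p> <pre><code>import io

import pandas as pd

with open('D:/data.csv', encoding='latin1') as f:
    # drop the leading/trailing double quote that wraps every line
    cleaned = ''.join(line.strip().strip('"') + '\n' for line in f)

df = pd.read_csv(io.StringIO(cleaned), skiprows=12)
</code></pre>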
python|pandas
0
1,077
72,682,146
Is this group by size behavior correct?
<p>I have this sample dataset:</p> <pre><code>mydf = pd.DataFrame({'city':['Porto','Loa','Porto','Porto','Loa'],\ 'town':['A','C','A','B','C']}) mydf['city'] = pd.Categorical(mydf['city']) mydf['town'] = pd.Categorical(mydf['town']) mydf city town 0 Porto A 1 Loa C 2 Porto A 3 Porto B 4 Loa C </code></pre> <p>And I want to count the occurrences grouped by city and town. So I tried this:</p> <pre><code>mydf.groupby(['city','town']).size().to_frame() 0 city town Loa A 0 B 0 C 2 Porto A 2 B 1 C 0 </code></pre> <p>But this is wrong, since city C is located only in Loa, not in Porto, and cities A and B are located only in Porto. My expected result is this:</p> <pre><code> 0 city town Loa C 2 Porto A 2 B 1 </code></pre> <p>Sure I can avoid the <code>pd.Categorical</code> conversion in 'city' and 'town', but I don't understand that behavior. Is there a parameter I should use to avoid this and get the right and simplified expected result?</p>
<p><strong>Yes, the <code>groupby</code> + <code>size</code> behavior is expected.</strong></p> <p>By default, if any of the grouping columns are categorical then it will show all the values for categorical columns regardless whether they appear in a particular group or not.</p> <p>To turn this default behaviors off, you can set the optional parameter <code>observed=True</code> in <code>groupby</code> which will show only observed values(actual appearing values) of categorical columns:</p> <pre><code>mydf.groupby(['city','town'], observed=True).size().to_frame() </code></pre> <hr /> <pre><code> 0 city town Porto A 2 B 1 Loa C 2 </code></pre>
python|pandas
4
1,078
72,819,710
Fix TypeError in Keras class - how to cast Keras.Sequential to Layer class so it can be part of a Keras.Model instance
<p>I have the following code, and I am getting a TypeError. My guess is that I am making a <code>Keras.Model</code> part of a, well, <code>Keras.Model</code> when I should have been using <code>Layer</code> instead. Long story short, how can I cast the <code>Keras.Sequential</code> object that <code>mlp</code> returns so that when I construct an instance of <code>SumNet</code>, I don't get:</p> <pre><code>raise TypeError('The added layer must be ' TypeError: The added layer must be an instance of class Layer. Found: input_1 Placeholder FLOAT32(&lt;tile.Value SymbolicDim UINT64()&gt;, 5) : </code></pre> <p><strong>EDIT:</strong> It needs to be a Keras API-only way due to the answer of this <a href="https://stackoverflow.com/questions/58156573/amd-plaidml-vs-cpu-tensorflow-unexpected-results">question</a>. Perhaps, the function <code>mlp</code> needs to be rewritten so it returns a type <code>Layer</code> not <code>Model</code> somehow.</p> <p>The code is here:</p> <pre><code>def mlp(size_in, size_out, act=layers.ReLU): return keras.Sequential([layers.Input(shape=(size_in,)), layers.Dense(hidden, name='layer1'), act(), layers.Dense(hidden, name='layer2'), act(), layers.Dense(hidden, name='layer3'), act(), layers.Dense(size_out, name='layer4')]) class SumNet(keras.Model): def __init__(self): super(SumNet, self).__init__() ######################################################## # The same inductive bias as above! self.g = mlp(5, 1) self.f = mlp(1, 1) def call(self, x): y_i = self.g(x)[:, :, 0] y = keras.backend.sum(y_i, axis=1, keepdims=True) / y_i.shape[1] z = self.f(y) return z[:, 0] </code></pre>
<p>After much discussion with AloneTogether I figured out a way to rewrite the <code>mlp</code> function so that it plays nice with the subclassed model. Apparently, the <code>Sequential</code> module does not cast well.</p> <p>Here is the rewritten function:</p> <pre><code>def mlp2(size_in, size_out): hidden = 128 inputs = keras.Input(shape=(size_in,)) x = keras.layers.Dense(hidden, name='layer1', activation='relu')(inputs) x = keras.layers.Dense(hidden, name='layer2', activation='relu')(x) x = keras.layers.Dense(hidden, name='layer3', activation='relu')(x) outputs = keras.layers.Dense(size_out, name='layer4', activation='relu')(x) m = keras.Model(inputs, outputs) return m </code></pre>
python|tensorflow|keras
0
1,079
72,498,115
How to combine values of multiple rows in panda
<p>I have dataframe file that split text into multiple rows, like:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>A</th> <th>B</th> </tr> </thead> <tbody> <tr> <td>aaa</td> <td>bbbb</td> </tr> <tr> <td>ccccc</td> <td>NaN</td> </tr> <tr> <td>NaN</td> <td>NaN</td> </tr> <tr> <td>dddd</td> <td>ffff</td> </tr> <tr> <td>eeee</td> <td>NaN</td> </tr> <tr> <td>gg</td> <td>NaN</td> </tr> </tbody> </table> </div> <p>I hope to merge the value of each row to its next rows unless it is blank and get a data frame like:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>A</th> <th>B</th> </tr> </thead> <tbody> <tr> <td>aaacccc</td> <td>bbbb</td> </tr> <tr> <td>ddddeeeegg</td> <td>ffff</td> </tr> </tbody> </table> </div> <p>Is there an efficient way to convert the dataframe in python?</p>
<p>You can create a mask and group from the rows with all NaNs, then <a href="https://pandas.pydata.org/docs/reference/api/pandas.core.groupby.DataFrameGroupBy.aggregate.html" rel="nofollow noreferrer"><code>GroupBy.agg</code></a> to <a href="https://docs.python.org/3/library/stdtypes.html#str.join" rel="nofollow noreferrer"><code>join</code></a> the strings:</p> <pre><code># rows with all NaN? mask = df.isna().all(axis=1) # create group starting with all-NaN rows group = mask.cumsum() # filter, group, aggregate out = df[~mask].groupby(group).agg(lambda x: ''.join(x.dropna())) </code></pre> <p>output:</p> <pre><code> A B 0 aaaccccc bbbb 1 ddddeeeegg ffff </code></pre>
python|pandas
1
1,080
72,489,534
multiple annotations on bar seaborn chart
<div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;">STRUD</th> <th style="text-align: center;">Struct_Count</th> <th style="text-align: right;">Perc</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">Row</td> <td style="text-align: center;">1151</td> <td style="text-align: right;">38.37</td> </tr> <tr> <td style="text-align: left;">Single</td> <td style="text-align: center;">865</td> <td style="text-align: right;">28.83</td> </tr> <tr> <td style="text-align: left;">Detached</td> <td style="text-align: center;">447</td> <td style="text-align: right;">14.90</td> </tr> <tr> <td style="text-align: left;">Row End</td> <td style="text-align: center;">384</td> <td style="text-align: right;">12.80</td> </tr> <tr> <td style="text-align: left;">Multi</td> <td style="text-align: center;">146</td> <td style="text-align: right;">4.87</td> </tr> <tr> <td style="text-align: left;">Inside</td> <td style="text-align: center;">3</td> <td style="text-align: right;">0.10</td> </tr> <tr> <td style="text-align: left;">Missing</td> <td style="text-align: center;">2</td> <td style="text-align: right;">0.07</td> </tr> <tr> <td style="text-align: left;">Default</td> <td style="text-align: center;">1</td> <td style="text-align: right;">0.03</td> </tr> <tr> <td style="text-align: left;">Town End</td> <td style="text-align: center;">1</td> <td style="text-align: right;">0.03</td> </tr> </tbody> </table> </div> <pre><code>plt.figure(figsize=(15, 8)) plots = sns.barplot(x=&quot;STRUD&quot;, y=&quot;Struct_Count&quot;, data=df2) for bar in plots.patches: # Using Matplotlib's annotate function and # passing the coordinates where the annotation shall be done plots.annotate(format(bar.get_height(), '.0f'), (bar.get_x() + bar.get_width() / 2, bar.get_height()), ha='center', va='center', size=13, xytext=(0, 5), textcoords='offset points') plt.title(&quot;Distribution of STRUCT&quot;) plt.show() </code></pre> <p>With the above code learnt from the forum, I am able to plot the 'struct_count' values, how can I plot the corresponding percentage values on the bars. Thanks for the help in advance.</p>
<p>You can play around with the <code>ax.bar_label</code> in order to set custom labels. No need for annotations and loops.</p> <p>I'm assuming the below example is what you mean by &quot;plot the corresponding percentage values on the bars&quot;, but it can be adjusted flexibly.</p> <p>Note that this doesn't show values smaller than 1%, since those would be overlapping the x-axis and the other label. This can also be easily adjusted below.</p> <p><a href="https://matplotlib.org/stable/gallery/lines_bars_and_markers/bar_label_demo.html" rel="nofollow noreferrer">The docs</a> have some instructive examples.</p> <pre><code>import seaborn as sns import matplotlib.pyplot as plt fig, ax = plt.subplots(1, 1, figsize=(15, 8)) plots = sns.barplot(x=&quot;STRUD&quot;, y=&quot;Struct_Count&quot;, data=df2, ax=ax) ax.bar_label(ax.containers[0]) ax.bar_label(ax.containers[0], labels=[f'{e}%' if e &gt; 1 else &quot;&quot; for e in df2.Perc], label_type=&quot;center&quot;) plt.title(&quot;Distribution of STRUCT&quot;) </code></pre> <p><a href="https://i.stack.imgur.com/ebmeD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ebmeD.png" alt="enter image description here" /></a></p>
python|pandas|matplotlib|seaborn
2
1,081
72,490,742
Key error in merge function between dataframes
<p>I have a question. I have two data set as under,</p> <pre><code>df1 Sl No Address 1 1111 2 2222 3 2345 4 7890 5 0987 6 3456 7 1233 </code></pre> <pre><code>df2 email Add. AA A123 AA 1111 AA 99999 BB a9999 BB 345689 BB 345699 CC 1233 </code></pre> <p>I'm trying to merge the two dataframe based on address column and bring column named email to the df1.</p> <p>I have renamed the column, and passed the merge function as under.</p> <pre><code>df2.rename(columns = {'Add.':'Address'}, inplace = True) df1 = df1.merge(df2['email'],how=&quot;left&quot;, on = &quot;Address&quot;) </code></pre> <p>I'm not sure why but i'm getting a key error</p> <pre><code>~\anaconda3\lib\site-packages\pandas\core\generic.py in _get_label_or_level_values(self, key, axis) 1682 values = self.axes[axis].get_level_values(key)._values 1683 else: -&gt; 1684 raise KeyError(key) 1685 1686 # Check for duplicates KeyError: 'Address' </code></pre> <p>I verified the files, column named &quot;Address&quot; is present in the source file. Not sure why merge function is saying otherwise. Note - Both the address columns in df1 and df2 are objects</p> <p>Help would be appreciated!</p>
<p>Do not select the email column when you <code>merge</code> , this will make the merged df become series</p> <pre><code>df1 = df1.merge(df2[['Address','email']], how = &quot;left&quot;, on = &quot;Address&quot;) </code></pre>
python|pandas|dataframe|merge
2
1,082
72,552,722
Why is the output shape (3,2,32,64) when I convolve my (3,3,64,64) input with another (3,3,64,64) input using tf.nn.conv2d?
<p>I'm using TensorFlow's tf.nn.conv2d to convolve a (3,3,64,64) input with a (3,3,64,64) filter using stride 2 and SAME padding. I was expecting the output shape to be (2,2,64,64), but I'm getting (3,2,32,64) instead. I think using stride 2 seems to be the cause but I'm not exactly sure why it's outputting the shape (3,2,32,64). Anyone familiar with this know why this is happening? Is this an issue with the stride, padding, or data format?</p>
<p>I just realized the expected input format for tf.nn.conv2d is (size, h, w, in_channel) when I was using (h, w, in_channel, size). To get my desired output shape (2,2,64,64), I can use tf.transpose to change the format of my input, convolve, and then transpose my output back to the original format:</p> <pre><code>input = tf.transpose(input, perm=(3, 0, 1, 2)) output = tf.nn.conv2d(input, filter, strides=2, padding='SAME') output = tf.transpose(output, perm=(1, 2, 3, 0)) </code></pre>
python|tensorflow
0
1,083
61,658,842
How to merge only on rows where there is no value in the rows of a certain column in pandas dataframe
<p>I have the following dataframe df1,</p> <pre><code> CompanyName Country Ticker .................... 0 Apple Inc. US AAPL 1 Microsoft US 2 Sony US 3 DBS SG D05 4 Razer HK 0700 5 General Electric US GE </code></pre> <p>Then I have a list of all the company names with just their tickers tickerdf,</p> <pre><code> CompanyName Ticker 0 Apple Inc. AAPL 1 Microsoft MSFT 2 Sony SNE 3 DBS D05.SI 4 Razer 0700.HK 5 General Electric GE </code></pre> <p>If I wanted to merge, on the company name I would do, </p> <pre><code>mergeddf = pd.merge(df1,tickerdf,on=['CompanyName'], how='left') </code></pre> <p>But if I did that I would end up with all the Ticker values from tickerdf1 like this</p> <pre><code> CompanyName Country Ticker .................... 0 Apple Inc. US AAPL 1 Microsoft US MSFT 2 Sony US SNE 3 DBS SG D05.SI 4 Razer HK 0700.HK 5 General Electric US GE </code></pre> <p>But, I want it to retain the values from the df1, basically only merge on the rows where there is no data on the ticker column, the output should look like this.</p> <pre><code> CompanyName Country Ticker .................... 0 Apple Inc. US AAPL 1 Microsoft US MSFT 2 Sony US SNE 3 DBS SG D05 4 Razer HK 0700 5 General Electric US GE </code></pre> <p>Is it possible to only merge data on rows where the Ticker column is empty?</p>
<p>You can do <code>fillna</code>:</p> <pre><code># Ticker by company s = df2.set_index('CompanyName')['Ticker'] df['Ticker'] = df['Ticker'].fillna(df['CompanyName'].map(s) ) </code></pre>
python|pandas|dataframe|join|merge
0
1,084
61,808,025
Match coloring of slices for series of pandas pie charts
<p>I have a pandas dataframe that looks like this : </p> <pre><code>df = pd.DataFrame( {'Judge': {0: 1, 1: 1, 2: 1, 3: 2, 4: 2, 5: 2, 6: 3, 7: 3, 8: 3}, 'Category': {0: 'A', 1: 'B', 2: 'C', 3: 'A', 4: 'B', 5: 'C', 6: 'A', 7: 'B', 8: 'C'}, 'Rating': {0: 'Excellent', 1: 'Very Good', 2: 'Good', 3: 'Very Good', 4: 'Very Good', 5: 'Very Good', 6: 'Excellent', 7: 'Very Good', 8: 'Excellent'}} ) </code></pre> <p>I'm plotting a pie chart to show the ratings of each judge like this:</p> <pre><code>grouped = df.groupby('Judge') for group in grouped: group[1].Rating.value_counts().plot(kind='pie', autopct="%1.1f%%") plt.legend(group[1].Rating.value_counts().index.values, loc="upper right") plt.title('Judge ' + str(group[0])) plt.axis('equal') plt.ylabel('') plt.tight_layout() plt.show() </code></pre> <p>Unfortunately, the colors of the slices are different for each judge. For example, Judge 1's "Excellent" slice is blue where Judge 2's "Very Good" slice is blue. </p> <p>How can enforce slice color consistency from plot to plot?</p>
<p>I think you can unstack and plot:</p> <pre><code>axes = (df.groupby('Judge').Rating.value_counts() .unstack('Judge') .plot.pie(subplots=True, figsize=(6,6), layout=(2,2)) ) # do some thing with the axes for ax in axes.ravel(): pass </code></pre> <p>Output:</p> <p><a href="https://i.stack.imgur.com/jXE7B.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jXE7B.png" alt="enter image description here"></a></p>
pandas|matplotlib|pandas-groupby
1
1,085
61,870,811
EfficientDet-Custom Dataset -StringToNumberOp could not correctly convert string
<p>As a beginner, I am trying to train my custom datasets with TensorFlow, but I get the following error when starting training:</p> <p><a href="https://i.stack.imgur.com/dmPD8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dmPD8.png" alt="enter image description here"></a></p> <p>Here is my command line:</p> <pre><code>python main.py --mode=train_and_eval --training_file_pattern=tfrecord/train.record --validation_file_pattern=tfrecord/test.record --model_name=efficientdet-d0 --model_dir=/tmp/efficientdet-d0-scratch --backbone_ckpt=efficientnet-b0 --train_batch_size=8 --eval_batch_size=8 --eval_samples=512 --num_examples_per_epoch=5717 --num_epochs=1 --hparams="num_classes=4,moving_average_decay=0" --use_tpu=False </code></pre>
<p>Answered here on github. Its an issue with tfrecord creations. In your tf record creation script, change source_id</p> <pre><code>'image/source_id': dataset_util.bytes_feature(input_image_filename.encode('utf8')), </code></pre> <p>to</p> <pre><code>'image/source_id': dataset_util.bytes_feature('0'.encode('utf8')), </code></pre> <p>You don't need to modify any filenames. Original link below. </p> <p><a href="https://github.com/google/automl/issues/307#issuecomment-626587210" rel="nofollow noreferrer">https://github.com/google/automl/issues/307#issuecomment-626587210</a></p>
python|tensorflow|error-handling|efficientnet
1
1,086
61,894,015
Pandas extractall merge
<p>Not sure if I should fix my regex pattern, or process more with pandas.</p> <p>Here's a mock setup:</p> <pre><code>import re import pandas as pd regex = r"(?P&lt;adv&gt;This)|(?P&lt;noun&gt;test)" texts = ["This is a test", "Random stuff with no match"] series = pd.Series(texts) </code></pre> <p>I want to find all matches for groups (<code>&lt;adv&gt;</code>, <code>&lt;noun&gt;</code> -- there are typically more than two). These groups are designed to be <strong>exclusive</strong> hence I would want to have only one row result with the captured string / NaN. </p> <p>Current output: multi-index rows, only for texts that have a match</p> <pre><code>&gt;&gt;&gt; print(series.str.extractall(regex)) adv noun match 0 0 This NaN 1 NaN test </code></pre> <p>Expected output: one row per input text, and aggregated matchs per group</p> <pre><code> adv noun 0 This test 1 NaN NaN </code></pre> <p>Any chance for a hand on this? Either fix the regex, or post-process with pandas. Thanks!</p>
<p>You can try;</p> <pre><code>series.str.extractall(regex).groupby(level=0).first() adv noun 0 This test </code></pre>
python|regex|pandas
2
1,087
57,985,196
How do you get and set a 1-D array with column indexes of a 2-D matrix?
<p>Suppose you have a matrix:</p> <pre><code>a = np.arange(9).reshape(3,3) array([[0, 1, 2], [3, 4, 5], [6, 7, 8]]) </code></pre> <p>and I want get or set over the values 1, 5, and 6, how would I do that. </p> <p>For example I thought doing</p> <pre class="lang-py prettyprint-override"><code># getting b = a[:, np.array([1,2,0])] # want b = [1,5,6] # setting a[:, np.array([1,2,0])] = np.array([9, 10, 11]) # want: # a = array([[0, 9, 2], # [3, 4, 10], # [11, 7, 8]]) </code></pre> <p>would do it, but that is not the case. Any thoughts on this?</p>
<p>Only a small tweak makes this work:</p> <pre><code>import numpy as np a = np.arange(9).reshape(3,3) # getting b = a[range(a.shape[0]), np.array([1,2,0])] # setting a[range(a.shape[0]), np.array([1,2,0])] = np.array([9, 10, 11]) </code></pre> <p>The reason why your code didn't work as expected is because you were indexing the x-axis with slices instead of indices. Slices mean take all rows, but specifying the index directly will get you the row you want for each index value.</p>
python|numpy
4
1,088
54,800,887
How to create a mm-yyyy column in pandas?
<p>Is there any way to extract mm-yyyy (or even quarter-yyyy) information from a datetime variable?</p> <p>My datetime column is df['event'] which is a dd-mm-yyyy. I know I can extract mm and year from my variable, but is there a way I can extract the two combined?</p> <p>I can resample my data to monthly frequency, but then I would only get the mean of my variable of interests, while I want to keep all my observations. </p>
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.dt.to_period.html" rel="nofollow noreferrer"><code>Series.dt.to_period</code></a> for month or quarter periods:</p> <pre><code>df = pd.DataFrame({'event':['01-01-2015','02-05-2015','01-08-2016','01-11-2015']}) df['event'] = pd.to_datetime(df['event'], dayfirst=True) df['months'] = df['event'].dt.to_period('M') df['quarters'] = df['event'].dt.to_period('Q') print (df) event months quarters 0 2015-01-01 2015-01 2015Q1 1 2015-05-02 2015-05 2015Q2 2 2016-08-01 2016-08 2016Q3 3 2015-11-01 2015-11 2015Q4 </code></pre> <p>If need strings in custom formats add <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.dt.strftime.html" rel="nofollow noreferrer"><code>strftime</code></a>:</p> <pre><code>df['months'] = df['event'].dt.strftime('%m-%Y') df['quarters'] = df['event'].dt.to_period('Q').dt.strftime('%q-%Y') print (df) event months quarters 0 2015-01-01 01-2015 1-2015 1 2015-05-02 05-2015 2-2015 2 2016-08-01 08-2016 3-2016 3 2015-11-01 11-2015 4-2015 </code></pre>
python|pandas|datetime
4
1,089
54,873,635
How do I plot a column vector as a contour with non uniform grid points that are also column vectors?
<p>I have data (<a href="https://drive.google.com/open?id=1Owokm3Xz31INJoA-BW2qsaPFFttxdaFk" rel="nofollow noreferrer">link</a>) in the form given below:</p> <p><strong>Input data format</strong></p> <pre><code>Header: 'x' 'y' 'a' 'b' x1 , y1 , ... (data) x2 , y2 , ... (data) x3 , y3 , ... (data) : : : xn-2, yn-2, ... (data) xn-1, yn-1, ... (data) xn , yn , ... (data) </code></pre> <p>They are non uniformly spaced grids, and I want to plot a filled contour that is colored by a, b in this case. Because of the arrangement of the points and non uniformity, I cannot use <code>np.meshgrid</code> (correct me if I am wrong). How do I plot a column vector as a contour with non uniform grid points that are also column vectors?</p> <p><strong>MWE</strong></p> <pre><code>import numpy as np import matplotlib.pyplot as plt data = np.genfromtxt('./plot_data.dat', skip_header=1, dtype = None, delimiter = '\t') test = np.column_stack([data[:,0],data[:,1],data[:,3]]) plt.imshow(test) plt.xlim([np.min(data[:,0]), np.max(data[:,0])]) plt.ylim([np.min(data[:,1]), np.max(data[:,1])]) plt.show() </code></pre>
<p><a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.meshgrid.html" rel="nofollow noreferrer"><code>numpy.meshgrid</code></a> should have no problem with non uniform 1-d domains. However, your 2-D data are irregularly distributed in the data file (see plots at the end of post). This has other issues. However, as suggested by @Thomas-Kühn, <a href="https://matplotlib.org/api/_as_gen/matplotlib.pyplot.tricontour.html" rel="nofollow noreferrer"><code>matplotlib.pyplot.tricontour</code></a> and <a href="https://matplotlib.org/api/_as_gen/matplotlib.pyplot.tricontourf.html" rel="nofollow noreferrer"><code>matplotlib.pyplot.tricontourf</code></a> can handle your data (below I use <code>tricontourf</code>):</p> <pre><code>import numpy as np import matplotlib.pyplot as plt x,y,a,b = np.loadtxt('./plot_data.dat', skiprows=1, delimiter = '\t').T plt.tricontourf(x,y,a) </code></pre> <p>Result of the <code>a</code> and <code>b</code> data are on the left and right (note the black separates the two figures, and white indicates a lack of data):</p> <p><a href="https://i.stack.imgur.com/H192z.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/H192z.png" alt="a on left, b on right"></a></p> <p>Due to the sparsity of your data, a scatter plot with <code>matplotlib.pyplot.scatter(x,y,c=a)</code> may be useful too (of course with adequate <code>x</code> and <code>y</code> axes labels):</p> <p><a href="https://i.stack.imgur.com/MeVNl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MeVNl.png" alt="scatter plot, a on left, b on right"></a></p> <p>Or combining the filled contour plot with some representation of where the points are (combining the previous <code>tricontourf</code> with a simple <code>plot</code> of the domain:</p> <p><a href="https://i.stack.imgur.com/Rgqw0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Rgqw0.png" alt="a on left and b on right"></a></p> <p>Finally, you may like, <a href="https://matplotlib.org/api/_as_gen/matplotlib.pyplot.hexbin.html" rel="nofollow noreferrer"><code>matplotlib.pyplot.hexbin</code></a>:</p> <p><a href="https://i.stack.imgur.com/rZx1Y.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rZx1Y.png" alt="enter image description here"></a></p>
python|numpy|matplotlib|contour
1
1,090
73,362,254
Pandas - drop n rows by column value
<p>I need to remove last n rows where Status equals 1</p> <pre><code>v = df[df['Status'] == 1].count() f = df[df['Status'] == 0].count() diff = v - f diff df2 = df[~df['Status'] == 1].tail(diff).all() #ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all() df2 </code></pre>
<p>Check whether <code>Status</code> is <code>eq</code>ual to <code>1</code> and get only those places where it is (<code>.loc[lambda s: s]</code> is doing that using boolean indexing). The <code>index</code> of <code>n</code> such rows from <code>tail</code> will be <code>drop</code>ped:</p> <pre class="lang-py prettyprint-override"><code>df.drop(df.Status.eq(1).loc[lambda s: s].tail(n).index) </code></pre> <p>sample run:</p> <pre class="lang-py prettyprint-override"><code>In [343]: df Out[343]: Status 0 1 1 2 2 3 3 2 4 1 5 1 6 1 7 2 In [344]: n Out[344]: 2 In [345]: df.Status.eq(1) Out[345]: 0 True 1 False 2 False 3 False 4 True 5 True 6 True 7 False Name: Status, dtype: bool In [346]: df.Status.eq(1).loc[lambda s: s] Out[346]: 0 True 4 True 5 True 6 True Name: Status, dtype: bool In [347]: df.Status.eq(1).loc[lambda s: s].tail(n) Out[347]: 5 True 6 True Name: Status, dtype: bool In [348]: df.Status.eq(1).loc[lambda s: s].tail(n).index Out[348]: Int64Index([5, 6], dtype='int64') In [349]: df.drop(df.Status.eq(1).loc[lambda s: s].tail(n).index) Out[349]: Status 0 1 1 2 2 3 3 2 4 1 7 2 </code></pre>
pandas
2
1,091
67,570,175
Missing trainable parameter when loading model from tensorflow hub
<p>I'm migrating our code from tensorflow 1 to tensorflow 2. One of the layers is embedding layer loaded as follows:</p> <pre class="lang-py prettyprint-override"><code>import tensorflow_hub as hub model_url = &quot;https://tfhub.dev/google/universal-sentence-encoder-multilingual/1&quot; self.use_embed = hub.Module(model_url, trainable=False) </code></pre> <p>In Tensorflow 2 this will become</p> <pre class="lang-py prettyprint-override"><code>import tensorflow_hub as hub model_url = &quot;https://tfhub.dev/google/universal-sentence-encoder-multilingual/3&quot; self.use_embed = hub.load(model_url) </code></pre> <p><a href="https://www.tensorflow.org/hub/api_docs/python/hub/Module" rel="nofollow noreferrer">because</a></p> <blockquote> <p>The hub.Module API works for TF1 only. For TF2, switch to plain SavedModels and hub.load().</p> </blockquote> <p>However, <code>load()</code> method does not support <code>trainable</code> parameter?</p> <p>What has happened to this parameter and how can I apply it in Tensorflow 2?</p>
<p>The <a href="https://www.tensorflow.org/hub/model_compatibility#compatibility_of_tf2_savedmodel" rel="nofollow noreferrer">Model Compatibility Guide</a> mentions that the parameter has a different name for <code>hub.load()</code> and <code>hub.KerasLayer()</code>:</p> <blockquote> <p>Use either hub.load:<br /> m = hub.load(handle)<br /> outputs = m(inputs, training=is_training)</p> </blockquote> <blockquote> <p>or hub.KerasLayer:<br /> m = hub.KerasLayer(handle, trainable=True)<br /> outputs = m(inputs)</p> </blockquote>
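<p>For illustration, a minimal sketch of both options with the model from the question (the layer arguments such as <code>input_shape=[]</code> and <code>dtype=tf.string</code> are assumptions for a text-input SavedModel, not taken from the official docs):</p> <pre><code>import tensorflow as tf
import tensorflow_hub as hub

model_url = "https://tfhub.dev/google/universal-sentence-encoder-multilingual/3"

# Option 1: hub.load, with any training behaviour passed at call time
# (per the compatibility guide, when the SavedModel supports it)
use_embed = hub.load(model_url)
embeddings = use_embed(["hello world"])

# Option 2: hub.KerasLayer, with trainability set at construction time
use_layer = hub.KerasLayer(model_url, trainable=False,
                           input_shape=[], dtype=tf.string)
model = tf.keras.Sequential([use_layer, tf.keras.layers.Dense(2)])
</code></pre>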
tensorflow-hub
2
1,092
60,284,375
How to create a new column in a dataframe, with either 1 or 0 based on percentage of results in previous columns?
<p>I have a data frame with 144 rows and 48 columns. It contains results from various prediction models as either 1 or 0. I want to go through a row, find the percentage of 1's in that row and add a new column with either 1 if the percentage is greater than 80, else 0.</p> <p>I know how to do this in excel with <strong>if</strong> and <strong>countif/count%</strong>, but here I don't really know how to do it. I hope I provided enough info, I am sorry if I did not. Thank you very much for any advice.</p>
<p>You can find the percentage of 1's in each row with:</p> <pre><code>df['percentage'] = df.mean(axis=1) </code></pre> <p>Then to create your new binary column you can use <code>np.where</code>:</p> <pre><code>df['new'] = np.where(df['percentage'] &gt; 0.8, 1, 0) </code></pre> <p>This works the same way as the excel <code>=IF</code> (condition, value if true, value if false).</p> <p>Example with dummy data:</p> <pre><code>import pandas as pd import numpy as np df = pd.DataFrame({'var1':[0,0,1],'var2':[0,1,1], 'var3':[1,1,1]}) df['percentage'] = df.mean(axis=1) df['new'] = np.where(df['percentage'] &gt; 0.8, 1, 0) print(df) </code></pre> <p>Output:</p> <pre><code> var1 var2 var3 percentage new 0 0 0 1 0.333333 0 1 0 1 1 0.666667 0 2 1 1 1 1.000000 1 </code></pre>
python|pandas|machine-learning
1
1,093
60,143,296
How to open excel template and save it to another path with pandas
<p>I have a simple excel template <a href="https://i.stack.imgur.com/KOPCd.png" rel="nofollow noreferrer">like this</a> with some different sheets,</p> <p>and I already have a few dataframes, one for each sheet, with the same header names. However, I want to write them into this template and save it to another directory. How can I do that using Python and pandas? Or any other solutions?</p>
<p>Pandas deals only with tabular data, not with cell formatting, so it cannot preserve the template's layout on its own.</p> <p>Use the <a href="https://openpyxl.readthedocs.io/en/stable/" rel="nofollow noreferrer">openpyxl library</a> instead.</p>
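<p>As a rough sketch (the file names, sheet name and dataframe below are placeholders): open the template with openpyxl, append each dataframe to its sheet, and save the workbook to a different path so the template's formatting is kept:</p> <pre><code>import pandas as pd
from openpyxl import load_workbook
from openpyxl.utils.dataframe import dataframe_to_rows

df = pd.DataFrame({'A': [1, 2], 'B': [3, 4]})  # placeholder dataframe

wb = load_workbook('template.xlsx')   # the template keeps its formatting
ws = wb['Sheet1']                     # one of the template's sheets

# write the dataframe rows below the template's existing header row
for row in dataframe_to_rows(df, index=False, header=False):
    ws.append(row)

wb.save('other_directory/filled_template.xlsx')
</code></pre>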
python-3.x|pandas
1
1,094
65,309,311
Merging two tables using left join, drop_duplicates doesn't work?
<p>I'm creating a merge from two tables. First table look like this:</p> <pre><code> a b c 0 32 171 28 1 32 172 28 2 1014 173 28 3 1014 179 28 4 1014 154 26 5 1049 156 26 </code></pre> <p>2nd table looks like this:</p> <pre><code> a d 0 32 fdxjgset 1 32 5j4j64j4 2 1014 4564jsr5 3 1014 5jhszxse 4 1014 kuts5555 5 1049 srh5jx5x </code></pre> <p>I'm expecting to get something like this:</p> <pre><code> a b c d 0 32 171 28 fdxjgset 1 32 172 28 5j4j64j4 2 1014 173 28 4564jsr5 3 1014 179 28 5jhszxse 4 1014 154 26 kuts5555 5 1049 156 26 srh5jx5x </code></pre> <p>But I'm getting duplicates of the duplicate 'a' rows like this:</p> <pre><code> a b c d 0 32 171 28 fdxjgset 1 32 172 28 5j4j64j4 2 32 171 28 fdxjgset 3 32 172 28 5j4j64j4 4 1014 173 28 4564jsr5 5 1014 179 28 5jhszxse 6 1014 154 26 kuts5555 7 1014 173 28 4564jsr5 8 1014 179 28 5jhszxse 9 1014 154 26 kuts5555 10 1049 156 26 srh5jx5x </code></pre> <p>My code is:</p> <pre class="lang-py prettyprint-override"><code>data_1 = pd.read_csv(&quot;First file.csv&quot;,encoding='latin1') data_2 = pd.read_csv(&quot;Second file.csv&quot;,encoding='latin1') data_2_dups = data_zips.drop_duplicates() #remove duplicates data = data_1.merge(data_2_dups, on='a', how = 'left', indicator=True) #data1 = data.drop_duplicates() data.to_csv(&quot;merged file.csv&quot;) </code></pre> <p>Now I did remove all duplicates like others said on different threads here, but that doesn't seem to work. It's still creating duplicate rows for duplicates. Any idea what am I doing wrong? Thanks.</p>
<p>If the indeces are exactly the same you can simply do this.</p> <pre><code>df3 = pd.merge(df[['a','b','c']], df2['d'], right_index=True, left_index=True) </code></pre>
python|pandas|dataframe|merge|left-join
1
1,095
65,118,696
Numpy Array - Advanced slicing using sum of a one hot encoded column
<p>I'm trying to slice an array based on a one-hot encoded column, so for an array like this:</p> <pre><code>import numpy as np arr = np.array([[0.1,1,0,0],[0.2,1,0,0],[0.3,1,0,0]]) </code></pre> <p>I would like to select from the first column, any rows before the cumulative sum of column 2 equals 3:</p> <pre><code>output = arr[0:2,0] </code></pre> <p>Is there any way of doing this without looping to create the cumulative sum of column 2?</p>
<p><code>np.cumsum</code> creates the cumulative sum of the one-hot column without looping, so no explicit loop is needed after all.</p>
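<p>A short sketch with the array from the question, keeping only the rows before the running total of column 2 reaches 3:</p> <pre><code>import numpy as np

arr = np.array([[0.1, 1, 0, 0], [0.2, 1, 0, 0], [0.3, 1, 0, 0]])

mask = np.cumsum(arr[:, 1]) &lt; 3   # True for rows before the cumulative sum hits 3
output = arr[mask, 0]             # array([0.1, 0.2]), same as arr[0:2, 0]
</code></pre>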
arrays|numpy|sum|slice
0
1,096
65,434,747
Python pandas timedelta64 fillna
<p>I have a dataset with hundreds of columns. Some of the columns have timedelta64 type. When I use</p> <pre><code>fillna(0) </code></pre> <p>I get the error <code>Passing integers to fillna for timedelta64[ns] dtype is no longer supported. To obtain the old behavior, pass pd.Timedelta(seconds=n)</code></p> <p>How can I fix this error?</p>
<p>You may want to update the columns separately accordingly to their dtypes:</p> <pre class="lang-py prettyprint-override"><code># Update inplace numeric columns df.update(df.select_dtypes('number').fillna(0)) # Update inplace timedelta columns df.update(df.select_dtypes('timedelta64[ns]').fillna(pd.Timedelta(seconds=0))) </code></pre> <p>Another possible syntax for updating:</p> <pre><code>df.loc[:, df.dtypes.eq(float)] = df.select_dtypes(float).fillna(0) </code></pre>
python|pandas|fillna
1
1,097
50,220,082
Does the seed function in numpy and random work need to be set in every module?
<p>I am calling</p> <pre><code>np.random.seed(seed) random.seed(seed) </code></pre> <p>in the <code>__main__</code> module <code>foo.py</code>. That module calls out to another module <code>bar.py</code> that also uses results from <code>np.random</code> and <code>random</code>. Does the latter also need to set the seed?</p>
<p>No. Using <code>np.random.seed(...)</code> sets a global random state.</p> <p>Usually this is not desirable. You may prefer to use a <code>np.random.RandomState()</code> instance in your code, so that you don't also seed the PRNGs for all other library code within your runtime.</p>
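<p>For example, a small sketch of that pattern (the helper function name is illustrative; <code>foo.py</code> and <code>bar.py</code> follow the question): create one seeded generator in the <code>__main__</code> module and pass it to the other module instead of relying on the global state:</p> <pre><code># foo.py
import numpy as np

import bar

rng = np.random.RandomState(42)   # or np.random.default_rng(42) on newer NumPy
print(bar.noisy_value(rng))

# bar.py
def noisy_value(rng):
    # uses the caller's generator, so no seeding is needed in this module
    return rng.normal()
</code></pre>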
python|numpy|random|random-seed
2
1,098
49,821,843
pandas.DataFrame: Filter rows of df A based on data in df B?
<pre><code>import pandas as pd C = {'name': ['Alice', 'Alice', 'Bob', 'Charlie'], 'phone': ['007', '1764', '1317210', '314159']} CONTACTS = pd.DataFrame(data = C) answer = {'guest_name': ['Alice', 'Bob', 'Charlie'], 'attending': [True, False, True]} guest_list = pd.DataFrame(data = answer) </code></pre> <hr> <p><strong>Illustrative context:</strong><br> I'm throwing a party, but there is a last-minute modification to the location. Thus, I want to call guests that said they will come.</p> <p>I have two <code>pandas.DataFrame</code>: </p> <ol> <li>my <code>CONTACTS</code>: with all my friends' name and phone. <br>Note that some friends (e.g. Alice) are listed twice if they have multiple phone numbers. This DataFrame is a constant and I cannot (or don't want to) modify it.</li> <li>my <code>guest_list</code>: with all my friends' name and attending status (a boolean). <br>Note that, unlike in <code>CONTACTS</code>, friends name are listed here only once. All friends <code>name</code> listed in <code>CONTACTS</code> exist in <code>guest_list</code> and vice-versa (in other words, <code>CONTACTS.name</code> is surjective onto <code>guest_list.guest_name</code>).</li> </ol> <p><strong>Problem:</strong><br> I want to create the <code>attending_guests_contact</code> DataFrame containing the contact of my friends who attend the party only.</p> <p><strong>Question:</strong><br> <strong><em>How to get a subset of <code>CONTACTS</code> based on <code>answer.attending</code> boolean?</em></strong></p> <p>Note that:</p> <ul> <li>I don't want to modify <code>CONTACTS</code>,</li> <li>I would prefer not to create a copy of <code>CONTACTS</code>, as I have 'a lot' of contacts (~10^3—10^4) and multiple parties thrown so it would be time and memory consuming (i.e. I would like to perform the sub-selection in line).</li> </ul> <hr> <p><strong>Edit:</strong> the two DataFrame don't share a same labeled column anymore.</p>
<p>Here's one way:</p> <pre><code>attending_guests_contact = CONTACTS.merge(guest_list[guest_list.attending], \ left_on="name", right_on="guest_name") print attending_guests_contact # name phone attending # 0 Alice 007 True # 1 Alice 1764 True # 2 Charlie 314159 True </code></pre> <p>This uses boolean indexing to filter <code>guest_list</code> to just the rows where <code>attending</code> is true, and then performs an inner join between <code>guest_list</code> and <code>CONTACTS</code> with <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.merge.html" rel="nofollow noreferrer"><code>.merge</code></a>.</p>
python|python-3.x|pandas|dataframe
2
1,099
63,835,941
How to make Python apply a custom function faster
<p>I have developed a custom function to classify product type in my dataframe.</p> <pre><code>def RST_FINAL_SECOND_FUNCTION(DF_FRAME_NAME):
    if (DF_FRAME_NAME['Column1'] == 'Yes'):
        return 'YES'
    elif (DF_FRAME_NAME['Column1'] == 'No'):
        return DF_FRAME_NAME['column2']

df['column3'] = df.apply(RST_FINAL_SECOND_FUNCTION, axis=1)
</code></pre> <p>Applying a function like this to a dataset with 200,000 rows takes a long time. What are the alternative ways to do the above? Can we use a lambda function?</p> <p>The sample data used is:</p> <pre><code>df = pd.DataFrame({'Column1': ['Yes','No','Yes','No','No','No','Yes'],
                   'Column2': [5,5,5,7,7,10,10]})
</code></pre>
<p>You can use <a href="https://numpy.org/doc/stable/reference/generated/numpy.where.html" rel="nofollow noreferrer">numpy.where</a></p> <pre><code>import numpy as np df['column3'] = np.where(df['Column1'].eq('Yes'), 'Yes', df['Column2']) print(df) </code></pre> <p><strong>Output:</strong></p> <pre><code> Column1 Column2 column3 0 Yes 5 Yes 1 No 5 5 2 Yes 5 Yes 3 No 7 7 4 No 7 7 5 No 10 10 6 Yes 10 Yes </code></pre>
pandas|python-3.x
1