column      dtype          min    max
----------  -------------  -----  ------
Unnamed: 0  int64          0      378k
id          int64          49.9k  73.8M
title       stringlengths  15     150
question    stringlengths  37     64.2k
answer      stringlengths  37     44.1k
tags        stringlengths  5      106
score       int64          -10    5.87k
300
45,541,188
Sample x number of days from a data frame with multiple entries per day in pandas
<p>I have a data frame with multiple time-indexed entries per day. I want to sample x number of days (e.g. 2 days) and then iterate forward 1 day to the end of the range of days. How can I achieve this?</p> <p>For example, if each day has more than one entry:</p> <pre><code> datetime value 2015-12-02 12:02:35 1 2015-12-02 12:02:44 2 2015-12-03 12:39:05 4 2015-12-03 12:39:12 7 2015-12-04 14:27:41 2 2015-12-04 14:27:45 8 2015-12-07 09:52:58 3 2015-12-07 13:52:15 5 2015-12-07 13:52:21 9 </code></pre> <p>I would like to iterate through, taking two-day samples at a time, e.g.</p> <pre><code> 2015-12-02 12:02:35 1 2015-12-02 12:02:44 2 2015-12-03 12:39:05 4 2015-12-03 12:39:12 7 </code></pre> <p>then</p> <pre><code> 2015-12-03 12:39:05 4 2015-12-03 12:39:12 7 2015-12-04 14:27:41 2 2015-12-04 14:27:45 8 </code></pre> <p>ending with</p> <pre><code> 2015-12-04 14:27:41 2 2015-12-04 14:27:45 8 2015-12-07 09:52:58 3 2015-12-07 13:52:15 5 2015-12-07 13:52:21 9 </code></pre> <p>Any help would be appreciated!</p>
<p>You can use:</p> <pre><code>#https://stackoverflow.com/a/6822773/2901002 from itertools import islice def window(seq, n=2): "Returns a sliding window (of width n) over data from the iterable" " s -&gt; (s0,s1,...s[n-1]), (s1,s2,...,sn), ... " it = iter(seq) result = tuple(islice(it, n)) if len(result) == n: yield result for elem in it: result = result[1:] + (elem,) yield result dfs = [df[df['datetime'].dt.day.isin(x)] for x in window(df['datetime'].dt.day.unique())] print (dfs[0]) datetime value 0 2015-12-02 12:02:35 1 1 2015-12-02 12:02:44 2 2 2015-12-03 12:39:05 4 3 2015-12-03 12:39:12 7 print (dfs[1]) datetime value 2 2015-12-03 12:39:05 4 3 2015-12-03 12:39:12 7 4 2015-12-04 14:27:41 2 5 2015-12-04 14:27:45 8 </code></pre>
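<p>The same approach works across month boundaries if you window over whole (normalized) dates instead of day-of-month numbers - a small variant, assuming <code>df['datetime']</code> is already a datetime column:</p> <pre><code>days = df['datetime'].dt.normalize().unique()            # one entry per calendar day
dfs = [df[df['datetime'].dt.normalize().isin(pair)]
       for pair in zip(days, days[1:])]                  # consecutive two-day windows
print(dfs[0])
</code></pre>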
python|pandas
1
301
62,646,775
PyTorch arguments not valid on android
<p>I want to use <a href="https://github.com/wolverinn/Depth-Estimation-PyTorch" rel="nofollow noreferrer">this</a> model in my android app. But when I start the app it falls with an error. The model works fine on my PC.</p> <h2>To Reproduce</h2> <p>Steps to reproduce the behavior:</p> <ol> <li>Clone <a href="https://github.com/wolverinn/Depth-Estimation-PyTorch" rel="nofollow noreferrer">repository</a> and use instructions in readme to run the model.</li> <li>Add code below to save the model</li> </ol> <pre><code> traced_script_module = torch.jit.trace(i2d, data) traced_script_module.save(&quot;i2d.pt&quot;) </code></pre> <ol start="3"> <li>I used PyTorch Android DemoApp <a href="https://github.com/pytorch/android-demo-app/tree/master/PyTorchDemoApp" rel="nofollow noreferrer">link</a> to run the model on android.</li> </ol> <p>Error:</p> <pre><code> E/AndroidRuntime: FATAL EXCEPTION: ModuleActivity Process: com.hypersphere.depthvisor, PID: 4765 com.facebook.jni.CppException: Arguments for call are not valid. The following variants are available: aten::upsample_bilinear2d(Tensor self, int[2] output_size, bool align_corners) -&gt; (Tensor): Expected at most 3 arguments but found 5 positional arguments. aten::upsample_bilinear2d.out(Tensor self, int[2] output_size, bool align_corners, *, Tensor(a!) out) -&gt; (Tensor(a!)): Argument out not provided. The original call is: D:\ProgramData\Anaconda\envs\ml3 torch\lib\site-packages\torch\nn\functional.py(3013): interpolate D:\ProgramData\Anaconda\envs\ml3 torch\lib\site-packages\torch\nn\functional.py(2797): upsample &lt;ipython-input-1-e1d92bec6901&gt;(75): _upsample_add &lt;ipython-input-1-e1d92bec6901&gt;(89): forward D:\ProgramData\Anaconda\envs\ml3 torch\lib\site-packages\torch\nn\modules\module.py(534): _slow_forward D:\ProgramData\Anaconda\envs\ml3 torch\lib\site-packages\torch\nn\modules\module.py(548): __call__ D:\ProgramData\Anaconda\envs\ml3 torch\lib\site-packages\torch\jit\__init__.py(1027): trace_module D:\ProgramData\Anaconda\envs\ml3 torch\lib\site-packages\torch\jit\__init__.py(875): trace &lt;ipython-input-12-19d2ccccece4&gt;(16): &lt;module&gt; D:\ProgramData\Anaconda\envs\ml3 torch\lib\site-packages\IPython\core\interactiveshell.py(3343): run_code D:\ProgramData\Anaconda\envs\ml3 torch\lib\site-packages\IPython\core\interactiveshell.py(3263): run_ast_nodes D:\ProgramData\Anaconda\envs\ml3 torch\lib\site-packages\IPython\core\interactiveshell.py(3072): run_cell_async D:\ProgramData\Anaconda\envs\ml3 torch\lib\site-packages\IPython\core\async_helpers.py(68): _pseudo_sync_runner D:\ProgramData\Anaconda\envs\ml3 torch\lib\site-packages\IPython\core\interactiveshell.py(2895): _run_cell D:\ProgramData\Anaconda\envs\ml3 torch\lib\site-packages\IPython\core\interactiveshell.py(2867): run_cell D:\ProgramData\Anaconda\envs\ml3 torch\lib\site-packages\ipykernel\zmqshell.py(536): run_cell D:\ProgramData\Anaconda\envs\ml3 torch\lib\site-packages\ipykernel\ipkernel.py(300): do_execute D:\ProgramData\Anaconda\envs\ml3 torch\lib\site-packages\tornado\gen.py(209): wrapper D:\ProgramData\Anaconda\envs\ml3 torch\lib\site-packages\ipykernel\kernelbase.py(545): execute_request D:\ProgramData\Anaconda\envs\ml3 torch\lib\site-packages\tornado\gen.py(209): wrapper D:\ProgramData\Anaconda\envs\ml3 torch\lib\site-packages\ipykernel\kernelbase.py(268): dispatch_shell D:\ProgramData\Anaconda\envs\ml3 torch\lib\site-packages\tornado\gen.py(209): wrapper D:\ProgramData\Anaconda\envs\ml3 torch\lib\site-packages\ipykernel\kernelbase.py(365): process_one 
D:\ProgramData\Anaconda\envs\ml3 torch\lib\site-packages\tornado\gen.py(748): run D:\ProgramData\Anaconda\envs\ml3 torch\lib\site-packages\tornado\gen.py(787): inner D:\ProgramData\Anaconda\envs\ml3 torch\lib\site-packages\tornado\ioloop.py(743): _run_callback D:\ProgramData\Anaconda\envs\ml3 torch\lib\site-packages\tornado\ioloop.py(690): &lt;lambda&gt; D:\ProgramData\Anaconda\envs\ml3 torch\lib\asyncio\events.py(88): _run D:\ProgramData\Anaconda\envs\ml3 torch\lib\asyncio\base_events.py(1786): _run_once D:\ProgramData\Anaconda\envs\ml3 torch\lib\asyncio\base_events.py(541): run_forever D:\ProgramData\Anaconda\envs\ml3 torch\lib\site-packages\tornado\platform\asyncio.py(149): start D:\ProgramData\Anaconda\envs\ml3 torch\lib\site-packages\ipykernel\kernelapp.py(597): start D:\ProgramData\Anaconda\envs\ml3 torch\lib\site-packages\traitlets\config\application.py(664): launch_instance D:\ProgramData\Anaconda\envs\ml3 torch\lib\site-packages\ipykernel_launcher.py(16): &lt;module&gt; D:\ProgramData\Anaconda\envs\ml3 torch\lib\runpy.py(85): _run_code D:\ProgramData\Anaconda\envs\ml3 torch\lib\runpy.py(193): _run_module_as_main Serialized File &quot;code/__torch__/___torch_mangle_907.py&quot;, line 39 _17 = ops.prim.NumToTensor(torch.size(_16, 2)) _18 = ops.prim.NumToTensor(torch.size(_16, 3)) 2020-06-29 23:50:09.536 4765-4872/com.hypersphere.depthvisor E/AndroidRuntime: _19 = torch.upsample_bilinear2d(_15, [int(_17), int(_18)], False, None, None) ~~~~~~~~~~~~~~~~~~~~~~~~~ &lt;--- HERE input = torch.add(_19, _16, alpha=1) _20 = (_6).forward(input, ) at org.pytorch.NativePeer.initHybrid(Native Method) at org.pytorch.NativePeer.&lt;init&gt;(NativePeer.java:18) at org.pytorch.Module.load(Module.java:23) at com.hypersphere.depthvisor.MainActivity.analyzeImage(MainActivity.java:56) at com.hypersphere.depthvisor.MainActivity.analyzeImage(MainActivity.java:21) at com.hypersphere.depthvisor.AbstractCameraXActivity.lambda$setupCameraX$2$AbstractCameraXActivity(AbstractCameraXActivity.java:86) at com.hypersphere.depthvisor.-$$Lambda$AbstractCameraXActivity$KgCZmrRflavSsq5aSHYb53Fi-P4.analyze(Unknown Source:2) at androidx.camera.core.ImageAnalysisAbstractAnalyzer.analyzeImage(ImageAnalysisAbstractAnalyzer.java:57) at androidx.camera.core.ImageAnalysisNonBlockingAnalyzer$1.run(ImageAnalysisNonBlockingAnalyzer.java:135) at android.os.Handler.handleCallback(Handler.java:873) at android.os.Handler.dispatchMessage(Handler.java:99) at android.os.Looper.loop(Looper.java:214) at android.os.HandlerThread.run(HandlerThread.java:65) </code></pre> <h2>Environment</h2> <pre><code>PyTorch version: 1.5.0 Is debug build: No CUDA used to build PyTorch: Could not collect OS: Windows 10 Pro GCC version: Could not collect CMake version: Could not collect Python version: 3.7 Is CUDA available: No CUDA runtime version: 10.2.89 GPU models and configuration: Could not collect Nvidia driver version: Could not collect cuDNN version: Could not collect Versions of relevant libraries: [pip3] numpy==1.18.5 [pip3] torch==1.5.0 [pip3] torchvision==0.6.0 [conda] _pytorch_select 0.1 cpu_0 [conda] blas 1.0 mkl [conda] cudatoolkit 10.2.89 h74a9793_1 [conda] libmklml 2019.0.5 0 [conda] mkl 2019.4 245 [conda] mkl-service 2.3.0 py37hb782905_0 [conda] mkl_fft 1.1.0 py37h45dec08_0 [conda] mkl_random 1.1.0 py37h675688f_0 [conda] numpy 1.18.5 py37h6530119_0 [conda] numpy-base 1.18.5 py37hc3f5095_0 [conda] pytorch 1.5.0 cpu_py37h9f948e0_0 [conda] torchvision 0.6.0 py37_cu102 pytorch Android Studio 4.0 Device: Samsung s8 plus Android version: 9 
</code></pre>
<p>The PyTorch version on my PC was 1.5, while the Android dependencies were 1.4. So the solution is to match them:</p> <pre><code>implementation 'org.pytorch:pytorch_android:1.5.0' implementation 'org.pytorch:pytorch_android_torchvision:1.5.0' </code></pre>
android|pytorch
0
302
54,581,339
Pass series instead of integer to pandas offsets
<p>I have a dataframe (df) with a date and a number. I want to add the number to the date. How do I add the df['additional_days'] series to the df['start_date'] series using pd.offsets()? Is there a better way to do this?</p> <blockquote> <p>start_date additional_days</p> <p>2018-03-29 360</p> <p>2018-07-31 0</p> <p>2018-11-01 360</p> <p>2016-11-03 720</p> <p>2018-12-04 480</p> </blockquote> <p>I get an error when I try</p> <pre><code>df['start_date'] + pd.offsets.Day(df['additional_days']) </code></pre> <p>Here is the error</p> <pre><code>TypeError Traceback (most recent call last) pandas/_libs/tslibs/offsets.pyx in pandas._libs.tslibs.offsets._BaseOffset._validate_n() /opt/conda/lib/python3.6/site-packages/pandas/core/series.py in wrapper(self) 117 raise TypeError("cannot convert the series to " --&gt; 118 "{0}".format(str(converter))) 119 TypeError: cannot convert the series to &lt;class 'int'&gt; During handling of the above exception, another exception occurred: TypeError Traceback (most recent call last) &lt;ipython-input-76-03920804db29&gt; in &lt;module&gt; ----&gt; 1 df_test['start_date'] + pd.offsets.Day(df_test['additional_days']) /opt/conda/lib/python3.6/site-packages/pandas/tseries/offsets.py in __init__(self, n, normalize) 2219 def __init__(self, n=1, normalize=False): 2220 # TODO: do Tick classes with normalize=True make sense? -&gt; 2221 self.n = self._validate_n(n) 2222 self.normalize = normalize 2223 pandas/_libs/tslibs/offsets.pyx in pandas._libs.tslibs.offsets._BaseOffset._validate_n() TypeError: `n` argument must be an integer, got &lt;class 'pandas.core.series.Series'&gt; </code></pre>
<p>Use <code>pd.to_timedelta</code></p> <pre><code>import pandas as pd #df['start_date'] = pd.to_datetime(df.start_date) df['start_date'] + pd.to_timedelta(df.additional_days, unit='d') #0 2019-03-24 #1 2018-07-31 #2 2019-10-27 #3 2018-10-24 #4 2020-03-28 #dtype: datetime64[ns] </code></pre>
python|pandas
2
303
73,781,386
tensorflow sequential model outputting nan
<p>Why is my code outputting nan? I'm using a sequential model with a 30x1 input vector and a single value output. I'm using tensorflow and python. This is one of my firs</p> <pre><code>While True: # Define a simple sequential model def create_model(): model = tf.keras.Sequential([ keras.layers.Dense(30, activation='relu',input_shape=(30,)), keras.layers.Dense(12, activation='relu'), keras.layers.Dropout(0.2), keras.layers.Dense(7, activation='relu'), keras.layers.Dense(1, activation = 'sigmoid') ]) model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=[tf.keras.metrics.SparseCategoricalAccuracy()]) return model # Create a basic model instance model = create_model() # Display the model's architecture model.summary() train_labels=[1] test_labels=[1] train_images= [[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30]] test_images=[[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30]] model.fit(train_images, train_labels, epochs=10, validation_data=(test_images, test_labels), verbose=1) print('predicted:',model.predict(train_images)) </code></pre>
<p>You are using SparseCategoricalCrossentropy. It expects labels to be integers starting from 0. You only have the label <code>1</code>, but that implies at least two categories - 0 and 1. So you need at least two neurons in the last layer:</p> <pre><code>keras.layers.Dense(2, activation = 'sigmoid') </code></pre> <p>(If your goal is classification, you should consider using softmax instead of sigmoid, without <code>from_logits=True</code>.)</p>
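<p>Putting that together, one possible fix - a sketch, not the only option - is to keep <code>from_logits=True</code> and let the last layer output two raw logits (no activation):</p> <pre><code>model = tf.keras.Sequential([
    keras.layers.Dense(30, activation='relu', input_shape=(30,)),
    keras.layers.Dense(12, activation='relu'),
    keras.layers.Dropout(0.2),
    keras.layers.Dense(7, activation='relu'),
    keras.layers.Dense(2)   # two classes, raw logits to pair with from_logits=True
])
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])
</code></pre>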
keras|deep-learning|tensorflow2.0
0
304
73,836,043
Extracting keys from dataframe of json
<p>I'm sorry, I am new to Python and wondering if anyone can help me with extracting data? I've been trying to extract data from a df with json-content.</p> <pre><code>0 [{'@context': 'https://schema.org', '@type': '... 1 [{'@context': 'https://schema.org', '@type': '... 2 [{'@context': 'https://schema.org', '@type': '... 3 [{'@context': 'https://schema.org', '@type': '... 4 [{'@context': 'https://schema.org', '@type': '... 5 [{'@context': 'https://schema.org', '@type': '... </code></pre> <p>So rows look like this:</p> <pre><code>&quot;[{'@context': 'https://schema.org', '@type': 'Audiobook', 'bookFormat': 'AudiobookFormat', 'name': 'Balle-Lars og mordet i Ugledige 1858', 'description': '&lt;p&gt;I 1858 blev der begået et mord i Ugledige mellem Præstø og Vordingborg. Den 58-årige enke Ane Marie Hemmingsdatter blev skudt.&lt;/p&gt;&lt;p&gt;Lars Peter Poulsen (1866-1941) var en dansk lærer og forfatter.&lt;/p&gt;', 'image': '/images/e/200x200/0002496352.jpg', 'author': [{'@type': 'Person', 'name': 'L.P. Poulsen'}], 'readBy': [], 'publisher': {'@type:': 'Organization', 'name': ''}, 'isbn': '', 'datePublished': '', 'inLanguage': 'da', 'aggregateRating': {'@type': 'AggregateRating', 'ratingValue': 3.56, 'ratingCount': 9}}, {'@context': 'https://schema.org', '@type': 'Book', 'bookFormat': 'EBook', 'name': 'Balle-Lars og mordet i Ugledige 1858', 'description': '&lt;p&gt;I 1858 blev der begået et mord i Ugledige mellem Præstø og Vordingborg. Den 58-årige enke Ane Marie Hemmingsdatter blev skudt.&lt;/p&gt;', 'image': '/images/e/200x200/0002496352.jpg', 'author': [{'@type': 'Person', 'name': 'L.P. Poulsen'}], 'publisher': {'@type:': 'Organization', 'name': 'SAGA Egmont'}, 'isbn': '9788726519877', 'datePublished': '2021-06-21', 'inLanguage': 'da', 'aggregateRating': {'@type': 'AggregateRating', 'ratingValue': 3.56, 'ratingCount': 9}}]&quot; </code></pre> <p>What I want is to get some of the keys (e.g. 'name') from the json data, for all rows. I've been trying:</p> <pre><code>for d in unsorted: print (d[&quot;name&quot;]) </code></pre> <p>... and variations. Is that the way to go (somehow) or should I convert everything to json and go from there?</p> <p>Thank you!</p>
<p>Considering that the dataframe looks like this</p> <pre><code>df = pd.DataFrame({'json_data': ['[{&quot;@context&quot;: &quot;https://schema.org&quot;, &quot;@type&quot;: &quot;Audiobook&quot;, &quot;bookFormat&quot;: &quot;AudiobookFormat&quot;, &quot;name&quot;: &quot;Balle-Lars og mordet i Ugledige 1858&quot;, &quot;description&quot;: &quot;&lt;p&gt;I 1858 blev der begået et mord i Ugledige mellem Præstø og Vordingborg. Den 58-årige enke Ane Marie Hemmingsdatter blev skudt, da hun stod ved vinduet i sin stue efter at være kommet hjem fra et begravelsesg.&quot;}]', '[{&quot;@context&quot;: &quot;https://schema.org&quot;, &quot;@type&quot;: &quot;Audiobook&quot;, &quot;bookFormat&quot;: &quot;AudiobookFormat&quot;, &quot;name&quot;: &quot;Balle-Lars og mordet i Ugledige 1858&quot;, &quot;description&quot;: &quot;&lt;p&gt;I 1858 blev der begået et mord i Ugledige mellem Præstø og Vordingborg. Den 58-årige enke Ane Marie Hemmingsdatter blev skudt, da hun stod ved vinduet i sin stue efter at være kommet hjem fra et begravelsesg.&quot;}]', '[{&quot;@context&quot;: &quot;https://schema.org&quot;, &quot;@type&quot;: &quot;Audiobook&quot;, &quot;bookFormat&quot;: &quot;AudiobookFormat&quot;, &quot;name&quot;: &quot;Balle-Lars og mordet i Ugledige 1858&quot;, &quot;description&quot;: &quot;&lt;p&gt;I 1858 blev der begået et mord i Ugledige mellem Præstø og Vordingborg. Den 58-årige enke Ane Marie Hemmingsdatter blev skudt, da hun stod ved vinduet i sin stue efter at være kommet hjem fra et begravelsesg.&quot;}]', '[{&quot;@context&quot;: &quot;https://schema.org&quot;, &quot;@type&quot;: &quot;Audiobook&quot;, &quot;bookFormat&quot;: &quot;AudiobookFormat&quot;, &quot;name&quot;: &quot;Balle-Lars og mordet i Ugledige 1858&quot;, &quot;description&quot;: &quot;&lt;p&gt;I 1858 blev der begået et mord i Ugledige mellem Præstø og Vordingborg. Den 58-årige enke Ane Marie Hemmingsdatter blev skudt, da hun stod ved vinduet i sin stue efter at være kommet hjem fra et begravelsesg.&quot;}]', '[{&quot;@context&quot;: &quot;https://schema.org&quot;, &quot;@type&quot;: &quot;Audiobook&quot;, &quot;bookFormat&quot;: &quot;AudiobookFormat&quot;, &quot;name&quot;: &quot;Balle-Lars og mordet i Ugledige 1858&quot;, &quot;description&quot;: &quot;&lt;p&gt;I 1858 blev der begået et mord i Ugledige mellem Præstø og Vordingborg. Den 58-årige enke Ane Marie Hemmingsdatter blev skudt, da hun stod ved vinduet i sin stue efter at være kommet hjem fra et begravelsesg.&quot;}]'] }) [Out]: json_data 0 [{&quot;@context&quot;: &quot;https://schema.org&quot;, &quot;@type&quot;: &quot;... 1 [{&quot;@context&quot;: &quot;https://schema.org&quot;, &quot;@type&quot;: &quot;... 2 [{&quot;@context&quot;: &quot;https://schema.org&quot;, &quot;@type&quot;: &quot;... 3 [{&quot;@context&quot;: &quot;https://schema.org&quot;, &quot;@type&quot;: &quot;... 4 [{&quot;@context&quot;: &quot;https://schema.org&quot;, &quot;@type&quot;: &quot;... 
</code></pre> <p>And assuming that OP's goal is just to obtain a list with the names, one can get it as follows</p> <pre><code>import json as js name_list = [js.loads(x)[0]['name'] for x in df['json_data'].tolist()] [Out]: ['Balle-Lars og mordet i Ugledige 1858', 'Balle-Lars og mordet i Ugledige 1858', 'Balle-Lars og mordet i Ugledige 1858', 'Balle-Lars og mordet i Ugledige 1858', 'Balle-Lars og mordet i Ugledige 1858'] </code></pre> <p>If OP wants to store the names on a different column, called <code>name</code>, of the dataframe <code>df</code>, then one can do the following</p> <pre><code>import json as js df['name'] = [js.loads(x)[0]['name'] for x in df['json_data'].tolist()] [Out]: json_data name 0 [{&quot;@context&quot;: &quot;https://schema.org&quot;, &quot;@type&quot;: &quot;... Balle-Lars og mordet i Ugledige 1858 1 [{&quot;@context&quot;: &quot;https://schema.org&quot;, &quot;@type&quot;: &quot;... Balle-Lars og mordet i Ugledige 1858 2 [{&quot;@context&quot;: &quot;https://schema.org&quot;, &quot;@type&quot;: &quot;... Balle-Lars og mordet i Ugledige 1858 3 [{&quot;@context&quot;: &quot;https://schema.org&quot;, &quot;@type&quot;: &quot;... Balle-Lars og mordet i Ugledige 1858 4 [{&quot;@context&quot;: &quot;https://schema.org&quot;, &quot;@type&quot;: &quot;... Balle-Lars og mordet i Ugledige 1858 </code></pre>
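<p>One detail: the rows shown in the question use Python-style single quotes rather than strict JSON, and <code>json.loads</code> will fail on those. In that case <code>ast.literal_eval</code> from the standard library can parse each cell instead:</p> <pre><code>import ast

df['name'] = [ast.literal_eval(x)[0]['name'] for x in df['json_data']]
</code></pre>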
python|json|pandas|key|extract
1
305
73,803,762
Python: Add dictionary to an existing dataframe where dict.keys() match dataframe row
<p>I'm trying to add a dictionary to a 26x26 dataframe whose rows and columns both go from a to z: <a href="https://i.stack.imgur.com/gisCe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gisCe.png" alt="my existing dataframe" /></a></p> <p>The dictionary I want to put in the dataframe is:</p> <p><code>{'b': 74, 'c': 725, 'd': 93, 'e': 601, 'f': 134, 'g': 200, 'h': 1253, 'i': 355, 'j': 5, 'k': 2, 'l': 324, 'm': 756, 'n': 317, 'o': 88, 'p': 227, 'r': 608, 's': 192, 't': 456, 'u': 152, 'v': 142, 'w': 201, 'x': 51, 'y': 10, 'z': 53}</code></p> <p>I want each of my dictionary keys to match the row name of my dataframe, meaning I want this dictionary to be added vertically under <strong>column a</strong>. As you can see, 'a' and 'q' are missing from my dictionary, and I want them to be 0 instead of being skipped. How can I achieve this?</p>
<p>You can use:</p> <pre><code>df.loc[list(dic), 'a'] = pd.Series(dic) </code></pre> <p>Or:</p> <pre><code>df.loc[list(dic), 'a'] = list(dic.values()) </code></pre> <p>Full example:</p> <pre><code>dic = {'b': 74, 'c': 725, 'd': 93, 'e': 601, 'f': 134, 'g': 200, 'h': 1253, 'i': 355, 'j': 5, 'k': 2, 'l': 324, 'm': 756, 'n': 317, 'o': 88, 'p': 227, 'r': 608, 's': 192, 't': 456, 'u': 152, 'v': 142, 'w': 201, 'x': 51, 'y': 10, 'z': 53} from string import ascii_lowercase idx = list(ascii_lowercase) df = pd.DataFrame(0, index=idx, columns=idx) df.loc[list(dic), 'a'] = pd.Series(dic) print(df) </code></pre> <p>output:</p> <pre><code> a b c d e f g h i j ... q r s t u v w x y z a 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0 b 74 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0 c 725 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0 d 93 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0 e 601 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0 f 134 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0 g 200 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0 h 1253 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0 i 355 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0 j 5 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0 k 2 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0 l 324 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0 m 756 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0 n 317 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0 o 88 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0 p 227 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0 q 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0 r 608 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0 s 192 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0 t 456 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0 u 152 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0 v 142 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0 w 201 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0 x 51 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0 y 10 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0 z 53 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0 [26 rows x 26 columns] </code></pre>
python|pandas|dataframe|dictionary
0
306
71,203,685
Splitting strings in dataframe
<p>I have a column with strings. I want to split and create a new column in the dataframe.</p> <p>For example:</p> <pre><code>2022-01-28 15-43-45 150 </code></pre> <p>I want to split after <code>45</code> and create a new column.</p>
<p>We can use <code>str.extract</code> here:</p> <pre class="lang-py prettyprint-override"><code>df[&quot;new_col&quot;] = df[&quot;filename&quot;].str.extract(r'(\d+)$') df[&quot;filename&quot;] = df[&quot;filename&quot;].str.extract(r'(.*)\s+\d+$') </code></pre>
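<p>A self-contained example of the same idea (assuming the column is called <code>filename</code>, which the question does not state, and using <code>expand=False</code> so <code>str.extract</code> returns a Series):</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd

df = pd.DataFrame({'filename': ['2022-01-28 15-43-45 150']})
df['new_col'] = df['filename'].str.extract(r'(\d+)$', expand=False)        # '150'
df['filename'] = df['filename'].str.extract(r'(.*)\s+\d+$', expand=False)  # '2022-01-28 15-43-45'
print(df)
</code></pre>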
pandas|string|dataframe|split
0
307
71,288,635
Is there a way to convert dates (with different formats) into a standardized format in python?
<p>I have a column called &quot;date&quot; which is an object, and it holds dates in very different formats like dd.m.yy, dd.mm.yyyy, dd/mm/yyyy, dd/mm, m/d/yyyy etc. as below. Obviously, simply using df['date'] = pd.to_datetime(df['date']) will not work. For messy date values like that, is there any way to standardize and convert the dates into one single format?</p> <pre><code>date 17.2.22 # means Feb 17 2022 23.02.22 # means Feb 23 2022 17/02/2022 # means Feb 17 2022 18.2.22 # means Feb 18 2022 2/22/2022 # means Feb 22 2022 3/1/2022 # means March 1 2022 &lt;more messy different format&gt; </code></pre>
<p>Coerce the dates to datetime, allow invalid entries to be turned into nulls, and let pandas infer the format. Code below:</p> <pre><code>df['date'] = pd.to_datetime(df['date'], errors='coerce', infer_datetime_format=True) date 0 2022-02-17 1 2022-02-23 2 2022-02-17 3 2022-02-18 4 2022-02-22 5 2022-03-01 </code></pre>
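<p>One caveat: the sample mixes day-first entries (17.2.22) with month-first ones (2/22/2022, 3/1/2022), and no single <code>dayfirst</code> setting resolves both. If - and this is an assumption about this particular data - the dot-separated dates are day-first and the slash-separated ones are month-first, a per-row sketch with dateutil could be:</p> <pre><code>import pandas as pd
from dateutil import parser

s = pd.Series(['17.2.22', '23.02.22', '17/02/2022', '18.2.22', '2/22/2022', '3/1/2022'])

# dotted entries treated as day-first, slashed entries as month-first
parsed = s.map(lambda x: parser.parse(x, dayfirst='.' in x))
print(pd.to_datetime(parsed))   # 2022-02-17, 2022-02-23, 2022-02-17, 2022-02-18, 2022-02-22, 2022-03-01
</code></pre>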
python|pandas|date|python-re
1
308
52,256,503
Why does tf.variable_scope has a default_name argument?
<p>The first two arguments of <a href="https://www.tensorflow.org/api_docs/python/tf/variable_scope#__init__" rel="nofollow noreferrer"><code>tf.variable_scope</code>'s <code>__init__</code> method</a> are</p> <blockquote> <ul> <li><code>name_or_scope</code>: <code>string</code> or <code>VariableScope</code>: the scope to open.</li> <li><code>default_name</code>: The default name to use if the <code>name_or_scope</code> argument is <code>None</code>, this name will be uniquified. If <code>name_or_scope</code> is provided it won't be used and therefore it is not required and can be <code>None</code>.</li> </ul> </blockquote> <p>If I understand correctly, this argument is equivalent to (and therefore could be easily replaced with)</p> <pre><code>if name_or_scope is None: name_or_scope = default_name with tf.variable_scope(name_or_scope, ...): ... </code></pre> <p>Now, I am not sure I understand why it was deemed necessary to have this special treatment for the scope name — after all, many parameters could use a parameterizable default argument.</p> <p>So what is the rationale behind the introduction of this argument?</p>
<p>You are right. It is just a convenience.</p> <p>Take the case of the TensorFlow models defined <a href="https://github.com/tensorflow/models/tree/master/research/slim/nets" rel="nofollow noreferrer">here</a>. If you look specifically at <a href="https://github.com/tensorflow/models/blob/master/research/slim/nets/inception_v4.py#L257-L334" rel="nofollow noreferrer">InceptionV4.py</a>, you will see that it has a scope argument in its definition. Just below, you will see that <code>InceptionV4</code> has been passed as a default scope. Strictly speaking it was not required to even have a <code>scope</code> argument in the definition, but it makes sense if somebody passes <code>scope=None</code>.</p> <p>Think about it. Model definitions can get very complex very quickly. A default scope argument therefore lets the model definition writer enforce some deliberate structure in the model definition, even if the end user is naive about it.</p>
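<p>One concrete difference from the manual <code>if name_or_scope is None</code> replacement is the uniquification mentioned in the docstring: a <code>default_name</code> gets a numeric suffix when reused. A tiny sketch (TF 1.x API):</p> <pre><code>import tensorflow as tf

with tf.variable_scope(None, default_name='block') as vs1:
    pass
with tf.variable_scope(None, default_name='block') as vs2:
    pass
print(vs1.name, vs2.name)   # 'block', 'block_1' - the default name is uniquified
</code></pre>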
python|tensorflow
0
309
52,243,060
Get row value of maximum count after applying group by in pandas
<p>I have the following df</p> <pre><code>&gt;In [260]: df &gt;Out[260]: size market vegetable confirm availability 0 Large ABC Tomato NaN 1 Large XYZ Tomato NaN 2 Small ABC Tomato NaN 3 Large ABC Onion NaN 4 Small ABC Onion NaN 5 Small XYZ Onion NaN 6 Small XYZ Onion NaN 7 Small XYZ Cabbage NaN 8 Large XYZ Cabbage NaN 9 Small ABC Cabbage NaN </code></pre> <p>1) How to get the size of a vegetable whose size count is maximum?</p> <p>I used groupby on vegetable and size to get the following df But I need to get the rows which contain the maximum count of size with vegetable </p> <pre><code>In [262]: df.groupby(['vegetable','size']).count() Out[262]: market confirm availability vegetable size Cabbage Large 1 0 Small 2 0 Onion Large 1 0 Small 3 0 Tomato Large 2 0 Small 1 0 df2['vegetable','size'] = df.groupby(['vegetable','size']).count().apply( some logic ) </code></pre> <p>Required Df :</p> <pre><code> vegetable size max_count 0 Cabbage Small 2 1 Onion Small 3 2 Tomato Large 2 </code></pre> <p>2) Now I can say 'Small Cabbages' are available in huge quantity from df. So I need to populate the confirm availability column with small for all cabbage rows How to do this?</p> <pre><code> size market vegetable confirm availability 0 Large ABC Tomato Large 1 Large XYZ Tomato Large 2 Small ABC Tomato Large 3 Large ABC Onion Small 4 Small ABC Onion Small 5 Small XYZ Onion Small 6 Small XYZ Onion Small 7 Small XYZ Cabbage Small 8 Large XYZ Cabbage Small 9 Small ABC Cabbage Small </code></pre>
<p>1)</p> <pre><code>required_df = veg_df.groupby(['vegetable','size'], as_index=False)['market'].count()\ .sort_values(by=['vegetable', 'market'])\ .drop_duplicates(subset='vegetable', keep='last') </code></pre> <p>2)</p> <pre><code>merged_df = veg_df.merge(required_df, on='vegetable') cols = ['size_x', 'market_x', 'vegetable', 'size_y'] dict_renaming_cols = {'size_x': 'size', 'market_x': 'market', 'size_y': 'confirm_availability'} merged_df = merged_df.loc[:,cols].rename(columns=dict_renaming_cols) </code></pre>
python|pandas|dataframe|pandas-groupby
2
310
60,549,871
How to continuously update the empty rows within specific columns using pandas and openpyxl
<p>Currently I'm running a live test that uses 3 variables: data1, data2 and data3. The problem is that whenever I run my python code, it only writes to the first row within the respective columns and overwrites any previous data I had.</p> <pre><code>import pandas as pd import xlsxwriter from openpyxl import load_workbook def dataholder(data1,data2,data3): df = pd.DataFrame({'Col1':[data1],'Col2':[data2],'Col3':[data3]}) with pd.ExcelWriter('data_hold.xlsx', engine='openpyxl') as writer: df.to_excel(writer,sheet_name='Sheet1') writer.save() </code></pre> <p>Is what I'm trying to accomplish feasible?</p>
<p>Use <code>startrow=...</code> of <code>to_excel</code> to shift every subsequent update down.</p>
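<p>For example - a sketch assuming pandas &gt;= 1.4 (for <code>if_sheet_exists='overlay'</code>) and that the file may or may not exist yet:</p> <pre><code>import os
import pandas as pd
from openpyxl import load_workbook

def dataholder(data1, data2, data3, path='data_hold.xlsx'):
    df = pd.DataFrame({'Col1': [data1], 'Col2': [data2], 'Col3': [data3]})
    if os.path.exists(path):
        start = load_workbook(path)['Sheet1'].max_row           # first free row
        with pd.ExcelWriter(path, engine='openpyxl', mode='a',
                            if_sheet_exists='overlay') as writer:
            df.to_excel(writer, sheet_name='Sheet1',
                        startrow=start, header=False, index=False)
    else:
        df.to_excel(path, sheet_name='Sheet1', index=False)      # first write creates the file
</code></pre>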
python|pandas|openpyxl
0
311
60,347,228
How to confirm convergence of LSTM network?
<p>I am using LSTM for time-series prediction using Keras. I am using 3 LSTM layers with dropout=0.3, hence my training loss is higher than my validation loss. To monitor convergence, I am plotting the training loss and validation loss together. The results look like the following. </p> <p><a href="https://i.stack.imgur.com/JnSC0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JnSC0.png" alt="enter image description here"></a></p> <p>After researching the topic, I have seen multiple answers, for example <a href="https://stackoverflow.com/questions/48393438/validation-loss-when-using-dropout">[1]</a> and <a href="https://plot.ly/~jinjiren/41.embed" rel="nofollow noreferrer">[2]</a>, but I have found several contradictory arguments in various places on the internet, which makes me a little confused. I am listing some of them below: </p> <p>1) The <strong>article presented by Jason Brownlee</strong> suggests that the validation and training curves should meet for convergence, and if they don't, I might be <strong>under-fitting</strong> the data.</p> <p><a href="https://machinelearningmastery.com/diagnose-overfitting-underfitting-lstm-models/" rel="nofollow noreferrer">https://machinelearningmastery.com/diagnose-overfitting-underfitting-lstm-models/</a></p> <p><a href="https://machinelearningmastery.com/learning-curves-for-diagnosing-machine-learning-model-performance/" rel="nofollow noreferrer">https://machinelearningmastery.com/learning-curves-for-diagnosing-machine-learning-model-performance/</a> <a href="https://i.stack.imgur.com/PqtY4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PqtY4.png" alt="enter image description here"></a></p> <p>2) However, the following answer here suggests that <strong>my model has simply converged</strong>: </p> <p><a href="https://stackoverflow.com/questions/52145992/how-do-we-analyse-a-loss-vs-epochs-graph">How do we analyse a loss vs epochs graph?</a></p> <p><a href="https://i.stack.imgur.com/T8TwF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/T8TwF.png" alt="enter image description here"></a> </p> <p>Hence, I am just a bit confused about the whole concept in general. Any help will be appreciated.</p>
<p>Convergence implies you have something to converge <em>to</em>. For a learning system to converge, you would need to know the right model beforehand. Then you would train your model until it was the same as the right model. At that point you could say the model converged! ... but the whole point of machine learning is that we don't know the right model to begin with.</p> <p>So when do you stop training? In practice, you stop when the model works well enough to do what you want it to do. This might be when validation error drops below a certain threshold. It might just be when you can't afford any more computing power. It's really up to you.</p>
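<p>In Keras, that &quot;stop when it stops improving / is good enough&quot; rule is often expressed as an EarlyStopping callback - a sketch, assuming <code>model</code> and the train/validation arrays already exist:</p> <pre><code>from tensorflow.keras.callbacks import EarlyStopping

stop = EarlyStopping(monitor='val_loss', min_delta=1e-4, patience=10,
                     restore_best_weights=True)   # stop after 10 epochs without improvement
model.fit(x_train, y_train, validation_data=(x_val, y_val),
          epochs=500, callbacks=[stop])
</code></pre>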
python|tensorflow|keras|lstm|recurrent-neural-network
0
312
72,832,661
Adding or replacing a Column based on values of a current Column
<p>I am attempting to add a new column and base its value from another column of a dataframe, on the following 2 conditions, which will not change and will be written to a file after.</p> <ol> <li>If number -&gt; (##) (4 character string)</li> <li>If NaN -&gt; (4 character string of white space)</li> </ol> <p>This is my dataframe. The column I am interested in is &quot;Code&quot; and that is of type float64.</p> <p>Current Data Frame Format</p> <pre><code>| | Num | T(h) | T(m) | T(s) | Code | |:--:|:---:|:----:|:----:|:-------:|:----:| | 0 | 1 | 10 | 15 | 47.1234 | NaN | | 1 | 2 | 10 | 15 | 48.1238 | 1.0 | | 2 | 3 | 10 | 15 | 48.1364 | NaN | | 3 | 4 | 10 | 15 | 49.0101 | 2.0 | </code></pre> <p>Desired Data Frame Format</p> <pre><code>| | Num | T(h) | T(m) | T(s) | Term Code | |:--:|:---:|:----:|:----:|:-------:|:---------:| | 0 | 1 | 10 | 15 | 47.1234 | | | 1 | 2 | 10 | 15 | 48.1238 | ( 1) | | 2 | 3 | 10 | 15 | 48.1364 | | | 3 | 4 | 10 | 15 | 49.0101 | ( 2) | </code></pre> <p>The function I wrote:</p> <pre><code>def insertSoftbrace(tCode): value = [] for item in tCode: if str(tCode) == 'NaN': #Blank Line 4 characters newCode = ' ' value.append(newCode) else: fnum = tCode.astype(float) num = fnum.astype(int) #I also tried: num = int(fnum) numStr = str(num) newCode = '(' + numStr.rjust(2) + ')' value.append(newCode) return value #Changing the float64 to string object, so can use ( ) df['Code'] = df['Code'].astype(str) #Inserting new column df.insert(4, &quot;Term Code&quot;, insertSoftbrace(df[&quot;Code&quot;])) #I receive the error on: num = fnum.astype(int) # &quot;IncastingNaNError: Cannot convert non-finite values (NA or inf) to intefer. (10 tracebacks) #When I replace &quot;num = fnum.astype(int)&quot; with &quot; num = int(fnum)&quot; # &quot;TypeError: cannot convert the series to &lt;class 'int'&gt; (3 tracebacks) </code></pre> <p>I also attempted this the following way, keeping the Code column as a float64</p> <pre><code>def insertSoft(tCode): value = [] for item in tCode: if tCode &gt; 0: #Format (##) num = int(tCode) newCode = '(' + numStr.rjust(2) + ')' value.append(newCode) else: #Format (4) Spaces newCode = ' ' value.append(newCode) return value df.insert(4, &quot;Term Code&quot;, insertSoft(df[&quot;Code&quot;])) #Error is given # ValueError: The truth value of a series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). </code></pre> <p>What am I missing with the functions? And how can I produce the desired format?</p>
<p>In this solution, first use <code>convert_dtypes</code>, which converts <code>float</code> into <code>int</code>. Then change to <code>str</code>. This is just to remove the decimal point. Change <code>&lt;NA&gt;</code> to 4 white spaces. In the last step, if the string is numeric, use left padding, which ensures the string has a length of 2 with white space as filling, and add the parentheses on both sides.</p> <pre><code>df['term_code'] = df['code'].convert_dtypes().astype(str) df.loc[df['term_code'] == '&lt;NA&gt;', 'term_code'] = 4 * ' ' df.loc[df['term_code'].str.isnumeric(), 'term_code'] = '(' + df['term_code'].str.pad(2, 'left') + ')' </code></pre>
python|python-3.x|pandas|dataframe
0
313
72,526,514
Tensorboard: How to view pytorch model summary?
<p>I have the following network.</p> <pre><code>import torch import torch.nn as nn from torch.utils.tensorboard import SummaryWriter class Net(nn.Module): def __init__(self,input_shape, num_classes): super(Net, self).__init__() self.conv = nn.Sequential( nn.Conv2d(1, 64, kernel_size=(3, 3), stride=(1, 1), padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(kernel_size=(4,4)), nn.Conv2d(64, 32, kernel_size=(3, 3), stride=(1, 1), padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(kernel_size=(4,4)), ) x = self.conv(torch.rand(input_shape)) in_features = np.prod(x.shape) self.classifier = nn.Sequential( nn.Linear(in_features=in_features, out_features=num_classes), ) def forward(self, x): x = self.feature_extractor(x) x = x.view(x.size(0), -1) x = self.classifier(x) return x net = Net(input_shape=(1,64,1292), num_classes=4) print(net) </code></pre> <p>This prints the following:-</p> <pre><code>Net( (conv): Sequential( (0): Conv2d(1, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (1): ReLU(inplace=True) (2): MaxPool2d(kernel_size=(4, 4), stride=(4, 4), padding=0, dilation=1, ceil_mode=False) (3): Conv2d(64, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (4): ReLU(inplace=True) (5): MaxPool2d(kernel_size=(4, 4), stride=(4, 4), padding=0, dilation=1, ceil_mode=False) ) (classifier): Sequential( (0): Linear(in_features=320, out_features=4, bias=True) ) ) </code></pre> <p>However, I am trying various experiments and I want to keep track of network architecture on Tensorboard. I know there is a function <code>writer.add_graph(model, input_to_model)</code> but it requires input, or at least its shape should be known.</p> <p>So, I tried <code>writer.add_text(&quot;model&quot;, str(model))</code>, but formatting is screwed up in tensorboard.</p> <p><a href="https://i.stack.imgur.com/q7bAz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/q7bAz.png" alt="enter image description here" /></a></p> <h3>My question is, is there a way to at least visualize the way I can see by using print function in the tensorboard?</h3>
<p>I can see everything is going right but there is just a formatting issue. Tensorboard understands markdown so you can actually replace <code>\n</code> with <code>&lt;br/&gt;</code> and <code> </code> with <code>&amp;nbsp;</code>.</p> <p>Here is a detailed walkthrough. Suppose you have the following model:-</p> <pre><code>import torch import torch.nn as nn from torch.utils.tensorboard import SummaryWriter class Net(nn.Module): def __init__(self,input_shape, num_classes): super(Net, self).__init__() self.conv = nn.Sequential( nn.Conv2d(1, 64, kernel_size=(3, 3), stride=(1, 1), padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(kernel_size=(4,4)), nn.Conv2d(64, 32, kernel_size=(3, 3), stride=(1, 1), padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(kernel_size=(4,4)), ) x = self.conv(torch.rand(input_shape)) in_features = np.prod(x.shape) self.classifier = nn.Sequential( nn.Linear(in_features=in_features, out_features=num_classes), ) def forward(self, x): x = self.feature_extractor(x) x = x.view(x.size(0), -1) x = self.classifier(x) return x net = Net(input_shape=(1,64,1292), num_classes=4) print(net) </code></pre> <p>This prints the following and if can actually show it in the Tensorboard.</p> <pre><code>Net( (conv): Sequential( (0): Conv2d(1, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (1): ReLU(inplace=True) (2): MaxPool2d(kernel_size=(4, 4), stride=(4, 4), padding=0, dilation=1, ceil_mode=False) (3): Conv2d(64, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (4): ReLU(inplace=True) (5): MaxPool2d(kernel_size=(4, 4), stride=(4, 4), padding=0, dilation=1, ceil_mode=False) ) (classifier): Sequential( (0): Linear(in_features=320, out_features=4, bias=True) ) ) </code></pre> <p>There is function in <code>add_graph(model, input)</code> in <code>SummaryWriter</code> but you must create dummy input and in some cases it is difficult of to always know them. Instead do following:-</p> <pre><code>writer = SummaryWriter() model_summary = str(model).replace( '\n', '&lt;br/&gt;').replace(' ', '&amp;nbsp;') writer.add_text(&quot;model&quot;, model_summary) writer.close() </code></pre> <p>Above produces following text in tensorboard:-</p> <p><a href="https://i.stack.imgur.com/2Tlpn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2Tlpn.png" alt="enter image description here" /></a></p>
deep-learning|pytorch|tensorboard|modelsummary
0
314
72,670,305
How to plot histogram for chosen cells using mean as condition in python?
<p>I have some data as x,y arrays and an array of v values corresponding to them, i.e for every x and y there is a v with matching index.</p> <p><strong>What I have done</strong>: I am creating a grid on the x-y plane and then the v-values fall in cells of that grid. I am then taking mean of the v-values in each cell of the grid.</p> <p><strong>Where I am stuck</strong>: Now, I want to identify the cells where the mean of v is greater than 2 and plot the histograms of those cells (histogram of original v values in that cell). Any ideas on how to do that? Thanks!</p> <p><strong>EDIT:</strong> I am getting some histogram plots for mean&gt;2 but it also includes histograms of empty cells. I want to get rid of the empty ones and just keep mean&gt;2 cells. I tried <code>print(mean_pix[(mean_pix!=[])])</code> but it returns errors.</p> <p>My full code is:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt x=np.array([11,12,12,13,21,14]) y=np.array([28,5,15,16,12,4]) v=np.array([10,5,2,10,6,7]) x = x // 4 y = y // 4 k=10 cells = [[[] for y in range(k)] for x in range(k)] #creating cells or pixels on x-y plane #letting v values to fall into the grid cells for ycell in range(k): for xcell in range(k): cells[ycell][xcell] = v[(y == ycell) &amp; (x == xcell)] for ycell in range(k): for xcell in range(k): this = cells[ycell][xcell] #print(this) #fig, ax = plt.subplots() #plt.hist(this) #getting mean from velocity values in each cell mean_v = [[[] for y in range(k)] for x in range(k)] for ycell in range(k): for xcell in range(k): cells[ycell][xcell] = v[(y == ycell) &amp; (x == xcell)] this = cells[ycell][xcell] mean_v[ycell][xcell] = np.mean(cells[ycell][xcell]) mean_pix= mean_v[ycell][xcell] fig, ax = plt.subplots() plt.hist(this[(mean_pix&gt;2)]) # this gives me histograms of cells that have mean&gt;2 but it also gives histograms of empty cells. I want to avoid getting the empty histograms. </code></pre>
<p>Maybe there is a better way, but you can create an empty list and append the lists that you want to plot:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt x=np.array([11,12,12,13,21,14]) y=np.array([28,5,15,16,12,4]) v=np.array([10,5,2,10,6,7]) x = x // 4 y = y // 4 k=10 cells = [[[] for y in range(k)] for x in range(k)] #creating cells or pixels on x-y plane #letting v values to fall into the grid cells for ycell in range(k): for xcell in range(k): cells[ycell][xcell] = v[(y == ycell) &amp; (x == xcell)] for ycell in range(k): for xcell in range(k): this = cells[ycell][xcell] #getting mean from velocity values in each cell mean_v = [[[] for y in range(k)] for x in range(k)] to_plot = [] for ycell in range(k): for xcell in range(k): cells[ycell][xcell] = v[(y == ycell) &amp; (x == xcell)] mean_v[ycell][xcell] = np.mean(cells[ycell][xcell]) if mean_v[ycell][xcell]&gt;2: to_plot.append(cells[ycell][xcell]) for x in to_plot: fig, ax = plt.subplots() plt.hist(x) </code></pre> <p>I also removed some unnecessary code. It should output something like this:</p> <p><a href="https://i.stack.imgur.com/6kTwL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6kTwL.png" alt="enter image description here" /></a></p>
python|arrays|numpy|histogram|mean
1
315
72,622,081
Cannot export QNN brevitas to ONNX
<p>I have trained my model as a QNN with brevitas. Basically my input shape is:</p> <blockquote> <p>torch.Size([1, 3, 1024])</p> </blockquote> <p>I have exported the .pt file. When I test the model and generate a confusion matrix, I can observe everything that I want, so I believe there is no problem with the model.</p> <p>On the other hand, when I try to export the .onnx file to implement this brevitas-trained model on FINN, I use the code given below:</p> <pre><code>from brevitas.export import FINNManager FINNManager.export(my_model, input_shape=(1, 3, 1024), export_path='myfinnmodel.onnx') </code></pre> <p>But when I do that, I get the following error:</p> <blockquote> <p>torch.onnx.export(module, input_t, export_target, **kwargs)</p> <p>TypeError: export() got an unexpected keyword argument 'enable_onnx_checker'</p> </blockquote> <p>I do not think this is related to the version, but if you want me to be sure about the version, I can check that too.</p> <p>Any help would be really appreciated. Sincerely;</p>
<p>The problem is related to pytorch versions &gt; 1.10, where &quot;enable_onnx_checker&quot; is no longer a parameter of the torch.onnx.export function.</p> <p>This is the official solution from the repository: <a href="https://github.com/Xilinx/brevitas/pull/408/files" rel="nofollow noreferrer">https://github.com/Xilinx/brevitas/pull/408/files</a></p> <p>The fix is not yet released; it is in the dev branch. You need to compile brevitas yourself or simply change the code in brevitas/export/onnx/manager.py following the official solution.</p> <p>After that I am able to get the ONNX-converted model.</p>
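<p>Until that fix ships, one way to pick up the development code is to install brevitas straight from the repository - a sketch, assuming pip and git are available and that the fix lives on the branch named <code>dev</code> mentioned above:</p> <pre><code>pip install --upgrade git+https://github.com/Xilinx/brevitas.git@dev
</code></pre>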
python|machine-learning|pytorch|fpga|onnx
0
316
59,806,689
Remove values above/below standard deviation
<p>I have a database that is made out of 18 columns and 15 million rows, in each column there are outliers and I wanted to remove values above and below 2 standard deviations. My code doesn't seem to edit anything in the database though.</p> <p>Thank you.</p> <pre><code>import pandas as pd import random as r import numpy as np df = pd.read_csv('D:\\Project\\database\\3-Last\\LastCombineHalf.csv') df[df.apply(lambda x :(x-x.mean()).abs()&lt;(2*x.std()) ).all(1)] df.to_csv('D:\\Project\\database\\3-Last\\Removal.csv', index=False) </code></pre>
<p>Perhaps because you didn't assign the results back to <code>df</code>?</p> <p>From:</p> <pre class="lang-py prettyprint-override"><code>df[df.apply(lambda x :(x-x.mean()).abs()&lt;(2*x.std()) ).all(1)] </code></pre> <p>To:</p> <pre class="lang-py prettyprint-override"><code>df = df[df.apply(lambda x :(x-x.mean()).abs()&lt;(2*x.std()) ).all(1)] </code></pre>
python|python-3.x|pandas|csv|jupyter-notebook
1
317
59,481,895
How to differentiate between trees and buildings in OpenCV and NumPy in Python
<p>I am trying to classify buildings and trees in digital elevation models. </p> <p>Trees normally look like this: <a href="https://i.stack.imgur.com/ovnq1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ovnq1.png" alt="enter image description here"></a></p> <p>Buildings normally look something like this: <a href="https://i.stack.imgur.com/Lr21F.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Lr21F.png" alt="enter image description here"></a></p> <p>Note the increased disorder in trees compared to buildings. I originally tried to use np.var to differentiate between the two but I am getting inconsistent results. Is there any other non machine learning way to classify these two, preferably on the basis of increased disorder in trees? </p>
<p>Disclaimer: My answer might be super overfitted and wrong, as it is based on just the two sample images</p> <p>Approach 1 : </p> <p>Just classify based on the 'squareness' - </p> <pre><code>delta_x = |x_min - x_max|, delta_y = |y_min - y_max| spread_ratio = delta_y/delta_x if spread_ratio &gt; thresh: classify as tree else: classify as building </code></pre> <p>Approach 2: Your images have very different colors. If that corresponds to height, you can just find a thresholding based on average height of a tree and building</p>
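<p>A rough, runnable version of approach 1 - assuming <code>mask</code> is a boolean numpy array marking the object's pixels, and that the threshold value is something to tune, not a known constant:</p> <pre><code>import numpy as np

def classify(mask, thresh=1.2):
    ys, xs = np.nonzero(mask)                  # pixel coordinates of the object
    delta_x = abs(xs.min() - xs.max()) + 1
    delta_y = abs(ys.min() - ys.max()) + 1
    spread_ratio = delta_y / delta_x
    return 'tree' if spread_ratio &gt; thresh else 'building'
</code></pre>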
python|numpy|opencv|image-processing|classification
0
318
61,787,472
Reshape input layer 'requested shape' size always 'input shape' size squared
<p>I am trying to run a SavedModel using the C-API. When it comes to running <code>TF_SessionRun</code> it always fails on various input nodes with the same error.</p> <pre><code>TF_SessionRun status: 3:Input to reshape is a tensor with 6 values, but the requested shape has 36 TF_SessionRun status: 3:Input to reshape is a tensor with 19 values, but the requested shape has 361 TF_SessionRun status: 3:Input to reshape is a tensor with 3111 values, but the requested shape has 9678321 ... </code></pre> <p>As can be seen, the number of requested shape values is always the square of the expected input size. It's quite odd.</p> <p>The model runs fine with the <code>saved_model_cli</code> command. The inputs are all either scalar DT_STRING or DT_FLOATs, I'm not doing image recogition. Here's the output of that command:</p> <p><code>signature_def['serving_default']: The given SavedModel SignatureDef contains the following input(s): inputs['f1'] tensor_info: dtype: DT_STRING shape: (-1) name: f1:0 inputs['f2'] tensor_info: dtype: DT_STRING shape: (-1) name: f2:0 inputs['f3'] tensor_info: dtype: DT_STRING shape: (-1) name: f3:0 inputs['f4'] tensor_info: dtype: DT_FLOAT shape: (-1) name: f4:0 inputs['f5'] tensor_info: dtype: DT_STRING shape: (-1) name: f5:0 The given SavedModel SignatureDef contains the following output(s): outputs['o1_probs'] tensor_info: dtype: DT_DOUBLE shape: (-1, 2) name: output_probs:0 outputs['o1_values'] tensor_info: dtype: DT_STRING shape: (-1, 2) name: output_labels:0 outputs['predicted_o1'] tensor_info: dtype: DT_STRING shape: (-1, 1) name: output_class:0 Method name is: tensorflow/serving/predict </code></p> <p>Any clues into what's going on are much appreciated. The saved_model.pb file is coming from AutoML, my code is merely querying that model. I don't change the graph.</p>
<p>It turns out that the issue was caused by me not using the TF_AllocateTensor function correctly.</p> <p>The original code was like:</p> <pre><code>TF_Tensor* t = TF_AllocateTensor(TF_STRING, nullptr, 0, sz); </code></pre> <p>when it appears it should have been:</p> <pre><code>int64_t dims = 0; TF_Tensor* t = TF_AllocateTensor(TF_STRING, &amp;dims, 1, sz); </code></pre>
tensorflow|predict|c-api
0
319
61,939,491
Two questions on DCGAN: data normalization and fake/real batch
<p>I am analyzing a meta-learning <a href="https://github.com/LuEE-C/FIGR/blob/master/train.py" rel="nofollow noreferrer">class</a> that uses DCGAN + Reptile within the image generation.</p> <p>I have two questions about this code. </p> <p>First question: why during DCGAN training (line 74)</p> <pre><code>training_batch = torch.cat ([real_batch, fake_batch]) </code></pre> <p>is a training_batch made up of real examples (real_batch) and fake examples (fake_batch) created? Why is training done by mixing real and false images? I have seen many DCGANs, but never with training done in this way.</p> <p>The second question: why is the normalize_data function (line 49) and the unnormalize_data function (line 55) used during training?</p> <pre><code>def normalize_data(data): data *= 2 data -= 1 return data def unnormalize_data(data): data += 1 data /= 2 return data </code></pre> <p>The project uses the Mnist dataset, if I wanted to use a color dataset like CIFAR10, do I have to modify those normalizations?</p>
<p>Training GANs involves giving the discriminator real and fake examples. Usually, you will see that they are given in two separate occasions. By default <a href="https://pytorch.org/docs/stable/torch.html#torch.cat" rel="nofollow noreferrer"><code>torch.cat</code></a> concatenates the tensors on the first dimension (<code>dim=0</code>), which is the batch dimensions. Therefore it just doubled the batch size, where the first half are the real images and the second half the fake images. </p> <p>To calculate the loss, they adapt the targets, such that the first half (original batch size) is classified as real, and the second half is classified as fake. From <a href="https://github.com/LuEE-C/FIGR/blob/18cd48f9688acd305eafac4a89985a8ff1930e3e/train.py#L208" rel="nofollow noreferrer"><code>initialize_gan</code></a>:</p> <pre class="lang-py prettyprint-override"><code>self.discriminator_targets = torch.tensor([1] * self.batch_size + [-1] * self.batch_size, dtype=torch.float, device=device).view(-1, 1) </code></pre> <p>Images are represented with float values between [0, 1]. The normalisation changes that to produce values between [-1, 1]. GANs generally use tanh in the generator, therefore the fake images have values between [-1, 1], hence the real images should be in the same range, otherwise it would be trivial for the discriminator to distinguish the fake images from the real ones.</p> <p>If you want to display these images, you need to unnormalise them first, i.e. convert them to values between [0, 1].</p> <blockquote> <p>The project uses the Mnist dataset, if I wanted to use a color dataset like CIFAR10, do I have to modify those normalizations?</p> </blockquote> <p>No, you don't need to change them, because images in colour also have their values between [0, 1], there are simply more values, representing the 3 channels (RGB).</p>
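<p>For reference, the same [0, 1] to [-1, 1] scaling is often written as a torchvision transform; for a 3-channel dataset such as CIFAR10 that would look like:</p> <pre class="lang-py prettyprint-override"><code>import torchvision.transforms as T

transform = T.Compose([
    T.ToTensor(),                                    # PIL image -&gt; float tensor in [0, 1]
    T.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),   # (x - 0.5) / 0.5 -&gt; [-1, 1] per channel
])
</code></pre>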
deep-learning|pytorch|generative-adversarial-network|dcgan
1
320
58,052,135
separate 2D gaussian kernel into two 1D kernels
<p>A gaussian kernel is calculated and checked that it can be separable by looking in to the rank of the kernel. </p> <pre><code>kernel = gaussian_kernel(kernel_size,sigma) print(kernel) [[ 0.01054991 0.02267864 0.0292689 0.02267864 0.01054991] [ 0.02267864 0.04875119 0.06291796 0.04875119 0.02267864] [ 0.0292689 0.06291796 0.0812015 0.06291796 0.0292689 ] [ 0.02267864 0.04875119 0.06291796 0.04875119 0.02267864] [ 0.01054991 0.02267864 0.0292689 0.02267864 0.01054991]] rank = np.linalg.matrix_rank(kernel) if rank == 1: print('The Kernel is separable') else: print('The kernel is not separable') </code></pre> <p>Now I believe the separation is not correct. I am doing it in the following manner:</p> <pre><code> u,s,v = np.linalg.svd(kernel) k1 = (u[:,0] * np.sqrt(s[0]))[np.newaxis].T k2 = v[:,0] * np.sqrt(s[0]) </code></pre> <p>Then I multiplied the above two kernels to get the original kernel back. But I did not get it.</p> <pre><code>if not np.all(k1 * k2 == kernel): print('k1 * k2 is not equal to kernel') </code></pre> <p>I assume that the separation that I am trying to do using svd and further is not correct. Some explanation would help. </p>
<p>matrix rank 1 means that all the rows are either zero or the same up to scaling and the same is true for columns. They are also up to scaling equal to the two factors. Therefore you can recover them using something like</p> <pre><code>I,J = np.unravel_index(np.abs(kernel).argmax(), kernel.shape) f1 = np.nansum(kernel / (kernel[None,:,J]@kernel),1,keepdims=True) f2 = np.nansum(kernel / (kernel@kernel[I,:,None]),0,keepdims=True) scaling = np.sqrt(np.abs(kernel).sum()/np.abs(f1*f2).sum()) f1 *= scaling * np.sign(f1[I,0]) * np.sign(kernel[I,J]) f2 *= scaling * np.sign(f2[0,J]) </code></pre> <p>Note that most of the complexity comes from my trying to average as many data as possible. A simpler but I'd assume numerically not quite as stable method would be</p> <pre><code>I,J = np.unravel_index(np.abs(kernel).argmax(), kernel.shape) f1 = kernel[:,J,None] f2 = kernel[None,I,:] / kernel[I,J] </code></pre> <p>Of course, your method also works once you get the indexing right:</p> <pre><code>k1 = u[:,0,None] * np.sqrt(s[0]) k2 = v[None,0,:] * np.sqrt(s[0]) np.allclose(kernel, k1*k2) # True </code></pre>
numpy|convolution
0
321
54,783,721
Add a fixed value to a dataframe (accumulating to future ones)
<p>I am trying to simulate the inventory level during the next 6 months:</p> <p>1- I have the expected accumulated demand for each day of the next 6 months. <a href="https://i.stack.imgur.com/hEYd6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hEYd6.png" alt="enter image description here"></a> So, with no reorder, my balance would become more negative every day.</p> <p>2- My idea is: every time the inventory level is lower than 3000, I would send an order to buy 10000, and after 3 days my level would increase again: <a href="https://i.stack.imgur.com/TyWMs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TyWMs.png" alt="enter image description here"></a></p> <p>What is the best way to add this value to all the future values?</p> <pre><code> ds saldo 0 2019-01-01 10200.839819 1 2019-01-02 5219.412952 2 2019-01-03 3.161876 3 2019-01-04 -5507.506201 4 2019-01-05 -10730.291221 5 2019-01-06 -14406.833593 6 2019-01-07 -17781.500396 7 2019-01-08 -21545.503098 8 2019-01-09 -25394.427708 </code></pre> <p>I started doing it like this:</p> <pre><code>c = 0 for index, row in forecast_data.iterrows(): if row['saldo'] &lt; 3000: c += 1 if c == 3: row['saldo'] + 10000 c = 0 </code></pre> <p>But it just adds to the current row, not to the accumulated future ones.</p> <pre><code>print(row['ds'], row['saldo']) 9 2019-01-10 -29277.647817 </code></pre>
<p>You forgot to assign the value, I think. Use <code>row['saldo'] += 10000</code> instead of <code>row['saldo'] + 10000</code>.</p>
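<p>Note that assigning into <code>row</code> inside <code>iterrows()</code> may still not persist, because each row is handed to you as a copy. A minimal sketch that writes the change back into the frame itself (assuming the index labels are unique) would be:</p> <pre><code>c = 0
for index, row in forecast_data.iterrows():
    if row['saldo'] &lt; 3000:
        c += 1
    if c == 3:
        # write back through the frame, not the row copy
        forecast_data.at[index, 'saldo'] += 10000
        c = 0
</code></pre> <p>Accumulating the reorder into all future rows would still need an extra step (for example adding 10000 to <code>forecast_data.loc[index:, 'saldo']</code>), which is a separate design choice.</p>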
python|pandas|numpy|dataframe
1
322
54,818,602
How to get pandas to return datetime64 rather than Timestamp?
<p>How can I tell pandas to return <code>datetime64</code> rather than <code>Timestamp</code>? For example, in the following code <code>df['dates'][0]</code> returns a pandas <code>Timestamp</code> object rather than the numpy <code>datetime64</code> object that I put in.</p> <p>Yes, I can convert it after getting it, but is it possible to tell pandas to give me back exactly what I put in? </p> <pre><code>&gt;&gt;&gt; import numpy as np &gt;&gt;&gt; import pandas as pd &gt;&gt;&gt; np.__version__ '1.10.4' &gt;&gt;&gt; pd.__version__ u'0.19.2' &gt;&gt;&gt; df = pd.DataFrame() &gt;&gt;&gt; df['dates'] = [np.datetime64('2019-02-15'), np.datetime64('2019-08-15')] &gt;&gt;&gt; df.dtypes dates datetime64[ns] dtype: object &gt;&gt;&gt; type(df['dates'][0]) &lt;class 'pandas.tslib.Timestamp'&gt; </code></pre>
<p>Adding <code>values</code> </p> <pre><code>df.dates.values[0] Out[55]: numpy.datetime64('2019-02-15T00:00:00.000000000') type(df.dates.values[0]) Out[56]: numpy.datetime64 </code></pre>
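<p>If you are on a newer pandas (0.24+), <code>to_numpy()</code> is an equivalent, more explicit spelling (shown here as a sketch against the example frame above):</p> <pre><code>arr = df['dates'].to_numpy()
arr[0]        # numpy.datetime64('2019-02-15T00:00:00.000000000')
type(arr[0])  # numpy.datetime64
</code></pre>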
pandas|datetime64
0
323
49,438,360
In Pandas how can I use the values in one table as an index to extract data from another table?
<p>I feel like this should be really simple but I'm having a hard time with it. Suppose I have this:</p> <pre><code>df1: ticker hhmm &lt;--- The hhmm value corresponds to the column in df2 ====== ==== AAPL 0931 IBM 0930 XRX 1559 df2: ticker 0930 0931 0932 ... 1559 &lt;&lt;---- 390 columns ====== ==== ==== ==== ... ==== AAPL 4.56 4.57 ... ... IBM 7.98 ... ... ... XRX 3.33 ... ... 3.78 </code></pre> <p>The goal is to create a new column in df1 whose value is df2[df1['hhmm']].</p> <p>For example:</p> <pre><code>df1: ticker hhmm df2val ====== ==== ====== AAPL 0931 4.57 IBM 0930 7.98 XRX 1559 3.78 </code></pre> <p>Both df's have 'ticker' as their index, so I could simply join them BUT assume that this uses too much memory (the dataframes I'm using are much larger than the examples shown here).</p> <p>I've tried apply and it's slooooow (15 minutes to run).</p> <p>What's the Pandas Way to do this? Thanks!</p>
<p>There is a function called <code>lookup</code></p> <pre><code>df1['val']=df2.set_index('ticker').lookup(df1.ticker,df1.hhmm) df1 Out[290]: ticker hhmm val 0 AAPL 0931 4.57 1 IBM 0930 7.98 2 XRX 1559 33.00# I make up this number </code></pre>
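<p>If <code>lookup</code> is not available in your pandas version (it was deprecated and later removed), a sketch of the same idea with plain positional indexing, assuming each ticker appears once in <code>df2</code> and every <code>hhmm</code> value exists as a column:</p> <pre><code>idx = df2.set_index('ticker')
rows = idx.index.get_indexer(df1['ticker'])    # row position per df1 ticker
cols = idx.columns.get_indexer(df1['hhmm'])    # column position per df1 hhmm
df1['df2val'] = idx.to_numpy()[rows, cols]
</code></pre>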
python|pandas|numpy|dataframe
1
324
73,367,040
Is there a way in python to read a text block within a csv cell and only select cell data based on key word with in text block?
<p>I am working with a CSV file in <strong>Pandas/Python</strong> and I need to find when a supplier response was submitted. The column &quot;time Line&quot; contains the info I'm looking for and can vary on how much information was put into the response at the time but the keyword I am looking for is the same.</p> <p>Text block</p> <p><strong>(This is the sub-section I need!)</strong></p> <pre><code>October 29, 2021 10:34:30 AM -05:00 - Jim Supplier assignment notification sent to supplier &quot;ALB-example&quot; - Alex ([email protected]) -------- </code></pre> <h1></h1> <pre><code>November 04, 2021 07:06:31 PM -05:00 - Levi A-Quality Dept assigned as approver -------- November 01, 2021 05:11:19 PM -05:00 - Jim CAR #454 created from this record -------- October 29, 2021 10:34:30 AM -05:00 - Jim Supplier assignment notification sent to supplier &quot;ALB-Aeroexample&quot; - Alex ([email protected]) -------- October 29, 2021 10:34:28 AM -05:00 - Jim NCP Updated with the following changes: + Supplier assigned changed from &quot;False to True </code></pre> <p>This text block is in one cell and I haven't figured out how to go about it.</p> <p>Thank you in advance.</p>
<p>Assuming your dataframe has a &quot;time line&quot; column:</p> <p><code>new_df = df.loc[df['time Line'].str.contains('the string you are looking for')]</code></p> <p>this will create a new dataframe with all rows that contains the string you need, is this what you are looking for?</p>
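<p>A slightly more defensive sketch, assuming the keyword you need is &quot;Supplier assignment notification&quot; and that some cells may be missing, ignores case and treats NaN cells as non-matches:</p> <pre><code>mask = df['time Line'].str.contains('Supplier assignment notification',
                                    case=False, na=False)
new_df = df.loc[mask]
</code></pre>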
python|pandas|csv|data-analysis|data-extraction
0
325
73,196,008
How to replace a dataframe rows with other rows based on column values?
<p>I have a dataframe of this type:</p> <pre><code> Time Copy_from_Time Rest_of_data 0 1 1 foo1 1 2 1 foo2 2 3 3 foo3 3 4 4 foo4 4 5 4 foo5 5 6 4 foo6 </code></pre> <p>I want to update &quot;Rest of data&quot; with data associated at the Time specified by &quot;Copy_from_Time&quot;. So it would look like:</p> <pre><code> Time Copy_from_Time Rest_of_data 0 1 1 foo1 1 2 1 foo1 2 3 3 foo3 3 4 4 foo4 4 5 4 foo4 5 6 4 foo4 </code></pre> <p>I can do it with iterrows(), but it is very slow. Is there a faster way with indexing tricks and maybe map()?</p> <p>(The real example has Time, Time2, Copy_from_Time and Copy_from_Time2, so I would need to match several fields, but I guess it would be easy to adapt it)</p>
<p>use map in updating the value in rest_of_data column</p> <pre><code>df['Rest_of_data']=df['Copy_from_Time'].map(df.set_index('Time')['Rest_of_data']) df </code></pre> <pre><code> Time Copy_from_Time Rest_of_data 0 1 1 foo1 1 2 1 foo1 2 3 3 foo3 3 4 4 foo4 4 5 4 foo4 5 6 4 foo4 </code></pre>
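<p>For the real data with two key columns, one hedged sketch is a left merge against a renamed lookup table, assuming the columns are named <code>Time</code>/<code>Time2</code> and <code>Copy_from_Time</code>/<code>Copy_from_Time2</code> and that each (Time, Time2) pair occurs only once:</p> <pre><code>lookup = df[['Time', 'Time2', 'Rest_of_data']].rename(
    columns={'Time': 'Copy_from_Time', 'Time2': 'Copy_from_Time2'})

df = (df.drop(columns='Rest_of_data')
        .merge(lookup, on=['Copy_from_Time', 'Copy_from_Time2'], how='left'))
</code></pre>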
python|pandas|dataframe
1
326
73,314,741
How to combine two columns
<p>I have a merged Pandas dataframe in the following format</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>index</th> <th>value_x</th> <th>value_y</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>nan</td> <td>3</td> </tr> <tr> <td>1</td> <td>3</td> <td>nan</td> </tr> <tr> <td>2</td> <td>nan</td> <td>nan</td> </tr> <tr> <td>3</td> <td>-1</td> <td>1</td> </tr> <tr> <td>4</td> <td>6</td> <td>nan</td> </tr> <tr> <td>5</td> <td>nan</td> <td>6</td> </tr> <tr> <td>6</td> <td>-1</td> <td>nan</td> </tr> <tr> <td>7</td> <td>-1</td> <td>6</td> </tr> <tr> <td>8</td> <td>nan</td> <td>nan</td> </tr> </tbody> </table> </div> <p>Since the original dataframes have the <code>value</code> field, therefore <code>value_x</code> and <code>value_y</code> column is gnerated during the merge process. I would like to merge the two columns so the final column would look like:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>index</th> <th>value_x</th> <th>value_y</th> <th>value</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>nan</td> <td>3</td> <td>3</td> </tr> <tr> <td>1</td> <td>3</td> <td>nan</td> <td>3</td> </tr> <tr> <td>2</td> <td>nan</td> <td>nan</td> <td>nan</td> </tr> <tr> <td>3</td> <td>nan</td> <td>1</td> <td>1</td> </tr> <tr> <td>4</td> <td>6</td> <td>nan</td> <td>6</td> </tr> <tr> <td>5</td> <td>nan</td> <td>6</td> <td>6</td> </tr> <tr> <td>6</td> <td>-1</td> <td>nan</td> <td>-1</td> </tr> <tr> <td>7</td> <td>nan</td> <td>6</td> <td>6</td> </tr> <tr> <td>8</td> <td>nan</td> <td>nan</td> <td>nan</td> </tr> </tbody> </table> </div> <p>In addition, I would like to know if I could avoid the column combining process during the merge process?</p> <p>Thanks in advance</p>
<p>You can use <code>max</code></p> <pre><code>df[&quot;value&quot;] = df[[&quot;value_x&quot;, &quot;value_y&quot;]].max(axis=1) </code></pre> <p>as this will pick the non-nan value for each row. For this question:</p> <blockquote> <p>In addition, I would like to know if I could avoid the column combining process during the merge process?</p> </blockquote> <p>the answer depends on what the two dataframes were before the merge.</p>
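<p>If the intent is &quot;take <code>value_x</code> unless it is missing, otherwise <code>value_y</code>&quot; rather than the row-wise maximum, a sketch would be:</p> <pre><code>df['value'] = df['value_x'].fillna(df['value_y'])
# or equivalently
df['value'] = df['value_x'].combine_first(df['value_y'])
</code></pre> <p>The two approaches only differ when both columns hold a value for the same row, so which one fits depends on how such conflicts should be resolved.</p>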
python|pandas
2
327
67,522,909
Create a new dataframe from an old dataframe where the new dataframe contains row-wise avergae of columns at different locations in the old dataframe
<p>I have a dataframe called &quot;frame&quot; with 16 columns and 201 rows. A screenshot is attached that provides an example dataframe</p> <p><a href="https://i.stack.imgur.com/kEFnU.png" rel="nofollow noreferrer">enter image description here</a></p> <p>Please note the screenshot is just an example, the original dataframe is much larger.</p> <p>I would like to find an efficient way (maybe using for loop or writing a function) to row-wise average different columns in the dataframe. For instance, to find an average of column <strong>&quot;rep&quot; and &quot;rep1&quot;</strong> and column <strong>&quot;repcycle&quot; and &quot;repcycle1&quot; (similarly for set and setcycle)</strong> and save in a new dataframe with only averaged columns.</p> <p>I have tried writing a code using iloc</p> <pre><code>newdf= frame[['sample']].copy() newdf['rep_avg']=frame.iloc[:, [1,5]].mean(axis=1) #average row-wise newdf['repcycle_avg']=frame.iloc[:, [2,6]].mean(axis=1) newdf['set_avg']=frame.iloc[:, [3,7]].mean(axis=1) #average row-wise newdf['setcycle_avg']=frame.iloc[:, [4,8]].mean(axis=1) newdf.columns = ['S', 'Re', 'Rec', 'Se', 'Sec'] </code></pre> <p>The above code does the job, but it is tedious to note the locations for every column. I would rather like to automate this process since this is repeated for other data files too.</p>
<p>Based on your desire &quot;I would rather like to automate this process since this is repeated for other data files too&quot;, here is what I can think of:</p> <pre><code>in [1]: frame = pd.read_csv('your path')
</code></pre> <p>The result is shown below; as you can see, what you want to average are columns 1,5 and 2,6 and so on.</p> <pre><code>out [1]:
   sample  rep  repcycle  set  setcycle  rep1  repcycle1  set1  setcycle1
0      66   40         4    5         3    40          4     5          3
1      78   20         5    6         3    20          5     6          3
2      90   50         6    9         4    50          6     9          4
3      45   70         7    3         2    70          7     7          2
</code></pre> <p>So we need to create two lists:</p> <pre><code>in [2]: import numpy as np
        list_1 = np.arange(1,5,1).tolist()

in [3]: list_1
out[3]: [1, 2, 3, 4]
</code></pre> <p>This is for the first half you want to average [rep, repcycle, set, setcycle]:</p> <pre><code>in [4]: list_2 = [x+4 for x in list_1]

in [5]: list_2
out[5]: [5, 6, 7, 8]
</code></pre> <p>and this is for the second half you want to average [rep1, repcycle1, set1, setcycle1]:</p> <pre><code>in [6]: result = pd.concat([frame.iloc[:, [x,y]].mean(axis=1) for x, y in zip(list_1, list_2)], axis=1)

in [7]: result.columns = ['Re', 'Rec', 'Se', 'Sec']
</code></pre> <p>Now you get what you want, and it's automated: all you need to do is change the two lists above.</p> <pre><code>in [8]: result
out[8]:
     Re  Rec   Se  Sec
0  40.0  4.0  5.0  3.0
1  20.0  5.0  6.0  3.0
2  50.0  6.0  9.0  4.0
3  70.0  7.0  5.0  2.0
</code></pre>
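<p>As a further sketch that avoids building the index lists by hand, assuming the paired columns share a name that differs only by a trailing digit (rep/rep1, set/set1, ...) and that your pandas version still accepts <code>groupby(..., axis=1)</code>:</p> <pre><code>data_cols = frame.columns.drop('sample')
base = data_cols.str.replace(r'\d+$', '', regex=True)   # 'rep1' -&gt; 'rep', etc.

newdf = frame[data_cols].groupby(base, axis=1).mean()    # row-wise mean per base name
newdf.insert(0, 'sample', frame['sample'])
</code></pre>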
python|pandas|dataframe|loops|mean
0
328
60,079,541
Using tensorflow when a session is already running on the gpu
<p>I am training a neural network with tensorflow 2 (gpu) on my local machine, I'd like to do some tensorflow code in parallel (just loading a model and saving it's graph).</p> <p>When loading the model I get a cuda error. How can I use tensorflow 2 on cpu to load and save a model, when another instance of tensorflow is training on the gpu?</p> <pre><code> 132 self._config = config 133 self._hyperparams['feature_extractor'] = self._get_feature_extractor(hyperparams['feature_extractor']) --&gt; 134 self._input_shape_tensor = tf.constant([input_shape[0], input_shape[1]]) 135 self._build(**self._hyperparams) 136 # save parameter dict for serialization ~/.anaconda3/envs/posenet2/lib/python3.7/site-packages/tensorflow_core/python/framework/constant_op.py in constant(value, dtype, shape, name) 225 """ 226 return _constant_impl(value, dtype, shape, name, verify_shape=False, --&gt; 227 allow_broadcast=True) 228 229 ~/.anaconda3/envs/posenet2/lib/python3.7/site-packages/tensorflow_core/python/framework/constant_op.py in _constant_impl(value, dtype, shape, name, verify_shape, allow_broadcast) 233 ctx = context.context() 234 if ctx.executing_eagerly(): --&gt; 235 t = convert_to_eager_tensor(value, ctx, dtype) 236 if shape is None: 237 return t ~/.anaconda3/envs/posenet2/lib/python3.7/site-packages/tensorflow_core/python/framework/constant_op.py in convert_to_eager_tensor(value, ctx, dtype) 93 except AttributeError: 94 dtype = dtypes.as_dtype(dtype).as_datatype_enum ---&gt; 95 ctx.ensure_initialized() 96 return ops.EagerTensor(value, ctx.device_name, dtype) 97 ~/.anaconda3/envs/posenet2/lib/python3.7/site-packages/tensorflow_core/python/eager/context.py in ensure_initialized(self) 490 if self._default_is_async == ASYNC: 491 pywrap_tensorflow.TFE_ContextOptionsSetAsync(opts, True) --&gt; 492 self._context_handle = pywrap_tensorflow.TFE_NewContext(opts) 493 finally: 494 pywrap_tensorflow.TFE_DeleteContextOptions(opts) InternalError: CUDA runtime implicit initialization on GPU:0 failed. Status: out of memory </code></pre>
<p>It took me a while to find this answer:</p> <pre><code>import os os.environ[&quot;CUDA_VISIBLE_DEVICES&quot;] = &quot;-1&quot; import tensorflow as tf </code></pre> <p>Starting your code with those lines allows you to run your tf code on CPU (avoid using CUDA is the solution, obviously) while at the same time running a heavy GPU loaded training.</p>
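<p>If you prefer not to touch environment variables, TensorFlow 2.1+ also exposes this through the config API; the call has to happen before anything initializes the GPU context:</p> <pre><code>import tensorflow as tf
tf.config.set_visible_devices([], 'GPU')   # hide all GPUs from this process
# ... load and save the model on CPU as usual
</code></pre>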
python|tensorflow|tensorflow2.0
1
329
59,952,399
pandas multiindex - remove rows based on number of sub index
<p>Here is my dataframe :</p> <pre><code>df = pd.DataFrame(pd.DataFrame({"C1" : [0.5, 0.9, 0.1, 0.2, 0.3, 0.5, 0.2], "C2" : [200, 158, 698, 666, 325, 224, 584], "C3" : [15, 99, 36, 14, 55, 62, 37]}, index = pd.MultiIndex.from_tuples([(0,0), (1,0), (1,1), (2,0), (2,1), (3,0), (4,0)], names=['L1','L2']))) </code></pre> <p>df :</p> <pre><code> C1 C2 C3 L1 L2 0 0 0.5 200 15 1 0 0.9 158 99 1 0.1 698 36 2 0 0.2 666 14 1 0.3 325 55 3 0 0.5 224 62 4 0 0.2 584 37 </code></pre> <p>I would like to keep the rows that only have one value in L1 subindex (0 in that case) in order to get something like that :</p> <pre><code> C1 C2 C3 L1 L2 0 0 0.5 200 15 3 0 0.5 224 62 4 0 0.2 584 37 </code></pre> <p>Please, could you let me know if you have any clue to solve this problem ?</p> <p>Sincerely</p>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.transform.html" rel="noreferrer"><code>GroupBy.transform</code></a> by first level with any column with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.size.html" rel="noreferrer"><code>GroupBy.size</code></a> and compare by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.eq.html" rel="noreferrer"><code>Series.eq</code></a> and filter by <a href="http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#boolean-indexing" rel="noreferrer"><code>boolean indexing</code></a>:</p> <pre><code>df1 = df[df.groupby(level=0)['C1'].transform('size').eq(1)] </code></pre> <p>Or extract index of first level by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Index.get_level_values.html" rel="noreferrer"><code>Index.get_level_values</code></a> and filter with inverted mask by <code>~</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Index.duplicated.html" rel="noreferrer"><code>Index.duplicated</code></a> and <code>keep=False</code> for all dupes:</p> <pre><code>df1 = df[~df.index.get_level_values(0).duplicated(keep=False)] </code></pre>
python|python-3.x|pandas|multi-index
6
330
65,352,321
Optimize the Weight of a layer while training CNN
<p>I am trying to train a neural network whose last layer like this,</p> <pre><code>add_5_proba = Add()([out_of_1,out_of_2,out_of_3,out_of_4, out_of_5 ]) # Here I am adding 5 probability from 5 different layer model = Model(inputs=inp, outputs=add_5_proba) </code></pre> <p>But now I want to give weight to them ,Like</p> <pre><code>[a * out_of_1, b* out_of_2, c * out_of_3, d * out_of_4, e * out_of_5] </code></pre> <p>and <code>optimize the weights (a,b,c,d,e) during training</code>. How can I do that ? My idea is Using custom Loss function it can be done, but I have no idea how to implement this.</p> <p>Thanks in advance for your help.</p>
<p>Just create <code>tf.Variables</code>:</p> <pre><code>a = tf.Variable(1.) b = tf.Variable(1.) c = tf.Variable(1.) d = tf.Variable(1.) e = tf.Variable(1.) add_5_proba = Add()([a * out_of_1, b * out_of_2, c * out_of_3, d * out_of_4, e * out_of_5 ]) model = Model(inputs=inp, outputs=add_5_proba) </code></pre> <p>These variables are trainable by default - <a href="https://www.tensorflow.org/api_docs/python/tf/Variable" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/Variable</a>. They should be optimized during training.</p>
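<p>If the bare variables are not picked up as trainable weights of the functional <code>Model</code> in your Keras version, a sketch of the same idea wrapped in a small custom layer (the class and weight names here are illustrative) would be:</p> <pre><code>import tensorflow as tf

class WeightedSum(tf.keras.layers.Layer):
    def build(self, input_shape):
        # one trainable scalar per input branch
        self.w = self.add_weight(name='branch_weights',
                                 shape=(len(input_shape),),
                                 initializer='ones',
                                 trainable=True)

    def call(self, inputs):
        return tf.add_n([self.w[i] * x for i, x in enumerate(inputs)])

weighted = WeightedSum()([out_of_1, out_of_2, out_of_3, out_of_4, out_of_5])
model = Model(inputs=inp, outputs=weighted)
</code></pre>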
python|python-3.x|tensorflow|keras
1
331
49,820,811
What does sess.run( LAYER ) return?
<p>I have tried to search around, but oddly enough, I can't find anything similar. </p> <p>Let's say I have a few fully connected layers:</p> <pre><code>fc_1 = tf.contrib.layers.fully_connected(fc_input, 100) fc_2 = tf.contrib.layers.fully_connected(fc_1, 10) fc_3 = tf.contrib.layers.fully_connected(fc_2, 1) </code></pre> <p>When I run these with <code>sess.run(...)</code> I get a tensor back. </p> <p>What is this tensor? Is it the weights? Gradients? Does <code>sess.run</code>return this for all types of layers we give it?</p>
<p>A fully-connected layer is a math operation that transforms an input tensor into an output tensor. The output tensor contains the values returned by the layer's activation function, which operates on the sum of the weighted values in the layer's input tensor.</p> <p>When you execute <code>sess.run(fc_3)</code>, TensorFlow performs the transformations for the three layers and gives you the output tensor produced by the third layer.</p>
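<p>Concretely, <code>sess.run(fc_3)</code> hands back a NumPy array with the layer's activations, one row per input sample. A small sketch, assuming <code>fc_input</code> is a placeholder with 20 features (the shape is an assumption, not from your code):</p> <pre><code>import numpy as np
import tensorflow as tf

fc_input = tf.placeholder(tf.float32, shape=[None, 20])   # hypothetical input
fc_1 = tf.contrib.layers.fully_connected(fc_input, 100)
fc_2 = tf.contrib.layers.fully_connected(fc_1, 10)
fc_3 = tf.contrib.layers.fully_connected(fc_2, 1)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    out = sess.run(fc_3, feed_dict={fc_input: np.random.rand(4, 20)})
    print(type(out), out.shape)   # numpy.ndarray, shape (4, 1)
</code></pre> <p>Weights and gradients are separate tensors; you would have to pass those explicitly to <code>sess.run</code> to get their values.</p>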
python|tensorflow
1
332
63,995,367
Bin using cumulative sum rather than observations in python
<p>Let's say that I have a data frame that has a column like this:</p> <pre><code>Weight 1 1 0.75 0.5 0.25 0.5 1 1 1 1 </code></pre> <p>I want to create two bins and add a column to my data frame that shows which bin each row is in, but I don't want to bin on the observations (i.e. the first 5 observations got to bin 1 and the last five to bin 2). Instead, I want to bin such that the sum of weight for each bin is equal or as close to equal as possible without changing the order of the column.</p> <p>So, I want the result to be</p> <pre><code>Weight I want Not this 1 1 1 1 1 1 0.75 1 1 0.5 1 1 0.25 1 1 0.5 1 2 1 2 2 1 2 2 1 2 2 1 2 2 </code></pre> <p>Is there something built into Pandas that already does this, or can someone share any ideas on how to make this happen? Thanks!</p>
<p>This should do it:</p> <pre class="lang-py prettyprint-override"><code>df = pd.DataFrame( {'Weight': [1, 1, 0.75, 0.5, 0.25, 0.5, 1, 1, 1, 1]}) weight_sum = df.Weight.sum() df['bin'] = 1 df.loc[df.Weight.cumsum() &gt; weight_sum / 2, 'bin'] = 2 print(df) </code></pre> <p>Output:</p> <pre><code> Weight bin 0 1.00 1 1 1.00 1 2 0.75 1 3 0.50 1 4 0.25 1 5 0.50 1 6 1.00 2 7 1.00 2 8 1.00 2 9 1.00 2 </code></pre>
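<p>If you later need more than two bins, one way to generalize the same cumulative-sum idea (a sketch, not the only possible split rule) is:</p> <pre><code>import numpy as np

n_bins = 2
frac = df.Weight.cumsum() / df.Weight.sum()
df['bin'] = np.ceil(frac * n_bins).clip(1, n_bins).astype(int)
</code></pre> <p>For <code>n_bins = 2</code> this reproduces the output above.</p>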
python|pandas
3
333
63,750,679
Find the remainder mask between 2 masks in numpy for 2D array
<p>Let's say I have a 2D array:</p> <pre><code>main = np.random.random((300, 200)) </code></pre> <p>And I have two masks for this array: e.g.,</p> <pre><code>mask1 = list((np.random.randint((100), size = 50), np.random.randint((200), size = 50))) mask2 = list((np.random.randint((20), size = 10), np.random.randint((20), size = 10))) </code></pre> <p>I want to substitute the main values in the 2D array like:</p> <pre><code>main[mask1]=2 main[mask2]=1 </code></pre> <p>which works great, but I also want to substitue all the indexes that are not mask 1 nor mask 2, by zero.</p> <p>I thought about something like:</p> <pre><code>main[~mask1] &amp; main[~mask2] = 0 </code></pre> <p>which is leading me nowhere, so any help is appreciated!</p>
<p>I think for your requirement a better approach is to construct a zero-filled array of the same shape as <code>main</code> and then assign <code>2</code> and <code>1</code> using <code>mask1</code> and <code>mask2</code>:</p> <pre><code>main = np.zeros(main.shape) main[mask1]=2 main[mask2]=1 </code></pre>
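<p>If you would rather keep your original assignments and only zero out the untouched positions, a boolean &quot;keep&quot; mask built from the same index arrays works too (a sketch, assuming the masks are row/column index pairs as constructed above):</p> <pre><code>keep = np.zeros(main.shape, dtype=bool)
keep[tuple(mask1)] = True
keep[tuple(mask2)] = True

main[tuple(mask1)] = 2
main[tuple(mask2)] = 1
main[~keep] = 0       # everything not covered by either mask becomes zero
</code></pre>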
numpy|multidimensional-array|mask
1
334
63,792,503
How to color nodes within networkx using a column in Pandas
<p>I have this dataset:</p> <pre><code> User Val Color 92 Laura NaN red 100 Laura John red 148 Laura Mike red 168 Laura Mirk red 293 Laura Sara red 313 Laura Sim red 440 Martyn Pierre orange 440 Martyn Hugh orange 440 Martyn Lauren orange 440 Martyn Sim orange </code></pre> <p>I would like to assign to each User (no duplicates) the corresponding colour: in this example, the node called Laura should be red; the node called Martyn should be orange; the other nodes (John, Mike, Mirk, Sara, Sim, Pierrre, Hugh and Lauren) should be in green. I have tried to use this column (Color) to define a set of colours within my code by using networkx, but the approach seems to be wrong, since the nodes are not coloured as I previously described, i.e. as I would expect. Please see below the code I have used:</p> <p>I am using the following code:</p> <pre><code>G = nx.from_pandas_edgelist(df, 'User', 'Val') labels = [i for i in dict(G.nodes).keys()] labels = {i:i for i in dict(G.nodes).keys()} colors = df[[&quot;User&quot;, &quot;Color&quot;]].drop_duplicates()[&quot;Color&quot;] plt.figure(3,figsize=(30,50)) pos = nx.spring_layout(G) nx.draw(G, node_color = df.Color, pos = pos) net = nx.draw_networkx_labels(G, pos = pos) </code></pre>
<p>Looks like you're on the right track, but got a couple of things wrong. Along with using <code>drop_duplicates</code>, build a dictionary and use it to look up the color in <code>nx.draw</code>. Also, you don't need to construct a <code>labels</code> dictionary; <code>nx.draw</code> can handle that for you.</p> <pre><code>G = nx.from_pandas_edgelist(df, 'User', 'Val')

d = dict(df.drop_duplicates(subset=['User','Color'])[['User','Color']]
         .to_numpy().tolist())
# {'Laura': 'red', 'Martyn': 'orange'}

nodes = G.nodes()
plt.figure(figsize=(10,6))
nx.draw(G, with_labels=True,
        nodelist=nodes,
        node_color=[d.get(i,'lightgreen') for i in nodes],
        node_size=1000)
</code></pre> <p><a href="https://i.stack.imgur.com/QQdhm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QQdhm.png" alt="enter image description here" /></a></p>
python|pandas|networkx
2
335
63,298,842
Placing dataframes into excel sheets
<p>i have two dataframes; df and df2. I need to place them into an excel, with df being in one sheet and df2 being in another sheet. What would be the easiest way to do this in python?</p>
<p>Refer <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_excel.html" rel="nofollow noreferrer">Documentation</a>:</p> <pre><code>with pd.ExcelWriter('output.xlsx') as writer: df.to_excel(writer, sheet_name='Sheet_name_1') df2.to_excel(writer, sheet_name='Sheet_name_2') </code></pre>
python|excel|pandas|dataframe
1
336
67,935,182
Website crawling based on keyword in Excel file
<p>I would like to crawl the website price based on the search keyword on my keyword.xlsx file , the first input should be dyson, second is lego, third input should be sony, but my result in the attached image only has dyson, do you know why?</p> <p><a href="https://i.stack.imgur.com/sL7lw.jpg" rel="nofollow noreferrer">image is here</a></p> <pre><code>import time from random import randint import ast import requests from bs4 import BeautifulSoup #A python library to help you to exract HTML information headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36'} import xlrd import pandas as pd df_keywords = pd.read_excel('keyword.xlsx', sheet_name='Sheet1', usecols=&quot;A&quot;) workbook = xlrd.open_workbook('keyword.xlsx') worksheet = workbook.sheet_by_name('Sheet1') index=df_keywords.index number_of_row=len(index) print(number_of_row) #worksheet.cell(2,0).value for i in range (1,number_of_row+1): keyword_input=worksheet.cell(i,0).value print (keyword_input) prefix=&quot;https://tw.buy.yahoo.com/search/product?disp=list&amp;p=&quot; sortbyprice=&quot;&amp;sort=price&quot; url=prefix+keyword_input+sortbyprice r=requests.get(url) soup=BeautifulSoup(r.text) for i in soup.findAll(&quot;div&quot;, {&quot;class&quot;:&quot;ListItem_price_2CMKZ&quot;}): lowest=i.find(&quot;span&quot;,{&quot;class&quot;:&quot;ListItem_priceContent_5WbI9&quot;}).text.strip() print(lowest) lowest_first=lowest.split(&quot;&quot;,1)[0] print(lowest_first) </code></pre>
<p>There's a few issues here. First, I'm not sure what <code>lowest_first=lowest.split(&quot;&quot;,1)[0]</code> is supposed to be doing in your code. It is throwing an error in your code preventing it from hitting the next iteration of your for loop. You can't split a string on nothing (&quot;&quot;). If you are trying to get rid of the '$', you can just do <code>lowest[1:]</code>.</p> <p>Second, you can accomplish your task directly from <code>pandas</code> without having to call <code>xlrd</code> (which is often used as the backend engine for reading excel files (along with <code>openpyxl</code>).</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd df_keywords = pd.read_excel('keyword.xlsx') for keyword in df_keywords['keyword'].to_list(): prefix=&quot;https://tw.buy.yahoo.com/search/product?disp=list&amp;p=&quot; print(prefix + keyword) </code></pre> <p>Output</p> <pre><code>https://tw.buy.yahoo.com/search/product?disp=list&amp;p=dyson https://tw.buy.yahoo.com/search/product?disp=list&amp;p=lego https://tw.buy.yahoo.com/search/product?disp=list&amp;p=sony </code></pre>
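<p>Putting it together, a sketch of the whole loop (assuming the keyword column is the first column of the sheet and that the CSS class names from your snippet are still what the site serves) could look like:</p> <pre><code>import pandas as pd
import requests
from bs4 import BeautifulSoup

headers = {'User-Agent': 'Mozilla/5.0'}
prefix = "https://tw.buy.yahoo.com/search/product?disp=list&amp;p="
sortbyprice = "&amp;sort=price"

df_keywords = pd.read_excel('keyword.xlsx', sheet_name='Sheet1', usecols="A")

for keyword in df_keywords.iloc[:, 0].astype(str):
    r = requests.get(prefix + keyword + sortbyprice, headers=headers)
    soup = BeautifulSoup(r.text, 'html.parser')
    for item in soup.findAll("div", {"class": "ListItem_price_2CMKZ"}):
        lowest = item.find("span", {"class": "ListItem_priceContent_5WbI9"}).text.strip()
        print(keyword, lowest.lstrip('$'))   # strip the currency sign instead of split("")
</code></pre>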
python|excel|pandas
0
337
67,997,979
Pandas DataFrame create new columns based on a logic dependent on other columns with cumulative counting rule
<p>I have a DataFrame originally as follows:</p> <p><code>d1={'on':[0,1,0,1,0,0,0,1,0,0,0],'off':[0,0,0,0,0,0,1,0,1,0,1]}</code></p> <p><a href="https://i.stack.imgur.com/DCbJ6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DCbJ6.png" alt="Original" /></a></p> <p>My end objective is to add a new column 'final' where it will show a value of '1' once an 'on' indicator' is triggered (ignoring any duplicate) but then 'final' is switched back to '0' if the 'off' indicator is triggered AND ONLY when the 'on' sign was triggered for 3 rows. I did try coming up with any code but failed to tackle it at all.</p> <p>My desired output is as follows:</p> <p><a href="https://i.stack.imgur.com/MpUlk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MpUlk.png" alt="Desired" /></a></p> <p>Column 'final' is first triggered in row 1 when the 'on' indicator is switched to 1. 'on' indictor in row 3 is ignored as it is just a redundant signal. 'off' indictor at row 6 is triggered and the 'final' value is switched back to 0 because it has been turned on for more than 3 rows already, unlike the case in row 8 where the 'off' indicator is triggered but the 'final' value cannot be switched off until encountering another 'off' indicator in row 10 because that was the time when the 'final' value has been switched off for &gt; 3 rows.</p> <p>Thank you for assisting. Appreciate.</p>
<p>One solution using a &quot;state machine&quot; implemented with <code>yield</code>:</p> <pre class="lang-py prettyprint-override"><code>def state_machine(): on, off = yield cnt, current = 0, on while True: current = int(on or current) cnt += current if off and cnt &gt; 3: cnt = 0 current = 0 on, off = yield current machine = state_machine() next(machine) df = pd.DataFrame(d1) df['final'] = df.apply(lambda x: machine.send((x['on'], x['off'])), axis=1) print(df) </code></pre> <p>Prints:</p> <pre class="lang-none prettyprint-override"><code> on off final 0 0 0 0 1 1 0 1 2 0 0 1 3 1 0 1 4 0 0 1 5 0 0 1 6 0 1 0 7 1 0 1 8 0 1 1 9 0 0 1 10 0 1 0 </code></pre>
python|pandas|dataframe
2
338
67,980,140
How to change a non top 3 values columns in a dataframe in Python
<p>I have a dataframe that was made out of BOW results called df_BOW</p> <p>dataframe looks like this</p> <pre><code>df_BOW Out[42]: blue drama this ... book mask 0 3 0 1 ... 1 0 1 0 1 0 ... 0 4 2 0 1 3 ... 6 0 3 6 0 0 ... 1 0 4 7 2 0 ... 0 0 ... ... ... ... ... ... ... 81991 0 0 0 ... 0 1 81992 0 0 0 ... 0 1 81993 3 3 5 ... 4 1 81994 4 0 0 ... 0 0 81995 0 1 0 ... 9 2 </code></pre> <p>this data frame has around 12,000 column and 82,000 rows</p> <p>I want to reduce the number of columns by doing this</p> <p>for each row keep only top 3 columns and make everything else 0</p> <p>so for row number 543 ( the original record looks like this)</p> <pre><code> blue drama this ... book mask 543 1 11 21 ... 7 4 </code></pre> <p>It should become like this</p> <pre><code> blue drama this ... book mask 543 0 11 21 ... 7 0 </code></pre> <p>only top 3 columns kept (drama, this, book) all other columns became zeros</p> <pre><code> blue drama this ... book mask 929 5 3 2 ... 4 3 </code></pre> <p>will become</p> <pre><code> blue drama this ... book mask 929 5 3 0 ... 4 0 </code></pre> <p>at the end of I should remove all columns that are zeros for all rows</p> <p>I start putting this function to loop all rows and all columns</p> <pre><code>for i in range(0, len(df_BOW.index)): Col1No = 0 Col1Val = 0 Col2No = 0 Col2Val = 0 Col3No = 0 Col3Val = 0 for j in range(0, len(df_BOW.columns)): if (df_BOW.iloc[i,j] &gt; min(Col1Val, Col2Val, Col3Val)): if (Col1Val &lt;= Col2Val) &amp; (Col1Val &lt;= Col3Val): df_BOW.iloc[i,Col1No] = 0 Col1Val = df_BOW.iloc[i,j] Col1No = j elif (Col2Val &lt;= Col1Val) &amp; (Col2Val &lt;= Col3Val): df_BOW.iloc[i,Col2No] = 0 Col2Val = df_BOW.iloc[i,j] Col2No = j elif (Col3Val &lt;= Col1Val) &amp; (Col3Val &lt;= Col2Val): df_BOW.iloc[i,Col3No] = 0 Col3Val = df_BOW.iloc[i,j] Col3No = j </code></pre> <p>I don't think this loop is the best way to do that.</p> <p>beside it will become impossible to do for top 50 columns with this loop.</p> <p>is there a better way to do that?</p>
<p>You can use <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.nlargest.html" rel="nofollow noreferrer"><code>pandas.Series.nlargest</code></a>, pass keep as <code>first</code> to include the first record only if multiple value exists for top 3 largest values. Finally use <code>fillna(0)</code> to fill all the <code>NaN</code> columns with 0</p> <pre class="lang-py prettyprint-override"><code>df.apply(lambda row: row.nlargest(3, keep='first'), axis=1).fillna(0) </code></pre> <p><strong>OUTPUT:</strong></p> <pre class="lang-py prettyprint-override"><code> blue book drama mask this 0 0.0 1.0 0.0 0.0 1.0 1 1.0 0.0 1.0 4.0 0.0 2 2.0 6.0 0.0 0.0 3.0 3 3.0 1.0 0.0 0.0 0.0 4 4.0 0.0 2.0 0.0 0.0 5 0.0 0.0 0.0 1.0 0.0 6 0.0 0.0 0.0 1.0 0.0 7 3.0 4.0 0.0 0.0 5.0 8 4.0 0.0 0.0 0.0 0.0 9 0.0 9.0 1.0 2.0 0.0 </code></pre>
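<p>For the last step in your question (removing columns that end up all zero), a small follow-up sketch:</p> <pre><code>out = df.apply(lambda row: row.nlargest(3, keep='first'), axis=1).fillna(0)
out = out.loc[:, (out != 0).any(axis=0)]   # keep only columns with at least one non-zero
</code></pre>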
python|pandas
3
339
67,840,664
CNN-LSTM with TimeDistributed Layers behaving weirdly when trying to use tf.keras.utils.plot_model
<p>I have a CNN-LSTM that looks as follows;</p> <pre><code>SEQUENCE_LENGTH = 32 BATCH_SIZE = 32 EPOCHS = 30 n_filters = 64 n_kernel = 1 n_subsequences = 4 n_steps = 8 def DNN_Model(X_train): model = Sequential() model.add(TimeDistributed( Conv1D(filters=n_filters, kernel_size=n_kernel, activation='relu', input_shape=(n_subsequences, n_steps, X_train.shape[3])))) model.add(TimeDistributed(Conv1D(filters=n_filters, kernel_size=n_kernel, activation='relu'))) model.add(TimeDistributed(MaxPooling1D(pool_size=2))) model.add(TimeDistributed(Flatten())) model.add(LSTM(100, activation='relu')) model.add(Dense(100, activation='relu')) model.add(Dense(1, activation='sigmoid')) model.compile(loss='mse', optimizer='adam') return model </code></pre> <p>I'm using this CNN-LSTM for a multivariate time series forecasting problem. the CNN-LSTM input data comes in the 4D format: [samples, subsequences, timesteps, features]. For some reason, I need <code>TimeDistributed</code> Layers; or I get errors like <code>ValueError: Input 0 of layer conv1d is incompatible with the layer: expected ndim=3, found ndim=4. Full shape received: [None, 4, 8, 35]</code>. I think this has to do with the fact that <code>Conv1D</code> is officially not meant for time series, so to preserve time-series data shape we need to use a wrapper layer like <code>TimeDistributed</code>. I don't really mind using TimeDistributed layers - They're wrappers and if they make my model work I am happy. However, when I try to visualize my model with</p> <pre><code> file = 'CNN_LSTM_Visualization.png' tf.keras.utils.plot_model(model, to_file=file, show_layer_names=False, show_shapes=False) </code></pre> <p>The resulting visualization only shows the <code>Sequential()</code>:</p> <p><a href="https://i.stack.imgur.com/d6n0u.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/d6n0u.png" alt="enter image description here" /></a></p> <p>I suspect this has to do with the TimeDistributed layers and the model not being built yet. I cannot call <code>model.summary()</code> either - it throws <code>ValueError: This model has not yet been built. Build the model first by calling </code>build()<code>or calling</code>fit()<code>with some data, or specify an</code>input_shape<code> argument in the first layer(s) for automatic build</code> Which is strange because I <em>have</em> specified the input_shape, albeit in the <code>Conv1D</code> layer and not in the <code>TimeDistributed</code> wrapper.</p> <p>I would like a working model together with a working <code>tf.keras.utils.plot_model</code> function. Any explanation as to why I need TimeDistributed and why it makes the plot_model function behave weirdly would be greatly awesome.</p>
<p>An alternative to using an <code>Input</code> layer is to simply pass the <code>input_shape</code> to the <code>TimeDistributed</code> wrapper, and not the <code>Conv1D</code> layer:</p> <pre class="lang-py prettyprint-override"><code>def DNN_Model(X_train): model = Sequential() model.add(TimeDistributed( Conv1D(filters=n_filters, kernel_size=n_kernel, activation='relu'), input_shape=(n_subsequences, n_steps, X_train.shape[3]))) model.add(TimeDistributed(Conv1D(filters=n_filters, kernel_size=n_kernel, activation='relu'))) model.add(TimeDistributed(MaxPooling1D(pool_size=2))) model.add(TimeDistributed(Flatten())) model.add(LSTM(100, activation='relu')) model.add(Dense(100, activation='relu')) model.add(Dense(1, activation='sigmoid')) model.compile(loss='mse', optimizer='adam') return model </code></pre>
python|tensorflow|keras|deep-learning|conv-neural-network
2
340
61,518,032
problem with pandas drop_duplicates removing empty values
<p>I'm using drop_duplicates to remove duplicates from my dataframe based on a column. The problem is that this column is empty for some entries, and those rows end up being removed too. Is there a way to make the function ignore the empty values? Here is an example:</p> <pre><code> Title summary 0 TITLE A summaryA 1 TITLE A summaryB 2 summaryC 3 summaryD </code></pre> <p>Using this </p> <pre><code>data.drop_duplicates(subset ="TITLE", keep = 'first', inplace = True) </code></pre> <p>I get a result like this:</p> <pre><code> Title summary 0 TITLE A summaryA 2 summaryC </code></pre> <p>But since the last two rows are not duplicates, I want to keep them. Is there a way for drop_duplicates to ignore empty values?</p>
<p>Fill missing values with the index number? Maybe not the prettiest way but it works</p> <pre><code>df = pd.DataFrame( {'Title':['TITLE A', 'TITLE A', None, None], 'summary':['summaryA', 'summaryB', 'summaryC', 'summaryD']} ) df['_id'] = df.index df['_id'] = df['_id'].apply(str) df['Title2'] = df['Title'].fillna(df['_id']) df.drop_duplicates(subset ="Title2", keep = 'first') </code></pre>
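<p>An alternative sketch that avoids the helper columns: de-duplicate only the rows that actually have a title, then put the empty ones back (assuming &quot;empty&quot; means NaN in <code>Title</code>):</p> <pre><code>mask = df['Title'].isna()
out = pd.concat([
    df[~mask].drop_duplicates(subset='Title', keep='first'),
    df[mask]
]).sort_index()
</code></pre>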
pandas|drop-duplicates
0
341
61,194,028
Adding labels at end of line chart in Altair
<p>So I have been trying to get it so there is a label at the end of each line giving the name of the country, then I can remove the legend. Have tried playing with <code>transform_filter</code> but no luck.</p> <p>I used data from here <a href="https://ourworldindata.org/coronavirus-source-data" rel="nofollow noreferrer">https://ourworldindata.org/coronavirus-source-data</a> I cleaned and reshaped the data so it looks like this:-</p> <pre><code> index days date country value 0 1219 0 2020-03-26 Australia 11.0 1 1220 1 2020-03-27 Australia 13.0 2 1221 2 2020-03-28 Australia 13.0 3 1222 3 2020-03-29 Australia 14.0 4 1223 4 2020-03-30 Australia 16.0 5 1224 5 2020-03-31 Australia 19.0 6 1225 6 2020-04-01 Australia 20.0 7 1226 7 2020-04-02 Australia 21.0 8 1227 8 2020-04-03 Australia 23.0 9 1228 9 2020-04-04 Australia 30.0 </code></pre> <pre><code>import altair as alt countries_list = ['Australia', 'China', 'France', 'Germany', 'Iran', 'Italy','Japan', 'South Korea', 'Spain', 'United Kingdom', 'United States'] chart = alt.Chart(data_core_sub).mark_line().encode( alt.X('days:Q'), alt.Y('value:Q', scale=alt.Scale(type='log')), alt.Color('country:N', scale=alt.Scale(domain=countries_list,type='ordinal')), ) labels = alt.Chart(data_core_sub).mark_text().encode( alt.X('days:Q'), alt.Y('value:Q', scale=alt.Scale(type='log')), alt.Text('country'), alt.Color('country:N', legend=None, scale=alt.Scale(domain=countries_list,type='ordinal')), ).properties(title='COVID-19 total deaths', width=600) alt.layer(chart, labels).resolve_scale(color='independent') </code></pre> <p>This is the current mess that the chart is in.</p> <p><a href="https://i.stack.imgur.com/fY2PH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fY2PH.png" alt="enter image description here" /></a></p> <p>How would I go about just showing the last 'country' name?</p> <h1>EDIT</h1> <p>Here is the result. I might look at adjusting some of the countries separately as adjusting as a group means that some of the labels are always badly positioned no matter what I do with the <code>dx</code> and <code>dy</code> alignment.</p> <p><a href="https://i.stack.imgur.com/642n0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/642n0.png" alt="enter image description here" /></a></p>
<p>You can do this by aggregating the x and y encodings. You want the text to be at the maximum x value, so you can use a <code>'max'</code> aggregate in x. For the y-value, you want the y value associated with the max x-value, so you can use an <code>{"argmax": "x"}</code> aggregate.</p> <p>With a bit of adjustment of text alignment, the result looks like this:</p> <pre><code>labels = alt.Chart(data_core_sub).mark_text(align='left', dx=3).encode( alt.X('days:Q', aggregate='max'), alt.Y('value:Q', aggregate={'argmax': 'days'}, scale=alt.Scale(type='log')), alt.Text('country'), alt.Color('country:N', legend=None, scale=alt.Scale(domain=countries_list,type='ordinal')), ).properties(title='COVID-19 total deaths', width=600) </code></pre>
python|pandas|label|altair
8
342
61,309,146
Using the Python WITH statement to create temporary variable
<p>Suppose I have Pandas data. Any data. I import <code>seaborn</code> to make a colored version of the correlation between variables. Instead of passing the correlation expression into the heatmap function, and instead of creating a one-time variable to store the correlation output, how can I use the <code>with</code> statement to create a temporary variable that no longer exists after the heatmap is plotted?</p> <p><strong>Doesn't work</strong></p> <pre><code># Assume: seaborn = sns, Data is heatmapable with mypandas_df.correlation(method="pearson") as heatmap_input: # possible other statements sns.heatmap(heatmap_input) # possible other statements </code></pre> <p>If this existed, then after seaborn plots the map, <code>heatmap_input</code> would no longer exist as a variable. I would like that functionality.</p> <p><strong>Long way</strong></p> <pre><code># this could be temporary but is now global tcbtbing = mypandas_df.correlation(method="pearson") sns.heatmap(tcbtbing) </code></pre> <p><strong>Compact way</strong></p> <pre><code>sns.heatmap( mypandas_df.correlation(method="pearson") ) </code></pre> <p>I'd like to use the <code>with</code> statement (or a similar <strong>short</strong> construction) to avoid the Long Way and the Compact way, but leave room for other manipulations, such as to the plot itself.</p>
<p>You need to implement <code>__enter__</code> and <code>__exit__</code> for the class you want to use this way. See: <a href="https://stackoverflow.com/questions/3774328/implementing-use-of-with-object-as-f-in-custom-class-in-python">Implementing use of &#39;with object() as f&#39; in custom class in python</a></p>
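<p>A minimal sketch of such a wrapper (the class and attribute names here are made up, and it uses pandas' real <code>corr</code> method):</p> <pre><code>class TempCorr:
    def __init__(self, df, method='pearson'):
        self.df = df
        self.method = method

    def __enter__(self):
        self.value = self.df.corr(method=self.method)
        return self.value

    def __exit__(self, exc_type, exc_value, traceback):
        del self.value          # drop the stored correlation
        return False            # do not swallow exceptions

with TempCorr(mypandas_df) as heatmap_input:
    sns.heatmap(heatmap_input)
</code></pre> <p>Note that Python's <code>with</code> does not scope names, so <code>heatmap_input</code> still exists after the block; the context manager only controls setup and teardown, not variable lifetime.</p>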
python-3.x|pandas|with-statement
1
343
61,504,356
Cross-validation of neural network: How to treat the number of epochs?
<p>I'm implementing a pytorch neural network (regression) and want to identify the best network topology, optimizer etc.. I use cross validation, because I have x databases of measurements and I want to evaluate whether I can train a neural network with a subset of the x databases and apply the neural network to the unseen databases. Therefore, I also introduce a test database, which I doesn't use in the phase of the hyperparameter identification. I am confused on how to treat the number of epochs in cross validation, e.g. I have a number of epochs = 100. There are two options:</p> <ol> <li><p>The number of epochs is a hyperparameter to tune. In each epoch, the mean error across all cross validation iterations is determined. After models are trained with all network topologies, optimizers etc. the model with the smallest mean error is determined and has parameters like: <br />-network topology: 1<br /> -optimizer: SGD<br /> -number of epochs: 54<br /> To calculate the performance on the test set, a model is trained with exactly these parameters (number of epochs = 54) on the training and the validation data. Then it is applied and evaluated on the test set.</p></li> <li><p>The number of epochs is NOT a hyperparameter to tune. Models are trained with all the network topologies, optimizers etc. For each model, the number of epochs, where the error is the smallest, is used. The models are compared and the best model can be determined with parameters like:<br /> -network topology: 1 <br /> -optimizer: SGD<br /> To calculate the performance on the test data, a “simple” training and validation split is used (e.g. 80-20). The model is trained with the above parameters and 100 epochs on the training and validation data. Finally, a model with a number of epochs yielding the smallest validation error, is evaluated on the test data.</p></li> </ol> <p>Which option is the correct or the better one?</p>
<p>It is better not to treat the number of epochs as a hyperparameter to fine-tune; option 2 is the better choice. Conversely, if the number of epochs were fixed in advance, you would not need a validation set at all. The validation set is what gives you the optimal epoch for the saved model.</p>
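<p>In practice this means tracking the best validation epoch while training for the full budget; a rough sketch (the training and evaluation helpers here are hypothetical):</p> <pre><code>import torch

best_val, best_epoch = float('inf'), None
for epoch in range(100):
    train_one_epoch(model, train_loader)       # hypothetical helper
    val_loss = evaluate(model, val_loader)     # hypothetical helper
    if val_loss &lt; best_val:
        best_val, best_epoch = val_loss, epoch
        torch.save(model.state_dict(), 'best_model.pt')
</code></pre> <p>The checkpoint saved at <code>best_epoch</code> is then the one you evaluate on the test data.</p>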
python|neural-network|pytorch|cross-validation
0
344
68,549,090
pandas concat two column into a new one
<p>I have a csv file with the following columns:</p> <pre><code>timestamp. message. name. DestinationUsername. sourceUsername 13.05. hello. hello. name1. 13.05. hello. hello. name2. 43565 </code></pre> <p>What I would like to achieve is to merge together <code>DestinationUsername</code> and <code>SourceUsername</code> into a new column called <code>ID</code>.</p> <p>What I have done so far is the following</p> <pre><code>f=pd.read_csv('file.csv') f['userID'] = f.destinationUserName + f.sourceUserName keep_col = ['@timestamp', 'message', 'name', 'destinationUserName', 'sourceUserName', 'userID'] new_f = f[keep_col] new_f.to_csv(&quot;newFile.csv&quot;, index=False) </code></pre> <p>But this does not work as expected, because in the output I can see that if one of the columns <code>destinationUserName</code> or <code>sourceUsername</code> is empty, then the <code>userID</code> is empty; the userID only gets populated if <code>both</code> destinationUserName and sourceUserName are already populated.</p> <p>Can anyone help me understand how I can get past this problem, please?</p> <p>And please, if you need more info, just ask.</p>
<p>you can typecast the column to string and then remove 'nan' by <code>replace()</code> method:</p> <pre><code>df['ID']=(df['DestinationUsername'].astype(str) + df['sourceUsername'].astype(str).replace('nan','',regex=True)) </code></pre> <p><strong>OR</strong></p> <pre><code>df['ID']=df[['DestinationUsername','sourceUsername']].astype(str) .agg(''.join,1) .replace('nan','',regex=True) </code></pre> <p><strong>Note:</strong> you can also use <code>apply()</code> in place of <code>agg()</code> method</p> <p>output of <code>df['ID']</code>:</p> <pre><code>0 name1. 1 name2.43565.0 dtype: object </code></pre>
python-3.x|pandas|dataframe
1
345
68,704,376
Transform or change values of columns in based on values of others columns
<p>I have a dataframe that contains 5 columns. What I would like to do is to change the last 4 columns to the first column.</p> <p>Basically if the value of the first column is below a certain threshold, the following columns are modified and if this value is higher than the threshold there is no change.</p> <p>So I tried this :</p> <pre><code>import pandas as pd df = pd.DataFrame({ 'col1' : [0.1, 0.3, 0.1, 0.2], 'col2' : [2,4,3,7], 'col3' : [3,4,4,9], 'col4' : [4,2,2,6], 'col5' : [0.3, 2.1, 1.0, .9], }) def motif(col1, col2, col3, col4, col5): col2 = col2 col3 = col3 col4 = col4 col5 = col5 if col1 &lt;=.15: col2 = col2 * .15 col3 = col3 * .15 col4 = col4 * .15 col5 = col5 * .15 return col2, col3, col4, col5 else: return col2, col3, col4, col5 df.apply(lambda x: modify(x[col1], x[col2], x[col3], x[col4], x[col5]), axis=1) </code></pre> <p>But this does not work. If you have any ideas I would be very grateful</p>
<p>We can use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.loc.html" rel="nofollow noreferrer"><code>loc</code></a> to select rows where <code>col1</code> is less than or equal to <code>.15</code> then multiply the rest of the columns by <code>.15</code>:</p> <pre><code>df.loc[df['col1'] &lt;= 0.15, 'col2':] *= 0.15 </code></pre> <p><code>df</code>:</p> <pre><code> col1 col2 col3 col4 col5 0 0.1 0.30 0.45 0.6 0.045 1 0.3 4.00 4.00 2.0 2.100 2 0.1 0.45 0.60 0.3 0.150 3 0.2 7.00 9.00 6.0 0.900 </code></pre> <hr /> <p>Naturally other column selections work if all columns after <code>col2</code> is overly broad:</p> <pre><code>df.loc[df['col1'] &lt;= 0.15, ['col2', 'col3', 'col4', 'col5']] *= 0.15 </code></pre> <pre><code>df.loc[df['col1'] &lt;= 0.15, 'col2':'col5'] *= 0.15 </code></pre> <hr /> <p>The mask can also be saved and reused if different columns need different modifications:</p> <pre><code>m = df['col1'] &lt;= 0.15 df.loc[m, 'col2':'col4'] *= 0.15 df.loc[m, 'col5'] *= 0.5 # col5 is different than col2-4 </code></pre> <p><code>df</code>:</p> <pre><code> col1 col2 col3 col4 col5 0 0.1 0.30 0.45 0.6 0.15 1 0.3 4.00 4.00 2.0 2.10 2 0.1 0.45 0.60 0.3 0.50 3 0.2 7.00 9.00 6.0 0.90 </code></pre> <hr /> <p>The <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.apply.html" rel="nofollow noreferrer"><code>apply</code></a> can work (although it is slower and a <em>lot</em> more code), but since <code>apply</code> can produce both aggregated and unaggregated results the overwritten columns will need explicitly defined and the result needs to be a <code>Series</code> not a <code>tuple</code>:</p> <pre><code>def modify(col1, col2, col3, col4, col5): if col1 &lt;= .15: col2 = col2 * .15 col3 = col3 * .15 col4 = col4 * .15 col5 = col5 * .15 return pd.Series([col2, col3, col4, col5]) df[['col2', 'col3', 'col4', 'col5']] = df.apply(lambda x: modify( x['col1'], x['col2'], x['col3'], x['col4'], x['col5'] ), axis=1) </code></pre> <p><code>df</code>:</p> <pre><code> col1 col2 col3 col4 col5 0 0.1 0.30 0.45 0.6 0.045 1 0.3 4.00 4.00 2.0 2.100 2 0.1 0.45 0.60 0.3 0.150 3 0.2 7.00 9.00 6.0 0.900 </code></pre>
python|pandas|dataframe
3
346
68,656,060
KeyError: 'Failed to format this callback filepath: Reason: \'lr\''
<p>I recently switched form Tensorflow 2.2.0 to 2.4.1 and now I have a problem with <code>ModelCheckpoint</code> callback path. This code works fine if I use an environment with tf 2.2 but get an error when I use tf 2.4.1.</p> <pre><code>checkpoint_filepath = 'path_to/temp_checkpoints/model/epoch-{epoch}_loss-{lr:.2e}_loss-{val_loss:.3e}' checkpoint = ModelCheckpoint(checkpoint_filepath, monitor='val_loss') history = model.fit(training_data, training_data, epochs=10, batch_size=32, shuffle=True, validation_data=(validation_data, validation_data), verbose=verbose, callbacks=[checkpoint]) </code></pre> <p>Error:</p> <blockquote> <p>KeyError: 'Failed to format this callback filepath: &quot;path_to/temp_checkpoints/model/epoch-{epoch}_loss-{lr:.2e}_loss-{val_loss:.3e}&quot;. Reason: 'lr''</p> </blockquote>
<p>In <a href="https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/ModelCheckpoint" rel="nofollow noreferrer"><code>ModelCheckpoint</code></a>, formatted name of <code>filepath</code> argument, can only be contain: <strong><code>epoch</code> + keys in <code>logs</code> after epoch ends</strong>.</p> <p>You can see available keys in logs like this:</p> <pre><code>class CustomCallback(tf.keras.callbacks.Callback): def on_epoch_end(self, epoch, logs=None): keys = list(logs.keys()) print(&quot;Log keys: {}&quot;.format(keys)) model.fit(..., callbacks=[CustomCallback()]) </code></pre> <p>If you run code above, you will see something like this:</p> <pre><code>Log keys: ['loss', 'mean_absolute_error', 'val_loss', 'val_mean_absolute_error'] </code></pre> <p>Which shows you available keys you can use (plus <code>epoch</code>) and <strong><code>lr</code> is not available for you</strong> (You have used 3 keys: <code>epoch</code>, <code>lr</code> and <code>val_loss</code> in <code>filepath</code> name).</p> <hr /> <p><strong>Solution:</strong></p> <p>You can add learning rate to logs yourself:</p> <pre><code>import tensorflow.keras.backend as K class CustomCallback(tf.keras.callbacks.Callback): def on_epoch_end(self, epoch, logs=None): logs.update({'lr': K.eval(self.model.optimizer.lr)}) keys = list(logs.keys()) print(&quot;Log keys: {}&quot;.format(keys)) #you will see now `lr` available checkpoint_filepath = 'path_to/temp_checkpoints/model/epoch-{epoch}_loss-{lr:.2e}_loss-{val_loss:.3e}' checkpoint = ModelCheckpoint(checkpoint_filepath, monitor='val_loss') history = model.fit(training_data, training_data, epochs=10, batch_size=32, shuffle=True, validation_data=(validation_data, validation_data), verbose=verbose, callbacks=[checkpoint, CustomCallback()]) </code></pre>
tensorflow|keras|callback
1
347
53,309,583
Reading a datafile (abalone) and converting to numpy array
<p>When I try to load the UCI abalone data file as follows:</p> <pre><code>dattyp = [('sex',object),('length',float),('diameter',float),('height',float),('whole weight',float),('shucked weight',float),('viscera weight',float),('shell weight',float),('rings',int)] abalone_data = np.loadtxt('C:/path/abalone.dat',dtype = dattyp, delimiter = ',') print(abalone_data.shape) print(abalone_data[0]) &gt;&gt;(4177,) ('M', 0.455, 0.365, 0.095, 0.514, 0.2245, 0.101, 0.15, 15) </code></pre> <p><code>Abalone_data</code> is an array with 1 column instead of 9. Later on, when I want to add other data as extra columns, this gives me problems. Is there any way to transform this data to a <code>(4177, 9)</code> matrix where I can do the usual adding of columns etc? <br>Thanks!</p>
<p>You can use pandas:</p> <pre><code>import pandas as pd abalone_data = pd.read_csv('C:/path/abalone.dat', header=None).values abalone_data.shape </code></pre> <p>OUtput:</p> <pre><code>(4177, 9) </code></pre>
python|numpy
2
348
65,638,874
Python error messages including "ImportError: cannot import name 'string_int_label_map_pb2'"
<p>So I have been trying to get a captcha solver I found <a href="https://drive.google.com/file/d/1tSrLELxq4YMn1-whRQ5yvU4Ns7n61MPQ/view" rel="nofollow noreferrer">here</a> to work for quite some time now. I have fixed many weird problems with that time, but I honestly don't know what's wrong this time. So I am starting the program and I get some error messages. I am using python 3.6.2 and tensorflow 1.15 for this and this is the whole message:</p> <pre><code>Traceback (most recent call last): File &quot;C:\Users\Linus\Desktop\captcha solver\main_.py&quot;, line 1, in &lt;module&gt; from CAPTCHA_object_detection import * File &quot;C:\Users\Linus\Desktop\captcha solver\CAPTCHA_object_detection.py&quot;, line 19, in &lt;module&gt; from object_detection.utils import label_map_util File &quot;C:\Users\Linus\AppData\Local\Programs\Python\Python36\lib\site-packages\object_detection\utils\label_map_util.py&quot;, line 21, in &lt;module&gt; from object_detection.protos import string_int_label_map_pb2 ImportError: cannot import name 'string_int_label_map_pb2' </code></pre> <p>I have been focusing on the last line <code>from object_detection.protos import string_int_label_map_</code> I think there is a stackoverflow regarding this last line already, but I have been trying to fix this in different ways already. I somehow came to the idea of installing protoc but ig the installation didn't even work. Can someone help me and/or bring me on the right track? I guess I should also mention that I am quite new to this.</p>
<p>Read through the answers in the linked issue thread; a few of them contain step-by-step guides on installing protoc, and there are many other useful answers there as well: <a href="https://github.com/tensorflow/models/issues/1595" rel="nofollow noreferrer">https://github.com/tensorflow/models/issues/1595</a></p>
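<p>In most reports on that thread the root cause is that the <code>.proto</code> files were never compiled, so <code>string_int_label_map_pb2.py</code> does not exist. The usual fix (run from the <code>models/research</code> directory, assuming that is where the TF models repo is cloned) is:</p> <pre><code>protoc object_detection/protos/*.proto --python_out=.
</code></pre>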
python|python-3.x|tensorflow|object-detection
0
349
63,388,627
How am I able to separate a DataFrame into many DataFrames, based on a label and then do computation for each DataFrame?
<p>I have the following DataFrame:</p> <p><img src="https://i.stack.imgur.com/ISzkd.png" alt="1" /></p> <p>I am trying to make one DataFrame for each unique value in df1['Tub']. Right now I am creating a dictionary and trying to append to each new DataFrame instances where there is a matching Tub. I think my logic is on the right track.</p> <pre><code>tub_df = {} tubs = [] for tub in df1['Tub']: if tub not in tubs: tubs.append(tub) #['Tub 1', 'Tub 2', 'Tub 3'] for tub_name in tubs: for tub_row in df1['Tub']: if tub_row == tub_name: tub_df[tub] = pd.DataFrame.copy(df1.loc[tub_row]) </code></pre> <p>Thank you for any help.</p>
<p>Here is a shorter version, identify unique values in <code>Tub</code> &amp; use dict comprehension to create a filtered <code>dict</code></p> <pre><code>{tub: df1[df1.Tub.eq(tub)] for tub in df1.Tub.unique()} </code></pre>
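<p>An equivalent sketch using <code>groupby</code> directly, which avoids scanning the frame once per unique value:</p> <pre><code>tub_df = dict(tuple(df1.groupby('Tub')))   # {'Tub 1': sub-frame, 'Tub 2': sub-frame, ...}
</code></pre>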
python|pandas|dataframe
2
350
63,535,578
how to i change the format of date from dd-mm-yyyy to dd/mm/yyyy in a csv file
<p><a href="https://i.stack.imgur.com/198ak.png" rel="nofollow noreferrer">image</a></p> <p>I have CSV with date in this format which is to be changed? how can I do that?</p>
<pre><code>import pandas as pd # I have taken an example. You could do a pd.read_csv(filename) to read from file #Input in dd-mm-yyyy format df = pd.DataFrame({'DOB': {0: '26-01-2016', 1: '26-01-2016'}}) #Convert to pandas datetime object df['DOB'] = pd.to_datetime(df.DOB) #Convert to dd/mm/yyyy format('%d/%m/%Y') df['DOB'] = df['DOB'].dt.strftime('%d/%m/%Y') </code></pre> <p>You can read more here: <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.dt.strftime.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.dt.strftime.html</a></p>
python|pandas|date
0
351
63,569,743
Equalizing indexes of Pandas Series to fit into Dataframe
<p>I have a pandas Dataframe that uses a datetime index. I want to add a column onto the dataframe that returns an average of a particular slice of the data. This column does not always include the entire index, I need a way to fill in the missing portions with zeros.</p> <p>Dataframe:<br /> <code>[2020-7-26 | 29.3] [2020-8-02 | 28.2] [2020-8-09 | 26.7] [2020-8-16 | 24.1] [2020-8-30 | 23.2] </code></p> <p>Series I wish to append: Note the missing august 16th<br /> <code>[2020-7-26 | 20.3] [2020-8-02 | 21.2] [2020-8-09 | 23.7] [2020-8-30 | 22.2] </code></p> <p>Is there a way to transform this series into:<br /> <code>[2020-7-26 | 20.3] [2020-8-02 | 21.2] [2020-8-09 | 23.7] [2020-8-16 | 0.0] [2020-8-30 | 22.2] </code><br /> In order to be able to form this Dataframe:<br /> <code>[2020-7-26 | 29.3 | 20.3] [2020-8-02 | 28.2 | 21.2] [2020-8-09 | 26.7 | 23.7] [2020-8-16 | 24.1 | 0.0] [2020-8-30 | 23.2 | 22.2] </code><br /> Thanks in advance!</p>
<p>If I'm understanding you correctly, you simply want to join the two together on their datetime index. Let <code>df</code> be your dataframe with more indices and <code>ser</code> be your series with missing indices.</p> <p>if <code>df</code> is:</p> <pre><code> val date 2019-08-01 1 2019-08-02 2 2019-08-03 3 </code></pre> <p>and <code>ser</code> is:</p> <pre><code>date 2019-08-01 4 2019-08-03 5 </code></pre> <p>It should be simply:</p> <pre><code>df.join(ser,how='left').fillna(0) </code></pre> <p>which yields:</p> <pre><code> val val2 date 2019-08-01 1 4.0 2019-08-02 2 0.0 2019-08-03 3 5.0 </code></pre> <p>as the left join would fill any missing on the right with <code>nans</code>, which <code>fillna()</code> would impute with 0.</p> <p>Make sure your series has a name however otherwise the join doesn't know how to name your new column. You can do so by setting <code>ser.name = 'column_name'</code> before you call join, which in my case here is <code>'val2'</code>.</p> <p>Also if you don't understand why I'm calling <code>how='left'</code> I would recommend you take some time to read into what left,right,outer,inner joins are as it is quite essential to not just preprocessing in python but sql as well. Good luck!</p>
python|pandas|dataframe
0
352
63,408,380
Locating columns values in pandas dataframe with conditions
<p>We have a dataframe (<code>df_source</code>):</p> <pre><code>Unnamed: 0 DATETIME DEVICE_ID COD_1 DAT_1 COD_2 DAT_2 COD_3 DAT_3 COD_4 DAT_4 COD_5 DAT_5 COD_6 DAT_6 COD_7 DAT_7 0 0 200520160941 002222111188 35 200408100500.0 12 200408100400 16 200408100300 11 200408100200 19 200408100100 35 200408100000 43 1 19 200507173541 000049000110 00 190904192701.0 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 2 20 200507173547 000049000110 00 190908185501.0 08 190908185501 NaN NaN NaN NaN NaN NaN NaN NaN NaN 3 21 200507173547 000049000110 00 190908205601.0 08 190908205601 NaN NaN NaN NaN NaN NaN NaN NaN NaN 4 22 200507173547 000049000110 00 190909005800.0 08 190909005800 NaN NaN NaN NaN NaN NaN NaN NaN NaN ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... 159 775 200529000843 000049768051 40 200529000601.0 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 160 776 200529000843 000049015792 00 200529000701.0 33 200529000701 NaN NaN NaN NaN NaN NaN NaN NaN NaN 161 779 200529000843 000049180500 00 200529000601.0 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 162 784 200529000843 000049089310 00 200529000201.0 03 200529000201 61 200529000201 NaN NaN NaN NaN NaN NaN NaN 163 786 200529000843 000049768051 40 200529000401.0 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN </code></pre> <p>We calculated <code>values_cont</code>, a <code>dict</code>, for a subset:</p> <pre><code>v_subset = ['COD_1', 'COD_2', 'COD_3', 'COD_4', 'COD_5', 'COD_6', 'COD_7'] values_cont = pd.value_counts(df_source[v_subset].values.ravel()) </code></pre> <p>We obtained as result (values, counter):</p> <pre><code>00 134 08 37 42 12 40 12 33 3 11 3 03 2 35 2 43 2 44 1 61 1 04 1 12 1 60 1 05 1 19 1 34 1 16 1 </code></pre> <p>Now, the question is:</p> <p>How to locate values in columns corresponding to counter, for instance:</p> <p>How to locate:</p> <pre><code> df['DEVICE_ID'] # corresponding with values ('00') and counter ('134') df['DEVICE_ID'] # corresponding with values ('08') and counter ('37') ... df['DEVICE_ID'] # corresponding with values ('16') and counter ('1') </code></pre>
<ul> <li>I believe you need <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.melt.html" rel="nofollow noreferrer"><code>DataFrame.melt</code></a> with aggregate join for <code>ID</code> and <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.size.html" rel="nofollow noreferrer"><code>GroupBy.size</code></a> for counts.</li> <li>This implementation will result in a dataframe with a column (<code>value</code>) for the <code>CODES</code>, all the associated <code>DEVICE_ID</code>s, and the count of ids associated with each code. <ul> <li>This is an alternative to <code>values_cont</code> in the question.</li> </ul> </li> </ul> <pre class="lang-py prettyprint-override"><code>v_subset = ['COD_1', 'COD_2', 'COD_3', 'COD_4', 'COD_5', 'COD_6', 'COD_7'] df = (df_source.melt(id_vars='DEVICE_ID', value_vars=v_subset) .dropna(subset=['value']) .groupby('value') .agg(DEVICE_ID = ('DEVICE_ID', ','.join), count= ('value','size')) .reset_index()) print (df) value DEVICE_ID count 0 00 000049000110,000049000110,000049000110,0000490... 7 1 03 000049089310 1 2 08 000049000110,000049000110,000049000110 3 3 11 002222111188 1 4 12 002222111188 1 5 16 002222111188 1 6 19 002222111188 1 7 33 000049015792 1 8 35 002222111188,002222111188 2 9 40 000049768051,000049768051 2 10 43 002222111188 1 11 61 000049089310 1 # print DEVICE_ID for CODES == '03' print(df.DEVICE_ID[df.value == '03']) [out]: 1 000049089310 Name: DEVICE_ID, dtype: object </code></pre> <ul> <li>Given the question as related to <code>df_source</code>, to select specific parts of the dataframe, use <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#boolean-indexing" rel="nofollow noreferrer">Pandas: Boolean Indexing</a></li> </ul> <pre class="lang-py prettyprint-override"><code># to return all rows where COD_1 is '00' df_source[df_source.COD_1 == '00'] # to return only the DEVICE_ID column where COD_1 is '00' df_source['DEVICE_ID'][df_source.COD_1 == '00'] </code></pre>
python|pandas
1
353
72,079,647
How to count the values of multiple '0' and '1' columns and group by another binary column ('Male' and Female')?
<p>I'd like to group the binary information by 'Gender' and count the values of the following fields: 'Married', 'Citizen' and 'License'.</p> <p>The code below was my attempt, but it was unsuccessful.</p> <pre><code>dmo_df.groupby(['Gender'], as_index = True)['Married', 'Citizen','License'].apply(pd.Series.value_counts) </code></pre> <p><a href="https://i.stack.imgur.com/NsYh6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NsYh6.png" alt="enter image description here" /></a></p> <p>The resulting data frame/output should look as such:</p> <p><a href="https://i.stack.imgur.com/Iaimz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Iaimz.png" alt="enter image description here" /></a></p> <p>Sorry for the poor quality photos.</p>
<p>I think you're trying to get <code>sum</code> and not <code>value_counts</code>:</p> <pre><code>&gt;&gt;&gt; df.groupby('Gender')[[&quot;Married&quot;,&quot;Citizen&quot;,&quot;License&quot;]].sum() Married Citizen License Gender Female 3 3 0 Male 5 7 4 </code></pre> <p>If you want <code>value_count</code>, try:</p> <pre><code>&gt;&gt;&gt; df.groupby('Gender').agg({i:&quot;value_counts&quot; for i in [&quot;Married&quot;, &quot;Citizen&quot;, &quot;License&quot;]}).fillna(0) Married Citizen License Gender Female 0 0.0 0.0 3.0 1 3.0 3.0 0.0 Male 0 2.0 0.0 3.0 1 5.0 7.0 4.0 </code></pre>
python|python-3.x|pandas|python-2.7|pandas-groupby
1
354
55,541,644
Is there a function to split rows in the dataframe if one of the columns contains more than one keyword?
<p>My dataset contains the column "High-Level-Keyword(s)", which holds more than one keyword separated by '\n'. I want to group the data on the basis of these keywords.</p> <p>I tried using the unique() function, but it treats 'Multilangant Systems', 'Multilangant Systems\nMachine Learning' and 'Machine Learning' differently. </p> <p>I want the output to be like:</p> <p>Multilangant - 2</p> <p>Machine Learning - 2 </p> <p>but what I'm getting is</p> <p>Multilangant - 1 </p> <p>Machine Learning - 1</p> <p>Multilangant\nMachine Learning - 1</p> <p>Can you suggest some way to do the same?</p>
<p>You should <code>.split</code> on the separator, then count.</p> <pre><code>from collections import Counter from itertools import chain Counter(chain.from_iterable(df["High-Level-Keyword(s)"].str.split('\n'))) #Counter({'Machine Learning': 2, 'Multilangant': 2}) </code></pre> <p>Or make it a Series:</p> <pre><code>import pandas as pd pd.Series(Counter(chain.from_iterable(df["High-Level-Keyword(s)"].str.split('\n')))) #Multilangant 2 #Machine Learning 2 #dtype: int64 </code></pre>
python|pandas|dataframe|data-analysis
1
355
66,916,275
Which way is right in tf-idf? Fit all then transform train set and test set or fit train set then transform test set
<p>1. Fit on the train set, then transform the test set. <a href="https://scikit-learn.org/stable/auto_examples/text/plot_document_classification_20newsgroups.html#sphx-glr-auto-examples-text-plot-document-classification-20newsgroups-py" rel="nofollow noreferrer">scikit-learn provides this example</a></p> <pre><code>from sklearn.feature_extraction.text import TfidfVectorizer vectorizer = TfidfVectorizer(sublinear_tf=True, max_df=0.5, stop_words='english') X_train = vectorizer.fit_transform(data_train.data) X_test = vectorizer.transform(data_test.data) </code></pre> <p>2. Fit on all the data, then transform the train set and the test set, which I've seen in many cases</p> <pre><code>import numpy as np from sklearn.feature_extraction.text import TfidfVectorizer vectorizer = TfidfVectorizer(sublinear_tf=True, max_df=0.5, stop_words='english') X_all = np.append(train_x, test_x, axis=0) vectorizer.fit(X_all) X_train = vectorizer.transform(train_x) X_test = vectorizer.transform(test_x) </code></pre> <p>So, I'm confused about which way is right and why.</p>
<p>It really depends on your use case.</p> <p>In the first situation, your test set TF-IDF values are only based on the frequencies in the train set. This allows you to control the &quot;reference&quot; corpus and decorrelates your results to data in the testing set which makes sense when data in your test set is sampled from a data distribution that is very different from what you could expect in a normal situation. Note that this only works because scikit implements TF-IDF in a way that is robust to previously unseen words.</p> <p>In the second situation, when you use the test set for training, your frequencies are also going to be based on what is in your test set. This allows for more representative frequency values for data in your test set domain which can lead to performance improvements on your downstream task, and also ensures no new unseen words appear at test time.</p> <p>tl;dr both work</p>
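<p>A minimal sketch of that robustness point (the corpus and sentences here are made up): a vectorizer fitted only on the train set simply ignores tokens at transform time that it has never seen, so option 1 does not break on new test-set vocabulary:</p> <pre><code>from sklearn.feature_extraction.text import TfidfVectorizer

vec = TfidfVectorizer()
vec.fit(['the cat sat', 'the dog ran'])            # fit on train data only
X_test = vec.transform(['the zebra sat quietly'])  # 'zebra', 'quietly' unseen

print(vec.vocabulary_)   # vocabulary contains train-set terms only
print(X_test.toarray())  # unseen terms are silently dropped
</code></pre>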
python|numpy|scikit-learn|tf-idf|tfidfvectorizer
0
356
47,289,057
How to group and pivot(?) dataframe
<p>I have a dataframe looking like this:</p> <pre><code>ID Species Count 1 Pine 1000 1 Spruce 1000 2 Pine 2000 3 Pine 1000 3 Spruce 500 3 Birch 500 </code></pre> <p>What I want is this:</p> <pre><code> Pine Spruce Birch ID Count Count Count 1 1000 1000 2 2000 3 1000 500 500 </code></pre> <p>So I'm trying:</p> <pre><code>a = df.groupby(['ID']).cumcount().astype(str) newdf = df.set_index(['ID', a]).unstack(fill_value=0).sort_index(level=1, axis=1) </code></pre> <p>Which gives me:</p> <pre><code>ID Count Species Count Species Count Species 1 1000 Pine 1000 Spruce 2 2000 Pine 3 1000 Pine 500 Spruce 500 Spruce </code></pre> <p>How can I fix this?</p>
<p>Simple <code>pivot</code> </p> <pre><code>df.pivot('ID','Species','Count') Out[493]: Species Birch Pine Spruce ID 1 NaN 1000.0 1000.0 2 NaN 2000.0 NaN 3 500.0 1000.0 500.0 </code></pre>
python|pandas
2
357
47,117,498
Does `tf.data.Dataset.repeat()` buffer the entire dataset in memory?
<p> Looking at this code example from the TF documentation:</p> <pre class="lang-py prettyprint-override"><code>filenames = ["/var/data/file1.tfrecord", "/var/data/file2.tfrecord"] dataset = tf.data.TFRecordDataset(filenames) dataset = dataset.map(...) dataset = dataset.shuffle(buffer_size=10000) dataset = dataset.batch(32) dataset = dataset.repeat(num_epochs) iterator = dataset.make_one_shot_iterator() </code></pre> <p>Does the <code>dataset.repeat(num_epochs)</code> require that the entire dataset be loaded into memory? Or is it re-initializing the dataset(s) that came before it when it receives an end-of-dataset exception?</p> <p>The documentation is ambiguous about this point.</p>
<p>Based on this simple test it appears that <code>repeat</code> does <em>not</em> buffer the dataset; it must be re-initializing the upstream datasets.</p> <pre class="lang-py prettyprint-override"><code>n = tf.data.Dataset.range(5).shuffle(buffer_size=5).repeat(2).make_one_shot_iterator().get_next() [sess.run(n) for _ in range(10)] Out[83]: [2, 0, 3, 1, 4, 3, 1, 0, 2, 4] </code></pre> <p>Logic suggests that if <code>repeat</code> were buffering its input, the same random shuffle pattern would have been repeated in this simple experiment.</p>
tensorflow
0
358
68,391,775
Creating a df with index = years and columns = mean length of an event occurring in a year, from previous df columns
<p>I have the df_winter as viewed below, and would like to create a new df that displays the year, and the average length of storms occuring in a 1 year, so I can visualize the change in length of storms over time.</p> <p>I thought I could use groupby like this:</p> <pre><code>df_winter_length= df_winter.groupby(['Start_year', 'Disaster_Length']).mean() </code></pre> <p>However, I receive the error : DataError: No Numeric types to aggregate</p> <p>so I printed:</p> <p>print(df_winter.Disaster_Length.dtype) return: int64</p> <p>Which as an int64, I thought could be calculated. What am I missing? Thanks!</p> <p>Entire code:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt import pandas as pd import seaborn as sns df_time = pd.read_pickle('df_time.pkl') df_winter = df_time[(df_time['Disaster_Type'] == 'Winter') | (df_time['Disaster_Type'] == 'Snow') | (df_time['Disaster_Type'] == 'Ice')] df_winter.drop(columns=['Start_Date_A', 'End_Date_A'], axis=1, inplace=True) df_winter.drop_duplicates(keep='first') df_winter = df_winter.reset_index(drop=True, inplace=False) #Change in Average Length of Winter Weather Events from 1965 - 2017 df_winter_length= df_winter.groupby(['Start_year', 'Disaster_Length']).mean() </code></pre> <p>df_winter :</p> <pre><code> County Disaster_Type Disaster_Length Start_year 2400 Perry County Snow 7 1996 2401 Pike County Snow 7 1996 2402 Powell County Snow 7 1996 2403 Pulaski County Snow 7 1996 2404 Robertson County Snow 7 1996 2405 Rockcastle County Snow 7 1996 2406 Rowan County Snow 7 1996 2407 Russell County Snow 7 1996 2408 Scott County Snow 7 1996 2409 Shelby County Snow 7 1996 2410 Simpson County Snow 7 1996 2411 Spencer County Snow 7 1996 2412 Taylor County Snow 7 1996 2413 Todd County Snow 7 1996 2414 Trigg County Snow 7 1996 2415 Trimble County Snow 7 1996 2416 Union County Snow 7 1996 2417 Warren County Snow 7 1996 2418 Washington County Snow 7 1996 2419 Wayne County Snow 7 1996 2420 Webster County Snow 7 1996 2421 Whitley County Snow 7 1996 2422 Wolfe County Snow 7 1996 2423 Woodford County Snow 7 1996 2424 Barnstable County Snow 6 1996 2425 Berkshire County Snow 6 1996 2426 Bristol County(in PMSA 1120,1200,2480,5400,6060 Snow 6 1996 2427 Dukes County Snow 6 1996 2428 Essex County(in PMSA 1120,4160,7090 Snow 6 1996 2429 Franklin County Snow 6 1996 2430 Hampden County Snow 6 1996 2431 Hampshire County Snow 6 1996 2432 Middlesex County(in PMSA 1120,2600,4560 Snow 6 1996 2433 Nantucket County Snow 6 1996 2434 Norfolk County(in PMSA 1120,1200,6060 Snow 6 1996 2435 Plymouth County(in PMSA 1120,1200,5400 Snow 6 1996 2436 Suffolk County Snow 6 1996 2437 Worcester County in PMSA 1120,2600,9240 Snow 6 1996 2438 Bristol County Snow 6 1996 2439 Kent County Snow 6 1996 2440 Newport County(in PMSA 2480,6480 Snow 6 1996 2441 Providence County(in PMSA 6060,6480 Snow 6 1996 2442 Washington County(in PMSA 5520,6480 Snow 6 1996 2443 Fairfield County(in PMSA 1160,1930,5760,8040 Snow 6 1996 2444 Hartford County(in PMSA 1170,3280,5440 Snow 6 1996 2445 Litchfield County(in PMSA 1170,1930,3280,8880 Snow 6 1996 2446 Middlesex County(in PMSA 3280,5020,5480 Snow 6 1996 2447 New Haven County(in PMSA 1160,5480,8880 Snow 6 1996 2448 New London County(in PMSA 3280,5520 Snow 6 1996 2449 Tolland County Snow 6 1996 2450 Windham County Snow 6 1996 2451 Alexander County Snow 7 1996 2452 Burke County Snow 7 1996 2453 Caldwell County Snow 7 1996 2454 Caswell County Snow 7 1996 2455 Catawba County Snow 7 1996 2456 Cherokee County Snow 7 1996 2457 Cleveland County Snow 7 1996 2458 
Davidson County Snow 7 1996 2459 Davie County Snow 7 1996 2460 Forsyth County Snow 7 1996 2461 Gaston County Snow 7 1996 2462 Gates County Snow 7 1996 2463 Guilford County Snow 7 1996 2464 Halifax County Snow 7 1996 2465 Haywood County Snow 7 1996 2466 Henderson County Snow 7 1996 2467 Hertford County Snow 7 1996 2468 Iredell County Snow 7 1996 2469 Lincoln County Snow 7 1996 2470 McDowell County Snow 7 1996 2471 Madison County Snow 7 1996 2472 Montgomery County Snow 7 1996 2473 Northampton County Snow 7 1996 2474 Polk County Snow 7 1996 2475 Randolph County Snow 7 1996 2476 Rockingham County Snow 7 1996 2477 Rutherford County Snow 7 1996 2478 Stokes County Snow 7 1996 2479 Surry County Snow 7 1996 2480 Warren County Snow 7 1996 2481 Watauga County Snow 7 1996 2482 Wilkes County Snow 7 1996 2483 Yadkin County Snow 7 1996 2484 Yancey County Snow 7 1996 2485 Klickitat County Ice 15 1996 2486 Pend Oreille County Ice 15 1996 2487 Spokane County Ice 15 1996 2488 Cass County Snow 2 1997 2489 Clarke County Snow 2 1997 2490 Iowa County Snow 2 1997 2491 Jasper County Snow 2 1997 2492 Madison County Snow 2 1997 2493 Mahaska County Snow 2 1997 2494 Marion County Snow 2 1997 2495 Mills County Snow 2 1997 2496 Polk County Snow 2 1997 2497 Pottawattamie County Snow 2 1997 2498 Poweshiek County Snow 2 1997 2499 Union County Snow 2 1997 2500 Warren County Snow 2 1997 2501 Clinton County Snow 12 1998 2502 Essex County Snow 12 1998 2503 Franklin County Snow 12 1998 2504 Genesee County Snow 12 1998 2505 Jefferson County Snow 12 1998 2506 Lewis County Snow 12 1998 2507 Monroe County Snow 12 1998 2508 Niagara County Snow 12 1998 2509 St. Lawrence County Snow 12 1998 2510 Saratoga County Snow 12 1998 2511 Adams County Snow 14 1999 2512 Brown County Snow 14 1999 2513 Bureau County Snow 14 1999 2514 Calhoun County Snow 14 1999 2515 Cass County Snow 14 1999 2516 Champaign County Snow 14 1999 2517 Christian County Snow 14 1999 2518 Cook County Snow 14 1999 2519 De Witt County Snow 14 1999 2520 Douglas County Snow 14 1999 2521 DuPage County Snow 14 1999 2522 Ford County Snow 14 1999 2523 Fulton County Snow 14 1999 2524 Greene County Snow 14 1999 2525 Grundy County Snow 14 1999 2526 Hancock County Snow 14 1999 2527 Henderson County Snow 14 1999 2528 Henry County Snow 14 1999 2529 Iroquois County Snow 14 1999 2530 Kane County Snow 14 1999 2531 Kankakee County Snow 14 1999 2532 Kendall County Snow 14 1999 2533 Knox County Snow 14 1999 2534 Lake County Snow 14 1999 2535 La Salle County Snow 14 1999 2536 Livingston County Snow 14 1999 2537 Logan County Snow 14 1999 2538 McDonough County Snow 14 1999 2539 McHenry County Snow 14 1999 2540 McLean County Snow 14 1999 2541 Macon County Snow 14 1999 2542 Marshall County Snow 14 1999 2543 Mason County Snow 14 1999 2544 Menard County Snow 14 1999 2545 Mercer County Snow 14 1999 2546 Morgan County Snow 14 1999 2547 Moultrie County Snow 14 1999 2548 Peoria County Snow 14 1999 2549 Piatt County Snow 14 1999 2550 Pike County Snow 14 1999 2551 Putnam County Snow 14 1999 2552 Sangamon County Snow 14 1999 2553 Schuyler County Snow 14 1999 2554 Scott County Snow 14 1999 2555 Shelby County Snow 14 1999 2556 Stark County Snow 14 1999 2557 Tazewell County Snow 14 1999 2558 Vermilion County Snow 14 1999 2559 Warren County Snow 14 1999 2560 Will County Snow 14 1999 2561 Winnebago County Snow 14 1999 2562 Woodford County Snow 14 1999 2563 Cattaraugus County Snow 14 1999 2564 Chautauqua County Snow 14 1999 2565 Erie County Snow 14 1999 2566 Genesee County Snow 14 1999 2567 Jefferson County 
Snow 14 1999 2568 Lewis County Snow 14 1999 2569 Niagara County Snow 14 1999 2570 Orleans County Snow 14 1999 2571 St. Lawrence County Snow 14 1999 2572 Wyoming County Snow 14 1999 2573 Adams County Snow 14 1999 2574 Allen County Snow 14 1999 2575 Benton County Snow 14 1999 2576 Blackford County Snow 14 1999 2577 Boone County Snow 14 1999 2578 Carroll County Snow 14 1999 2579 Cass County Snow 14 1999 2580 Clay County Snow 14 1999 2581 Clinton County Snow 14 1999 2582 DeKalb County Snow 14 1999 2583 Delaware County Snow 14 1999 2584 Elkhart County Snow 14 1999 2585 Fayette County Snow 14 1999 2586 Fountain County Snow 14 1999 2587 Fulton County Snow 14 1999 2588 Grant County Snow 14 1999 2589 Hamilton County Snow 14 1999 2590 Hancock County Snow 14 1999 2591 Hendricks County Snow 14 1999 2592 Henry County Snow 14 1999 2593 Howard County Snow 14 1999 2594 Huntington County Snow 14 1999 2595 Jasper County Snow 14 1999 2596 Jay County Snow 14 1999 2597 Johnson County Snow 14 1999 2598 Kosciusko County Snow 14 1999 2599 LaGrange County Snow 14 1999 </code></pre>
<p>There may be an easier route, but I ended up dropping the unused columns from df_winter (i.e. County and Disaster_Type) and then using .groupby and .mean like so:</p> <pre><code>df_winter_length = df_winter.drop(columns=['County','Disaster_Type']) df_winter_length = df_winter_length.groupby(['Start_year']).mean() </code></pre>
python|pandas|dataframe
0
359
68,197,672
"Invalid argument: indices[0,0,0,0] = 30 is not in [0, 30)"
<p><strong>Error:</strong></p> <pre><code>InvalidArgumentError: indices[0,0,0,0] = 30 is not in [0, 30) [[{{node GatherV2}}]] [Op:IteratorGetNext] </code></pre> <p><strong>History:</strong></p> <p>I have a custom data loader for a <code>tf.keras</code> based U-Net for semantic segmentation, based on <a href="https://yann-leguilly.gitlab.io/post/2019-12-14-tensorflow-tfdata-segmentation/" rel="nofollow noreferrer">this example</a>. It is written as follows:</p> <pre><code>def parse_image(img_path: str) -&gt; dict: # read image image = tf.io.read_file(img_path) #image = tfio.experimental.image.decode_tiff(image) if xf == &quot;png&quot;: image = tf.image.decode_png(image, channels = 3) else: image = tf.image.decode_jpeg(image, channels = 3) image = tf.image.convert_image_dtype(image, tf.uint8) #image = image[:, :, :-1] # read mask mask_path = tf.strings.regex_replace(img_path, &quot;X&quot;, &quot;y&quot;) mask_path = tf.strings.regex_replace(mask_path, &quot;X.&quot; + xf, &quot;y.&quot; + yf) mask = tf.io.read_file(mask_path) #mask = tfio.experimental.image.decode_tiff(mask) mask = tf.image.decode_png(mask, channels = 1) #mask = mask[:, :, :-1] mask = tf.where(mask == 255, np.dtype(&quot;uint8&quot;).type(NoDataValue), mask) return {&quot;image&quot;: image, &quot;segmentation_mask&quot;: mask} train_dataset = tf.data.Dataset.list_files( dir_tls(myear = year, dset = &quot;X&quot;) + &quot;/*.&quot; + xf, seed = zeed) train_dataset = train_dataset.map(parse_image) val_dataset = tf.data.Dataset.list_files( dir_tls(myear = year, dset = &quot;X_val&quot;) + &quot;/*.&quot; + xf, seed = zeed) val_dataset = val_dataset.map(parse_image) ## data transformations-------------------------------------------------------- @tf.function def normalise(input_image: tf.Tensor, input_mask: tf.Tensor) -&gt; tuple: input_image = tf.cast(input_image, tf.float32) / 255.0 return input_image, input_mask @tf.function def load_image_train(datapoint: dict) -&gt; tuple: input_image = tf.image.resize(datapoint[&quot;image&quot;], (imgr, imgc)) input_mask = tf.image.resize(datapoint[&quot;segmentation_mask&quot;], (imgr, imgc)) if tf.random.uniform(()) &gt; 0.5: input_image = tf.image.flip_left_right(input_image) input_mask = tf.image.flip_left_right(input_mask) input_image, input_mask = normalise(input_image, input_mask) return input_image, input_mask @tf.function def load_image_test(datapoint: dict) -&gt; tuple: input_image = tf.image.resize(datapoint[&quot;image&quot;], (imgr, imgc)) input_mask = tf.image.resize(datapoint[&quot;segmentation_mask&quot;], (imgr, imgc)) input_image, input_mask = normalise(input_image, input_mask) return input_image, input_mask ## create datasets------------------------------------------------------------- buff_size = 1000 dataset = {&quot;train&quot;: train_dataset, &quot;val&quot;: val_dataset} # -- Train Dataset --# dataset[&quot;train&quot;] = dataset[&quot;train&quot;]\ .map(load_image_train, num_parallel_calls = tf.data.experimental.AUTOTUNE) dataset[&quot;train&quot;] = dataset[&quot;train&quot;].shuffle(buffer_size = buff_size, seed = zeed) dataset[&quot;train&quot;] = dataset[&quot;train&quot;].repeat() dataset[&quot;train&quot;] = dataset[&quot;train&quot;].batch(bs) dataset[&quot;train&quot;] = dataset[&quot;train&quot;].prefetch(buffer_size = AUTOTUNE) #-- Validation Dataset --# dataset[&quot;val&quot;] = dataset[&quot;val&quot;].map(load_image_test) dataset[&quot;val&quot;] = dataset[&quot;val&quot;].repeat() dataset[&quot;val&quot;] = dataset[&quot;val&quot;].batch(bs) 
dataset[&quot;val&quot;] = dataset[&quot;val&quot;].prefetch(buffer_size = AUTOTUNE) print(dataset[&quot;train&quot;]) print(dataset[&quot;val&quot;]) </code></pre> <p>Now I wanted to use a <strong>weighted version</strong> of <code>tf.keras.losses.SparseCategoricalCrossentropy</code> for my model and I found <a href="https://www.tensorflow.org/tutorials/images/segmentation" rel="nofollow noreferrer">this tutorial</a>, which is rather similar to the example above. However, they also offered a weighted version of the loss, using:</p> <pre><code>def add_sample_weights(image, label): # The weights for each class, with the constraint that: # sum(class_weights) == 1.0 class_weights = tf.constant([2.0, 2.0, 1.0]) class_weights = class_weights/tf.reduce_sum(class_weights) # Create an image of `sample_weights` by using the label at each pixel as an # index into the `class weights` . sample_weights = tf.gather(class_weights, indices=tf.cast(label, tf.int32)) return image, label, sample_weights </code></pre> <p>and</p> <pre><code>weighted_model.fit( train_dataset.map(add_sample_weights), epochs=1, steps_per_epoch=10) </code></pre> <p><strong>I combined those approaches</strong> since the latter tutorial uses previously loaded data, while I want to draw the images from disc (not enough RAM to load all at once).</p> <p>Resulting in the code from the first example (long code block above) followed by</p> <pre><code>def add_sample_weights(image, segmentation_mask): class_weights = tf.constant(inv_weights, dtype = tf.float32) class_weights = class_weights/tf.reduce_sum(class_weights) sample_weights = tf.gather(class_weights, indices = tf.cast(segmentation_mask, tf.int32)) return image, segmentation_mask, sample_weights </code></pre> <p>(<code>inv_weights</code> are my weights, an array of 30 float64 values) and</p> <pre><code> model.fit(dataset[&quot;train&quot;].map(add_sample_weights), epochs = 45, steps_per_epoch = np.ceil(N_img/bs), validation_data = dataset[&quot;val&quot;], validation_steps = np.ceil(N_val/bs), callbacks = cllbs) </code></pre> <p>When I run <code>dataset[&quot;train&quot;].map(add_sample_weights).element_spec</code> as in the second example, I get an output that looks reasonable to me (similar to the one in the example):</p> <pre><code>Out[58]: (TensorSpec(shape=(None, 512, 512, 3), dtype=tf.float32, name=None), TensorSpec(shape=(None, 512, 512, 1), dtype=tf.float32, name=None), TensorSpec(shape=(None, 512, 512, 1), dtype=tf.float32, name=None)) </code></pre> <p>However, when I try to fit the model or run something like</p> <pre><code>a, b, c = dataset[&quot;train&quot;].map(add_sample_weights).take(1) </code></pre> <p>I will receive the error mentioned above.</p> <p>So far, I have found quite some questions regarding this error (e.g., <a href="https://stackoverflow.com/q/60545512/11611246">a</a>, <a href="https://stackoverflow.com/q/68062029/11611246">b</a>, <a href="https://stackoverflow.com/q/60480806/11611246">c</a>, <a href="https://github.com/tensorflow/tensorflow/issues/23698" rel="nofollow noreferrer">d</a>), however, they all talk of &quot;embedding layers&quot; and things I am not aware of using.</p> <p>Where does this error come from and how can I solve it?</p>
<p>Picture <code>tf.gather</code> as a fancy way to do indexing. The error you get is akin to the following example in Python:</p> <pre><code>&gt;&gt;&gt; my_list = [1,2,3] &gt;&gt;&gt; my_list[3] IndexError: list index out of range </code></pre> <p>If you want to use <code>tf.gather</code>, then the range of values of your <code>indices</code> should not be bigger than the dimension size of the Tensor you are trying to index.</p> <p>In your case, in the call <code>tf.gather(class_weights,indices = tf.cast(segmentation_mask, tf.int32))</code>, with <code>class_weights</code> being a Tensor of dimension <code>(30,)</code>, the range of values of <code>segmentation_mask</code> should be between 0 and 29. As far as I can tell from your data pipeline, <code>segmentation_mask</code> has a range of values between 0 and 255. The fix will be problem dependent.</p>
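<p>One possible direction, sketched under the assumption that any label outside the valid 0-29 range should simply be mapped to class 0 (that choice is an assumption, not something your data dictates):</p> <pre><code>def add_sample_weights(image, segmentation_mask):
    class_weights = tf.constant(inv_weights, dtype=tf.float32)
    class_weights = class_weights / tf.reduce_sum(class_weights)

    # force every index into the valid range before gathering
    labels = tf.cast(segmentation_mask, tf.int32)
    valid = (labels &gt;= 0) &amp; (labels &lt; tf.size(class_weights))
    labels = tf.where(valid, labels, tf.zeros_like(labels))

    sample_weights = tf.gather(class_weights, indices=labels)
    return image, segmentation_mask, sample_weights
</code></pre>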
python|tensorflow|keras|tf.keras
1
360
68,364,213
pandas multi index sort with several conditions
<p>I have a dataframe like below,</p> <pre><code> MATERIALNAME CURINGMACHINE HEADERCOUNTER 0 1015 PPU03R 1529 1 3005 PPY12L 305 2 3005 PPY12R 359 3 3005 PPY12R 404 4 K843 PPZB06L 435 5 K928 PPZ03L 1850 </code></pre> <p>I created a pivot table from this df,</p> <pre><code>pivot = pd.pivot_table(df, index = ['MATERIALNAME', 'CURINGMACHINE'], values = ['HEADERCOUNTER'], aggfunc = 'count', fill_value = 0) pivot (output) HEADERCOUNTER MATERIALNAME CURINGMACHINE 1015 PPU03R 1 3005 PPY12L 1 PPY12R 2 K843 PPZB06L 1 K928 PPZ03L 1 </code></pre> <p>I add subtotals of each material name with the help of this post 'pandas.concat' <a href="https://stackoverflow.com/questions/41383302/pivot-table-subtotals-in-pandas">Pivot table subtotals in Pandas</a></p> <pre><code>pivot = pd.concat([ d.append(d.sum().rename((k, 'Total'))) for k, d in pivot.groupby(level=0) ]).append(pivot.sum().rename(('Grand', 'Total'))) </code></pre> <p>My final df is,</p> <pre><code> HEADERCOUNTER MATERIALNAME CURINGMACHINE 1015 PPU03R 1 Total 1 3005 PPY12L 1 PPY12R 2 Total 3 K843 PPZB06L 1 Total 1 K928 PPZ03L 1 Total 1 Grand Total 6 </code></pre> <p>I want to sort according to 'HEADERCOUNTER' column. I' m using this code,</p> <pre><code>sorted_df = pivot.sort_values(by =['HEADERCOUNTER'], ascending = False) </code></pre> <p>When I sort it, 'MATERIALNAME' column is effecting like below, 'MATERIALNAME' is broken as you can see from 3005 code.</p> <pre><code> HEADERCOUNTER MATERIALNAME CURINGMACHINE Grand Total 6 3005 Total 3 PPY12R 2 1015 PPU03R 1 Total 1 3005 PPY12L 1 K843 PPZB06L 1 Total 1 K928 PPZ03L 1 Total 1 </code></pre> <p>When I sort it, I want to see in that order;</p> <pre><code> HEADERCOUNTER MATERIALNAME CURINGMACHINE Grand Total 6 3005 Total 3 PPY12R 2 PPY12L 1 1015 PPU03R 1 Total 1 K843 PPZB06L 1 Total 1 K928 PPZ03L 1 Total 1 </code></pre> <p>If you have any suggestions to change process, I can try it also.</p> <p><strong>Edit:</strong> I tried <strong>BENY</strong>'s way, but it doesn' t work when data increases.</p> <p>You can see the not ok result below;</p> <p><a href="https://i.stack.imgur.com/PRVMU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PRVMU.png" alt="enter image description here" /></a></p>
<p>Fix it by adding <code>argsort</code></p> <pre><code>pivot = pivot.sort_values('HEADERCOUNTER',ascending=False) out = pivot.iloc[(-pivot.groupby(level=0)['HEADERCOUNTER'].transform('max')).argsort()] Out[136]: HEADERCOUNTER MATERIALNAME CURINGMACHINE Grand Total 6 3005 Total 3 PPY12R 2 PPY12L 1 1015 PPU03R 1 Total 1 K843 PPZB06L 1 Total 1 K928 PPZ03L 1 Total 1 </code></pre>
python|pandas
2
361
56,996,633
Saving numpy image datasets without an increase in size, in a way that is easy to save and load
<p>I have saved my train/test/val arrays into a pickle file, but while the images are 1.5 GB, the pickle file is 16 GB, i.e. the size increased. Is there any other way to save those numpy image arrays without an increase in size?</p>
<p>Use the <code>numpy.save</code> function (<a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.save.html" rel="nofollow noreferrer">documentation</a>) or the <code>numpy.savez_compressed</code> function (<a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.savez_compressed.html" rel="nofollow noreferrer">documentation</a>). </p> <p>Read the documentation before asking a question. </p> <p>Sample code:</p> <pre><code>import numpy as np image = np.random.randint(0, 200, (199818,50,50,3), dtype=np.uint8) image2 = np.random.randint(0, 200, (1998,50,50,3), dtype=np.uint8) np.savez_compressed("test.npz", image, img=image2) img_dkt = np.load("test.npz") print("First_array:", img_dkt["arr_0"].shape, "equality", np.all(image == img_dkt["arr_0"])) print("second_array:", img_dkt["img"].shape, "equality", np.all(image2 == img_dkt["img"])) </code></pre>
python|numpy
0
362
57,017,398
TypeError: 'Tensor' object cannot be interpreted as an integer
<p>I want to make a weight matrix with a dynamic shape.</p> <p>The matrix has 3 dimensions, [x, y, z].</p> <p>So I defined the following functions.</p> <pre class="lang-py prettyprint-override"><code>x = tf.reduce_max(some_tensor_x_length) y = tf.reduce_max(some_tensor_y_length) z = tf.reduce_max(some_tensor_z_length) w = self._get_weight(x,y,z) </code></pre> <pre class="lang-py prettyprint-override"><code>def _get_weight(self, x, y, z): W = np.zeros(x, y, z) for x in range(W.shape[0]): for y in range(W.shape[1]): for z in range(W.shape[2]): W[x,y,z] = some_eq_output_number return W </code></pre> <p>But I got the error below.</p> <pre class="lang-none prettyprint-override"><code>TypeError: 'Tensor' object cannot be interpreted as an integer </code></pre> <p>I guess the error is caused by the length tensors not being integers.</p>
<p>The calls:</p> <pre><code>x = tf.reduce_max(some_tensor_x_length) y = tf.reduce_max(some_tensor_y_length) z = tf.reduce_max(some_tensor_z_length) </code></pre> <p>return scalar tensors, not Python integers. As such, when you call:</p> <pre><code>W = np.zeros(x, y, z) </code></pre> <p>you're passing tensors as arguments, while numpy expects integers. If you're using TensorFlow v1, you can get the value of a tensor with a session by running:</p> <pre><code>session.run(x) </code></pre> <p>Also, you're using the names x, y and z twice in your code: once for the tensors, and again in the <em>for loops</em>. I suggest changing the loops to:</p> <pre><code>for i in range(W.shape[0]): for j in range(W.shape[1]): for k in range(W.shape[2]): W[i,j,k] = some_eq_output_number </code></pre>
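<p>A minimal TensorFlow v1 sketch of that idea (variable names follow your snippet; note that <code>np.zeros</code> also expects the shape as a single tuple rather than three separate arguments):</p> <pre><code>with tf.Session() as sess:
    x_val, y_val, z_val = sess.run([x, y, z])  # plain integer values

W = np.zeros((x_val, y_val, z_val))  # shape passed as one tuple
</code></pre>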
python|tensorflow
0
363
57,032,082
Pandas groupby: combine distinct values into another column
<p>I need to group by a subset of columns and count the number of distinct combinations of their values. However, there are other columns that may or may not have distinct values, and I want to somehow retain this information in my output. Here is an example: </p> <pre><code>gb1 gb2 text1 text2 bebop skeletor blue fisher bebop skeletor blue wright rocksteady beast_man orange haldane rocksteady beast_man orange haldane tokka kobra_khan green lande tokka kobra_khan red arnold </code></pre> <p>I <strong>only</strong> want to group by <code>gb1</code> and <code>gb2</code>. </p> <p>Here is what I need:</p> <pre><code>gb1 gb2 count text1 text2 bebop skeletor 2 blue fisher, wright rocksteady beast_man 2 orange haldane tokka kobra_khan 2 green, red lande, arnold </code></pre> <p>I've got everything working except for handling the <code>text1</code> and <code>text2</code> columns.</p> <p>Thanks in advance.</p>
<p>You can check with </p> <pre><code>s=df.assign(count=1).groupby(['gb1','gb2']).agg({'count':'sum','text1':lambda x : ','.join(set(x)),'text2':lambda x : ','.join(set(x))}).reset_index() s gb1 gb2 count text1 text2 0 bebop skeletor 2 blue wright,fisher 1 rocksteady beast_man 2 orange haldane 2 tokka kobra_khan 2 green,red lande,arnold </code></pre>
python|pandas|pandas-groupby
4
364
57,294,262
Empty results from concurrent psycopg2 postgres select queries
<p>I am attempting to retrieve my label and feature datasets from a postgres database using the <strong>getitem</strong> method from a custom pytorch dataset. When I attempt to sample with random indexes my queries return no results</p> <p>I have checked to see if my queries work directly on the psql cli. They do. I have checked my database connection pool for issues. Does not seem to be any. I have reverted back to sequential sampling and it is still fully functional so it is the random index values that seem to be an issue for query.</p> <p>The <strong>getitem</strong> method which performs the queries is place below. This shows both the sequential and attempt to shuffle queries. Both of these are clearly labeled via variable name. </p> <pre><code>def __getitem__(self, idx): query = """SELECT ls.taxonomic_id, it.tensor FROM genomics.tensors2 AS it INNER JOIN genomics.labeled_sequences AS ls ON ls.accession_number = it.accession_number WHERE (%s) &lt;= it.index AND CARDINALITY(tensor) = 89 LIMIT (%s) OFFSET (%s)""" shuffle_query = """BEGIN SELECT ls.taxonomic_id, it.tensor FROM genomics.tensors2 AS it INNER JOIN genomics.labeled_sequences AS ls ON ls.accession_number = it.accession_number WHERE it.index BETWEEN (%s) AND (%s) END""" batch_size = 500 upper_bound = idx + batch_size query_data = (idx, batch_size, batch_size) shuffle_query_data = (idx, upper_bound) result = None results = None conn = self.conn_pool.getconn() try: conn.set_session(readonly=True, autocommit=True) cursor = conn.cursor() cursor.execute(query, query_data) results = cursor.fetchall() self.conn_pool.putconn(conn) print(idx) print(results) except Error as conn_pool_error: print('Multithreaded __getitem__ query error') print(conn_pool_error) label_list = [] sequence_list = [] for (i,result) in enumerate(results): if result is not None: (label, sequence) = self.create_batch_stack_element(result) label_list.append(label) sequence_list.append(sequence) label_stack = torch.stack(label_list).to('cuda') sequence_stack = torch.stack(sequence_list).to('cuda') return (label_stack, sequence_stack) def create_batch_stack_element(self, result): if result is not None: label = np.array(result[0], dtype=np.int64) sequence = np.array(result[1], dtype=np.int64) label = torch.from_numpy(label) sequence = torch.from_numpy(sequence) return (label, sequence) else: return None </code></pre> <p>The error I receive comes from my attempt to stack my list of tensors after the for loop. This fails because the lists are empty. Since the lists are filled in the loop based off the results of the query. It points to the query being the issue. </p> <p>I would like some help with my source code to solve this issue and possibly an explanation as to why my concurrent queries with random indexes are failing. </p> <p>Thanks. Any help is appreciated.</p> <p>E: I believe I have found the source of the issue and it comes from the pytorch RandomSampler source code. I believe it is providing indexed out of the range of my database keys. This explains why I have no results from the queries. I will have to write my own sampler class to limit this value to the length of my dataset. What an oversight on my part. </p> <p>E2: The random sampling now works with a customized sampler class but prevents mutlithreaded querying. </p> <p>E3: I now have the entire problem solved. Using multiple processes to load data to the GPU with a custom random sampler. Will post applicable code when I get a chance and accept it as an answer to close out the thread. </p>
<p>This is a properly constructed getitem for pytorch from a postgres table with indexable keys. </p> <pre><code>def __getitem__(self, idx: int) -&gt; tuple: query = """SELECT ls.taxonomic_id, it.tensor FROM genomics.tensors2 AS it INNER JOIN genomics.labeled_sequences AS ls ON ls.accession_number = it.accession_number WHERE (%s) = it.index""" query_data = (idx,) result = None conn = self.conn_pool.getconn() try: conn.set_session(readonly=True, autocommit=True) cursor = conn.cursor() cursor.execute(query, query_data) result = cursor.fetchone() self.conn_pool.putconn(conn) except Error as conn_pool_error: print('Multithreaded __getitem__ query error') print(conn_pool_error) return result def collate(self, results: list) -&gt; tuple: label_list = [] sequence_list = [] for result in results: if result is not None: print(result) result = self.create_batch_stack_element(result) if result is not None: label_list.append(result[0]) sequence_list.append(result[1]) label_stack = torch.stack(label_list) sequence_stack = torch.stack(sequence_list) return (label_stack, sequence_stack) def create_batch_stack_element(self, result: tuple) -&gt; tuple: if result is not None: label = np.array(result[0], dtype=np.int64) sequence = np.array(result[1], dtype=np.int64) label = torch.from_numpy(label) sequence = torch.from_numpy(sequence) return (label, sequence) return None </code></pre> <p>Then I called my training function with:</p> <pre><code>for rank in range(num_processes): p = mp.Process(target=train, args=(dataloader,)) p.start() processes.append(p) for p in processes: p.join() </code></pre>
python|postgresql|concurrency|pytorch|psycopg2
0
365
57,004,603
Interpolate values in one column of a dataframe (python)
<p>I have a dataframe with three columns (timestamp, temperature and waterlevel). What I want to do is to replace all NaN values in the waterlevel column with interpolated values. For example: </p> <p><a href="https://i.stack.imgur.com/MuaSH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MuaSH.png" alt="enter image description here"></a></p> <p>The waterlevel value is always decreasing till it is 0. Therefore, the waterlevel cannot be negative. Also, if the waterlevel is staying the same, the interpolated values should also be the same. Ideally, the stepsize between the interpolated values (within two available waterlevel values) should be the same.</p> <p>What I have tried so far was:</p> <pre><code>df['waterlevel'].interpolate(method ='linear', limit_direction ='backward') # backwards because the waterlevel value is always decreasing. </code></pre> <p>This does not work. After executing this line, every NaN value has turned to a 0 with the parameter 'forward' and stays NaN with the parameter 'backward'.</p> <p>and </p> <pre><code>df = df['waterlevel'].assign(InterpolateLinear=df.target.interpolate(method='linear')) </code></pre> <p>Any suggestions on how to solve this?</p>
<p>I assume NaN is <code>np.nan</code> Object </p> <pre><code>import pandas as pd import numpy as np df = pd.DataFrame({"waterlevel": ['A',np.nan,np.nan,'D'],"interpolated values":['Ai','Bi','Ci','D']}) print(df) df.loc[df['waterlevel'].isnull(),'waterlevel'] = df['interpolated values'] print(df) </code></pre> <p>O/P:</p> <pre><code> waterlevel interpolated values 0 A Ai 1 NaN Bi 2 NaN Ci 3 D D waterlevel interpolated values 0 A Ai 1 Bi Bi 2 Ci Ci 3 D D </code></pre>
python|pandas|numpy|interpolation|linear-interpolation
0
366
46,117,577
How to merge DataFrame in for loop?
<p>I am trying to merge the multiindexed dataframes in a for loop into a single dataframe on the index.</p> <p>I have reproducible code at <a href="https://gist.github.com/RJUNS/f4ad32d9b6da8cf4bedde0046a26f368#file-prices-py" rel="nofollow noreferrer">https://gist.github.com/RJUNS/f4ad32d9b6da8cf4bedde0046a26f368#file-prices-py</a>. I wanted to post the code here, but I got an error 'your post has lots of code', therefore I posted it on gist.</p> <p>But it produces this:</p> <blockquote> <pre><code> CLOSE HIGH LOW OPEN VOLUME 2017-09-08 09:30:00 VEDL 330.2 330.40 328.3 329.10 1873261 2017-09-08 09:45:00 VEDL 333.1 333.15 329.5 330.15 1643970 2017-09-08 10:00:00 VEDL 332.4 333.20 331.4 333.10 767922 CLOSE HIGH LOW OPEN VOLUME 2017-09-08 09:30:00 INFY 892.65 898.6 892.6 898.05 163020 2017-09-08 09:45:00 INFY 892.45 893.6 891.4 892.80 152179 2017-09-08 10:00:00 INFY 891.55 892.5 891.1 892.40 104931 </code></pre> </blockquote> <p>I am expecting the following output:</p> <blockquote> <pre><code> CLOSE HIGH LOW OPEN VOLUME 2017-09-08 09:30:00 VEDL 330.2 330.40 328.3 329.10 1873261 INFY 892.65 898.6 892.6 898.05 163020 2017-09-08 09:45:00 VEDL 333.1 333.15 329.5 330.15 1643970 INFY 892.45 893.6 891.4 892.80 152179 2017-09-08 10:00:00 VEDL 332.4 333.20 331.4 333.10 767922 INFY 891.55 892.5 891.1 892.40 104931 </code></pre> </blockquote> <p>I tried using the <code>.join</code> method, but I couldn't make it work. Does anyone have a solution?</p>
<p>I think you need append <code>df</code> to <code>list of DataFrames</code> and then use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html" rel="nofollow noreferrer"><code>concat</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_index.html" rel="nofollow noreferrer"><code>sort_index</code></a>:</p> <pre><code>dfs =[] for security in stocks: dfs.append(get_google_data(security,900, 1)) df = pd.concat(dfs).sort_index() print(df) CLOSE HIGH LOW OPEN VOLUME 2017-09-08 06:00:00 INFY 892.65 898.60 892.60 898.05 163020 VEDL 330.20 330.40 328.30 329.10 1873261 2017-09-08 06:15:00 INFY 892.45 893.60 891.40 892.80 152179 VEDL 333.10 333.15 329.50 330.15 1643970 2017-09-08 06:30:00 INFY 891.55 892.50 891.10 892.40 104931 VEDL 332.40 333.20 331.40 333.10 767922 2017-09-08 06:45:00 INFY 891.10 891.55 889.55 891.55 282589 VEDL 332.10 332.80 331.30 332.40 384417 2017-09-08 07:00:00 INFY 890.90 891.60 890.25 891.10 119252 VEDL 332.15 332.70 331.65 332.05 345358 </code></pre> <p><code>List comprehension</code> version for create <code>list of DataFrames</code>:</p> <pre><code>df = pd.concat([get_google_data(x,900, 1) for x in stocks]).sort_index() print(df) CLOSE HIGH LOW OPEN VOLUME 2017-09-08 06:00:00 INFY 892.65 898.60 892.60 898.05 163020 VEDL 330.20 330.40 328.30 329.10 1873261 2017-09-08 06:15:00 INFY 892.45 893.60 891.40 892.80 152179 VEDL 333.10 333.15 329.50 330.15 1643970 2017-09-08 06:30:00 INFY 891.55 892.50 891.10 892.40 104931 VEDL 332.40 333.20 331.40 333.10 767922 2017-09-08 06:45:00 INFY 891.10 891.55 889.55 891.55 282589 VEDL 332.10 332.80 331.30 332.40 384417 2017-09-08 07:00:00 INFY 890.90 891.60 890.25 891.10 119252 VEDL 332.15 332.70 331.65 332.05 345358 </code></pre>
python|database|pandas|numpy|dataframe
5
367
46,035,718
Add a number to an element in a rank-1 tensor if a condition is met in TensorFlow
<p>I have a rank-1 tensor, which may look like this: <code>[-1,2,3,-2,5]</code>. Now I want to add a constant to the absolute value of an element if the element is negative. If the element is positive, nothing shall happen.</p> <p>I know how to do this with a scalar like:</p> <pre><code>res = tf.cond(tensor &lt; 0,\ lambda: tf.add(tf.constant(m.pi),\ tf.abs(tensor)),lambda: tf.constant(tensor) </code></pre> <p>Furthermore, I know how to iterate over a tensor with <code>tf.scan</code>, like here in the Fibonacci example:</p> <pre><code>elems = np.array([1, 0, 0, 0, 0, 0]) initializer = (np.array(0), np.array(1)) fibonaccis = scan(lambda a, _: (a[1], a[0] + a[1]), elems, initializer) </code></pre> <p>But how can I combine <code>tf.cond</code> with <code>tf.scan</code>?</p>
<p>you can just use <a href="https://www.tensorflow.org/api_docs/python/tf/where" rel="nofollow noreferrer"><code>tf.where</code></a></p> <pre><code>a = tf.Variable([-1,2,3,-2,5]) b = tf.where(tf.less(a, 0), tf.abs(a)+tf.constant(m.pi), a) </code></pre>
python|tensorflow
1
368
50,914,335
How to plot in Wireframe with CSV file - Numpy / Matplotlib
<p>I would like to plot in 3D with Pandas / MatplotLib / Numpy as a Wireframe</p> <p>I'm using RFID sensors and I'm trying to record the signal I receive at different distance + different angles. And I want to see the correlation between the rising of the distance and the angle.</p> <p>I've already a full CSV file which looks like this :</p> <pre><code>Distance;0 ;23 ;45 ;90 0 ;-33.24 ;-36.72;-39.335;-35.21 5 ;-31.73 ;-35.26;-41.56 ;-27.41 15 ;-31.175;-36.91;-40.74 ;-44.615 25 ;-35.305;-51.13;-45.515;-50.485 40 ;-35.205;-49.27;-55.565;-53.64 60 ;-41.8 ;-62.19;-58.14 ;-54.685 80 ;-47.79 ;-64.24;-58.285;-56.08 100 ;-48.43 ;-63.37;-64.595;-60.0 120 ;-49.07 ;-66.07;-63.475;-76.0 140 ;-50.405;-61.43;-62.635;-76.5 160 ;-52.805;-69.25;-71.0 ;-77.0 180 ;-59.697;-66.45;-70.1 ;nan 200 ;-56.515;-68.60;-73.4 ;nan </code></pre> <p>So that's why I want to plot in 3D :</p> <ul> <li>X Axis : Angle</li> <li>Y Axis : Distance</li> <li>Z Axis : signal (for each couple angle/distance)</li> </ul> <p>On the first row we have the name of the index : <code>Distance</code>and the different angles : 0°, 23°, 45°, 90°</p> <p>And on the first column we have the different distances which represent the Y axis.</p> <p>And the matrix inside represents the signal, so, values of Z Axis...</p> <p>I loaded my rawdata with Numpy :</p> <pre><code>raw_data = np.loadtxt('data/finalData.csv', delimiter=';', dtype=np.string_) </code></pre> <p>Then I used matplotlib to generate my wireframe :</p> <pre><code>angle = raw_data[0 , 1:].astype(float) distance = raw_data[1:, 0 ].astype(float) data = ???? fig = plt.figure() ax = fig.add_subplot(111, projection='3d') Z = data X, Y = np.meshgrid(angle, distance) ax.plot_wireframe(X, Y, Z) ax.set_xticks(angle) ax.set_yticks(distance[::2]) ax.set_xlabel('angle') ax.set_ylabel('distance') plt.title('RSSI/angle/distance in wireframe') plt.savefig('data/3d/3d.png') plt.show() </code></pre> <p>But I don't know how to extract the signal for each couple angle/distance and put it in data. </p> <p>I would like to know how to select the data to create the wireframe or to find another way to extract the data.</p> <p>Thank you !</p>
<p>I read the data in with pandas then grabbed the numpy arrays. Note the use of .values.</p> <pre><code>import pandas as pd import matplotlib.pylab as plt import numpy as np from mpl_toolkits.mplot3d import axes3d df= pd.read_csv('test.txt', sep=';') df.index = df.Distance del df['Distance'] raw_data = df angle = raw_data.columns.to_numpy().astype(float) distance = raw_data.index.to_numpy().astype(float) data = raw_data.to_numpy() fig = plt.figure() ax = fig.add_subplot(111, projection='3d') Z = data X, Y = np.meshgrid(angle, distance) ax.plot_wireframe(X, Y, Z) ax.set_xticks(angle) ax.set_yticks(distance[::2]) ax.set_xlabel('angle') ax.set_ylabel('distance') plt.title('RSSI/angle/distance in wireframe') plt.savefig('data/3d/3d.png') plt.show() </code></pre> <p><a href="https://i.stack.imgur.com/8AwoZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8AwoZ.png" alt="Wireframe" /></a></p> <p>Edit Jan 2021: Pandas recommends user use <code>to_numpy()</code> instead of <code>values</code> now. see: <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.values.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.values.html</a></p>
python|pandas|numpy|matplotlib
2
369
66,638,217
Append dataframes in a loop from files located in different directories?
<p>I want to create one pandas dataframe from files which are in different directories. In these directories there are also other files, and I want to read only .parquet files.</p> <p>I created a function but it returns nothing:</p> <pre><code>def all_files(root, extensions): files = pd.DataFrame() for dir_path, dir_names, file_names in os.walk(root): for file in file_names: if os.path.splitext(file)[1] in extensions: data = pd.read_parquet(os.path.join(dir_path, file)) files.append(data) return files </code></pre> <p>I'm calling this function like this:</p> <p><code>one_file = all_files(&quot;.&quot;, [&quot;.parquet&quot;])</code></p> <p>When I replace <code>return files</code> with <code>return data</code> it correctly returns one of the files, so the issue may lie in the line <code>files.append(data)</code>. I would be happy with any advice.</p>
<p><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.append.html" rel="nofollow noreferrer">pandas.DataFrame.append</a> does not work in place, it is <em>returning a new object</em> (unlike <code>append</code> method of built-in python <code>list</code>), try replacing</p> <pre><code>files.append(data) </code></pre> <p>using</p> <pre><code>files = files.append(data) </code></pre>
python|pandas
1
370
51,352,544
Groupby and how value_counts work
<p>I've got a dataframe with the following data</p> <pre><code> idpresm teamid competicion fecha local \ 0 12345 dummy1 ECU D1 2018-07-07 Deportivo Cuenca 1 12345 dummy1 ECU D1 2018-07-03 Liga Dep. Universitaria Quito 2 12345 dummy1 ECU D1 2018-06-24 Universidad Catolica 3 12345 dummy1 ECU D1 2018-06-18 Club Sport Emelec 4 12345 dummy1 ECU D1 2018-06-12 Universidad Catolica 5 12345 dummy1 ECU D1 2018-06-05 Delfin SC 6 12345 dummy1 ECU D1 2018-05-31 Sociedad Deportiva Aucas 7 12345 dummy1 ECU D1 2018-05-26 Universidad Catolica 8 12345 dummy1 ECU D1 2018-05-12 Universidad Catolica 9 12345 dummy1 ECU D1 2018-05-05 Macara 10 12345 dummy1 ECU D1 2018-04-28 Universidad Catolica 11 12345 dummy1 ECU D1 2018-04-21 Guayaquil City 12 12345 dummy1 ECU D1 2018-04-14 Universidad Catolica 13 12345 dummy1 ECU D1 2018-04-07 CD El Nacional 14 12345 dummy1 ECU D1 2018-03-31 Universidad Catolica 15 12345 dummy1 ECU D1 2018-03-25 Independiente Jose Teran 16 12345 dummy1 ECU D1 2018-03-20 Universidad Catolica 17 12345 dummy1 ECU D1 2018-03-10 Tecnico Universitario 18 12345 dummy1 INT CF 2018-03-09 Colchagua CD 19 12345 dummy1 ECU D1 2018-03-04 Universidad Catolica aw homeha line awayha r1 r3 0 2.39 0.96 0 0.80 1 1 1 3.79 0.85 0.5 0.91 2 1 2 9.32 1.00 1.5 0.84 4 0 3 5.80 0.99 1 0.85 2 3 4 2.93 0.85 0/0.5 0.97 1 1 5 3.86 1.04 0.5 0.80 5 2 6 2.61 0.85 0 0.99 0 1 7 3.32 1.04 0/0.5 0.80 1 1 8 5.56 0.90 1 0.94 2 1 9 2.82 0.70 0 1.16 1 2 10 3.60 1.00 0.5 0.84 3 1 11 2.20 1.04 0 0.80 1 1 12 4.07 0.99 0.5 0.85 2 0 13 2.77 0.97 0/0.5 0.85 0 0 14 3.36 0.80 0.5 1.02 3 1 15 6.11 0.97 0.5 0.85 2 1 16 2.03 0.91 0/-0.5 0.85 2 0 17 2.21 0.70 0/-0.5 1.13 0 2 18 1.44 NaN NaN NaN 0 0 19 2.76 0.80 0 1.02 1 2 </code></pre> <p>what I do is I <code>gruopby</code> by local column, and then I intend to get the average of the column r1, for that I do the following</p> <pre><code>homedata.groupby('local')['r1'].agg({'media':np.average,'contador': lambda x: x.value_counts()}) </code></pre> <p>I would expect a column of integers in 'contador'. what I get is this</p> <pre><code> media contador local CD El Nacional 0.000000 1 Club Sport Emelec 2.000000 1 Colchagua CD 0.000000 1 Delfin SC 5.000000 1 Deportivo Cuenca 1.000000 1 Guayaquil City 1.000000 1 Independiente Jose Teran 2.000000 1 Liga Dep. Universitaria Quito 2.000000 1 Macara 1.000000 1 Sociedad Deportiva Aucas 0.000000 1 Tecnico Universitario 0.000000 1 Universidad Catolica 2.111111 [3, 3, 2, 1] </code></pre> <p>Why do I get a list instead of a 9?</p>
<p>You are looking for <code>'size'</code>. For common functions, you should trust strings are mapped to efficient algorithms. For example:</p> <pre><code>d = {'media': 'mean', 'contador': 'size'} res = homedata.groupby('local')['r1'].agg(d) </code></pre> <hr> <blockquote> <p>I would expect a column of integers in 'contador'.</p> </blockquote> <p>This is not what you should expect. First note that <a href="https://pandas.pydata.org/pandas-docs/version/0.23/generated/pandas.Series.value_counts.html" rel="nofollow noreferrer"><code>pd.Series.value_counts</code></a> returns a <code>pd.Series</code> object of counts, not an integer. It's unclear what integers you <em>expect</em> this method to return.</p> <p>The reason why some values are integers and others lists indicates that <code>groupby</code> is performing some transformation: it assumes that if <code>value_counts</code> returns a series of length 1 you are only interested in the first value of that series.</p> <p>To illustrate, let's look at a <em>minimal</em> example of what you're seeing:</p> <pre><code>import pandas as pd df = pd.DataFrame([['A', 1], ['B', 2], ['B', 2], ['C', 4], ['B', 2], ['B', 6]], columns=['Group', 'Value']) res = df.groupby('Group')['Value'].agg({'counts': lambda x: x.value_counts()}) print(res) counts Group A 1 B [3, 1] C 1 </code></pre>
python|python-3.x|pandas|count|pandas-groupby
1
371
51,476,960
I have a workbook with multiple sheets and I want each of them to be assigned as an individual dataframe in Python
<p>Example:</p> <p>Example_workbook has 20 sheets. I want each of them to be assigned as an individual dataframe in Python. I have tried the code below, but it only reads a single sheet at a time. Does anyone know how we can use a "<strong>def</strong>" function to iterate through the sheets and assign each of them as a new dataframe?</p> <p>e.g.</p> <pre><code>df = pd.read_excel("practice1.xlsx",sheet_name=0) </code></pre>
<p>The <code>read_excel</code> method reads all the sheets at once if you set the <code>sheet_name</code> kwarg to be <code>None</code>.</p> <pre><code>sheets = pd.read_excel("practice1.xlsx",sheet_name=None) # this is a dict for sheet_name, df in sheets.items(): "calculations on the dataframe df" </code></pre> <p>you can read more info about the <code>sheet_name</code> kwarg <a href="https://pandas.pydata.org/pandas-docs/version/0.23/generated/pandas.read_excel.html#pandas-read-excel" rel="nofollow noreferrer">here</a></p>
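<p>If you also want one particular sheet under its own name, you can just pull it out of that dict (the sheet name 'Sheet1' below is only an example):</p> <pre><code>df_sheet1 = sheets['Sheet1']  # a single DataFrame for that sheet
</code></pre>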
python|pandas
3
372
51,529,545
Python Pandas | Create separate lists for each of the columns
<p>I am not sure how to use tolist to achieve the following. I have a dataframe like this:</p> <pre><code>Param_1 Param_2 Param_3 -0.171321 0.0118587 -0.148752 1.93377 0.011752 1.9707 4.10144 0.0112963 4.06861 6.25064 0.0103071 5.83927 </code></pre> <p>What I want is to create separate lists for each of the columns, the list name being the column label.</p> <p>I don't want to keep doing:</p> <pre><code>Param_1 = df["Param_1"].values.tolist() </code></pre> <p>Please let me know if there's a way to do this. Thanks. </p>
<p>Adding <code>.T</code></p> <pre><code>df.values.T.tolist() Out[465]: [[-0.171321, 1.93377, 4.10144, 6.25064], [0.0118587, 0.011752, 0.011296299999999999, 0.0103071], [-0.148752, 1.9707, 4.06861, 5.83927]] </code></pre> <p>Or we can create the <code>dict</code> </p> <pre><code>{x:df[x].tolist() for x in df.columns} Out[489]: {'Param_1': [-0.171321, 1.93377, 4.10144, 6.25064], 'Param_2': [0.0118587, 0.011752, 0.011296299999999999, 0.0103071], 'Param_3': [-0.148752, 1.9707, 4.06861, 5.83927]} </code></pre> <p>Or using <code>locals</code> (Not recommended but seems like what you need)</p> <pre><code>variables = locals() for key in df.columns: variables["{0}".format(key)]= df[key].tolist() Param_1 Out[501]: [-0.171321, 1.93377, 4.10144, 6.25064] </code></pre>
python|list|pandas|dataframe|tolist
3
373
70,780,842
Python-Pandas: How do I create columns from rows in a DataFrame without redundancy?
<p>I Joined multiple DataFrames and now I got only one DataFrame. Now I want to make the same ID rows to columns without redundancy. To make it clear:</p> <p>The DataFrame that I have now:</p> <pre><code> column1 column2 column3 row1 2 4 8 row2 1 18 7 row3 54 24 69 row3 54 24 10 row4 26 32 8 row4 26 28 8 </code></pre> <p>You can see that I have two row3 and row4 but they are different in column2 and column3</p> <p>This is the DataFrame that I would like to get:</p> <pre><code> column1 column2 column3 row3_a row4_a row1 2 4 8 NULL NUll row2 1 18 7 NULL NULL row3 54 24 69 10 NULL row4 26 28 8 NULL 28 </code></pre> <p>Any ideas how should I solve this?</p>
<p>This is a weird reshaping as you will have ambiguity if there are also duplicates in column1 or column2. Thus having a MultiIndex is probably a good solution.</p> <p>This solution reshapes using a combination of <code>melt</code> + <code>drop_duplicates</code> and <code>pivot</code></p> <pre><code>from string import ascii_lowercase letters = dict(enumerate(ascii_lowercase, start=1)) # add a/b/c to duplicated rows suffix = df.groupby(level=0).cumcount().map(letters) idx2 = (df.index+suffix).fillna('') df2 = ( df.assign(row=idx2) .reset_index() .melt(id_vars=['index', 'row']) .drop_duplicates(['variable', 'value']) .pivot(index='index', columns=['variable', 'row'], values='value') .rename_axis(columns=(None, None), index=None) # cleanup index names ) </code></pre> <p>output:</p> <pre><code> column1 column2 column3 row4a row3a row1 2.0 4.0 NaN 8.0 NaN row2 1.0 18.0 NaN 7.0 NaN row3 54.0 24.0 NaN 69.0 10.0 row4 26.0 32.0 28.0 NaN NaN </code></pre> <p>You can flatten the multiindex if you want: <code>df2.columns = df2.columns.map(''.join)</code>, of if really you want your ambiguous names: <code>df2.columns = df2.columns.map(max)</code></p>
python|pandas|dataframe
0
374
51,817,742
How could I detect subtypes in pandas object columns?
<p>I have the next DataFrame:</p> <pre><code>df = pd.DataFrame({'a': [100, 3,4], 'b': [20.1, 2.3,45.3], 'c': [datetime.time(23,52), 30,1.00]}) </code></pre> <p>and I would like to <em>detect <strong>subtypes</strong></em> in columns without explicit programming a loop, if possible.</p> <p>I am looking for the next output:</p> <pre><code>column a = [int] column b = [float] column c = [datetime.time, int, float] </code></pre>
<p>You should appreciate that with Pandas you can have 2 broad types of series:</p> <ol> <li>Optimised structures: Usually numeric data, this includes <code>np.datetime64</code> and <code>bool</code>.</li> <li><code>object</code> dtype: Used for series with mixed types or types which cannot be held natively in a NumPy array. The series is structured as a sequence of pointers to arbitrary Python objects and is generally inefficient.</li> </ol> <p>The reason for this preamble is you should only ever need to apply element-wise logic to the second type. Data in the first category is homogeneous by nature.</p> <p>So you should separate your logic accordingly.</p> <h3>Regular dtypes</h3> <p>Use <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.dtypes.html" rel="noreferrer"><code>pd.DataFrame.dtypes</code></a>:</p> <pre><code>print(df.dtypes) a int64 b float64 c object dtype: object </code></pre> <h3><code>object</code> dtype</h3> <p>Isolate these series via <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.select_dtypes.html" rel="noreferrer"><code>pd.DataFrame.select_dtypes</code></a> and then use a dictionary comprehension:</p> <pre><code>obj_types = {col: set(map(type, df[col])) for col in df.select_dtypes(include=[object])} print(obj_types) {'c': {int, datetime.time, float}} </code></pre> <p>You will need to do a little more work to get the <em>exact</em> format you require, but the above should be your plan of attack.</p>
python|pandas
11
375
51,986,601
How to check if a file contains email addresses or md5 using python
<p>How to check if a source_file contains email addresses or md5 once you download</p> <pre><code>data2 = pd.read_csv(source_file, header=None) </code></pre> <p>tried using regrex and str.contains...but not able to figure out how to proceed</p> <p>if that is checked then according to that i need to proceed for rest of the script</p> <pre><code>source_file1: [email protected] [email protected] source_file2: d131dd02c5e6vrc4 55ad340609f4fw02 </code></pre> <p>So far, I have tried:</p> <pre><code>if(data2['email/md5'].str.contains(r'[a-zA-Z0-9._-]+@[a-zA-Z.]+')==1): print "yes" </code></pre>
<p>Try this pattern <code>r'@\w+\.com'</code>.</p> <p><strong>Ex:</strong></p> <pre><code>import pandas as pd df1 = pd.read_csv(filename1, names=['email/md5']) if df1['email/md5'].str.contains(r'@\w+\.com').all(): print("Email") else: print("md5") </code></pre>
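<p>If the addresses are not guaranteed to end in <code>.com</code>, a slightly broader pattern could be used instead; this is only a rough sketch, not a full RFC-compliant validator:</p> <pre><code>import pandas as pd

df1 = pd.read_csv(filename1, names=['email/md5'])

# rough shape of an email address: something@something.tld
is_email = df1['email/md5'].str.contains(r'^[^@\s]+@[^@\s]+\.\w+$')

print("Email" if is_email.all() else "md5")
</code></pre>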
python|pandas
1
376
51,844,794
Finding hierarchical structure in messy energy data
<p>I have energy profile data (sampled at 3 hour intervals) for about 25 electricity meters in a building as pandas dataframe time series.</p> <p>The meters form a hierarchical structure where the top level meters include consumption data for the lower level meters.</p> <p>For example , ( a possible layered structure )</p> <pre> total - A - A1 - A2 - B - C - C1 - C2 - C21 - C22 </pre> <p>where the lower levels add up to higher level consumption. (eg. C = C1 + C2)</p> <p>Now the task is to identify the inherent structure present in the data to use for other energy data analysis.</p> <p>Is there any algorithm that can be used to detect this layered structure from messy data? Must I exhaustively try all possible combinations for lets say 4 level structures to identify a possible match ( with some tolerance since the data is messy)? Kindly advise certain strategies to think about this problem differently from an algorithmic perspective.</p> <p>Note: The meter names are numbers and can not be interpreted to be different levels directly. I do not have a metering strategy . The magnitude of energy consumption varies (for eg. it may well be the case that A2 > C (in the above fig.)) Put in a better way , the hierarchy can only represent relative magnitudes between levels.</p>
<p>This general problem is very close to <a href="https://en.wikipedia.org/wiki/3SUM" rel="nofollow noreferrer">3SUM</a>; unfortunately, no solution with better-than-quadratic complexity has been found for that family of problems.</p> <p>It is likely that your best option won't be much better than exhaustively trying combinations; however, with <code>n = 25</code> meters that shouldn't be too much of an issue.</p>
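<p>As a rough illustration of the brute-force idea (all names here are hypothetical: a DataFrame <code>readings</code> with one column per meter and one row per 3-hour sample, a relative tolerance <code>tol</code>, and child sets capped at <code>max_children</code> to keep the search tractable), you could look for sets of meters whose summed profile matches a candidate parent:</p> <pre><code>import numpy as np
from itertools import combinations

def find_children(readings, tol=0.05, max_children=4):
    """Return (parent, children) pairs whose consumption profiles approximately add up."""
    matches = []
    meters = list(readings.columns)
    for parent in meters:
        others = [m for m in meters if m != parent]
        for k in range(2, max_children + 1):
            for combo in combinations(others, k):
                # difference between the parent profile and the summed child profiles
                diff = readings[parent] - readings[list(combo)].sum(axis=1)
                # accept if the mean absolute error is small relative to the parent's level
                if np.abs(diff).mean() &lt;= tol * readings[parent].abs().mean():
                    matches.append((parent, combo))
    return matches
</code></pre>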
python|algorithm|pandas|numpy|energy
2
377
41,679,110
How to use tensorflow-wavenet
<p>I am trying to use the <a href="https://github.com/ibab/tensorflow-wavenet" rel="noreferrer">tensorflow-wavenet</a> program for text to speech.</p> <p>These are the steps:</p> <ol> <li>Download Tensorflow</li> <li>Download librosa</li> <li>Install requirements <code>pip install -r requirements.txt</code></li> <li>Download corpus and put into directory named "corpus"</li> <li>Train the machine <code>python train.py --data_dir=corpus</code></li> <li>Generate audio <code>python generate.py --wav_out_path=generated.wav --samples 16000 model.ckpt-1000</code></li> </ol> <p>After doing this, how can I generate a voice read-out of a text file?</p>
<p>According to the <a href="https://github.com/ibab/tensorflow-wavenet/blob/master/README.md#missing-features" rel="nofollow noreferrer">tensorflow-wavenet page</a>: </p> <blockquote> <p>Currently there is no local conditioning on extra information which would allow context stacks or controlling what speech is generated.</p> </blockquote> <p>You can find more information about current development of the project by reading the issues on the repository (<a href="https://github.com/ibab/tensorflow-wavenet/issues/189" rel="nofollow noreferrer">local conditioning is a desired feature!</a>)</p> <p>The Wavenet paper compares Wavenet to two TTS baselines, one of which appears to have code for training available online: <a href="http://hts.sp.nitech.ac.jp" rel="nofollow noreferrer">http://hts.sp.nitech.ac.jp</a></p>
tensorflow
4
378
64,200,512
tensorflow evaluation and earlystopping gives infinity overflow error
<p>I a model as seen in the code below, but when trying to evaluate it or using earlystopping on it it gives me the following error:</p> <pre><code> numdigits = int(np.log10(self.target)) + 1 OverflowError: cannot convert float infinity to integer </code></pre> <p>I must state that without using <code>.EarlyStopping</code> or <code>model.evaluate</code> everything works well.</p> <p>I know that <code>np.log10(0)</code> gives <code>-inf</code> so that could be a potential cause, but why is there a <code>0</code> there in the first place and how can it be prevented? How can this problem be fixed?</p> <p>NOTES</p> <p>this is the code I use:</p> <pre><code>import tensorflow as tf from tensorflow import keras TRAIN_PERCENT = 0.9 model = keras.Sequential([ keras.layers.Dense(128, input_shape=(100,), activation='relu'), keras.layers.Dense(128, activation='relu'), keras.layers.Dense(100) ]) earlystop_callback = keras.callbacks.EarlyStopping(min_delta=0.0001, patience=1 , monitor='accuracy' ) optimizer = keras.optimizers.Adam(lr=0.01) model.compile(optimizer=optimizer, loss=&quot;mse&quot;, metrics=['accuracy']) X_set, Y_set = some_get_data_function() sep = int(len(X_set)/TRAIN_PERCENT) X_train, Y_train = X_set[:sep], Y_set[:sep] X_test, Y_test = X_set[sep:], Y_set[sep:] model.fit(X_train, Y_train, batch_size=16, epochs=5, callbacks=[earlystop_callback]) ev = model.evaluate(X_test, Y_test) print(ev) </code></pre> <p>X,Y sets are <code>np</code> arrays. X is an array of arrays of 100 integers between <code>0</code> and <code>10</code>. Y is an array of arrays of 100 integers, all of them are either <code>0</code> or <code>1</code>.</p>
<p>Well, it's hard to tell exactly since I can't run the code without the <code>some_get_data_function()</code> implementation, but I recently got the same error when I <strong>mistakenly passed an EMPTY array</strong> to <code>model.evaluate</code>. Taking into account that @meTchaikovsky's comment solved your issue, it is almost certainly due to messed-up input arrays.</p>
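<p>In the code shown in the question, the split itself looks like the likely culprit: dividing by <code>TRAIN_PERCENT</code> (0.9) makes <code>sep</code> larger than <code>len(X_set)</code>, so <code>X_test</code> ends up empty. A sketch of the corrected split, assuming a 90/10 train/test split was intended:</p> <pre><code># sep = int(len(X_set) / TRAIN_PERCENT)  # dividing by 0.9 makes sep &gt; len(X_set), so X_test is empty
sep = int(len(X_set) * TRAIN_PERCENT)     # multiply instead of divide

X_train, Y_train = X_set[:sep], Y_set[:sep]
X_test, Y_test = X_set[sep:], Y_set[sep:]

assert len(X_test) &gt; 0, "test split is empty"
</code></pre>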
python|numpy|tensorflow|keras|overflow
8
379
64,583,123
Two different numpy arrays are being assigned the same values when only one array is being referenced
<p>I'm trying to write some code to carry out the Jacobi method for solving linear equations (I realise my method is not the most efficient way to do this but I am trying to figure out why it's not working).</p> <p>I have tried to debug the problem and noticed the following issue. The code finishes after 2 iterations because on the second iteration on line 32 when xnew[i] is assigned a new value, the same value is also assigned to x[i], even though x[i] is not referenced. Why is this happening on the second iteration and not the first time the for loop is run and is there a way to fix this?</p> <p>Thanks in advance</p> <pre><code>import numpy as np A = np.array( [[0.93, 0.24, 0], [0.04, 0.54, 0.26], [1, 1, 1]]) b = np.array([[6.0], [2.0], [10.0]]) n , m = np.shape(A) x = np.zeros(shape=(n,1)) xnew = np.zeros(shape=(n,1)) iterlimit = 100 tol = 0.0000001 for iteration in range(iterlimit): convergence = True for i in range(n): sum=0 for j in range(n): if j != i: sum = sum + (A[i,j] * x[j]) #on second iteration (iteration =1) below line begins to #assign x[i] the same values as it assigns xnew[i] causing the #convergence check below to not run and results in a premature break xnew[i] = 1/A[i,i] * (b[i] - sum) if abs(xnew[i]-x[i]) &gt; tol: convergence = False if convergence: break x = xnew print(&quot;Iteration:&quot;, iteration+1) print(&quot;Solution:&quot;) print(np.matrix(xnew)) </code></pre>
<pre><code>x = xnew </code></pre> <p>This line assigns <code>xnew</code> to <code>x</code>. Not the <em>contents</em> of xnew, but the array itself. So after your first iteration, <code>x</code> and <code>xnew</code> reference the same array in memory.</p> <p>Try instead <code>x[:] = xnew[:]</code></p>
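<p>A quick way to see the aliasing, and to confirm that the fix copies values instead of the reference, is <code>np.shares_memory</code>:</p> <pre><code>import numpy as np

x = np.zeros((3, 1))
xnew = np.zeros((3, 1))

x = xnew                            # both names now refer to the same array
print(np.shares_memory(x, xnew))    # True

x = np.zeros((3, 1))
x[:] = xnew[:]                      # copies the values into x's own buffer
print(np.shares_memory(x, xnew))    # False
</code></pre>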
python|numpy
2
380
47,836,347
Python 2.7 - pandas.read_table - how to import quadruple-pipe-separated fields from flat file
<p>I am a decent SAS programmer, but I am quite new in Python. Now, I have been given Twitter feeds, each saved into <strong>very large</strong> flat files, with headers in row #1 and a data structure like the below:</p> <pre> CREATED_AT||||ID||||TEXT||||IN_REPLY_TO_USER_ID||||NAME||||SCREEN_NAME||||DESCRIPTION||||FOLLOWERS_COUNT||||TIME_ZONE||||QUOTE_COUNT||||REPLY_COUNT||||RETWEET_COUNT||||FAVORITE_COUNT Tue Nov 14 12:33:00 +0000 2017||||930413253766791168||||ICYMI: Football clubs join the craft beer revolution! A good read|||| ||||BAB||||BABBrewers||||Monthly homebrew meet-up at 1000 Trades, Jewellery Quarter. First Tuesday of the month. All welcome, even if you've never brewed before.||||95|||| ||||0||||0||||0||||0 Tue Nov 14 12:34:00 +0000 2017||||930413253766821456||||I'm up for it|||| ||||Misty||||MistyGrl||||You CAN DO it!||||45|||| ||||0||||0||||0||||0 </pre> <p>I guess it's like that because any sort of characters can be found in a Twitter feed, but a quadruple pipe is unlikely enough. </p> <p>I know some people use JSON for that, but I've got these files as such: lots of them. I could use SAS to easily transform these files, but I prefer to "go pythonic", this time.</p> <p>Now, I cannot seem to find a way to make Python (2.7) understand that the quadruple pipe is the actual separator. The output from the code below:</p> <pre><code>import pandas as pd with open('C:/Users/myname.mysurname/Desktop/my_twitter_flow_1.txt') as theInFile: inTbl = pd.read_table(theInFile, engine='python', sep='||||', header=1) print inTbl.head() </code></pre> <p>seem to suggest that Python does not see the distinct fields as distinct but, simply, brings in each of the first 5 rows, up to the line feed character, ignoring the |||| separator. </p> <p>Basically, I am getting an output like the one I wrote above to show you the data structure. </p> <p>Any hints?</p>
<p>Using just the data in your question:</p> <pre><code>&gt;&gt;&gt; df = pd.read_csv('rio.txt', sep='\|{4}', skip_blank_lines=True, engine='python') &gt;&gt;&gt; df CREATED_AT ID \ 0 Tue Nov 14 12:33:00 +0000 2017 930413253766791168 1 Tue Nov 14 12:34:00 +0000 2017 930413253766821456 TEXT IN_REPLY_TO_USER_ID \ 0 ICYMI: Football clubs join the craft beer revo... 1 I'm up for it NAME SCREEN_NAME DESCRIPTION \ 0 BAB BABBrewers Monthly homebrew meet-up at 1000 Trades, Jewel... 1 Misty MistyGrl You CAN DO it! FOLLOWERS_COUNT TIME_ZONE QUOTE_COUNT REPLY_COUNT RETWEET_COUNT \ 0 95 0 0 0 1 45 0 0 0 FAVORITE_COUNT 0 0 1 0 </code></pre> <p>Notice the <code>sep</code> parameter. When it's more than one character long and not equal to '\s+' it's interpreted as a regular expression. But the '|' character has special meaning in a regex, hence it must be escaped, using the '\' character. I could simply have written <code>sep='\|\|\|\|'</code>; however, I've used an abbreviation.</p>
python|pandas|separator
3
381
49,161,208
Keras - method on_batch_end is slow but only callback I have is checkpoint
<p>I set up a network with keras using TensorFlow backend.</p> <p>When I train my network I often times keep getting message:</p> <pre><code>UserWarning: Method on_batch_end() is slow compared to the batch update (0.195523). Check your callbacks. % delta_t_median) </code></pre> <p>The issue is that my network is set up with only checkpoint callback:</p> <pre><code>checkpoint = ModelCheckpoint(filepath, monitor='val_loss', verbose=1, save_best_only=True, mode='min') callbacks_list = [checkpoint] </code></pre> <p>As far as I see in documentation this method is called only on epoch end, so it can't slow down <code>on_batch_end</code> method. Can anyone provide some information on what is the issue?</p>
<p>This is most probably a generator (<code>fit_generator()</code>) issue. When using a generator as the data source, it has to be called at the end of every batch, so a slow generator shows up as a seemingly slow <code>on_batch_end()</code>. Consider revisiting your generator code, using multiprocessing (<code>workers &gt; 1</code>), or a higher batch size (if possible).</p>
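<p>As a rough illustration of the workers suggestion (the generator name and the numbers below are hypothetical, and <code>workers &gt; 1</code> is safest with a <code>keras.utils.Sequence</code>), something along these lines lets Keras prepare batches in the background:</p> <pre><code>model.fit_generator(
    train_gen,                  # hypothetical generator / keras.utils.Sequence
    steps_per_epoch=len(train_gen),
    epochs=50,
    callbacks=callbacks_list,
    workers=4,                  # prepare batches in parallel
    use_multiprocessing=True,   # processes instead of threads
    max_queue_size=10,          # how many batches to keep ready
)
</code></pre>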
python|tensorflow|machine-learning|callback|keras
6
382
58,700,108
How can I create an empty array in Python like a C++ array
<p>I need to create an empty nd-array in Python, without using the <code>zeros</code> or <code>ones</code> functions, similar to what this C++ declaration does for a 3*4 array of integers:</p> <pre><code>int x[3][4] </code></pre> <p>Please help me.</p>
<p>NumPy has a function for exactly that: <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.empty.html#numpy.empty" rel="nofollow noreferrer"><code>numpy.empty</code></a>. It allocates an array of the requested shape and dtype without initialising its contents.</p>
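<p>For instance, a 3x4 integer array analogous to <code>int x[3][4]</code> could be created like this; note that, as with an uninitialised C++ array, the contents are arbitrary until you assign to them:</p> <pre><code>import numpy as np

x = np.empty((3, 4), dtype=int)  # allocated but not initialised

x[0][2] = 7      # element access works like the C++ array
print(x.shape)   # (3, 4)
</code></pre>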
python|python-3.x|numpy|numpy-ndarray
2
383
56,305,466
Why might pandas resort the dataframe after joining?
<p>I am writing an application where I need to pull in a single column from another dataframe. I'm getting some strange behavior. When I run the function using one dataset, everything works great. When it executes on a secondary dataset, the same code <em>resorts</em> the data based on the index. I'm pulling my hair out trying to figure out why the very same code is producing two different results. </p> <p>Here's the code. I realize this isn't MCVE but I have verified this is exactly where the resorting is happening. I'm hoping someone knows <em>in general</em> why pandas might resort or not resort in various circumstances.</p> <pre><code>def new_curr_need(self, need): self.main_df.drop('Curr_need', axis=1, inplace=True) self.main_df = ( self.main_df.join(self.need_df[need], how='left')) #if it resorts, happens after the join self.main_df.rename({need:'Curr_need'}, axis='columns', inplace=True) </code></pre> <p>Potentially relevant info on the datasets:</p> <ul> <li><p>The main_df and need_df index is a string (customer name) and is essentially the same in both datasets</p></li> <li><p>The only major difference between the two datasets is that the resorting one is a good bit wider</p></li> <li><p>Elsewhere in my code is the ability for the user to sort the data in a customized way. That sorting will hold after running the function above using dataset 1 but not dataset 2.</p></li> </ul>
<p>Pandas' left join operation reorders the index of the right dataframe so that it matches the index of the left dataframe.</p> <p>For example, the following code produces a dataframe where the index of b is rearranged to match the index of a:</p> <pre><code>a = pd.DataFrame({'x':[1,2,3]}) b = pd.DataFrame({'y':[1,2,3]}) a.index = [2,0,1] a.join(b, how='left') x y 2 1 3 0 2 1 1 3 2 </code></pre> <p>If the indices of the dataframes you join are the same, the values will remain in the same order; if the index of the right dataframe is resorted, the values will be resorted.</p>
python|pandas
0
384
56,174,211
Concat 2 columns in a new phrase column using pandas DataFrame
<p>I have a DataFrame like this:</p> <pre><code>&gt;&gt;&gt; df = pd.DataFrame({'id_sin':['s123','s124','s125','s126','s127'], 'num1':[12,10,23,6,np.nan], 'num2':['BG','TC','AB','RC',np.nan], 'fr':[1,1,1,1,0], }) &gt;&gt;&gt; df fr id_sin num1 num2 0 1 s123 12 BG 1 1 s124 10 TC 2 1 s125 23 AB 3 1 s126 6 RC 4 0 s127 NaN NaN </code></pre> <p>I want to concatenate the columns <code>num1</code> &amp; <code>num2</code> <em>(num2 is num1)</em> in a phrase like this with fr being 1:</p> <pre><code> fr id_sin num1 num2 phrase 0 1 s123 12 BG BG is 12 1 1 s124 10 TC TC is 10 2 1 s125 23 AB AB is 23 3 1 s126 6 RC RC is 6 </code></pre> <p>I tried this but doesn't work:</p> <pre><code>df['phrase'] = str(df['num2']) + ' is ' + str(df['num1']) </code></pre>
<p><strong>Edit</strong>:<br> if you want <code>num1</code> to have no decimal <code>.0</code>, convert it to <code>Int64</code>:</p> <pre><code>df.num1 = df.num1.astype('Int64') Out[32]: id_sin num1 num2 fr 0 s123 12 BG 1 1 s124 10 TC 1 2 s125 23 AB 1 3 s126 6 RC 1 4 s127 NaN NaN 0 </code></pre> <p>Try <code>Series.str.cat</code>:</p> <pre><code>df.num2.str.cat(df.num1.astype(str), sep=' is ') Out[2055]: 0 BG is 12 1 TC is 10 2 AB is 23 3 RC is 6 4 NaN Name: num2, dtype: object </code></pre> <hr> <p>On @rafael's comment: his approach works too; just a typo in it was causing the error. It should be:</p> <pre><code>dn['num2'].astype(str) + ' is ' + dn['num1'].astype(str) </code></pre>
python|pandas|dataframe
1
385
55,923,319
Transform time series data set to supervised learning data set
<p>I have a data set with time series (on daily basis) for multiple items (e.g. users). The data looks simlified like this: <a href="https://i.ibb.co/Pj4TnHW/trans-original.jpg" rel="nofollow noreferrer">https://i.ibb.co/Pj4TnHW/trans-original.jpg</a> (I can't post images, because of missing rep. points, sorry)</p> <p>This data set has all the same attributes (e.g. measures) for each user. Those measures are taken over a time window on daily basis. Every user has its own "event date".</p> <p>My goal is to transform this time series (row-oriented) data set to a dataset, which could be used for supervised learning. My desired layout would look like this: <a href="https://i.ibb.co/8DxYpCy/Unbenannt.jpg" rel="nofollow noreferrer">https://i.ibb.co/8DxYpCy/Unbenannt.jpg</a></p> <p>Currently, I apply my solution on a dataset with ~60 measures. So far I achieved this, by using an iteration over "user_id" and applying multiple steps with pandas.melt(), pandas.transpose() functions. But this requires a lot of preformatting, and becomes slower with larger data sets.</p> <p>Is there a better way to do my transformation? I read about this <a href="https://machinelearningmastery.com/convert-time-series-supervised-learning-problem-python/" rel="nofollow noreferrer">https://machinelearningmastery.com/convert-time-series-supervised-learning-problem-python/</a> but this seems to be another type of problem...</p> <p>//EDIT #1: As requested, I created the smallest possible notebook / python script, with a simplified dataset to demonstrate, what I'm doing: <a href="https://www.file-upload.net/download-13590592/timeseries_to_supervised.zip.html" rel="nofollow noreferrer">https://www.file-upload.net/download-13590592/timeseries_to_supervised.zip.html</a> (Jupyter Notebook, exported HTML-Version, sample input dataset)</p>
<p>I used to do this kind of reshaping with <a href="https://en.wikipedia.org/wiki/R_(programming_language)" rel="nofollow noreferrer">R</a>; it's a language well suited to manipulating rows (functional programming). You can use the <a href="https://cran.r-project.org/web/packages/data.table/vignettes/datatable-intro.html" rel="nofollow noreferrer">data.table</a> library, which is very fast. If I may ask, which column are you trying to predict? Be careful not to predict an outcome based on present or future data; you can only use the past :)</p>
python|pandas|time-series|supervised-learning
0
386
39,707,080
Pandas - Alternative to rank() function that gives unique ordinal ranks for a column
<p>At this moment I am writing a Python script that aggregates data from multiple Excel sheets. The module I choose to use is Pandas, because of its speed and ease of use with Excel files. The question is only related to the use of Pandas and me trying to create a additional column that contains <em>unique, integer-only, ordinal</em> ranks within a group.</p> <p>My Python and Pandas knowledge is limited as I am just a beginner.</p> <p><strong>The Goal</strong></p> <p>I am trying to achieve the following data structure. Where the top 10 adwords ads are ranked vertically on the basis of their position in Google. In order to do this I need to create a column in the original data (see Table 2 &amp; 3) with a integer-only ranking that contains no duplicate values. </p> <p>Table 1: Data structure I am trying to achieve</p> <pre><code> device , weeks , rank_1 , rank_2 , rank_3 , rank_4 , rank_5 mobile , wk 1 , string , string , string , string , string mobile , wk 2 , string , string , string , string , string computer, wk 1 , string , string , string , string , string computer, wk 2 , string , string , string , string , string </code></pre> <p><strong>The Problem</strong></p> <p>The exact problem I run into is not being able to efficiently rank the rows with pandas. I have tried a number of things, but I cannot seem to get it ranked in this way. </p> <p>Table 2: Data structure I have</p> <pre><code> weeks device , website , ranking , adtext wk 1 mobile , url1 , *2.1 , string wk 1 mobile , url2 , *2.1 , string wk 1 mobile , url3 , 1.0 , string wk 1 mobile , url4 , 2.9 , string wk 1 desktop , *url5 , 2.1 , string wk 1 desktop , url2 , *1.5 , string wk 1 desktop , url3 , *1.5 , string wk 1 desktop , url4 , 2.9 , string wk 2 mobile , url1 , 2.0 , string wk 2 mobile , *url6 , 2.1 , string wk 2 mobile , url3 , 1.0 , string wk 2 mobile , url4 , 2.9 , string wk 2 desktop , *url5 , 2.1 , string wk 2 desktop , url2 , *2.9 , string wk 2 desktop , url3 , 1.0 , string wk 2 desktop , url4 , *2.9 , string </code></pre> <p>Table 3: The table I cannot seem to create</p> <pre><code> weeks device , website , ranking , adtext , ranking wk 1 mobile , url1 , *2.1 , string , 2 wk 1 mobile , url2 , *2.1 , string , 3 wk 1 mobile , url3 , 1.0 , string , 1 wk 1 mobile , url4 , 2.9 , string , 4 wk 1 desktop , *url5 , 2.1 , string , 3 wk 1 desktop , url2 , *1.5 , string , 1 wk 1 desktop , url3 , *1.5 , string , 2 wk 1 desktop , url4 , 2.9 , string , 4 wk 2 mobile , url1 , 2.0 , string , 2 wk 2 mobile , *url6 , 2.1 , string , 3 wk 2 mobile , url3 , 1.0 , string , 1 wk 2 mobile , url4 , 2.9 , string , 4 wk 2 desktop , *url5 , 2.1 , string , 2 wk 2 desktop , url2 , *2.9 , string , 3 wk 2 desktop , url3 , 1.0 , string , 1 wk 2 desktop , url4 , *2.9 , string , 4 </code></pre> <p>The standard .rank(ascending=True), gives averages on duplicate values. 
But since I use these ranks to organize them vertically this does not work out.</p> <pre><code>df = df.sort_values(['device', 'weeks', 'ranking'], ascending=[True, True, True]) df['newrank'] = df.groupby(['device', 'week'])['ranking'].rank( ascending=True) </code></pre> <p>The .rank(method="dense", ascending=True) maintains duplicate values and also does not solve my problem</p> <pre><code>df = df.sort_values(['device', 'weeks', 'ranking'], ascending=[True, True, True]) df['newrank'] = df.groupby(['device', 'week'])['ranking'].rank( method="dense", ascending=True) </code></pre> <p>The .rank(method="first", ascending=True) throws a ValueError</p> <pre><code>df = df.sort_values(['device', 'weeks', 'ranking'], ascending=[True, True, True]) df['newrank'] = df.groupby(['device', 'week'])['ranking'].rank( method="first", ascending=True) </code></pre> <p>ADDENDUM: If I would find a way to add the rankings in a column, I would then use pivot to transpose the table in the following way.</p> <pre><code>df = pd.pivot_table(df, index = ['device', 'weeks'], columns='website', values='adtext', aggfunc=lambda x: ' '.join(x)) </code></pre> <p><strong>My question to you</strong></p> <p>I was hoping any of you could help me find a solution for this problem. This could either an efficient ranking script or something else to help me reach the final data structure.</p> <p>Thank you!</p> <p>Sebastiaan</p> <hr> <p>EDIT: Unfortunately, I think I was not clear in my original post. I am looking for a ordinal ranking that only gives integers and has no duplicate values. This means that when there is a duplicate value it will randomly give one a higher ranking than the other.</p> <p>So what I would like to do is generate a ranking that labels each row with an ordinal value per group. The groups are based on the week number and device. The reason I want to create a new column with this ranking is so that I can make top 10s per week and device.</p> <p>Also Steven G asked me for an example to play around with. I have provided that here. </p> <p>Example data can be pasted directly into python</p> <p>! IMPORTANT: The names are different in this sample. The dataframe is called placeholder, the column names are as follows: 'week', 'website', 'share', 'rank_google', 'device'. 
</p> <pre><code>data = {u'week': [u'WK 1', u'WK 2', u'WK 3', u'WK 4', u'WK 2', u'WK 2', u'WK 1', u'WK 3', u'WK 4', u'WK 3', u'WK 3', u'WK 4', u'WK 2', u'WK 4', u'WK 1', u'WK 1', u'WK3', u'WK 4', u'WK 4', u'WK 4', u'WK 4', u'WK 2', u'WK 1', u'WK 4', u'WK 4', u'WK 4', u'WK 4', u'WK 2', u'WK 3', u'WK 4', u'WK 3', u'WK 4', u'WK 3', u'WK 2', u'WK 2', u'WK 4', u'WK 1', u'WK 1', u'WK 4', u'WK 4', u'WK 2', u'WK 1', u'WK 3', u'WK 1', u'WK 4', u'WK 1', u'WK 4', u'WK 2', u'WK 2', u'WK 2', u'WK 4', u'WK 4', u'WK 4', u'WK 1', u'WK 3', u'WK 4', u'WK 4', u'WK 1', u'WK 4', u'WK 3', u'WK 2', u'WK 4', u'WK 4', u'WK 4', u'WK 4', u'WK 1'], u'website': [u'site1.nl', u'website2.de', u'site1.nl', u'site1.nl', u'anothersite.com', u'url2.at', u'url2.at', u'url2.at', u'url2.at', u'anothersite.com', u'url2.at', u'url2.at', u'url2.at', u'url2.at', u'url2.at', u'anothersite.com', u'url2.at', u'url2.at', u'url2.at', u'url2.at', u'anothersite.com', u'url2.at', u'url2.at', u'anothersite.com', u'site2.co.uk', u'sitename2.com', u'sitename.co.uk', u'sitename.co.uk', u'sitename2.com', u'sitename2.com', u'sitename2.com', u'url3.fi', u'sitename.co.uk', u'sitename2.com', u'sitename.co.uk', u'sitename2.com', u'sitename2.com', u'ulr2.se', u'sitename2.com', u'sitename.co.uk', u'sitename2.com', u'sitename2.com', u'sitename2.com', u'sitename2.com', u'sitename2.com', u'sitename.co.uk', u'sitename.co.uk', u'sitename2.com', u'facebook.com', u'alsoasite.com', u'ello.com', u'instagram.com', u'alsoasite.com', u'facebook.com', u'facebook.com', u'singleboersen-vergleich.at', u'facebook.com', u'anothername.com', u'twitter.com', u'alsoasite.com', u'alsoasite.com', u'alsoasite.com', u'alsoasite.com', u'facebook.com', u'alsoasite.com', u'alsoasite.com'], 'adtext': [u'site1.nl 3,9 | &lt; 10\xa0%', u'website2.de 1,4 | &lt; 10\xa0%', u'site1.nl 4,3 | &lt; 10\xa0%', u'site1.nl 3,8 | &lt; 10\xa0%', u'anothersite.com 2,5 | 12,36 %', u'url2.at 1,3 | 78,68 %', u'url2.at 1,2 | 92,58 %', u'url2.at 1,1 | 85,47 %', u'url2.at 1,2 | 79,56 %', u'anothersite.com 2,8 | &lt; 10\xa0%', u'url2.at 1,2 | 80,48 %', u'url2.at 1,2 | 85,63 %', u'url2.at 1,1 | 88,36 %', u'url2.at 1,3 | 87,90 %', u'url2.at 1,1 | 83,70 %', u'anothersite.com 3,1 | &lt; 10\xa0%', u'url2.at 1,2 | 91,00 %', u'url2.at 1,1 | 92,11 %', u'url2.at 1,2 | 81,28 %' , u'url2.at 1,1 | 86,49 %', u'anothersite.com 2,7 | &lt; 10\xa0%', u'url2.at 1,2 | 83,96 %', u'url2.at 1,2 | 75,48 %' , u'anothersite.com 3,0 | &lt; 10\xa0%', u'site2.co.uk 3,1 | 16,24 %', u'sitename2.com 2,3 | 34,85 %', u'sitename.co.uk 3,5 | &lt; 10\xa0%', u'sitename.co.uk 3,6 | &lt; 10\xa0%', u'sitename2.com 2,1 | &lt; 10\xa0%', u'sitename2.com 2,2 | 13,55 %', u'sitename2.com 2,1 | 47,91 %', u'url3.fi 3,4 | &lt; 10\xa0%', u'sitename.co.uk 3,1 | 14,15 %', u'sitename2.com 2,4 | 28,77 %', u'sitename.co.uk 3,1 | 22,55 %', u'sitename2.com 2,1 | 17,03 %', u'sitename2.com 2,1 | 24,46 %', u'ulr2.se 2,7 | &lt; 10\xa0%', u'sitename2.com 2,0 | 49,12 %', u'sitename.co.uk 3,0 | &lt; 10\xa0%', u'sitename2.com 2,1 | 40,00 %', u'sitename2.com 2,1 | &lt; 10\xa0%', u'sitename2.com 2,2 | 30,29 %', u'sitename2.com 2,0 |47,48 %', u'sitename2.com 2,1 | 32,17 %', u'sitename.co.uk 3,2 | &lt; 10\xa0%', u'sitename.co.uk 3,1 | 12,77 %', u'sitename2.com 2,6 | &lt; 10\xa0%', u'facebook.com 3,2 | &lt; 10\xa0%', u'alsoasite.com 2,3 | &lt; 10\xa0%', u'ello.com 1,8 | &lt; 10\xa0%',u'instagram.com 5,0 | &lt; 10\xa0%', u'alsoasite.com 2,2 | &lt; 10\xa0%', u'facebook.com 3,0 | &lt; 10\xa0%', u'facebook.com 3,2 | &lt; 10\xa0%', u'singleboersen-vergleich.at 2,6 | &lt; 
10\xa0%', u'facebook.com 3,4 | &lt; 10\xa0%', u'anothername.com 1,9 | &lt;10\xa0%', u'twitter.com 4,4 | &lt; 10\xa0%', u'alsoasite.com 1,1 | 12,35 %', u'alsoasite.com 1,1 | 11,22 %', u'alsoasite.com 2,0 | &lt; 10\xa0%', u'alsoasite.com 1,1| 10,86 %', u'facebook.com 3,4 | &lt; 10\xa0%', u'alsoasite.com 1,1 | 10,82 %', u'alsoasite.com 1,1 | &lt; 10\xa0%'], u'share': [u'&lt; 10\xa0%', u'&lt; 10\xa0%', u'&lt; 10\xa0%', u'&lt; 10\xa0%', u'12,36 %', u'78,68 %', u'92,58 %', u'85,47 %', u'79,56 %', u'&lt; 10\xa0%', u'80,48 %', u'85,63 %', u'88,36 %', u'87,90 %', u'83,70 %', u'&lt; 10\xa0%', u'91,00 %', u'92,11 %', u'81,28 %', u'86,49 %', u'&lt; 10\xa0%', u'83,96 %', u'75,48 %', u'&lt; 10\xa0%', u'16,24 %', u'34,85 %', u'&lt; 10\xa0%', u'&lt; 10\xa0%', u'&lt; 10\xa0%', u'13,55 %', u'47,91 %', u'&lt; 10\xa0%', u'14,15 %', u'28,77 %', u'22,55 %', u'17,03 %', u'24,46 %', u'&lt; 10\xa0%', u'49,12 %', u'&lt; 10\xa0%', u'40,00 %', u'&lt; 10\xa0%', u'30,29 %', u'47,48 %', u'32,17 %', u'&lt; 10\xa0%', u'12,77 %', u'&lt; 10\xa0%', u'&lt; 10\xa0%', u'&lt; 10\xa0%', u'&lt; 10\xa0%', u'&lt; 10\xa0%', u'&lt; 10\xa0%', u'&lt; 10\xa0%', u'&lt; 10\xa0%', u'&lt; 10\xa0%', u'&lt; 10\xa0%', u'&lt; 10\xa0%', u'&lt; 10\xa0%', u'12,35 %', u'11,22 %', u'&lt; 10\xa0%', u'10,86 %', u'&lt; 10\xa0%', u'10,82 %', u'&lt; 10\xa0%'], u'rank_google': [u'3,9', u'1,4', u'4,3', u'3,8', u'2,5', u'1,3', u'1,2', u'1,1', u'1,2', u'2,8', u'1,2', u'1,2', u'1,1', u'1,3', u'1,1', u'3,1', u'1,2', u'1,1', u'1,2', u'1,1', u'2,7', u'1,2', u'1,2', u'3,0', u'3,1', u'2,3', u'3,5', u'3,6', u'2,1', u'2,2', u'2,1', u'3,4', u'3,1', u'2,4', u'3,1', u'2,1', u'2,1', u'2,7', u'2,0', u'3,0', u'2,1', u'2,1', u'2,2', u'2,0', u'2,1', u'3,2', u'3,1', u'2,6', u'3,2', u'2,3', u'1,8', u'5,0', u'2,2', u'3,0', u'3,2', u'2,6', u'3,4', u'1,9', u'4,4', u'1,1', u'1,1', u'2,0', u'1,1', u'3,4', u'1,1', u'1,1'], u'device': [u'Mobile', u'Tablet', u'Mobile', u'Mobile', u'Tablet', u'Mobile', u'Tablet', u'Computer', u'Mobile', u'Tablet', u'Mobile', u'Computer', u'Tablet', u'Tablet', u'Computer', u'Tablet', u'Tablet', u'Tablet', u'Mobile', u'Computer', u'Tablet', u'Computer', u'Mobile', u'Tablet', u'Tablet', u'Mobile', u'Tablet', u'Mobile', u'Computer', u'Computer', u'Tablet', u'Mobile', u'Tablet', u'Mobile', u'Tablet', u'Mobile', u'Mobile', u'Mobile', u'Tablet', u'Computer', u'Tablet', u'Computer', u'Mobile', u'Tablet', u'Tablet', u'Tablet', u'Mobile', u'Computer', u'Mobile', u'Computer', u'Tablet', u'Tablet', u'Tablet', u'Mobile', u'Mobile', u'Tablet', u'Mobile', u'Mobile', u'Tablet', u'Mobile', u'Mobile', u'Computer', u'Mobile', u'Tablet', u'Mobile', u'Mobile']} placeholder = pd.DataFrame(data) </code></pre> <p><strong>Error I receive when I use the rank() function with method='first'</strong></p> <pre><code>C:\Users\username\code\report-creator&gt;python recomp-report-04.py Traceback (most recent call last): File "recomp-report-04.py", line 71, in &lt;module&gt; placeholder['ranking'] = placeholder.groupby(['week', 'device'])['rank_googl e'].rank(method='first').astype(int) File "&lt;string&gt;", line 35, in rank File "C:\Users\sthuis\AppData\Local\Continuum\Anaconda2\lib\site-packages\pand as\core\groupby.py", line 561, in wrapper raise ValueError ValueError </code></pre> <p><strong>My solution</strong></p> <p>Effectively, the answer is given by @Nickil Maveli. A huge thank you! Nevertheless, I thought it might be smart to outline how I finally incorporated the solution.</p> <p>Rank(method='first') is a good way to get an ordinal ranking. 
But since I was working with numbers that were formatted in the European way, pandas interpreted them as strings and could not rank them this way. I came to this conclusion by the reaction of Nickil Maveli and trying to rank each group individually. I did that through the following code.</p> <pre><code>for name, group in df.sort_values(by='rank_google').groupby(['weeks', 'device']): df['new_rank'] = group['ranking'].rank(method='first').astype(int) </code></pre> <p>This gave me the following error:</p> <pre><code>ValueError: first not supported for non-numeric data </code></pre> <p>So this helped me realize that I should convert the column to floats. This is how I did it.</p> <pre><code># Converting the ranking column to a float df['ranking'] = df['ranking'].apply(lambda x: float(unicode(x.replace(',','.')))) # Creating a new column with a rank df['new_rank'] = df.groupby(['weeks', 'device'])['ranking'].rank(method='first').astype(int) # Dropping all ranks after the 10 df = df.sort_values('new_rank').groupby(['weeks', 'device']).head(n=10) # Pivotting the column df = pd.pivot_table(df, index = ['device', 'weeks'], columns='new_rank', values='adtext', aggfunc=lambda x: ' '.join(x)) # Naming the columns with 'top' + number df.columns = ['top ' + str(i) for i in list(df.columns.values)] </code></pre> <p>So this worked for me. Thank you guys!</p>
<p>I think the way you were trying to use the <code>method=first</code> to rank them after sorting were causing problems. </p> <p>You could simply use the rank method with <code>first</code> arg on the grouped object itself giving you the desired unique ranks per group.</p> <pre><code>df['new_rank'] = df.groupby(['weeks','device'])['ranking'].rank(method='first').astype(int) print (df['new_rank']) 0 2 1 3 2 1 3 4 4 3 5 1 6 2 7 4 8 2 9 3 10 1 11 4 12 2 13 3 14 1 15 4 Name: new_rank, dtype: int32 </code></pre> <p>Perform pivot operation:</p> <pre><code>df = df.pivot_table(index=['weeks', 'device'], columns=['new_rank'], values=['adtext'], aggfunc=lambda x: ' '.join(x)) </code></pre> <p>Choose the second level of the multiindex columns which pertain to the rank numbers:</p> <pre><code>df.columns = ['rank_' + str(i) for i in df.columns.get_level_values(1)] df </code></pre> <p><a href="https://i.stack.imgur.com/iMT88.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iMT88.png" alt="Image_2"></a></p> <hr> <p><strong>Data:</strong>(to replicate)</p> <pre><code>df = pd.DataFrame({'weeks': ['wk 1', 'wk 1', 'wk 1', 'wk 1', 'wk 1', 'wk 1', 'wk 1', 'wk 1', 'wk 2', 'wk 2', 'wk 2', 'wk 2', 'wk 2', 'wk 2', 'wk 2', 'wk 2'], 'device': ['mobile', 'mobile', 'mobile', 'mobile', 'desktop', 'desktop', 'desktop', 'desktop', 'mobile', 'mobile', 'mobile', 'mobile', 'desktop', 'desktop', 'desktop', 'desktop'], 'website': ['url1', 'url2', 'url3', 'url4', 'url5', 'url2', 'url3', 'url4', 'url1', 'url16', 'url3', 'url4', 'url5', 'url2', 'url3', 'url4'], 'ranking': [2.1, 2.1, 1.0, 2.9, 2.1, 1.5, 1.5, 2.9, 2.0, 2.1, 1.0, 2.9, 2.1, 2.9, 1.0, 2.9], 'adtext': ['string', 'string', 'string', 'string', 'string', 'string', 'string', 'string', 'string', 'string', 'string', 'string', 'string', 'string', 'string', 'string']}) </code></pre> <p>Note: <code>method=first</code> assigns ranks in the order they appear in the array/series.</p>
python|pandas|ranking|rank|ordinal
3
387
39,745,881
Speed-up cython code
<p>I have code that is working in python and want to use cython to speed up the calculation. The function that I've copied is in a .pyx file and gets called from my python code. V, C, train, I_k are 2-d numpy arrays and lambda_u, user, hidden are ints. I don't have any experience in using C or cython. What is an efficient way to make this code faster. Using <code>cython -a</code> for compiling shows me that the code is flawed but how can I improve it. Using <code>for i in prange (user_size, nogil=True):</code> results in <code>Constructing Python slice object not allowed without gil</code>.</p> <p>How has the code to be modified to harvest the power of cython?</p> <pre><code> @cython.boundscheck(False) @cython.wraparound(False) def u_update(V, C, train, I_k, lambda_u, user, hidden): cdef int user_size = user cdef int hidden_dim = hidden cdef np.ndarray U = np.empty((hidden_dim,user_size), float) cdef int m = C.shape[1] for i in range(user_size): C_i = np.zeros((m, m), dtype=float) for j in range(m): C_i[j,j]=C[i,j] U[:,i] = np.dot(np.linalg.inv(np.dot(V, np.dot(C_i,V.T)) + lambda_u*I_k), np.dot(V, np.dot(C_i,train[i,:].T))) return U </code></pre>
<p>You are trying to use <code>cython</code> by diving into the deep end of pool. You should start with something small, such as some of the numpy examples. Or even try to improve on <code>np.diag</code>.</p> <pre><code> i = 0 C_i = np.zeros((m, m), dtype=float) for j in range(m): C_i[j,j]=C[i,j] </code></pre> <p>v.</p> <pre><code> C_i = diag(C[i,:]) </code></pre> <p>Can you improve the speed of this simple expression? <code>diag</code> is not compiled, but it does perform an efficient indexed assignment. </p> <pre><code> res[:n-k].flat[i::n+1] = v </code></pre> <p>But the real problem for <code>cython</code> is this expression:</p> <pre><code>U[:,i] = np.dot(np.linalg.inv(np.dot(V, np.dot(C_i,V.T)) + lambda_u*I_k), np.dot(V, np.dot(C_i,train[i,:].T))) </code></pre> <p><code>np.dot</code> is compiled. <code>cython</code> won't turn that in to <code>c</code> code, nor will it consolidate all 5 <code>dots</code> into one expression. It also won't touch the <code>inv</code>. So at best <code>cython</code> will speed up the iteration wrapper, but it will still call this Python expression <code>m</code> times.</p> <p>My guess is that this expression can be cleaned up. Replacing the inner <code>dots</code> with <code>einsum</code> can probably eliminate the need for <code>C_i</code>. The <code>inv</code> might make 'vectorizing' the whole thing difficult. But I'd have to study it more. </p> <p>But if you want to stick with the <code>cython</code> route, you need to transform that <code>U</code> expression into simple iterative code, without calls to numpy functions like <code>dot</code> and <code>inv</code>.</p> <p>===================</p> <p>I believe the following are equivalent:</p> <pre><code>np.dot(C_i,V.T) C[i,:,None]*V.T </code></pre> <p>In:</p> <pre><code>np.dot(C_i,train[i,:].T) </code></pre> <p>if <code>train</code> is 2d, then <code>train[i,:]</code> is 1d, and the <code>.T</code> does nothing.</p> <pre><code>In [289]: np.dot(np.diag([1,2,3]),np.arange(3)) Out[289]: array([0, 2, 6]) In [290]: np.array([1,2,3])*np.arange(3) Out[290]: array([0, 2, 6]) </code></pre> <p>If I got that right, you don't need <code>C_i</code>.</p> <p>======================</p> <p>Furthermore, these calculations can be moved outside the loop, with expressions like (not tested)</p> <pre><code>CV1 = C[:,:,None]*V.T # a 3d array CV2 = C * train.T for i in range(user_size): U[:,i] = np.dot(np.linalg.inv(np.dot(V, CV1[i,...]) + lambda_u*I_k), np.dot(V, CV2[i,...])) </code></pre> <p>A further step is to move both <code>np.dot(V,CV...)</code> out of the loop. That may require <code>np.matmul</code> (@) or <code>np.einsum</code>. Then we will have</p> <pre><code>for i... I = np.linalg.inv(VCV1[i,...]) U[:,i] = np.dot(I+ lambda_u), VCV2[i,]) </code></pre> <p>or even</p> <pre><code>for i... I[...i] = np.linalg.inv(...) # if inv can't be vectorized U = np.einsum(..., I+lambda_u, VCV2) </code></pre> <p>This is a rough sketch, and details will need to be worked out.</p>
python|numpy|cython
3
388
44,308,300
Fill gaps in Pandas multi index with start and end timestamp
<p>From a DataFrame like the following:</p> <pre><code> value fill start end 2016-07-15 00:46:11 2016-07-19 03:35:34 1 a 2016-08-21 07:55:31 2016-08-22 18:24:49 2 b 2016-09-26 03:09:12 2016-09-26 06:06:12 3 c </code></pre> <p>I'm looking for a way to add rows filling the gaps, each new row taking the <code>fill</code> column of the existing previous adjacent row as its new <code>value</code>.</p> <p>The output of the previous example would then be:</p> <pre><code> value start end 2016-07-15 00:46:11 2016-07-19 03:35:34 1 2016-07-19 03:35:34 2016-08-21 07:55:31 a 2016-08-21 07:55:31 2016-08-22 18:24:49 2 2016-08-22 18:24:49 2016-09-26 03:09:12 b 2016-09-26 03:09:12 2016-09-26 06:06:12 3 </code></pre> <p>A vectorized method, avoiding looping over the DataFrame in pure Python, would be heavily preferred as I have to deal with massive amounts of rows.</p>
<p>use <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.stack.html" rel="nofollow noreferrer">DataFrame.stack()</a> method:</p> <pre><code>In [189]: df.stack().reset_index(level=2, drop=True).to_frame('value') Out[189]: value start end 2016-07-15 00:46:11 2016-07-19 03:35:34 1 2016-07-19 03:35:34 a 2016-08-21 07:55:31 2016-08-22 18:24:49 2 2016-08-22 18:24:49 b 2016-09-26 03:09:12 2016-09-26 06:06:12 3 2016-09-26 06:06:12 c </code></pre>
python|pandas
2
389
44,030,114
Pandas between range lookup filtering
<p>My data looks like this:</p> <pre><code>import pandas as pd pd.DataFrame({ 'x_range':['101-200','101-200','201-300','201-300'], 'y':[5,6,5,6], 'z': ['Cat', 'Dog', 'Fish', 'Snake'] }) </code></pre> <p>How might I filter on an <code>x</code> value (that fits inside <code>x_range</code>) and a <code>y</code> value to return an appropriate <code>z</code> value? For instance, if <code>x</code> = 248 and <code>y</code> = 5, I'd like to return <code>Fish</code>...</p>
<p>A simple filtering exercise.</p> <p>Add two columns for the range start and end:</p> <pre><code>df['x_range_start'] = [int(i.split('-')[0]) for i in df.x_range] df['x_range_end'] = [int(i.split('-')[1]) for i in df.x_range] </code></pre> <p>Then filter to find the value, combining all conditions into a single boolean mask (this avoids chained indexing):</p> <pre><code>x_value = 113 y_value = 5 df[(df.x_range_start &lt;= x_value) &amp; (x_value &lt;= df.x_range_end) &amp; (df.y == y_value)]['z'] </code></pre>
python|python-3.x|pandas
1
390
69,544,050
Geopandas: How to associate a Point to a Linestring using the original Linestring order
<p>Using Geopandas, Shapely</p> <pre><code>import geopandas as gpd from shapely.geometry import Point, LineString street = gpd.GeoDataFrame({'street': ['st'], 'geometry': LineString([(1, 1), (2, 2), (3, 1)])}) pp = gpd.GeoDataFrame({'geometry': [Point((1.9, 1.9)), Point((1.5, 1.5)), Point((2.5, 1.5)), Point((1.2, 1.2))]}) print(street) print(pp) </code></pre> <p>Suppose I have a Linestring that represents a (cornered) street:</p> <p>LineString([(1, 1), (2, 2), (3, 1)])</p> <p>Note that the order of points in this linestring matters because LineString([(1, 1), (3, 1), (2, 2)]) would represent a very different street.</p> <p>Now, suppose I have list of points that <strong>belong</strong> to my street:</p> <p>Point((1.9, 1.9))</p> <p>Point((1.5, 1.5))</p> <p>Point((2.5, 1.5))</p> <p>Point((1.2, 1.2))</p> <p>I want to create a new Linestring where all the Points are &quot;merged&quot; with the original street coordinates. This &quot;merge&quot; mechanism has to maintain the original street shape as follows:</p> <p>LineString([<strong>(1, 1)</strong>, (1.2, 1.2), (1.5, 1.5), (1.9, 1.9), <strong>(2, 2)</strong>, (2.5, 1.5). <strong>(3, 1)</strong>])</p> <p>Any ideas how to approach this?</p>
<p><strong>Comment:</strong></p> <blockquote> <p>I wouldn't know there's an existing function to do that. It seems as your challenge is to identify the segment of the street where you have to add a point. You can calculate the linear distance of the point to each segment. The segment with the min distance is the one you have to add it to ... btw all shapely object have the distance method already implemented</p> </blockquote>
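<p>Following that idea, a minimal sketch (continuing from the question's setup, and assuming the points really lie on or very close to the street, as in the example) is to order the original vertices and the new points by their distance along the line as returned by <code>LineString.project</code>, which preserves the street's direction:</p> <pre><code>line = street.geometry.iloc[0]      # the street LineString from the question
extra = list(pp.geometry)           # the points to merge in

# every vertex of the street plus the new points
all_pts = [Point(c) for c in line.coords] + extra

# order them by how far along the street they project
merged = LineString(sorted(all_pts, key=line.project))

print(list(merged.coords))
# [(1.0, 1.0), (1.2, 1.2), (1.5, 1.5), (1.9, 1.9), (2.0, 2.0), (2.5, 1.5), (3.0, 1.0)]
</code></pre> <p>Points that sit far off the street would still be placed according to their projection onto the line, so treat this only as a sketch of the idea in the comment above, not a robust snapping routine.</p>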
python|geopandas|shapely
0
391
69,467,417
reduce Pandas DataFrame by selecting specific rows (max/min) groupby
<p>I have a long pandas DataFrame and like to select a single row of a subset if a criterion applies (min of 'value' in my case).</p> <p>I have a dataframe that starts like this:</p> <pre><code> time name_1 name_2 idx value 0 0 A B 0 0.927323 1 0 A B 1 0.417376 2 0 A B 2 0.167633 3 0 A B 3 0.458307 4 0 A B 4 0.312337 5 0 A B 5 0.876870 6 0 A B 6 0.096035 7 0 A B 7 0.656454 8 0 A B 8 0.261049 9 0 A B 9 0.220294 10 0 A C 0 0.902397 11 0 A C 1 0.887394 12 0 A C 2 0.593686 13 0 A C 3 0.394785 14 0 A C 4 0.569566 15 0 A C 5 0.544009 16 0 A C 6 0.404803 17 0 A C 7 0.209683 18 0 A C 8 0.309946 19 0 A C 9 0.049598 </code></pre> <p>I like to select the rows with the minimum of 'value' to a given 'time','name_1' and 'idx'.</p> <p>This code does what I want:</p> <pre><code>import pandas as pd import numpy as np values = np.array([0.927323 , 0.41737634, 0.16763339, 0.45830677, 0.31233708, 0.87687015, 0.09603466, 0.65645383, 0.26104928, 0.22029422, 0.90239674, 0.88739363, 0.59368645, 0.39478497, 0.56956551, 0.54400922, 0.40480253, 0.20968343, 0.30994597, 0.04959793, 0.19251744, 0.52135761, 0.25858556, 0.21825577, 0.0371907 , 0.09493446, 0.11676115, 0.95710755, 0.20447907, 0.47587798, 0.51848566, 0.88683689, 0.33567338, 0.55024871, 0.90575771, 0.80171702, 0.09314208, 0.55236301, 0.84181111, 0.15364926, 0.98555741, 0.30371372, 0.05154821, 0.83176642, 0.32537832, 0.75952016, 0.85063717, 0.13447965, 0.2362897 , 0.51945735, 0.90693226, 0.85405705, 0.43393479, 0.91383604, 0.11018263, 0.01436286, 0.39829369, 0.66487798, 0.22727205, 0.13352898, 0.54781443, 0.60894777, 0.35963582, 0.12307987, 0.45876915, 0.02289212, 0.12621582, 0.42680046, 0.83070886, 0.40761464, 0.64063501, 0.20836704, 0.17291092, 0.75085509, 0.1570349 , 0.03859196, 0.6824537 , 0.84710239, 0.89886199, 0.2094902 , 0.58992632, 0.7078019 , 0.16779968, 0.2419259 , 0.73452264, 0.09091338, 0.10095228, 0.62192591, 0.20698809, 0.29000293, 0.20460181, 0.01493776, 0.52598607, 0.16651766, 0.89677289, 0.52880975, 0.67722748, 0.89929363, 0.30735003, 0.40878873, 0.66854908, 0.4131948 , 0.40704838, 0.59434805, 0.13346655, 0.47503708, 0.09459362, 0.48804776, 0.90442952, 0.81338104, 0.17684766, 0.19449489, 0.81657825, 0.76595993, 0.46624606, 0.27780779, 0.95146104, 0.37054388, 0.69655618, 0.39371977]) df = pd.DataFrame({'time':[j for j in range(2) for i in range(60)], 'name_1':[j for j in ['A','B','C']*2 for i in range(20)], 'name_2':[j for j in ['B','C','A']*4 for i in range(10)], 'idx':[i for j in range(12) for i in range(10)], 'value':values}) out_df = pd.DataFrame() for t in np.unique(df.time): a = df[df.time==t] for n1 in np.unique(df.name_1): b = a[a.name_1==n1] for idx in np.unique(df.idx): c = b[b.idx==idx] # find the minimum index in c of value min_idx = np.argmin(c.value) out_df=out_df.append(c.iloc[min_idx]) out_df[:10] time name_1 name_2 idx value 10 0.0 A C 0.0 0.902397 1 0.0 A B 1.0 0.417376 2 0.0 A B 2.0 0.167633 13 0.0 A C 3.0 0.394785 4 0.0 A B 4.0 0.312337 15 0.0 A C 5.0 0.544009 6 0.0 A B 6.0 0.096035 17 0.0 A C 7.0 0.209683 8 0.0 A B 8.0 0.261049 19 0.0 A C 9.0 0.049598 </code></pre> <p>But this is really slow on the 4Million rows - of cause. How to speed this up?</p> <p>I tried groupby, but unfortunately this behaves not as expected:</p> <p>If I take this DataFrame c:</p> <pre><code>print(c) time name_1 name_2 idx value 0 0 A B 0 0.927323 10 0 A C 0 0.902397 </code></pre> <p>groupby should select the second row since value is the minimum here. 
However groupby behaves different:</p> <pre><code>c.groupby(by=['time','name_1','idx']).apply(np.min) time name_1 name_2 idx value time name_1 idx 0 A 0 0 A B 0 0.902397 </code></pre> <p>The minimum value is correct, but name_2 should be C not B.</p> <p>Any suggestions?</p>
<p>You could try using <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.DataFrameGroupBy.idxmin.html" rel="nofollow noreferrer">idxmin()</a> with the following line of code:</p> <pre><code>out_df = df.loc[df.loc[:,['time','name_1','idx','value']].groupby(by=['time','name_1','idx']).idxmin()['value'], :] </code></pre>
python|pandas|dataframe|subset
0
392
69,659,219
Convert a data frame in which one column contains array of numbers as string to a json file
<p>I'd like to convert a data frame into a json file. One of the columns of the data frame contains time series as a string. Thus, the final json looks like this:</p> <p><code>[{&quot;...&quot;:&quot;...&quot;,&quot;Dauer&quot;:&quot;24h&quot;,&quot;Wertereihe&quot;:&quot;8619.0,9130.0,8302.0,8140.0&quot;}, {...}, {...}]</code></p> <p>Is it possible to save the df to a json file in such a way that &quot;Wertereihe&quot; is an array of numbers? This would give: <code>[{&quot;...&quot;:&quot;...&quot;,&quot;Dauer&quot;:&quot;24h&quot;,&quot;Wertereihe&quot;:[8619.0,9130.0,8302.0,8140.0]}, {...}, {...}]</code></p> <p>I used the following snippet to save the df to a json file: <code>df.to_json(jsonFile, orient = &quot;records&quot;)</code></p>
<p>IIUC, you need:</p> <pre><code>df['Wertereihe'] = df['Wertereihe'].apply(lambda x: list(map(float, x.split(',')))) df.to_json(jsonFile, orient = &quot;records&quot;) </code></pre>
python|json|pandas|csv|csvtojson
1
393
38,508,458
Comparing scalars to Numpy arrays
<p>What I am trying to do is make a table based on a piece-wise function in Python. For example, say I wrote this code:</p> <pre><code>import numpy as np from astropy.table import Table, Column from astropy.io import ascii x = np.array([1, 2, 3, 4, 5]) y = x * 2 data = Table([x, y], names = ['x', 'y']) ascii.write(data, "xytable.dat") xytable = ascii.read("xytable.dat") print xytable </code></pre> <p>This works as expected, it prints a table that has <code>x</code> values 1 through 5 and <code>y</code> values 2, 4, 6, 8, 10. </p> <p>But, what if I instead want <code>y</code> to be <code>x * 2</code> only if <code>x</code> is 3 or less, and <code>y</code> to be <code>x + 2</code> otherwise? </p> <p>If I add:</p> <pre><code>if x &gt; 3: y = x + 2 </code></pre> <p>it says:</p> <blockquote> <p>The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()</p> </blockquote> <p>How do I code my table so that it works as a piece-wise function? How do I compare scalars to Numpy arrays?</p>
<p>You can possibly use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html#numpy.where" rel="nofollow"><code>numpy.where()</code></a>:</p> <pre><code>In [196]: y = np.where(x &gt; 3, x + 2, y) In [197]: y Out[197]: array([2, 4, 6, 6, 7]) </code></pre> <p>The code above gets the job done in a fully vectorized manner. This approach is generally more efficient (and arguably more elegant) than using list comprehensions and type conversions.</p>
python|arrays|variables|numpy|astropy
3
394
66,034,080
make correlation plot on time series data in python
<p>I want to see a correlation on a rolling week basis in time series data. The reason because I want to see how rolling correlation moves each year. To do so, I tried to use <code>pandas.corr()</code>, <code>pandas.rolling_corr()</code> built-in function for getting rolling correlation and tried to make line plot, but I couldn't correct the correlation line chart. I don't know how should I aggregate time series for getting rolling correlation line chart. Does anyone knows any way of doing this in python? Is there any workaround to get rolling correlation line chart from time series data in pandas? any idea?</p> <p><strong>my attempt</strong>:</p> <p>I tried of using <code>pandas.corr()</code> to get correlation but it was not helpful to generate rolling correlation line chart. So, here is my new attempt but it is not working. I assume I should think about the right way of data aggregation to make rolling correlation line chart.</p> <pre><code>import pandas as pd import matplotlib.pyplot as plt import seaborn as sns url = 'https://gist.githubusercontent.com/adamFlyn/eb784c86c44fd7ed3f2504157a33dc23/raw/79b6aa4f2e0ffd1eb626dffdcb609eb2cb8dae48/corr.csv' df = pd.read_csv(url) df['date'] = pd.to_datetime(df['date']) def get_corr(df, window=4): dfs = [] for key, value in df: value[&quot;ROLL_CORR&quot;] = pd.rolling_corr(value[&quot;prod_A_price&quot;],value[&quot;prod_B_price&quot;], window) dfs.append(value) df_final = pd.concat(dfs) return df_final corr_df = get_corr(df, window=12) fig, ax = plt.subplots(figsize=(7, 4), dpi=144) sns.lineplot(x='week', y='ROLL_CORR', hue='year', data=corr_df,alpha=.8) plt.show() plt.close() </code></pre> <p>doing this way is not working to me. By doing this, I want to see how the rolling correlations move each year. Can anyone point me out possible of doing rolling correlation line chart from time-series data in python? any thoughts?</p> <p><strong>desired output</strong></p> <p>here is the <a href="https://ibb.co/1LrpLVq" rel="nofollow noreferrer">desired rolling correlation line chart</a> that I want to get. Note that desired plot was generated from MS excel. I am wondering is there any possible way of doing this in python? Is there any workaround to get a rolling correlation line chart from time-series data in python? how should I correct my current attempt to get the desired output? any thoughts?</p>
<p>Using your code and description as a starting point: pandas' <code>Rolling</code> class has an <code>apply</code> function which can be leveraged (<a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.window.rolling.Rolling.apply.html#pandas.core.window.rolling.Rolling.apply" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.window.rolling.Rolling.apply.html#pandas.core.window.rolling.Rolling.apply</a>)</p> <p>Two tricks are involved to make the code work:</p> <ol> <li>Accessing the whole row in the applied function (<a href="https://stackoverflow.com/questions/60736556/pandas-rolling-apply-using-multiple-columns">Pandas rolling apply using multiple columns</a>)</li> <li>Calling <code>rolling</code> on a single <code>pandas.Series</code> (here <code>df['week']</code>) so the applied function is not invoked once per column</li> </ol> <pre class="lang-py prettyprint-override"><code>import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

url = 'https://gist.githubusercontent.com/adamFlyn/eb784c86c44fd7ed3f2504157a33dc23/raw/79b6aa4f2e0ffd1eb626dffdcb609eb2cb8dae48/corr.csv'
df = pd.read_csv(url)

def get_corr(ser):
    rolling_df = df.loc[ser.index]
    return rolling_df['prod_A_price'].corr(rolling_df['prod_B_price'])

df['ROLL_CORR'] = df['week'].rolling(4).apply(get_corr)

number_years = 3
for week, df_week in df.groupby('week'):
    df = df.append({
        'week': week,
        'year': f'{number_years} year avg',
        'ROLL_CORR': df_week.sort_values(by='date').head(number_years)['ROLL_CORR'].mean()
    }, ignore_index=True)

fig, ax = plt.subplots(figsize=(7, 4), dpi=144)
sns.lineplot(x='week', y='ROLL_CORR', hue='year', data=df, alpha=.8)
plt.show()
plt.close()
</code></pre> <p><a href="https://i.stack.imgur.com/RsPef.png" rel="nofollow noreferrer">Here is the image generated by <code>seaborn</code></a></p> <p><a href="https://i.stack.imgur.com/qyTkI.png" rel="nofollow noreferrer">And with the 3 year average</a></p>
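<p>As an aside, pandas can also compute the rolling correlation directly on a <code>Series</code>, which should be equivalent to the <code>apply</code>-based helper above when the window is a plain row count. A short sketch (the 3-year average and the plot stay exactly the same):</p> <pre><code>import pandas as pd

url = 'https://gist.githubusercontent.com/adamFlyn/eb784c86c44fd7ed3f2504157a33dc23/raw/79b6aa4f2e0ffd1eb626dffdcb609eb2cb8dae48/corr.csv'
df = pd.read_csv(url)

# rolling pairwise correlation over 4-row windows
df['ROLL_CORR'] = df['prod_A_price'].rolling(4).corr(df['prod_B_price'])
</code></pre>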
python|pandas|matplotlib
1
395
66,133,811
Need to sort a nested tuple with numbers
<p>I am trying to sort a nested array (ROI) by its first column, as below:</p> <pre><code>input:
ROI: [[191  60  23  18]
 [143  60  23  19]
 [ 95  52  24  21]
 [237  51  24  21]
 [ 47  38  27  22]
 [281  35  25  22]
 [  4  17  26  24]
 [324  13  22  21]]

Expected Output = S_ROI: [[4 17 26 24]
 [47 38 27 22]
 [ 95  52  24  21]
 [143  60  23  19]
 [191  60  23  18]
 [237  51  24  21]
 [281  35  25  22]
 [324  13  22  21]]
</code></pre> <p>I have got an intermediate array of the first-column values:</p> <pre><code>column=[191 143  95 237  47 281   4 324]
</code></pre> <p>I have tried this, but ROI is getting updated inside the loop:</p> <pre><code>sort_index = np.argsort(column)
column.sort()

sorted_led_ROI=ROI;
index=0
for y in sort_index:
    sorted_led_ROI[index]=ROI[y]
    index =index+1
print('sorted_led_ROI:', sorted_led_ROI)
</code></pre> <p>Result:</p> <pre><code>sorted_led_ROI: [[  4  17  26  24]
 [ 47  38  27  22]
 [ 95  52  24  21]
 [ 47  38  27  22]
 [  4  17  26  24]
 [ 47  38  27  22]
 [ 47  38  27  22]
 [324  13  22  21]]
</code></pre> <p>Help me out to sort this in Python using NumPy or OpenCV.</p>
<p>Do you mean just this:</p> <pre><code>print(ROI[ROI[:,0].argsort()]) </code></pre> <p>Output:</p> <pre><code>[[ 4 17 26 24] [ 47 38 27 22] [ 95 52 24 21] [143 60 23 19] [191 60 23 18] [237 51 24 21] [281 35 25 22] [324 13 22 21]] </code></pre>
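<p>As a side note on the original loop: <code>sorted_led_ROI=ROI</code> binds a second name to the same array, so writing into <code>sorted_led_ROI</code> also overwrites the rows of <code>ROI</code> you still need to read. A minimal sketch of the loop with an explicit copy (assuming <code>ROI</code> is the NumPy array shown above):</p> <pre><code>import numpy as np

sort_index = np.argsort(ROI[:, 0])   # sort order based on the first column only
sorted_led_ROI = ROI.copy()          # copy so ROI itself is left untouched
for index, y in enumerate(sort_index):
    sorted_led_ROI[index] = ROI[y]
print('sorted_led_ROI:', sorted_led_ROI)
</code></pre> <p>The one-liner <code>ROI[ROI[:,0].argsort()]</code> does the same thing without the loop or the copy.</p>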
python|numpy|opencv
1
396
66,011,974
How to get x_train and y_train from ImageDataGenerator?
<p>I am working on an image classification problem and I made a Y Network for it. A Y Network is a type of neural network that has two inputs and one output. If we want to fit our TensorFlow model, we have to feed x_train and y_train to model.fit(), like this:</p> <pre><code>model.fit([x_train, x_train], y_train, epochs=100, batch_size=64)
</code></pre> <p>But how do I get <strong>x_train</strong> and <strong>y_train</strong> if I got my data from an <strong>ImageDataGenerator</strong>, like this?</p> <pre><code>train_generator = train_datagen.flow_from_dataframe(... , batch_size=64, ...)
</code></pre> <p>I tried getting x_train and y_train by this method:</p> <pre><code>x_train, y_train = train_generator.next()
</code></pre> <p>but the resulting x_train and y_train consist of only <strong>64</strong> images, and I want all my <strong>8644</strong> images. I cannot increase batch_size to <strong>8644</strong> because it would need more memory and Google Colab would crash. What should I do?</p>
<p>You can get the class dictionary, the labels and the file names from:</p> <pre><code>class_dict = train_generator.class_indices
labels = train_generator.labels
file_names = train_generator.filenames
</code></pre> <p>The class dictionary is useful to correlate the class index to the class name; it is of the form {class name: index}. I find it useful to reverse the order to get a dictionary of the form {index: class name} using the code below:</p> <pre><code>new_dict = {}
for key, value in class_dict.items():
    new_dict[value] = key
</code></pre> <p>So when you make predictions and get the index of the prediction using <code>index = np.argmax(p)</code>, you can get the corresponding class name from:</p> <pre><code>class_name = new_dict[index]
</code></pre>
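<p>Putting the pieces together, a rough end-to-end sketch — the names <code>model</code> (your trained two-input Y network) and <code>x_batch</code> (one batch pulled from the generator) are placeholders, not part of the original answer:</p> <pre><code>import numpy as np

class_dict = train_generator.class_indices                       # {class name: index}
new_dict = {index: name for name, index in class_dict.items()}   # {index: class name}

x_batch, y_batch = next(train_generator)      # one batch of 64 images and labels
p = model.predict([x_batch, x_batch])         # the Y network takes the same input twice
index = np.argmax(p[0])                       # most likely class of the first image
print(new_dict[index])
</code></pre>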
tensorflow|machine-learning|keras|deep-learning|conv-neural-network
1
397
65,957,329
Problem with importing @tensorflow/tfjs-node while working with the face-api.js package (Node.js)
<p>I use the @tensorflow/tfjs-node package with the face-api.js package to speed things up (as the docs suggest). This is my code:</p> <pre><code>// import nodejs bindings to native tensorflow,
// not required, but will speed up things drastically (python required)
require('@tensorflow/tfjs-node');

// implements nodejs wrappers for HTMLCanvasElement, HTMLImageElement, ImageData
const { loadImage,Canvas, Image, ImageData } = require('canvas')
const faceapi = require('face-api.js');

// patch nodejs environment, we need to provide an implementation of
// HTMLCanvasElement and HTMLImageElement
faceapi.env.monkeyPatch({ Canvas, Image, ImageData })

// patch nodejs environment, we need to provide an implementation of
// HTMLCanvasElement and HTMLImageElement
faceapi.env.monkeyPatch({ Canvas, Image, ImageData })

Promise.all([
  faceapi.nets.ssdMobilenetv1.loadFromDisk('./models'),
  faceapi.nets.faceRecognitionNet.loadFromDisk('./models'),
  faceapi.nets.faceLandmark68Net.loadFromDisk('./models')
])
.then(async () =&gt; {
  const image1= await loadImage(&quot;https://enigmatic-waters-76106.herokuapp.com/1.jpeg&quot;)
  const image2= await loadImage(&quot;https://enigmatic-waters-76106.herokuapp.com/8.jpeg&quot;)

  const result = await faceapi.detectSingleFace(image1).withFaceLandmarks()
    .withFaceDescriptor()

  const singleResult = await faceapi
    .detectSingleFace(image2)
    .withFaceLandmarks()
    .withFaceDescriptor()

  const labeledDescriptors = [
    new faceapi.LabeledFaceDescriptors(
      'saied',
      [result.descriptor]
    )
  ]

  const faceMatcher = new faceapi.FaceMatcher(labeledDescriptors)
  const bestMatch = faceMatcher.findBestMatch(singleResult.descriptor)

  console.log(labeledDescriptors[0].descriptors)
})
</code></pre> <p>When I run the code I get this error:</p> <pre><code>TypeError: forwardFunc_1 is not a function
    at G:\test\node_modules@tensorflow\tfjs-core\dist\tf-core.node.js:3166:55
    at G:\test\node_modules@tensorflow\tfjs-core\dist\tf-core.node.js:2989:22
    at Engine.scopedRun (G:\test\node_modules@tensorflow\tfjs-core\dist\tf-core.node.js:2999:23)
    at Engine.tidy (G:\test\node_modules@tensorflow\tfjs-core\dist\tf-core.node.js:2988:21)
    at kernelFunc (G:\test\node_modules@tensorflow\tfjs-core\dist\tf-core.node.js:3166:29)
    at G:\test\node_modules@tensorflow\tfjs-core\dist\tf-core.node.js:3187:27
    at Engine.scopedRun (G:\test\node_modules@tensorflow\tfjs-core\dist\tf-core.node.js:2999:23)
    at Engine.runKernelFunc (G:\test\node_modules@tensorflow\tfjs-core\dist\tf-core.node.js:3183:14)
    at mul_ (G:\test\node_modules\face-api.js\node_modules@tensorflow\tfjs-core\dist\ops\binary_ops.js:327:28)
    at Object.mul (G:\test\node_modules\face-api.js\node_modules@tensorflow\tfjs-core\dist\ops\operation.js:46:29)
(Use node --trace-warnings ... to show where the warning was created)
(node:3496) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag --unhandled-rejections=strict (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1)
(node:3496) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code
</code></pre> <p>When I delete <code>require('@tensorflow/tfjs-node');</code> the code runs perfectly, but I need to import @tensorflow/tfjs-node to make the process faster.</p> <p>node: v14.15.4</p> <p>npm: 6.14.10</p> <p>@tensorflow/tfjs-node: v3.0.0</p> <p>Python: 2.7.15 (required for @tensorflow/tfjs-node)</p> <p>face-api.js: v0.22.2</p> <p>Thanks in advance :)</p>
<p>As explained <a href="https://github.com/justadudewhohacks/face-api.js/issues/768#issuecomment-798908869" rel="nofollow noreferrer">in this github issue</a></p> <blockquote> <p>The version of face-api.js you are using is not compatible with tfjs 2.0+ or 3.0+, only obsolete 1.x. Why it worked before you added tfjs-node? because face-api.js actually includes bundled version of tfjs-core 1.x. Once you added tfjs-node, it overrode global tf namespace, but its a much newer version and not compatible.</p> </blockquote> <p>You must install obsolete tfjs-node 1.x OR follow the pointers they give to use a <a href="https://github.com/vladmandic/face-api" rel="nofollow noreferrer">newer port of face-api.js that supports TF 2.0</a>.</p>
javascript|node.js|tensorflow|face-recognition|face-api
2
398
52,667,035
Python + Pandas + dataframe : couldn't append one dataframe to another
<p>I have two big CSV files. I have converted them to Pandas dataframes. Both of them have columns of same names and in same order : event_name, category, category_id, description. I want to append one dataframe to another, and, finally want to write the resultant dataframe to a CSV. I wrote a code for that:</p> <pre><code> #appendind a new dataframe to the older dataframe data = pd.read_csv("dataset.csv") data1 = pd.read_csv("dataset_new.csv") dfs = [data, data1] pd.concat([df.squeeze() for df in dfs], ignore_index=True) dfs = pd.DataFrame(columns=['event_name','category', 'category_id', 'description']) dfs.to_csv('dataset_append.csv', encoding='utf-8', index=False) </code></pre> <p>I wanted to show you the output of <code>print(dfs)</code> but I couldn't because Stackoverflow is showing following error because the output is too long:</p> <pre><code>Body is limited to 30000 characters; you entered 32132. </code></pre> <p>Would you please tell me a code snippet which you use succesfully to append Pandas dataframe?</p> <p><strong>Edit1:</strong></p> <pre><code>print(dfs) </code></pre> <p>outout:</p> <pre><code>--------------------------------------------------------- [ Unnamed: 10 Unnamed: 100 Unnamed: 101 Unnamed: 102 Unnamed: 103 \ 0 NaN NaN NaN NaN NaN 1 NaN NaN NaN NaN NaN 2 NaN NaN NaN NaN NaN 3 NaN NaN NaN NaN NaN 4 NaN NaN NaN NaN NaN 5 NaN NaN NaN NaN NaN 6 NaN NaN NaN NaN NaN 7 NaN NaN NaN NaN NaN 8 NaN NaN NaN NaN NaN 9 NaN NaN NaN NaN NaN 10 NaN NaN NaN NaN NaN 11 NaN NaN NaN NaN NaN 12 NaN NaN NaN NaN NaN 13 NaN NaN NaN NaN NaN 14 NaN NaN NaN NaN NaN 15 NaN NaN NaN NaN NaN 16 NaN NaN NaN NaN NaN 17 NaN NaN NaN NaN NaN 18 NaN NaN NaN NaN NaN 19 NaN NaN NaN NaN NaN 20 NaN NaN NaN NaN NaN 21 NaN NaN NaN NaN NaN 22 NaN NaN NaN NaN NaN 23 NaN NaN NaN NaN NaN 24 NaN NaN NaN NaN NaN 25 NaN NaN NaN NaN NaN 26 NaN NaN NaN NaN NaN 27 NaN NaN NaN NaN NaN 28 NaN NaN NaN NaN NaN 29 NaN NaN NaN NaN NaN ... ... ... ... ... ... 1159 NaN NaN NaN NaN NaN 1160 NaN NaN NaN NaN NaN 1161 NaN NaN NaN NaN NaN 1162 NaN NaN NaN NaN NaN Unnamed: 104 Unnamed: 105 Unnamed: 106 Unnamed: 107 Unnamed: 108 \ 0 NaN NaN NaN NaN NaN 1 NaN NaN NaN NaN NaN 2 NaN NaN NaN NaN NaN 3 NaN NaN NaN NaN NaN 4 NaN NaN NaN NaN NaN 5 NaN NaN NaN NaN NaN 6 NaN NaN NaN NaN NaN 7 NaN NaN NaN NaN NaN ... ... ... ... ... ... 1161 NaN NaN NaN NaN NaN 1162 NaN NaN NaN NaN NaN ... Unnamed: 94 \ 0 ... NaN 1 ... NaN 2 ... NaN 3 ... NaN 4 ... NaN 5 ... NaN 6 ... NaN 7 ... NaN 8 ... NaN 9 ... NaN 10 ... NaN 11 ... NaN 12 ... NaN 13 ... NaN 14 ... NaN 15 ... NaN 16 ... NaN 17 ... NaN 18 ... NaN 19 ... NaN 20 ... NaN 21 ... NaN 22 ... NaN 23 ... NaN 24 ... NaN 25 ... NaN 26 ... NaN 27 ... NaN 28 ... NaN 29 ... NaN ... ... ... 1133 ... NaN 1134 ... NaN 1135 ... NaN 1136 ... NaN 1137 ... NaN 1138 ... NaN 1139 ... NaN 1140 ... NaN 1141 ... NaN 1142 ... NaN 1143 ... NaN 1144 ... NaN 1145 ... NaN 1146 ... NaN 1147 ... NaN 1148 ... NaN 1149 ... NaN 1150 ... NaN 1151 ... NaN 1152 ... NaN 1153 ... NaN 1154 ... NaN 1155 ... NaN 1156 ... NaN 1157 ... NaN 1158 ... NaN 1159 ... NaN 1160 ... NaN 1161 ... NaN 1162 ... NaN Unnamed: 95 Unnamed: 96 Unnamed: 97 Unnamed: 98 Unnamed: 99 \ 0 NaN NaN NaN NaN NaN 1 NaN NaN NaN NaN NaN 2 NaN NaN NaN NaN NaN 3 NaN NaN NaN NaN NaN 4 NaN NaN NaN NaN NaN ... ... ... ... ... ... 
1133 NaN NaN NaN NaN NaN 1134 NaN NaN NaN NaN NaN 1135 NaN NaN NaN NaN NaN 1136 NaN NaN NaN NaN NaN category category_id \ 0 Business 2 1 stage shows 33 2 Literature 15 3 Science &amp; Technology 22 4 health 11 5 Science &amp; Technology 22 6 Outdoor 19 7 stage shows 33 8 nightlife 30 9 fashion &amp; lifestyle 6 10 Government &amp; Activism 25 11 stage shows 33 12 Religion &amp; Spirituality 21 13 Outdoor 19 14 management 17 15 Science &amp; Technology 22 16 nightlife 30 17 Outdoor 19 18 FAMILy &amp; kids 5 19 fashion &amp; lifestyle 6 20 FAMILy &amp; kids 5 21 games 10 22 hobbies 32 23 hobbies 32 24 Religion &amp; Spirituality 21 25 health 11 26 fashion &amp; lifestyle 6 27 career &amp; education 31 28 health 11 29 arts 1 ... ... ... 1133 Sports &amp; Fitness 23 1134 Sports &amp; Fitness 23 1135 Sports &amp; Fitness 23 1136 Sports &amp; Fitness 23 1137 Sports &amp; Fitness 23 1138 Sports &amp; Fitness 23 1139 Sports &amp; Fitness 23 1140 Sports &amp; Fitness 23 1141 Sports &amp; Fitness 23 1142 Sports &amp; Fitness 23 1143 Sports &amp; Fitness 23 1144 Sports &amp; Fitness 23 1145 Sports &amp; Fitness 23 1146 Sports &amp; Fitness 23 1147 Sports &amp; Fitness 23 1148 Sports &amp; Fitness 23 1149 Sports &amp; Fitness 23 1150 Sports &amp; Fitness 23 1151 Sports &amp; Fitness 23 1152 Sports &amp; Fitness 23 1153 Sports &amp; Fitness 23 1154 Sports &amp; Fitness 23 1155 Sports &amp; Fitness 23 1156 Sports &amp; Fitness 23 1157 Sports &amp; Fitness 23 1158 Sports &amp; Fitness 23 1159 Sports &amp; Fitness 23 1160 Sports &amp; Fitness 23 1161 Sports &amp; Fitness 23 1162 Sports &amp; Fitness 23 description \ 0 Josh Talks in partnership with Facebook is all... 1 Unwind on the strums of Guitar &amp; immerse your... 2 Book review for grade 3 and above learners. 3 ... 3 ..About Organizer:.This is the official page f... 4 Blood Donation is organized under the banner o... 5 A day "Etched with Innovation and Learning" to... 6 Our next destination for Fun with us is "Goa" ... 7 Enjoy the Soulful and Unplugged Performance of... 8 Get ready with your dance shoes on as our favo... 9 FESTIVE HUES -- a fashion and lifestyle exhibi... 10 On Aug. 8, Dr. Ambedkar presides over the Depr... 11 It's A Rapper Boys..And M Write A New Rap song... 12 The Spiritual Makeover..A weekend workshop tha... 13 Our next destination for Fun with us is "Goa" ... 14 Project Management is all about getting the th... 15 World Conference Next Generation Testing 2018 ... 16 ..About Organizer:.Whitefield is now #Sherlocked! 17 On occasion of 72th Independence Day , Udaan O... 18 *Smilofy Special Superstar*.A Talent hunt for ... 19 ITEEHA is coming back to Bengaluru, after a fa... 20 This is an exciting course for kids to teach t... 21 ..About Organizer:.PPG Lounge is a next genera... 22 Touch Feel Try &amp; Buy the latest #car and #bike... 23 Sniper Media is organising an exclusive semina... 24 He has all sorts of powers and able solve any ... 25 registration fee 50/₹ we r providing free c... 26 World Biggest Pageant Miss &amp; Mrs World Queen a... 27 ..About Organizer:.Canam Consultants - India's... 28 Innopharm is an effort to bring innovations in... 29 The first Central India Art and Design Expo - ... ... ... 1133 As the cricket fever grips the country again, ... 1134 An evening of fun, food, drinks and rooting fo... 1135 The time has come, who will take their place S... 1136 Do you want to prove that Age is not a barrier... 1137 We Invite All The Corporate Companies To Be A ... 1138 PlayTM happy to announce you that conducting o... 
1139 A Mix of fun rules and cricketing skills. Afte... 1140 Shuttle Swap presents Singles, Doubles and Mix... 1141 Yonex Mavis 350 Shuttle will be used State/Nat... 1142 Light up the FIFA World Cup with Bud90 Match S... 1143 We are charmed to launch the SVSEVENTZ.COM 5-A... 1144 We corephysio FC invite you for our first foot... 1145 After completing the 2nd season of Bangalore S... 1146 As the cricket fever grips the country again, ... 1147 Introducing BOX Cricket Super 6 Corporate Cric... 1148 After the sucess of '1st Matt &amp; Mudd T20 Leagu... 1149 Hi All, It is my pleasure to officially announ... 1150 Sign up: Get early updates, free movie voucher... 1151 About VIVO Pro Kabaddi 2018: A new season of t... 1152 The Hero Indian Super League (ISL) is India's ... 1153 Limited time offer: Free Paytm Movie Voucher w... 1154 The 5th edition of the Indian Super League is ... 1155 Calling all Jamshedpur FC fans! Here's your ch... 1156 Empower yourself and progress towards a health... 1157 Making people happy when they feel that its en... 1158 LOVE YOGA ?- but too busy with work during the... 1159 The coolest way to tour the city ! Absorb the ... 1160 Ready to be a part of India's Biggest Walkatho... 1161 The event will comprise of the following Open ... 1162 RUN FOR CANCER CHILDREN On world Cancer Day 3r... event_name 0 Josh Talks Hyderabad 2018 1 Guitar Night With Ashmik Patil 2 Book Review - August 2018 - 2 3 Csaw'18 4 Blood donation camp 5 Rajasthan Youth Innovation and Technical Intel... 6 Goa – Fun All the Way!!! - Mom N Kids 7 The AnshUdhami Project LIVE at Tales &amp; Spirits... 8 Friday Fiesta featuring Pearl 9 FESTIVE HUES 10 Nagpur 11 Yo Yo Deep SP The Rapper 12 The Spiritual Makeover 13 Goa Fun All the Way - Women Only group Tour 14 MS Project 2016 - A one day seminar 15 World Conference Next Generation Testing 16 Weekend Booster - Happy Hour 17 Ladies Only Camping : Freedom To Travel (Seaso... 18 Special superstar 19 Malaysian Batik Workshop 20 EQ Enhancement Course (5-10 years) 21 CS:GO Tournament 2018 - PPGL 22 Auto Mall at Mantri Square Bangalore 23 A Seminar by Ojas Rajani (Bollywood celebrity ... 24 rishikesh katti greatest Spirituality guru of ... 25 free BMD camp held on 26 jan 2018 26 Miss and Mrs Bhopal Madhya Pradesh India World... 27 USA, Canada &amp; Singapore Application Days 2018 28 Innopharm 3 29 Kalasrishti Art and Design Expo ... ... 1133 Asia cup live screening at la casa Brewery+ ki... 1134 Asia Cup 2018 live screening at La Casa Brewer... 1135 FIFA FINAL AT KORAMANGALA TETTO - With #fifa#f... 1136 Womenasia Indoor Cricket Championship 1137 Switch Hit Corporate Cricket Tournament 1138 PlayTM Sports Arena Box Cricket league 1139 The Box Cricket League Edition II (16-17-18 No... 1140 Shuttle Swap Badminton Tournament - With Singl... 1141 SPARK BADMINTON LEAGUE - OCT 14th 2018 1142 Bud90 Match Screenings at Loft38 1143 5 A-Side Football Tournament 1144 5 vs 5 Football league - With Back 2 Track events 1145 Bangalore Sports Carnival Table Tennis Juniors... 1146 Asia cup live screening at la casa Brewery+ ki... 1147 Super 6 Corporate Cricket League 1148 Coolulu is organizing MATT &amp; MUD T20 Cricket L... 1149 United Sportzs Pure Corporate Cricket season-10 1150 Sign up for updates on the VIVO Pro Kabaddi Se... 1151 VIVO Pro Kabaddi - UP Yoddha vs Patna Pirates ... 1152 HERO Indian Super League 2018-19: Kerala Blast... 1153 HERO ISL: FC Goa Memberships 1154 Hero Indian Super League 2018-19: Delhi Dynamo... 1155 HERO Indian Super League 2018-19: Jamshedpur F... 
1156 Yoga Therapy Classes in Bangalore 1157 Saree Walkathon 1158 Weekend Yoga Teachers Training Program 1159 Bangalore Walks 1160 Oxfam Trailwalker Bengaluru 1161 TAD Pune 2018 (Triathlon Aquathlon Duathlon) 1162 RUN FOR CANCER CHILDREN [1163 rows x 241 columns], event_name category \ 0 Musical Camping at Dahanu Chiku farm outdoor 1 Adventure Camping at Wada outdoor 2 Kaas Plateau Tour outdoor 3 Pawna Lake Camping, kevre, Lonavala outdoor 4 Night Trek and Camping at Korigad Fort outdoor 5 PARAMOTORING outdoor 6 WATERFALL TREK &amp; BEACH CAMPING (NAGALAPURAM: N... outdoor 7 Happiest Land On Earth - Bhutan outdoor 8 4 Days serial hiking in Sahyadris - Sep 29 to ... outdoor 9 Ride To Valparai outdoor 10 Dzongri Trek - Gateway to Kanchenjunga Mountain outdoor 11 Skandagiri Night Trek With Camping outdoor 12 Kalsubai Trek | Plan The Unplanned outdoor 13 Bike N Hike Skandagiri outdoor 14 Unplanned Stories - Episode 6 - Travel Tales outdoor 15 Feast on authentic flavors from Goa! outdoor 16 The Boot Camp outdoor 17 The HandleBards: Romeo and Juliet at Ranga Sha... outdoor 18 Workshop on Metagenomic Sequencing on the Grid... Science &amp; Technology 19 Aerovision Science &amp; Technology 20 Electric Vehicle Technology Workshop Science &amp; Technology 21 BPM Strategy Summit Science &amp; Technology 22 Summit of Interior Designers &amp; Architecture Science &amp; Technology 23 SMART ASIA India Expo&amp; Summit Science &amp; Technology 24 A Smart City Life Exhibition Science &amp; Technology 25 OPEN SOURCE INDIA Science &amp; Technology 26 SolarRoofs India Bangalore Science &amp; Technology 27 International Conference on Innovative Researc... Science &amp; Technology 28 International Conference on Business Managemen... Science &amp; Technology 29 DevOn Summit Bangalore - Digital Transformations Science &amp; Technology .. ... ... 144 Asia cup live screening at la casa Brewery+ ki... Sports &amp; Fitness 145 Asia Cup 2018 live screening at La Casa Brewer... Sports &amp; Fitness 146 FIFA FINAL AT KORAMANGALA TETTO - With #fifa#f... Sports &amp; Fitness 147 Womenasia Indoor Cricket Championship Sports &amp; Fitness 148 Switch Hit Corporate Cricket Tournament Sports &amp; Fitness 149 PlayTM Sports Arena Box Cricket league Sports &amp; Fitness 150 The Box Cricket League Edition II (16-17-18 No... Sports &amp; Fitness 151 Shuttle Swap Badminton Tournament - With Singl... Sports &amp; Fitness 152 SPARK BADMINTON LEAGUE - OCT 14th 2018 Sports &amp; Fitness 153 Bud90 Match Screenings at Loft38 Sports &amp; Fitness s 170 Bangalore Walks Sports &amp; Fitness 171 Oxfam Trailwalker Bengaluru Sports &amp; Fitness 172 TAD Pune 2018 (Triathlon Aquathlon Duathlon) Sports &amp; Fitness 173 RUN FOR CANCER CHILDREN Sports &amp; Fitness category_id description \ 0 19 Dear All Camping Lovers, Come take camping exp... 1 19 Our Adventure campsite at Wada is developed wi... 2 19 Type: Eco Tour Height: 3937 FT above MSL (Appr... 3 19 Our Pawna Lake Camping site is located near Ke... 4 19 Type: Hill Fort Height: 3050 Feet above MSL (A... 23 22 Making 'Smart Cities Mission' a Reality The SM... 24 22 A Smart City Life A Smart City Life Exhibition... 25 22 Asia's No. 1 Convention on Open Source Started... 26 22 The conference will offer an excellent platfor... 27 22 Provides a leading forum for the presentation ... 28 22 Provide opportunity for the global participant... 29 22 The biggest event about Digital Transformation... .. ... ... 144 23 As the cricket fever grips the country again, ... 
145 23 An evening of fun, food, drinks and rooting fo... 146 23 The time has come, who will take their place S... 147 23 Do you want to prove that Age is not a barrier... 148 23 We Invite All The Corporate Companies To Be A ... 149 23 PlayTM happy to announce you that conducting o... 150 23 A Mix of fun rules and cricketing skills. Afte... 151 23 Shuttle Swap presents Singles, Doubles and Mix... 152 23 Yonex Mavis 350 Shuttle will be used State/Nat... 153 23 Light up the FIFA World Cup with Bud90 Match S... 154 23 We are charmed to launch the SVSEVENTZ.COM 5-A... 155 23 We corephysio FC invite you for our first foot... 156 23 After completing the 2nd season of Bangalore S... 157 23 As the cricket fever grips the country again, ... 158 23 Introducing BOX Cricket Super 6 Corporate Cric... 159 23 After the sucess of '1st Matt &amp; Mudd T20 Leagu... 160 23 Hi All, It is my pleasure to officially announ... 161 23 Sign up: Get early updates, free movie voucher... 162 23 About VIVO Pro Kabaddi 2018: A new season of t... 163 23 The Hero Indian Super League (ISL) is India's ... 164 23 Limited time offer: Free Paytm Movie Voucher w... 165 23 The 5th edition of the Indian Super League is ... 166 23 Calling all Jamshedpur FC fans! Here's your ch... 167 23 Empower yourself and progress towards a health... 168 23 Making people happy when they feel that its en... 169 23 LOVE YOGA ?- but too busy with work during the... 170 23 The coolest way to tour the city ! Absorb the ... 171 23 Ready to be a part of India's Biggest Walkatho... 172 23 The event will comprise of the following Open ... 173 23 RUN FOR CANCER CHILDREN On world Cancer Day 3r... Unnamed: 4 Unnamed: 5 0 NaN NaN 1 NaN NaN 2 NaN NaN 3 NaN NaN 4 NaN NaN 24 NaN NaN 25 NaN NaN 26 NaN NaN 27 NaN NaN 28 NaN NaN 29 NaN NaN .. ... ... 144 NaN NaN 145 NaN NaN 146 NaN NaN 147 NaN NaN 148 NaN NaN 149 NaN NaN [174 rows x 6 columns]] </code></pre>
<p>What's wrong with a simple:</p> <pre><code>pd.concat([df1, df2], ignore_index=True).to_csv('File.csv', index=False)
</code></pre> <p>This will work if they have the <strong>same columns</strong>.</p> <p>A more verbose way to extract specific columns would be:</p> <pre><code>(pd.concat([df1[['event_name', 'category', 'category_id', 'description']],
            df2[['event_name', 'category', 'category_id', 'description']]],
           ignore_index=True)
 .to_csv('File.csv', index=False))
</code></pre> <p>Separate notes: </p> <ol> <li>You are initializing a DataFrame with just column names and then writing that (empty) frame to a CSV, which is why your output file has no rows.</li> <li>Why are you using <code>.squeeze</code> to convert each frame to a 1-D dataset?</li> </ol>
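<p>Judging by all the <code>Unnamed</code> columns in your <code>print(dfs)</code> output, the CSVs also contain many empty trailing columns; restricting the read to the columns you actually need keeps the concat clean. A sketch, assuming those four columns exist in both files:</p> <pre><code>import pandas as pd

cols = ['event_name', 'category', 'category_id', 'description']
data = pd.read_csv('dataset.csv', usecols=cols)
data1 = pd.read_csv('dataset_new.csv', usecols=cols)

pd.concat([data, data1], ignore_index=True).to_csv(
    'dataset_append.csv', encoding='utf-8', index=False)
</code></pre>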
python|pandas|csv|dataframe
1
399
58,572,345
How to write a program using NumPy to generate and print 5 random numbers between 0 & 1
<p>How do I write a program using NumPy to generate and print 5 random numbers between 0 and 1?</p>
<pre><code>import numpy as np
numbers = np.random.rand(5)
print(numbers)
</code></pre> <p><a href="https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.random.rand.html" rel="nofollow noreferrer">np.random.rand</a> will produce a sample from the uniform distribution over [0, 1)</p> <p>If you want to generate numbers from a different interval, let's say [a, b), you can use:</p> <pre><code>import numpy as np

a = 2
b = 4
numbers = a + np.random.rand(5)*(b-a)
print(numbers)
</code></pre>
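<p>On newer NumPy versions (1.17+), the recommended interface is the <code>Generator</code> API; an equivalent sketch:</p> <pre><code>import numpy as np

rng = np.random.default_rng()   # Generator-based API, NumPy 1.17+
numbers = rng.random(5)         # 5 uniform samples from [0, 1)
print(numbers)

# samples from [a, b) follow the same pattern
a, b = 2, 4
print(a + rng.random(5) * (b - a))
</code></pre>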
python|numpy|random
0