Unnamed: 0 (int64, 0 to 378k) | id (int64, 49.9k to 73.8M) | title (string, 15 to 150 chars) | question (string, 37 to 64.2k chars) | answer (string, 37 to 44.1k chars) | tags (string, 5 to 106 chars) | score (int64, -10 to 5.87k) |
---|---|---|---|---|---|---|
900 | 18,091,694 | Monte Carlo Simulation with Python: building a histogram on the fly | <p>I have a conceptual question on building a histogram on the fly with Python. I am trying to figure out if there is a good algorithm or maybe an existing package.</p>
<p>I wrote a function, which runs a Monte Carlo simulation, gets called 1,000,000,000 times, and returns a 64 bit floating number at the end of each run. Below is the said function:</p>
<pre><code>def MonteCarlo(df,head,span):
    # Pick initial truck
    rnd_truck = np.random.randint(0,len(df))
    full_length = df['length'][rnd_truck]
    full_weight = df['gvw'][rnd_truck]

    # Loop using other random trucks until the bridge is full
    while True:
        rnd_truck = np.random.randint(0,len(df))
        full_length += head + df['length'][rnd_truck]
        if full_length > span:
            break
        else:
            full_weight += df['gvw'][rnd_truck]

    # Return average weight per feet on the bridge
    return(full_weight/span)
</code></pre>
<p><code>df</code> is a Pandas dataframe object with columns labeled <code>'length'</code> and <code>'gvw'</code>, which are truck lengths and weights, respectively. <code>head</code> is the distance between two consecutive trucks, and <code>span</code> is the bridge length. The function randomly places trucks on the bridge as long as the total length of the truck train is less than the bridge length. Finally, it calculates the average weight per foot of the trucks on the bridge (total weight on the bridge divided by the bridge length).</p>
<p>As a result I would like to build a tabular histogram showing the distribution of the returned values, which can be plotted later. I had some ideas in mind:</p>
<ol>
<li><p>Keep collecting the returned values in a numpy vector, then use existing histogram functions once the MonteCarlo analysis is completed. This would not be feasible, since if my calculation is correct, I would need 7.5 GB of memory for that vector only (1,000,000,000 64 bit floats ~ 7.5 GB)</p></li>
<li><p>Initialize a numpy array with a given range and number of bins. Increase the number of items in the matching bin by one at the end of each run. The problem is, I do not know the range of values I would get. Setting up a histogram with a range and an appropriate bin size is an unknown. I also have to figure out how to assign values to the correct bins, but I think it is doable.</p></li>
<li><p>Do it somehow on the fly. Modify ranges and bin sizes each time the function returns a number. This would be too tricky to write from scratch I think.</p></li>
</ol>
<p>Well, I bet there may be a better way to handle this problem. Any ideas would be welcome!</p>
<p>On a second note, I tested running the above function for 1,000,000,000 times only to get the largest value that is computed (the code snippet is below). And this takes around an hour when <code>span = 200</code>. The computation time would increase if I run it for longer spans (the while loop runs longer to fill the bridge with trucks). Is there a way to optimize this you think?</p>
<pre><code>max_w = 0
i = 1
while i < 1000000000:
    if max_w < MonteCarlo(df_basic, 15., 200.):
        max_w = MonteCarlo(df_basic, 15., 200.)
    i += 1
print max_w
</code></pre>
<p>Thanks!</p> | <p>Here is a possible solution, with fixed bin size, and bins of the form [k * size, (k + 1) * size[. The function finalizebins returns two lists: one with bin counts (a), and the other (b) with bin lower bounds (the upper bound is deduced by adding binsize).</p>
<pre><code>import math, random
def updatebins(bins, binsize, x):
    i = math.floor(x / binsize)
    if i in bins:
        bins[i] += 1
    else:
        bins[i] = 1

def finalizebins(bins, binsize):
    imin = min(bins.keys())
    imax = max(bins.keys())
    a = [0] * (imax - imin + 1)
    b = [binsize * k for k in range(imin, imax + 1)]
    for i in range(imin, imax + 1):
        if i in bins:
            a[i - imin] = bins[i]
    return a, b

# A test with a mixture of gaussian distributions
def check(n):
    bins = {}
    binsize = 5.0
    for i in range(n):
        if random.random() > 0.5:
            x = random.gauss(100, 50)
        else:
            x = random.gauss(-200, 150)
        updatebins(bins, binsize, x)
    return finalizebins(bins, binsize)

a, b = check(10000)

# This must be 10000
sum(a)

# Plot the data
from matplotlib.pyplot import *
bar(b,a)
show()
</code></pre>
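<p>To tie this back to your simulation: the same two helpers can be fed straight from the Monte Carlo loop. An untested sketch, reusing <code>MonteCarlo</code> and <code>df_basic</code> from the question, with an arbitrarily chosen <code>binsize</code> (you would pick one suited to the expected weight-per-foot range):</p>
<pre><code>bins = {}
binsize = 0.05   # assumption: bin width for the weight-per-foot values
for i in range(1000000000):
    updatebins(bins, binsize, MonteCarlo(df_basic, 15., 200.))
a, b = finalizebins(bins, binsize)
</code></pre>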
<p><img src="https://i.stack.imgur.com/mTeCA.png" alt="enter image description here"></p> | python|numpy|pandas|histogram|montecarlo | 3 |
901 | 55,296,464 | How can I feed a sparse placeholder in a TensorFlow model from Java | <p>I'm trying to calculate the best match for a given address with the kNN algorithm in TensorFlow, which works pretty well, but when I'm trying to export the model and use it in our Java environment I got stuck on how to feed the sparse placeholders from Java. </p>
<p>Here is a pretty much stripped down version of the Python part, which returns the smallest distance between the test name and the best reference name. So far this works as expected. When I export the model and import it in my Java program it always returns the same value (the distance of the placeholders' default). I assume that the Python function <code>sparse_from_word_vec(word_vec)</code> isn't in the model, which would totally make sense to me, but then how should I make this sparse tensor? My input is a single string and I need to create a fitting sparse tensor (value) to calculate the distance. I also searched for a way to generate the sparse tensor on the Java side, but without success.</p>
<pre><code>import tensorflow as tf
import pandas as pd
d = {'NAME': ['max mustermann',
'erika musterfrau',
'joseph haydn',
'johann sebastian bach',
'wolfgang amadeus mozart']}
df = pd.DataFrame(data=d)
input_name = tf.placeholder_with_default('max musterman',(), name='input_name')
output_dist = tf.placeholder(tf.float32, (), name='output_dist')
test_name = tf.sparse_placeholder(dtype=tf.string)
ref_names = tf.sparse_placeholder(dtype=tf.string)
output_dist = tf.edit_distance(test_name, ref_names, normalize=True)
def sparse_from_word_vec(word_vec):
    num_words = len(word_vec)
    indices = [[xi, 0, yi] for xi,x in enumerate(word_vec) for yi,y in enumerate(x)]
    chars = list(''.join(word_vec))
    return(tf.SparseTensorValue(indices, chars, [num_words,1,1]))
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    t_data_names=tf.constant(df['NAME'])
    reference_names = [el.decode('UTF-8') for el in (t_data_names.eval())]
    sparse_ref_names = sparse_from_word_vec(reference_names)
    sparse_test_name = sparse_from_word_vec([str(input_name.eval().decode('utf-8'))]*5)
    feeddict={test_name: sparse_test_name,
              ref_names: sparse_ref_names,
             }
    output_dist = sess.run(output_dist, feed_dict=feeddict)
    output_dist = tf.reduce_min(output_dist, 0)
    print(output_dist.eval())

    tf.saved_model.simple_save(sess,
                               "model-simple",
                               inputs={"input_name": input_name},
                               outputs={"output_dist": output_dist})
</code></pre>
<p>And here is my Java method:</p>
<pre><code>public void run(ApplicationArguments args) throws Exception {
log.info("Loading model...");
SavedModelBundle savedModelBundle = SavedModelBundle.load("/model", "serve");
byte[] test_name = "Max Mustermann".toLowerCase().getBytes("UTF-8");
List<Tensor<?>> output = savedModelBundle.session().runner()
.feed("input_name", Tensor.<String>create(test_names))
.fetch("output_dist")
.run();
System.out.printl("Nearest distance: " + output.get(0).floatValue());
}
</code></pre> | <p>I was able to get your example working. I have a couple of comments on your python code before diving in.</p>
<p>You use the variable <code>output_dist</code> for 3 different value types throughout the code. I'm not a python expert, but I think it's bad practice. You also never actually use the <code>input_name</code> placeholder, except for exporting it as an input. Last one is that <code>tf.saved_model.simple_save</code> is deprecated, and you should use the <a href="https://www.tensorflow.org/api_docs/python/tf/saved_model/Builder" rel="nofollow noreferrer"><code>tf.saved_model.Builder</code></a> instead.</p>
<p>Now for the solution.</p>
<p>Looking at the <code>libtensorflow</code> jar file using the command <code>jar tvf libtensorflow-x.x.x.jar</code> (thanks to <a href="https://stackoverflow.com/a/15720911/1097517">this</a> post), you can see that there are no useful bindings for creating a sparse tensor (maybe make a feature request?). So we have to change the input to a dense tensor, then add operations to the graph to convert it to sparse. In your original code the sparse conversion was on the python side which means that the loaded graph in java wouldn't have any ops for it.</p>
<p>Here is the new python code:</p>
<pre><code>import tensorflow as tf
import pandas as pd
def model():
    #use dense tensors then convert to sparse for edit_distance
    test_name = tf.placeholder(shape=(None, None), dtype=tf.string, name="test_name")
    ref_names = tf.placeholder(shape=(None, None), dtype=tf.string, name="ref_names")

    #Java Does not play well with the empty character so use "/" instead
    test_name_sparse = tf.contrib.layers.dense_to_sparse(test_name, "/")
    ref_names_sparse = tf.contrib.layers.dense_to_sparse(ref_names, "/")

    output_dist = tf.edit_distance(test_name_sparse, ref_names_sparse, normalize=True)

    #output the index to the closest ref name
    min_idx = tf.argmin(output_dist)
    return test_name, ref_names, min_idx
#Python code to be replicated in Java
def pad_string(s, max_len):
return s + ["/"] * (max_len - len(s))
d = {'NAME': ['joseph haydn',
'max mustermann',
'erika musterfrau',
'johann sebastian bach',
'wolfgang amadeus mozart']}
df = pd.DataFrame(data=d)
input_name = 'max musterman'
#pad dense tensor input
max_len = max([len(n) for n in df['NAME']])
test_input = [list(input_name)]*len(df['NAME'])
#no need to pad, all same length
ref_input = list(map(lambda x: pad_string(x, max_len), [list(n) for n in df['NAME']]))
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    test_name, ref_names, min_idx = model()

    #run a test to make sure the model works
    feeddict = {test_name: test_input,
                ref_names: ref_input,
               }
    out = sess.run(min_idx, feed_dict=feeddict)
    print("test output:", out)

    #save the model with the new Builder API
    signature_def_map= {
        "predict": tf.saved_model.signature_def_utils.predict_signature_def(
            inputs= {"test_name": test_name, "ref_names": ref_names},
            outputs= {"min_idx": min_idx})
    }
    builder = tf.saved_model.Builder("model")
    builder.add_meta_graph_and_variables(sess, ["serve"], signature_def_map=signature_def_map)
    builder.save()
</code></pre>
<p>And here is the java to load and run it. There is probably a lot of room for improvement here (java isn't my main language), but it gives you the idea.</p>
<pre><code>import org.tensorflow.Graph;
import org.tensorflow.Session;
import org.tensorflow.Tensor;
import org.tensorflow.TensorFlow;
import org.tensorflow.SavedModelBundle;
import java.util.ArrayList;
import java.util.List;
import java.util.Arrays;
public class Test {
    public static byte[][] makeTensor(String s, int padding) throws Exception
    {
        int len = s.length();
        int extra = padding - len;
        byte[][] ret = new byte[len + extra][];
        for (int i = 0; i < len; i++) {
            String cur = "" + s.charAt(i);
            byte[] cur_b = cur.getBytes("UTF-8");
            ret[i] = cur_b;
        }

        for (int i = 0; i < extra; i++) {
            byte[] cur = "/".getBytes("UTF-8");
            ret[len + i] = cur;
        }
        return ret;
    }

    public static byte[][][] makeTensor(List<String> l, int padding) throws Exception
    {
        byte[][][] ret = new byte[l.size()][][];
        for (int i = 0; i < l.size(); i++) {
            ret[i] = makeTensor(l.get(i), padding);
        }
        return ret;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("Loading model...");
        SavedModelBundle savedModelBundle = SavedModelBundle.load("model", "serve");

        List<String> str_test_name = Arrays.asList("Max Mustermann",
                "Max Mustermann",
                "Max Mustermann",
                "Max Mustermann",
                "Max Mustermann");
        List<String> names = Arrays.asList("joseph haydn",
                "max mustermann",
                "erika musterfrau",
                "johann sebastian bach",
                "wolfgang amadeus mozart");

        //get the max length for each array
        int pad1 = str_test_name.get(0).length();
        int pad2 = 0;
        for (String var : names) {
            if(var.length() > pad2)
                pad2 = var.length();
        }

        byte[][][] test_name = makeTensor(str_test_name, pad1);
        byte[][][] ref_names = makeTensor(names, pad2);

        //use a with block so the close method is called
        try(Tensor t_test_name = Tensor.<String>create(test_name))
        {
            try (Tensor t_ref_names = Tensor.<String>create(ref_names))
            {
                List<Tensor<?>> output = savedModelBundle.session().runner()
                        .feed("test_name", t_test_name)
                        .feed("ref_names", t_ref_names)
                        .fetch("ArgMin")
                        .run();
                System.out.println("Nearest distance: " + output.get(0).longValue());
            }
        }
    }
}
</code></pre> | java|tensorflow | 1 |
902 | 55,577,551 | Transform all rows of data frame into arrays and pass to function | <p>I want to transform all rows of a data frame to arrays and use the arrays in a function. The function should create a new column with the results of the function for every row.</p>
<pre><code>def harmonicMean(arr):
    sum = 0;
    for item in arr:
        sum = sum + float(1.0/item);
        print "inside" + str(float(1.0/item));
        print sum;
    return float(len(arr) / sum);
</code></pre>
<p>The function actually generates harmonic mean for every row in the data frame. These values should be populated in a new column in the data frame. (the data frame also contains <code>Nan</code> values)</p> | <p>You can calculate without iterating over the rows:</p>
<pre><code>df['hmean'] = df.notnull().sum(axis=1)/(1/df).sum(axis=1)
a b c d e hmean
0 4 5.0 2.0 5.0 10 4.000000
1 2 8.0 1.0 8.0 6 2.608696
2 7 NaN 1.0 1.0 8 1.763780
3 7 1.0 9.0 4.0 9 3.095823
4 8 5.0 8.0 NaN 3 5.106383
5 3 8.0 6.0 10.0 6 5.607477
6 3 7.0 3.0 9.0 9 4.846154
7 8 NaN NaN NaN 6 6.857143
8 2 4.0 1.0 5.0 2 2.040816
9 5 7.0 5.0 3.0 1 2.664975
</code></pre> | python|pandas|dataframe | 3 |
903 | 55,470,614 | Dataframe writing to Postgresql poor performance | <p>Working in PostgreSQL, I have a cartesian join producing ~4 million rows.
The join takes ~5sec and the write back to the DB takes ~1min 45sec.</p>
<p>The data will be required for use in python, specifically in a pandas dataframe, so I am experimenting with duplicating this same data in python. I should say here that all these tests are running on one machine, so nothing is going across a network.</p>
<p>Using psycopg2 and pandas, reading in the data and performing the join to get the 4 million rows (from an answer here:<a href="https://stackoverflow.com/questions/13269890/cartesian-product-in-pandas">cartesian product in pandas</a>) takes consistently under 3 secs, impressive.</p>
<p>Writing the data back to a table in the database however takes anything from 8 minutes (best method) to 36+minutes (plus some methods I rejected as I had to stop them after >1hr).</p>
<p>While I was not expecting to reproduce the "sql only" time, I would hope to be able to get closer than 8 minutes (I'd have thought 3-5 mins would not be unreasonable).</p>
<p>Slower methods include:</p>
<p>36min - sqlalchemy`s table.insert (from 'test_sqlalchemy_core' here <a href="https://docs.sqlalchemy.org/en/latest/faq/performance.html#i-m-inserting-400-000-rows-with-the-orm-and-it-s-really-slow" rel="nofollow noreferrer">https://docs.sqlalchemy.org/en/latest/faq/performance.html#i-m-inserting-400-000-rows-with-the-orm-and-it-s-really-slow</a>)</p>
<p>13min - psycopg2.extras.execute_batch (<a href="https://stackoverflow.com/a/52124686/3979391">https://stackoverflow.com/a/52124686/3979391</a>)</p>
<p>13-15min (depends on chunksize) - pandas.dataframe.to_sql (again using sqlalchemy) (<a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_sql.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_sql.html</a>)</p>
<p>Best way (~8min) is using psycopg2`s cursor.copy_from method (found here: <a href="https://github.com/blaze/odo/issues/614#issuecomment-428332541" rel="nofollow noreferrer">https://github.com/blaze/odo/issues/614#issuecomment-428332541</a>).
This involves dumping the data to a csv first (in memory via io.StringIO), which alone takes 2 mins.</p>
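<p>For reference, a minimal sketch of that <code>copy_from</code> approach (the connection <code>conn</code> and table name <code>my_table</code> below are placeholders, not my actual objects):</p>
<pre><code>import io

buf = io.StringIO()
df.to_csv(buf, index=False, header=False)   # dump the dataframe to an in-memory csv
buf.seek(0)

with conn.cursor() as cur:
    cur.copy_from(buf, 'my_table', sep=',', columns=list(df.columns))
conn.commit()
</code></pre>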
<p>So, my questions:</p>
<p>1) Anyone have any potentially faster ways of writing millions of rows from a pandas dataframe to postgresql?</p>
<p>2) The docs for the cursor.copy_from method (<a href="http://initd.org/psycopg/docs/cursor.html" rel="nofollow noreferrer">http://initd.org/psycopg/docs/cursor.html</a>) state that the source object needs to support the read() and readline() methods (hence the need for io.StringIO). Presumably, if the dataframe supported those methods, we could dispense with the write to csv. Is there some way to add these methods?</p>
<p>Thanks.
Giles</p> | <p>Answering Q 1 myself:
It seems the issue had more to do with Postgresql (or rather Databases in general). Taking into account points made in this article:<a href="https://use-the-index-luke.com/sql/dml/insert" rel="nofollow noreferrer">https://use-the-index-luke.com/sql/dml/insert</a> I found the following:</p>
<p>1) Removing all indexes from the destination table resulted in the query running in 9 seconds. Rebuilding the indexes (in postgresql) took a further 12 seconds, so still well under the other times.</p>
<p>2) With only a primary key in place, inserting rows ordered by the primary key columns reduced the time taken to about a third. This makes sense as there should be little or no shuffling of the index rows required. I also verified that this is the reason why my cartesian join in postgresql was faster in the first place (i.e. the rows happened to be ordered by the index, purely by chance); placing the same rows in a temporary table (unordered) and inserting from that actually took a lot longer.</p>
<p>3) I tried similar experiments on our mysql systems and found the same increase in insert speed when removing indexes. With mysql however it seemed that rebuilding the indexes used up any time gained.</p>
<p>I hope this helps anyone else who comes across this question from a search.</p>
<p>I still wonder if it is possible to remove the write to csv step in python (Q2 above) as I believe I could then write something in python that would be faster than pure postgresql.</p>
<p>Thanks, Giles</p> | python|pandas|postgresql|dataframe|sqlalchemy | 1 |
904 | 10,159,178 | How to create a 2D "rect" array (square block of 1's, else 0's) in numpy? | <p>What is the "correct" way of creating a 2D numpy "rect" array, like:</p>
<pre><code>0000000000000000000
0000000000000000000
0000000000111110000
0000000000111110000
0000000000111110000
0000000000000000000
</code></pre>
<p>i.e. an array which has a given value inside certain bounds, or zero otherwise?</p> | <p>Just create an array of zeros and set the area you want to one. </p>
<p>E.g.</p>
<pre><code>import numpy as np
data = np.zeros((6,18))
data[2:5, 9:14] = 1
print data
</code></pre>
<p>This yields:</p>
<pre><code>[[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 1. 1. 1. 1. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 1. 1. 1. 1. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 1. 1. 1. 1. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]]
</code></pre> | python|arrays|numpy|rect | 16 |
905 | 10,149,416 | Numpy modify array in place? | <p>I have the following code which is attempting to normalize the values of an <code>m x n</code> array (It will be used as input to a neural network, where <code>m</code> is the number of training examples and <code>n</code> is the number of features).</p>
<p>However, when I inspect the array in the interpreter after the script runs, I see that the values are not normalized; that is, they still have the original values. I guess this is because the assignment to the <code>array</code> variable inside the function is only seen within the function. </p>
<p>How can I do this normalization in place? Or do I have to return a new array from the normalize function?</p>
<pre><code>import numpy
def normalize(array, imin = -1, imax = 1):
"""I = Imin + (Imax-Imin)*(D-Dmin)/(Dmax-Dmin)"""
dmin = array.min()
dmax = array.max()
array = imin + (imax - imin)*(array - dmin)/(dmax - dmin)
print array[0]
def main():
array = numpy.loadtxt('test.csv', delimiter=',', skiprows=1)
for column in array.T:
normalize(column)
return array
if __name__ == "__main__":
a = main()
</code></pre> | <p>If you want to apply mathematical operations to a numpy array in-place, you can simply use the standard in-place operators <code>+=</code>, <code>-=</code>, <code>/=</code>, etc. So for example:</p>
<pre><code>>>> def foo(a):
...     a += 10
...
>>> a = numpy.arange(10)
>>> a
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
>>> foo(a)
>>> a
array([10, 11, 12, 13, 14, 15, 16, 17, 18, 19])
</code></pre>
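<p>If you would rather keep your original one-line formula, another option (just a sketch, not benchmarked below) is slice assignment into the existing buffer, which also modifies the array the caller passed in:</p>
<pre><code>def normalize(array, imin=-1, imax=1):
    dmin = array.min()
    dmax = array.max()
    # a temporary array is built on the right-hand side, but the result is
    # written back into the caller's array instead of rebinding the local name
    array[:] = imin + (imax - imin) * (array - dmin) / (dmax - dmin)
</code></pre>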
<p>The in-place version of these operations is a tad faster to boot, especially for larger arrays:</p>
<pre><code>>>> def normalize_inplace(array, imin=-1, imax=1):
...     dmin = array.min()
...     dmax = array.max()
...     array -= dmin
...     array *= imax - imin
...     array /= dmax - dmin
...     array += imin
...
>>> def normalize_copy(array, imin=-1, imax=1):
...     dmin = array.min()
...     dmax = array.max()
...     return imin + (imax - imin) * (array - dmin) / (dmax - dmin)
...
>>> a = numpy.arange(10000, dtype='f')
>>> %timeit normalize_inplace(a)
10000 loops, best of 3: 144 us per loop
>>> %timeit normalize_copy(a)
10000 loops, best of 3: 146 us per loop
>>> a = numpy.arange(1000000, dtype='f')
>>> %timeit normalize_inplace(a)
100 loops, best of 3: 12.8 ms per loop
>>> %timeit normalize_copy(a)
100 loops, best of 3: 16.4 ms per loop
</code></pre> | python|arrays|numpy|in-place | 30 |
906 | 56,473,742 | DataFrame detect when one column becomes bigger than another | <p>I am wondering about code that detects when values in one column BECOME bigger than values in another column. So in the example below, at row index 1 B becomes bigger than A, and at row index 3 A becomes bigger than B. I would like to get a DataFrame that highlights rows 1 and 3 and also shows which column became bigger than which.</p>
<pre><code>In [1]: df
Out[1]:
A B
0 3 2
1 5 6
2 3 7
3 8 2
</code></pre>
<p>Desired result:</p>
<pre><code>In [1]: df_result
Out[1]:
RES
0 0
1 -1
2 0
3 1
</code></pre> | <p>You could check where <code>A</code> is greater than <code>B</code> cast to <code>int8</code> with <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.view.html" rel="nofollow noreferrer"><code>view</code></a> and take the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.diff.html" rel="nofollow noreferrer"><code>diff</code></a>:</p>
<pre><code>df.A.gt(df.B).view('i1').diff().fillna(0, downcast = 'i1')
0 0
1 -1
2 0
3 1
dtype: int8
</code></pre> | python|pandas|dataframe | 5 |
907 | 56,710,907 | How to increment value of a new column when duplicate value is found in another column of a dataframe in python? | <p>I've a CSV file that looks like :</p>
<pre><code>Timestamp Status
1501 Normal
1501 Normal
1502 Delay
1503 Received
1504 Normal
1504 Delay
1505 Received
1506 Received
1507 Delay
1507 Received
</code></pre>
<p>I want to add a new "Notif" column to dataframe that appears as a counter variable and has an increment when it comes across the "Received" value in the "Status" column.
I want the output to look like :</p>
<pre><code>Timestamp Status Notif
1501 Normal N0
1501 Normal N0
1502 Delay N0
1503 Received N1
1504 Normal N1
1504 Delay N1
1505 Received N2
1506 Received N3
1507 Delay N3
1507 Received N4
</code></pre>
<p>I tried searching for a solution to this, and various sources suggest using the arcpy package, but I want to make this work without it, as PyCharm doesn't seem to support the arcpy package.</p>
<p>I also tried using numpy as a conditional operator, but that doesn't seem to work.</p> | <p>Iterating over the rows with <code>df.iterrows</code> you can achieve the following:</p>
<pre><code>df['Notif'] = None
counter = 0
for idx, row in df.iterrows():
    if df.iloc[idx, 1] == "Received":
        counter += 1
    df.iloc[idx, -1] = "N" + str(counter)
print(df)
</code></pre>
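<p>A vectorized alternative (just a sketch of the same idea, counting the cumulative number of "Received" rows instead of looping; it produces the same <code>Notif</code> column as the loop above) would be:</p>
<pre><code>df['Notif'] = 'N' + df['Status'].eq('Received').cumsum().astype(str)
</code></pre>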
<p><strong>Output</strong></p>
<pre><code>+----+------------+-----------+-------+
| | Timestamp | Status | Notif |
+----+------------+-----------+-------+
| 0 | 1501 | Normal | N0 |
| 1 | 1501 | Normal | N0 |
| 2 | 1502 | Delay | N0 |
| 3 | 1503 | Received | N1 |
| 4 | 1504 | Normal | N1 |
| 5 | 1504 | Delay | N1 |
| 6 | 1505 | Received | N2 |
| 7 | 1506 | Received | N3 |
| 8 | 1507 | Delay | N3 |
| 9 | 1507 | Received | N4 |
+----+------------+-----------+-------+
</code></pre> | python|pandas|dataframe|pycharm | 2 |
908 | 56,810,488 | Calculation of Laplacian in real pyFFTW | <p>For the forward (multidimensional) FFTW algorithm you can specify that the input <code>numpy.ndarray</code> is real, and the output should be complex. This is done when creating the byte-aligned arrays that go in the arguments of the <code>fft_object</code>:</p>
<pre><code>import numpy as np
import pyfftw
N = 256 # Input array size (preferrably 2^{a}*3^{b}*5^{c}*7^{d}*11^{e}*13^{f}, (e+f = 0,1))
dx = 0.1 # Spacing between mesh points
a = pyfftw.empty_aligned((N, N), dtype='float64')
b = pyfftw.empty_aligned((N, N//2+1), dtype='complex128')
fft_object = pyfftw.FFTW(a, b, axes=(0, 1), direction='FFTW_FORWARD')
</code></pre>
<p>The output array is not symmetric and the second axis is truncated up to the positive frequencies. For the complex FFT you can compute the laplacian with the following <code>np.ndarray</code></p>
<pre><code>kx, ky = np.meshgrid(np.fft.fftfreq(N, dx), np.fft.fftfreq(N, dx)) # Wave vector components
k2 = -4*np.pi**2*(kx*kx+ky*ky) # np.ndarray for the Laplacian operator in "frequency space"
</code></pre>
<p>How would it be done in the truncated case? I thought about using:</p>
<pre><code>kx, ky = np.meshgrid(np.fft.fftfreq(N//2+1, dx), np.fft.fftfreq(N, dx)) # The axes conven-
# tions are different
</code></pre>
<p>But, would this really work? It seems like it is neglecting the negative frequencies in the "y" direction.</p> | <p>I'm not familiar with <code>pyfftw</code>, but with the <code>numpy.fft</code> module it would work just fine (assuming you use <code>rfftfreq</code> as mentioned in the comments).</p>
<p>To recap: for a real array, <code>a</code>, the Fourier transform, <code>b</code>, has a Hermitian-like property: <code>b(-kx,-ky)</code> is the complex conjugate of <code>b(kx,ky)</code>.
The real version of the forward fft discards (most of) the redundant information by omitting the negative <code>ky</code>s. The real version of the backward fft assumes that the values at the missing frequencies can be found by complex conjugating the appropriate elements.</p>
<p>If you had used the complex fft and kept all frequencies, <code>-k2 * b</code> would still have the Hermitian-like property. So the assumption made by the real backward fft still holds and would give the correct answer.</p>
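<p>Concretely, for the truncated (real-transform) layout of shape <code>(N, N//2+1)</code> from the question, the Laplacian multiplier can be built with <code>rfftfreq</code> on the truncated axis and <code>fftfreq</code> on the full axis. A small sketch of what I mean:</p>
<pre><code>kx, ky = np.meshgrid(np.fft.rfftfreq(N, dx), np.fft.fftfreq(N, dx))  # both arrays have shape (N, N//2+1)
k2 = -4*np.pi**2*(kx*kx + ky*ky)  # multiplies b elementwise; negative frequencies along y are kept
</code></pre>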
<p>I guess with <code>pyfftw</code> it will work just fine provided that you specify a <code>float64</code> array of the correct size for the output for the <code>direction=FFT_BACKWARD</code> case.</p> | python|numpy|fft|numerical-methods|pyfftw | 0 |
909 | 66,874,360 | Converting nested JSON structures to Pandas DataFrames | <p>I've been struggling with the nested structure in json, how to convert to correct form</p>
<pre><code>{
"id": "0c576f35-d704-4fa8-8cbb-311c6be36358",
"employee_id": null,
"creator_id": "16ca2db9-206c-4e18-891d-a00a5252dbd3",
"closed_by_id": null,
"request_number": 23,
"priority": "2",
"form_id": "urlaub-weitere-abwesenheiten",
"status": "opened",
"name": "Urlaub & weitere Abwesenheiten",
"read_by_employee": false,
"custom_status": {
"id": 15793,
"name": "In Bearbeitung HR"
},
"due_date": null,
"created_at": "2021-03-29T15:18:37.572040+02:00",
"updated_at": "2021-03-29T15:22:15.590156+02:00",
"closed_at": null,
"archived_at": null,
"attachment_count": 1,
"category": {
"id": "payroll-time-management",
"name": "Payroll, Time & Attendance"
},
"public_comment_count": 0,
"form_data": [
{
"field_id": "subcategory",
"values": [
"Time & Attendance - Manage monthly/year-end consolidation and report"
]
},
{
"field_id": "separator-2",
"values": [
null
]
},
{
"field_id": "art-der-massnahme",
"values": [
"Fortbildung"
]
},
{
"field_id": "bezeichnung-der-schulung-kurses",
"values": [
"dfgzhujiko"
]
},
{
"field_id": "startdatum",
"values": [
"2021-03-26"
]
},
{
"field_id": "enddatum",
"values": [
"2021-03-27"
]
},
{
"field_id": "freistellung",
"values": [
"nein"
]
},
{
"field_id": "mit-bildungsurlaub",
"values": [
""
]
},
{
"field_id": "kommentarfeld_fortbildung",
"values": [
""
]
},
{
"field_id": "separator",
"values": [
null
]
},
{
"field_id": "instructions",
"values": [
null
]
},
{
"field_id": "entscheidung-hr-bp",
"values": [
"Zustimmen"
]
},
{
"field_id": "kommentarfeld-hr-bp",
"values": [
"wsdfghjkmhnbgvfcdxsybvnm,"
]
},
{
"field_id": "individuelle-abstimmung",
"values": [
""
]
}
],
"form_files": [
{
"id": 30129,
"filename": "empty_background.png",
"field_id": "anhang"
}
],
"visible_by_employee": false,
"organization_ids": [],
"need_edit_by_employee": false,
"attachments": []
</code></pre>
<p>}</p>
<hr />
<p>Using a simple pandas DataFrame approach,</p>
<pre><code>Request = pd.DataFrame.from_dict(pd.json_normalize(data), orient='columns')
</code></pre>
<p>it's displaying almost in its correct form:</p>
<p><a href="https://i.stack.imgur.com/xtumr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xtumr.png" alt="enter image description here" /></a></p>
<p>How can I split the dictionaries out of the <code>form_data</code> and <code>form_files</code> columns? I've done a lot of research, but I'm still having a lot of trouble solving this problem: how to split <code>form_data</code> into columns and rows, with the meta fields tied to the <code>id</code>.</p> | <p>You can do something like this.</p>
<p>pass the <code>dataframe</code> and the column to the function as arguments</p>
<pre><code>def explode_node(child_df, column_value):
    child_df = child_df.dropna(subset=[column_value])
    if isinstance(child_df[str(column_value)].iloc[0], str):
        child_df[column_value] = child_df[str(column_value)].apply(ast.literal_eval)
    expanded_child_df = (pd.concat({i: json_normalize(x) for i, x in child_df.pop(str(column_value)).items()}).reset_index(level=1,drop=True).join(child_df, how='right', lsuffix='_left', rsuffix='_right').reset_index(drop=True))
    expanded_child_df.columns = map(str.lower, expanded_child_df.columns)
    return expanded_child_df
</code></pre> | python|json|pandas|dataframe | 1 |
910 | 66,777,182 | find common rows based on specific columns in a dataframe | <p>I have a dataframe, and I would like to find the common rows based on specific columns.</p>
<pre><code>packing_id col1 col2 col3 col4
1 1.0 2.0
2 2.0 2.0
3 1.0 1.0
4 3.0 3.0
. . .
. . .
</code></pre>
<p>I would like to find the rows where the <code>col1</code> and <code>col2</code> values are the same.
I tried
<code>np.where(df.col1==df.col2)</code> but it is not the right approach. I would appreciate your advice. Thanks.</p> | <p>I think your solution is good, you only need to add 2 more parameters to <code>np.where</code>:</p>
<pre><code>df['new'] = np.where(df.col1==df.col2, 'same', 'no same')
</code></pre>
<p>If you need to filter them:</p>
<pre><code>df1 = df[df.col1==df.col2]
</code></pre> | python|pandas | 1 |
911 | 47,520,820 | Implementing momentum weight update for neural network | <p>I'm following along mnielsen's online <a href="http://neuralnetworksanddeeplearning.com/chap1.html" rel="nofollow noreferrer">book</a>. I'm trying to implement momentum weight update as defined <a href="http://cs231n.github.io/neural-networks-3/#sgd" rel="nofollow noreferrer">here</a> to his code <a href="https://github.com/mnielsen/neural-networks-and-deep-learning/blob/master/src/network.py" rel="nofollow noreferrer">here</a>. The overall idea is that for momentum weight update, you don't directly change weight vector with negative gradient. You have a parameter <code>velocity</code> which you set to zero to begin with and then you set hyperparameter <code>mu</code> to typically <code>0.9</code> .</p>
<pre><code># Momentum update
v = mu * v - learning_rate * dx # integrate velocity
x += v # integrate position
</code></pre>
<p>So I have weight w and change in weight as <code>nebla_w</code> in the following code snippet: </p>
<pre><code>def update_mini_batch(self, mini_batch, eta):
"""Update the network's weights and biases by applying
gradient descent using backpropagation to a single mini batch.
The ``mini_batch`` is a list of tuples ``(x, y)``, and ``eta``
is the learning rate."""
nabla_b = [np.zeros(b.shape) for b in self.biases]
nabla_w = [np.zeros(w.shape) for w in self.weights]
for x, y in mini_batch:
delta_nabla_b, delta_nabla_w = self.backprop(x, y)
nabla_b = [nb+dnb for nb, dnb in zip(nabla_b, delta_nabla_b)]
nabla_w = [nw+dnw for nw, dnw in zip(nabla_w, delta_nabla_w)]
self.weights = [w-(eta/len(mini_batch))*nw
for w, nw in zip(self.weights, nabla_w)]
self.biases = [b-(eta/len(mini_batch))*nb
for b, nb in zip(self.biases, nabla_b)]
</code></pre>
<p>so in the last two lines you update <code>self.weight</code> as</p>
<pre><code>self.weights = [w-(eta/len(mini_batch))*nw
for w, nw in zip(self.weights, nabla_w)]
</code></pre>
<p>for momentum weight update I'm doing the following:</p>
<pre><code>self.momentum_v = [ (momentum_mu * self.momentum_v) - ( ( float(eta) / float(len(mini_batch)) )* nw)
for nw in nebla_w ]
self.weights = [ w + v
for w, v in zip (self.weights, self.momentum_v)]
</code></pre>
<p>However, I'm getting following error:</p>
<pre><code> TypeError: can't multiply sequence by non-int of type 'float'
</code></pre>
<p>for <code>momentum_v</code> update. My <code>eta</code> hyperparameter is already float although I again wrapped it by float function. I also wrapped <code>len(mini_batch)</code> by float as well. I also tried doing <code>nw.astype(float)</code> but I will still get the error. I am not sure why. <code>nabla_w</code> is a numpy array of floats.</p> | <p>As discussed in the comments, something is not a numpy array here. The error given above</p>
<pre><code>TypeError: can't multiply sequence by non-int of type 'float'
</code></pre>
<p>is an error issued by Python for the sequence types (list, tuple, etc). The error message means that a sequence cannot be multiplied by a non-int. They <em>can</em> be multiplied by an int, but this doesn't change the values---it just repeats the sequence, i.e.,</p>
<pre><code>>>> [1, 0] * 3
[1, 0, 1, 0, 1, 0]
</code></pre>
<p>And of course in this frame, multiplying by a float makes no sense:</p>
<pre><code>>>> [1, 0] * 3.14
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: can't multiply sequence by non-int of type 'float'
</code></pre>
<p>You'll see the same error message you've got here. So one of the variables you're multiplying is indeed not a numpy array and is one of the generic sequence types. A simple cast of <code>np.array()</code> around the offending variable would fix it, or of course you can just change the definition to an array.</p> | python|numpy|machine-learning|neural-network|mnist | 0 |
912 | 47,109,980 | How can i replace image masking in for-loop with logical indexing in python? | <p>I am trying to track an object in a video by its color. Can i simplify this code:</p>
<pre><code>while True:
    ret, frame = cap.read()
    if not ret:
        break

    height, width, channel = frame.shape
    hue = cv2.cvtColor(frame, cv2.COLOR_RGB2HSV)

    for i in range(width):
        for j in range(height):
            if (hue[j, i, 0] < 110 or hue[j, i, 0] > 140):
                hue[j, i, 0] = 0
                hue[j, i, 1] = 0
                hue[j, i, 2] = 0
</code></pre> | <p>Get rid of the two nested for-loops with <a href="https://docs.scipy.org/doc/numpy-1.13.0/user/basics.indexing.html#boolean-or-mask-index-arrays" rel="nofollow noreferrer"><code>masking</code></a>, like so -</p>
<pre><code>hue[(hue[...,0] < 110) | (hue[...,0] >140)] = 0
</code></pre>
<p>This works because the mask created with <code>(hue[...,0] < 110) | (hue[...,0] >140)</code> would be of same shape as the first two dims of <code>hue</code> and that would be used for masking into <code>hue</code> as it indexes along the first two dims of it and applies for all the elements along the last axis. This does the job of three steps assignments : <code>hue[j, i, 0] = 0; hue[j, i, 1] = 0; hue[j, i, 2] = 0;</code> in one go. </p> | python|numpy|opencv|indexing | 0 |
913 | 47,383,584 | Pandas adds "\r" to csv file | <h1>This boils down to a simpler problem <a href="https://stackoverflow.com/questions/47384652/python-write-replaces-n-with-r-n-in-windows">here</a></h1>
<p>I have a pandas dataframe that looks like this: </p>
<pre><code>In [1]: df
Out[1]:
0 1
0 a A\nB\nC
1 a D\nE\nF
2 b A\nB\nC
</code></pre>
<p>When I write it to a csv file then read it back, I expect to have the same dataframe. This is not the case:</p>
<pre><code>In [2]: df.to_csv("out.csv")
In [3]: df = pd.read_csv("out.csv", index_col=0)
In [4]: df
Out[4]:
0 1
0 a A\r\nB\r\nC
1 a D\r\nE\r\nF
2 b A\r\nB\r\nC
</code></pre>
<p>A <code>\r</code> character is added before each <code>\n</code>. Writing and reading it again, the same thing happens:</p>
<pre><code>In [5]: df.to_csv("out.csv")
In [6]: df = pd.read_csv("out.csv", index_col=0)
In [7]: df
Out[7]:
0 1
0 a A\r\r\nB\r\r\nC
1 a D\r\r\nE\r\r\nF
2 b A\r\r\nB\r\r\nC
</code></pre>
<p>How can I stop pandas from adding a <code>\r</code> character?</p>
<hr>
<p>Edits:<br>
Yes I am on windows.
<hr>
<code>pd.read_csv(pd.compat.StringIO(df.to_csv(index=False)))</code> gives me the same dataframe, so the problem seems to be writing to a file
<hr>
Passing an open file object in binary mode like this:</p>
<pre><code>with open("out.csv", "wb") as file:
    df.to_csv(file)
</code></pre>
<p>results in:</p>
<pre><code>TypeError Traceback (most recent call last)
<ipython-input-20-f31d52fb2ce3> in <module>()
1 with open("out.csv", "wb") as file:
----> 2 df.to_csv(file)
3
C:\Program Files\Anaconda3\lib\site-packages\pandas\core\frame.py in to_csv(self, path_or_buf, sep, na_rep, float_format, columns, header, index, index_label, mode, encoding, compression, quoting, quotechar, line_terminator, chunksize, tupleize_cols, date_format, doublequote, escapechar, decimal, **kwds)
1342 doublequote=doublequote,
1343 escapechar=escapechar, decimal=decimal)
-> 1344 formatter.save()
1345
1346 if path_or_buf is None:
C:\Program Files\Anaconda3\lib\site-packages\pandas\formats\format.py in save(self)
1549
1550 else:
-> 1551 self._save()
1552
1553 finally:
C:\Program Files\Anaconda3\lib\site-packages\pandas\formats\format.py in _save(self)
1636 def _save(self):
1637
-> 1638 self._save_header()
1639
1640 nrows = len(self.data_index)
C:\Program Files\Anaconda3\lib\site-packages\pandas\formats\format.py in _save_header(self)
1632
1633 # write out the index label line
-> 1634 writer.writerow(encoded_labels)
1635
1636 def _save(self):
TypeError: a bytes-like object is required, not 'str'
</code></pre>
<p><hr>
Using regular write does not help</p>
<pre><code>In [1]: with open("out.csv", "w") as file:
...: df.to_csv(file)
...:
In [2]: df = pd.read_csv("out.csv")
In [3]: df
Out[3]:
Unnamed: 0 0 1
0 0 a A\r\nB\r\nC
1 1 a D\r\nE\r\nF
2 2 b A\r\nB\r\nC
</code></pre>
<p><hr>
My python version is <code>Python 3.5.2 :: Anaconda 4.2.0 (64-bit)</code>
<hr>
I have determined that the problem is with <code>pandas.read_csv</code> and not <code>pandas.to_csv</code></p>
<pre><code>In [1]: df
Out[1]:
0 1
0 a A\nB\nC
1 a D\nE\nF
2 b A\nB\nC
In [2]: df.to_csv("out.csv")
In [3]: with open("out.csv", "r") as file:
...: s = file.read()
...:
In [4]: s # Only to_csv has been used, no \r's!
Out[4]: ',0,1\n0,a,"A\nB\nC"\n1,a,"D\nE\nF"\n2,b,"A\nB\nC"\n'
In [5]: pd.read_csv("out.csv") # Now the \r's come in
Out[5]:
Unnamed: 0 0 1
0 0 a A\r\nB\r\nC
1 1 a D\r\nE\r\nF
2 2 b A\r\nB\r\nC
</code></pre> | <p>As some have already said in comments above and on the post you have put in reference <a href="https://stackoverflow.com/questions/47384652/python-write-replaces-n-with-r-n-in-windows">here</a>, this is a typical windows issue when serializing newlines. The issue has been reported on pandas-dev github <a href="https://github.com/pandas-dev/pandas/issues/17365" rel="noreferrer">#17365</a> as well.</p>
<p>Hopefully on Python3, you can specify the newline:</p>
<pre><code>with open("out.csv", mode='w', newline='\n') as f:
df.to_csv(f, sep=",", line_terminator='\n', encoding='utf-8')
</code></pre> | python|python-3.x|pandas|csv | 9 |
914 | 68,329,805 | 'numpy.float64' object has no attribute 'ttest_ind' | <p>I am trying to perform a t-test with the following. It worked initially. But, now, it is showing the following error,</p>
<p><strong>'numpy.float64' object has no attribute 'ttest_ind'</strong></p>
<pre><code>col=list(somecolumns)
for i in col:
    x = np.array(data1[data1.LoanOnCard == 0][i])
    y = np.array(data1[data1.LoanOnCard == 1][i])
    t, p_value = stats.ttest_ind(x,y, axis = 0,equal_var=False)
    if p_value < 0.05:  # Setting our significance level at 5%
        print('Rejecting Null Hypothesis. Loan holders and non-Loan holders are not same for',i,'P value is %.2f' %p_value)
    else:
        print('Fail to Reject Null Hypothesis. Loan holders and non-Loan holders are same for',i,'P value is %.2f' %p_value)
</code></pre>
<p>I am struggling to find an answer to this and can't resolve it.</p> | <p>Add <code>from scipy import stats</code> to your code.</p>
<p>If you already did it, this means you likely overwrote <code>stats</code> with another object. Then you can do <code>import scipy.stats</code> and use <code>scipy.stats.ttest_ind</code> instead of <code>stats.ttest_ind</code></p> | pandas|numpy|data-science|scipy.stats | 1 |
915 | 68,138,677 | How to count specific words in a list from a panda dataframe? | <p>I was wondering how I can count the number of unique words that I have in a list from a specific data frame.
For example, let's say I have a list = <code>['John','Bob','Hannah']</code>.
Next, I have a data frame with a column called sentences</p>
<pre><code>df =
['sentences']
0 Bob went to the shop
1 John visited Hannah
2 Hannah ate a burger
</code></pre>
<p>I want the output to be:</p>
<pre><code>John 1
Bob 1
Hannah 2
</code></pre>
<p>How can I count the unique names in any given sentence in any row in a dataset?</p> | <p>You can use <code>Series.str.contains</code> and call <code>sum</code> to get the number of occurrences of a word in the given column; just iterate over the list of substrings, do the same for each word, and store the results in a dictionary.</p>
<pre class="lang-py prettyprint-override"><code>list1 = ['John','Bob','Hannah']
output = {}
for word in list1:
    output[word] = df['sentences'].str.contains(word).sum()
</code></pre>
<p><strong>OUTPUT:</strong></p>
<pre class="lang-py prettyprint-override"><code>{'John': 1, 'Bob': 1, 'Hannah': 2}
</code></pre>
<p>You can even use it in a dictionary comprehension:</p>
<pre class="lang-py prettyprint-override"><code>>>> {word: df['sentences'].str.contains(word).sum() for word in list1}
{'John': 1, 'Bob': 1, 'Hannah': 2}
</code></pre>
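<p>If you instead want every occurrence counted, even repeats inside a single sentence (see also the <strong>PS</strong> below), <code>str.count</code> can be swapped in for <code>str.contains</code>, e.g. (sketch):</p>
<pre class="lang-py prettyprint-override"><code>>>> {word: df['sentences'].str.count(word).sum() for word in list1}
</code></pre>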
<p><strong>PS:</strong> If a word/substring is present multiple times in the same row of the given column, the <code>str.contains</code> method above will count those multiple occurrences as 1; if you want to count every occurrence in that case, you can apply the same logic to each cell value.</p> | python|pandas | 2 |
916 | 68,252,257 | Understanding Python Numpy while performing mathematical operations | <p>I have the following NumPy array, and I write the following code:</p>
<pre><code>import numpy as np
np_mat = np.array([[1, 2],[3, 4],[5, 6]])
np_mat * 2
print(np_mat.shape)
np_mat = np_mat + np.array([[10, 11],[-1,-2]])
print(np_mat)
</code></pre>
<p>Surprisingly, Python throws an error saying</p>
<pre><code>ValueError Traceback (most recent call last)
<ipython-input-23-557c7da40077> in <module>
5 np_mat * 2
6 print(np_mat.shape)
----> 7 np_mat = np_mat + np.array([[10, 11],[-1,-2]])
8 print(np_mat)
9 print(np.median(np_mat))
ValueError: operands could not be broadcast together with shapes (3,2) (2,2)
</code></pre>
<p>Again, when I make a small modification to the code, that is, the same snippet but with just
<code>np_mat = np_mat + np.array([10, 11])</code>, it runs well and shows the output</p>
<pre><code>[[11 13]
[13 15]
[15 17]]
</code></pre>
<p>I don't understand the difference: how does NumPy addition work internally? Is there any rule for it? I am pretty sure that NumPy addition/multiplication doesn't work like normal matrix operations in algebra. So what's the trick?</p>
<p>Also, for a Numpy 2D array like</p>
<pre><code>np_mat = np.array([[1, 2],
[3, 4],
[5, 6]])
</code></pre>
<p>when I write <code>np.median(np_mat)</code> it says the result is 3.5. How is it possible that the median is a number when the array is 2D? I could see the same results when I do a multiplication : <code>np_mat = np_mat * np.array([10, 11])</code> the output comes</p>
<pre><code>[[10 22]
[30 44]
[50 66]]
</code></pre>
<p>Numpy seems baffling! Please assist. Thanks in advance.</p> | <p><a href="https://numpy.org/doc/stable/user/basics.broadcasting.html#general-broadcasting-rules" rel="nofollow noreferrer">General Broadcasting Rules</a></p>
<blockquote>
<p>When operating on two arrays, NumPy compares their shapes
element-wise. It starts with the trailing (i.e. rightmost) dimensions
and works its way left. Two dimensions are compatible when</p>
<ul>
<li><p>they are equal, or</p>
</li>
<li><p>one of them is 1</p>
</li>
</ul>
</blockquote>
<p>When one of the dimensions is <code>1</code>, it will expand to the shape of the other array in the calculation.</p>
<p>In your first example the shapes are <code>(3,2)</code> and <code>(2,2)</code> so the right most dimension works, but the 3 and 2 are incompatible.</p>
<p>In the second example the shapes are <code>(3,2)</code> and <code>(1,2)</code>, so the right most dimension works and 3 is compatible with 1.</p>
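<p>A couple of quick illustrations of that rule (my examples, using the array from the question):</p>
<pre><code>>>> np_mat + np.array([10, 11])              # (3,2) + (2,): the (2,) is treated as (1,2) -> works
array([[11, 13],
       [13, 15],
       [15, 17]])
>>> np_mat + np.array([[10], [20], [30]])    # (3,2) + (3,1): also compatible
array([[11, 12],
       [23, 24],
       [35, 36]])
>>> np_mat + np.array([[10, 11], [-1, -2]])  # (3,2) + (2,2): 3 vs 2 -> ValueError
</code></pre>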
<p>For the median example, just read the docs for what is happening:</p>
<blockquote>
<p>axis{int, sequence of int, None}, optional
Axis or axes along which the medians are computed. The default is to compute the median along a
flattened version of the array.</p>
</blockquote>
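<p>For example (a sketch using the array from the question):</p>
<pre><code>>>> np.median(np_mat)           # all 6 values, flattened
3.5
>>> np.median(np_mat, axis=0)   # median down each column
array([3., 4.])
>>> np.median(np_mat, axis=1)   # median across each row
array([1.5, 3.5, 5.5])
</code></pre>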
<p>By default, the median is calculated across all of the values in the array. However, you can select an axis that you would like to compute across.</p> | python|numpy | 1 |
917 | 68,343,441 | How to I change a specific range of pixels with respect to a condition given in numpy? | <p>Libraries Imported:</p>
<pre><code> %matplotlib inline
import numpy as np
from scipy import misc
import imageio
import matplotlib.pyplot as plt
from skimage import data
</code></pre>
<p>Like I created a:</p>
<pre><code> low_value_filter= dogs2 < 80
</code></pre>
<p>and did:</p>
<pre><code> dogs2[low_value_filter]=0
</code></pre>
<p>so all the pixels in the image with pixel value less than 80 became black.
I want to add the low_value_filter just to the right half of the image (or, actually, to a specific range of pixels) instead of the full image. Below are images of a few attempts I made.</p>
<p><a href="https://i.stack.imgur.com/MUDYm.png" rel="nofollow noreferrer">This is the image after adding a circular mask</a> <a href="https://i.stack.imgur.com/afY4m.png" rel="nofollow noreferrer">Low_value_filter to whole image</a>
<a href="https://i.stack.imgur.com/KrBHF.png" rel="nofollow noreferrer">Attempt 1</a><a href="https://i.stack.imgur.com/wxOgC.png" rel="nofollow noreferrer">Attempt 2</a><a href="https://i.stack.imgur.com/IkSvI.png" rel="nofollow noreferrer">Attempt 3</a></p>
<p>Edit:</p>
<pre><code> center_col= total_cols/2
</code></pre> | <p>Hey I've found the solution to this problem.</p>
<p><em>Create an array of zeros of the same shape as the image:</em></p>
<p><code>low_value_filter = np.zeros_like(dogs)</code></p>
<p><em>Select the range of the image you want to add your filter to:</em></p>
<p><code>low_value_filter[upper:lower, left:right] = dogs[upper:lower, left:right] < 80</code>
<strong>(This will make the low_value_filter an array of 0's and 1's with respect to the range you've selected)</strong></p>
<p><em>Now, just add the filter to your image as an index range of bool type and set it equal to whatever pixel value you want to (I've chosen 0 in this case)</em></p>
<p><code>dogs[low_value_filter.astype(bool)] = 0</code></p> | python|numpy|image-processing|imagefilter|python-imageio | 0 |
918 | 59,199,502 | StyleGAN image generation doesn't work, TensorFlow doesn't see GPU | <p>After reinstalling Ubuntu 18.04, I cannot generate images anymore using a StyleGAN agent. The error message I get is <code>InvalidArgumentError: Cannot assign a device for operation Gs_1/_Run/Gs/latents_in: {{node Gs_1/_Run/Gs/latents_in}}was explicitly assigned to /device:GPU:0 but available devices are [ /job:localhost/replica:0/task:0/device:CPU:0, /job:localhost/replica:0/task:0/device:XLA_CPU:0, /job:localhost/replica:0/task:0/device:XLA_GPU:0 ]. Make sure the device specification refers to a valid device.</code></p>
<p>I have CUDA 10.1 and my driver version is 418.87. The yml file for the conda environment is available <a href="https://gist.github.com/albusdemens/3aadd9f2586095ce0963dd5e0d380bf0" rel="nofollow noreferrer">here</a>. I installed tensorflow-gpu==1.14 using pip. </p>
<p><a href="https://gist.github.com/albusdemens/c3389936915462c6a991ebdedb129900" rel="nofollow noreferrer">Here</a> yopu can find the jupyter notebook I'm using to generate the images.</p>
<p>If I check the available resources as recommended using the commands </p>
<pre><code>from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
</code></pre>
<p>I get the answer</p>
<pre><code>[name: "/device:CPU:0"
device_type: "CPU"
memory_limit: 268435456
locality {
}
incarnation: 7185754797200004029
, name: "/device:XLA_GPU:0"
device_type: "XLA_GPU"
memory_limit: 17179869184
locality {
}
incarnation: 18095173531080603805
physical_device_desc: "device: XLA_GPU device"
, name: "/device:XLA_CPU:0"
device_type: "XLA_CPU"
memory_limit: 17179869184
locality {
}
incarnation: 10470458648887235209
physical_device_desc: "device: XLA_CPU device"
]
</code></pre>
<p>Any suggestion on how to fix the issue is very welcome!</p> | <p>Might be because TensorFlow is looking for <code>GPU:0</code> to assign a device for operation when the name of your graphical unit is actually <code>XLA_GPU:0</code>.</p>
<p>What you could try to do is using soft placement when opening your session, so that TensorFlow uses any existing GPU (or any other supported devices if unavailable) when running:</p>
<pre><code># using allow_soft_placement=True
se = tf.Session(config=tf.ConfigProto(allow_soft_placement=True))
</code></pre> | python|tensorflow|machine-learning|gpu | 1 |
919 | 59,333,293 | How to append multiple columns to the first 3 columns and repeat the index values using pandas? | <p>I have a data set in which the columns are in multiples of 3 (excluding index column[0]).
I am new to python.</p>
<p>Here there are 9 columns excluding the index. So I want to append the 4th column to the 1st, the 5th column to the 2nd, the 6th to the 3rd, then again the 7th to the 1st, the 8th to the 2nd, the 9th to the 3rd, and so on for a large data set. My large data set will always have a multiple of 3 columns (excl. index col.).</p>
<p>Also I want the index values to repeat in same order. In this case 6,9,4,3 to repeat 3 times.</p>
<pre><code>import pandas as pd
import io
data =io.StringIO("""
6,5.6,4.6,8.2,2.5,9.4,7.6,9.3,4.1,1.9
9,2.3,7.8,1,4.8,6.7,8.4,45.2,8.9,1.5
4,4.8,9.1,0,7.1,5.6,3.6,63.7,7.6,4
3,9.4,10.6,7.5,1.5,4.3,14.3,36.1,6.3,0
""")
df = pd.read_csv(data,index_col=[0],header = None)
</code></pre>
<p>Expected Output:
df</p>
<pre><code>6,5.6,4.6,8.2
9,2.3,7.8,1
4,4.8,9.1,0
3,9.4,10.6,7.5
6,2.5,9.4,7.6
9,4.8,6.7,8.4
4,7.1,5.6,3.6
3,1.5,4.3,14.3
6,9.3,4.1,1.9
9,45.2,8.9,1.5
4,63.7,7.6,4
3,36.1,6.3,0
</code></pre> | <p>Idea is reshape by <code>stack</code> with sorting second level of <code>MultiIndex</code> and also for correct ordering create ordered <code>CategoricalIndex</code>:</p>
<pre><code>a = np.arange(len(df.columns))
df.index = pd.CategoricalIndex(df.index, ordered=True, categories=df.index.unique())
df.columns = [a // 3, a % 3]
df = df.stack(0).sort_index(level=1).reset_index(level=1, drop=True)
print (df)
0 1 2
0
6 5.6 4.6 8.2
9 2.3 7.8 1.0
4 4.8 9.1 0.0
3 9.4 10.6 7.5
6 2.5 9.4 7.6
9 4.8 6.7 8.4
4 7.1 5.6 3.6
3 1.5 4.3 14.3
6 9.3 4.1 1.9
9 45.2 8.9 1.5
4 63.7 7.6 4.0
3 36.1 6.3 0.0
</code></pre> | python|pandas | 3 |
920 | 59,315,132 | How to get which semester a day belongs to using pandas.Period | <p>I would like to know an easy way to get which semester a day belongs to, displaying it in the following format ('YYYY-SX'); 2018-01-01 -> (2018S1).</p>
<p>I have a date range and is pretty easy to do it for quarters:</p>
<pre><code>import pandas as pd
import datetime
start = datetime.datetime(2018, 1, 1)
end = datetime.datetime(2020, 1, 1)
all_days = pd.date_range(start, end, freq='D')
all_quarters = []
for day in all_days:
    all_quarters.append(str(pd.Period(day, freq='Q')))
</code></pre>
<p>However given the docs there is no frequency for semesters:</p>
<p><a href="https://pandas.pydata.org/pandas-docs/version/0.23.4/generated/pandas.Period.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/version/0.23.4/generated/pandas.Period.html</a></p>
<p><a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases</a></p>
<p>I don't want to necessarily use any specific modules.</p>
<p>Any ideas on how to do it in a clean way?</p> | <p>You can do something like this.</p>
<pre><code>df['sem']= df.date.dt.year.astype(str) + 'S'+ np.where(df.date.dt.quarter.gt(2),2,1).astype(str)
</code></pre>
<p>Note: the column <code>date</code> needs to be a <code>datetime</code> object</p>
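<p>An equivalent way to get the semester digit, if you prefer integer arithmetic over <code>np.where</code> (just a sketch of an alternative):</p>
<pre><code>df['sem'] = df.date.dt.year.astype(str) + 'S' + ((df.date.dt.quarter + 1) // 2).astype(str)
</code></pre>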
<p><strong>Input</strong></p>
<pre><code>date
0 2019-09-30
1 2019-10-31
2 2019-11-30
3 2019-12-31
4 2020-01-31
5 2020-02-29
6 2020-03-31
7 2020-04-30
8 2020-05-31
9 2020-06-30
</code></pre>
<p><strong>Output</strong></p>
<pre><code> date sem
0 2019-09-30 2019S2
1 2019-10-31 2019S2
2 2019-11-30 2019S2
3 2019-12-31 2019S2
4 2020-01-31 2020S1
5 2020-02-29 2020S1
6 2020-03-31 2020S1
7 2020-04-30 2020S1
8 2020-05-31 2020S1
9 2020-06-30 2020S1
</code></pre> | python|pandas|datetime | 2 |
921 | 59,404,051 | Each element in list at least equal to the previous one | <p>Given a list of monthly sales until November, I want to know whether the sales trend is upward.</p>
<p>Definition of upward: each monthly sales is greater or at least equal to previous month's.</p>
<p><a href="https://i.stack.imgur.com/gkKWQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gkKWQ.png" alt="enter image description here"></a></p>
<p>This question is similar to <a href="https://stackoverflow.com/questions/10048571/python-finding-a-trend-in-a-set-of-numbers">Python: Finding a trend in a set of numbers</a>, but much simpler - only looking the list of numbers.</p>
<p>What I have now is to check them one by one (in a list). If one month satisfies the condition, it adds an "ok" to a new list. When the total number of "ok" equals 10, the original list of numbers is upward.</p>
<pre><code>import pandas as pd
df_a = pd.DataFrame([['Jan','Feb','Mar','Apr','May','Jun','Jul','Aug','Sep','Oct','Nov'],
[278,342,476,500,559,594,687,739,917,940,982]]).T
df_a.columns = ["Month", "Sales"]
sales_a = df_a['Sales'].tolist()
ok_a = []
for num, a in enumerate(sales_a):
    if sales_a[num] >= sales_a[num-1]:
        ok_a.append("ok")

if ok_a.count("ok") == 10:
    print ("df_a is uptrend.")
</code></pre>
<p>What's the smarter way to do it? Thank you.</p> | <p>Pandas Series has attribute <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.is_monotonic.html" rel="nofollow noreferrer"><code>is_monotonic</code></a> to check on monotonic increasing. Don't need to sort or do anything fancy.</p>
<pre><code>print(df_a.Sales.is_monotonic)
Out[94]: True
if df_a.Sales.is_monotonic:
    print('df_a is uptrend')
Output:
df_a is uptrend
</code></pre> | python|pandas|list|dataframe | 4 |
922 | 45,187,318 | Reading data from xlsx into Pandas dataframe | <p><strong>Scenario:</strong> I put together this little Frankenstein of a code (with some awesome help from SO users) to get data from excel files and put into a pandas dataframe.</p>
<p><strong>What I am trying to do:</strong> I am trying to get data from files that may contain one or more worksheets of data. After that I intend to organize the dataframe accordingly. For example:</p>
<pre><code>date1 identifier 1 bid ask
date1 identifier 2 bid ask
date1 identifier 3 bid ask
date2 identifier 1 bid ask
date2 identifier 3 bid ask
date3 identifier 4 bid ask
date3 identifier 5 bid ask
</code></pre>
<p><strong>Obs1:</strong> Each file can have values for "Bid", "Ask" or both, each in a separate worksheet.</p>
<p><strong>Obs2:</strong> The identifiers and dates may or may not be the same across files.</p>
<p><strong>What I did so far:</strong> My current code reads the files, and each worksheet. If it follows the condition, it attaches to a specific dataframe. Then it fixes the column headings.</p>
<p><strong>Issue:</strong> When my code runs, it yields two empty dataframes for some reason.</p>
<p><strong>Question:</strong> How can I account for different worksheets and output the values accordingly (to the structure above) to a dataframe?</p>
<p><strong>Current Code:</strong></p>
<pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import glob, os
import datetime as dt
from datetime import datetime
import matplotlib as mpl
from openpyxl import load_workbook
directory = os.path.join("C:\\","Users\\DGMS\\Desktop\\final 2")
list_of_dfs = []
dfbid = pd.DataFrame()
dfask = pd.DataFrame()
for root,dirs,files in os.walk(directory):
for file in files:
f = os.path.join(root, file)
wb = load_workbook(f)
for sheet in wb.worksheets:
if sheet == "Bid":
dfbid = pd.concat([dfbid, pd.read_excel(f, "Bid")])
for i in range(1,len(dfbid.columns)):
dfbid.columns.values[i] = pd.to_datetime(dfbid.columns.values[i])
elif sheet == "Ask":
dfask = pd.concat([dfask, pd.read_excel(f, "Ask")])
for i in range(1,len(dfask.columns)):
dfask.columns.values[i] = pd.to_datetime(dfask.columns.values[i])
</code></pre> | <p>Separate the different things your code does in different functions.</p>
<ul>
<li>look for the excel-files</li>
<li>read the excel-files</li>
<li>convert the content to <code>datetime</code></li>
<li>concatenate the DataFrames</li>
</ul>
<p>This way you can check and inspect each step separately instead of having it all intertwined.</p>
<h1>Look for the excel-files</h1>
<pre><code>import pandas as pd
from pathlib import Path
root_dir = Path(r"C:\Users\DGMS\Desktop\final 2")
files = root_dir.glob('**/*.xlsx')
</code></pre>
<h1>Read the excel-files</h1>
<p>Read each file and return the worksheets <code>'Bid'</code> and <code>'Ask'</code>, then generate 2 lists of Dataframes</p>
<pre><code>def parse_workbook(file):
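    # sheetname=None loads every worksheet into a dict of {sheet name: DataFrame}
    # (newer pandas versions spell this parameter sheet_name)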
d = pd.read_excel(file, sheetname=None)
return d.get('Bid', None), d.get('Ask', None)
df_bid_dfs, df_ask_dfs = zip(*(parse_workbook(file) for file in files))
</code></pre>
<h1>Convert the content to <code>datetime</code></h1>
<pre><code>def parse_datetime(df):
for column_name, column in df.iteritems():
df[column_name] = pd.to_datetime(column)
return df
</code></pre>
<h1>Concatenate the DataFrames</h1>
<pre><code>df_bid = pd.concat(parse_datetime(df) for df in df_bid_dfs if df is not None)
df_ask = pd.concat(parse_datetime(df) for df in df_ask_dfs if df is not None)
</code></pre>
<h1>testing <code>parse_datetime</code> and the concatenation</h1>
<pre><code>df1 = pd.DataFrame(['20170718'])
df2 = pd.DataFrame(['20170719'])
df_bid_dfs = (df1, df2)
</code></pre>
<blockquote>
<pre><code>pd.concat(parse_datetime(df) for df in df_bid_dfs)
</code></pre>
</blockquote>
<pre><code> 0
0 2017-07-18
0 2017-07-19
</code></pre> | python|excel|pandas|dataframe | 2 |
923 | 44,961,181 | DRQN - Prefix tensor must be either a scalar or vector, but saw tensor | <p>In following <a href="https://github.com/awjuliani/DeepRL-Agents/blob/master/Deep-Recurrent-Q-Network.ipynb" rel="nofollow noreferrer">this tutorial</a>, I am receiving the following error: </p>
<p><code>ValueError: prefix tensor must be either a scalar or vector, but saw tensor: Tensor("Placeholder_2:0", dtype=int32)</code></p>
<p>The error originates from these lines:</p>
<pre><code># Take the output from the final convolutional layer and send it to a recurrent layer
# The input must be reshaped into [batch x trace x units] for rnn processing, and then returned to
# [batch x units] when sent through the upper levels
self.batch_size = tf.placeholder(dtype=tf.int32)
self.convFlat = tf.reshape(slim.flatten(self.conv4), [self.batch_size, self.trainLength, h_size])
# !!!!This is the line where error city happens!!!!
self.state_in = rnn_cell.zero_state(self.batch_size, tf.float32)
</code></pre>
<p>After the network is initialized:</p>
<pre><code>mainQN = Qnetwork(h_size, cell, 'main')
</code></pre>
<p>This error is still present when running the code on its own in a Python console, so the error is consistent.</p>
<p>I will post more of the code if that would be helpful.</p> | <p>There is another solution to this problem.</p>
<p>Change</p>
<pre><code>self.batch_size = tf.placeholder(dtype=tf.int32)
</code></pre>
<p>TO</p>
<pre><code>self.batch_size = tf.placeholder(dtype=tf.int32, shape=[])
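# an explicit scalar shape lets rnn_cell.zero_state verify that batch_size is a
# scalar, which is what the "prefix tensor must be either a scalar or vector"
# error is complaining about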
</code></pre> | python|tensorflow | 3 |
924 | 45,071,327 | Why doesn't calling replace on a pandas dataFrame not act on the original object? | <p>If you look at the following simple example:</p>
<pre><code>import pandas as pd
l1 = [1,2,'?']
df = pd.DataFrame(l1)
df.replace('?',3)
</code></pre>
<p>Why does this not replace the '?' in the dataframe df?
Wouldn't the object that is referred to by df be affected when replace is called on it? </p>
<p>If I write:</p>
<pre><code>df = df.replace('?',3)
</code></pre>
<p>Then df.replace returns a new dataFrame that has replaced the value of ? with 3. </p>
<p>I'm just confused as to why a function that acts on an object can't change the object itself. </p> | <p>You need <code>inplace=True</code>:</p>
<pre><code>df.replace('?',3, inplace=True)
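# without inplace=True, replace() returns a modified copy and leaves df unchanged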
print (df)
0
0 1
1 2
2 3
</code></pre> | python|pandas|object | 3 |
925 | 44,948,007 | TensorFlow fails to use gpu | <p>I'm getting started with TensorFlow, but I cannot make it use GPU instead of CPU with TensorFlow 1.2.1.</p>
<p>I've got a laptop equipped with a NVIDIA GTX 850M which is CUDA 5.0 compatibility.</p>
<p>The CUDA Toolkit is installed with the latest version available.</p>
<p>cuDNN is installed with the latest version available.</p>
<p>I've set up the environment variables just as is shown here : <a href="https://nitishmutha.github.io/tensorflow/2017/01/22/TensorFlow-with-gpu-for-windows.html" rel="nofollow noreferrer">https://nitishmutha.github.io/tensorflow/2017/01/22/TensorFlow-with-gpu-for-windows.html</a></p>
<p>If I install the latest version of TensorFlow via pip: "pip install tensorflow-gpu" in the cmd prompt, then TensorFlow does not recognize my GPU and acts like I've got none: 'Device mapping: no known device'.</p>
<p>If instead I install tensorflow via 'pip install --upgrade <a href="https://storage.googleapis.com/tensorflow/windows/gpu/tensorflow_gpu-0.12.1-cp35-cp35m-win_amd64.whl" rel="nofollow noreferrer">https://storage.googleapis.com/tensorflow/windows/gpu/tensorflow_gpu-0.12.1-cp35-cp35m-win_amd64.whl</a>' then everything works fine.</p>
<p>Does anyone have an idea why the latest version of TF does that?</p> | <p>In recent versions of TensorFlow, you can check GPU availability as follows:</p>
<pre><code>gpu_available = tf.test.is_gpu_available()
is_cuda_gpu_available = tf.test.is_gpu_available(cuda_only=True)
is_cuda_gpu_min_3 = tf.test.is_gpu_available(True, (3,0))
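
# TF 2.x equivalent (returns a list of visible GPU devices; empty list if none):
gpus = tf.config.list_physical_devices('GPU')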
</code></pre>
<p><code>tf.test.is_gpu_available</code> will be removed in a future version. Instructions for updating: Use <code>tf.config.list_physical_devices('GPU')</code> instead</p> | python|tensorflow|gpu | 0 |
926 | 57,130,097 | A fast method to add a label column to large pd dataframe based on a range of another column | <p>I'm fairly new to python and am working with large dataframes with upwards of 40 million rows. I would like to be able to add another 'label' column based on the value of another column.</p>
<p>if I have a pandas dataframe (much smaller here for detailing the problem)</p>
<pre class="lang-py prettyprint-override"><code>
import pandas as pd
import numpy as np
#using random to randomly get vals (as my data is not sorted)
my_df = pd.DataFrame(np.random.randint(0,100,1000),columns = ['col1'])
</code></pre>
<p>I then have another dictionary containing ranges associated with a specific label, similar to something like:</p>
<pre class="lang-py prettyprint-override"><code>my_label_dict ={}
my_label_dict['label1'] = np.array([[0,10],[30,40],[50,55]])
my_label_dict['label2'] = np.array([[11,15],[45,50]])
</code></pre>
<p>Any data in my_df should be 'label1' if it is between 0 and 10, 30 and 40, or 50 and 55,
and any data should be 'label2' if it is between 11 and 15 or 45 and 50.</p>
<p>I have only managed to isolate data based on the labels and retrieve an index through something like:</p>
<pre class="lang-py prettyprint-override"><code>idx_save = np.full(len(my_label_dict['col1']),False,dtype = bool).reshape(-1,1)
for rng in my_label_dict['label1']:
idx_temp = np.logical_and( my_label_dict['col1']> rng[0], my_label_dict['col1'] < rng[1]
idx_save = idx_save | idx_temp
</code></pre>
<p>and then use this index to access the label1 values from my_df, and then repeat for label2.</p>
<p>Ideally I would like to add another column to my_df named 'labels', which would hold 'label1' for all datapoints that satisfy the given ranges, etc. Or just a quick method to retrieve all values from the dataframe that satisfy the ranges in the labels.</p>
<p>I'm new to generator functions, and haven't completely gotten my head around them, but maybe they could be used here?</p>
<p>Thanks for any help!!</p> | <p>You can do the task in a "more pandasonic" way.</p>
<p>Start by creating a <em>Series</em>, named <em>labels</em>, initially filled with empty strings:</p>
<pre><code>labels = pd.Series([''] * 100).rename('label')
</code></pre>
<p>The length is 100, just as the upper limit of your values.</p>
<p>Then fill it with proper labels:</p>
<pre><code>for key, val in my_label_dict.items():
for v in val:
labels[v[0]:v[1]+1] = key
</code></pre>
<p>And the only thing to do is to merge your DataFrame with <em>labels</em>:</p>
<pre><code>my_df = my_df.merge(labels, how='left', left_on='col1', right_index=True)
</code></pre>
<p>I also noticed such a contradiction in <em>my_label_dict</em>:</p>
<ul>
<li>you have <em>label1</em> for range between <em>50</em> and <em>55</em> (I assume inclusive),</li>
<li>you have also <em>label2</em> for range between <em>45</em> and <em>50</em>,</li>
</ul>
<p>so for value of <em>50</em> you have <strong>two</strong> definitions.</p>
<p>My program acts on the "last decision takes precedence" principle, so the label
for <em>50</em> is <em>label2</em>. Maybe you should change one of these range borders?</p>
<h1>Edit</h1>
<p>A modified solution if the upper limit of <em>col1</em> is "unpredictable":</p>
<p>Define <em>labels</em> the following way:</p>
<pre><code>import itertools

rngMax = max(np.array(list(itertools.chain.from_iterable(
my_label_dict.values())))[:,1])
labels = pd.Series([np.nan] * (rngMax + 1)).rename('label')
for key, val in my_label_dict.items():
for v in val:
labels[v[0]:v[1]+1] = key
labels.dropna(inplace=True)
</code></pre>
<p>Add <code>.fillna('')</code> to <code>my_df.merge(...)</code>.</p> | python|pandas|dataframe|indexing | 0 |
927 | 57,031,897 | Assistance with comparing multiple columns in 2 different dataframes | <p>I have 2 dataframes:</p>
<p><code>df1</code> has between 100 and 300 rows depending on the day<br>
<code>df2</code> contains between 10,000 and 40,000 rows depending on the day</p>
<p>The columns are as such:</p>
<p>df1:</p>
<pre><code>ID STATE STATUS
1 NY ACCEPTED
1 PA ACCEPTED
1 CA ACCEPTED
2 NY ACCEPTED
3 NY ACCEPTED
</code></pre>
<p>df2:</p>
<pre><code>ID COUNTRY STATUS
1 US ACCEPTED
2 US
3 US ACCEPTED
4 US
5 US ACCEPTED
</code></pre>
<p>I need to be able to take each entry from df1 and determine which entries from df1 have an accepted STATUS in df2. All the entries in df1 have been accepted, so the only check I need is whether they also were accepted in df2.</p>
<p>What I am not figuring out is:</p>
<p>How do I locate the same ID, then check the STATUS of that row and return true or false for each?</p>
<p>As a bonus, after that I still need to extract all the IDs from df2 that are not accepted so that I can use them, so I cannot destroy df2.</p> | <p>You could merge both dataframes and check the status with <code>pd.merge</code>:</p>
<pre><code>pd.merge(left=df_a, right=df_b, on='id',suffixes=('_df_a','_df_b'))
</code></pre>
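<p>Applied to the column names in the question, a sketch of the same idea (assuming the frames are named <code>df1</code> and <code>df2</code>) would be:</p>
<pre><code>merged = df1.merge(df2, on='ID', suffixes=('_1', '_2'))

# per df1 row: was the same ID also accepted in df2?
accepted_in_both = merged['STATUS_2'].eq('ACCEPTED')

# IDs from df2 that are not accepted, leaving df2 untouched
not_accepted_ids = df2.loc[df2['STATUS'].ne('ACCEPTED'), 'ID']
</code></pre>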
<p><a href="https://i.stack.imgur.com/HSvuB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HSvuB.png" alt="enter image description here"></a></p> | python|pandas | 0 |
928 | 57,192,304 | Numpy Python: Exception: Data must be 1-dimensional | <p>Getting exception <code>Exception: Data must be 1-dimensional</code></p>
<p>using NumPy in Python 3.7</p>
<p>The same code works for others but not in my case. Below is my code; please help.</p>
<blockquote>
<p><a href="https://i.stack.imgur.com/6XBq8.jpg" rel="nofollow noreferrer">Working_code_in_diff_system</a></p>
<p><a href="https://i.stack.imgur.com/Mw3r6.jpg" rel="nofollow noreferrer">Same_code_not_working_in_my_system</a></p>
</blockquote>
<pre><code>import pandas as pd
import numpy as np
from sklearn import linear_model
from sklearn.model_selection import train_test_split
import seaborn as sns
from sklearn import metrics
import matplotlib.pyplot as plt
%matplotlib inline
df = pd.read_csv('./Data/new-data.csv', index_col=False)
x_train, x_test, y_train, y_test = train_test_split(df['Hours'], df['Marks'], test_size=0.2, random_state=42)
sns.jointplot(x=df['Hours'], y=df['Marks'], data=df, kind='reg')
x_train = np.reshape(x_train, (-1,1))
x_test = np.reshape(x_test, (-1,1))
y_train = np.reshape(y_train, (-1,1))
y_test = np.reshape(y_test, (-1,1))
#
print('Train - Predictors shape', x_train.shape)
print('Test - Predictors shape', x_test.shape)
print('Train - Target shape', y_train.shape)
print('Test - Target shape', y_test.shape)
</code></pre>
<p>Expected output should be</p>
<blockquote>
<p>Train - Predictors shape (80, 1)</p>
<p>Test - Predictors shape (20, 1)</p>
<p>Train - Target shape (80, 1)</p>
<p>Test - Target shape (20, 1)</p>
</blockquote>
<p>As output getting exception <code>Exception: Data must be 1-dimensional</code></p> | <p>I think you need to call <code>np.reshape</code> on the underlying numpy array rather than on the Pandas series - you can do this using <code>.values</code>:</p>
<pre><code>x_train = np.reshape(x_train.values, (-1, 1))
</code></pre>
<p>Repeat the same idea for the next three lines.</p>
<p>Or, if you are on a recent version of Pandas >= 0.24, <code>to_numpy</code> is preferred:</p>
<pre><code>x_train = np.reshape(x_train.to_numpy(), (-1, 1))
</code></pre> | python-3.x|numpy|regression|linear-regression | 5 |
929 | 57,285,680 | Retain few NA's and drop rest of NA's during Stack operation in Python | <p>I have a dataframe like shown below </p>
<pre><code>df2 = pd.DataFrame({'person_id':[1],'H1_date' : ['2006-10-30 00:00:00'], 'H1':[2.3],'H2_date' : ['2016-10-30 00:00:00'], 'H2':[12.3],'H3_date' : ['2026-11-30 00:00:00'], 'H3':[22.3],'H4_date' : ['2106-10-30 00:00:00'], 'H4':[42.3],'H5_date' : [np.nan], 'H5':[np.nan],'H6_date' : ['2006-10-30 00:00:00'], 'H6':[2.3],'H7_date' : [np.nan], 'H7':[2.3],'H8_date' : ['2006-10-30 00:00:00'], 'H8':[np.nan]})
</code></pre>
<p><a href="https://i.stack.imgur.com/uLBJI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uLBJI.png" alt="enter image description here"></a></p>
<p>As shown in my screenshot above, my source datframe (<code>df2</code>) contains few NA's</p>
<p>When I do <code>df2.stack()</code>, I lose all the NA's from the data.</p>
<p>However, I would like to retain the NA for <code>H7_date</code> and <code>H8</code> because they have a corresponding value/date pair. For <code>H7_date</code>, I have a valid value <code>H7</code>, and for <code>H8</code>, I have its corresponding <code>H8_date</code>.</p>
<p>I would like to drop records only when both the values (<code>H5_date</code>,<code>H5</code>) are NA.</p>
<p>Please note I have got only few columns here and my real data has more than 150 columns and column names aren't known in advance.</p>
<p>I expect my output to be as shown below, <strong>which doesn't have <code>H5_date</code>, <code>H5</code> even though they are NA's</strong></p>
<p><a href="https://i.stack.imgur.com/HBPyv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HBPyv.png" alt="enter image description here"></a></p> | <p>On approach is to melt the DF, apply a key that identifies columns in the same "group" (in this case <code>H<some digits></code> but you can amend that as required), then group by person and that key, filter those groups to those containing at least one non-NA value), eg:</p>
<p>Starting with:</p>
<pre><code>df = pd.DataFrame({'person_id':[1],'H1_date' : ['2006-10-30 00:00:00'], 'H1':[2.3],'H2_date' : ['2016-10-30 00:00:00'], 'H2':[12.3],'H3_date' : ['2026-11-30 00:00:00'], 'H3':[22.3],'H4_date' : ['2106-10-30 00:00:00'], 'H4':[42.3],'H5_date' : [np.nan], 'H5':[np.nan],'H6_date' : ['2006-10-30 00:00:00'], 'H6':[2.3],'H7_date' : [np.nan], 'H7':[2.3],'H8_date' : ['2006-10-30 00:00:00'], 'H8':[np.nan]})
</code></pre>
<p>Use:</p>
<pre><code>df2 = (
df.melt(id_vars='person_id')
.assign(_gid=lambda v: v.variable.str.extract('H(\d+)'))
.groupby(['person_id', '_gid'])
.filter(lambda g: bool(g.value.any()))
.drop('_gid', 1)
)
</code></pre>
<p>Which gives you:</p>
<pre><code> person_id variable value
0 1 H1_date 2006-10-30 00:00:00
1 1 H1 2.3
2 1 H2_date 2016-10-30 00:00:00
3 1 H2 12.3
4 1 H3_date 2026-11-30 00:00:00
5 1 H3 22.3
6 1 H4_date 2106-10-30 00:00:00
7 1 H4 42.3
10 1 H6_date 2006-10-30 00:00:00
11 1 H6 2.3
12 1 H7_date NaN
13 1 H7 2.3
14 1 H8_date 2006-10-30 00:00:00
15 1 H8 NaN
</code></pre>
<p>You can then use that as a starting point to tweak if necessary.</p> | python|pandas|dataframe | 1 |
930 | 46,020,617 | Best way to avoid merge nulls | <p>Let's say I have those 2 pandas dataframes.</p>
<pre><code>In [3]: df1 = pd.DataFrame({'id':[None,20,None,40,50],'value':[1,2,3,4,5]})
In [4]: df2 = pd.DataFrame({'index':[None,20,None], 'value':[1,2,3]})
In [7]: df1
Out[7]: id value
0 NaN 1
1 20.0 2
2 NaN 3
3 40.0 4
4 50.0 5
In [8]: df2
Out[8]: index value
0 NaN 1
1 20.0 2
2 NaN 3
</code></pre>
<p>When I merge those dataframes (based on the id and index columns), the result includes rows where id and index have missing values.</p>
<pre><code>df3 = df1.merge(df2, left_on='id', right_on = 'index', how='inner')
In [9]: df3
Out[9]: id value_x index value_y
0 NaN 1 NaN 1
1 NaN 1 NaN 3
2 NaN 3 NaN 1
3 NaN 3 NaN 3
4 20.0 2 20.0 2
</code></pre>
<p>that's what I tried but I guess it's not the best solution:</p>
<p>I replaced all the missing values with some value in one dataframe column,
and the same in the second dataframe but with another value - the purpose is that the condition will return False and the rows will not be in the result.</p>
<pre><code>In [14]: df1_fill = df1.fillna({'id':'NONE1'})
In [13]: df2_fill = df2.fillna({'index':'NONE2'})
In [15]: df1_fill
Out[15]: id value
0 NONE1 1
1 20 2
2 NONE1 3
3 40 4
4 50 5
In [16]: df2_fill
Out[16]: index value
0 NONE2 1
1 20 2
2 NONE2 3
</code></pre>
<p>What is the best solution for that issue? </p>
<p>Also, in the example the data type of the join columns is numeric, but it can be another type like text or date...</p>
<p><strong>EDIT:</strong></p>
<p>So, with the solutions here I can use the dropna function to drop the rows with missing values before the join - but this only works for an inner join, where I don't want those rows at all.</p>
<p>What about a left join or full join?</p>
<p>Let's say I have those 2 dataframes I've used before - df1, df2.</p>
<p>So for the inner and left joins I really can use the dropna function:</p>
<pre><code>In [61]: df_inner = df1.dropna(subset=['id']).merge(df2.dropna(subset=['index']), left_on='id', right_on = 'index', how='inner')
In [62]: df_inner
Out[62]: id value_x index value_y
0 20.0 2 20.0 6
In [63]: df_left = df1.merge(df2.dropna(subset=['index']), left_on='id', right_on = 'index', how='left')
In [64]: df_left
Out[64]: id value_x index value_y
0 NaN 1 NaN NaN
1 20.0 2 20.0 6.0
2 NaN 3 NaN NaN
3 40.0 4 NaN NaN
4 50.0 5 NaN NaN
In [65]: df_full = df1.merge(df2, left_on='id', right_on = 'index', how='outer')
In [66]: df_full
Out[66]: id value_x index value_y
0 NaN 1 NaN 5.0
1 NaN 1 NaN 7.0
2 NaN 3 NaN 5.0
3 NaN 3 NaN 7.0
4 20.0 2 20.0 6.0
5 40.0 4 NaN NaN
6 50.0 5 NaN NaN
</code></pre>
<p>In the left join I dropped the rows with missing values from the "right" dataframe and then I used merge.</p>
<p>It was OK because in a left join you know that if the condition returns false you have null in the right-source columns - so it doesn't matter if the rows really exist or they just return false.</p>
<p>But for the full join I need all the rows from both sources...</p>
<p>I can't use dropna because it will drop rows that I need, and if I don't use it I get the wrong result.</p>
<p>Thanks.</p> | <p>If you don't want nan values then you can drop the nan values, i.e. </p>
<pre><code>df3 = df1.merge(df2, left_on='id', right_on = 'index', how='inner').dropna()
</code></pre>
<p>or </p>
<pre><code>df3 = df1.dropna().merge(df2.dropna(), left_on='id', right_on = 'index', how='inner')
</code></pre>
<p>Output: </p>
<pre><code> id value_x index value_y
0 20.0 2 20.0 2
</code></pre>
<p>For an outer merge, drop after merging, i.e.: </p>
<pre><code>df_full = df1.merge(df2, left_on='id', right_on = 'index', how='outer').dropna(subset = ['id'])
</code></pre>
<p>Output: </p>
<pre><code> id value_x index value_y
4 20.0 2 20.0 2.0
5 40.0 4 NaN NaN
6 50.0 5 NaN NaN
</code></pre> | python|pandas|merge|null | 1 |
931 | 46,030,481 | ImportError: No module named 'nets' | <p>I am trying to convert a trained checkpoint to a final frozen model with the export_inference_graph.py script provided in tensorflow/models, but the following error results.
And yes, I have already set up $PYTHONPATH to "models/slim", but I still get this error; can someone help me out?</p>
<pre><code>$ echo $PYTHONPATH
:/home/ishara/tensorflow_models/models:/home/ishara/tensorflow_models/models/slim
</code></pre>
<p>*****************************problem****************************************************************************</p>
<pre><code>$sudo python3 object_detection/export_inference_graph.py --input_type image_tensor --pipeline_config_path = "ssd_inception_v2_pets.config" --trained_checkpoint_prefix="output/model.ckpt-78543" --output_directory="birds_inference_graph.pb"
Traceback (most recent call last):
File "object_detection/export_inference_graph.py", line 74, in <module>
from object_detection import exporter
File "/usr/local/lib/python3.5/dist-packages/object_detection-0.1-py3.5.egg/object_detection/exporter.py", line 28, in <module>
File "/usr/local/lib/python3.5/dist-packages/object_detection-0.1-py3.5.egg/object_detection/builders/model_builder.py", line 30, in <module>
File "/usr/local/lib/python3.5/dist-packages/object_detection-0.1-py3.5.egg/object_detection/models/faster_rcnn_inception_resnet_v2_feature_extractor.py", line 28, in <module>
ImportError: No module named 'nets'
</code></pre>
<hr>
<p>I have been struggling with this for days now; I've tried many solutions and nothing works.
I am using Ubuntu 16.04 with the tensorflow-gpu version.</p> | <p>Take a look at Protobuf Compilation at
<a href="https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md" rel="noreferrer">https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md</a>
and set PYTHONPATH correctly; this is how I solved it for Windows:</p>
<p>For Windows:</p>
<h1>From tensorflow/models/research/</h1>
<p>Step1: <code>protoc object_detection/protos/*.proto --python_out=.</code></p>
<p>Step2: </p>
<pre><code>set PYTHONPATH= <Path to 'research' Directory> ; <Path to 'slim' Directory>
</code></pre>
<p>For Eg: </p>
<pre><code>set PYTHONPATH=C:\Users\Guy\Desktop\models\research;C:\Users\Guy\Desktop\models\research\slim
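:: On Ubuntu (as in the question), the analogous command from the linked
:: installation guide, run from the models/research directory, would be:
::   export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim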
</code></pre> | python|python-3.x|tensorflow | 20 |
932 | 23,189,506 | Maximum allowed value for a numpy data type | <p>I am working with numpy arrays of a range of data types (uint8, uint16, int16, etc.). I would like to be able to check whether a number can be represented within the limits of an array for a given datatype. I am imagining something that looks like:</p>
<pre><code>>>> im.dtype
dtype('uint16')
>>> dtype_max(im.dtype)
65535
>>> dtype_min(im.dtype)
0
</code></pre>
<p>Does something like this exist? By the way, I feel like this has to have been asked before, but my search came up empty, and all of the "similar questions" appear to be unrelated.</p>
<p>Edit: Of course, now that I've asked, one of the "related" questions does have the answer. Oops. </p> | <pre class="lang-py prettyprint-override"><code>min_value = np.iinfo(im.dtype).min
max_value = np.iinfo(im.dtype).max
</code></pre>
<p>docs:<br></p>
<ul>
<li><a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.iinfo.html#numpy-iinfo" rel="noreferrer"><code>np.iinfo</code></a> (machine limits for integer types)</li>
<li><a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.finfo.html#numpy-finfo" rel="noreferrer"><code>np.finfo</code></a> (machine limits for floating point types)</li>
</ul> | python|numpy | 126 |
933 | 23,170,415 | Self Join in Pandas: Merge all rows with the equivalent multi-index | <p>I have one dataframe in the following form: </p>
<pre><code>df = pd.read_csv('data/original.csv', sep = ',', names=["Date", "Gran", "Country", "Region", "Commodity", "Type", "Price"], header=0)
</code></pre>
<p>I'm trying to do a self join on the index Date, Gran, Country, Region producing rows in the form of
Date, Gran, Country, Region, CommodityX, TypeX, Price X, Commodity Y, Type Y, Prixe Y, Commodity Z, Type Z, Price Z</p>
<p>Every row should have all the different commodities and prices of a specific region. </p>
<p>Is there a simple way of doing this? </p>
<p>Any help is much appreciated! </p>
<hr>
<p>Note: I simplified the example by ignoring a few attributes </p>
<p>Input Example: </p>
<pre><code> Date Country Region Commodity Price
1 03/01/2014 India Vishakhapatnam Rice 25
2 03/01/2014 India Vishakhapatnam Tomato 30
3 03/01/2014 India Vishakhapatnam Oil 50
4 03/01/2014 India Delhi Wheat 10
5 03/01/2014 India Delhi Jowar 60
6 03/01/2014 India Delhi Bajra 10
</code></pre>
<p>Output Example: </p>
<pre><code> Date Country Region Commodit1 Price1 Commodity2 Price2 Commodity3 Price3
1 03/01/2014 India Vishakhapatnam Rice 25 Tomato 30 Oil 50
2 03/01/2014 India Delhi Wheat 10 Jowar 60 Bajra 10
</code></pre> | <p>What you want to do is called a reshape (specifically, from long to wide). See <a href="https://stackoverflow.com/questions/22798934/pandas-long-to-wide-reshape">this answer</a> for more information.</p>
<p>Unfortunately as far as I can tell pandas doesn't have a simple way to do that. I adapted the answer in the other thread to your problem:</p>
<pre><code>df['idx'] = df.groupby(['Date','Country','Region']).cumcount()
df.pivot(index=['Date','Country','Region'], columns='idx')[['Commodity','Price']]
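# note: passing a list to pivot's index= needs a fairly recent pandas version;
# on older versions, df.set_index(['Date','Country','Region','idx']).unstack('idx')
# gives the same long-to-wide reshape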
</code></pre>
<p>Does that solve your problem?</p> | python-2.7|pandas|self-join | 1 |
934 | 35,709,042 | How to create a masked array using numpy.ma imported by PyCall in Julia | <p>I want to create a masked array using <code>numpy.ma</code> imported by PyCall in Julia.</p>
<p>A Python example in the the help of <code>is_masked()</code> in <code>numpy.ma</code> module.</p>
<pre><code>>>> import numpy.ma as ma
>>> x = ma.masked_equal([0, 1, 0, 2, 3], 0)
>>> x
masked_array(data = [-- 1 -- 2 3],
mask = [ True False True False False],
fill_value=999999)
>>> ma.is_masked(x)
True
</code></pre>
<p>I tried to translate it into Julia using PyCall.</p>
<pre><code>julia> using PyCall
julia> @pyimport numpy.ma as ma
julia> x = ma.masked_equal([0, 1, 0, 2, 3], 0);
julia> x
5-element Array{Int64,1}:
0
1
0
2
3
julia> ma.is_masked(x)
false
</code></pre>
<p>The above code is NOT working. It fails to create a Python object. It just creates a usual Julia array. I tried other ways such as <code>ma.array([1, 2, 3], mask=[0, 0, 1])</code>, but still not working.</p>
<p>However, from a example in <a href="https://github.com/stevengj/PyCall.jl" rel="nofollow">https://github.com/stevengj/PyCall.jl</a>,</p>
<pre><code>julia> @pyimport Bio.Seq as s
julia> @pyimport Bio.Alphabet as a
julia> my_dna = s.Seq("AGTACACTGGT", a.generic_dna)
PyObject Seq('AGTACACTGGT', DNAAlphabet())
julia> my_dna[:find]("ACT")
5
</code></pre>
<p>In this case, the python object can be created directly without effort.</p>
<p>Question: What's wrong with my translation? How can I create a numpy masked array in Julia?</p> | <p>I don't think there's anything wrong with your translation — this looks like a bug in PyCall. PyCall tries to map Julia and Python types back and forth so that you can seamlessly use Julia's arrays like NumPy arrays (for example). In this case, it looks like it's a bit overzealous in doing the conversion.</p>
<p>You can disable the conversion by using <code>pycall</code> directly. The second argument is the return type:</p>
<pre><code>julia> x = pycall(ma.masked_equal, Any, [0,1,0,2,3], 0)
PyObject masked_array(data = [-- 1L -- 2L 3L],
mask = [ True False True False False],
fill_value = 0)
julia> ma.is_masked(x)
true
</code></pre>
<p>This is a bug in the python type identification. PyCall thinks that the <code>masked_array</code> object type should map to a builtin array, so that's why it defaults to returning an array back:</p>
<pre><code>julia> pytype_query(x)
Array{Int64,N}
</code></pre> | python|arrays|numpy|julia | 3 |
935 | 35,780,734 | Python Pandas replace() not working | <p>I have some fields that have some junk in them from an upstream process. I'm trying to delete <strong>'\r\nName: hwowneremail, dtype: object'</strong> from a column that has this junk appended to an email address. </p>
<pre><code>report_df['Owner'].replace('\r\nName: hwowneremail, dtype: object',inplace=True)
report_df['Owner'][26]
</code></pre>
<p>Output:</p>
<pre><code>' [email protected]\r\nName: hwowneremail, dtype: object'
</code></pre>
<p>I've also tried the following variants w/o success:</p>
<pre><code>replace('Name: hwowneremail, dtype: object', inplace=True)
replace('\\r\\nName: hwowneremail, dtype: object', inplace=True)
replace(r'\r\nName: hwowneremail, dtype: object', inplace=True)
replace('\r\nName: hwowneremail, dtype: object', "", inplace=True)
replace(to_value='\r\nName: hwowneremail, dtype: object', value=' ',inplace=True)
replace('\\r\\nName: hwowneremail, dtype: object',regex=True,inplace=True)
</code></pre>
<p>Thanks in advance for your insight!</p> | <p>As far as I remember, pandas changed <code>replace</code> a little bit at some point. You should try passing a <code>regex</code> keyword argument.</p>
<p>Like so:</p>
<pre><code>report_df['Owner'].replace({'\r\nName: hwowneremail, dtype: object':''},regex=True)
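# assign the result back (or pass inplace=True) to keep the change in report_df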
</code></pre> | pandas | 5 |
936 | 35,585,104 | How does Pandas calculate quantiles? | <p>I have the following simple dataframe:</p>
<pre><code>> df = pd.DataFrame({'calc_value': [0, 0.081928, 0.94444]})
> df
calc_value
0 0.000000
1 0.081928
2 0.944440
</code></pre>
<p>Why does <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.quantile.html" rel="nofollow"><code>df.quantile</code></a> calculate the 90th percentile as <code>0.7719376</code>? </p>
<pre><code>> df.quantile(0.9)['calc_value']
0.7719376
</code></pre>
<p>According to my calculations it should be <code>0.69</code>, via <code>(0.944444-0.081928)*((90-50)/(100-50))</code>. </p> | <p>Well, the step per 0.1 quantile in the range 0.5...1.0 is (0.94444-0.081928)/5 = 0.1725024. (Your own formula gives the offset above the 50th percentile; adding the 50th-percentile value back gives 0.6900096 + 0.081928 = 0.7719376.)</p>
<pre><code>So 50q is 0.081928
60q is 0.081928+0.1725024=0.25443
70q is 0.081928+2*0.1725024=0.426933
80q is 0.081928+3*0.1725024=0.599435
90q is 0.081928+4*0.1725024=0.771938
100q is 0.081928+5*0.1725024=0.94444
</code></pre> | python|pandas | 2 |
937 | 11,728,836 | Efficiently applying a function to a grouped pandas DataFrame in parallel | <p>I often need to apply a function to the groups of a very large <code>DataFrame</code> (of mixed data types) and would like to take advantage of multiple cores.</p>
<p>I can create an iterator from the groups and use the multiprocessing module, but it is not efficient because every group and the results of the function must be pickled for messaging between processes.</p>
<p>Is there any way to avoid the pickling or even avoid the copying of the <code>DataFrame</code> completely? It looks like the shared memory functions of the multiprocessing modules are limited to <code>numpy</code> arrays. Are there any other options?</p> | <p>From the comments above, it seems that this is planned for <code>pandas</code> some time (there's also an interesting-looking <a href="https://pypi.python.org/pypi/rosetta/0.2.4"><code>rosetta</code> project</a> which I just noticed).</p>
<p>However, until this parallel functionality is incorporated into <code>pandas</code>, I noticed that it's very easy to write efficient, non-memory-copying parallel augmentations to <code>pandas</code> directly using <a href="http://cython.org/"><code>cython</code></a> + <a href="http://www.openmp.org/">OpenMP</a> and C++.</p>
<p>Here's a short example of writing a parallel groupby-sum, whose use is something like this:</p>
<pre><code>import pandas as pd
import para_group_demo
df = pd.DataFrame({'a': [1, 2, 1, 2, 1, 1, 0], 'b': range(7)})
print para_group_demo.sum(df.a, df.b)
</code></pre>
<p>and output is:</p>
<pre><code> sum
key
0 6
1 11
2 4
</code></pre>
<hr>
<p><strong>Note</strong> Doubtlessly, this simple example's functionality will eventually be part of <code>pandas</code>. Some things, however, will be more natural to parallelize in C++ for some time, and it's important to be aware of how easy it is to combine this into <code>pandas</code>.</p>
<hr>
<p>To do this, I wrote a simple single-source-file extension whose code follows.</p>
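<p>(A minimal build script for such an extension, assuming the source is saved as <code>para_group_demo.pyx</code> and a gcc/clang toolchain, could look like this:)</p>
<pre><code>from setuptools import setup, Extension
from Cython.Build import cythonize

ext = Extension(
    'para_group_demo',
    sources=['para_group_demo.pyx'],
    language='c++',                   # the source uses libcpp containers
    extra_compile_args=['-fopenmp'],  # prange needs OpenMP
    extra_link_args=['-fopenmp'],
)
setup(ext_modules=cythonize([ext]))
</code></pre>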
<p>It starts with some imports and type definitions</p>
<pre><code>from libc.stdint cimport int64_t, uint64_t
from libcpp.vector cimport vector
from libcpp.unordered_map cimport unordered_map
cimport cython
from cython.operator cimport dereference as deref, preincrement as inc
from cython.parallel import prange
import pandas as pd
ctypedef unordered_map[int64_t, uint64_t] counts_t
ctypedef unordered_map[int64_t, uint64_t].iterator counts_it_t
ctypedef vector[counts_t] counts_vec_t
</code></pre>
<p>The C++ <code>unordered_map</code> type is for summing by a single thread, and the <code>vector</code> is for summing by all threads.</p>
<p>Now to the function <code>sum</code>. It starts off with <a href="http://docs.cython.org/src/userguide/memoryviews.html">typed memory views</a> for fast access:</p>
<pre><code>def sum(crit, vals):
cdef int64_t[:] crit_view = crit.values
cdef int64_t[:] vals_view = vals.values
</code></pre>
<p>The function continues by dividing the rows semi-equally among the threads (here hardcoded to 4), and having each thread sum the entries in its range:</p>
<pre><code> cdef uint64_t num_threads = 4
cdef uint64_t l = len(crit)
cdef uint64_t s = l / num_threads + 1
cdef uint64_t i, j, e
cdef counts_vec_t counts
counts = counts_vec_t(num_threads)
counts.resize(num_threads)
with cython.boundscheck(False):
for i in prange(num_threads, nogil=True):
j = i * s
e = j + s
if e > l:
e = l
while j < e:
counts[i][crit_view[j]] += vals_view[j]
inc(j)
</code></pre>
<p>When the threads have completed, the function merges all the results (from the different ranges) into a single <code>unordered_map</code>:</p>
<pre><code> cdef counts_t total
cdef counts_it_t it, e_it
for i in range(num_threads):
it = counts[i].begin()
e_it = counts[i].end()
while it != e_it:
total[deref(it).first] += deref(it).second
inc(it)
</code></pre>
<p>All that's left is to create a <code>DataFrame</code> and return the results:</p>
<pre><code> key, sum_ = [], []
it = total.begin()
e_it = total.end()
while it != e_it:
key.append(deref(it).first)
sum_.append(deref(it).second)
inc(it)
df = pd.DataFrame({'key': key, 'sum': sum_})
df.set_index('key', inplace=True)
return df
</code></pre> | python|pandas|multiprocessing|shared-memory | 12 |
938 | 28,814,490 | python array indexing list in list | <p>I want to do array indexing. I would have expected the result to be [0,1,1,0], however I just get an error. How can I do this type of indexing?</p>
<pre><code>a_np_array=np.array(['a','b','c','d'])
print a_np_array in ['b', 'c']
Traceback (most recent call last):
File "dfutmgmt_alpha_osis.py", line 130, in <module>
print a_np_array in ['b', 'c']
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
</code></pre>
<p> Up top, I actually meant to say [False, True, True, False], not [0,1,1,0], as I want the bools so I can do indexing.</p> | <p>Try this <a href="https://docs.python.org/2/tutorial/datastructures.html#list-comprehensions" rel="nofollow">list comprehension</a>:</p>
<pre><code>>>> print [int(x in ['b', 'c']) for x in a_np_array]
[0, 1, 1, 0]
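>>> # or, as a vectorised boolean mask for indexing (per the edit):
>>> np.in1d(a_np_array, ['b', 'c'])
array([False,  True,  True, False], dtype=bool)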
</code></pre>
<p>Utilizing the fact that <code>int(True) == 1</code> and <code>int(False) == 0</code></p> | python|arrays|numpy|indexing | 2 |
939 | 20,415,414 | python pandas 3 smallest & 3 largest values | <p>How can I find the index of the 3 smallest and 3 largest values in a column in my pandas dataframe? I saw ways to find max and min, but none to get the 3.</p> | <p>What have you tried? You could sort with <code>s.sort()</code> and then call <code>s.head(3).index</code> and <code>s.tail(3).index</code>.</p> | python|pandas|dataframe | 5 |
940 | 33,508,110 | Subtracting two timestamp arrays | <p>I have two numpy arrays I created which hold a number of timestamps. The timestamps are in month,day,year,hour,sec format (eg. 12/8/2009 10:00) and I hope to use them to calculate speed. I have the speed function almost finished, I just cannot figure out how to be able to subtract the two arrays to find the difference between them. I tried using np.subtract..</p>
<pre><code>def speedofelephant(lat1, long1, time1, lat2, long2, time2):
distance = haversine_distance(lat1, long1, lat2, long2) # meter
delta_time = np.subtract(time1,time2)
print delta_time
# set speed
speed = (distance / delta_time) # speed in m/s
speed = speed * 3.6 # speed in km/h
</code></pre>
<p>But I get this error...</p>
<pre><code>NotImplemented
Traceback (most recent call last):
File "C:/script.py", line 187, in <module>
speed=speedofelephant(lat1, long1, time1, lat2, long2, time2)
File "C:/script.py", line 182, in speedofelephant
speed = (distance / delta_time) # speed in m/s
TypeError: unsupported operand type(s) for /: 'float' and 'NotImplementedType'
</code></pre>
<p>Any suggestions?</p> | <p>Just use <code>delta = time1 - time2</code> if they are in most datetime formats.</p>
<p>Use <code>dateutil.parser</code> to parse to <code>datetime.datetime</code> objects.</p>
<p><strong>EDIT</strong>: Subtracting datetimes gives you a timedelta. You'll need to convert this to seconds, so use <code>delta.total_seconds()</code>.</p> | python|arrays|numpy | -1 |
941 | 66,471,820 | Pandas:Unique values of a column based on a condition | <p>I have two columns in a dataframe: fueltype and number of doors. Fueltype has 3 categories: Petrol, Diesel and CNG. How do I find the unique values of number of doors for the Petrol fueltype?</p> | <p>Say that your dataframe looks like this:</p>
<pre><code> fueltype number of doors
0 Petrol 2
1 Petrol 4
2 Petrol 4
3 Petrol 4
4 Diesel 2
5 Diesel 2
6 Diesel 4
7 Diesel 4
8 Diesel 4
9 Diesel 4
10 CNG 2
11 CNG 2
12 CNG 4
13 CNG 2
14 CNG 4
</code></pre>
<p>then this:</p>
<pre><code>group = df.groupby('fueltype')
df2 = group.apply(lambda x: x['number of doors'].unique())
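# equivalently: df.groupby('fueltype')['number of doors'].unique()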
</code></pre>
<p>gives you the unique values for each fuel category:</p>
<pre><code>fueltype
CNG [2, 4]
Diesel [2, 4]
Petrol [2, 4]
dtype: object
</code></pre>
<p><b>EDIT</b></p>
<p>If you want to list all unique values for each fueltype:</p>
<pre><code>df2.explode()
</code></pre>
<p>will return</p>
<pre><code>fueltype
CNG 2
CNG 4
Diesel 2
Diesel 4
Petrol 2
Petrol 4
dtype: object
</code></pre> | python|pandas | 0 |
942 | 66,640,612 | Calculate intersecting sums from flat DataFrame for a heatmap | <p>I'm trying to wrangle some data to show how many items a range of people have in common. The goal is to show this data in a heatmap format via Seaborn to understand these overlaps visually.</p>
<p>Here's some sample data:</p>
<pre><code>demo_df = pd.DataFrame([
("Get Back", 1,0,2),
("Help", 5, 2, 0),
("Let It Be", 0,2,2)
],columns=["Song","John", "Paul", "Ringo"])
demo_df.set_index("Song")
John Paul Ringo
Song
Get Back 1 0 2
Help 5 2 0
Let It Be 0 2 2
</code></pre>
<p>I don't need a breakdown by song, just the total of shared items. The resulting data would show a sum of how many items they share like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Name</th>
<th>John</th>
<th>Paul</th>
<th>Ringo</th>
</tr>
</thead>
<tbody>
<tr>
<td>John</td>
<td>-</td>
<td>7</td>
<td>3</td>
</tr>
<tr>
<td>Paul</td>
<td>7</td>
<td>-</td>
<td>4</td>
</tr>
<tr>
<td>Ringo</td>
<td>3</td>
<td>4</td>
<td>-</td>
</tr>
</tbody>
</table>
</div>
<p>So far I've tried a few options with <code>groupby</code> and <code>unstack</code> but haven't been able to work out how to cross match the names into both column and header rows.</p> | <p>We can do a <code>dot</code> product and then fill the diagonal:</p>
<pre><code>out = df.T.dot(df.ne(0)) + df.T.ne(0).dot(df)
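# each dot product sums one person's counts over the songs the other person
# also has; adding the mirrored term gives the total shared items per pair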
np.fill_diagonal(out.values, 0)
out
Out[176]:
John Paul Ringo
John 0 7 3
Paul 7 0 4
Ringo 3 4 0
</code></pre> | pandas|seaborn | 3 |
943 | 66,677,756 | Pandas update column value based on values of groupby having multiple if else | <p>I have a pandas data frame, where 3 columns X, Y, and Z are used for grouping. I want to update column B (or store it in a separate column) for each group based on the conditions shown in the code. But all I'm getting is nulls as the final outcome.
I'm not sure what I am doing incorrectly.</p>
<p>Below is the sample of the table (I have not taken all the cases, but I'm including them in the code):</p>
<p><a href="https://i.stack.imgur.com/2kROj.png" rel="nofollow noreferrer">enter image description here</a></p>
<pre><code>group=df.groupby(['X','Y','Z'])
for a,b in group:
if ((b.colA==2).all()):
df['colB']=b.colB.max()
elif (((b.colA>2).all()) and (b.colB.max() >=2)):
df['colB']=b.colB.max()
elif (((b.ColC.str.isdigit()).all()) and ((b.ColC.str.len()==2).all())):
df['colB']=b.ColC.str[0].max()
elif (((b.ColC.str.isdigit()).all()) and ((b.ColC.str.len()>2).all())):
df['ColB']=b.ColC.str[:-2].max()
elif ((b.ColC.str[0].str.isdigit().all()) and (b.ColC.str.contains('[A-Z]').all()) and
(b.ColC.str[-1].str.isalpha().all())):
df['colB']=b.ColC.str[:-1].astype(float).max()
elif (b.ColC.str[0].str.isalpha().all() and b.ColC.str.contains('[0-9]').all()):
df['ColB']=len(set(" ".join(re.findall("[A-Z]+", str(b.ColC)))))
else:
df['colB']=np.nan
</code></pre> | <p>The main flaw in your code is that you set some value in
the <strong>whole</strong> <em>colB</em> column, whereas it should be set
only in rows from the current group.</p>
<p>To do your task the right way, define a function to be applied to
each group:</p>
<pre><code>def myFun(b):
if (b.colA == 2).all():
rv = b.colB.max()
elif (b.colA > 2).all() and (b.colB.max() >= 2):
rv = b.colB.max()
elif (b.colC.str.isdigit()).all() and (b.colC.str.len() == 2).all():
rv = b.colC.str[0].max()
elif b.colC.str.isdigit().all() and (b.colC.str.len() > 2).all():
rv = b.colC.str[:-2].max()
elif b.colC.str[0].str.isdigit().all() and b.colC.str[-1].str.isalpha().all():
rv = b.colC.str[:-1].astype(int).max()
elif b.colC.str[1].str.isalpha().all() and b.colC.str.contains('[0-9]').all():
rv = len(set("".join(b.colC.str.extract("([A-Z]+)")[0])))
else:
rv = np.nan
return pd.Series(rv, index=b.index)
</code></pre>
<p>Another flaw is in your data. The last group <em>('J', 'K', 'L')</em> will
be processed by the first <em>if</em> path.
In order to be processed by the fifth path, I put <em>0</em> in <em>colA</em>
in this group, so that the source DataFrame contains:</p>
<pre><code> X Y Z colA colB colC
0 A B C 2 3 NaN
1 A B C 2 1 NaN
2 D E F 3 4 NaN
3 D E F 3 1 NaN
4 D E F 3 2 NaN
5 G H I 3 0 35
6 G H I 3 0 63
7 G H I 3 0 78
8 J K L 0 0 2H
9 J K L 0 0 5B
</code></pre>
<p>And to fill the result column, run:</p>
<pre><code>df['Result'] = df.groupby(['X','Y','Z'], group_keys=False).apply(myFun)
</code></pre>
<p>The result is:</p>
<pre><code> X Y Z colA colB colC Result
0 A B C 2 3 NaN 3
1 A B C 2 1 NaN 3
2 D E F 3 4 NaN 4
3 D E F 3 1 NaN 4
4 D E F 3 2 NaN 4
5 G H I 3 0 35 7
6 G H I 3 0 63 7
7 G H I 3 0 78 7
8 J K L 0 0 2H 5
9 J K L 0 0 5B 5
</code></pre>
<p>Or, to place the result in <em>colB</em>, change the output column
name in the above code.</p> | python|pandas|if-statement|group-by | 0 |
944 | 66,677,482 | UnicodeDecodeError: 'utf-8' codec can't decode byte 0xb0 in position 136: invalid start byte | <p>Hello I am trying to read a csv file. This was my code:</p>
<pre><code>df = pd.read_csv("2021VAERSDATA.csv")
df.head()
</code></pre>
<p>and this was the error I received:</p>
<pre><code>---------------------------------------------------------------------------
UnicodeDecodeError Traceback (most recent call last)
pandas\_libs\parsers.pyx in pandas._libs.parsers.TextReader._convert_tokens()
pandas\_libs\parsers.pyx in pandas._libs.parsers.TextReader._convert_with_dtype()
pandas\_libs\parsers.pyx in pandas._libs.parsers.TextReader._string_convert()
pandas\_libs\parsers.pyx in pandas._libs.parsers._string_box_utf8()
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xb0 in position 136: invalid start byte
</code></pre>
<p>I'm not sure how to correct this. Any advice would be greatly appreciated!</p>
<p>Edit:</p>
<p>Here are the first 3 rows of my file:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">VAERS_ID</th>
<th style="text-align: center;">RECVDATE</th>
<th style="text-align: right;">STATE</th>
<th style="text-align: left;">AGE_YRS</th>
<th style="text-align: center;">CAGE_YR</th>
<th style="text-align: right;">CAGE_MO</th>
<th style="text-align: left;">SEX</th>
<th style="text-align: center;">RPT_DATE</th>
<th style="text-align: right;">SYMPTOM_TEXT</th>
<th style="text-align: left;">DIED</th>
<th style="text-align: center;">DATEDIED</th>
<th style="text-align: right;">L_THREAT</th>
<th style="text-align: left;">ER_VISIT</th>
<th style="text-align: center;">HOSPITAL</th>
<th style="text-align: right;">HOSPDAYS</th>
<th style="text-align: left;">X_STAY</th>
<th style="text-align: center;">DISABLE</th>
<th style="text-align: right;">RECOVD</th>
<th style="text-align: left;">VAX_DATE</th>
<th style="text-align: center;">ONSET_DATE</th>
<th style="text-align: right;">NUMDAYS</th>
<th style="text-align: left;">LAB_DATA</th>
<th style="text-align: center;">V_ADMINBY</th>
<th style="text-align: right;">V_FUNDBY</th>
<th style="text-align: left;">OTHER_MEDS</th>
<th style="text-align: center;">CUR_ILL</th>
<th style="text-align: right;">HISTORY</th>
<th style="text-align: left;">PRIOR_VAX</th>
<th style="text-align: center;">SPLTTYPE</th>
<th style="text-align: right;">FORM_VERS</th>
<th style="text-align: left;">TODAYS_DATE</th>
<th style="text-align: center;">BIRTH_DEFECT</th>
<th style="text-align: right;">OFC_VISIT</th>
<th style="text-align: left;">ER_ED_VISIT</th>
<th style="text-align: center;">ALLERGIES</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">916600</td>
<td style="text-align: center;">1/1/2021</td>
<td style="text-align: right;">TX</td>
<td style="text-align: left;">33</td>
<td style="text-align: center;">33</td>
<td style="text-align: right;"></td>
<td style="text-align: left;">F</td>
<td style="text-align: center;"></td>
<td style="text-align: right;">Right of epiglottis swelled up and hinder swallowing pictures taken Benadryl Tylenol taken</td>
<td style="text-align: left;"></td>
<td style="text-align: center;"></td>
<td style="text-align: right;"></td>
<td style="text-align: left;"></td>
<td style="text-align: center;"></td>
<td style="text-align: right;"></td>
<td style="text-align: left;"></td>
<td style="text-align: center;"></td>
<td style="text-align: right;">Y</td>
<td style="text-align: left;">12/28/2020</td>
<td style="text-align: center;">12/30/2020</td>
<td style="text-align: right;">2</td>
<td style="text-align: left;">None</td>
<td style="text-align: center;">PVT</td>
<td style="text-align: right;"></td>
<td style="text-align: left;">None</td>
<td style="text-align: center;">None</td>
<td style="text-align: right;">None</td>
<td style="text-align: left;"></td>
<td style="text-align: center;"></td>
<td style="text-align: right;">2</td>
<td style="text-align: left;">1/1/2021</td>
<td style="text-align: center;"></td>
<td style="text-align: right;">Y</td>
<td style="text-align: left;"></td>
<td style="text-align: center;">Pcn and bee venom</td>
</tr>
<tr>
<td style="text-align: left;">916601</td>
<td style="text-align: center;">1/1/2021</td>
<td style="text-align: right;">CA</td>
<td style="text-align: left;">73</td>
<td style="text-align: center;">73</td>
<td style="text-align: right;"></td>
<td style="text-align: left;">F</td>
<td style="text-align: center;"></td>
<td style="text-align: right;">Approximately 30 min post vaccination administration patient demonstrated SOB and anxiousness. Assessed at time of event: Heart sounds normal, Lung sounds clear. Vitals within normal limits for patient. O2 91% on 3 liters NC Continuous flow. 2 consecutive nebulized albuterol treatments were administered. At approximately 1.5 hours post reaction, patients' SOB and anxiousness had subsided and the patient stated that they were feel "much better".</td>
<td style="text-align: left;"></td>
<td style="text-align: center;"></td>
<td style="text-align: right;"></td>
<td style="text-align: left;"></td>
<td style="text-align: center;"></td>
<td style="text-align: right;"></td>
<td style="text-align: left;"></td>
<td style="text-align: center;"></td>
<td style="text-align: right;">Y</td>
<td style="text-align: left;">12/31/2020</td>
<td style="text-align: center;">12/31/2020</td>
<td style="text-align: right;">0</td>
<td style="text-align: left;"></td>
<td style="text-align: center;">SEN</td>
<td style="text-align: right;"></td>
<td style="text-align: left;">Patient residing at nursing facility. See patients chart.</td>
<td style="text-align: center;">Patient residing at nursing facility. See patients chart.</td>
<td style="text-align: right;">Patient residing at nursing facility. See patients chart.</td>
<td style="text-align: left;"></td>
<td style="text-align: center;"></td>
<td style="text-align: right;">2</td>
<td style="text-align: left;">1/1/2021</td>
<td style="text-align: center;"></td>
<td style="text-align: right;">Y</td>
<td style="text-align: left;"></td>
<td style="text-align: center;">"Dairy"</td>
</tr>
</tbody>
</table>
</div> | <p>I accidentally faced the same issue while trying to load the same dataset. The code below should solve your problem.</p>
<pre><code>df = pd.read_csv("2021VAERSDATA.csv", encoding_errors='ignore', low_memory=False)
df.head()
</code></pre> | python|pandas|csv | 0 |
945 | 66,654,591 | Throw an error in read_csv if the number of rows does not match the number of headers | <p>I have a large number of <code>csv</code> files, where I am trying to identify if the records in the files are consistent with a predefined schema. For example, given a csv :</p>
<pre><code>col1,co2,col3,col4,col5,col6
A,B,,C,D,E
M,N,O,,,
U,V,W,
</code></pre>
<p>The first row is consistent as it has as many entries as the header ( even though there is a missing value). The second row is also consistent, as it has as many entries as the header, but the third row is inconsistent, as it has 3 entries only.</p>
<p>I am looking for a way in <code>pandas.read_csv</code> to raise an error for row number 3, but as of now, when I read the file in pandas, it reads all the rows with NA for the missing values in row 3. I've tried playing with <code>error_bad_lines</code> and <code>na_filter</code>, but that does not solve my problem.
Any ideas on how I can go about this issue? I don't want to iterate over every row in the csv as some of the files are fairly large and it would take a few minutes per file, which isn't going to work out for me.</p> | <p>Well, <code>error_bad_lines</code> will make sure there are no extra columns. As for missing columns, there is unfortunately no way to check for these without iterating over the data. You can do this with <code>assert(not df.isnull().values.any())</code>.</p>
946 | 66,753,095 | Insert into phpmyadmin with python sql fails | <p>I have the following notebook, where I'm trying to insert the data of a dataframe into my phpMyAdmin SQL database.
To replicate, run the following:</p>
<p>First I create the database with this schema:</p>
<pre><code>CREATE SCHEMA IF NOT EXISTS `proyecto` DEFAULT CHARACTER SET utf8 ;
USE `proyecto`;
CREATE TABLE IF NOT EXISTS `pueblos`(
`Pueblo` VARCHAR(60) NOT NULL,
`Comunidad` VARCHAR(60) NOT NULL,
`Provincia` VARCHAR(60) NOT NULL,
`Latitud` float NOT NULL,
`Longitud` float NOT NULL,
`Altitud` float NOT NULL,
`Habitantes` int NOT NULL,
`Hombres` int NOT NULL,
`Mujeres` int NOT NULL,
PRIMARY KEY (`Pueblo`))
ENGINE = InnoDB;
</code></pre>
<p>And in Python I import the libraries:</p>
<pre><code>import numpy as np
import pandas as pd
# (installed beforehand, e.g. with: pip install mysqlclient)
import MySQLdb
</code></pre>
<p>Then I gather the data and transform it like so:</p>
<pre><code>df= pd.read_excel('https://www.businessintelligence.info/resources/assets/listado-longitud-latitud-municipios-espana.xls')
df=df.drop(index = 0)
new_header = df.iloc[0]
df= df[1:]
df.columns = new_header
</code></pre>
<p>So far we have the data and the DB schema, and so far so good.
Now, I try to insert some data to make sure the connection works,
so I run:</p>
<pre><code>db=MySQLdb.connect("localhost","root","","proyecto")
insertrec=db.cursor()
a="se"
b="ha"
c="insertado"
sqlquery="INSERT INTO Pueblos (Pueblo, Comunidad,Provincia,Latitud,Longitud,Altitud,Habitantes,Hombres,Mujeres) VALUES('"+a+"', '"+b+"','"+c+"',7,8,9,10,11,12)"
insertrec.execute(sqlquery)
db.commit()
print("Success!!")
db.close()
</code></pre>
<p>And I can see that I'm able to insert data into my database, great!
So the issue comes when I try to replicate the same thing and insert the data of my dataframe like this:</p>
<pre><code>for index, row in df.iterrows():
Pueblo=row['Población']
Comunidad=row['Comunidad']
Provincia=row['Provincia']
Latitud=row['Latitud']
Longitud=row['Longitud']
Altitud=row['Altitud']
Habitantes=row['Habitantes']
Hombres=row['Hombres']
Mujeres=row['Mujeres']
sqlquery="INSERT INTO Pueblos (Pueblo, Comunidad,Provincia,Latitud,Longitud,Altitud,Habitantes,Hombres,Mujeres) VALUES(row['Población'], row['Comunidad'],row['Provincia'], row['Latitud'],row['Longitud'],row['Altitud'],row['Habitantes'],row['Hombres'],row['Mujeres'])"
insertrec.execute(sqlquery)
db.commit()
</code></pre>
<pre><code>db.close()
</code></pre>
<p>This operation fails.
What am I doing wrong? I believe I'm simply doing the same as the simple insertion, but I can't understand why it doesn't work.</p>
<p>EDIT:
I am currently attempting to implement @buran's suggestion to use df.to_sql, but it still fails.
The code attempted is:</p>
<pre><code>df.to_sql("pueblos",db,if_exists='append',index=False)
</code></pre>
<p>EDIT 2:
The thread <a href="https://stackoverflow.com/questions/43136121/questions-about-pandas-to-sql">questions about pandas.to_sql</a> points out that df.to_sql is no longer supported this way, so we are currently creating an engine and attempting it through their solution.
The first change was to add the column index with type int to the db schema, since df.to_sql also takes the index.
I also made a user ana with password ana with the same privileges as root for the engine syntax.
From there I am attempting to implement their solution like so:</p>
<pre><code>from sqlalchemy import create_engine
engine = create_engine("mysql://ana:ana@localhost/proyecto")
con = engine.connect()
df.to_sql(name='pueblos',con=con,if_exists='append')
con.close()
</code></pre>
<p>Currently this yields the error:
OperationalError: (MySQLdb._exceptions.OperationalError) (1054, NULL)</p> | <p>To solve the insertion we had to create an engine like this:</p>
<pre><code>from sqlalchemy import create_engine
engine = create_engine("mysql://user:password@localhost/database_name")
con = engine.connect()
df.to_sql(name='table you are inserting into',con=con,if_exists='append')
con.close()
</code></pre>
<p><strong>Make sure that the table name is not an existing table</strong>: the above method does not work if a schema has already created the table, because it creates a new table and its own schema based on the DataFrame.</p>
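<p>If you do need to append into an existing table, one more thing worth checking (an assumption on my part, not something verified in this answer) is that <code>to_sql</code> also writes the DataFrame index as an extra column by default, which can cause an "unknown column" error like the 1054 in the question; passing <code>index=False</code> avoids that:</p>
<pre><code>df.to_sql(name='pueblos', con=con, if_exists='append', index=False)
</code></pre>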
<p>Full credit to @Buran for his suggestions that led to the answer.</p> | python|mysql|pandas | 0 |
947 | 57,355,309 | Compare value of rows in Dataframe | <p>I want to know if the values in two different rows of a Dataframe are the same.
My df looks something like this:</p>
<pre><code>df['Name1']:
Alex,
Peter,
Herbert,
Seppi,
Huaba
df['Name2']:
Alexander,
peter,
herbert,
Sepp,
huaba
</code></pre>
<p>First I want to apply <code>.rstrip()</code> and <code>.toLower()</code>, but these methods seem to only work on strings. I tried <code>str(df['Name1'])</code>, which worked, but the comparison gave me the wrong result.</p>
<p>I also tried the following:</p>
<pre><code> df["Name1"].isin(df["Name2"]).value_counts())
df["Name1"].eq(df["Name2"]).value_counts())
</code></pre>
<p>Problem 1: I think <code>.isin</code> also returns <code>True</code> if a substring is found, e.g. <code>alex.isin(alexander)</code> would return <code>True</code>, which is not what I'm looking for.</p>
<p>Problem 2: I think <code>.eq</code> would do it for me. But I still have the problem with the <code>.rstrip()</code> and <code>.lower()</code> methods.</p>
<p>What is the best way to count the amount of same entries?</p> | <pre><code>print (df)
Name1 Name2
0 Alex Alexander
1 Peter peter
2 Herbert herbert
3 Seppi Sepp
4 Huaba huaba
</code></pre>
<p>If you need to compare row by row:</p>
<pre><code>out1 = df["Name1"].str.lower().eq(df["Name2"].str.lower()).sum()
</code></pre>
<p>If you need to compare all values of <code>Name1</code> against all values of <code>Name2</code>:</p>
<pre><code>out2 = df["Name1"].str.lower().isin(df["Name2"].str.lower()).sum()
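# A hypothetical extension (not part of the original answer): since the question
# also mentions rstrip(), strip surrounding whitespace before lowering.
out3 = df["Name1"].str.strip().str.lower().eq(df["Name2"].str.strip().str.lower()).sum()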
</code></pre> | python|pandas|dataframe | 1 |
948 | 57,357,342 | How to deal with "numpy.float64' object cannot be interpreted as an integer" when I'm trying to use np.nanmean for finding mean of two array elements? | <p>I'm trying to assign mean of specific elements inside two arrays without considering NAs in the operation:</p>
<pre><code>C [i] = nanmean(A[a, b, c, d], B[aa, bb, cc, dd])
</code></pre>
<p>The value of <code>A[a, b, c, d]</code> is equal to <code>0.053</code>, and the value of <code>B[aa, bb, cc, dd]</code> is equal to <code>0.245</code> in this situation, and they are numpy.float64 type. While executing the code I get this error: </p>
<p>'numpy.float64' object cannot be interpreted as an integer </p>
<p>What could be the solution for this???</p> | <p>The second argument to <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.nanmean.html" rel="nofollow noreferrer"><code>np.nanmean</code></a> is the axis along which the mean is calculated. The axis cannot be a <code>float</code>; it has to be an <code>int</code>.</p>
<p>If you want the (nan)mean of the elements <code>x</code> and <code>y</code>, you need to call <code>nanmean([x,y])</code>, not <code>nanmean(x,y)</code>.
So, you need to change your line to:</p>
<pre><code>C [i] = nanmean([A[a, b, c, d], B[aa, bb, cc, dd]])
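# For example (illustrative values): np.nanmean([0.053, 0.245]) gives 0.149,
# and np.nanmean([0.053, np.nan]) gives 0.053, because NaNs are ignored.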
</code></pre> | python|numpy|mean | 1 |
949 | 24,410,243 | pandas convert_to_r_dataframe does not work with numpy.bool_ | <p>I have a pandas data frame that I would like to convert to an R data frame to use via <code>rpy2</code>. The data types of pandas data frame are booleans, specifically <code>numpy.bool_</code>. I get a <code>KeyError</code> when trying to use <code>convert_to_r_dataframe</code>. I am using pandas 0.13.1.</p>
<p>Am I doing something I should not be doing? Should I not be using numpy booleans?</p>
<p>Here is an example, </p>
<pre><code>import pandas
import pandas.rpy.common as common
import numpy as np
# This works fine.
test_df_float = pandas.DataFrame(np.random.rand(10, 3), columns=list('xyz'))
r_test_df_float = common.convert_to_r_dataframe(test_df_float)
# This is a problem.
test_df_bool = pandas.DataFrame(np.random.rand(10, 3) > 0.5, columns=list('xyz'))
r_test_df_bool = common.convert_to_r_dataframe(test_df_bool)
KeyError Traceback (most recent call last)
<ipython-input-11-323084399e95> in <module>()
----> 1 r_test_df_bool = common.convert_to_r_dataframe(test_df_bool)
/usr/lib/python2.7/site-packages/pandas/rpy/common.pyc in convert_to_r_dataframe(df, strings_as_factors)
311 for item in value]
312
--> 313 value = VECTOR_TYPES[value_type](value)
314
315 if not strings_as_factors:
KeyError: <type 'numpy.bool_'>
</code></pre> | <p>I think this may be a bug: what used to be <code>np.bool</code> is now called <code>np.bool_</code>, and that key is missing from two dictionaries in the source file, so modifying the source (line 261 in <strong>.../site-packages/pandas/rpy/common.py</strong>) as follows will do the trick:</p>
<pre><code>VECTOR_TYPES = {np.float64: robj.FloatVector,
np.float32: robj.FloatVector,
np.float: robj.FloatVector,
np.int: robj.IntVector,
np.int32: robj.IntVector,
np.int64: robj.IntVector,
np.object_: robj.StrVector,
np.str: robj.StrVector,
np.bool: robj.BoolVector,
np.bool_: robj.BoolVector} #new key
NA_TYPES = {np.float64: robj.NA_Real,
np.float32: robj.NA_Real,
np.float: robj.NA_Real,
np.int: robj.NA_Integer,
np.int32: robj.NA_Integer,
np.int64: robj.NA_Integer,
np.object_: robj.NA_Character,
np.str: robj.NA_Character,
np.bool: robj.NA_Logical,
np.bool_: robj.NA_Logical} #new key
</code></pre>
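<p>If you prefer not to edit the installed file, a sketch of the same fix applied at runtime (assuming the module-level dicts shown above are importable) is:</p>
<pre><code>import numpy as np
import rpy2.robjects as robj
import pandas.rpy.common as common

common.VECTOR_TYPES[np.bool_] = robj.BoolVector
common.NA_TYPES[np.bool_] = robj.NA_Logical
</code></pre>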
<p>Basically you just need to add the last key into both dictionaries.</p> | python|r|pandas|rpy2 | 1 |
950 | 43,830,545 | pandas rolling max with groupby | <p>I have a problem getting the <code>rolling</code> function of Pandas to do what I wish. I want, for each row, to calculate the maximum so far within the group. Here is an example:</p>
<pre><code>df = pd.DataFrame([[1,3], [1,6], [1,3], [2,2], [2,1]], columns=['id', 'value'])
</code></pre>
<p>looks like</p>
<pre><code> id value
0 1 3
1 1 6
2 1 3
3 2 2
4 2 1
</code></pre>
<p>Now I wish to obtain the following DataFrame:</p>
<pre><code> id value
0 1 3
1 1 6
2 1 6
3 2 2
4 2 2
</code></pre>
<p>The problem is that when I do</p>
<pre><code>df.groupby('id')['value'].rolling(1).max()
</code></pre>
<p>I get the same DataFrame back. And when I do </p>
<pre><code>df.groupby('id')['value'].rolling(3).max()
</code></pre>
<p>I get a DataFrame with Nans. Can someone explain how to properly use <code>rolling</code> or some other Pandas function to obtain the DataFrame I want? </p> | <p>It looks like you need <code>cummax()</code> instead of <code>.rolling(N).max()</code></p>
<pre><code>In [29]: df['new'] = df.groupby('id').value.cummax()
In [30]: df
Out[30]:
id value new
0 1 3 3
1 1 6 6
2 1 3 6
3 2 2 2
4 2 1 2
</code></pre>
<p><strong>Timing</strong> (using brand new Pandas version 0.20.1):</p>
<pre><code>In [3]: df = pd.concat([df] * 10**4, ignore_index=True)
In [4]: df.shape
Out[4]: (50000, 2)
In [5]: %timeit df.groupby('id').value.apply(lambda x: x.cummax())
100 loops, best of 3: 15.8 ms per loop
In [6]: %timeit df.groupby('id').value.cummax()
100 loops, best of 3: 4.09 ms per loop
</code></pre>
<p><strong>NOTE:</strong> <a href="http://pandas.pydata.org/pandas-docs/stable/whatsnew.html#whatsnew-0200" rel="noreferrer">from Pandas 0.20.0 what's new</a></p>
<ul>
<li>Improved performance of <code>groupby().cummin()</code> and <code>groupby().cummax()</code> (<a href="https://github.com/pandas-dev/pandas/issues/15048" rel="noreferrer">GH15048</a>, <a href="https://github.com/pandas-dev/pandas/issues/15109" rel="noreferrer">GH15109</a>, <a href="https://github.com/pandas-dev/pandas/issues/15561" rel="noreferrer">GH15561</a>, <a href="https://github.com/pandas-dev/pandas/issues/15635" rel="noreferrer">GH15635</a>)</li>
</ul> | python|python-3.x|pandas|dataframe|group-by | 11 |
951 | 70,620,304 | Pandas upsample rows with a start and end time | <p>I have a data frame of the form:</p>
<pre><code>In [5]: df = pd.DataFrame({
...: 'start_time': ['2022-01-01 01:15', '2022-01-01 13:00'],
...: 'end_time': ['2022-01-01 03:45', '2022-01-01 15:00'],
...: 'values': [1000, 750]})
In [6]: df
Out[6]:
start_time end_time values
0 2022-01-01 01:15 2022-01-01 03:45 1000
1 2022-01-01 13:00 2022-01-01 15:00 750
</code></pre>
<p>I would like to convert it to 24 hourly values, splitting the values proportionally across the hours in the start_time/end_time range. For the above example this should yield:</p>
<pre><code>In [10]: result
Out[10]:
value
2022-01-01 00:00:00 0
2022-01-01 01:00:00 300
2022-01-01 02:00:00 400
2022-01-01 03:00:00 300
2022-01-01 04:00:00 0
2022-01-01 05:00:00 0
2022-01-01 06:00:00 0
2022-01-01 07:00:00 0
2022-01-01 08:00:00 0
2022-01-01 09:00:00 0
2022-01-01 10:00:00 0
2022-01-01 11:00:00 0
2022-01-01 12:00:00 0
2022-01-01 13:00:00 375
2022-01-01 14:00:00 375
2022-01-01 15:00:00 0
2022-01-01 16:00:00 0
2022-01-01 17:00:00 0
2022-01-01 18:00:00 0
2022-01-01 19:00:00 0
2022-01-01 20:00:00 0
2022-01-01 21:00:00 0
2022-01-01 22:00:00 0
2022-01-01 23:00:00 0
</code></pre>
<p>The start_time/end_time ranges are non-overlapping. Any suggestions on how to accomplish this?</p> | <p>Use:</p>
<pre><code>#get differences between start and end in minutes
df['diff'] = pd.to_datetime(df['end_time']).sub(pd.to_datetime(df['start_time'])).dt.total_seconds().div(60)
#create DataFrame with repeated values by minute
s = pd.concat([pd.Series(r.Index,pd.date_range(r.start_time, r.end_time, freq='Min', closed='left')) for r in df.itertuples()])
s = pd.Series(s.index, s.to_numpy(), name='new')
df = df.join(s)
#resample to hours
df = df.resample('H', on='new').agg({'values':'first', 'diff':'first', 'new':'size'})
#multiply values by ratio
df['value'] = df['values'].mul(df['new'].div(df['diff'])).fillna(0)
#add missing rows
r = pd.date_range(df.index.min().normalize(), df.index.max().normalize() + pd.Timedelta('23H'), freq='H')
df = df[['value']].reindex(r, fill_value=0)
</code></pre>
<hr />
<pre><code>print (df)
value
2022-01-01 00:00:00 0.0
2022-01-01 01:00:00 300.0
2022-01-01 02:00:00 400.0
2022-01-01 03:00:00 300.0
2022-01-01 04:00:00 0.0
2022-01-01 05:00:00 0.0
2022-01-01 06:00:00 0.0
2022-01-01 07:00:00 0.0
2022-01-01 08:00:00 0.0
2022-01-01 09:00:00 0.0
2022-01-01 10:00:00 0.0
2022-01-01 11:00:00 0.0
2022-01-01 12:00:00 0.0
2022-01-01 13:00:00 375.0
2022-01-01 14:00:00 375.0
2022-01-01 15:00:00 0.0
2022-01-01 16:00:00 0.0
2022-01-01 17:00:00 0.0
2022-01-01 18:00:00 0.0
2022-01-01 19:00:00 0.0
2022-01-01 20:00:00 0.0
2022-01-01 21:00:00 0.0
2022-01-01 22:00:00 0.0
2022-01-01 23:00:00 0.0
</code></pre> | python|pandas|time-series | 1 |
952 | 70,642,281 | convert tf keras model to scikit MLP NN | <p>I am experimenting with training an NLTK classifier model with tensorflow and keras. Would anyone know if this could be recreated with sklearn's neural network MLP classifier? For what I am using ML for, I don't think I need tensorflow but something simpler and easier to install/deploy.</p>
<p>I don't have a lot of machine learning wisdom here; any tips are greatly appreciated, even just a description of this deep learning tensorflow/keras model.</p>
<p>So my tf keras model architecture looks like this:</p>
<pre><code>training = []
random.shuffle(training)
training = np.array(training)
# create train and test lists. X - patterns, Y - intents
train_x = list(training[:,0])
train_y = list(training[:,1])
# Create model - 3 layers. First layer 128 neurons, second layer 64 neurons and 3rd output layer contains number of neurons
# equal to number of intents to predict output intent with softmax
model = Sequential()
model.add(Dense(128, input_shape=(len(train_x[0]),), activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(len(train_y[0]), activation='softmax'))
# Compile model. Stochastic gradient descent with Nesterov accelerated gradient gives good results for this model
sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
# Fit the model
model.fit(np.array(train_x), np.array(train_y), epochs=200, batch_size=5, verbose=1)
</code></pre>
<p>So for the <a href="https://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPClassifier.html" rel="nofollow noreferrer">sklearn neural network</a>, am I on track at all with this below? Can someone help me understand what exactly the tensorflow model architecture is and what cannot be duplicated with sklearn? I sort of understand that tensorflow is probably much more powerful than sklearn, which is something simpler.</p>
<pre><code>#Importing MLPClassifier
from sklearn.neural_network import MLPClassifier
model = MLPClassifier(hidden_layer_sizes=(128,64),activation ='relu',solver='sgd',random_state=1)
</code></pre> | <p>Just google converting a keras model to pytorch; there are quite a few tutorials out there for that... It doesn't look easy, but it may be worth the effort for whatever you need it for...</p>
<p>Going down this road just using sklearn MLP neural network <em><strong>I can get good enough results with sklearn...</strong></em> without the hassle of getting tensorflow installed properly.</p>
<p>Also, on a cloud linux instance tensorflow requires a LOT more memory and storage than a FREE account on pythonanywhere.com can handle, but the free account seems just fine with sklearn.</p>
<p>When experimenting with the sklearn MLP NN, for whatever reason I got better results just leaving the architecture as the default and playing around with the learning rate.</p>
<pre><code>from sklearn.neural_network import MLPClassifier
model = MLPClassifier(learning_rate_init=0.0001,max_iter=9000,shuffle=True).fit(train_x, train_y)
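# A closer (hypothetical) mapping of the Keras model from the question, if you
# want to mirror that architecture; note Dropout has no MLPClassifier
# equivalent and the softmax output layer is implicit for multi-class targets:
# model = MLPClassifier(hidden_layer_sizes=(128, 64), activation='relu',
#                       solver='sgd', learning_rate_init=0.01, momentum=0.9,
#                       nesterovs_momentum=True, max_iter=200).fit(train_x, train_y)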
</code></pre> | python|tensorflow|machine-learning|keras|scikit-learn | 0 |
953 | 70,584,240 | How to use tf.gather_nd for multi-dimensional tensor | <p>I don't fully understand how I should use tf.gather_nd() to pick up elements along some axis if I have a multi-dimensional tensor. Let's take a small example (if I get an answer for this simple example, it also solves my more complex original problem). Let's say that I have an RGB image and I am trying to pick the smallest pixel value along the channels (the last dimension if the data order is (B,H,W,C)). I know that this can be done with <code>tf.reduce_min(x, axis=-1)</code>, but I would like to know whether it is also possible to do the same thing with <code>tf.argmin()</code> and <code>tf.gather_nd()</code>?</p>
<pre><code>from skimage import data
import tensorflow as tf
import numpy as np
# Load RGB image from skimage, cast it to float32 and put it in order (B,H,W,C)
image = data.astronaut()
image = tf.cast(image, tf.float32)
image = tf.expand_dims(image, axis=0)
# Take minimum pixel value of each channel in a way number 1
min_along_channels_1 = tf.reduce_min(image, axis=-1)
# Take minimum pixel value of each channel in a way number 2
# The goal is that min_along_channels_1 is equal to min_along_channels_2
idxs = tf.argmin(image, axis=-1)
min_along_channels_2 = tf.gather_nd(image, idxs) # This line gives error :(
</code></pre> | <p>You will have to use <code>tf.meshgrid</code>, which will create a rectangular grid of two one-dimensional arrays representing the tensor indexing of the first and second dimension, since <code>tf.gather_nd</code> needs to know exactly where to extract values across the dimensions. Here is a simplified example:</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
image = tf.random.normal((1, 4, 4, 3))
image = tf.squeeze(image, axis=0)
idx = tf.argmin(image, axis=-1)
ij = tf.stack(tf.meshgrid(
tf.range(image.shape[0], dtype=tf.int64),
tf.range(image.shape[1], dtype=tf.int64),
indexing='ij'), axis=-1)
gather_indices = tf.concat([ij, tf.expand_dims(idx, axis=-1)], axis=-1)
result = tf.gather_nd(image, gather_indices)
print('First option -->', tf.reduce_min(image, axis=-1))
print('Second option -->', result)
</code></pre>
<pre><code>First option --> tf.Tensor(
[[-0.53245485 -0.29117298 -0.64434254 -0.8209638 ]
[-0.9386176 -0.5993224 -0.597746 -1.5392851 ]
[-0.5478666 -1.5280861 -1.0344954 -1.920418 ]
[-0.5580688 -1.425873 -1.9276617 -1.0668412 ]], shape=(4, 4), dtype=float32)
Second option --> tf.Tensor(
[[-0.53245485 -0.29117298 -0.64434254 -0.8209638 ]
[-0.9386176 -0.5993224 -0.597746 -1.5392851 ]
[-0.5478666 -1.5280861 -1.0344954 -1.920418 ]
[-0.5580688 -1.425873 -1.9276617 -1.0668412 ]], shape=(4, 4), dtype=float32)
</code></pre>
<p>Or with your example:</p>
<pre class="lang-py prettyprint-override"><code>from skimage import data
import tensorflow as tf
import numpy as np
image = data.astronaut()
image = tf.cast(image, tf.float32)
image = tf.expand_dims(image, axis=0)
min_along_channels_1 = tf.reduce_min(image, axis=-1)
image = tf.squeeze(image, axis=0)
idx = tf.argmin(image, axis=-1)
ij = tf.stack(tf.meshgrid(
tf.range(image.shape[0], dtype=tf.int64),
tf.range(image.shape[1], dtype=tf.int64),
indexing='ij'), axis=-1)
gather_indices = tf.concat([ij, tf.expand_dims(idx, axis=-1)], axis=-1)
min_along_channels_2 = tf.gather_nd(image, gather_indices)
print(tf.equal(min_along_channels_1, min_along_channels_2))
</code></pre>
<pre><code>tf.Tensor(
[[[ True True True ... True True True]
[ True True True ... True True True]
[ True True True ... True True True]
...
[ True True True ... True True True]
[ True True True ... True True True]
[ True True True ... True True True]]], shape=(1, 512, 512), dtype=bool)
</code></pre> | python|tensorflow|tensorflow2.0 | 1 |
954 | 70,657,015 | Joining two DataFrames with Pandas, one with 1 row per key, and the other with several rows per key | <p>First, I want to point out that I didn't find the answer to my question here on Stack Overflow nor in the pandas documentation, so if the question has been asked before, I'd appreciate a link to that thread.</p>
<p>I want to join two DataFrames as follows.</p>
<p>df1 =</p>
<pre><code>key x y z
0 x0 y0 z0
1 x1 y1 z1
...
10 x10 y10 z10
</code></pre>
<p>df2 =</p>
<pre><code>key w v u
0 w0 v0 u0
0 w0 v0 u0
0 w0 v0 u0
1 w1 v1 u1
1 w1 v1 u1
2 w2 v2 u2
3 w3 v3 u3
...
10 w10 v10 u10
10 w10 v10 u10
</code></pre>
<p>desired_df_output =</p>
<pre><code>key x y z w v u
0 x0 y0 z0 w0 v0 u0
1 x1 y1 z1 w1 v1 u1
...
10 x10 y10 z10 w10 v10 u10
</code></pre>
<p>I've tried this <code>df1.join(df2, how='inner', on='key')</code>, but I get this error: <code>TypeError: object of type 'NoneType' has no len()</code>.</p>
<p>Thanks</p> | <p>It seems <code>df2</code> has duplicate values, so if you drop them using the <code>drop_duplicates</code> method and merge the result with <code>df1</code> as the right-hand frame, you get the desired outcome.</p>
<pre><code>out = df1.merge(df2.drop_duplicates(), on='key')
</code></pre>
<p>Output:</p>
<pre><code> key x y z w v u
0 0 x0 y0 z0 w0 v0 u0
1 1 x1 y1 z1 w1 v1 u1
2 10 x10 y10 z10 w10 v10 u10
</code></pre> | pandas|join|concatenation | 0 |
955 | 70,557,439 | Pandas Function does not reduce | <p>I am trying to aggregate a column that contains numpy arrays.
Unfortunately, I get the error message <strong>Function does not reduce</strong>.</p>
<pre><code>results = pd.DataFrame([['p1', 'v1', 1, 0 ,np.array([1,3, 4])], ['p1', 'v1', 2, 0 ,np.array([1,3, 4])],['p1', 'v1', 1, 1 ,np.array([1,3, 4])], ['p1', 'v1', 2, 1 ,np.array([1,3, 4])],['p1', 'v2', 1, 0 ,np.array([1,3, 4])], ['p1', 'v2', 2, 0 ,np.array([1,3, 4])],['p1', 'v2', 2, 1 ,np.array([1,3, 4])], ['p1', 'v2', 1, 1 ,np.array([1,3, 4])],['p1', 'v3', 1, 0 ,np.array([1,3, 4])], ['p1', 'v3', 2, 0 ,np.array([1,3, 4])],['p1', 'v3', 3, 0 ,np.array([1,3, 4])], ['p1', 'v3', 4, 0 ,np.array([1,3, 4])],['p1', 'v4', 1, 0 ,np.array([1,3, 4])], ['p1', 'v4', 2, 0 ,np.array([1,3, 4])],['p1', 'v4', 3, 0 ,np.array([1,3, 4])], ['p1', 'v4', 4, 0 ,np.array([1,3, 4])]],columns=['P', 'V', 'G', 'month', 'Values'])
resultsilter = results.query('V=="v1" or V=="v2"')
resultsilter = resultsilter.groupby(['G','month']).agg({'Values': 'sum'})
print(resultsilter)
</code></pre>
<p>I would like to get this results like:</p>
<pre><code>[[1, 0 ,np.array(2,6,8])],[2, 0 ,np.array([2,6,8])],[1, 1 ,np.array([2,6,8])],[2, 1 ,np.array([2,6,8])]]
</code></pre>
<p>any ideas?</p> | <p>So I read up on the query() method and there is an alternative method. This is what I did:</p>
<pre><code>import pandas as pd
import numpy as np
results = pd.DataFrame([['p1', 'v1', 1, 0 ,np.array([1,3, 4])], ['p1', 'v1', 2, 0 ,np.array([1,3, 4])],['p1', 'v1', 1, 1 ,np.array([1,3, 4])], ['p1', 'v1', 2, 1 ,np.array([1,3, 4])],['p1', 'v2', 1, 0 ,np.array([1,3, 4])], ['p1', 'v2', 2, 0 ,np.array([1,3, 4])],['p1', 'v2', 2, 1 ,np.array([1,3, 4])], ['p1', 'v2', 1, 1 ,np.array([1,3, 4])],['p1', 'v3', 1, 0 ,np.array([1,3, 4])], ['p1', 'v3', 2, 0 ,np.array([1,3, 4])],['p1', 'v3', 3, 0 ,np.array([1,3, 4])], ['p1', 'v3', 4, 0 ,np.array([1,3, 4])],['p1', 'v4', 1, 0 ,np.array([1,3, 4])], ['p1', 'v4', 2, 0 ,np.array([1,3, 4])],['p1', 'v4', 3, 0 ,np.array([1,3, 4])], ['p1', 'v4', 4, 0 ,np.array([1,3, 4])]], columns=['P', 'V', 'G', 'month', 'Values'])
resultsilter = results[(results["V"] == "v1") | (results["V"] == "v2")] #this is the equivalent of query('V=="v1" or V=="v2"')
resultsilter = resultsilter.groupby(['G','month']).agg({'Values': 'sum'})
resultsilter.reset_index(inplace=True)#fixes format after groupby().agg() is used
print(resultsilter.head())
</code></pre> | python|arrays|pandas|numpy | 0 |
956 | 70,480,290 | Not able to install packages that rely on Tensorflow on Mac M1 | <p>I successfully installed Tensorflow 2.7.0 on my MacBook with an M1 chip following this guide by Apple: <a href="https://developer.apple.com/metal/tensorflow-plugin/" rel="nofollow noreferrer">https://developer.apple.com/metal/tensorflow-plugin/</a></p>
<p>I now want to install a package (ethnicolr) in a project that relies on Tensorflow <code>>=1.15.2</code>. This should not be an issue, but it sadly is.</p>
<p><em>requirements.txt</em> of my project</p>
<pre><code>pandas==1.3.4
ethnicolr==0.4.0
</code></pre>
<p><em>requirements.txt</em> of ethnicolr:</p>
<pre><code>tensorflow>=1.15.2
</code></pre>
<p>Running <code>pip install -r requirements.txt</code> yields</p>
<blockquote>
<p>ERROR: Could not find a version that satisfies the requirement
tensorflow>=1.15.2 (from ethnicolr) (from versions: none) ERROR: No
matching distribution found for tensorflow>=1.15.2</p>
</blockquote>
<p>Running <code>pip list</code> shows that Tensorflow was installed. But it's not called <code>tensorflow</code>; it's called <code>tensorflow-macos</code> or <code>tensorflow-metal</code>.</p>
<pre><code>tensorboard 2.7.0
tensorboard-data-server 0.6.1
tensorboard-plugin-wit 1.8.0
tensorflow-estimator 2.7.0
tensorflow-macos 2.7.0
tensorflow-metal 0.3.0
</code></pre>
<p>What is a solution here? There must be more packages out there with the requirement of Tensorflow...</p> | <p>So I got it to run. I'm not sure if this is ideal but I'm sharing my solution to maybe help anyone running into the same issue. I installed tensorflow <code>2.6.0</code> in a virtual environment using conda / mambaforge. I opted for <code>2.6.0</code> because <code>2.5.2</code> is not available for M1 and <code>2.5.0</code> wasn't working. You can read about installing Mambaforge <a href="https://coiled.io/blog/apple-arm64-mambaforge/" rel="nofollow noreferrer">here</a>. After that, I installed Tensorflow with</p>
<pre><code>conda install tensorflow==2.6.0
</code></pre>
<p>I also set the pandas version to <code>pandas>1.2.3</code> in the <code>requirements.txt</code> of my own project (same as in ethnicolr). This resolved to pandas <code>1.3.3</code>.</p>
<p>Next, I had to solve the dependency issue with ethnicolr, since ethnicolr requires tensorflow <code>2.5.2</code>. I did that by forking the ethnicolr repo and creating a branch where I pin the tensorflow version to <code>2.6.0</code> in the <code>requirements.txt</code> and <code>setup.py</code>. You find this branch over <a href="https://github.com/ospaarmann/ethnicolr/tree/apple_m1_support_tensorflow_2_6_0" rel="nofollow noreferrer">here</a>. To use this github branch, I changed the line in my <code>requirements.txt</code> to:</p>
<pre><code>git+https://github.com/ospaarmann/ethnicolr.git@apple_m1_support_tensorflow_2_6_0#egg=ethnicolr
</code></pre>
<p>Now I had an issue with a dependency mismatch with numpy. It is described in <a href="https://stackoverflow.com/questions/66060487/valueerror-numpy-ndarray-size-changed-may-indicate-binary-incompatibility-exp">this StackOverflow thread</a>. What happened was that importing pandas or ethnicolr would throw this error:</p>
<pre><code>>>> from ethnicolr import census_ln, pred_census_ln, pred_wiki_name
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/olespaarmann/mambaforge/envs/diversity_scraper/lib/python3.8/site-packages/ethnicolr/__init__.py", line 2, in <module>
from ethnicolr.census_ln import census_ln
File "/Users/olespaarmann/mambaforge/envs/diversity_scraper/lib/python3.8/site-packages/ethnicolr/census_ln.py", line 6, in <module>
import pandas as pd
File "/Users/olespaarmann/mambaforge/envs/diversity_scraper/lib/python3.8/site-packages/pandas/__init__.py", line 22, in <module>
from pandas.compat import (
File "/Users/olespaarmann/mambaforge/envs/diversity_scraper/lib/python3.8/site-packages/pandas/compat/__init__.py", line 15, in <module>
from pandas.compat.numpy import (
File "/Users/olespaarmann/mambaforge/envs/diversity_scraper/lib/python3.8/site-packages/pandas/compat/numpy/__init__.py", line 7, in <module>
from pandas.util.version import Version
File "/Users/olespaarmann/mambaforge/envs/diversity_scraper/lib/python3.8/site-packages/pandas/util/__init__.py", line 1, in <module>
from pandas.util._decorators import ( # noqa
File "/Users/olespaarmann/mambaforge/envs/diversity_scraper/lib/python3.8/site-packages/pandas/util/_decorators.py", line 14, in <module>
from pandas._libs.properties import cache_readonly # noqa
File "/Users/olespaarmann/mambaforge/envs/diversity_scraper/lib/python3.8/site-packages/pandas/_libs/__init__.py", line 13, in <module>
from pandas._libs.interval import Interval
File "pandas/_libs/interval.pyx", line 1, in init pandas._libs.interval
ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 88 from C header, got 80 from PyObject
</code></pre>
<p>The solution here is to just ignore the dependency issues and manually install a newer version of numpy. It doesn't work when I set the numpy version in my <code>requirements.txt</code> because this throws a dependency error:</p>
<pre><code>ERROR: Cannot install numpy>=1.20.0 and tensorflow==2.6.0 because these package versions have conflicting dependencies.
The conflict is caused by:
The user requested numpy>=1.20.0
tensorflow 2.6.0 depends on numpy~=1.19.2
</code></pre>
<p>So I just installed it with <code>python -m pip install numpy==1.20.0</code>. And now everything seems to work.</p> | python|tensorflow|pip|apple-m1 | 0 |
957 | 70,655,943 | pandas - expand array to columns | <p>I have a column in my pandas dataframe that contains array of numbers:</p>
<pre><code>index | col
0 | [106.43477116337492, 6.762679391732501, 0.0, 9...
1 | [106.43477116337492, 6.58742122158056, 0.0, 9....
2 | [106.22211427793361, 7.303693743071101, 0.0, 9...
3 | [106.43477116337492, 7.955196940809838, 0.0, 9...
4 | [106.43477116337492, 6.400733170766536, 0.0, 9...
One value:
array([106.43477116, 6.76267939, 0. , 9.26076567,
10.78086689, 106.63684122, 5.98865461, 0. ,
8.16789259, 9.94066589, 2.03606668, 0. ,
0. ])
</code></pre>
<p>I need to expand the values in the array to separate columns so I will have:</p>
<pre><code>col1 | col2 | col3 ...
106.434... | 6.7526.... | 0.0 ...
106.434... | 6.5874.... | 0.0 ...
</code></pre>
<p>How can I do this? I already spent quite some time researching this, but the only thing I found is <code>explode()</code>, which is <strong>not</strong> what I want.</p> | <p>You can 'spread' the column of array values using <code>to_list</code>, then rebuild a DataFrame, adding a prefix if needed, and finally get rid of the original column.</p>
<p>Assuming your dataframe column with array values is named <code>'array'</code>:</p>
<pre><code>dfs = (df.join(pd.DataFrame(df['array'].to_list())
.add_prefix('array_'))
.drop('array', axis = 1))
>>> print(dfs)
array_0 array_1 array_2 ... array_10 array_11 array_12
0 106.434771 6.762679 0.0 ... 2.036067 0.0 0.0
1 106.434771 6.762679 0.0 ... 2.036067 0.0 0.0
[2 rows x 13 columns]
</code></pre>
<p>If you have a single column, do not want prefixes, and do not want to keep the original column, it is a bit simpler:</p>
<pre><code>dfs = pd.DataFrame(df.iloc[:,0].to_list())
>>> print(dfs)
0 1 2 3 ... 9 10 11 12
0 106.434771 6.762679 0.0 9.260766 ... 9.940666 2.036067 0.0 0.0
1 106.434771 6.762679 0.0 9.260766 ... 9.940666 2.036067 0.0 0.0
[2 rows x 13 columns]
</code></pre> | python|arrays|pandas | 2 |
958 | 70,693,018 | Getting the rolling.sum of row values with irregular time intervals | <p>I am trying to get the rolling.sum of my time series. However, the rows have varying time intervals (see below my df_water_level_US1 dataframe):</p>
<pre><code> DATE TIMEREAD WATERLEVEL(M) DateAndTime
0 01/01/2016 0:00:15 0.65 01/01/2016 0:00:15
1 01/01/2016 0:10:14 0.65 01/01/2016 0:10:14
2 01/01/2016 0:20:11 0.64 01/01/2016 0:20:11
3 01/01/2016 0:30:12 0.66 01/01/2016 0:30:12
4 01/01/2016 0:40:12 0.64 01/01/2016 0:40:12
</code></pre>
<p>and so on.
I tried to use this to get the sum for each day and save it to final_1D:</p>
<pre><code>final_1D = df_water_level_US1.set_index('DateAndTime').rolling('1D').sum()
</code></pre>
<p>but I get this error:</p>
<pre><code>ValueError: window must be an integer 0 or greater
</code></pre>
<p>The expected output is:</p>
<pre><code>DATETIMEREAD WATERLEVEL(M) DateAndTime
01/01/2016 3.24 01/01/2016
</code></pre>
<p>and so on (02/01/2016, 03/01/2016 etc)</p>
<p>Does anyone have an idea how to fix this?</p> | <p>Try:</p>
<pre><code>df_water_level_US1['DateAndTime'] = pd.to_datetime(df_water_level_US1['DateAndTime'])
final_1D = df_water_level_US1.resample('D', on='DateAndTime')['WATERLEVEL(M)'].sum()
print(final_1D.reset_index())
# Output
DateAndTime WATERLEVEL(M)
0 2016-01-01 3.24
</code></pre>
<p>The first line is not mandatory if your column <code>DateAndTime</code> is already of datetime dtype.</p> | python|pandas|datetimeindex|rolling-sum | 2 |
959 | 42,948,748 | How to "iron out" a column of numbers with duplicates in it | <p>If one has the following column:</p>
<pre><code>df = pd.DataFrame({"numbers":[1,2,3,4,4,5,1,2,2,3,4,5,6,7,7,8,1,1,2,2,3,4,5,6,6,7]})
</code></pre>
<p>How can one "iron" it out so that the duplicates become part of the series of numbers:</p>
<pre><code>numbers new_numbers
1 1
2 2
3 3
4 4
4 5
5 6
1 1
2 2
2 3
3 4
4 5
5 6
6 7
7 8
7 9
8 10
1 1
1 2
2 3
2 4
3 5
4 6
5 7
6 8
6 9
7 10
</code></pre>
<p>(I put spaces into the df for clarification)</p> | <p>It seems you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.cumcount.html" rel="nofollow noreferrer"><code>cumcount</code></a> by <code>Series</code> created with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.diff.html" rel="nofollow noreferrer"><code>diff</code></a> and compare with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.lt.html" rel="nofollow noreferrer"><code>lt</code></a> (<code><</code>) for finding starts of each group. Groups are made by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.cumsum.html" rel="nofollow noreferrer"><code>cumsum</code></a>:</p>
<pre><code>#for better testing helper df1
df1 = pd.DataFrame(index=df.index)
df1['dif'] = df.numbers.diff()
df1['compare'] = df.numbers.diff().lt(0)
df1['groups'] = df.numbers.diff().lt(0).cumsum()
print (df1)
dif compare groups
0 NaN False 0
1 1.0 False 0
2 1.0 False 0
3 1.0 False 0
4 0.0 False 0
5 1.0 False 0
6 -4.0 True 1
7 1.0 False 1
8 0.0 False 1
9 1.0 False 1
10 1.0 False 1
11 1.0 False 1
12 1.0 False 1
13 1.0 False 1
14 0.0 False 1
15 1.0 False 1
16 -7.0 True 2
17 0.0 False 2
18 1.0 False 2
19 0.0 False 2
20 1.0 False 2
21 1.0 False 2
22 1.0 False 2
23 1.0 False 2
24 0.0 False 2
25 1.0 False 2
</code></pre>
<pre><code>df['new_numbers'] = df.groupby(df.numbers.diff().lt(0).cumsum()).cumcount() + 1
print (df)
numbers new_numbers
0 1 1
1 2 2
2 3 3
3 4 4
4 4 5
5 5 6
6 1 1
7 2 2
8 2 3
9 3 4
10 4 5
11 5 6
12 6 7
13 7 8
14 7 9
15 8 10
16 1 1
17 1 2
18 2 3
19 2 4
20 3 5
21 4 6
22 5 7
23 6 8
24 6 9
25 7 10
</code></pre> | python|pandas|dataframe | 1 |
960 | 42,600,915 | Create a new DataFrame adding each key from a column dict as header | <p>I have a DataFrame which contains a certain column with Dictionaries.</p>
<p>I want to add a new header (column) to the DataFrame for each key found in any element of the column that contains dicts. The value assigned to each new cell should be <code>None</code> if that element doesn't contain that header key, and the respective key's value otherwise.</p>
<p>Here's the data for testing and visualizing what I'm saying:</p>
<p>Importing dependencies:</p>
<pre><code>import pandas as pd
import numpy as np
</code></pre>
<p>Creating a dictionary that contains a inner dictionary list:</p>
<pre><code>data = {'string_info': ['User1', 'User2', 'User3'],
'dict_info': [{'elm1': 'attr5', 'elm2': 'attr9', 'elm3': 'attr33'},
{'elm5': 'attr31', 'elm7': 'attr13'},
{'elm5': 'attr28', 'elm1': 'attr23', 'elm2': 'attr33','elm6': 'attr33'}],
'int_info': [4, 24, 31],}
</code></pre>
<p>Creating an appropriate initial DataFrame for testing:</p>
<pre><code>df = pd.DataFrame.from_dict(data)
df
</code></pre>
<p>Manually stating what I want as output:</p>
<pre><code>data2 = {'string_info': ['User1', 'User2', 'User3'],
'elm1': ['attr5',None,'attr23'],
'elm2': ['attr9',None,'attr33'],
'elm3': ['attr33',None,None],
'elm4': [None,None,None],
'elm5': [None,'attr31',None],
'elm6': [None,None,'attr33'],
'elm7': [None,None,'attr13'],
'int_info': [4, 24, 31]}
</code></pre>
<p>The desired output would be:</p>
<pre><code>df2 = pd.DataFrame.from_dict(data2)
df2
</code></pre>
<p>Thanks!</p> | <p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html" rel="nofollow noreferrer"><code>concat</code></a> with the <code>DataFrame</code> constructor to expand the <code>dict</code> column into columns:</p>
<pre><code>print (pd.DataFrame(df.dict_info.values.tolist()))
elm1 elm2 elm3 elm5 elm6 elm7
0 attr5 attr9 attr33 NaN NaN NaN
1 NaN NaN NaN attr31 NaN attr13
2 attr23 attr33 NaN attr28 attr33 NaN
print (pd.concat([pd.DataFrame(df.dict_info.values.tolist()),
df[['int_info','string_info']]], axis=1))
elm1 elm2 elm3 elm5 elm6 elm7 int_info string_info
0 attr5 attr9 attr33 NaN NaN NaN 4 User1
1 NaN NaN NaN attr31 NaN attr13 24 User2
2 attr23 attr33 NaN attr28 attr33 NaN 31 User3
</code></pre>
<p>And if you need <code>None</code>s, add <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.replace.html" rel="nofollow noreferrer"><code>replace</code></a>:</p>
<pre><code>print (pd.concat([pd.DataFrame(df.dict_info.values.tolist()).replace({np.nan:None}),
df[['int_info','string_info']]], axis=1))
elm1 elm2 elm3 elm5 elm6 elm7 int_info string_info
0 attr5 attr9 attr33 None None None 4 User1
1 None None None attr31 None attr13 24 User2
2 attr23 attr33 None attr28 attr33 None 31 User3
</code></pre> | python|pandas|dictionary|dataframe|multiple-columns | 1 |
961 | 42,913,138 | Avoid collision in importing data in R | <p>I faced an error trying to import a CSV into R which had multiple duplicate columns. Is there a way I can ignore those columns?
It's easy to do that in case of small files and small number of columns but mine is a big one ~3k columns and 10M rows.</p> | <p>Alternatively, set the check.names arg to FALSE. </p> | r|rstudio|h2o|import-from-csv|sklearn-pandas | 2 |
962 | 42,678,993 | Merging only certain columns in Python | <p>I have two data frames I would like to merge. The Main Data Frame is Population</p>
<pre><code>Pop:
Country Name Country Code Year Population CountryYear
0 Aruba ABW 1960 54208.0 ABW-1960
1 Andorra AND 1960 13414.0 AND-1960
</code></pre>
<p>I have a similar table with Country GDP</p>
<p>GDP:</p>
<pre><code> Country Name Country Code Year GDP CountryYear
0 Aruba ABW 1960 0.000000e+00 ABW-1960
1 Andorra AND 1960 0.000000e+00 AND-1960
</code></pre>
<p>What I want is to have a new frame, Combined, that has fields:</p>
<pre><code>Country Name
Country Code
Year
Population
CountryYear
</code></pre>
<p>These come from the Population table, plus the respective GDP from the GDP table matched on CountryYear, with GDP being the only column added.</p>
<p>I tried this, but I got duplicated columns:</p>
<pre><code>df_merged = pd.merge(poptransposed, gdptransposed, left_on=['CountryYear'],
right_on=['CountryYear'],
how='inner')
df_merged.head()
Country Name_x Country Code_x Year_x Population CountryYear Country Name_y Country Code_y Year_y GDP
Aruba ABW 1960 54208.0 ABW-1960 Aruba ABW 1960 0.000000e+00
Andorra AND 1960 13414.0 AND-1960 Andorra AND 1960 0.000000e+00
</code></pre> | <p>A solution is to use the Country Code as index and then use pandas concat function (<a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html" rel="nofollow noreferrer">http://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html</a>):</p>
<pre><code>Pop = Pop.set_index('Country Code', drop = True)
GDP = GDP.set_index('Country Code', drop = True)
df_merged= pd.concat([Pop, GDP['GDP'].to_frame('GDP')], axis = 1, join = 'inner').reset_index(drop = False)
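# A hypothetical alternative (not from the original answer): keep only the
# columns you need and merge on the CountryYear key, as attempted in the question:
# df_merged = poptransposed.merge(gdptransposed[['CountryYear', 'GDP']],
#                                 on='CountryYear', how='inner')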
</code></pre> | python|pandas|merge | 1 |
963 | 42,759,401 | Numpy horizontal concat with failure | <p>I want to concatenate two numpy arrays with the shape <code>(100,3) and (100,7)</code> to get a <code>(100,10)</code> matrix.</p>
<p>I've tried it using <code>hstack</code> and <code>concatenate</code>, but I only get a <code>ValueError: all the int arrays must have same number of dimensions</code>.</p>
<p>In a dummy example like the following it works ...</p>
<pre><code>x=np.arange(30).reshape(10,3)
y=np.arange(20).reshape(10,2)
np.concatenate((x,y), axis=1)
</code></pre>
<p><strong>UPDATE 1:</strong></p>
<p>I've created the two matrices with sklearn's preprocessing module (RobustScaler and OneHotEncoder).</p>
<p><strong>UPDATE 2:</strong></p>
<p>When using scipy.sparse.hstack it works, but why?</p> | <p>If you want to concatenate vertically, <code>axis</code> must be equal to 0. This is explained in <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.concatenate.html" rel="nofollow noreferrer">the doc for concatenate</a>.</p>
<p>In this link we have this example:</p>
<pre><code>>>> a = np.array([[1, 2], [3, 4]])
>>> b = np.array([[5, 6]])
>>> np.concatenate((a, b), axis=0)
array([[1, 2],
       [3, 4],
       [5, 6]])
>>> np.concatenate((a, b.T), axis=1)
array([[1, 2, 5],
       [3, 4, 6]])
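# Regarding UPDATE 2 in the question (an assumption, not part of this answer):
# sklearn's OneHotEncoder returns a scipy.sparse matrix by default, which
# np.hstack/np.concatenate cannot combine with a dense ndarray, while
# scipy.sparse.hstack handles it; densifying first, e.g.
# np.hstack([x, y.toarray()]), is another way around the error.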
</code></pre> | python|arrays|numpy|concatenation | 0 |
964 | 30,735,728 | How to multiply 3 matrices using shared memory in Python? | <p>I want to multiply 3 matrices (like E = AxBxC) using shared memory with the multiprocessing module. I can do it with 2 matrices, but when I want to repeat the same procedure for the third matrix, I get stuck and don't know how to handle the shared array.
I know I must use the multiprocessing Array but don't know how to manage it.
Here is the way I used the array in my code:</p>
<pre><code>mp_arr_one = multiprocessing.Array(ctypes.c_int, 3*3)
</code></pre>
<p>and then in my function:</p>
<pre><code>arr = numpy.frombuffer(mp_arr_one.get_obj(), dtype=ctypes.c_int)
res = arr.reshape((3,3))
</code></pre>
<p>Everything's good for the first part (D = AxB), but when I want to calculate E = DxC, the code goes wrong and the result is completely incorrect.</p>
<p>Thanks in advance.</p> | <p>Just use <code>numpy</code> with an optimized BLAS (e.g. OpenBLAS, BLAS ATLAS, MKL) that supports multi-threading. </p>
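<p>For instance, a minimal sketch (the 3x3 shapes are just illustrative):</p>
<pre><code>import numpy as np

A = np.random.rand(3, 3)
B = np.random.rand(3, 3)
C = np.random.rand(3, 3)

# E = A x B x C, evaluated by the multi-threaded BLAS behind numpy's dot
E = A.dot(B).dot(C)
</code></pre>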
<p>Matrix multiplication will be parallelized, and because it includes extensive architecture dependent optimizations, this approach will be faster than explicitly managing shared memory in Python with <code>multiprocessing.Array</code>. By the way, you should use an optimized BLAS implementation with <code>numpy</code>, if speed is an issue, even in single thread execution, so there is no way around it.</p> | python|numpy|matrix|multiprocessing | 0 |
965 | 30,559,552 | Comparison between one element and all the others of a DataFrame column | <p>I have a list of tuples which I turned into a DataFrame with thousands of rows, like this:</p>
<pre><code> frag mass prot_position
0 TFDEHNAPNSNSNK 1573.675712 2
1 EPGANAIGMVAFK 1303.659458 29
2 GTIK 417.258734 2
3 SPWPSMAR 930.438172 44
4 LPAK 427.279469 29
5 NEDSFVVWEQIINSLSALK 2191.116099 17
...
</code></pre>
<p>and I have the following rule:</p>
<pre><code>def are_dif(m1, m2, ppm=10):
if abs((m1 - m2) / m1) < ppm * 0.000001:
v = False
else:
v = True
return v
</code></pre>
<p>So, I only want the "frag"s that have a mass that differs from all the other fragments' masses. How can I achieve that "selection"?</p>
<p>Then, I have a list named "pinfo" that contains:</p>
<pre><code>d = {'id':id, 'seq':seq_code, "1HW_fit":hits_fit}
# one for each protein
# each dictionary as the position of the protein that it describes.
</code></pre>
<p>So, I want to add 1 to the "hits_fit" value in the dictionary corresponding to the protein.</p> | <p>If I'm understanding correctly (not sure if I am), you can accomplish quite a bit by just sorting. First though, let me adjust the data to have a mix of close and far values for mass:</p>
<pre><code> Unnamed: 0 frag mass prot_position
0 0 TFDEHNAPNSNSNK 1573.675712 2
1 1 EPGANAIGMVAFK 1573.675700 29
2 2 GTIK 417.258734 2
3 3 SPWPSMAR 417.258700 44
4 4 LPAK 427.279469 29
5 5 NEDSFVVWEQIINSLSALK 2191.116099 17
</code></pre>
<p>Then I think you can do something like the following to select the "good" ones. First, create 'pdiff' (percent difference) to see how close mass is to the nearest neighbors:</p>
<pre><code>ppm = .00001
df = df.sort('mass')
df['pdiff'] = (df.mass-df.mass.shift()) / df.mass
Unnamed: 0 frag mass prot_position pdiff
3 3 SPWPSMAR 417.258700 44 NaN
2 2 GTIK 417.258734 2 8.148421e-08
4 4 LPAK 427.279469 29 2.345241e-02
1 1 EPGANAIGMVAFK 1573.675700 29 7.284831e-01
0 0 TFDEHNAPNSNSNK 1573.675712 2 7.625459e-09
5 5 NEDSFVVWEQIINSLSALK 2191.116099 17 2.817926e-01
</code></pre>
<p>The first and last data lines make this a little tricky so this next line backfills the first line and repeats the last line so that the following mask works correctly. This works for the example here, but might need to be tweaked for other cases (but only as far as the first and last lines of data are concerned).</p>
<pre><code>df = df.iloc[range(len(df))+[-1]].bfill()
df[ (df['pdiff'] > ppm) & (df['pdiff'].shift(-1) > ppm) ]
</code></pre>
<p>Results:</p>
<pre><code> Unnamed: 0 frag mass prot_position pdiff
4 4 LPAK 427.279469 29 0.023452
5 5 NEDSFVVWEQIINSLSALK 2191.116099 17 0.281793
</code></pre>
<p>Sorry, I don't understand the second part of the question at all.</p>
<p><strong>Edit to add:</strong> As mentioned in a comment to @AmiTavory's answer, I think possibly the sorting approach and groupby approach could be combined for a simpler answer than this. I might try at a later time, but everyone should feel free to give this a shot themselves if interested.</p> | python|pandas | 2 |
966 | 30,410,821 | Save vars pr iteration to df and when done save df to csv | <p>I need to make a DataFrame (df_max_res) with the 15 best performances from my stock strategies combined with company tickers (AAPL for Apple Computers etc.). I have a list of more than 500 stock tickers that I fetch and on which I analyze using four of my own strategies.</p>
<p>In the <code>for eachP in perf_array</code> nested inner iteration I obtain performance results from all combinations of the strategy and ticker. I want to save these results to a DataFrame and to a csv file using this code (or a better suggestion):</p>
<pre><code>#==============================================================================
# Saving results in pandas and to a csv-file
#==============================================================================
def saving_res_pandas():
global df_res, df_max_res
df_res = pd.DataFrame(columns=('Strategy', 'Ticker', 'Strat',
'ROI', 'Sharpe R', 'VaR'))
for eachP in perf_array:
df_res.loc[len(df_res) + 1] = [strategy, ticker, strat, stratROI]
# Select the top 15 of all results (ticker/strategy combo) into new df.
df_max_res = df_res[:15]
# Saving to a csv.
df_max_res.to_csv('df_performance_data_sp500ish.csv')
print('After analysing %1.1f Years ~ %d workdays - %d strategies and %d tickers' '\n'
'The following matrix of tickers and strategies show highest ROI: '
% (years, days, len(strategies), len(stock_list))
)
return df_res
#==============================================================================
# Chose which of below methods to save perf-data to disk with
#==============================================================================
saving_res_pandas()
# Reading in df_max_res with best ticker/strategy results
df_max_res = pd.read_csv('df_performance_data_sp500ish.csv')
print(df_max_res)
</code></pre>
<p>The code above creates my DataFrame just fine, but it does not save the iteration performance result as I expect.</p>
<p>I am getting this output:</p>
<pre><code>=======================================================
aa === <function strategy1 at 0x00000000159A0BF8> ==
=======================================================
Holdings: 0
Funds: 14659
Starting Valuation: USD 15000.00 ~ DKK: 100000.50
Current Valuation: USD 14659.05 ~ DKK: 97727.49
=== aa == <function strategy1 at 0x00000000159A0BF8> ==
ROI: -1.9 perc. & Annual Profit -1894 DKK ==
######################################################################
cannot set a row with mismatched columns
== ALL Tickers Done for == <function strategy1 at 0x00000000159A0BF8> ==================
Strategy analysis pr ticker - COMPLETE !
Empty DataFrame
Columns: [Unnamed: 0, Strategy, Ticker, ROI, SharpeR, VaR]
Index: []
</code></pre> | <p>Finally I managed to come up with the right answer to my worries.</p>
<p>I solved it this way:</p>
<p>Before the for loops:</p>
<pre><code># Creating the df that will save my results in the backtest iterations
cols = ('Strategy','Ticker','ROI') # ,'Sharpe R','VaR','Strat'
df_res = pd.DataFrame(columns = cols)
</code></pre>
<p>Inside the for, and nested for loops</p>
<pre><code>def saving_res_pandas():
global df_res, df_max_res
df_res = df_res.append({'Ticker':ticker,'Strategy':strategy, 'ROI':stratROI,}, ignore_index = True)
return df_res
</code></pre>
<p>Outside and after the for loops:</p>
<pre><code> df_res = df_res.sort(['ROI'], ascending=[0])
df_max_res = df_res.head(15) # Select the top x of all results (ticker/strategy combo) into new df
# saving to a csv #
df_max_res.to_csv('df_performance_data_sp500ish.csv')
print('After analysing %1.1f Years ~ %d workdays - %d strategies and %d tickers' '\n'
'The following matrix of tickers and strategies show highest ROI:' %(years, days, len(strategies), len(stock_list))
)
print()
print(df_max_res)
</code></pre>
<p>Thank you for all your help and inspiration. </p> | python|csv|pandas|dataframe | 0 |
967 | 26,805,434 | Saving numpy array into dictionary using loop | <p>Below is my loop to loop through a bigger array (sortdata), pull out individual columns, and save those into a dictionary based on its iteration in the loop. My problem is that this loop is only looping through and saving just one column. It saves the variabledict[1] array and nothing else. The sortdata array contains four columns (the first two do not have pertinent data so I omitted them in the code). There should be a variabledict[0]. Any help would be greatly appreciated.</p>
<p>datavalues = floating number that pertains to total columns</p>
<p>sortdata = large array I am pulling data from</p>
<pre><code>for k in range(int(datavalues - 2)):
datavalloop = sortdata[:][0:,k + 2]
variabledict = {}
variabledict[k] = datavalloop
</code></pre> | <p>Place <code>variabledict = {}</code> outside the loop. Re-creating the <strong>dictionary</strong> on every iteration clears its previous values, leaving only the values from the last iteration.</p> | python|arrays|for-loop|numpy|dictionary | 1 |
968 | 19,386,437 | Python - create mask of unique values in array | <p>I have two numpy arrays, <code>x</code> and <code>y</code> (their lengths are around 2M). The values in <code>x</code> are ordered, but some of them are identical.</p>
<p>The task is to remove values for both <code>x</code> and <code>y</code> when the values in <code>x</code> are identical. My idea is to create a mask. Here is what I have done so far:</p>
<pre><code>def createMask(x):
idx = np.empty(x.shape, dtype=bool)
for i in xrange(len(x)-1):
if x[i+1] == x[i]:
idx[i] = False
return idx
idx = createMask(x)
x = x[idx]
y = y[idx]
</code></pre>
<p>This method works fine, but it is slow (705ms with <code>%timeit</code>). Also, I think this looks really clumsy. Is there a more elegant and efficient way? (I'm sure there is.)</p>
<p><strong>Updated with best answer</strong></p>
<p>The <strong>second</strong> method is</p>
<pre><code>idx = [x[i+1] == x[i] for i in xrange(len(x)-1)]
</code></pre>
<p>And the <strong>third</strong> (and fastest) method is</p>
<pre><code>idx = x[:-1] == x[1:]
</code></pre>
<p>The results are (using ipython's <code>%timeit</code>):</p>
<p><strong>First</strong> method: 751ms</p>
<p><strong>Second</strong> method: 618ms</p>
<p><strong>Third</strong> method: 3.63ms</p>
<p>Credit to mtitan8 for both methods.</p> | <p>I believe the fastest method is to compare <code>x</code> using numpy's <code>==</code> array operator:</p>
<pre><code>idx = x[:-1] == x[1:]
</code></pre>
<p>On my machine, using <code>x</code> with a million random integers in [0, 100],</p>
<pre><code>In[15]: timeit idx = x[:-1] == x[1:]
1000 loops, best of 3: 1 ms per loop
</code></pre> | python|performance|optimization|numpy|mask | 3 |
969 | 19,394,328 | Installing Numpy and matplotlib on OS X 10.8.5 | <p>So I've been trying to install matplotlib and numpy on my Mac OS X 10.8 for two days straight now. Just can't seem to get them up and running. I get all sorts of errors. I finally managed to install numpy 1.5 then when I install matplotlib with "pip install matplotlib==1.0.1", I get an error after some progress through the installation: This is the last part of the error:</p>
<pre><code>gcc -fno-strict-aliasing -fno-common -dynamic -arch i386 -arch x86_64 -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -DPY_ARRAY_UNIQUE_SYMBOL=MPL_ARRAY_API -DPYCXX_ISO_CPP_LIB=1 -I/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/include -I. -I/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/include -Isrc -Iagg24/include -I. -I/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/include -I/opt/local/include/freetype2 -I/opt/local/include -I. -I/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c CXX/IndirectPythonInterface.cxx -o build/temp.macosx-10.5-intel-2.7/CXX/IndirectPythonInterface.o
gcc -fno-strict-aliasing -fno-common -dynamic -arch i386 -arch x86_64 -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -DPY_ARRAY_UNIQUE_SYMBOL=MPL_ARRAY_API -DPYCXX_ISO_CPP_LIB=1 -I/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/include -I. -I/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/include -Isrc -Iagg24/include -I. -I/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/include -I/opt/local/include/freetype2 -I/opt/local/include -I. -I/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c CXX/cxxextensions.c -o build/temp.macosx-10.5-intel-2.7/CXX/cxxextensions.o
gcc -fno-strict-aliasing -fno-common -dynamic -arch i386 -arch x86_64 -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -DPY_ARRAY_UNIQUE_SYMBOL=MPL_ARRAY_API -DPYCXX_ISO_CPP_LIB=1 -I/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/include -I. -I/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/include -Isrc -Iagg24/include -I. -I/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/include -I/opt/local/include/freetype2 -I/opt/local/include -I. -I/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c src/backend_agg.cpp -o build/temp.macosx-10.5-intel-2.7/src/backend_agg.o
In file included from src/backend_agg.cpp:9:
In file included from src/_backend_agg.h:32:
agg24/include/agg_renderer_outline_aa.h:1368:45: error: binding of reference to type 'agg::line_profile_aa' to a value of type 'const agg::line_profile_aa' drops qualifiers
line_profile_aa& profile() { return *m_profile; }
^~~~~~~~~~
1 error generated.
error: command 'gcc' failed with exit status 1
</code></pre>
<p>When I type "gcc" in the terminal, it runs fine. I installed XCode 5 and the command line tools. Does anyone know how to fix this?</p> | <p>Another (even simpler) option than Canopy and Anaconda, is just to download Spyder's <a href="http://spyderlib.googlecode.com/files/spyder-2.2.5.dmg" rel="nofollow">dmg</a>, which comes with the latest versions of Numpy, SciPy, matplotlib, Pandas, Sympy and the sci-kits. This is a pure drag and drop installer (like the Firefox or Chrome ones).</p> | python|macos|numpy|matplotlib | 0 |
970 | 29,028,213 | Coincidence matrix from array with clusters assignments | <p>I have an array containing the cluster assigned to every point.</p>
<pre><code>import numpy as np
cluster_labels = np.array([1,1,2,3,4])
</code></pre>
<p>How can I get a matrix like:</p>
<pre><code>1 1 0 0 0
1 1 0 0 0
0 0 1 0 0
0 0 0 1 0
0 0 0 0 1
</code></pre>
<p>I'm sure there is something cleverer than:</p>
<pre><code>import numpy as np
cluster_labels = np.array([1,1,2,3,4])
n = cluster_labels.shape[0]
pairwise_clustering = np.zeros((n, n))
for i in xrange(n):
for j in xrange(n):
if cluster_labels[i] == cluster_labels[j]:
pairwise_clustering[i,j] = 1
print pairwise_clustering
[[ 1. 1. 0. 0. 0.]
[ 1. 1. 0. 0. 0.]
[ 0. 0. 1. 0. 0.]
[ 0. 0. 0. 1. 0.]
[ 0. 0. 0. 0. 1.]]
</code></pre>
<p><strong>Edit</strong> (bonus):
I'm interested in the mean pairwise clustering of a set of $n$ <code>cluster_labels</code>. So I would like to get the mean of the pairwise_clustering directly from an array of many <code>cluster_labels</code>:</p>
<pre><code>n_cluster_labels = np.array([[1,1,2,3,4],
[1,2,3,3,4],
[1,1,2,3,4]])
</code></pre> | <p>It's difficult to say whether what you're doing is the best way to tackle the problem without knowing more about the problem itself.</p>
<p>However, it is possible to get the matrix you're looking for in far less code:</p>
<pre><code>x = np.array([1,1,2,3,4])
(x[None,:] == x[:,None]).astype(int)
</code></pre>
<p>Conceptually it does the same as your code. It just uses some more of numpy's features instead of python for-loops.</p>
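<p>As for the "bonus" part of the question, a sketch of the same idea applied to several label arrays (averaging one 0/1 matrix per row of <code>n_cluster_labels</code>) could look like this:</p>
<pre><code>n_cluster_labels = np.array([[1,1,2,3,4],
                             [1,2,3,3,4],
                             [1,1,2,3,4]])
mean_pairwise = np.mean([(row[None,:] == row[:,None]).astype(float)
                         for row in n_cluster_labels], axis=0)
</code></pre>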
<p>Indexing <code>x</code> as <code>x[None,:]</code> adds a dummy axis of length 1. We then exploit numpy's broadcasting feature and apply the equal operator element-wise on the broadcasted arrays. In the end we convert the boolean result to integers. (replace <code>int</code> with <code>float</code> to get floating point numbers instead).</p> | python|numpy | 1 |
971 | 23,705,107 | Cython 1D normalized slidding cross correlation Optimization | <p>I have the following code which does a normalized cross correlation looking for similarities in two signals in python:</p>
<pre><code>def normcorr(template,srchspace):
template=(template-np.mean(template))/(np.std(template)*len(template)) # Normalize template
CCnorm=srchspace.copy()
CCnorm=CCnorm[np.shape(template)[0]:] # trim CC matrix
for a in range(len(CCnorm)):
s=srchspace[a:a+np.shape(template)[0]]
sp=(s-np.mean(s))/np.std(s)
CCnorm[a]=numpy.sum(numpy.multiply(template,sp))
return CCnorm
</code></pre>
<p>but as you can imagine it is far too slow. Looking at the cython documentation, large increases in speed are promised when performing loops in raw python. So I attempted to write some cython code with data typing of the variables that looks like this:</p>
<pre><code>from __future__ import division
import numpy as np
import math as m
cimport numpy as np
cimport cython
def normcorr(np.ndarray[np.float32_t, ndim=1] template,np.ndarray[np.float32_t, ndim=1] srchspace):
cdef int a
cdef np.ndarray[np.float32_t, ndim=1] s
cdef np.ndarray[np.float32_t, ndim=1] sp
cdef np.ndarray[np.float32_t, ndim=1] CCnorm
template=(template-np.mean(template))/(np.std(template)*len(template))
CCnorm=srchspace.copy()
CCnorm=CCnorm[len(template):]
for a in range(len(CCnorm)):
s=srchspace[a:a+len(template)]
sp=(s-np.mean(s))/np.std(s)
CCnorm[a]=np.sum(np.multiply(template,sp))
return CCnorm
</code></pre>
<p>but once I compile it, the code actually runs slower than the pure Python code. I found here (<a href="https://stackoverflow.com/questions/16029050/how-to-call-numpy-scipy-c-functions-from-cython-directly-without-python-call-ov">How to call numpy/scipy C functions from Cython directly, without Python call overhead?</a>) that calling numpy from Cython might significantly slow down the code. Is this the issue for my code, in which case I would have to define inline functions to replace all calls to np, or is there something else I am doing wrong that I am missing?</p> | <p>Because you call numpy functions in a Cython loop, there will be no speed improvement.</p>
<p>If you use pandas, you can use <code>rolling_mean()</code> and <code>rolling_std()</code> from pandas together with <code>convolve()</code> from numpy to do the calculation very fast; here is the code:</p>
<pre><code>import numpy as np
import pandas as pd
np.random.seed()
def normcorr(template,srchspace):
template=(template-np.mean(template))/(np.std(template)*len(template)) # Normalize template
CCnorm=srchspace.copy()
CCnorm=CCnorm[np.shape(template)[0]:] # trim CC matrix
for a in range(len(CCnorm)):
s=srchspace[a:a+np.shape(template)[0]]
sp=(s-np.mean(s))/np.std(s)
CCnorm[a]=np.sum(np.multiply(template,sp))
return CCnorm
def fast_normcorr(t, s):
    n = len(t)
    nt = (t-np.mean(t))/(np.std(t)*n)       # normalized template, as in normcorr above
    sum_nt = nt.sum()
    a = pd.rolling_mean(s, n)[n-1:-1]       # mean of every length-n window of s
    b = pd.rolling_std(s, n)[n-1:-1]        # sample std of every window ...
    b *= np.sqrt((n-1.0) / n)               # ... rescaled to the population std used above
    c = np.convolve(nt[::-1], s, mode="valid")[:-1]  # sliding dot products sum(nt * window)
    result = (c - sum_nt * a) / b           # = sum(nt * (window - mean) / std) per window
    return result
n = 100
m = 1000
t = np.random.rand(n)
s = np.random.rand(m)
r1 = normcorr(t, s)
r2 = fast_normcorr(t, s)
assert np.allclose(r1, r2)
</code></pre>
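<p>Note: <code>pd.rolling_mean</code> and <code>pd.rolling_std</code> were removed in later pandas releases; on a modern pandas the same two arrays can be computed with the <code>rolling</code> method (a small sketch, not part of the original timing):</p>
<pre><code>a = pd.Series(s).rolling(n).mean().to_numpy()[n-1:-1]
b = pd.Series(s).rolling(n).std().to_numpy()[n-1:-1]
</code></pre>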
<p>You can check that the results <code>r1</code> and <code>r2</code> are the same. And here is a <code>timeit</code> test:</p>
<pre><code>%timeit normcorr(t, s)
%timeit fast_normcorr(t, s)
</code></pre>
<p>the output:</p>
<pre><code>10 loops, best of 3: 59 ms per loop
1000 loops, best of 3: 273 µs per loop
</code></pre>
<p>It's 200x faster.</p> | numpy|cython | 3 |
972 | 15,419,914 | Downsampling irregular time series in pandas | <p>I have a time series in pandas that looks like this:</p>
<pre>
<code>
2012-01-01 00:00:00.250000 12
2012-01-01 00:00:00.257000 34
2012-01-01 00:00:00.258000 45
2012-01-01 00:00:01.350000 56
2012-01-01 00:00:02.300000 78
2012-01-01 00:00:03.200000 89
2012-01-01 00:00:03.500000 90
2012-01-01 00:00:04.200000 12
</code>
</pre>
<p>Is there a way to downsample it to 1 second data without aligning to 1-second boundaries? For instance, is there a way to get this data out (assuming downsampling a way where the latest value that occurs before or on the sample time is used):</p>
<pre>
<code>
2012-01-01 00:00:00.250000 12
2012-01-01 00:00:01.250000 45
2012-01-01 00:00:02.250000 56
2012-01-01 00:00:03.250000 89
2012-01-01 00:00:04.250000 12
</code>
</pre> | <p>Create a DatetimeIndex with a frequency of 1 second and an offset of a quarter-second, like so.</p>
<pre><code>index = pd.date_range('2012-01-01 00:00:00.25',
'2012-01-01 00:00:04.25', freq='S')
</code></pre>
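<p>(To make the next step reproducible, the question's data can be loaded into a Series; the name <code>s</code> used below is assumed:)</p>
<pre><code>s = pd.Series([12, 34, 45, 56, 78, 89, 90, 12],
              index=pd.to_datetime(['2012-01-01 00:00:00.250', '2012-01-01 00:00:00.257',
                                    '2012-01-01 00:00:00.258', '2012-01-01 00:00:01.350',
                                    '2012-01-01 00:00:02.300', '2012-01-01 00:00:03.200',
                                    '2012-01-01 00:00:03.500', '2012-01-01 00:00:04.200']))
</code></pre>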
<p>Conform your data to this index, and "fill forward" to downsample the way you show in your desired result.</p>
<pre><code>s.reindex(index, method='ffill')
data
2012-01-01 00:00:00.250000 12
2012-01-01 00:00:01.250000 45
2012-01-01 00:00:02.250000 56
2012-01-01 00:00:03.250000 89
2012-01-01 00:00:04.250000 12
</code></pre> | python|pandas | 5 |
973 | 29,533,268 | Find density of points from their scatter plot in python | <p>Can I go through equally sized boxes in the scatter plot, so I can calculate how many points there are on average in each box?
Or, is there a specific function in python to calculate this?
I don't want a colored density plot, but a number that represents the density of these points in the scatter plot.</p>
<p>Here is for example a plot of the eigenvalues of a random matrix:
<img src="https://i.stack.imgur.com/Ndy1Y.jpg" alt="Eigenvalues of a random matrix" /></p>
<p>How would I find their density?</p> | <pre><code>import numpy as np
from scipy import linalg as la
e = la.eigvals(my_matrix)
hist,xedges,yedges = np.histogram2d(e.real,e.imag,bins=40,normed=False)
</code></pre>
<p>So in this case, 'hist' would be a 40x40 array (since bins=40). Its elements are the number of eigenvalues for each bin.</p>
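<p>If an actual density (rather than raw counts) is needed, the counts can be scaled by the bin area and the total number of points; a small sketch using the variables above:</p>
<pre><code>bin_area = (xedges[1] - xedges[0]) * (yedges[1] - yedges[0])
density = hist / (hist.sum() * bin_area)   # integrates to 1 over the binned region
</code></pre>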
<p>Thanks to @jepio and @plonser for the comments.</p> | python|python-2.7|numpy|matplotlib | 1 |
974 | 29,442,370 | How to correctly read csv in Pandas while changing the names of the columns | <p>An absolute basic read_csv question. </p>
<h2>I have data that looks like the following in a csv file -</h2>
<pre><code>Date,Open Price,High Price,Low Price,Close Price,WAP,No.of Shares,No. of Trades,Total Turnover (Rs.),Deliverable Quantity,% Deli. Qty to Traded Qty,Spread High-Low,Spread Close-Open
28-February-2015,2270.00,2310.00,2258.00,2294.85,2279.192067772602217319,73422,8043,167342840.00,11556,15.74,52.00,24.85
27-February-2015,2267.25,2280.85,2258.00,2266.35,2269.239841485775122730,50721,4938,115098114.00,12297,24.24,22.85,-0.90
26-February-2015,2314.90,2314.90,2250.00,2259.50,2277.198324862194860047,69845,8403,159050917.00,22046,31.56,64.90,-55.40
25-February-2015,2290.00,2332.00,2278.35,2318.05,2315.100614216488163214,161995,10174,375034724.00,102972,63.56,53.65,28.05
24-February-2015,2276.05,2295.00,2258.00,2278.15,2281.058946240263344242,52251,7726,119187611.00,13292,25.44,37.00,2.10
23-February-2015,2303.95,2311.00,2253.25,2270.70,2281.912259219760108491,75951,7344,173313518.00,24969,32.88,57.75,-33.25
20-February-2015,2324.00,2335.20,2277.00,2284.30,2301.631421152326354478,79717,10233,183479152.00,23045,28.91,58.20,-39.70
19-February-2015,2304.00,2333.90,2292.00,2326.60,2321.485466301625211160,85835,8847,199264705.00,29728,34.63,41.90,22.60
18-February-2015,2284.00,2305.00,2261.10,2295.75,2282.060986778089405300,69884,6639,159479550.00,26665,38.16,43.90,11.75
16-February-2015,2281.00,2305.85,2266.00,2278.50,2284.961866239581019628,85541,10149,195457923.00,22164,25.91,39.85,-2.50
13-February-2015,2311.00,2324.90,2286.95,2296.40,2311.371235111317676864,109731,5570,253629077.00,69039,62.92,37.95,-14.60
12-February-2015,2280.00,2322.85,2275.00,2315.45,2301.372038211769425569,79766,9095,183571242.00,33981,42.60,47.85,35.45
11-February-2015,2275.00,2295.00,2258.25,2287.20,2279.587966250020639664,60563,7467,138058686.00,20058,33.12,36.75,12.20
10-February-2015,2244.90,2297.40,2225.00,2280.30,2269.562228214830293104,141656,13026,321497107.00,55577,39.23,72.40,35.40
</code></pre>
<p>--</p>
<p>I am trying to read this data in a pandas dataframe using the following variations of read_csv. I am only interested in two columns. </p>
<pre><code>z = pd.read_csv('file.csv', parse_dates=True, index_col="Date", usecols=["Date", "Open Price", "Close Price"], names=["Date", "O", "C"], header=0)
</code></pre>
<p>What I get is </p>
<pre><code> O C
Date
2015-02-28 NaN NaN
2015-02-27 NaN NaN
2015-02-26 NaN NaN
2015-02-25 NaN NaN
2015-02-24 NaN NaN
</code></pre>
<p>Or</p>
<pre><code>z = pd.read_csv('file.csv', parse_dates=True, index_col="Date", usecols=["Date", "Open", "Close"], names=["Date", "Open Price", "Close Price"], header=0)
</code></pre>
<p>The result is - </p>
<pre><code> Open Price Close Price
Date
2015-02-28 NaN NaN
2015-02-27 NaN NaN
2015-02-26 NaN NaN
2015-02-25 NaN NaN
</code></pre>
<p>Am I missing something fundamental or is there an issue with read_csv of pandas <code>0.13.1</code> - my version on Debian Wheezy?</p> | <p>You are right, something is odd with the <code>names</code> attribute. It seems to me that you cannot use both at the same time: either you set a name for every column of the CSV file, or you don't set names at all. So it seems that you can't set the names when you are not taking all the columns (<code>usecols</code>).</p>
<p><code>names : array-like
List of column names to use. If file contains no header row, then you should explicitly pass header=None</code></p>
<p>You might already know it, but you can also rename the columns afterwards.</p>
<pre><code>import pandas as pd
from StringIO import StringIO  # Python 2; on Python 3 use: from io import StringIO
csv = r"""Date,Open Price,High Price,Low Price,Close Price,WAP,No.of Shares,No. of Trades,Total Turnover (Rs.),Deliverable Quantity,% Deli. Qty to Traded Qty,Spread High-Low,Spread Close-Open
28-February-2015,2270.00,2310.00,2258.00,2294.85,2279.192067772602217319,73422,8043,167342840.00,11556,15.74,52.00,24.85
27-February-2015,2267.25,2280.85,2258.00,2266.35,2269.239841485775122730,50721,4938,115098114.00,12297,24.24,22.85,-0.90
26-February-2015,2314.90,2314.90,2250.00,2259.50,2277.198324862194860047,69845,8403,159050917.00,22046,31.56,64.90,-55.40
25-February-2015,2290.00,2332.00,2278.35,2318.05,2315.100614216488163214,161995,10174,375034724.00,102972,63.56,53.65,28.05
24-February-2015,2276.05,2295.00,2258.00,2278.15,2281.058946240263344242,52251,7726,119187611.00,13292,25.44,37.00,2.10
23-February-2015,2303.95,2311.00,2253.25,2270.70,2281.912259219760108491,75951,7344,173313518.00,24969,32.88,57.75,-33.25
20-February-2015,2324.00,2335.20,2277.00,2284.30,2301.631421152326354478,79717,10233,183479152.00,23045,28.91,58.20,-39.70
19-February-2015,2304.00,2333.90,2292.00,2326.60,2321.485466301625211160,85835,8847,199264705.00,29728,34.63,41.90,22.60
18-February-2015,2284.00,2305.00,2261.10,2295.75,2282.060986778089405300,69884,6639,159479550.00,26665,38.16,43.90,11.75
16-February-2015,2281.00,2305.85,2266.00,2278.50,2284.961866239581019628,85541,10149,195457923.00,22164,25.91,39.85,-2.50
13-February-2015,2311.00,2324.90,2286.95,2296.40,2311.371235111317676864,109731,5570,253629077.00,69039,62.92,37.95,-14.60
12-February-2015,2280.00,2322.85,2275.00,2315.45,2301.372038211769425569,79766,9095,183571242.00,33981,42.60,47.85,35.45
11-February-2015,2275.00,2295.00,2258.25,2287.20,2279.587966250020639664,60563,7467,138058686.00,20058,33.12,36.75,12.20
10-February-2015,2244.90,2297.40,2225.00,2280.30,2269.562228214830293104,141656,13026,321497107.00,55577,39.23,72.40,35.40"""
df = pd.read_csv(StringIO(csv),
usecols=["Date", "Open Price", "Close Price"],
header=0)
df.columns = ['Date', 'O', 'C']
df
</code></pre>
<p>output:</p>
<pre><code> Date O C
0 28-February-2015 2270.00 2294.85
1 27-February-2015 2267.25 2266.35
2 26-February-2015 2314.90 2259.50
3 25-February-2015 2290.00 2318.05
4 24-February-2015 2276.05 2278.15
5 23-February-2015 2303.95 2270.70
6 20-February-2015 2324.00 2284.30
7 19-February-2015 2304.00 2326.60
8 18-February-2015 2284.00 2295.75
9 16-February-2015 2281.00 2278.50
10 13-February-2015 2311.00 2296.40
11 12-February-2015 2280.00 2315.45
12 11-February-2015 2275.00 2287.20
13 10-February-2015 2244.90 2280.30
</code></pre> | python|csv|pandas | 26 |
975 | 62,398,372 | create unique identifier in dataframe based on combination of columns, but only for duplicated rows | <p>A corollary of the question here:
<a href="https://stackoverflow.com/questions/62396518/create-unique-identifier-in-dataframe-based-on-combination-of-columns">create unique identifier in dataframe based on combination of columns</a></p>
<p>In the foll. dataframe, </p>
<pre><code> id Lat Lon Year Area State
50319 -36.0629 -62.3423 2019 90 Iowa
18873 -36.0629 -62.3423 2017 90 Iowa
18876 -36.0754 -62.327 2017 124 Illinois
18878 -36.0688 -62.3353 2017 138 Kansas
</code></pre>
<p>I want to create a new column which assigns a unique identifier based on whether the columns Lat, Lon and Area have the same values. E.g. in this case rows 1 and 2 have the same values in those columns and will be given the same unique identifier 0_Iowa where Iowa comes from the State column. However, if there is no duplicate for a row, then I just want to use the state name. The end result should look like this:</p>
<pre><code>id Lat Lon Year Area State unique_id
50319 -36.0629 -62.3423 2019 90 Iowa 0_Iowa
18873 -36.0629 -62.3423 2017 90 Iowa 0_Iowa
18876 -36.0754 -62.327 2017 124 Illinois Illinois
18878 -36.0688 -62.3353 2017 138 Kansas Kansas
</code></pre> | <p>You can use an <code>np.where</code>:</p>
<pre><code># duplicated(keep=False) flags every row whose (Lat, Lon) pair occurs more than once;
# for those rows, ngroup() numbers each (Lat, Lon) group and is prefixed to the State
df['unique_id'] = np.where(df.duplicated(['Lat','Lon'], keep=False),
                   df.groupby(['Lat','Lon'], sort=False).ngroup().astype('str') + '_' + df['State'],
                   df['State'])
</code></pre>
<p>Or similar idea with <code>pd.Series.where</code>:</p>
<pre><code>df['unique_id'] = (df.groupby(['Lat','Lon'], sort=False)
.ngroup().astype('str')
.add('_' + df['State'])
.where(df.duplicated(['Lat','Lon'], keep=False),
df['State']
)
)
</code></pre>
<p>Output:</p>
<pre><code> id Lat Lon Year Area State unique_id
0 50319 -36.0629 -62.3423 2019 90 Iowa 0_Iowa
1 18873 -36.0629 -62.3423 2017 90 Iowa 0_Iowa
2 18876 -36.0754 -62.3270 2017 124 Illinois Illinois
3 18878 -36.0688 -62.3353 2017 138 Kansas Kansas
</code></pre> | python|pandas | 1 |
976 | 62,441,827 | Imputation conditional on other column values - Titanic dataset Age imputation conditional on Class and Sex | <p>I am working on the Titanic dataset and want to impute for missing age values. I want to impute based on the Pclass and Sex - taking the average of all females in first class for missing female first class ages for example (obviously doing this for each class and both male and female).</p>
<p>I feel like something along the lines of df.groupby(['Pclass', 'Sex']) would work to group the Pclass and Sex then I could impute age based on these features. </p>
<p>I have also considered a loop to loop through class and sex columns but not sure how this would look.</p>
<p>I have not included code as all I have done up to this point is dropped the Cabin column and counted how many missing values there are using df.isna().sum().</p>
<p>Any suggestions of how to impute conditional on values contained in other columns would be appreciated.</p> | <p>Check this source for learning about imputation of missing values.</p>
<p>The article linked at the end of this answer has helped me a lot.</p>
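<p>That said, the specific groupby-based imputation described in the question can be sketched directly (assuming the usual Titanic column names <code>Age</code>, <code>Pclass</code> and <code>Sex</code>):</p>
<pre><code>df['Age'] = df['Age'].fillna(
    df.groupby(['Pclass', 'Sex'])['Age'].transform('mean')
)
</code></pre>
<p>For a broader overview of strategies for handling missing data, see:</p>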
<p><a href="https://machinelearningmastery.com/handle-missing-data-python" rel="nofollow noreferrer">https://machinelearningmastery.com/handle-missing-data-python</a></p> | python|pandas|scikit-learn|sklearn-pandas | 0 |
977 | 62,416,511 | Preprocess text to feed in model trained on imdb dataset | <p>I've trained this model:</p>
<pre><code>import tensorflow as tf
from tensorflow.keras import Sequential, layers

(training_eins,training_zwei),(test_eins,test_zwei) = tf.keras.datasets.imdb.load_data(num_words=10_000)
training_eins = tf.keras.preprocessing.sequence.pad_sequences(training_eins,maxlen=200)
test_eins = tf.keras.preprocessing.sequence.pad_sequences(test_eins,maxlen=200)
modell = Sequential()
modell.add(layers.Embedding(10_000,256,input_length=200))
modell.add(layers.Dropout(0.3))
modell.add(layers.GlobalMaxPooling1D())
modell.add(layers.Dense(128))
modell.add(layers.Activation("relu"))
modell.add(layers.Dropout(0.5))
modell.add(layers.Dense(1))
modell.add(layers.Activation("sigmoid"))
modell.compile(loss = "binary_crossentropy", optimizer = "adam", metrics = ["acc"])
modell.summary()
ergebnis = modell.fit(training_eins,
training_zwei,
epochs = 10,
verbose = 1,
batch_size = 500,
validation_data = (test_eins,test_zwei))
</code></pre>
<p>Now, I want to test the model's performance on this text (as an example): <code>very bad, I am truly disappointed</code></p>
<p>So, how can I transform that text into a list that can be fed to the model?</p>
<hr>
<p>I only know the model expects lists like</p>
<pre><code>[1, 14, 22, 16, 43, 530, 973, 1622, 1385, 65, 458, 4468, 66, 3941, 4, 173, 36, 256, 5, 25, 100, 43, 838, 112, 50, 670, 2, 9, 35, 480, 284, 5, 150, 4, 172, 112, 167, 2, 336, 385, 39, 4, 172, 4536, 1111, 17, 546, 38, 13, 447, 4, 192, 50, 16, 6, 147, 2025, 19, 14, 22, 4, 1920, 4613, 469, 4, 22, 71, 87, 12, 16, 43, 530, 38, 76, 15, 13, 1247, 4, 22, 17, 515, 17, 12, 16, 626, 18, 2, 5, 62, 386, 12, 8, 316, 8, 106, 5, 4, 2223, 5244, 16, 480, 66, 3785, 33, 4, 130, 12, 16, 38, 619, 5, 25, 124, 51, 36, 135, 48, 25, 1415, 33, 6, 22, 12, 215, 28, 77, 52, 5, 14, 407, 16, 82, 2, 8, 4, 107, 117, 5952, 15, 256, 4, 2, 7, 3766, 5, 723, 36, 71, 43, 530, 476, 26, 400, 317, 46, 7, 4, 2, 1029, 13, 104, 88, 4, 381, 15, 297,
98, 32, 2071, 56, 26, 141, 6, 194, 7486, 18, 4, 226, 22, 21, 134, 476, 26, 480, 5, 144, 30, 5535, 18, 51, 36, 28, 224, 92, 25, 104, 4, 226, 65, 16, 38, 1334, 88, 12, 16, 283, 5, 16, 4472, 113, 103, 32, 15, 16, 5345, 19, 178, 32]
</code></pre> | <p>Figured it out:</p>
<pre><code>import re
import tensorflow as tf

buch = tf.keras.datasets.imdb.get_word_index()
buch = {k:(v+3) for k,v in buch.items()}
buch["<PAD>"] = 0
buch["<START>"] = 1
buch["<UNK>"] = 2
buch["<UNUSED>"] = 3
eingabe = re.sub(r"[^a-zA-Z0-9 ]", "", "very bad, I am truly disappointed")
munition = [[((buch[inhalt] if buch[inhalt] < 10_000 else 0) if inhalt in buch else 0) for inhalt in (eingabe).split(" ")]]
training_eins = tf.keras.preprocessing.sequence.pad_sequences(munition, maxlen=200)
print(modell.predict(training_eins))
</code></pre>
<p>Or, more generally, if you have more than one sentence:</p>
<pre><code>import re
import numpy as np
import tensorflow as tf

buch = tf.keras.datasets.imdb.get_word_index()
buch = {k:(v+3) for k,v in buch.items()}
buch["<PAD>"] = 0
buch["<START>"] = 1
buch["<UNK>"] = 2
buch["<UNUSED>"] = 3
modell = tf.keras.models.load_model("...")
eingabe_eins = ["the movie was boring and i did not really enjoy it",
"i love it, great movie",
"that was a good movie, very funny. i recommend it highly",
"the worst thing i've seen in my life yet. not good, very very bad!",
"very bad, fully dissapointed!",
"very good if u wanna stress out someone. but couldnt ever watch this myself..",
"Yet another film that tries to pass off a whole lot of screaming and crying as great acting. This fails especially in Brosnan's big crying scene, in which he audibly squeaks",
"the movie is very good, i love it so much, my favorite movie ever, its beatiful. besides that i acknowledge that im not an expert, but i know this movie is somthing special. Ive never felt such a great atmosphere."]
eingabe_zwei = [re.sub(r"[^a-zA-Z0-9 ]", "", inhalt) for inhalt in eingabe_eins]
munition = [[((buch[inhalt] if buch[inhalt] < 10_000 else 0) if inhalt in buch else 0) for inhalt in eingabe.split(" ")] for eingabe in eingabe_zwei]
maximal = np.max([len(i) for i in munition])
munition = [np.append(i, np.zeros((maximal-len(i),))) if maximal != len(i) else i for i in munition]
print(munition)
training_eins = tf.keras.preprocessing.sequence.pad_sequences(munition,maxlen=200)
print(modell.predict(training_eins))
</code></pre>
<hr>
<p>Btw: All inputs were correctly classified, except <code>very good if u wanna stress out someone. but couldnt ever watch this myself..</code>. I used this to check if you can fool the AI - apparently, you can do so pretty easily.</p> | python|tensorflow|text | 0 |
978 | 62,166,591 | unable to update the column to the excel file which is checked out from perforce | <p>I have code that reads an Excel file from Perforce and stores it locally.<br>
It then does some other work:<br>
-- read all sheets <br>
-- search for particular columns and extract that column's data.<br>
-- and from that data extract the other info from JIRA.<br>
Up to here it works fine. Once we have all the data, we create a dataframe and then search for the column "STATUS": if it is there, update the column with the same data; otherwise create the column in the same sheet and write the data to it.<br>
<strong>Code:</strong></p>
<pre><code>import os
import pandas as pd
from jira import JIRA
from pandas import ExcelWriter
from openpyxl import load_workbook
def getStatus(issueID):
jiraURL='http://in-jira-test:0000' #Test server
options = {'server': jiraURL}
jira = JIRA(options, basic_auth=(userName, password))
"""Getting the status for the particular issueID"""
issue = jira.issue(issueID)
status = issue.fields.status
return status
def getFileFromPerforce():
"""
Getting the file from perforce
"""
p4File = ' "//depot/Planning/Configurations.xlsx" '
p4Localfile = "C:/depot/Planning/Configurations.xlsx"
global p4runcmd
p4runcmd = p4Cmd + " sync -f " + p4File
stream = os.popen(p4runcmd)
output = stream.read()
print(output)
return p4File, p4Localfile
def excelReader():
# function call to get the filepath
p4FileLocation, filePath = getFileFromPerforce()
xls=pd.ExcelFile(filePath)
# gets the all sheets names in a list
sheetNameList = xls.sheet_names
for sheets in sheetNameList:
data=pd.read_excel(filePath,sheet_name=sheets)
# Checking the Jira column availability in all sheets
if any("Jira" in columnName for columnName in data.columns):
Value = data['Jira']
colValue=Value.to_frame()
# Getting the status of particular jira issue and updating to the dataframe
for row,rowlen in zip(colValue.iterrows(), range(len(colValue))):
stringData=row[1].to_string()
# getting the issueID from the jira issue url
issueID = stringData.partition('/')[2].rsplit('/')[3]
status = getStatus(issueID)
# data.set_value(k, 'Status', status) #---> deprecated
data.at[rowlen, "Status"]=status
# writting the data to the same excel sheet
print("filePath-",filePath)
excelBook = load_workbook(filePath)
with ExcelWriter(filePath, engine='openpyxl') as writer:
# Save the file workbook as base
writer.book = excelBook
writer.sheets = dict((ws.title, ws) for ws in excelBook.worksheets)
# Creating the new column Status and writing to the sheet which having jira column
data.to_excel(writer, sheets, index=False)
# Save the file
writer.save()
else:
continue
if __name__ == '__main__':
# read userName and passwrod from account file
f = open("account.txt", "r")
lines = f.readlines()
userName = str(lines[0].rstrip())
password = str(lines[1].rstrip())
AdminUser = str(lines[2].rstrip())
AdminPassword = str(lines[3].rstrip())
p4Cmd = 'p4 -c snehil_tool -p indperforce:1444 -u %s -P %s '%(AdminUser,AdminPassword)
f.close
excelReader()
</code></pre>
<p>In this code I'm not able to write the data to the file that I checked out from Perforce; I get the following error:<br></p>
<pre><code>Traceback (most recent call last):
File "C:/Users/snsingh/PycharmProjects/DemoProgram/JiraStatusUpdate/updateStatusInOpticalFile.py", line 105, in <module>
excelReader()
File "C:/Users/snsingh/PycharmProjects/DemoProgram/JiraStatusUpdate/updateStatusInOpticalFile.py", line 88, in excelReader
writer.save()
File "C:\Users\snsingh\AppData\Local\Programs\Python\Python37\lib\site-packages\pandas\io\excel\_base.py", line 779, in __exit__
self.close()
File "C:\Users\snsingh\AppData\Local\Programs\Python\Python37\lib\site-packages\pandas\io\excel\_base.py", line 783, in close
return self.save()
File "C:\Users\snsingh\AppData\Local\Programs\Python\Python37\lib\site-packages\pandas\io\excel\_openpyxl.py", line 44, in save
return self.book.save(self.path)
File "C:\Users\snsingh\AppData\Local\Programs\Python\Python37\lib\site-packages\openpyxl\workbook\workbook.py", line 392, in save
save_workbook(self, filename)
File "C:\Users\snsingh\AppData\Local\Programs\Python\Python37\lib\site-packages\openpyxl\writer\excel.py", line 291, in save_workbook
archive = ZipFile(filename, 'w', ZIP_DEFLATED, allowZip64=True)
File "C:\Users\snsingh\AppData\Local\Programs\Python\Python37\lib\zipfile.py", line 1204, in __init__
self.fp = io.open(file, filemode)
PermissionError: [Errno 13] Permission denied: 'C:/depot/Planning/Configurations.xlsx'
</code></pre>
<p>This is the part of the above code that is not working:<br></p>
<pre><code># writing the data to the same excel sheet
print("filePath-",filePath)
excelBook = load_workbook(filePath)
with ExcelWriter(filePath, engine='openpyxl') as writer:
# Save the file workbook as base
writer.book = excelBook
writer.sheets = dict((ws.title, ws) for ws in excelBook.worksheets)
# Creating the new column Status and writing to the sheet which having jira column
data.to_excel(writer, sheets, index=False)
# Save the file
writer.save()
</code></pre>
<p><strong>NOTE:</strong><br>
This code works fine with a local file containing the same data and is able to write perfectly; the problem only happens when I read the file from Perforce.<br>
I have even given all permissions to the folder and tried a different folder path, but I get the same error. Please tell me where I'm making a mistake; any help would be great, and if you have any questions please feel free to write a comment.<br>
Thanks </p> | <p>Three things:</p>
<ol>
<li>When you get the file from Perforce, use <code>p4 sync</code> instead of <code>p4 sync -f</code>.</li>
<li>After you <code>p4 sync</code> the file, <code>p4 edit</code> it. That makes it writable so that you can <em>edit</em> it.</li>
<li>After you save your edits to the file, <code>p4 submit</code> it. That puts your changes in the depot.</li>
</ol> | python|pandas|openpyxl|perforce|python-jira | 0 |
979 | 62,422,653 | How can I use integer division (//) to access the middle rows and columns in numpy python | <p>Print "+"
Description
Given a single positive odd integer 'n' greater than 2, create a NumPy array of size (n x n) with all zeros and ones such that the ones make a shape like '+'. The lines of the plus must be present at the middle row and column.</p>
<p>Hint: Start by creating a (n x n) array with all zeroes using the np.zeros() function and then fill in the ones at the appropriate indices. Use integer division (//) to access the middle rows and columns</p> | <p>index for middle row: n//2
index for middle column: n//2</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
def f(n):
if n<3:
        print('Argument must be greater than or equal to 3!')
return
middle = n//2
array = np.zeros((n, n), dtype=np.uint)
array[middle] = 1
array[:, middle] = 1
print(array)
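# example (assumed) usage:
f(5)
# [[0 0 1 0 0]
#  [0 0 1 0 0]
#  [1 1 1 1 1]
#  [0 0 1 0 0]
#  [0 0 1 0 0]]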
</code></pre> | python|numpy | 0 |
980 | 62,046,027 | Creating arrays with a loop (Python) | <p>I am trying to create several arrays from a big array that I have. What I mean is: </p>
<pre><code>data = [[0, 1, 0, 0, 0, 0, 0, 1, 0, 0], [0, 0, 1, 0, 0, 1, 0, 0, 0, 0],
[0, 1, 1, 0, 0, 0, 0, 0, 0, 1], [1, 0, 0, 0, 1, 0, 0, 0, 0, 1],
[0, 0, 1, 1, 0, 0, 0, 0, 0,1], [0, 0, 0, 0, 1, 1, 0, 0, 0, 0],
[1, 0, 0, 0, 0, 0, 0, 1, 0, 0], [0, 1, 0, 0, 0, 1, 0, 0, 0, 0],
[0, 0, 0, 0, 1, 0, 0, 0, 1, 0]]
</code></pre>
<p>I want to create 10 different arrays - using the 10 data's columns - with different names.</p>
<pre><code>data1 = [0, 0, 0, 1, 0, 0, 1, 0, 0],
data2 = [1, 0, 1, 0, 0, 0, 0, 1, 0], and so on
</code></pre>
<p>I found a similar solution <a href="https://stackoverflow.com/questions/14327548/creating-new-array-in-for-loop-python">here</a> (I also took the example data from there). However, when I tried the suggested solution:</p>
<pre><code>for d in xrange(0,9):
exec 'x%s = data[:,%s]' %(d,d-1)
</code></pre>
<p>An error message appears:</p>
<pre><code>exec(code_obj, self.user_global_ns, self.user_ns)
File "", line 2, in
exec ('x%s = data[:,%s]') %(d,d-1)
File "", line 1
x%s = data[:,%s]
^
SyntaxError: invalid syntax
</code></pre>
<p>Please, any comments will be highly appreciated. Regards</p> | <p>Use numpy array indexing:</p>
<pre><code>import numpy as np

data = [[0, 1, 0, 0, 0, 0, 0, 1, 0, 0], [0, 0, 1, 0, 0, 1, 0, 0, 0, 0],
[0, 1, 1, 0, 0, 0, 0, 0, 0, 1], [1, 0, 0, 0, 1, 0, 0, 0, 0, 1],
[0, 0, 1, 1, 0, 0, 0, 0, 0,1], [0, 0, 0, 0, 1, 1, 0, 0, 0, 0],
[1, 0, 0, 0, 0, 0, 0, 1, 0, 0], [0, 1, 0, 0, 0, 1, 0, 0, 0, 0],
[0, 0, 0, 0, 1, 0, 0, 0, 1, 0]]
d = np.array(data)
d[:, 0]
#array([0, 0, 0, 1, 0, 0, 1, 0, 0])
d[:, 1]
#array([1, 0, 1, 0, 0, 0, 0, 1, 0])
</code></pre>
<p>etc...</p>
<pre><code>d[:, 9]
#array([0, 0, 1, 1, 1, 0, 0, 0, 0])
</code></pre>
<p>If you must, then dictionaries are the way to go:</p>
<pre><code>val = {i:d[:,i] for i in range(d.shape[1])}
</code></pre>
<p>To access the arrays:</p>
<pre><code>val[0]
#array([0, 0, 0, 1, 0, 0, 1, 0, 0])
...
val[9]
#array([0, 0, 1, 1, 1, 0, 0, 0, 0])
</code></pre> | python|numpy | 1 |
981 | 62,193,877 | Keras .fit giving better performance than manual Tensorflow | <p>I'm new to Tensorflow and Keras. To get started, I followed the <a href="https://www.tensorflow.org/tutorials/quickstart/advanced" rel="nofollow noreferrer">https://www.tensorflow.org/tutorials/quickstart/advanced</a> tutorial. I'm now adapting it to train on CIFAR10 instead of MNIST dataset. I recreated this model <a href="https://keras.io/examples/cifar10_cnn/" rel="nofollow noreferrer">https://keras.io/examples/cifar10_cnn/</a> and I'm trying to run it in my own codebase. </p>
<p>Logically, if the model, batch size and optimizer are all the same, then the two should perform identically, but they don't. I thought it might be that I'm making a mistake in preparing the data. So I copied the model.fit function from the keras code into my script, and it still performs better. Using .fit gives me around 75% accuracy in 25 epochs, while with the manual method it takes around 60 epochs. With .fit I also achieve slightly better max accuracy.</p>
<p>What I want to know is: Is .fit doing something behind the scenes that's optimizing training? What do I need to add to my code to get the same performance? Am I doing something obviously wrong? </p>
<p>Thanks for your time.</p>
<p>Main code:</p>
<pre><code>
import tensorflow as tf
from tensorflow import keras
import msvcrt
from Plotter import Plotter
#########################Configuration Settings#############################
BatchSize = 32
ModelName = "CifarModel"
############################################################################
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
print("x_train",x_train.shape)
print("y_train",y_train.shape)
print("x_test",x_test.shape)
print("y_test",y_test.shape)
x_train, x_test = x_train / 255.0, x_test / 255.0
# Convert class vectors to binary class matrices.
y_train = keras.utils.to_categorical(y_train, 10)
y_test = keras.utils.to_categorical(y_test, 10)
train_ds = tf.data.Dataset.from_tensor_slices(
(x_train, y_train)).batch(BatchSize)
test_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(BatchSize)
loss_object = tf.keras.losses.CategoricalCrossentropy(from_logits=True)
optimizer = tf.keras.optimizers.RMSprop(learning_rate=0.0001,decay=1e-6)
# Create an instance of the model
model = ModelManager.loadModel(ModelName,10)
train_loss = tf.keras.metrics.Mean(name='train_loss')
train_accuracy = tf.keras.metrics.CategoricalAccuracy(name='train_accuracy')
test_loss = tf.keras.metrics.Mean(name='test_loss')
test_accuracy = tf.keras.metrics.CategoricalAccuracy(name='test_accuracy')
########### Using this function I achieve better results ##################
model.compile(loss='categorical_crossentropy',
optimizer=optimizer,
metrics=['accuracy'])
model.fit(x_train, y_train,
batch_size=BatchSize,
epochs=100,
validation_data=(x_test, y_test),
shuffle=True,
verbose=2)
############################################################################
########### Using the below code I achieve worse results ##################
@tf.function
def train_step(images, labels):
with tf.GradientTape() as tape:
predictions = model(images, training=True)
loss = loss_object(labels, predictions)
gradients = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
train_loss(loss)
train_accuracy(labels, predictions)
@tf.function
def test_step(images, labels):
predictions = model(images, training=False)
t_loss = loss_object(labels, predictions)
test_loss(t_loss)
test_accuracy(labels, predictions)
epoch = 0
InterruptLoop = False
while InterruptLoop == False:
#Shuffle training data
train_ds.shuffle(1000)
epoch = epoch + 1
# Reset the metrics at the start of the next epoch
train_loss.reset_states()
train_accuracy.reset_states()
test_loss.reset_states()
test_accuracy.reset_states()
for images, labels in train_ds:
train_step(images, labels)
for test_images, test_labels in test_ds:
test_step(test_images, test_labels)
test_accuracy = test_accuracy.result() * 100
train_accuracy = train_accuracy.result() * 100
#Print update to console
template = 'Epoch {}, Loss: {}, Accuracy: {}, Test Loss: {}, Test Accuracy: {}'
print(template.format(epoch,
train_loss.result(),
train_accuracy ,
test_loss.result(),
test_accuracy))
# Check if keyboard pressed
while msvcrt.kbhit():
char = str(msvcrt.getch())
if char == "b'q'":
InterruptLoop = True
print("Stopping loop")
</code></pre>
<p>The model:</p>
<pre><code>from tensorflow.keras.layers import Dense, Flatten, Conv2D, Dropout, MaxPool2D
from tensorflow.keras import Model
class ModelData(Model):
def __init__(self,NumberOfOutputs):
super(ModelData, self).__init__()
self.conv1 = Conv2D(32, 3, activation='relu', padding='same', input_shape=(32,32,3))
self.conv2 = Conv2D(32, 3, activation='relu')
self.maxpooling1 = MaxPool2D(pool_size=(2,2))
self.dropout1 = Dropout(0.25)
############################
self.conv3 = Conv2D(64,3,activation='relu',padding='same')
self.conv4 = Conv2D(64,3,activation='relu')
self.maxpooling2 = MaxPool2D(pool_size=(2,2))
self.dropout2 = Dropout(0.25)
############################
self.flatten = Flatten()
self.d1 = Dense(512, activation='relu')
self.dropout3 = Dropout(0.5)
self.d2 = Dense(NumberOfOutputs,activation='softmax')
def call(self, x):
x = self.conv1(x)
x = self.conv2(x)
x = self.maxpooling1(x)
x = self.dropout1(x)
x = self.conv3(x)
x = self.conv4(x)
x = self.maxpooling2(x)
x = self.dropout2(x)
x = self.flatten(x)
x = self.d1(x)
x = self.dropout3(x)
x = self.d2(x)
return x
</code></pre> | <p>Mentioning the solution here (Answer Section) even though it is present in the Comments, for the <strong>benefit of the Community</strong>.</p>
<p>On the same dataset, the accuracy can differ between <code>Keras Model.fit</code> and a model trained with a manual <code>Tensorflow</code> loop, mainly if the data is shuffled: when we shuffle the data, the split of data between training and testing (or validation) will be different, resulting in different train and test data in the two cases (Keras and Tensorflow).</p>
<p>If we want to observe similar results on the same dataset and with a similar architecture in <code>Keras</code> and in <code>Tensorflow</code>, we can turn off shuffling of the data.</p>
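<p>Concretely, a minimal sketch against the question's code: pass <code>shuffle=False</code> to <code>fit</code> and drop the <code>train_ds.shuffle(1000)</code> call in the manual loop, so both pipelines see the data in the same order:</p>
<pre><code>model.fit(x_train, y_train,
          batch_size=BatchSize,
          epochs=100,
          validation_data=(x_test, y_test),
          shuffle=False,
          verbose=2)
</code></pre>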
<p>Hope this helps. Happy Learning!</p> | python|tensorflow|keras|deep-learning | 0 |
982 | 51,152,108 | Pandas add a series to dataframe column | <p>I'm trying to add a series as a new column in another data frame. But only 'NaN' is being added.</p>
<p>Series:</p>
<pre><code>a_attack = df.merge(df_2,left_on = ['team_A','year'],right_on =['countries','year_list'],how = 'left')['attack']
type(a_attack)
Out[4]: pandas.core.series.Series
a_attack.tail(5)
Out[5]:
38881 63.0
38882 80.0
38883 81.0
38884 59.0
38885 85.0
Name: attack, dtype: float64
</code></pre>
<p>Below is code that I'm using to add the series 'a_attack' to dataframe df.</p>
<pre><code>df['A_attack'] = a_attack
</code></pre>
<p>But I'm getting NaN values only in the dataframe</p>
<pre><code>df['A_attack'].tail(5)
Out[9]:
38881 NaN
38882 NaN
38883 NaN
38884 NaN
38885 NaN
Name: A_attack, dtype: float64
</code></pre> | <p>Try this:</p>
<pre><code>df['A_attack'] = a_attack.values
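# .values drops the Series index, so the data is written into df by position
# instead of being aligned on index labels; that label mismatch is what was
# producing the all-NaN column.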
</code></pre> | python|python-3.x|pandas | 0 |
983 | 48,215,207 | OpenCV 3.4.0.12 with Python3.5 AttributeError: 'cv2.VideoCapture' object has no attribute 'imread' | <p>I'm trying out facial recognition for the first time with python 3.5 and OpenCV 3.4.0.12 and I get this error when I run my code.</p>
<pre><code> File "/Users/connorwoodford/anaconda3/envs/chatbot/lib/python3.5/site-packages/spyder/utils/site/sitecustomize.py", line 101, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)
File "/Users/connorwoodford/Desktop/temp.py", line 11, in <module>
ret, img = cap.imread()
AttributeError: 'cv2.VideoCapture' object has no attribute 'imread'
</code></pre>
<p>Code:</p>
<pre><code>import cv2
import numpy as np
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
eye_cascade = cv2.CascadeClassifier('CascadeClassifier')
cap = cv2.VideoCapture(0)
while True:
ret, img = cap.read()
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray, 1.3, 5)
for(x,y,w,h) in faces:
cv2.rectangle(img, (x+y), (x+w, y+h), (255,0,0), 2)
roi_gray = gray[y:y+h, x:x+w]
roi_color = img[y:y+h, x:x+w]
eyes = eye_cascade.detectMultiScale(roi_gray)
for (ex,ey,ew,eh) in eyes:
cv2.rectangle(roi_color, (ex, ey), (ex+ew,ey+eh), (0,255,0), 2)
cv2.imshow('img',img)
k = cv2.waitkey(30) & 0xff
if k == 27:
break
cap.release()
cv2.destroyAllWindows()
</code></pre> | <p>Your eye_cascade is referring to the wrong cascade file; it should end with the .xml extension. You can download it here: <a href="https://github.com/opencv/opencv/blob/master/data/haarcascades/haarcascade_eye.xml" rel="nofollow noreferrer">haarcascade_eye.xml</a>.</p>
<p>Also note that your call to cv2.rectangle is not in the correct format.</p>
<p>change it from</p>
<pre><code>cv2.rectangle(img, (x+y), (x+w, y+h), (255,0,0), 2)
</code></pre>
<p>to</p>
<pre><code>cv2.rectangle(img, (x,y), (x+w,y+h), (255,0,0), 2)
</code></pre>
<p>All you need to make this work is to wrap the capture logic in a method. Full code goes as follows:</p>
<pre><code>import numpy as np
import cv2
def sample_demo():
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
eye_cascade = cv2.CascadeClassifier('haarcascade_eye.xml')
cap = cv2.VideoCapture(0)
while 1:
ret, img = cap.read()
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray, 1.3, 5)
for (x,y,w,h) in faces:
cv2.rectangle(img,(x,y),(x+w,y+h),(255,0,0),2)
roi_gray = gray[y:y+h, x:x+w]
roi_color = img[y:y+h, x:x+w]
eyes = eye_cascade.detectMultiScale(roi_gray)
for (ex,ey,ew,eh) in eyes:
cv2.rectangle(roi_color,(ex,ey),(ex+ew,ey+eh),(0,255,0),2)
cv2.imshow('img',img)
k = cv2.waitKey(30) & 0xff
if k == 27:
break
cap.release()
cv2.destroyAllWindows()
</code></pre>
<p>and simply call method sample_demo()</p>
<pre><code>sample_demo()
</code></pre> | python|numpy|opencv|face-recognition | 0 |
984 | 48,108,909 | How to create array with initial value and 'decay function'? | <p>I'm looking for a more efficient way to compute this (searched for something similar to numpy's arange):</p>
<pre><code>R = 0
l1 = []
gamma = 0.99
x = 12
for i in range(0, 1000):
R = x - (1-gamma) * R
l1.append(R)
</code></pre>
<p>the iteration and appending are too slow </p> | <p>You can get an easy factor of 3 if you JIT the function using <a href="http://numba.pydata.org" rel="nofollow noreferrer">Numba</a>:</p>
<p><a href="https://i.stack.imgur.com/NuPpT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NuPpT.png" alt="Numba JIT"></a></p>
<p>You should also explore Numpy and play around in a notebook or IPython to do some performance tests depending on your typical input and come back if you have more specific questions.</p>
<p><strong>Just a small update:</strong>
With Julia (<a href="https://julialang.org" rel="nofollow noreferrer">https://julialang.org</a>) I get something around 2.5us, so there is still room for improvement ;)</p>
<p><a href="https://i.stack.imgur.com/gUd7n.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gUd7n.png" alt="Using Julia"></a></p> | python|numpy | 1 |
985 | 48,390,671 | SyntaxError: (unicode error) while using pd.read_table for a txt file | <p>I am reading a txt file with a pattern into Pandas:</p>
<pre><code>Alabama[edit]
Auburn (Auburn University)[1]
Florence (University of North Alabama)
Jacksonville (Jacksonville State University)[2]
Livingston (University of West Alabama)[2]
Montevallo (University of Montevallo)[2]
Troy (Troy University)[2]
Tuscaloosa (University of Alabama, Stillman College, Shelton State)[3][4]
Tuskegee (Tuskegee University)[5]
Alaska[edit]
Fairbanks (University of Alaska Fairbanks)[2]
Arizona[edit]
Flagstaff (Northern Arizona University)[6]
Tempe (Arizona State University)
Tucson (University of Arizona)
</code></pre>
<p>by:</p>
<pre><code>import pandas as pd
df = pd.read_table('file path', sep='\n', header=None)
</code></pre>
<p>But I am getting this error:</p>
<pre><code>SyntaxError: (unicode error) 'unicodeescape' codec can't decode bytes in position 65-66: truncated \uXXXX escape
</code></pre>
<p>I do not really understand why. I would really appreciate the help. </p> | <p>It seems you need to set the correct encoding to load the file.</p>
<p>Try </p>
<pre><code>df = pd.read_table('file path', sep='\n', header=None, encoding='utf-8')
</code></pre>
<p>or </p>
<pre><code>df = pd.read_table('file path', sep='\n', header=None, encoding='WINDOWS-1252')
</code></pre>
<p>If it still doesn't work, you can use a package like <a href="https://pypi.python.org/pypi/chardet" rel="nofollow noreferrer">chardet</a> to detect the encoding of the file and then load it.</p> | python|pandas|unicode | 0 |
986 | 48,443,203 | Tensorflow GetNext() failed because the iterator has not been initialized | <p>tensorflow recommends the tf.data.Dataset for importing data. Is it possible to use it for validation and training, if the validation size of the images is different to the training images?</p>
<pre><code>import tensorflow as tf
import generator
import glob
import cv2
BATCH_SIZE = 4
filenames_train = glob.glob("/home/user/Datasets/MsCoco/train2017/*.jpg")
filenames_valid = glob.glob("/home/user/Datasets/Set5_14/*.png")
# TensorFlow `tf.read_file()` operation.
def _read_py_function(filename):
image_decoded = cv2.imread(filename, cv2.IMREAD_COLOR)
image_blurred_decoded = cv2.GaussianBlur(image_decoded, (1, 1), 0)
return image_decoded, image_blurred_decoded
# Use standard TensorFlow operations to resize the image to a fixed shape.
def _resize_function(image_decoded, image_blurred_decoded):
image_decoded.set_shape([None, None, None])
image_blurred_decoded.set_shape([None, None, None])
image_resized = tf.cast(tf.image.resize_images(image_decoded, [288, 288]),tf.uint8)
image_blurred = tf.cast(tf.image.resize_images(image_blurred_decoded, [72, 72]),tf.uint8)
return image_resized, image_blurred
def _cast_function(image_decoded, image_blurred_decoded):
image_resized = tf.cast(image_decoded,tf.uint8)
image_blurred = tf.cast(image_blurred_decoded,tf.uint8)
return image_resized, image_blurred
dataset_train = tf.data.Dataset.from_tensor_slices(filenames_train)
dataset_train = dataset_train.map(
lambda filename: tuple(tf.py_func(
_read_py_function, [filename], [tf.uint8, tf.uint8])))
dataset_train = dataset_train.map(_resize_function)
#dataset_train = dataset_train.shuffle(buffer_size=10000)
dataset_train = dataset_train.repeat()
dataset_train = dataset_train.batch(BATCH_SIZE)
# validation dataset
dataset_valid = tf.data.Dataset.from_tensor_slices(filenames_valid)
dataset_valid = dataset_valid.map(
lambda filename: tuple(tf.py_func(
_read_py_function, [filename], [tf.uint8, tf.uint8])))
dataset_train = dataset_train.map(_cast_function)
dataset_valid = dataset_valid.batch(BATCH_SIZE)
handle = tf.placeholder(tf.string, shape=[])
iterator = tf.data.Iterator.from_string_handle(handle, dataset_train.output_types)
next_element = iterator.get_next()
training_iterator = dataset_train.make_one_shot_iterator()
validation_iterator = dataset_valid.make_initializable_iterator()
my_transformator = generator.johnson(tf.cast(next_element[1],tf.float32))
images_transformed = my_transformator.new_images
images_transformed_uint = tf.cast(images_transformed,tf.uint8)
loss_square = tf.square(tf.cast(next_element[0],tf.float32)-images_transformed)
loss_sum = tf.reduce_sum(loss_square)
loss_norm = tf.cast(tf.shape(next_element[0])[0]*tf.shape(next_element[0])[1]*tf.shape(next_element[0])[2]*tf.shape(next_element[0])[3],tf.float32)
loss = tf.reduce_sum(loss_square)/loss_norm
solver = tf.train.AdamOptimizer(learning_rate=0.001,beta1=0.5).minimize(loss)
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
with tf.Session(config=config) as sess:
sess.run(tf.global_variables_initializer())
training_handle = sess.run(training_iterator.string_handle())
validation_handle = sess.run(validation_iterator.string_handle())
for i in range(200000):
curr_norm,curr_loss_sum, _, curr_loss, curr_labels, curr_transformed, curr_loss_square = sess.run([loss_norm,loss_sum, solver,loss,next_element,images_transformed_uint, loss_square], feed_dict={handle: training_handle})
if i%1000 == 0:
curr_labels, curr_transformed = sess.run([next_element, images_transformed_uint], feed_dict={handle: validation_handle})
</code></pre>
<p>If I try that code I get the following error:</p>
<blockquote>
<p>FailedPreconditionError (see above for traceback): GetNext() failed
because the iterator has not been initialized. Ensure that you have
run the initializer operation for this iterator before getting the
next element. [[Node: IteratorGetNext =
IteratorGetNextoutput_shapes=[, ],
output_types=[DT_UINT8, DT_UINT8],
_device="/job:localhost/replica:0/task:0/device:CPU:0"]]</p>
</blockquote>
<p>In the code you can see that I don't resize the images from the validation dataset. These validation images have different image sizes.</p> | <p>You've just forgotten to initialize the <code>validation_iterator</code>.</p>
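<p>A sketch against the question's session block (only the marked line is added):</p>
<pre><code>with tf.Session(config=config) as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(validation_iterator.initializer)   # <-- added
    training_handle = sess.run(training_iterator.string_handle())
    validation_handle = sess.run(validation_iterator.string_handle())
    # ... the rest of the training loop stays unchanged ...
</code></pre>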
<p>In short: add <code>sess.run(validation_iterator.initializer)</code> once, before running the for-loop. </p> | python|tensorflow | 15
987 | 48,593,065 | TensorFlow - Dynamic Input Batch Size? | <p>In my case, I need to dynamically change the batch_size during training. For example, I need to double the batch_size for every 10 epochs. However, the problem is that, although I know how to make it dynamic, in <strong><em>input pipeline</em></strong> I have to determine the batch size, as the following code shows. That is, to use <code>tf.train.shuffle_batch</code> I have to determine the <code>batch_size</code> argument, and I do not find any ways to change it afterward. Therefore, I would appreciate any suggestions! How do you make a dynamic input batch?</p>
<pre><code> filename_queue = tf.train.string_input_producer([self.tfrecords_file])
reader = tf.TFRecordReader()
_, serialized_example = self.reader.read(filename_queue)
features = tf.parse_single_example(
serialized_example,
features={
'image/file_name': tf.FixedLenFeature([], tf.string),
'image/encoded_image': tf.FixedLenFeature([], tf.string),
})
image_buffer = features['image/encoded_image']
image = tf.image.decode_jpeg(image_buffer, channels=3)
image = self._preprocess(image)
images = tf.train.shuffle_batch(
[image], batch_size=self.batch_size, num_threads=self.num_threads,
capacity=self.min_queue_examples + 3*self.batch_size,
min_after_dequeue=self.min_queue_examples
)
</code></pre> | <p>I believe what you want to do is the following (I haven't tried this, so correct me if I make a mistake).</p>
<p>Create a placeholder for your batch size:</p>
<pre><code>batch_size_placeholder = tf.placeholder(tf.int64)
</code></pre>
<p>Create your shuffle batch using the placeholder:</p>
<pre><code>images = tf.train.shuffle_batch(
    [image], batch_size=batch_size_placeholder, num_threads=self.num_threads,
capacity=self.min_queue_examples + 3*batch_size_placeholder,
min_after_dequeue=self.min_queue_examples
)
</code></pre>
<p>Pass in batch size with your call to <code>sess.run</code>:</p>
<pre><code>sess.run(my_optimizer, feed_dict={batch_size_placeholder: my_dynamic_batch_size})
</code></pre>
<p>I expect <code>shuffle_batch</code> will accept tensors.</p>
<p>If there's any issue with that you might consider using the Dataset pipeline. It is the newer, fancier, shinier way to do data pipelining.</p>
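<p>For example, with the Dataset API the batch size can itself be fed through a placeholder when (re)initializing the iterator; a rough sketch, assuming a <code>tf.data.Dataset</code> named <code>dataset</code> built from the same TFRecord file:</p>
<pre><code>batch_size_ph = tf.placeholder(tf.int64, shape=[])
batched = dataset.shuffle(10000).batch(batch_size_ph)
iterator = batched.make_initializable_iterator()
next_batch = iterator.get_next()

# re-initialize whenever the batch size should change, e.g. every 10 epochs
sess.run(iterator.initializer, feed_dict={batch_size_ph: 64})
</code></pre>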
<p><a href="https://www.tensorflow.org/programmers_guide/datasets" rel="nofollow noreferrer">https://www.tensorflow.org/programmers_guide/datasets</a></p> | python|tensorflow | 0 |
988 | 70,778,928 | Get value from previous column for each group in groupby | <p>This is my <code>df</code> -</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Site</th>
<th>Product</th>
<th>Period</th>
<th>Inflow</th>
<th>Outflow</th>
<th>Production</th>
<th>Opening Inventory</th>
<th>New Opening Inventory</th>
<th>Closing Inventory</th>
<th>Production Needed</th>
</tr>
</thead>
<tbody>
<tr>
<td>California</td>
<td>Apples</td>
<td>1</td>
<td>0</td>
<td>3226</td>
<td>4300</td>
<td>1213</td>
<td>1213</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>California</td>
<td>Apples</td>
<td>2</td>
<td>0</td>
<td>3279</td>
<td>3876</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>California</td>
<td>Apples</td>
<td>3</td>
<td>0</td>
<td>4390</td>
<td>4530</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>California</td>
<td>Apples</td>
<td>4</td>
<td>0</td>
<td>4281</td>
<td>3870</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>California</td>
<td>Apples</td>
<td>5</td>
<td>0</td>
<td>4421</td>
<td>4393</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>California</td>
<td>Oranges</td>
<td>1</td>
<td>0</td>
<td>505</td>
<td>400</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>California</td>
<td>Oranges</td>
<td>2</td>
<td>0</td>
<td>278</td>
<td>505</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>California</td>
<td>Oranges</td>
<td>3</td>
<td>0</td>
<td>167</td>
<td>278</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>California</td>
<td>Oranges</td>
<td>4</td>
<td>0</td>
<td>124</td>
<td>167</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>California</td>
<td>Oranges</td>
<td>5</td>
<td>0</td>
<td>106</td>
<td>124</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>Montreal</td>
<td>Maple Syrup</td>
<td>1</td>
<td>0</td>
<td>445</td>
<td>465</td>
<td>293</td>
<td>293</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>Montreal</td>
<td>Maple Syrup</td>
<td>2</td>
<td>0</td>
<td>82</td>
<td>398</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>Montreal</td>
<td>Maple Syrup</td>
<td>3</td>
<td>0</td>
<td>745</td>
<td>346</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>Montreal</td>
<td>Maple Syrup</td>
<td>4</td>
<td>0</td>
<td>241</td>
<td>363</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>Montreal</td>
<td>Maple Syrup</td>
<td>5</td>
<td>0</td>
<td>189</td>
<td>254</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
</tbody>
</table>
</div>
<p>As seen, there are three groups when grouped by <code>Site</code> and <code>Product</code>. For each of the three groups I want to do the following (for periods 2 to 5) -</p>
<ul>
<li>Set <code>New Opening Inventory</code> to <code>Closing Inventory</code> of previous period</li>
<li>Calculate <code>Closing Inventory</code> for next period using the formula, <code>Closing Inventory</code> = <code>Production</code> + <code>Inflow</code> + <code>New Opening Inventory</code> - <code>Outflow</code></li>
</ul>
<p>I am trying to solve this using a combination of <code>groupby</code> and <code>for loop</code></p>
<p>Here is what I have so far -</p>
<p>If <code>df</code> is a single group I can simply do</p>
<pre><code># calculate closing inventory of period 1
df['Closing Inventory'] = np.where(df['PeriodNo']==1, <formula>, 0)
for i in range(1, len(df)):
df.loc[i, 'New Opening Inventory'] = df.loc[i-1, 'Closing Inventory']
df.loc[i, 'Closing Inventory'] = df.loc[i, 'Production'] + df.loc[i, 'Inflow'] + df.loc[i, 'New Opening Inventory'] - df.loc[i, 'Outflow']
</code></pre>
<p>and when I try to nest this <code>for loop</code> in a loop over <code>groups</code></p>
<pre><code># calculate closing inventory of period 1 for all groups
df['Closing Inventory'] = np.where(df['PeriodNo']==1, <formula>, 0)
g = df.groupby(['Site', 'Product'])
alist = []
for k in g.groups.keys():
temp = g.get_group(k).reset_index(drop=True)
for i in range(1, len(temp)):
temp.loc[i, 'New Opening Inventory'] = temp.loc[i-1, 'Closing Inventory']
temp.loc[i, 'Closing Inventory'] = temp.loc[i, 'Production'] + temp.loc[i, 'Inflow'] + temp.loc[i, 'New Opening Inventory'] - temp.loc[i, 'Outflow']
alist.append(temp)
df2 = pd.concat(alist, ignore_index=True)
df2
</code></pre>
<p>This solution works but seems very inefficient with the nested loop. Is there a better way to do this?</p> | <p>Your New Opening Inventory is always the previous Closing Inventory.</p>
<p>So I can modify this formula</p>
<blockquote>
<p>Closing Inventory = Production + Inflow + New Opening Inventory -
Outflow</p>
</blockquote>
<p>to</p>
<blockquote>
<p>Closing Inventory = Production + Inflow + Previous Closing Inventory -
Outflow</p>
</blockquote>
<p>For the first row, you do not have the Closing Inventory. But from the 2nd row, you calculate the Closing Inventory and you carry over the Closing Inventory to the next row.</p>
<p>Before obtaining the Closing Inventory, first calculate "Production" + "Inflow" - "Outflow" using a list comprehension. The list comprehension performs better than a for loop.</p>
<pre class="lang-py prettyprint-override"><code>df['Closing Inventory'] = [x + y - z if p > 1 else 0 for p, x, y, z in zip(df['Period'], df['Production'], df['Inflow'], df['Outflow'])]
# df[['Site', 'Product', 'Closing Inventory']]
# Site Product Closing Inventory
# 0 California Apples 0
# 1 California Apples 597
# 2 California Apples 140
# 3 California Apples -411
# 4 California Apples -28
# 5 California Oranges 0
# 6 California Oranges 227
# 7 California Oranges 111
# ...
</code></pre>
<p>Then, the rest of the formula is adding the previously calculated Closing Inventory, which means you can <code>cumsum</code> this result.</p>
<pre><code>For row 1, Previous Closing (0) + calculated part (597) = 597
For row 2, Previous Closing (597) + calculated part (140) = 737
...
</code></pre>
<pre class="lang-py prettyprint-override"><code>df['Closing Inventory'] = df.groupby(['Site', 'Product'])['Closing Inventory'].cumsum()
# df[['Site', 'Product', 'Closing Inventory']]
# Site Product Closing_Inventory
# 0 California Apples 0
# 1 California Apples 597
# 2 California Apples 737
# 3 California Apples 326
# 4 California Apples 298
# 5 California Oranges 0
# 6 California Oranges 227
# 7 California Oranges 338
# ...
</code></pre>
<p>Again, the New Opening Inventory is always the previous Closing Inventory, except when the period is 1. Hence, first shift the Closing Inventory, then pick the New Opening Inventory where the period is 1.</p>
<p>I used <code>combine_first</code> to pick the value from either the New Opening or the shifted Closing Inventory.</p>
<pre class="lang-py prettyprint-override"><code>df['New Opening Inventory'] = (df['New Opening Inventory'].replace(0, np.nan)
.combine_first(
df.groupby(['Site', 'Product'])['Closing Inventory']
.shift()
.fillna(0)
).astype(int))
</code></pre>
<p>Result</p>
<pre><code> Site Product Period New Opening Inventory Closing Inventory
0 California Apples 1 1213 0
1 California Apples 2 0 597
2 California Apples 3 597 737
3 California Apples 4 737 326
4 California Apples 5 326 298
5 California Oranges 1 0 0
6 California Oranges 2 0 227
7 California Oranges 3 227 338
...
</code></pre>
<p>With the sample data on my laptop,</p>
<pre><code>Original solution: 8.44 ms ± 280 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
This solution: 2.95 ms ± 23.7 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
</code></pre>
<p>I think that with the list comprehension and vectorized functions, this solution performs faster.</p> | python|pandas|dataframe|pandas-groupby | 1
989 | 70,831,040 | pandas retain values on different index dataframes | <p>I need to merge two dataframes with different frequencies (daily to weekly). However, would like to retain the weekly values when merging to the daily dataframe.</p>
<p>There is a grouping variable in the data, <code>group</code>.</p>
<pre><code>import pandas as pd
import datetime
from dateutil.relativedelta import relativedelta
daily={'date':[datetime.date(2022,1,1)+relativedelta(day=i) for i in range(1,10)]*2,
'group':['A' for x in range(1,10)]+['B' for x in range(1,10)],
'daily_value':[x for x in range(1,10)]*2}
weekly={'date':[datetime.date(2022,1,1),datetime.date(2022,1,7)]*2,
'group':['A','A']+['B','B'],
'weekly_value':[100,200,300,400]}
daily_data=pd.DataFrame(daily)
weekly_data=pd.DataFrame(weekly)
</code></pre>
<p><code>daily_data</code> output:</p>
<pre><code> date group daily_value
0 2022-01-01 A 1
1 2022-01-02 A 2
2 2022-01-03 A 3
3 2022-01-04 A 4
4 2022-01-05 A 5
5 2022-01-06 A 6
6 2022-01-07 A 7
7 2022-01-08 A 8
8 2022-01-09 A 9
9 2022-01-01 B 1
10 2022-01-02 B 2
11 2022-01-03 B 3
12 2022-01-04 B 4
13 2022-01-05 B 5
14 2022-01-06 B 6
15 2022-01-07 B 7
16 2022-01-08 B 8
17 2022-01-09 B 9
</code></pre>
<p><code>weekly_data</code> output:</p>
<pre><code> date group weekly_value
0 2022-01-01 A 100
1 2022-01-07 A 200
2 2022-01-01 B 300
3 2022-01-07 B 400
</code></pre>
<p>The desired output</p>
<pre><code>desired={'date':[datetime.date(2022,1,1)+relativedelta(day=i) for i in range(1,10)]*2,
'group':['A' for x in range(1,10)]+['B' for x in range(1,10)],
'daily_value':[x for x in range(1,10)]*2,
'weekly_value':[100]*6+[200]*3+[300]*6+[400]*3}
desired_data=pd.DataFrame(desired)
</code></pre>
<p><code>desired_data</code> output:</p>
<pre><code> date group daily_value weekly_value
0 2022-01-01 A 1 100
1 2022-01-02 A 2 100
2 2022-01-03 A 3 100
3 2022-01-04 A 4 100
4 2022-01-05 A 5 100
5 2022-01-06 A 6 100
6 2022-01-07 A 7 200
7 2022-01-08 A 8 200
8 2022-01-09 A 9 200
9 2022-01-01 B 1 300
10 2022-01-02 B 2 300
11 2022-01-03 B 3 300
12 2022-01-04 B 4 300
13 2022-01-05 B 5 300
14 2022-01-06 B 6 300
15 2022-01-07 B 7 400
16 2022-01-08 B 8 400
17 2022-01-09 B 9 400
</code></pre> | <p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.merge_asof.html" rel="nofollow noreferrer"><code>merge_asof</code></a> after sorting the values by datetime, then sort back to the original order by both columns:</p>
<pre><code>daily_data['date'] = pd.to_datetime(daily_data['date'])
weekly_data['date'] = pd.to_datetime(weekly_data['date'])
df = (pd.merge_asof(daily_data.sort_values('date'),
weekly_data.sort_values('date'),
on='date',
by='group').sort_values(['group','date'], ignore_index=True))
print (df)
date group daily_value weekly_value
0 2022-01-01 A 1 100
1 2022-01-02 A 2 100
2 2022-01-03 A 3 100
3 2022-01-04 A 4 100
4 2022-01-05 A 5 100
5 2022-01-06 A 6 100
6 2022-01-07 A 7 200
7 2022-01-08 A 8 200
8 2022-01-09 A 9 200
9 2022-01-01 B 1 300
10 2022-01-02 B 2 300
11 2022-01-03 B 3 300
12 2022-01-04 B 4 300
13 2022-01-05 B 5 300
14 2022-01-06 B 6 300
15 2022-01-07 B 7 400
16 2022-01-08 B 8 400
17 2022-01-09 B 9 400
</code></pre> | pandas|pandas-groupby | 2 |
990 | 71,075,202 | Read spark csv dataframe as pandas | <p>After processing big data on PySpark, I saved it to csv using the following command:</p>
<pre><code>df.repartition(1).write.option("header", "true").option("delimeter", "\t").csv("csv_data", mode="overwrite")
</code></pre>
<p>Now, I want to use <code>pd.read_csv()</code> to load it again.</p>
<pre><code>info = pd.read_csv('part0000.csv', sep='\t', header='infer')
</code></pre>
<p><code>info</code> is returned as a single column where the data is separated by commas, not '\t'.</p>
<pre><code>col1name,col2name,col3name
val1,val2,val3
</code></pre>
<p>I tried to specify <code>sep=','</code> but I got a parsing error because some rows have more than 3 columns.</p>
<p>How can I fix that without skipping any rows? Is there anything I can do on the Spark side to resolve it, such as specifying '|' as the delimiter?</p> | <p>The csv format writer method <strong>DOESN'T</strong> have a <code>delimeter</code> option; what you need is the <code>sep</code> option.</p>
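<p>As a minimal sketch of the corrected round trip, reusing the paths and dataframes from the question (the misspelled <code>delimeter</code> option appears to be silently ignored, which is why the file came out comma-separated):</p>
<pre><code>import pandas as pd

# write with "sep", which the Spark CSV writer actually recognizes
df.repartition(1).write.option("header", "true").option("sep", "\t").csv("csv_data", mode="overwrite")

# read back with the matching separator
info = pd.read_csv("part0000.csv", sep="\t", header="infer")
</code></pre>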
<p>Please refer to <a href="https://spark.apache.org/docs/latest/sql-data-sources-csv.html#data-source-option" rel="nofollow noreferrer">here</a></p> | python|pandas|dataframe|pyspark | 0 |
991 | 51,664,482 | Python Pandas Create unique dataframe out of many lists | <p>Hi, I want to create a dataframe that stores each unique name and its average value. Currently I have a dataframe that has 2 columns: one has a list of names while the other has a single value. I want to associate that value with all of the names in the list and eventually find the average value for each name.
This is the data I have:</p>
<pre><code>Df1:
names_col cost_col
[milk, eggs, cookies] 3
[water, milk, yogurt] 5
[cookies, diaper, yogurt] 7
</code></pre>
<p>This is what I want:</p>
<pre><code>Df2:
names_col avg_cost_col
milk 4
eggs 3
cookies 5
water 5
yogurt 6
diaper 7
</code></pre>
<p>I thought about doing an apply over all the rows somehow, or using set() to remove duplicates from every list, but I am not sure. Any help would be appreciated.</p> | <p>IIUC, flatten your list (unnest):</p>
<pre><code>pd.DataFrame(data=df.cost_col.repeat(df.names_col.str.len()).values,index=np.concatenate(df.names_col.values)).mean(level=0)
Out[221]:
0
milk 4
eggs 3
cookies 5
water 5
yogurt 6
diaper 7
</code></pre> | python|pandas|dataframe | 1 |
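<p>As a more readable equivalent of the one-liner above, here is a sketch that assumes pandas 0.25 or newer, where <code>DataFrame.explode</code> is available (the output column name <code>avg_cost_col</code> is just illustrative):</p>
<pre><code># each list element becomes its own row, with cost_col repeated alongside it
out = (df.explode('names_col')
         .groupby('names_col')['cost_col']
         .mean()
         .rename('avg_cost_col')
         .reset_index())
</code></pre>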
992 | 51,835,891 | Correlation of Groups and joining to DataFrame | <pre><code>import pandas as pd
d = {'Val1': ['A','A','A','B','B','B','C','C','C','D','D','D'], 'Val2':
[5,4,6,4,8,7,4,5,2,1,1,9] , 'Val3': [4, 5,6,1,2,9,8,5,1,5,9,5]}
df = pd.DataFrame(data=d)
df
</code></pre>
<h3>Output</h3>
<pre><code>Val1 Val2 Val3 Val4+++
0 A 5 4
1 A 4 5
2 A 6 6
3 B 4 1
4 B 8 2
5 B 7 9
6 C 4 8
7 C 5 5
8 C 2 1
9 D 1 5
10 D 1 9
11 D 9 5
</code></pre>
<p>It is possible to calculate the correlation of two columns with this code, but I am not sure if it is the best/fastest way.</p>
<pre><code> vers1=df.groupby('Val1')[['Val2','Val3']].corr().iloc[0::2][['Val3']]
vers2=df.groupby('Val1')[['Val2','Val3']].corr().iloc[0::2]['Val3']
A Val2 0.500000
B Val2 0.385727
C Val2 0.714575
D Val2 -0.500000
</code></pre>
<p>I am not able to join the data back to the original data so that it looks like this:</p>
<pre><code> Val1 Val2 Val3 Val4+++
0 A 5 4 0.500000
1 A 4 5 0.500000
2 A 6 6 0.500000
3 B 4 1 0.385727
4 B 8 2 0.385727
5 B 7 9 0.385727
6 C 4 8 0.714575
7 C 5 5 0.714575
8 C 2 1 0.714575
9 D 1 5 -0.500000
10 D 1 9 -0.500000
11 D 9 5 -0.500000
</code></pre>
<p>I am open to other ways of calculating the correlation or measures of association if I am able to join them to the original.</p> | <p>You can shorten the correlation code a bit by doing the correlation on the series:</p>
<pre><code>df.groupby("Val1")["Val2"].corr(df["Val3"])
</code></pre> | python|pandas|group-by|correlation | 0 |
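<p>To attach the per-group correlation back onto every row of the original frame, one sketch (the column name <code>Val4</code> is just illustrative) is to map the grouped result by the key:</p>
<pre><code>corr_by_group = df.groupby('Val1')['Val2'].corr(df['Val3'])
# broadcast each group's value to its rows via the Val1 key
df['Val4'] = df['Val1'].map(corr_by_group)
</code></pre>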
993 | 51,617,211 | Numpy Standard Deviation AttributeError: 'Float' object has no attribute 'sqrt' | <p>I know this was asked many times, but I am still having trouble with the following problem. I defined my own functions for mean and stdev, but stdev takes too long to calculate std(Wapproxlist). So, I need a solution for this issue.</p>
<pre><code>import numpy as np
def Taylor_Integration(a, b, mu):
import sympy as sy
A, B, rho = sy.symbols('A B rho', real=True)
Wapp = (A + B*rho - rho/(2*mu*(1 - rho)))**2
eq1 = sy.diff(sy.integrate(Wapp, (rho, a, b)),A)
eq2 = sy.diff(sy.integrate(Wapp, (rho, a, b)),B)
sol = sy.solve([eq1,eq2], [A,B])
return sol[A], sol[B]
def Wapprox(rho, A, B):
return A + B*rho
def W(mu, rho):
return rho/(2*mu*(1-rho))
Wapproxlist = []
Wlist = []
alist = np.linspace(0, 0.98, 10)
for a in alist:
b = a+0.01; mu = 1
A, B = Taylor_Integration(a, b, mu)
rholist = np.linspace(a, b, 100)
for rho in rholist:
Wapproxlist.append(Wapprox(rho, A, B))
Wlist.append(W(mu, rho))
print('mean=%.3f stdv=%.3f' % (np.mean(Wapproxlist), np.std(Wapproxlist)))
print('mean=%.3f stdv=%.3f' % (np.mean(Wlist), np.std(Wlist)))
</code></pre>
<h2>Output:</h2>
<pre><code>AttributeError Traceback (most recent call last)
<ipython-input-83-468c8e1a9f89> in <module>()
----> 1 print('mean=%.3f stdv=%.3f' % (np.mean(Wapproxlist), np.std(Wapproxlist)))
2 print('mean=%.3f stdv=%.3f' % (np.mean(Wlist), np.std(Wlist)))
C:\Users\2tc\.julia\v0.6\Conda\deps\usr\lib\site-packages\numpy\core\fromnumeric.pyc in std(a, axis, dtype, out, ddof, keepdims)
3073
3074 return _methods._std(a, axis=axis, dtype=dtype, out=out, ddof=ddof,
-> 3075 **kwargs)
3076
3077
C:\Users\2tc\.julia\v0.6\Conda\deps\usr\lib\site-packages\numpy\core\_methods.pyc in _std(a, axis, dtype, out, ddof, keepdims)
140 ret = ret.dtype.type(um.sqrt(ret))
141 else:
--> 142 ret = um.sqrt(ret)
143
144 return ret
AttributeError: 'Float' object has no attribute 'sqrt'
</code></pre> | <p><code>numpy</code> doesn't know how to handle <code>sympy</code>'s <code>Float</code> type.</p>
<pre><code>(Pdb) type(Wapproxlist[0])
<class 'sympy.core.numbers.Float'>
</code></pre>
<p>Convert it to a numpy array before calling <code>np.mean</code> and <code>np.std</code>.</p>
<pre><code>Wapproxlist = np.array(Wapproxlist, dtype=np.float64) # can use np.float32 as well
print('mean=%.3f stdv=%.3f' % (np.mean(Wapproxlist), np.std(Wapproxlist)))
print('mean=%.3f stdv=%.3f' % (np.mean(Wlist), np.std(Wlist)))
</code></pre>
<p>output:</p>
<pre><code>mean=4.177 stdv=10.283
mean=4.180 stdv=10.300
</code></pre>
<p>Note: If you're looking to speed this up, you'll want to avoid <code>sympy</code>. Symbolic solvers are pretty cool, but they're also very slow compared to floating point computations.</p> | python|python-2.7|numpy | 12 |
994 | 42,086,185 | How to differenciate columns which are same in all rows from pandas dataframe? | <p>I have one dataframe like below - </p>
<pre><code>df1_data = {'sym' :{0:'AAA',1:'BBB',2:'CCC',3:'DDD',4:'DDD',5:'CCC'},
'id' :{0:'101',1:'102',2:'103',3:'104',4:'105',5:'106'},
'sal':{0:'1000',1:'1000',2:'1000',3:'1000',4:'1000',5:'1000'},
'loc':{0:'zzz',1:'zzz',2:'zzz',3:'zzz',4:'zzz',5:'zzz'},
'name':{0:'abc',1:'abc',2:'abc',3:'pqr',4:'pqr',5:'pqr'}}
df = pd.DataFrame(df1_data)
print df
id loc name sal sym
0 101 zzz abc 1000 AAA
1 102 zzz abc 1000 BBB
2 103 zzz abc 1000 CCC
3 104 zzz pqr 1000 DDD
4 105 zzz pqr 1000 DDD
5 106 zzz pqr 1000 CCC
</code></pre>
<p>I want to check which columns of the above dataframe contain the same value in all rows. Based on that, I want these matched columns in one dataframe and the unmatched columns in another dataframe.</p>
<p>Expected output - </p>
<p><strong>matched_df -</strong> </p>
<pre><code> loc sal
0 zzz 1000
1 zzz 1000
2 zzz 1000
3 zzz 1000
4 zzz 1000
5 zzz 1000
</code></pre>
<p><strong>unmatched_df -</strong> </p>
<pre><code> id name sym
0 101 abc AAA
1 102 abc BBB
2 103 abc CCC
3 104 pqr DDD
4 105 pqr DDD
5 106 pqr CCC
</code></pre> | <p>You can compare <code>df</code> with its first row using <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.eq.html" rel="nofollow noreferrer"><code>eq</code></a> and then check for all-<code>True</code> columns with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.all.html" rel="nofollow noreferrer"><code>all</code></a>:</p>
<pre><code>print (df.eq(df.iloc[0]))
id loc name sal sym
0 True True True True True
1 False True True True False
2 False True True True False
3 False True False True False
4 False True False True False
5 False True False True False
mask = df.eq(df.iloc[0]).all()
print (mask)
id False
loc True
name False
sal True
sym False
dtype: bool
</code></pre>
<pre><code>print (df.loc[:, mask])
loc sal
0 zzz 1000
1 zzz 1000
2 zzz 1000
3 zzz 1000
4 zzz 1000
5 zzz 1000
print (df.loc[:, ~mask])
id name sym
0 101 abc AAA
1 102 abc BBB
2 103 abc CCC
3 104 pqr DDD
4 105 pqr DDD
5 106 pqr CCC
</code></pre>
<p>Another way to build <code>mask</code> is to compare the underlying <code>numpy</code> arrays:</p>
<pre><code>arr = df.values
mask = (arr == arr[0]).all(axis=0)
print (mask)
[False True False True False]
print (df.loc[:, mask])
loc sal
0 zzz 1000
1 zzz 1000
2 zzz 1000
3 zzz 1000
4 zzz 1000
5 zzz 1000
print (df.loc[:, ~mask])
id name sym
0 101 abc AAA
1 102 abc BBB
2 103 abc CCC
3 104 pqr DDD
4 105 pqr DDD
5 106 pqr CCC
</code></pre> | python|pandas | 3 |
995 | 41,724,432 | ML - Getting feature names after feature selection - SelectPercentile, python | <p>I have been struggling with this one for a while.
My goal is to take a text feature that I have and find the best 5-10 words in it to help me classify. Hence, I am running a TfidfVectorizer and choosing the ~90 best features for now. However, after I downsize the number of features, I am unable to see which features were actually chosen.</p>
<p>here is what I have:</p>
<pre><code>import pandas
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectPercentile, f_classif
train=pandas.read_csv("train.tsv", sep='\t')
labels_train = train["label"]
documents = []
for i, row in train.iterrows():
documents.append((row['boilerplate'][1:-1].lower()))
vectorizer = TfidfVectorizer(sublinear_tf=True, stop_words="english")
features_train_transformed = vectorizer.fit_transform(documents)
selector = SelectPercentile(f_classif, percentile=0.1)
selector.fit(features_train_transformed, labels_train)
features_train_transformed = selector.transform(features_train_transformed).toarray()
</code></pre>
<p>The result is that features_train_transformed contains a matrix of all the tfidf scores of the selected words, per word per document; however, I have no idea which words were chosen, and methods like "get_feature_names()" are unavailable for the class SelectPercentile.</p>
<p>This is necessary because I need to add these features to a bunch of numeric features and only then do my training and predictions.</p>
<li>selector.get_support() to get you a boolean array of columns that were within the percentile range you specified</li>
<li>train.columns.values should get you the complete list of column names for the original dataframe</li>
<li>filtering the latter with the former should give you the names of columns that make up your chosen percentile range.</li>
</ul>
<p>The code below (cut and pasted from working code) is similar enough to yours that it's hopefully helpful:</p>
<pre><code>import numpy as np
selection = SelectPercentile(f_regression, percentile=2)
train_minus_target = train.drop("y", axis=1)
x_features = selection.fit_transform(train_minus_target, y_train)
columns = np.asarray(train_minus_target.columns.values)
support = np.asarray(selection.get_support())
columns_with_support = columns[support]
</code></pre>
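<p>For the TF-IDF pipeline in the question there is no dataframe of columns; the feature names live on the vectorizer instead. A sketch of the same masking idea, reusing <code>vectorizer</code> and <code>selector</code> from the question and <code>np</code> from the snippet above (on newer scikit-learn versions use <code>get_feature_names_out()</code> instead of <code>get_feature_names()</code>):</p>
<pre><code>feature_names = np.asarray(vectorizer.get_feature_names())
# boolean mask of the features kept by SelectPercentile
chosen_words = feature_names[selector.get_support()]
print(chosen_words)
</code></pre>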
<p>Reference:<br>
<a href="http://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.SelectPercentile.html#sklearn.feature_selection.SelectPercentile.get_support" rel="nofollow noreferrer">about get_support</a></p> | python|numpy|machine-learning|scikit-learn|feature-extraction | 1 |
996 | 64,265,621 | How to do slicing in pandas Series through elements instead of indices in case they are similar | <p>I have a pandas Series like:</p>
<pre><code>import pandas as pd

s = pd.Series([1,9,3,4,5], index = [1,2,5,3,9])
</code></pre>
<p>How can I obtain, say, the element <code>'3'</code>, given that I do not know the exact elements in advance? I need to write a function that gets, say, the first element of the Series.</p>
<p>series[2] is understood as 'index=2' instead of 'second element' when we do have an integer index.</p>
<p>When I do not specify indices, the slicing works fine, simply by element position.</p>
<p>But how can I prioritize slicing through elements if they overlap with indices?</p> | <p>Like this, using boolean indexing:</p>
<pre><code>s[s==3]
</code></pre>
<p>Given:</p>
<pre><code>s = pd.Series([1,9,3,4,5], index = [1,2,5,3,9])
</code></pre>
<p>Let's find elements 3 and 9, use:</p>
<pre><code>s[s.isin([9,3])]
</code></pre>
<p>Output:</p>
<pre><code>2 9
5 3
dtype: int64
</code></pre>
<h1>Update per comment below</h1>
<p>Use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.iloc.html#pandas-series-iloc" rel="nofollow noreferrer"><code>iloc</code></a> for integer location:</p>
<pre><code>s.iloc[2]
</code></pre>
<p>Output:</p>
<pre><code>3
</code></pre> | python|pandas | 0 |
997 | 64,263,685 | replace dataframe column values with values from an other dataframe column | <p>Let me explain my problem a bit more. I have a dataframe with ID, name and surname; let's call it df_src, e.g.:<br /></p>
<pre><code>ID Name Surname
177015H LAURE Thomas
198786X ANGEARD Audrey
136235G EYSSERIC Laurent
198786X ANGEARD Audrey
</code></pre>
<p>In this dataframe I have multiple duplicated values, due to the fact that a person can manage different people.<br />
On the other hand, my second dataframe contains each of the previous rows without the duplicated values, plus pseudonymized data; let's call it df_tem, e.g.:</p>
<pre><code>ID Name Surname FakeID FakeName FakeSurname
177015H LAURE Thomas 127345H ELOR Lori
198786X ANGEARD Audrey 112846X RELARD Pierre
136235G EYSSERIC Laurent 108456G SERIC Marc
... ... ... .... ... ...
</code></pre>
<p>What I want to accomplish here is to replace all values from df_src that match those in df_tem with the fake values. For example, replace all duplicated values of 177015H LAURE Thomas with 127345H ELOR Lori, and so on.</p>
<p>I tried to use</p>
<pre><code>df_src.replace(to_replace=df_src['column'], value=df_tem['column'], inplace=True)
</code></pre>
<p>just to have none in return.
It's been several hour that i'm on it without being able to find a way of doing it with pandas.</p>
<p>Do you have any idea? Any help will be appreciated.</p> | <p>I would merge both and then rename the columns:</p>
<pre><code>df = df_src.merge(df_tem, on=["ID", "Name", "Surname"], how="left"
).drop(columns=["ID", "Name", "Surname"]
).rename(columns={"FakeID": "ID", "FakeName": "Name", "FakeSurname": "Surname"})
</code></pre> | python|pandas | 0 |
998 | 64,574,222 | Accuracy and loss does not change with RMSprop optimizer | <p>The dataset is CIFAR10. I've created a VGG-like network:</p>
<pre><code>import torch
import torch.nn as nn

class FirstModel(nn.Module):
def __init__(self):
super(FirstModel, self).__init__()
self.vgg1 = nn.Sequential(
nn.Conv2d(3, 16, 3, padding=1),
nn.BatchNorm2d(16),
nn.ReLU(),
nn.Conv2d(16, 16, 3, padding=1),
nn.BatchNorm2d(16),
nn.ReLU(),
nn.MaxPool2d(2,2),
nn.Dropout(0.2)
)
self.vgg2 = nn.Sequential(
nn.Conv2d(16, 32, 3, padding=1),
nn.BatchNorm2d(32),
nn.ReLU(),
nn.Conv2d(32, 32, 3, padding=1),
nn.BatchNorm2d(32),
nn.ReLU(),
nn.MaxPool2d(2,2),
nn.Dropout(0.2)
)
self.vgg3 = nn.Sequential(
nn.Conv2d(32, 64, 3, padding=1),
nn.BatchNorm2d(64),
nn.ReLU(),
nn.Conv2d(64, 64, 3, padding=1),
nn.BatchNorm2d(64),
nn.ReLU(),
nn.MaxPool2d(2,2),
nn.Dropout(0.2)
)
self.fc1 = nn.Linear(4 * 4 * 64, 4096)
self.relu = nn.ReLU()
self.fc2 = nn.Linear(4096, 4096)
self.fc3 = nn.Linear(4096, 10)
self.softmax = nn.Softmax()
self.dropout = nn.Dropout(0.5)
def forward(self, x):
x = self.vgg3(self.vgg2(self.vgg1(x)))
x = nn.Flatten()(x)
x = self.relu(self.fc1(x))
x = self.dropout(x)
x = self.relu(self.fc2(x))
x = self.dropout(x)
x = self.softmax(self.fc3(x))
return x
</code></pre>
<p>Then I train it and visualize loss and accuracy:</p>
<pre><code>import matplotlib.pyplot as plt
from IPython.display import clear_output
import numpy as np
import torch
import torch.nn as nn
def plot_history(train_history, val_history, title='loss'):
plt.figure()
plt.title('{}'.format(title))
plt.plot(train_history, label='train', zorder=1)
points = np.array(val_history)
steps = list(range(0, len(train_history) + 1, int(len(train_history) / len(val_history))))[1:]
plt.scatter(steps, val_history, marker='*', s=180, c='red', label='val', zorder=2)
plt.xlabel('train steps')
plt.legend(loc='best')
plt.grid()
plt.show()
def train_model(model, optimizer, train_dataloader, test_dataloader):
criterion = nn.CrossEntropyLoss()
train_loss_log = []
train_acc_log = []
val_loss_log = []
val_acc_log = []
for epoch in range(NUM_EPOCH):
model.train()
train_loss = 0.
train_size = 0
train_acc = 0.
for inputs, labels in train_dataloader:
inputs, labels = inputs.to(device), labels.to(device)
optimizer.zero_grad()
y_pred = model(inputs)
loss = criterion(y_pred, labels)
loss.backward()
optimizer.step()
train_loss += loss.item()
train_size += y_pred.size(0)
train_loss_log.append(loss.data / y_pred.size(0))
_, pred_classes = torch.max(y_pred, 1)
train_acc += (pred_classes == labels).sum().item()
train_acc_log.append(np.mean((pred_classes == labels).cpu().numpy()))
        # validation block
val_loss = 0.
val_size = 0
val_acc = 0.
model.eval()
with torch.no_grad():
for inputs, labels in test_dataloader:
inputs, labels = inputs.to(device), labels.to(device)
y_pred = model(inputs)
loss = criterion(y_pred, labels)
val_loss += loss.item()
val_size += y_pred.size(0)
_, pred_classes = torch.max(y_pred, 1)
val_acc += (pred_classes == labels).sum().item()
val_loss_log.append(val_loss/val_size)
val_acc_log.append(val_acc/val_size)
clear_output()
plot_history(train_loss_log, val_loss_log, 'loss')
plot_history(train_acc_log, val_acc_log, 'accuracy')
print('Train loss:', train_loss / train_size)
print('Train acc:', train_acc / train_size)
print('Val loss:', val_loss / val_size)
print('Val acc:', val_acc / val_size)
</code></pre>
<p>Then I train the model:</p>
<pre><code>import torch.optim as optim

first_model = FirstModel()
first_model.to(device)
optimizer = optim.RMSprop(first_model.parameters(), lr=0.001, momentum=0.9)
train_model(first_model, optimizer, train_dataloader, test_dataloader)
</code></pre>
<p>The loss and accuracy do not change (the accuracy stays at around 0.1). However, if the optimizer is SGD with momentum, everything works fine (loss and accuracy change). I've already tried changing the momentum and the lr, but it does not help.</p>
<p>What should be fixed? I would be grateful for any possible advice!</p> | <p>Try to decrease the learning rate further. If there is still no effect on the accuracy and loss, then change the optimizer to Adam or something else and play with different learning rates.</p> | python|pytorch|vgg-net|rms|sgd | 1
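<p>A concrete sketch of that advice, reusing the names from the question (the learning-rate values are only illustrative starting points to tune, not a verified fix for this particular model):</p>
<pre><code>import torch.optim as optim

# try RMSprop with a smaller learning rate first
optimizer = optim.RMSprop(first_model.parameters(), lr=1e-4, momentum=0.9)

# or switch to Adam and tune from there
optimizer = optim.Adam(first_model.parameters(), lr=1e-4)

train_model(first_model, optimizer, train_dataloader, test_dataloader)
</code></pre>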
999 | 64,229,511 | How to write tensorflow events to google cloud storage from Docker container inside VM instance | <p>I've created a VM instance on Google Compute Engine. After uploading my project and building my image, I ran into my container and authorized access to Google Cloud Platform with my service account:</p>
<pre><code>gcloud auth activate-service-account [email protected] --key-file=mykey.json
</code></pre>
<p>so that I can access my google cloud storage (I did a test with <code>gsutil cp</code> on my bucket and it works). Now I try to execute a tensorflow python script like this:</p>
<pre><code>python object_detection/model_main_tf2.py \
--pipeline_config_path=/raccoon/config.config \
--model_dir=gs://my-bucket/ \
--num_train_steps=10
</code></pre>
<p>specifying my bucket as model_dir so that checkpoints and events are stored there (in order to monitor the progress of the training with TensorBoard from my laptop).</p>
<p>The problem is that I get the following permission error from TensorFlow:</p>
<pre><code>tensorflow.python.framework.errors_impl.PermissionDeniedError:
Error executing an HTTP request: HTTP response code
403 with body '{
"error": {
"code": 403,
"message": "Insufficient Permission",
"errors": [
{
"message": "Insufficient Permission",
"domain": "global",
"reason": "insufficientPermissions"
}
]
}
}
'
when initiating an upload to gs://my-bucket/train/events.out.tfevents.1601998426.266
1f74c3966.450.2928.v2
Failed to flush 1 events to gs://my-bucket/train/events.out.tfevents.1601998426.2661f
74c3966.450.2928.v2
Flushing first event.
Could not initialize events writer. [Op:CreateSummaryFileWriter]
</code></pre>
<p>The <code>train</code> directory exists on my bucket and as I said before the following command is working:</p>
<pre><code>gsutil cp test.txt gs://my-bucket/train/.
</code></pre>
<p>Am I missing something?</p> | <p>Authenticating <code>gcloud</code> just ensures that future <code>gcloud</code> commands are authenticated. Your script (likely) doesn't use <code>gcloud</code> and thus isn't authenticated.</p>
<p>Instead, if you have service account credentials in a JSON file, you can specify it via the <code>GOOGLE_APPLICATION_CREDENTIALS</code> environment variable to have TensorFlow be able to read/write to GCS via <code>gs://</code> URLs.</p> | python|authentication|google-cloud-platform|google-cloud-storage|tensorflow2.0 | 1 |
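<p>For example, a minimal sketch inside the container, reusing the key file and command from the question (adjust the key path to wherever the JSON file actually lives):</p>
<pre><code># make the service-account key visible to TensorFlow's GCS filesystem layer
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/mykey.json

python object_detection/model_main_tf2.py \
    --pipeline_config_path=/raccoon/config.config \
    --model_dir=gs://my-bucket/ \
    --num_train_steps=10
</code></pre>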