Dataset schema (column name, dtype, and observed min/max or string-length range); the rows below list each record's values in this column order, separated by `|`:

| Column | Dtype | Min | Max |
|---|---|---|---|
| GUI and Desktop Applications | int64 | 0 | 1 |
| A_Id | int64 | 5.3k | 72.5M |
| Networking and APIs | int64 | 0 | 1 |
| Python Basics and Environment | int64 | 0 | 1 |
| Other | int64 | 0 | 1 |
| Database and SQL | int64 | 0 | 1 |
| Available Count | int64 | 1 | 13 |
| is_accepted | bool | 2 classes | |
| Q_Score | int64 | 0 | 1.72k |
| CreationDate | string | length 23 | length 23 |
| Users Score | int64 | -11 | 327 |
| AnswerCount | int64 | 1 | 31 |
| System Administration and DevOps | int64 | 0 | 1 |
| Title | string | length 15 | length 149 |
| Q_Id | int64 | 5.14k | 60M |
| Score | float64 | -1 | 1.2 |
| Tags | string | length 6 | length 90 |
| Answer | string | length 18 | length 5.54k |
| Question | string | length 49 | length 9.42k |
| Web Development | int64 | 0 | 1 |
| Data Science and Machine Learning | int64 | 1 | 1 |
| ViewCount | int64 | 7 | 3.27M |
0 | 40,284,356 | 0 | 1 | 0 | 0 | 1 | true | 0 |
2016-10-27T12:12:00.000
| 4 | 1 | 0 |
python: split matrix in hermitian and anti-hermitian part
| 40,284,296 | 1.2 |
python,numpy,matrix
|
The Hermitian part is (A + A.T.conj())/2, the anti-Hermitian part is (A - A.T.conj())/2 (it is quite easy to prove).
If A = B + C with B Hermitian and C anti-Hermitian, you can take the conjugate transpose (I'll denote it *) on both sides, use its linearity and obtain A* = B - C, from which the values of B and C follow easily.
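A minimal NumPy sketch of the decomposition (the matrix A is a made-up example):

```python
import numpy as np

A = np.array([[1 + 2j, 3 - 1j],
              [0 + 1j, 2 + 0j]])

B = (A + A.T.conj()) / 2   # Hermitian part: B == B.T.conj()
C = (A - A.T.conj()) / 2   # anti-Hermitian part: C == -C.T.conj()

assert np.allclose(A, B + C)
assert np.allclose(B, B.T.conj())
assert np.allclose(C, -C.T.conj())
```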
|
Imagine I have a numpy array in python that has complex numbers as its elements.
I would like to know if it is possible to split any matrix of this kind into a Hermitian and an anti-Hermitian part. My intuition says that this is possible, similar to the fact that any function can be split into an even and an odd part.
If this is indeed possible, how would you do this in python? So, I'm looking for a function that takes as input any matrix with complex elements and gives a Hermitian and an anti-Hermitian matrix as output such that the sum of the two outputs is the input.
(I'm working with python 3 in Jupyter Notebook).
| 0 | 1 | 309 |
0 | 40,293,068 | 0 | 0 | 0 | 0 | 1 | false | 4 |
2016-10-27T14:20:00.000
| 4 | 1 | 0 |
Fastest way to calculate many regressions in python?
| 40,287,113 | 0.664037 |
python,python-3.x,numpy,linear-regression
|
Some brief answers:
1) Calling statsmodels repeatedly is not the fastest way. If we just need parameters, prediction and residual and we have identical explanatory variables, then I usually just use params = pinv(x).dot(y) where y is 2 dimensional and calculate the rest from there. The problem is that inference, confidence intervals and similar require work, so unless speed is crucial and only a restricted set of results is required, statsmodels OLS is still more convenient.
This only works if all y and x have the same observations indices, no missing values and no gaps.
Aside: the setup is a multivariate linear model, which will be supported by statsmodels in the hopefully not-too-distant future.
2) and 3) The fast simple linear algebra of case 1) does not work if there are missing cells or no complete overlap of observation (indices). In the analog to panel data, the first case requires "balanced" panels, the other cases imply "unbalanced" data. The standard way is to stack the data with the explanatory variables in a block-diagonal form. Since this increases the memory by a large amount, using sparse matrices and sparse linear algebra is better. It depends on the specific cases whether building and solving the sparse problem is faster than looping over individual OLS regressions.
Specialized code: (Just a thought):
In case 2), with not fully overlapping or cellwise-missing values, we would still need to calculate all x'x and x'y matrices for all y, i.e. 500 of those. Given that you only have two regressors, 500 x 2 x 2 would still not require much memory. So it might be possible to calculate params, predictions and residuals by using the non-missing mask as weights in the cross-product calculations.
numpy has vectorized linalg.inv, as far as I know. So, I think, this could be done with a few vectorized calculations.
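For illustration, a hedged sketch of case 1) with made-up data matching the question's shapes (2000 observations, 2 regressors, 500 responses):

```python
import numpy as np

rng = np.random.RandomState(0)
x = rng.randn(2000, 2)      # shared explanatory variables
y = rng.randn(2000, 500)    # all 500 response series as columns

params = np.linalg.pinv(x).dot(y)   # (2 x 500): one coefficient pair per y
fitted = x.dot(params)
resid = y - fitted
```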
|
I think I have a pretty reasonable idea of how to go about accomplishing this, but I'm not 100% sure on all of the steps. This question is mostly intended as a sanity check to ensure that I'm doing this in the most efficient way, and that my math is actually sound (since my statistics knowledge is not completely perfect).
Anyways, some explanation about what I'm trying to do:
I have a lot of time series data that I would like to perform some linear regressions on. In particular, I have roughly 2000 observations on 500 different variables. For each variable, I need to perform a regression using two explanatory variables (two additional vectors of roughly 2000 observations). So for each of 500 different Y's, I would need to find a and b in the following regression Y = aX_1 + bX_2 + e.
Up until this point, I have been using the OLS function in the statsmodels package to perform my regressions. However, as far as I can tell, if I wanted to use the statsmodels package to accomplish my problem, I would have to call it hundreds of times, which just seems generally inefficient.
So instead, I decided to revisit some statistics that I haven't really touched in a long time. If my knowledge is still correct, I can put all of my observations into one large Y matrix that is roughly 2000 x 500. I can then stick my explanatory variables into an X matrix that is roughly 2000 x 2, and get the results of all 500 of my regressions by calculating (X'X)^-1 (X'Y). If I do this using basic numpy stuff (matrix multiplication using * and inverses using matrix.I), I'm guessing it will be much faster than doing hundreds of statsmodels OLS calls.
Here are the questions that I have:
Is the numpy stuff that I am doing faster than the earlier method of calling statsmodels many times? If so, is it the fastest/most efficient way to accomplish what I want? I'm assuming that it is, but if you know of a better way then I would be happy to hear it. (Surely I'm not the first person to need to calculate many regressions in this way.)
How do I deal with missing data in my matrices? My time series data is not going to be nice and complete, and will be missing values occasionally. If I just try to do regular matrix multiplication in numpy, the NA values will propagate and I'll end up with a matrix of mostly NAs as my end result. If I do each regression independently, I can just drop the rows containing NAs before I perform my regression, but if I do this on the large 2000 x 500 matrix I will end up dropping actual, non-NA data from some of my other variables, and I obviously don't want that to happen.
What is the most efficient way to ensure that my time series data actually lines up correctly before I put it into the matrices in the first place? The start and end dates for my observations are not necessarily the same, and some series might have days that others do not. If I were to pick a method for doing this, I would put all the observations into a pandas data frame indexed by the date. Then pandas will end up doing all of the work aligning everything for me and I can extract the underlying ndarray after it is done. Is this the best method, or does pandas have some sort of overhead that I can avoid by doing the matrix construction in a different way?
| 0 | 1 | 3,105 |
0 | 40,290,127 | 0 | 1 | 0 | 0 | 1 | false | 5 |
2016-10-27T16:35:00.000
| 1 | 5 | 0 |
Converting a 3D List to a 3D NumPy array
| 40,289,943 | 0.039979 |
python,list,python-3.x,numpy
|
Looping and adding is likely better, since you want to preserve the structure of the original. Plus, the error you mentioned indicates that you would need to flatten the numpy array and then add to each element. Although numpy operations tend to be faster than list operations, converting, flattening, and reverting is cumbersome and will probably offset any gains.
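A hedged sketch of the looping approach, preserving the jagged structure via recursion:

```python
def add_scalar(x, n):
    # Recurse into sublists; add n at the leaves.
    if isinstance(x, list):
        return [add_scalar(item, n) for item in x]
    return x + n

A = [[[0, 0, 0], [0, 0, 0], [0, 0, 0]], [[0], [0], [0]]]
print(add_scalar(A, 4))
# [[[4, 4, 4], [4, 4, 4], [4, 4, 4]], [[4], [4], [4]]]
```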
|
Currently, I have a 3D Python list in jagged array format.
A = [[[0, 0, 0], [0, 0, 0], [0, 0, 0]], [[0], [0], [0]]]
Is there any way I could convert this list to a NumPy array, in order to use certain NumPy array operators such as adding a number to each element?
A + 4 would give [[[4, 4, 4], [4, 4, 4], [4, 4, 4]], [[4], [4], [4]]].
Assigning B = numpy.array(A) then attempting to B + 4 throws a type error.
TypeError: can only concatenate list (not "float") to list
Is a conversion from a jagged Python list to a NumPy array possible while retaining the structure (I will need to convert it back later), or is looping through the array and adding the required value the better solution in this case?
| 0 | 1 | 7,011 |
0 | 41,733,461 | 0 | 0 | 0 | 0 | 1 | false | 1 |
2016-10-28T01:49:00.000
| 1 | 1 | 0 |
How to extract words used for Doc2Vec
| 40,296,765 | 0.197375 |
python,nlp,gensim,doc2vec
|
Gensim's Word2Vec/Doc2Vec models don't store the corpus data – they only examine it, in multiple passes, to train up the model. If you need to retrieve the original texts, you should populate your own lookup-by-key data structure, such as a Python dict (if all your examples fit in memory).
Separately, in recent versions of gensim, your code will actually be doing 1,005 training passes over your taggeddocs, including many with a nonsensically/destructively negative alpha value.
By passing it into the constructor, you're telling the model to train itself, using your parameters and defaults, which include a default number of iter=5 passes.
You then do 200 more loops. Each call to train() will do the default 5 passes. And by decrementing alpha from 0.025 by 0.002 after each loop, the 200th loop will use an effective alpha of 0.025 - (199 * 0.002) = -0.373 - a negative value essentially telling the model to make a large correction in the opposite direction of improvement for each training example.
Just use the iter parameter to choose the desired number of passes. Let the class manage the alpha changes itself. If supplying the corpus when instantiating the model, no further steps are necessary. But if you don't supply the corpus at instantiation, you'll need to do model.build_vocab(tagged_docs) once, then model.train(tagged_docs) once.
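A hedged sketch of that recommended pattern against the gensim API of that era (iter was later renamed epochs); cleaned_tweets stands in for the question's real data, and the dict provides the tag-to-words lookup from the first paragraph:

```python
import gensim
from gensim.models.doc2vec import TaggedDocument

cleaned_tweets = ['first example tweet', 'second example tweet',
                  'third example tweet']   # stand-in data

taggeddocs = [TaggedDocument(words=gensim.utils.to_unicode(t).split(),
                             tags=[u'SENT_{:d}'.format(i)])
              for i, t in enumerate(cleaned_tweets) if len(t) > 2]

# Let the model manage alpha decay itself; iter sets the passes.
model = gensim.models.Doc2Vec(taggeddocs, dm=0, size=20, iter=20,
                              alpha=0.025, min_alpha=0.0001, min_count=0)

# The model does not store the corpus, so keep your own lookup:
words_by_tag = {doc.tags[0]: doc.words for doc in taggeddocs}
for label, score in model.docvecs.most_similar('SENT_2'):
    print(label, ' '.join(words_by_tag[label]))
```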
|
I am preparing a Doc2Vec model using tweets. Each tweet's word array is considered a separate document and is labeled "SENT_1", "SENT_2", etc.
taggeddocs = []
for index,i in enumerate(cleaned_tweets):
if len(i) > 2: # Non empty tweets
sentence = TaggedDocument(words=gensim.utils.to_unicode(i).split(), tags=[u'SENT_{:d}'.format(index)])
taggeddocs.append(sentence)
# build the model
model = gensim.models.Doc2Vec(taggeddocs, dm=0, alpha=0.025, size=20, min_alpha=0.025, min_count=0)
for epoch in range(200):
if epoch % 20 == 0:
print('Now training epoch %s' % epoch)
model.train(taggeddocs)
model.alpha -= 0.002 # decrease the learning rate
model.min_alpha = model.alpha # fix the learning rate, no decay
I wish to find tweets similar to a given tweet, say "SENT_2". How?
I get labels for similar tweets as:
sims = model.docvecs.most_similar('SENT_2')
for label, score in sims:
print(label)
It prints as:
SENT_4372
SENT_1143
SENT_4024
SENT_4759
SENT_3497
SENT_5749
SENT_3189
SENT_1581
SENT_5127
SENT_3798
But given a label, how do I get original tweet words/sentence? E.g. what are the tweet words of, say, "SENT_3497". Can I query this to Doc2Vec model?
| 0 | 1 | 1,260 |
0 | 40,315,040 | 0 | 0 | 0 | 0 | 1 | true | 0 |
2016-10-28T06:04:00.000
| 2 | 2 | 0 |
How to handle nominal data in scikit learn, python?
| 40,298,966 | 1.2 |
python,scikit-learn,data-mining,categorical-data
|
It depends on what type of model you're using. make_pipeline(LabelEncoder(), OneHotEncoder()) or pd.get_dummies is the usual choice, and can work well with classifiers from either linear_model or tree. LabelEncoder on its own would be another choice, although this won't work well unless there's a natural ordering on your labels (like level of education or something) or unless you're using very deep trees, which are able to separate individual labels.
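A small hedged sketch of the pd.get_dummies route (the director names are made up):

```python
import pandas as pd

df = pd.DataFrame({'director': ['Nolan', 'Scott', 'Nolan'],
                   'budget': [100, 80, 150]})
X = pd.get_dummies(df, columns=['director'])
print(X.columns.tolist())
# ['budget', 'director_Nolan', 'director_Scott']
```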
|
I am new to data mining. I have a data set which includes directors' names. What is the right way to convert them to something that Scikit learn estimators can use without problem?
From what I found on the internet I thought that sklearn.preprocessing.LabelEncoder is the right choice.
| 0 | 1 | 861 |
0 | 40,299,648 | 0 | 1 | 0 | 0 | 1 | false | 0 |
2016-10-28T06:46:00.000
| 0 | 2 | 0 |
Python groupby threshold
| 40,299,538 | 0 |
python,group-by,itertools
|
Bucket them. You'd need to manually work out the breaks in advance. Can you sort them in advance? That would make it easier.
Actually, if you use log, then a multiplicative threshold turns into a constant threshold, e.g. 0.98..1.02 in log-land ~= (-0.02, +0.02).
So, use the log of all your numbers.
You'll still need to bucket them before doing groupby.
If you want code, give us a better (random-seeded) reproducible example that has more numbers testing the corner-cases.
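Still, a hedged sketch of the sort-then-bucket idea with a 2% multiplicative threshold, using the list from the question:

```python
lst = [1, 500, 19885, 19886, 19895, 90000000]

groups = []
for x in sorted(lst):
    # Start a new group when x exceeds the previous value by more than 2%.
    if groups and x <= groups[-1][-1] * 1.02:
        groups[-1].append(x)
    else:
        groups.append([x])

print(groups)  # [[1], [500], [19885, 19886, 19895], [90000000]]
```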
|
I have a list of numbers and I need to group it. itertools.groupby works perfectly for sequences of equal numbers, but I need the same behavior for numbers within a threshold (2-3%).
E.X: lst = [1, 500, 19885, 19886, 19895, 90000000]
and I expect [[1], [500], [19885, 19886, 19895], [90000000]]
Can you suggest me something?
| 0 | 1 | 415 |
0 | 40,310,470 | 0 | 0 | 0 | 0 | 1 | false | 3 |
2016-10-28T16:55:00.000
| 2 | 2 | 0 |
cannot sum rows that match a regular expression in pandas / python
| 40,309,777 | 0.197375 |
python,regex,pandas
|
The problem is that the match function does not return True when it matches - it returns a match object. Pandas cannot add these match objects because they are not integer values. The reason you get a sum when using 'not' is that it returns the boolean True, and pandas can sum True values (counting them as 1) and return a number.
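A hedged sketch of the usual fix - cast the match object to bool so the apply returns True/False, which pandas can sum (the data is a stand-in):

```python
import re
import pandas as pd

df = pd.DataFrame({'report_date': ['2016-1-1', 'not a date', '2016-10-28']})
pattern = r'[0-9]{4}-[0-9]{1,2}-[0-9]{1,2}'

n_match = df.report_date.apply(lambda x: bool(re.match(pattern, x))).sum()
print(n_match)  # 2
```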
|
I can find the number of rows in a column in a pandas dataframe that do NOT follow a pattern but not the number of rows that follow the very same pattern!
This works:
df.report_date.apply(lambda x: (not re.match(r'[0-9]{4}-[0-9]{1,2}-[0-9]{1,2}', x))).sum()
This does not: removing 'not' does not tell me how many rows match but raises a TypeError. Any idea why that would be the case?
df.report_date.apply(lambda x: (re.match(r'[0-9]{4}-[0-9]{1,2}-[0-9]{1,2}', x))).sum()
| 0 | 1 | 883 |
0 | 40,351,277 | 0 | 0 | 0 | 0 | 1 | false | 0 |
2016-10-30T05:05:00.000
| 0 | 1 | 0 |
How to do matmul operation in specific dimension in tensorflow
| 40,326,169 | 0 |
python,tensorflow,deep-learning
|
I would first reshape your Tensor to be (sequence_length * batch_size, word_dim), do the matmul to get (sequence_length * batch_size, hidden_dim), then reshape again to get (sequence_length, batch_size, hidden_dim). There is no copying involved with reshape(), and this is equivalent to multiplying each of the batch_size matrices individually if you only have one matrix to multiply them with.
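A hedged sketch of that reshape-matmul-reshape pattern (TF 1.x-era API assumed; all sizes made up):

```python
import tensorflow as tf

seq_len, batch, word_dim, hidden_dim = 5, 3, 8, 16
x = tf.placeholder(tf.float32, [seq_len, batch, word_dim])
w = tf.Variable(tf.random_normal([word_dim, hidden_dim]))

x2d = tf.reshape(x, [-1, word_dim])               # (seq_len*batch, word_dim)
h2d = tf.matmul(x2d, w)                           # (seq_len*batch, hidden_dim)
h = tf.reshape(h2d, [seq_len, batch, hidden_dim]) # back to 3D
```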
|
I have a 3D tensor (sequence_length, batch_size, word_dim), and I need to do a matmul operation on the word_dim dimension so that I can change the tensor into (sequence_length, batch_size, hidden_dim). It seems that the matmul operation can only be used on 2D tensors, and I cannot change the 3D tensor into 2D because of the batch_size. What can I do?
| 0 | 1 | 118 |
0 | 52,020,891 | 0 | 0 | 0 | 0 | 1 | false | 4 |
2016-10-30T12:52:00.000
| 0 | 2 | 0 |
Multiscale CNN - Keras Implementation
| 40,329,307 | 0 |
python,neural-network,concatenation,convolution,keras
|
I do not understand why you would have 3 CNNs, because you would mostly get the same results as with a single CNN; maybe you could train faster.
Perhaps you could also do pooling and some ResNet-style operations (I guess this could prove similar to what you want).
Nevertheless, for each CNN you need a cost function in order to optimize the "heuristic" you use (e.g. to improve recognition). Also, you could do something as in neural style transfer, in which you compare results between several "targets" (the content and the style matrices); or simply train 3 CNNs, then cut off the last layers (or freeze them) and train again with the already-trained weights, but now with your target FC layer...
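For reference, a hedged sketch of the three-branch architecture the question describes, in the Keras 2 functional API (all input sizes and the class count are assumptions):

```python
from keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense, concatenate
from keras.models import Model

def branch(shape):
    inp = Input(shape=shape)
    x = Conv2D(32, (3, 3), activation='relu')(inp)
    x = MaxPooling2D((2, 2))(x)
    return inp, Flatten()(x)

in1, f1 = branch((32, 32, 3))     # one branch per scale
in2, f2 = branch((64, 64, 3))
in3, f3 = branch((128, 128, 3))

merged = concatenate([f1, f2, f3])            # join the final outputs
out = Dense(10, activation='softmax')(merged) # FC layer; 10 classes assumed
model = Model(inputs=[in1, in2, in3], outputs=out)
model.compile(optimizer='adam', loss='categorical_crossentropy')
```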
|
I want to implement a multiscale CNN in python. My aim is to use three different CNNs for three different scales and concatenate the final outputs of the final layers and feed them to a FC layer to take the output predictions.
But I don't understand how can I implement this. I know how to implement a single scale CNN.
Could anyone help me in this?
| 0 | 1 | 1,527 |
0 | 40,340,683 | 0 | 0 | 0 | 0 | 1 | true | 1 |
2016-10-31T09:20:00.000
| 1 | 1 | 0 |
Plot maximum of whats remaining on plot after xlim
| 40,339,355 | 1.2 |
python,matplotlib,contour
|
Without providing any code it's hard to give you a code example of what you should do, but I'm assuming you are providing contourf with some 2D numpy array of values that drives the visualization. What I would suggest is to set the x-limit in that data structure rather than providing the limit to matplotlib. If X is your data structure and the x-axis runs along its second dimension, then just do plt.contourf(X[:, 10:100]).
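A hedged sketch of that idea with made-up data (here the x coordinates are just the column indices):

```python
import numpy as np
import matplotlib.pyplot as plt

Z = np.random.rand(50, 200)      # stand-in for the data behind contourf
x = np.arange(Z.shape[1])
y = np.arange(Z.shape[0])

keep = (x >= 10) & (x <= 100)    # slice instead of xlim(10, 100)
plt.contourf(x[keep], y, Z[:, keep])
plt.colorbar()                   # now scaled to the remaining data only
plt.show()
```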
|
I am plotting a contour on matplotlib using the contourf command.
I get huge numbers at the start of frequency, but lower peaks after that.
xlim only works in hiding the higher numbers - but I want the lower peaks to become the maximum on the colorbar (not shown in my images).
How do I rescale the contour after xlim has hidden the unrequired part? Basically, the light blue (cool) portion of the plot should become the red (hot) area after applying xlim(10,100).
| 0 | 1 | 30 |
0 | 40,349,994 | 0 | 1 | 0 | 0 | 1 | false | 0 |
2016-10-31T20:12:00.000
| 0 | 1 | 0 |
How to convert quantlib.time.date.Date to pandas.tslib.Timestamp
| 40,349,868 | 0 |
python,pandas,quantlib
|
Looks like something like this might work:
dataframe.set_index = [ pandas.to_datetime(str(a)) for a in list(data.keys()) ]
but I get the following error:
KeyError: Timestamp('2016-09-27 00:00:00')
Press any key to continue . . .
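Note that set_index is a method, so assigning to it (dataframe.set_index = [...]) only overwrites the attribute without changing the index. A hedged sketch of the intended conversion - data and dataframe are assumed from the question, and the str() round-trip assumes QuantLib dates print in a form pandas can parse:

```python
import pandas as pd

# data.keys() yields quantlib.time.date.Date objects (from the question)
idx = pd.DatetimeIndex([pd.to_datetime(str(d)) for d in data.keys()])
dataframe.index = idx   # or: dataframe = dataframe.set_index(idx)
```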
|
I have a list data.keys() which returns
dict_keys([2016-09-27, 2016-09-28, 2016-09-29, 2016-09-30, 2016-09-26])
Data type for each of the element is quantlib.time.date.Date
How do I create an index for pandas with the above so that the index is of type pandas.tslib.Timestamp?
| 0 | 1 | 411 |
0 | 40,370,495 | 0 | 0 | 0 | 0 | 1 | true | 0 |
2016-11-01T21:35:00.000
| 1 | 1 | 0 |
Python MLPClassifier Value Error
| 40,369,042 | 1.2 |
python,machine-learning,scikit-learn,artificial-intelligence
|
Your data does not make sense from scikit-learn's perspective of what is expected in the .fit call. The feature vectors are supposed to form a matrix of size N x d, where N is the number of data points and d the number of features, and your second variable should hold the labels, so it should be a vector of length N (or N x k, where k is the number of outputs/labels per point). Whatever is represented in your variables, their sizes do not match what they should represent.
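A hedged sketch of the expected shapes - each sample's one-hot brand vector and scaled counts concatenated into one flat row of X, with one class label per sample in y (all numbers are stand-ins):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

n_samples, n_brands, n_apps = 6, 4, 3
rng = np.random.RandomState(1)

brands = np.eye(n_brands)[rng.randint(n_brands, size=n_samples)]
apps = rng.randn(n_samples, n_apps)

X = np.hstack([brands, apps])           # shape (6, 7): N x d
y = np.array([0, 1, 0, 1, 0, 1])        # length N: one label per sample

clf = MLPClassifier(solver='lbfgs', alpha=1e-5,
                    hidden_layer_sizes=(5, 2), random_state=1)
clf.fit(X, y)
```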
|
I'm currently trying to train the MLPClassifier implemented in sklearn...
When i try to train it with the given values i get this error:
ValueError: setting an array element with a sequence.
The format of the feature_vector is
[ [one_hot_encoded brandname], [different apps scaled to mean 0 and variance 1] ]
Does anybody know what I'm doing wrong ?
Thank you!
feature_vectors:
[
array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]),
array([ 0.82211852, -0.22976818, -0.22976818, -0.22976818, -0.22976818,
-0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818,
-0.22976818, -0.22976818, 4.45590895, -0.22976818, -0.22976818,
-0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818,
-0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818,
-0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818,
-0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818,
-0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818,
-0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818,
-0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818,
-0.22976818, 0.3439882 , -0.22976818, -0.22976818, -0.22976818,
4.93403927, -0.22976818, -0.22976818, -0.22976818, 0.63086639,
1.10899671, -0.22976818, -0.22976818, -0.22976818, -0.22976818,
-0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818,
-0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818,
-0.22976818, -0.22976818, -0.22976818, 1.58712703, -0.22976818,
1.77837916, -0.22976818, -0.22976818, -0.22976818, -0.22976818,
-0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818,
-0.22976818, -0.22976818, 2.16088342, -0.22976818, 2.16088342,
-0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818,
-0.22976818, -0.22976818, -0.22976818, 9.42846428, -0.22976818,
-0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818,
0.91774459, -0.22976818, -0.22976818, 4.16903076, -0.22976818,
-0.22976818, -0.22976818, -0.22976818, -0.22976818, 2.44776161,
-0.22976818, -0.22976818, -0.22976818, 1.96963129, 1.96963129,
1.96963129, -0.22976818, -0.22976818, -0.22976818, -0.22976818,
-0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818,
-0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818,
-0.22976818, -0.22976818, -0.22976818, -0.22976818, 7.13343874,
5.98592598, -0.22976818, -0.22976818, -0.22976818, -0.22976818,
-0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818,
-0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818,
3.02151799, 4.26465682, -0.22976818, -0.22976818, -0.22976818,
-0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818,
-0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818,
-0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818,
-0.22976818, 2.25650948, -0.22976818, -0.22976818, -0.22976818,
-0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818,
-0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818,
-0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818,
-0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818,
-0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818,
-0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818,
-0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818,
1.30024884, -0.22976818, -0.22976818, -0.22976818, -0.22976818,
-0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818,
-0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818,
-0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818,
-0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818,
-0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818,
-0.22976818, -0.22976818, -0.22976818, 4.74278714, -0.22976818,
-0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818,
-0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818,
-0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818,
-0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818,
-0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818,
-0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818,
-0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818,
-0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818,
-0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818,
-0.22976818, -0.22976818, -0.22976818, -0.22976818, 0.3439882 ,
-0.22976818, 0.3439882 , -0.22976818, -0.22976818, -0.22976818,
-0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818,
-0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818,
-0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818,
-0.22976818, -0.22976818, 0.53524033, -0.22976818, -0.22976818,
-0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818,
-0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818,
-0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818,
-0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818,
-0.22976818, -0.22976818, -0.22976818, -0.22976818, 3.49964831,
-0.22976818, -0.22976818, -0.22976818, -0.22976818, -0.22976818])
]
g_a_group:
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0.]
MLP:
from sklearn.neural_network import MLPClassifier
clf = MLPClassifier(solver='lbfgs', alpha=1e-5,
hidden_layer_sizes=(5, 2), random_state=1)
clf.fit(feature_vectors, g_a_group)
| 0 | 1 | 437 |
0 | 40,370,096 | 0 | 0 | 0 | 0 | 1 | false | 0 |
2016-11-01T23:07:00.000
| 0 | 1 | 0 |
Generate new tensorflow tensor according to the element index of original tensor
| 40,370,004 | 0 |
python-2.7,tensorflow
|
You should just make your label (y) in your reduced sum format (i.e. 3 bits), and train to that label. The neural net should be smart enough to adjust the weights to imitate your reduce_sum logic.
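That said, the construction from the question is also possible - a hedged sketch (TF 1.x-era API assumed; tf.stack was called tf.pack in very old releases):

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, [7, 7])
w = tf.Variable(tf.random_normal([7, 1]))
b = tf.Variable(tf.zeros([1, 1]))
y = tf.matmul(x, w) + b                      # shape (7, 1)

new_y = tf.stack([tf.reduce_sum(y[0:3]),
                  tf.reduce_sum(y[3:5]),
                  tf.reduce_sum(y[5:])])     # shape (3,), usable in a loss
```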
|
I have a question about tensorflow tensor.
If I have a NeuralNet like y=xw+b as an example.
then x is placeholder([7,7] dims), w is Variable([7,1]) and b is Variable([1,1])
So, y is tensorflow tensor with [7,1] dims.
Then, in this case, can I make a new tensor like
new_y = [tf.reduce_sum(y[0:3]), tf.reduce_sum(y[3:5]), tf.reduce_sum(y[5:])]
and use it for training step?
If possible, how can I make it?
| 0 | 1 | 54 |
0 | 40,390,555 | 0 | 0 | 0 | 0 | 1 | false | 0 |
2016-11-02T20:39:00.000
| -1 | 2 | 0 |
Clustering before regression - recommender system
| 40,389,402 | -0.099668 |
python,machine-learning,scikit-learn,k-means
|
Clustering users makes sense. But if your only feature is the rating, I don't think it could produce a useful model for prediction. Below are my assumptions to make this justification:
The quality of movies should follow a Gaussian distribution.
If we look at the rating distribution of a typical user, it should also be roughly Gaussian.
I don't exclude the possibility that a few users only give ratings when they see a bad movie (thus all low ratings); and vice versa. But on a large scale of users, this should be unusual behavior.
Thus I can imagine that after clustering, you get small groups of users in the two extreme cases; and most users are in the middle (because they share the gaussian-like rating behavior). Using this model, you probably get good results for users in the two small (extreme) groups; however for the majority of users, you cannot expect good predictions.
|
I have a file called train.dat which has three fields - userID, movieID and rating.
I need to predict the rating in the test.dat file based on this.
I want to know how I can use scikit-learn's KMeans to group similar users given that I have only one feature - rating.
Does this even make sense to do? After the clustering step, I could do a regression step to get the ratings for each user-movie pair in test.dat
Edit: I have some extra files which contain the actors in each movie, the directors and also the genres that the movie falls into. I'm unsure how to use these to start with and I'm asking this question because I was wondering whether it's possible to get a simple model working with just rating and then enhance it with the other data. I read that this is called content based recommendation. I'm sorry, I should've written about the other data files as well.
| 0 | 1 | 733 |
0 | 40,411,804 | 0 | 0 | 0 | 0 | 2 | false | 0 |
2016-11-03T10:21:00.000
| 0 | 2 | 0 |
python : facing memory issue in document clustering using sklearn
| 40,399,023 | 0 |
python,scikit-learn,cluster-analysis,tf-idf
|
Start small.
First cluster only 100,000 documents. Only once that works (because it probably won't) should you think about scaling up.
If you don't succeed in clustering the subset (and text clusters are usually pretty bad), then you won't fare well on the large set.
|
I am using TfIdfVectorizer of sklearn for document clustering. I have 20 million texts, for which i want to compute clusters. But calculating TfIdf matrix is taking too much time and system is getting stuck.
Is there any technique to deal with this problem ? is there any alternative method for this in any python module ?
| 0 | 1 | 261 |
0 | 40,399,357 | 0 | 0 | 0 | 0 | 2 | true | 0 |
2016-11-03T10:21:00.000
| 1 | 2 | 0 |
python : facing memory issue in document clustering using sklearn
| 40,399,023 | 1.2 |
python,scikit-learn,cluster-analysis,tf-idf
|
Well, a corpus of 20 million texts is very large, and without meticulous and comprehensive preprocessing or some good computing instances (i.e. a lot of memory and good CPUs), the TF-IDF calculation may take a lot of time.
What you can do:
Limit your text corpus to a few hundred thousand samples (say, 200,000 texts). Having many more texts might not introduce more variance than a much smaller (but reasonable) dataset.
Try to preprocess your texts as much as you can. A basic approach would be: tokenize your texts, remove stop words, apply word stemming, and use n-grams carefully.
Once you've done all these steps, see how much you've reduced the size of your vocabulary. It should be much smaller than the original one.
If your dataset is not too big, these steps might help you compute the TF-IDF much faster.
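One hedged alternative worth mentioning: a stateless HashingVectorizer keeps no vocabulary in memory, and MiniBatchKMeans clusters in small batches (the texts here are stand-ins):

```python
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.cluster import MiniBatchKMeans

texts = ['first document', 'second document', 'another text']
vec = HashingVectorizer(n_features=2**18, stop_words='english')
X = vec.transform(texts)               # sparse, fixed-width output

km = MiniBatchKMeans(n_clusters=2, random_state=0)
labels = km.fit_predict(X)
```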
|
I am using TfIdfVectorizer of sklearn for document clustering. I have 20 million texts, for which i want to compute clusters. But calculating TfIdf matrix is taking too much time and system is getting stuck.
Is there any technique to deal with this problem ? is there any alternative method for this in any python module ?
| 0 | 1 | 261 |
0 | 40,413,120 | 1 | 0 | 0 | 0 | 1 | true | 3 |
2016-11-03T20:48:00.000
| 3 | 1 | 0 |
Does EMR still have any advantages over EC2 for Spark?
| 40,410,975 | 1.2 |
python-3.x,apache-spark,amazon-ec2
|
This question boils down to the value of managed services, IMHO.
Running Spark as a standalone in local mode only requires that you get the latest Spark, untar it, cd to its bin path and then run spark-submit, etc.
However, creating a multi-node cluster that runs in cluster mode requires that you actually do real networking, configuring, tuning, etc. This means you've got to deal with IAM roles, Security groups, and there are subnet considerations within your VPC.
When you use EMR, you get a turnkey cluster in which you can 1-click install many popular applications (Spark included), and all of the Security Groups are already configured properly for network communication between nodes. You've got logging already set up and pointing at S3, easy SSH instructions, an already-installed apparatus for tunneling and viewing the various UIs, and visual usage metrics at the IO level, node level, and job-submission level. You also have the ability to create and run Steps - jobs that can be run in the command line of the driver node or as Spark applications that leverage the whole cluster. Then, on top of that, you can export that whole cluster, Steps included, copy-paste the CLI script into a recurring job via DataPipeline, and literally create an ETL pipeline in 60 seconds flat.
You wouldn't get any of that if you built it yourself in EC2. I know which one I would choose... EMR. But that's just me.
|
I know this question has been asked before but those answers seem to revolve around Hadoop. For Spark you don't really need all the extra Hadoop cruft. With the spark-ec2 script (available via GitHub for 2.0) your environment is prepared for Spark. Are there any compelling use cases (other than a far superior boto3 sdk interface) for running with EMR over EC2?
| 0 | 1 | 129 |
0 | 40,643,572 | 0 | 0 | 0 | 0 | 2 | false | 1 |
2016-11-03T21:14:00.000
| 0 | 2 | 0 |
How to classify imbalanced data in weka?
| 40,411,357 | 0 |
python,weka
|
Not sure about python, but in the GUI version you can use SpreadSubsample to reduce the class imbalance. If you feel that 'bad' is a good representation of the class, then you could experiment with different numbers of instances of 'good'.
To do this you need to select Filter ==> Supervised ==> Instance ==> SpreadSubsample ==> change the number of instances using 'max count'
|
I have an imbalanced training data and i am using logistic regression in weka to classify.
There are two classes good and bad. Good has 75000 instances and bad
3000. My test data has 10000 good data.
When I train, the model is more inclined to the good data, i.e. it classifies almost all bad instances as good. What should I do?
I tried to have 10000 good instances in training data instead of 75000 but still the problem is same.
| 0 | 1 | 1,258 |
0 | 40,837,930 | 0 | 0 | 0 | 0 | 2 | false | 1 |
2016-11-03T21:14:00.000
| 0 | 2 | 0 |
How to classify imbalanced data in weka?
| 40,411,357 | 0 |
python,weka
|
There are a couple of things that you could try.
Use Boosting (AdaBoostM1) so that the misclassified instances will be given extra weight.
Use weka.classifiers.meta.CostSensitiveClassifier and give the "bad" instances a higher weight than the "good" instances. Note: This will probably reduce your overall accuracy, but make your classifier do a better job of identifying the "bad" instances.
|
I have an imbalanced training data and i am using logistic regression in weka to classify.
There are two classes good and bad. Good has 75000 instances and bad
3000. My test data has 10000 good data.
When I train, the model is more inclined to the good data, i.e. it classifies almost all bad instances as good. What should I do?
I tried to have 10000 good instances in training data instead of 75000 but still the problem is same.
| 0 | 1 | 1,258 |
0 | 40,411,415 | 0 | 0 | 0 | 0 | 1 | true | 1 |
2016-11-03T21:15:00.000
| 0 | 1 | 0 |
Using numpy.nanargmin() in 2 dimensional matrix
| 40,411,374 | 1.2 |
python,python-3.x,numpy
|
NumPy arrays only have the argmin() attribute, but no nanargmin() attribute. So A.nanargmin() does not exist.
You can use numpy.argmin(A) and numpy.nanargmin(A) instead.
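A minimal sketch combining the two:

```python
import numpy as np

A = np.array([[np.nan, 2.0],
              [0.5, np.nan]])

flat = np.nanargmin(A)                  # index into the flattened array
print(np.unravel_index(flat, A.shape))  # (1, 0)
```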
|
I am trying to obtain the argmin of a numpy 2 dimensional array A which has nan values. Now the problem is:
numpy.nanargmin(A) returns only one index.
numpy.unravel_index(A.argmin(), A.shape) returns [0,0] because it has nan values.
And...
numpy.unravel_index(A.nanargmin(), A.shape) throws the error:
AttributeError                            Traceback (most recent call last)
<ipython-input> in <module>()
----> 1 np.unravel_index(dist.nanargmin(), dist.shape)
AttributeError: 'numpy.ndarray' object has no attribute 'nanargmin'
| 0 | 1 | 471 |
0 | 40,433,924 | 0 | 0 | 0 | 0 | 1 | false | 0 |
2016-11-05T01:26:00.000
| 2 | 1 | 0 |
Numpy fails to serve as a dependency for pandas
| 40,433,906 | 0.379949 |
python,pandas,numpy
|
I would recommend not building from source on Windows unless you really know what you're doing.
Also, don't mix conda and pip for numpy; numpy is treated specially in conda and really should work out of the box. If you get an error on import pandas there's likely something wrong with your PATH or PYTHONPATH.
I suggest that you just create an empty conda env, and install only pandas in it. That will pull in numpy. If that somehow does not work, let's see if we can help you debug that.
|
I was trying to use pandas (installed the binaries and dependencies using conda, then using pip, then built them using the no-binaries option); I'm still getting the error.
Numpy is available (1.11.2).
I understand some interface is not provided by numpy anymore.
Python version I am using is 2.7.11.
List of packages installed are bellow.
Error message:
C:.....Miniconda2\lib\site-packages\numpy\core\__init__.py:14: Warning:
Numpy built with MINGW-W64 on Windows 64 bits is experimental, and
only available for testing. You are advised not to use it for
production.
CRASHES ARE TO BE EXPECTED - PLEASE REPORT THEM TO NUMPY DEVELOPERS
  from . import multiarray
Traceback (most recent call last):
  File "io.py", line 2, in <module>
    from data import support
  File "....\support.py", line 3, in <module>
    import pandas
  File "....Miniconda2\lib\site-packages\pandas\__init__.py", line 18, in <module>
    raise ImportError("Missing required dependencies {0}".format(missing_dependencies))
ImportError: Missing required dependencies ['numpy']
| 0 | 1 | 1,990 |
0 | 40,446,697 | 0 | 1 | 0 | 0 | 1 | true | 0 |
2016-11-06T06:30:00.000
| 0 | 1 | 0 |
Why is it dataframe.head() in python and head(dataframe) in R? Why is python like this in general?
| 40,446,650 | 1.2 |
python
|
It's simple:
foo.bar() does the same thing as foo.__class__.bar(foo)
so it is a function, and the argument is passed to it, but the function is stored attached to the object via its class (type), so to speak. The foo.bar() notation is just shorthand for the above.
The advantage is that different functions of the same name can be attached to many objects, depending on the object type. So the caller of foo.bar() is calling whatever function is attached to the object by the name "bar". This is called polymorphism and can be used for all sorts of things, such as generic programming. Such functions are called methods.
The style is called object orientation, albeit object orientation as well as generic programming can also be achieved using more familiar looking function (method) call notation (e.g. multimethods in Common Lisp and Julia, or classes in Haskell).
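A minimal sketch of the equivalence described above:

```python
class DataFrameLike:
    def head(self, n=5):
        return 'first %d rows' % n

df = DataFrameLike()
print(df.head(2))                  # method-call notation
print(DataFrameLike.head(df, 2))   # the equivalent function-call notation
```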
|
Beginner here. Shouldn't the required variables be passed as arguments to the function? Why is it variable.function() in python?
| 0 | 1 | 139 |
0 | 40,603,239 | 0 | 0 | 0 | 0 | 1 | false | 3 |
2016-11-07T03:20:00.000
| 1 | 2 | 0 |
information retrieval evaluation python precision, recall, f score, AP,MAP
| 40,457,331 | 0.099668 |
python,information-retrieval,information-extraction
|
Evaluation has two essentials. The first is a test resource with a ranking of documents, or relevancy tags (relevant or not relevant) for them, for specific queries; this is built either from an experiment (like user clicks, mostly used when you have a running IR system) or through crowd-sourcing. The second essential part is the formula used to evaluate an IR system against that test collection.
So based on what you said, if you don't have a labeled test collection, you can't evaluate your system.
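Once you do have relevance labels, the metrics themselves are simple to compute - a hedged sketch of average precision for a single query's ranked results:

```python
def average_precision(ranked_relevance):
    # ranked_relevance: list of 0/1 relevance flags in ranked order
    hits, score = 0, 0.0
    for i, rel in enumerate(ranked_relevance, start=1):
        if rel:
            hits += 1
            score += hits / float(i)   # precision at this rank
    return score / max(hits, 1)

print(average_precision([1, 0, 1, 0]))  # (1/1 + 2/3) / 2 = 0.8333...
```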
|
I wrote a program to do information retrieval and extraction. The user enters a query in the search bar, and the program shows relevant text results, such as the relevant sentence and the article which contains the sentence.
I did some research on how to evaluate the result. I might need to calculate precision, recall, AP, MAP...
However, I am new to this. How do I calculate these? My dataset is not labeled and I did not do classification. The dataset I used was articles from BBC news; there were 200 articles, named 001.txt, 002.txt ... 200.txt
It would be good if you have any ideas how to do the evaluation in python. Thanks.
| 0 | 1 | 6,407 |
0 | 40,473,264 | 0 | 0 | 0 | 0 | 1 | true | 0 |
2016-11-07T12:22:00.000
| 1 | 1 | 0 |
Python & scikit Learn: replace matrix vector product during training with custom call
| 40,465,084 | 1.2 |
python,matrix,vector,machine-learning,scikit-learn
|
In short - no, it is not possible. Mostly because some arithmetical operations are not even performed in Python when you use scikit-learn - they are actually performed by C-based extensions (like the libsvm library). You could monkey-patch numpy's .dot to do what you want, but you have no guarantee that scikit-learn will still work, since it performs some operations using numpy and others using the C extensions.
|
I've looked through the documentation over at scikit learn and I haven't seen a straightforward way to replace the matrix-vector product evaluations during propagation with a custom evaluation call.
Is there a way to do this that's already part of the API...or are there any tricks that would allow me to inject a custom matrix vector product evaluator?
| 0 | 1 | 38 |
0 | 40,473,685 | 0 | 0 | 0 | 0 | 1 | false | 1 |
2016-11-07T18:10:00.000
| 2 | 1 | 0 |
Finding minimum indepedent dominating set using a greedy algorithm
| 40,471,747 | 0.379949 |
python,algorithm,graph-theory,networkx,independent-set
|
Unfortunately, the problem of finding the minimum independent dominating set is NP-complete. Hence, any known algorithm which is sound and complete will be inefficient.
A possible approach is to use an incomplete algorithm, such as a greedy heuristic.
For example, the following algorithm is known to have a factor (1 + log|V|) approximation:
1. Choose a node with the maximal number of neighbors and add it to the dominating set.
2. Remove the node and all of it's neighbors from the graph.
3. Repeat until there are no more nodes in the graph.
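A hedged sketch of that greedy procedure with networkx (which the question already uses); the Petersen graph is just a stand-in:

```python
import networkx as nx

def greedy_dominating_set(G):
    G = G.copy()
    dom = set()
    while G.number_of_nodes() > 0:
        v = max(G.nodes(), key=G.degree)   # node with the most neighbors
        dom.add(v)
        G.remove_nodes_from(list(G.neighbors(v)) + [v])
    return dom   # independent by construction: neighbors are removed

print(greedy_dominating_set(nx.petersen_graph()))
```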
|
I developed an algorithm that finds the minimum independent dominating set of a graph based on a distance constraint. (I used Python and NetworkX to generate graphs and get the pairs)
The algorithm uses a brute force approach:
Find all possible pairs of edges
Check which nodes satisfy the distance constraint
Find all possible independent dominating sets
Compare the independent dominating sets found and find the minimum dominating set
For a small number of nodes it wouldn't make a difference, but for large numbers the program is really slow.
Is there any way that I could make it run faster using a different approach?
Thanks
| 0 | 1 | 1,579 |
1 | 40,672,019 | 0 | 0 | 0 | 0 | 1 | false | 1 |
2016-11-07T19:01:00.000
| 0 | 3 | 0 |
Tensorflow on android: directly build app in python?
| 40,472,585 | 0 |
android,python,tensorflow
|
I tried to use python in my android application with some 3rd-party terminals like SL4A and QPython. These support running python files directly in an android application: you install the SL4A apk and call its intent. But they only support this up to some level, I guess.
I tried to import tensorflow in such a terminal and it showed "module not found", so I think tensorflow will not work in those terminals.
So now I am trying to create a .pb file from the python files that work on the unix platform. We then need to include that output .pb file in the android application and change the C++ code to use that .pb file. That is my current thinking; let's see whether it works or not. I will update soon if it does.
|
The sample app given by google for tensorflow on android is written in C++.
I have a tensorflow application written in python. This application currently runs on desktop. I want to move the application to android platform. Can I use bazel to build the application that is written in python directly for android? Thanks.
Also sample tensorflow app in python on android will be much appreciated.
| 0 | 1 | 534 |
0 | 40,523,772 | 0 | 0 | 0 | 0 | 1 | true | 12 |
2016-11-08T11:06:00.000
| 4 | 4 | 0 |
Retrieve list of training features names from classifier
| 40,485,285 | 1.2 |
python,pandas,scikit-learn,random-forest
|
Based on the documentation and previous experience, there is no way to get a list of the features considered in at least one of the splits.
Is your concern that you do not want to use all your features for prediction, just the ones actually used for training? In that case I suggest listing the feature_importances_ after fitting and eliminating the features that do not seem relevant. Then train a new model with only the relevant features and use those features for prediction as well.
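A hedged sketch of that suggestion (the DataFrame is a stand-in):

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

X = pd.DataFrame({'a': [0, 1, 0, 1], 'b': [1, 1, 0, 0]})
y = [0, 1, 0, 1]

clf = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)
importances = pd.Series(clf.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False))
```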
|
Is there a way to retrieve the list of feature names used for training of a classifier, once it has been trained with the fit method? I would like to get this information before applying to unseen data.
The data used for training is a pandas DataFrame and in my case, the classifier is a RandomForestClassifier.
| 0 | 1 | 17,091 |
0 | 43,329,072 | 0 | 1 | 0 | 0 | 2 | false | 9 |
2016-11-08T15:57:00.000
| 5 | 3 | 0 |
How to calculate 95% confidence intervals using Bootstrap method
| 40,491,298 | 0.321513 |
python,statistics
|
I have a simple statistical solution:
Confidence intervals are based on the standard error.
The standard error in your case is the standard deviation of your 1000 bootstrap means. Assuming a normal distribution of the sampling distribution of your parameter (the mean), which should be warranted by the properties of the Central Limit Theorem, just multiply the equivalent z-score of the desired confidence interval by the standard deviation. Therefore:
lower boundary = mean of your bootstrap means - 1.96 * std. dev. of your bootstrap means
upper boundary = mean of your bootstrap means + 1.96 * std. dev. of your bootstrap means
95% of cases in a normal distribution sit within 1.96 standard deviations of the mean.
Hope this helps.
|
I'm trying to calculate the confidence interval for the mean value using the method of bootstrap in python. Let's say I have a vector a with 100 entries and my aim is to calculate the mean value of these 100 values and its 95% confidence interval using bootstrap. So far I have managed to resample 1000 times from my vector using the np.random.choice function. Then for each bootstrap vector with 100 entries I calculated the mean. So now I have 1000 bootstrap mean values and a single sample mean value from my initial vector, but I'm not sure how to proceed from here. How could I use these mean values to find the confidence interval for the mean value of my initial vector? I'm relatively new to python and it's the first time I've come across the method of bootstrap, so any help would be much appreciated.
| 0 | 1 | 15,818 |
0 | 40,491,405 | 0 | 1 | 0 | 0 | 2 | true | 9 |
2016-11-08T15:57:00.000
| 8 | 3 | 0 |
How to calculate 95% confidence intervals using Bootstrap method
| 40,491,298 | 1.2 |
python,statistics
|
You could sort the array of 1000 means and use the 50th and 950th elements as the 90% bootstrap confidence interval (for a 95% interval, use the 25th and 975th elements).
Your set of 1000 means is basically a sample of the distribution of the mean estimator (the sampling distribution of the mean). So, any operation you could do on a sample from a distribution you can do here.
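A hedged sketch of the percentile method following the question's setup (the vector a is a stand-in):

```python
import numpy as np

rng = np.random.RandomState(0)
a = rng.normal(loc=5.0, scale=2.0, size=100)

boot_means = np.array([rng.choice(a, size=a.size, replace=True).mean()
                       for _ in range(1000)])
lo, hi = np.percentile(boot_means, [2.5, 97.5])   # 95% interval
print(a.mean(), (lo, hi))
```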
|
I'm trying to calculate the confidence interval for the mean value using the method of bootstrap in python. Let's say I have a vector a with 100 entries and my aim is to calculate the mean value of these 100 values and its 95% confidence interval using bootstrap. So far I have managed to resample 1000 times from my vector using the np.random.choice function. Then for each bootstrap vector with 100 entries I calculated the mean. So now I have 1000 bootstrap mean values and a single sample mean value from my initial vector, but I'm not sure how to proceed from here. How could I use these mean values to find the confidence interval for the mean value of my initial vector? I'm relatively new to python and it's the first time I've come across the method of bootstrap, so any help would be much appreciated.
| 0 | 1 | 15,818 |
0 | 40,492,219 | 0 | 0 | 0 | 0 | 1 | true | 3 |
2016-11-08T16:17:00.000
| 5 | 1 | 0 |
Get cluster members/elements clustering with scikit-learn DBSCAN
| 40,491,707 | 1.2 |
python,machine-learning,scikit-learn
|
I believe you are asking for the cluster assignment of each item in your dataset, X.
You can use the labels_ attribute. db.labels_ Each index here corresponds to the same index in X, so you can see the assignments.
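A minimal sketch (X is a stand-in; the label -1 marks noise points):

```python
import numpy as np
from sklearn.cluster import DBSCAN

X = np.array([[1, 2], [2, 2], [2, 3], [8, 7], [8, 8], [25, 80]])
db = DBSCAN(eps=3, min_samples=2).fit(X)

for k in set(db.labels_):
    members = X[db.labels_ == k]   # the sub-vectors of X in cluster k
    print(k, members.tolist())
```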
|
I use dbscan scikit-learn algorithm for clustering.
db = DBSCAN().fit(X) returns, for example, 8 clusters. My goal is to recover the components cluster by cluster. X is a vector of vectors, and what I mean by cluster members is the sub-vectors of X.
Is there anyone to help me?
| 0 | 1 | 3,915 |
0 | 40,543,983 | 0 | 1 | 0 | 0 | 1 | false | 4 |
2016-11-09T16:40:00.000
| 3 | 3 | 0 |
Using pyfmi with multiprocessing for simulation of Modelica FMUs
| 40,511,920 | 0.197375 |
python-2.7,modelica,fmi
|
The problem is that pyfmi.fmi.FMUModelCS2 is a Cython class dependent on external libraries, which makes it unpicklable. So it is, unfortunately, not possible.
If you want to use multiprocessing the only way forward that I see is that you first create the processes and then load the FMUs into the separate processes. In this way you do not need to pickle the classes.
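A hedged sketch of that pattern - only the FMU path crosses the process boundary, and the worker returns plain picklable values (load_fmu is pyfmi's documented loader; the file names are stand-ins):

```python
from multiprocessing import Pool
from pyfmi import load_fmu

def simulate(fmu_path):
    model = load_fmu(fmu_path)            # load inside the subprocess
    res = model.simulate(final_time=10.0)
    return res['time'][-1]                # return picklable data only

if __name__ == '__main__':
    paths = ['model_a.fmu', 'model_b.fmu']
    pool = Pool(2)
    print(pool.map(simulate, paths))
    pool.close()
```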
|
I am trying to simulate multiple Modelica FMUs in parallel using python/pyfmi and multiprocessing. However, I am not able to return any pyfmi FMI objects from the subprocesses once the FMUs are initialized. It seems that pyfmi FMI objects (e.g. pyfmi.fmi.FMUModelCS2 or pyfmi.fmi.FMUState2) are not picklable. I also tried dill for pickling, which doesn't work for me either. With dill the objects are picklable, meaning no error, but somehow corrupted if I try to reload them afterwards. Does anyone have an idea how to solve this issue? Thanks!
| 0 | 1 | 1,150 |
0 | 40,789,964 | 0 | 0 | 0 | 0 | 2 | false | 3 |
2016-11-09T18:15:00.000
| 0 | 3 | 0 |
tensorflow retrain.py app.run() got unexpected keyword argument 'argv'
| 40,513,466 | 0 |
python,tensorflow
|
Please check your sample's version. I met the same problem and finally solved it. I found my tf version was 0.11, but I had downloaded the master-branch sample,
so I compared the code for syntax differences.
|
I am trying to run the Tensorflow for Poets sample. I pass the following:
python examples/image_retraining/retrain.py --bottlenext_dir=tf_files/bottlenecks --how_many_training_steps 500 --model_dir=tf_files/inception --output_graph=tf_files/retrained_graph.pb --output_labels=tf_files/retrained_labels.txt --image_dir tf_files/flower_photos
I get the error
File "examples/image_retraining/retrain.py", line 1013, in <module>
tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
TypeError: run() got an unexpected keyword argument 'argv'
When I check the source of app.py it shows argv as an argument. According to tf.__version__ I am running 0.11.0rc0
Any ideas?
| 0 | 1 | 3,375 |
0 | 40,622,206 | 0 | 0 | 0 | 0 | 2 | false | 3 |
2016-11-09T18:15:00.000
| -1 | 3 | 0 |
tensorflow retrain.py app.run() got unexpected keyword argument 'argv'
| 40,513,466 | -0.066568 |
python,tensorflow
|
You can also specifically checkout just the working fully_connected_feed.py file from the r0.11 branch by using the git command:
git checkout 5b18edb fully_connected_feed.py
NOTE: You need to be in the mnist/ directory to use this command
|
I am trying to run the Tensorflow for Poets sample. I pass the following:
python examples/image_retraining/retrain.py --bottlenext_dir=tf_files/bottlenecks --how_many_training_steps 500 --model_dir=tf_files/inception --output_graph=tf_files/retrained_graph.pb --output_labels=tf_files/retrained_labels.txt --image_dir tf_files/flower_photos
I get the error
File "examples/image_retraining/retrain.py", line 1013, in <module>
tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
TypeError: run() got an unexpected keyword argument 'argv'
When I check the source of app.py it shows argv as an argument. According to tf.__version__ I am running 0.11.0rc0
Any ideas?
| 0 | 1 | 3,375 |
0 | 40,522,400 | 0 | 0 | 0 | 0 | 1 | false | 0 |
2016-11-10T07:34:00.000
| 0 | 1 | 0 |
Scipy hierarchial clustering - clusterize new vector
| 40,522,186 | 0 |
python,numpy,vector,scipy
|
Use the Euclidean metric? I think the precomputed clusters have a center (e.g. the mean of their members), so you can calculate the distance between every center and the new vector.
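A hedged sketch of that idea: derive a centroid per cluster from the training data, then assign the new vector to the nearest one (the data is a stand-in):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

X = np.random.rand(50, 4)   # training feature vectors
labels = fcluster(linkage(X, method='ward'), t=3, criterion='maxclust')

centroids = {k: X[labels == k].mean(axis=0) for k in np.unique(labels)}

new_vec = np.random.rand(4)
nearest = min(centroids,
              key=lambda k: np.linalg.norm(new_vec - centroids[k]))
print(nearest)   # the "nearest" cluster's label
```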
|
I am clusterizing a bunch of feature vectors using scipy linkage with ward method.
I want a predictive model that works in two steps:
Training data is clusterized
A new vector comes, the distance between the vector and each cluster is computed, the new vector is assigned the "nearest" cluster's label
How can I compute the distances between a new vector and precomputed clusters?
| 0 | 1 | 27 |
0 | 40,527,574 | 0 | 1 | 0 | 0 | 1 | false | 0 |
2016-11-10T12:21:00.000
| 0 | 1 | 0 |
installing numpy on a python 2.7 with system windows 8.1
| 40,527,528 | 0 |
python-2.7
|
If you have pip installed on your system, execute
pip install numpy
|
I'm working on code to run in Abaqus. In my code I need to use the numpy module. I have python 2.7.11 on my computer, installed on Windows 8.1.
I have already downloaded numpy-1.11.zip.
I'm looking for an easy, detailed guide for installing it for my python.
Thank You!
| 0 | 1 | 45 |
0 | 40,534,978 | 0 | 0 | 0 | 0 | 1 | false | 0 |
2016-11-10T17:54:00.000
| 1 | 1 | 0 |
Can I apply calculated gradient in tensorflow?
| 40,534,098 | 0.197375 |
python,machine-learning,tensorflow,deep-learning
|
When you create your network and attach some loss, you call minimize on the optimizer, which (under the hood) calls apply_gradients. This function adds gradient-computing ops to your graph. All you have to do now is request the op responsible for your partial derivative and pass the precomputed partial derivative through the feed_dict option. Use TensorBoard to visualize your graph and find the names of the gradients you are interested in. By default they will be in the "gradient" name scope, and the naming of each op will be analogous to your operations, so something along the lines of gradient/output_op:0 etc.
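A hedged sketch (TF 1.x-era API assumed): tf.gradients accepts a grad_ys argument, so you can feed the gradient received from the other machine and get the gradient w.r.t. an earlier layer:

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 4])
w1 = tf.Variable(tf.random_normal([4, 8]))
w2 = tf.Variable(tf.random_normal([8, 2]))
layer1 = tf.tanh(tf.matmul(x, w1))
layer2 = tf.tanh(tf.matmul(layer1, w2))

incoming = tf.placeholder(tf.float32, [None, 2])  # d(out)/d(layer2), fed in
grad_l1 = tf.gradients(layer2, layer1, grad_ys=incoming)[0]
```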
|
What I want to do is to simulate the back-propagation process on different machines, from one machine, I get the gradient from layer3 d(layer3_output)/d(layer2_output) as a numpy array, how am I able to get d(layer3_output)/d(layer1_output) efficiently given the gradient I received and passed to the previous layer?
| 0 | 1 | 351 |
0 | 40,537,920 | 0 | 0 | 0 | 0 | 1 | true | 0 |
2016-11-10T18:45:00.000
| 1 | 1 | 0 |
Varying n_neighbors in scikit-learn KNN regression
| 40,534,871 | 1.2 |
python,scikit-learn,regression,knn
|
I'm afraid not. In part, this is due to an algebraic assumption that the relationship is symmetric: A is a neighbour of B iff B is a neighbour of A. If you give different k values, you're guaranteed to break that symmetry.
I think the major reason is simply that the algorithm is simpler with a fixed quantity of neighbors, yielding better results in general. You have a specific case that KNN doesn't fit so well.
I suggest that you stitch together your two models, switching dependent on the imputed second derivative.
|
I am using scikit-learn's KNN regressor to fit a model to a large dataset with n_neighbors = 100-500. Given the nature of the data, some parts (think: sharp delta-function like peaks) are better fit with fewer neighbors (n_neighbors ~ 20-50) so that the peaks are not smoothed out. The location of these peaks are known (or can be measured).
Is there a way to vary the n_neighbors parameter?
I could fit two models and stitch them together, but that would be inefficient. It would be preferable to either prescribe 2-3 values for n_neighbors or, worse, send in an list of n_neighbors.
| 0 | 1 | 198 |
0 | 43,399,098 | 0 | 1 | 0 | 0 | 1 | false | 0 |
2016-11-11T16:41:00.000
| 0 | 1 | 0 |
Python - Importing pandas in console works but not when running script
| 40,552,453 | 0 |
python,email,pandas,import
|
I also had the same problem today. You're missing a specific path. I found that if you start your python interpreter and do import os, you can inspect os.environ. You'll notice that there are several paths set in the PATH variable. Copy/paste the entire PATH line into your script. That worked for me. Also, remember to remove the string's single quotes (e.g., ').
|
I've been working with anaconda3/python3.5 and pandas for over a year, and all of a sudden when I run my script outside the console, I get an import error for pandas, particularly for the dependency email.parser. I get No module named 'email.parser'; 'email' is not a package. However, importing in the console works fine. I'm not running any other environment.
| 0 | 1 | 831 |
0 | 40,555,741 | 0 | 0 | 0 | 0 | 1 | false | 0 |
2016-11-11T17:10:00.000
| 0 | 1 | 0 |
Input one fixed cluster centroid, find N others (python)
| 40,552,937 | 0 |
python-2.7,scikit-learn,cluster-analysis
|
Rather than recycling clustering for this, treat it as a regular optimization problem. You don't want to "discover structure", but optimize cost.
Beware that the earth is not flat, and Euclidean distance (i.e. k-means) is a bad idea for lat/long data. 1 degree north is approximately the same distance as 1 degree east only at the equator. If your data is e.g. in New York, you have a non-negligible distortion, and your solution will not even be a local optimum.
If you absolutely insist on abusing kmeans, it's easy to do.
Choose n-1 centers at random and the predefined one.
Then run 1 iteration of k-means only. Then replace that center with the desired center again. Repeat with the next iteration.
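A hedged sketch of that loop with scikit-learn's KMeans (the points and the fixed origin are stand-ins):

```python
import numpy as np
from sklearn.cluster import KMeans

X = np.random.rand(200, 2)         # stand-in destination coordinates
fixed = np.array([0.5, 0.5])       # the predefined origin
n = 4

centers = np.vstack([fixed, X[np.random.choice(len(X), n - 1)]])
for _ in range(20):
    km = KMeans(n_clusters=n, init=centers, n_init=1, max_iter=1).fit(X)
    centers = km.cluster_centers_.copy()
    centers[0] = fixed             # pin the fixed centroid back
```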
|
I have a table of shipment destinations in lat, long. I have one fixed origination point (also lat, long). I would like to find other optimal origin locations using clustering. In other words, I want to assign one cluster centroid (keep it fixed) and find 1, 2, 3 . . . N other cluster centroids. Is this possible with the scikit learn cluster module?
| 0 | 1 | 512 |
0 | 62,375,979 | 0 | 0 | 0 | 0 | 1 | false | 11 |
2016-11-16T11:03:00.000
| -2 | 3 | 0 |
Can we see the group data in pandas.core.groupby.SeriesGroupBy object
| 40,630,506 | -0.132549 |
python,pandas
|
Just adding list() should help, but the structure isn't super useful in its raw form. It returns a list of (index, group) tuples.
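A minimal sketch:

```python
import pandas as pd

s = pd.Series([1, 2, 3, 4], index=['a', 'a', 'b', 'b'])
g = s.groupby(level=0)

print(list(g))           # [(index, sub-Series), ...]
print(g.get_group('a'))  # just one group's data
```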
|
Can we check the data in a pandas.core.groupby.SeriesGroupBy object?
| 0 | 1 | 12,663 |
0 | 40,676,992 | 0 | 0 | 0 | 0 | 1 | true | 0 |
2016-11-16T12:23:00.000
| 0 | 1 | 0 |
Store a csv file created by a python script on Hudson
| 40,632,110 | 1.2 |
python,csv,jenkins,hudson,store
|
I ran the Python script from the Windows command line and it worked.
|
I have a python script that creates a csv file and I have a Hudson job that builds every day. I want to store the csv file as an artifact but when I start the job nothing is stored or created. The build is successful and all tests are done but no file is created.
Do you know why this happens?
| 0 | 1 | 42 |
0 | 40,635,363 | 0 | 0 | 0 | 0 | 1 | false | 0 |
2016-11-16T13:28:00.000
| 0 | 1 | 0 |
why python cv2.drawContours() just draw the first contours?
| 40,633,399 | 0 |
python-2.7
|
Finally, I found what the problem was: I was passing the same contour at every step ([l1[0][2]]), so I needed to index it instead, like [l1[i][2]].
Sorry for that mistake, and thanks for your patience.
Have a good day, Jaime
|
I am having trouble filling a mask using cv2.drawContours; I don't know what the cause could be or how to fix it.
I have a list whose elements are three-element sublists [coordX, coordY, contour], so for every pair of coordinates there is an if/else decision, each branch with this statement: cv2.drawContours(mask,[l1[0][2]],0,c1[cla],-1)
[l1[0][2]]: a contour, like array([[[437, 497]],[[436, 498]],[[439, 498]], [[438, 497]]])
c1[cla]: a tuple, like (25,250,138)
The script runs without errors, but the resulting image is almost black, with just 4 green pixels.
Any suggestion or advice?
| 0 | 1 | 92 |
0 | 63,427,934 | 0 | 1 | 0 | 0 | 2 | false | 3 |
2016-11-16T16:51:00.000
| 0 | 3 | 0 |
import numpy not working in IDLE
| 40,637,774 | 0 |
python,numpy
|
Just prefix the pip command with python -m, so that you won't get any problem when doing this for IDLE,
e.g. python -m pip install numpy scipy
|
I have Python 3.5.2 installed on a Windows 10 machine (adding it to the PATH is included in the new Python's setup). I then installed the Anaconda (4.2.0) distribution. At the command prompt, when I run the Python interpreter and import numpy, it works fine. But when I save it as a script and try running it from IDLE, it gives
Traceback (most recent call last):
File "C:\Users\pramesh\Desktop\datascience code\test.py", line 1, in <module>
from numpy import *
ImportError: No module named 'numpy'
I don't know what the problem is. I do not have any other Python version installed.
| 0 | 1 | 6,809 |
0 | 40,638,108 | 0 | 1 | 0 | 0 | 2 | false | 3 |
2016-11-16T16:51:00.000
| 2 | 3 | 0 |
import numpy not working in IDLE
| 40,637,774 | 0.132549 |
python,numpy
|
You do have two versions of python installed: the CPython 3.5.2 distribution you mention first, and the Anaconda 4.2.0 Python distribution you then mention. Anaconda packages a large number of 3rd party packages, including Numpy. However, the CPython 3.5.2 installation available on python.org only ships with the standard library.
These two python installs have separate package installations, so having Anaconda's numpy available doesn't make it available for the CPython install. Since you're starting the Idle with shipped with CPython, which doesn't have numpy, you're seeing this error. You have two options:
Install numpy for CPython. See numpy documentation for details on how to do this, but it may be difficult.
Use the version of Idle included with Anaconda. This should be available in the Anaconda programs folder.
|
I have Python 3.5.2 installed on a Windows 10 machine (adding it to the PATH is included in the new Python's setup). I then installed the Anaconda (4.2.0) distribution. At the command prompt, when I run the Python interpreter and import numpy, it works fine. But when I save it as a script and try running it from IDLE, it gives
Traceback (most recent call last):
File "C:\Users\pramesh\Desktop\datascience code\test.py", line 1, in <module>
from numpy import *
ImportError: No module named 'numpy'
I don't know what the problem is. I do not have any other Python version installed.
| 0 | 1 | 6,809 |
0 | 40,642,637 | 0 | 0 | 0 | 0 | 1 | true | 1 |
2016-11-16T20:56:00.000
| 1 | 1 | 0 |
Ensembles in Python
| 40,642,132 | 1.2 |
python,machine-learning,statistics,cross-validation,data-science
|
You have different ways to create an ensemble using your weak classifiers:
Bagging: you can average the output of the 4 classifiers.
Stacking: your final output could be a linear combination of the 4 individual outputs. You can use the output of your 4 models as the input of another algorithm, or you can directly assign different weights, e.g. by choosing the ones with better accuracy.
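For the averaging/weighting route, scikit-learn's VotingClassifier is a convenient wrapper; here clf1..clf4 stand for your four estimators, and the names and weights are placeholders:

    from sklearn.ensemble import VotingClassifier

    ensemble = VotingClassifier(
        estimators=[('m1', clf1), ('m2', clf2), ('m3', clf3), ('m4', clf4)],
        voting='soft',         # average the predicted class probabilities
        weights=[1, 1, 1, 1])  # raise a weight to favour a more accurate model
    ensemble.fit(X_train, y_train)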
|
I have implemented 4 classifiers using scikit-learn in Python, but the performance of all of them is not very good. I want to implement an ensemble of these classifiers. I looked up ensembles in scikit-learn, but it has Random Forests and AdaBoost. How should I create an ensemble of my weak classifiers?
| 0 | 1 | 513 |
0 | 41,656,919 | 0 | 0 | 0 | 0 | 1 | false | 0 |
2016-11-17T02:22:00.000
| 0 | 1 | 0 |
Custom roll fix effects in regressions
| 40,645,689 | 0 |
python,r,statistics,regression
|
How plm and xtreg,fe work behind the scenes is well documented in their manuals. I haven't read them through, but I think that when you run pgmm or something like a logit regression in plm, the algorithm is actually maximum likelihood, given the nature of these problems.
It would also be interesting to write all of this yourself.
|
In a regression, I know that instead of using fixed effects (say on country or year) you can specify these effects with dummy variables, and this is a perfectly legitimate method. However, when the number of effects (e.g. countries) is large, this becomes computationally very difficult. Instead you would run the fixed effects model. I am curious what R's plm or Stata's xtreg,fe does behind the scenes. Specifically, I want to try rolling my own panel regression... I'm looking for some likelihood function (or a way to condition the likelihood) that I can throw into optim and have some fun with. Ideas?
| 0 | 1 | 63 |
1 | 40,665,287 | 0 | 0 | 0 | 0 | 1 | false | 1 |
2016-11-17T20:55:00.000
| 1 | 1 | 0 |
Python and openCV to android APK
| 40,664,845 | 0.197375 |
android,python,opencv,apk
|
There is SL4A / PythonForAndroid, but unfortunately it uses hardcoded Java RMI invocations for anything OS-related. That means there are no OpenCV bindings.
I guess you'll have to learn Java.
|
I want to turn my Python code into an Android app. The problem is that I'm using OpenCV in the code, and I haven't found anything about generating an APK while using OpenCV.
Is there a way to generate an APK from Python and OpenCV?
| 0 | 1 | 292 |
0 | 40,670,426 | 0 | 0 | 0 | 0 | 2 | false | 28 |
2016-11-18T06:05:00.000
| 2 | 9 | 0 |
Dot product of two vectors in tensorflow
| 40,670,370 | 0.044415 |
python,tensorflow,dot-product
|
You can do tf.mul(x,y), followed by tf.reduce_sum()
|
I was wondering if there is an easy way to calculate the dot product of two vectors (i.e. 1-d tensors) and return a scalar value in tensorflow.
Given two vectors X=(x1,...,xn) and Y=(y1,...,yn), the dot product is
dot(X,Y) = x1 * y1 + ... + xn * yn
I know that it is possible to achieve this by first broadcasting the vectors X and Y to a 2-d tensor and then using tf.matmul. However, the result is a matrix, and I am after a scalar.
Is there an operator like tf.matmul that is specific to vectors?
| 0 | 1 | 64,462 |
0 | 40,672,159 | 0 | 0 | 0 | 0 | 2 | false | 28 |
2016-11-18T06:05:00.000
| 26 | 9 | 0 |
Dot product of two vectors in tensorflow
| 40,670,370 | 1 |
python,tensorflow,dot-product
|
In addition to tf.reduce_sum(tf.multiply(x, y)), you can also do tf.matmul(x, tf.reshape(y, [-1, 1])).
|
I was wondering if there is an easy way to calculate the dot product of two vectors (i.e. 1-d tensors) and return a scalar value in tensorflow.
Given two vectors X=(x1,...,xn) and Y=(y1,...,yn), the dot product is
dot(X,Y) = x1 * y1 + ... + xn * yn
I know that it is possible to achieve this by first broadcasting the vectors X and Y to a 2-d tensor and then using tf.matmul. However, the result is a matrix, and I am after a scalar.
Is there an operator like tf.matmul that is specific to vectors?
| 0 | 1 | 64,462 |
0 | 40,671,334 | 0 | 1 | 0 | 0 | 1 | true | 1 |
2016-11-18T07:00:00.000
| 5 | 2 | 0 |
Use APIs for sorting or algorithm?
| 40,671,158 | 1.2 |
python,algorithm,api,data-structures
|
Why to use public APIs:
The built-in methods were written and reviewed by many very experienced coders, and a lot of effort was invested in optimizing them to be as efficient as possible.
Since the built-in methods are public APIs, it also means they are constantly used, which means you get massive "free" testing. You are much more likely to detect issues in public APIs than in private ones, and once something is discovered, it will be fixed for you.
Don't reinvent the wheel. Someone already programmed it for you, use it. If your profiler says there is a problem, think about replacing it. Not before.
Why to use custom made methods:
That said, the public APIs target the general case. If you need something very specific to your scenario, you might find a solution that is more efficient, but it will take you quite some time to actually beat the already-optimized general-purpose public API.
tl;dr: Use public APIs unless you:
Need it and can afford a lot of time to replace it.
Know what you are doing pretty well.
Intend to maintain it and do robust testing for it.
|
In a programming language like Python, which will be more efficient: using a sorting algorithm like merge sort that I implement myself, or using a built-in API like sort() to sort the array? If algorithms are independent of programming languages, then what is the advantage of algorithms over built-in methods or APIs?
| 0 | 1 | 200 |
0 | 40,673,779 | 0 | 0 | 0 | 0 | 1 | true | 0 |
2016-11-18T09:36:00.000
| 0 | 1 | 0 |
Pass a NAN in input vector for prediction
| 40,673,629 | 1.2 |
python,numpy,machine-learning,scikit-learn
|
You cannot use NaN values because the input vector will, for instance, be multiplied with a weight matrix. The result of such operations needs to be defined.
What you typically do if you have gaps in your input data is, depending on the specific type and structure of the data, fill the gaps with "artificial" values. For instance, you can use the mean or median of the same column in the remaining training data instances.
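A minimal sketch with scikit-learn's Imputer (X_train and clf stand for your training matrix and fitted classifier):

    import numpy as np
    from sklearn.preprocessing import Imputer

    imp = Imputer(missing_values='NaN', strategy='median')
    imp.fit(X_train)                           # learn per-column medians from training data
    x_new = np.array([[1.5, np.nan, 3.0]])     # query vector with a missing value
    pred = clf.predict(imp.transform(x_new))   # fill the gap, then predict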
|
I have a classifier that has been trained on a given set of input training vectors. There are missing values in the training data, which are filled in as numpy.NaN values, and I am using imputers to fill in these missing values.
However, for my input vector at prediction time, how do I pass in an input where a value is missing? Should I pass the value as NaN? Does the imputer play a role in this?
If I have to fill in the value manually, how do I fill it in for such a case? Will I need to calculate the mean/median/frequency from the existing data?
Note : I am using sklearn.
| 0 | 1 | 787 |
0 | 47,094,209 | 0 | 1 | 0 | 0 | 1 | false | 1 |
2016-11-18T13:21:00.000
| 1 | 2 | 0 |
ImportError: undefined symbol: _PyUnicodeUCS4_IsWhitespace
| 40,678,200 | 0.099668 |
python,numpy
|
I also got this error. If you google for it, you will find lots of similar issues. The problem can happen when you have multiple Python versions. In my case, I had the Ubuntu 16.04 Python 2.7 via /usr/bin/python and another Python 2.7 via Linuxbrew. type python gave me /u/zeyer/.linuxbrew/bin/python2, i.e. the Linuxbrew one. type pip2.7 gave me /u/zeyer/.local/bin/pip2.7, and looking into that file, it had the shebang #!/usr/bin/python, i.e. it was using the Ubuntu Python.
So, there are various solutions. You could just edit the pip2.7 file and change the shebang to #!/usr/bin/env python2.7. Or reinstall pip in some way.
In my case, I found that the Python 2.7 via Linuxbrew was incompatible to a few packages I needed (e.g. Tensorflow), so I unlinked it and use only the Ubuntu 16.04 Python 2.7 now.
|
I am a python beginner and I would like some help with this. I am using Ubuntu and I had installed python using Anaconda, but then I tried to install it again using pip and now when I'm trying to run my code, at import numpy as np, I see this error
ImportError: /home/dev/.local/lib/python2.7/site-packages/numpy/core/multiarray.so: undefined symbol: _PyUnicodeUCS4_IsWhitespace
How can I fix this?
| 0 | 1 | 3,200 |
0 | 40,687,262 | 0 | 0 | 0 | 0 | 1 | false | 0 |
2016-11-18T22:14:00.000
| 0 | 1 | 0 |
Algorithm/Approach to find destination node from source node of exactly k edges in undirected graph of billion nodes without cycle
| 40,686,773 | 0 |
python,algorithm,graph,shortest-path,breadth-first-search
|
As you've hinted, this will depend a lot on the data access characteristics of your system. If you were restricted to single-element accesses, then you'd be truly stuck, as trincot observes. However, if you can manage block accesses, then you have a chance of parallel operations.
However, I think that would be outside your control: the hash function owns the adjacency characteristics -- and, in fact, will likely "pessimize" (opposite of "optimize") that characteristic.
I do see one possible hope: use iteration instead of recursion, maintaining a list of nodes to visit. When you place a new node on the list, get its hash value. If you can organize the nodes clustered by location, you can perhaps do a block transfer, accessing several values in one read operation.
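As a concrete version of the iterative idea, here is a frontier-by-frontier search capped at k levels over the hash-table adjacency; adj, source and target are placeholders for your structures, and block-transfer batching would slot into the inner loop:

    def reachable_within_k(adj, source, target, k):
        if source == target:
            return True
        frontier, seen = {source}, {source}
        for _ in range(k):
            nxt = set()
            for node in frontier:
                for nb in adj.get(node, ()):
                    if nb == target:
                        return True
                    if nb not in seen:
                        seen.add(nb)
                        nxt.add(nb)
            frontier = nxt  # all nodes at the next depth, visited level by level
        return False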
|
Consider I have an adjacency list of billion of nodes structured using hash table arranged in the following manner:
key = source node
value = hash_table { node1, node2, node3}
The input values are from text file in the form of
from,to
1,2
1,5
1,11
... so on
eg.
key = '1'
value = {'2','5','11'}
means 1 is connected to nodes 2,5,11
I want to know an algorithm or approach for finding a destination node from a source node across exactly k edges in an undirected graph of a billion nodes, without cycles,
e.g. from node 1 I want to find node 50 only up to depth 3, i.e. within 3 edges.
My assumption is that the algorithm finds 1 - 2 - 60 - 50, which is the shortest path, but how can the traversal be made efficient using the above adjacency list structure?
I do not want to use Hadoop/Map Reduce.
I came up with the naive solution below in Python, but it is not efficient. The only upside is that the hash table searches for a key in O(1), so I can just search the neighbours and their billion neighbours directly for the key. The following algorithm takes a lot of time.
start with source node
use hash table search for finding key
go 1 level deeper with hash table of neighbor nodes and find their values for destination nodes until node found
Stop if the node is not found at depth k
 1
|
{2 5 11}
| | |
{3,6,7} {nodes} {nodes} .... connected nodes
| | | | |
{nodes} {nodes} {nodes} .... million more connected nodes.
Please suggest. The algorithm above, implemented similarly to BFS, takes more than 3 hours to search all the possible key-value relationships. Can it be reduced with another search method?
| 0 | 1 | 105 |
0 | 44,625,370 | 0 | 1 | 0 | 0 | 1 | false | 113 |
2016-11-19T08:04:00.000
| 3 | 7 | 0 |
Can Keras with Tensorflow backend be forced to use CPU or GPU at will?
| 40,690,598 | 0.085505 |
python,machine-learning,tensorflow,keras
|
I just spent some time figuring it out.
Thoma's answer is not complete.
Say your program is test.py, and you want to use gpu0 to run it while keeping the other gpus free.
You should write CUDA_VISIBLE_DEVICES=0 python test.py
Notice it's DEVICES, not DEVICE.
|
I have Keras installed with the Tensorflow backend and CUDA. I'd like to sometimes on demand force Keras to use CPU. Can this be done without say installing a separate CPU-only Tensorflow in a virtual environment? If so how? If the backend were Theano, the flags could be set, but I have not heard of Tensorflow flags accessible via Keras.
| 0 | 1 | 147,257 |
0 | 40,767,562 | 0 | 0 | 0 | 0 | 1 | false | 2 |
2016-11-19T08:14:00.000
| 1 | 2 | 0 |
Detecting shape of a contour and color inside
| 40,690,675 | 0.099668 |
python,opencv,colors,shape,contour
|
For finding the shape of a particular contour, we can draw a bounding rectangle around the contour.
Now we can compare the area of the contour with the area of the bounding rectangle.
If the area of the contour is equal to half the area of the bounding rectangle, the shape is a triangle.
If the area of the contour is less than the area of the bounding rectangle but greater than half of it, then it's a circle.
Note: this method is limited to regular triangles and circles; it doesn't apply to polygons like hexagons, heptagons, etc.
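A rough sketch of that test (cnt is an assumed contour from cv2.findContours, and the tolerance is a guess you would tune):

    import cv2

    area = cv2.contourArea(cnt)
    x, y, w, h = cv2.boundingRect(cnt)
    rect_area = float(w * h)
    if abs(area - 0.5 * rect_area) < 0.1 * rect_area:
        shape = 'triangle'       # contour fills about half its bounding rectangle
    elif area > 0.5 * rect_area:
        shape = 'circle'         # fills more than half, but less than all of it
    else:
        shape = 'unknown'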
|
I am new to OpenCV with Python and am trying to get the shape of a contour in an image.
Considering only regular shapes like squares, rectangles, circles and triangles, is there any way to get the contour shape using only the numpy and cv2 libraries?
I also want to find the colour inside a contour. How can I do it?
For finding the area of a contour there is an inbuilt function: cv2.contourArea(cnt).
Are there inbuilt functions for "contour shape" and "colour inside contour" as well?
Please help!
Note: The images I am considering contain multiple regular shapes.
| 0 | 1 | 5,369 |
0 | 40,713,966 | 0 | 1 | 0 | 0 | 1 | true | 0 |
2016-11-20T12:22:00.000
| 2 | 1 | 0 |
Creating azure ml experiments merely using python notebook within azure ml studio
| 40,703,961 | 1.2 |
python,azure,jupyter-notebook,azure-machine-learning-studio
|
The algorithms used as modules in Azure ML Studio are not currently able to be used directly as code for Python programming.
That being said: you can attempt to publish the outputs of the out-of-the-box algorithms as web services, which can be consumed by Python code in the Azure ML Studio notebooks. You can also create your own algorithms and use them as custom Python or R modules.
|
I wonder if it is possible to design ML experiments without using the drag&drop functionality (which is very nice btw)? I want to use Python code in the notebook (within Azure ML studio) to access the algorithms (e.g., matchbox recommender, regression models, etc) in the studio and design experiments? Is this possible?
I appreciate any information and suggestion!
| 0 | 1 | 236 |
0 | 40,761,500 | 0 | 1 | 0 | 0 | 2 | false | 1 |
2016-11-20T13:07:00.000
| 0 | 2 | 0 |
Generate a word search puzzle matrix
| 40,704,339 | 0 |
python,string,matrix,coordinates,words
|
I would rather look at the words that are already there and then randomly select a word from the set of words that fit.
Of course you might not fill the whole matrix like this. If you have put one word somewhere where it blocks all other words (no other word fits), you might have to backtrack, but that will kill the running time.
If you really want to fill the whole matrix, I would iterate over all possible starting positions, see how many words fit there, and then recurse over the possibilities of the starting position with the least number of candidates. That will make your program recognize and leave "dead ends" early, which improves the running time drastically. That is a powerful technique from fixed-parameter algorithms, which I like to call branching-vector minimization.
|
I'm not sure of the rules for creating the matrix of a word search puzzle game. I am able to create a matrix with initial values of 0.
Is it correct that I'll randomly select a starting point (coordinates) and a random direction (horizontal, vertical, or diagonal) for a word, and then manage whether it would overlap with another word in the matrix? If it does, I check whether the characters are the same (although there's only a little chance), and if not, I'll assign it there. The problem is that this approach seems to lessen the chance of words overlapping.
I have also read that I first need to check the words that have common characters. But if that's the case, it seems like the words I am going to put in the matrix will always overlap.
| 0 | 1 | 936 |
0 | 40,761,650 | 0 | 1 | 0 | 0 | 2 | false | 1 |
2016-11-20T13:07:00.000
| 0 | 2 | 0 |
Generate a word search puzzle matrix
| 40,704,339 | 0 |
python,string,matrix,coordinates,words
|
Start with the longest word.
First of all, you must find all points and directions where this word may fit. For example, the word 'WORD' may fit when at the first position there is NULL or W, at the second NULL or O, at the third NULL or R, and at the fourth NULL or D.
Then you should group these into positions with no NULLs, with one NULL, with two NULLs, and so on.
Then randomly select a position from the group with the smallest amount of NULLs. If there are no possible positions, skip the word.
This approach will allow you to place more words and prevents situations where a random search can't find the proper place (when there are only a few of them).
|
I'm not sure of the rules to create a matrix for a word search puzzle game. I am able to create a matrix with initial values of 0.
Is it correct that I'll randomly select a starting point(coordinates), and a random direction(horizontally,vertically,&,diagonally) for a word then manage if it would overlap with another word in the matrix? If it does then check if the characters are the same (although there's only a little chance) then if no I'll assign it there. The problem is it's like I lessen the chance of words to overlap.
I have also read that I need to check first the words that have the same characters. But if that's the case, it seems like the words that I am going to put in the matrix are always overlapping.
| 0 | 1 | 936 |
0 | 40,727,147 | 0 | 0 | 0 | 0 | 1 | true | 19 |
2016-11-21T18:22:00.000
| 15 | 1 | 0 |
Difference between score and accuracy_score in sklearn
| 40,726,899 | 1.2 |
python,scikit-learn
|
In general, different models have score methods that return different metrics. This is to allow classifiers to specify what scoring metric they think is most appropriate for them (thus, for example, a least-squares regression classifier would have a score method that returns something like the sum of squared errors). In the case of GaussianNB the docs say that its score method:
Returns the mean accuracy on the given test data and labels.
The accuracy_score method says its return value depends on the setting for the normalize parameter:
If False, return the number of correctly classified samples. Otherwise, return the fraction of correctly classified samples.
So it would appear to me that if you set normalize to True you'd get the same value as the GaussianNB.score method.
One easy way to confirm my guess is to build a classifier and call both score with normalize = True and accuracy_score and see if they match. Do they?
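A quick check along those lines, using the iris data purely as an example:

    from sklearn.datasets import load_iris
    from sklearn.metrics import accuracy_score
    from sklearn.naive_bayes import GaussianNB

    iris = load_iris()
    clf = GaussianNB().fit(iris.data, iris.target)
    print(clf.score(iris.data, iris.target))                    # mean accuracy
    print(accuracy_score(iris.target, clf.predict(iris.data)))  # normalize=True is the default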
|
What's the difference between the score() method in the sklearn.naive_bayes.GaussianNB module and the accuracy_score method in the sklearn.metrics module? Both appear to be the same. Is that correct?
| 0 | 1 | 12,527 |
0 | 41,664,553 | 0 | 0 | 0 | 0 | 1 | false | 4 |
2016-11-22T07:51:00.000
| 1 | 3 | 0 |
Difference of implementation between tensorflow softmax_cross_entropy_with_logits and sigmoid_cross_entropy_with_logits
| 40,736,440 | 0.066568 |
python,tensorflow
|
The major difference between sigmoid and softmax is that the softmax function returns a probability distribution over the classes, which is more in line with the ML philosophy: the softmax outputs sum to 1, which in turn tells you how confident the network is about each class.
Sigmoid outputs, by contrast, are computed independently per output unit; they do not sum to 1, so if you want a probability distribution over mutually exclusive classes, you would have to normalize them yourself.
As far as the performance of the network goes, softmax generally gives better accuracy than sigmoid for single-label classification, but it is also highly dependent on the other hyperparameters.
|
I recently came across TensorFlow's softmax_cross_entropy_with_logits, but I cannot figure out how the implementation differs from sigmoid_cross_entropy_with_logits.
| 0 | 1 | 1,855 |
0 | 40,791,232 | 0 | 0 | 0 | 0 | 1 | false | 2 |
2016-11-22T09:17:00.000
| 0 | 2 | 0 |
Augementations in Keras ImageDataGenerator
| 40,737,870 | 0 |
python,tensorflow,keras,conv-neural-network
|
@Neal: Thank you for the prompt answer! You were right, I probably need to better explain my task. My work is somewhat similar to classifying video sequences, but my data is saved in a database. I want my code to follow these steps for one epoch:
For i in (number_of_sequences):
Get N, the number of frames in sequence i (I think that's equivalent to batch_size; the N of each sequence is already saved in a list)
Fetch N successive frames from my database and their labels: X_train, y_train
For j in range(number_of_rotation):
Perform (the same) data augmentation on all frames of the sequence (probably using datagen = ImageDataGenerator() and datagen.flow())
Train the network on X, y
My first thought was to use model.fit_generator(generator = ImageDataGenerator().flow()), but this way I cannot modify my batch_size, and honestly I did not understand your solution.
Sorry for the long post, but I'm still a novice in both Python and NNs, and I'm really a big fan of Keras ;)
Thanks!
|
I have two questions, please, concerning the ImageDataGenerator:
1) Are the same augmentations used on the whole batch, or does each image get its own random transformation?
E.g. for rotation, does the module rotate all the images in the batch by the same angle, or does each image get a random rotation angle?
2) The data in ImageDataGenerator.flow is looped over (in batches) indefinitely. Is there a way to stop this infinite loop, i.e. to do the augmentation only n times? This is because I need to modify the batch_size at each step (not each epoch).
Thanks
| 0 | 1 | 2,135 |
0 | 40,745,346 | 0 | 0 | 0 | 0 | 2 | true | 0 |
2016-11-22T12:41:00.000
| 1 | 2 | 0 |
How to do GridSearchCV with train and test being different datasets?
| 40,742,172 | 1.2 |
python,machine-learning,scikit-learn,random-forest,data-science
|
If you can, you may simply merge the two datasets and perform GridSearchCV; this helps ensure the ability to generalize to the other dataset. If you are talking about generalization to future unknown datasets, then this might not work, because there isn't a perfect dataset from which we can train a perfect model.
|
I would like to find the best parameters for a RandomForest classifier (with scikit-learn) in a way that generalises well to other datasets (which may not be iid).
I was thinking of doing grid search using the whole training dataset while evaluating the scoring function on other datasets.
Is there an easy way to do this in python/scikit-learn?
| 0 | 1 | 1,071 |
0 | 40,744,270 | 0 | 0 | 0 | 0 | 2 | false | 0 |
2016-11-22T12:41:00.000
| 2 | 2 | 0 |
How to do GridSearchCV with train and test being different datasets?
| 40,742,172 | 0.197375 |
python,machine-learning,scikit-learn,random-forest,data-science
|
I don't think you can evaluate on a different data set. The whole idea behind GridSearchCV is that it splits your training set into n folds, trains on n-1 of those folds and evaluates on the remaining one, repeating the procedure until every fold has been "the odd one out". This keeps you from having to set apart a specific validation set and you can simply use a training and a testing set.
|
I would like to find the best parameters for a RandomForest classifier (with scikit-learn) in a way that generalises well to other datasets (which may not be iid).
I was thinking of doing grid search using the whole training dataset while evaluating the scoring function on other datasets.
Is there an easy way to do this in python/scikit-learn?
| 0 | 1 | 1,071 |
0 | 40,745,068 | 0 | 0 | 0 | 0 | 1 | false | 1 |
2016-11-22T14:30:00.000
| 0 | 2 | 0 |
Centroid of a set of positions on a toroidally wrapped (x- and y- wrapping) 2D array?
| 40,744,501 | 0 |
python,algorithm,math
|
I've done this before for Ramachandran plots. I will check, but I seem to remember the algorithm I used was:
Assume that your Euclidean area has an origin at (0,0)
Shift all the points in a set so that they are fully within the area
Calculate the centroid
Shift the centroid back by the reverse of the original shift
So, say your points are split across the X axis: you find the minimum X-coordinate of the point set that is larger than the midpoint of the area, then find its distance d to the edge, and shift the entire cluster by d.
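A minimal one-axis sketch of that shift trick; it assumes the cluster spans less than half the axis, so a half-width shift always moves the seam away from the points:

    import numpy as np

    def wrapped_mean(coords, width):
        coords = np.asarray(coords, dtype=float)
        if coords.max() - coords.min() > width / 2.0:   # cluster straddles a boundary
            shifted = (coords + width / 2.0) % width    # shift so the cluster is contiguous
            return (shifted.mean() - width / 2.0) % width
        return coords.mean()

    # apply per axis: (wrapped_mean(xs, map_width), wrapped_mean(ys, map_height))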
|
I have a flat Euclidean rectangular surface, but when a point moves to the right boundary, it appears at the left boundary (at the same y value), and vice versa for moving across the top and bottom boundaries. I'd like to be able to calculate the centroid of a set of points. The set of points in question is mostly clumped together.
I know I can calculate the centroid of a set of points by averaging all the x and y values. How would I do it in a wrap-around map?
| 0 | 1 | 116 |
0 | 60,201,018 | 0 | 0 | 0 | 0 | 2 | false | 26 |
2016-11-22T17:06:00.000
| 2 | 4 | 0 |
keras: what is the difference between model.predict and model.predict_proba
| 40,747,679 | 0.099668 |
python,machine-learning,deep-learning,keras
|
Just a remark: in fact you have both predict and predict_proba in most classifiers (in scikit-learn, for example). As already mentioned, the first one predicts the class; the second one provides the probabilities for each class, with columns ordered by class label.
|
I found model.predict and model.predict_proba both give an identical 2D matrix representing probabilities at each categories for each row.
What is the difference of the two functions?
| 0 | 1 | 47,190 |
0 | 68,835,517 | 0 | 0 | 0 | 0 | 2 | false | 26 |
2016-11-22T17:06:00.000
| 0 | 4 | 0 |
keras: what is the difference between model.predict and model.predict_proba
| 40,747,679 | 0 |
python,machine-learning,deep-learning,keras
|
In recent versions of Keras, e.g. 2.6.0, predict and predict_proba are the same, i.e. both give probabilities. To get the class labels, use predict_classes. The documentation has not been updated.
|
I found model.predict and model.predict_proba both give an identical 2D matrix representing probabilities at each categories for each row.
What is the difference of the two functions?
| 0 | 1 | 47,190 |
0 | 40,750,775 | 0 | 0 | 0 | 0 | 1 | true | 8 |
2016-11-22T20:05:00.000
| 16 | 1 | 0 |
Prevent pandas.read_csv from inferring dtypes
| 40,750,670 | 1.2 |
python,pandas
|
Use the parameter dtype=object for Pandas to keep the data as such at load time.
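For example (the file name is a placeholder):

    import pandas as pd

    df = pd.read_csv('data.csv', dtype=object)  # 'true'/'false' stay as strings, no bool inference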
|
How can I prevent pandas.read_csv() from inferring the data types? For example, it's converting the strings true and false to bool: True and False. There are many columns across many files, therefore it is not feasible to do
df['field_name'] = df['field_name'].astype(np.float64) for each of the columns in each file. I would prefer pandas to just read the file as it is, with no type inferring.
| 0 | 1 | 3,832 |
0 | 40,759,746 | 0 | 0 | 0 | 0 | 1 | true | 0 |
2016-11-23T08:24:00.000
| 0 | 1 | 0 |
tflearn: different number of rows of each input
| 40,759,257 | 1.2 |
python,tensorflow,tflearn
|
First of all, you have to know what exactly your training sample is. I am not sure what you mean by "input": does one input mean one sample, or does one row in your input mean one sample?
If one input means one sample, you are in some trouble, because almost all CNNs (and almost any other machine learning algorithm) require consistency in the shape of the data. Given that some samples have more data than others, one solution might be to crop the extra rows from those that have more data, or to just ignore those with fewer rows (so as to maximize the data you use). A more complicated way would be to run a PCA over some of the samples that have more rows (and the same number of rows), then use only the principal components for all samples, if possible.
If one row means one sample, you can just merge all the data into one big chunk and process it the usual way. You've got it.
|
I am using tflearn for modeling a CNN.
However, my data has a different number of rows in each input (but the same number of columns).
For example, I have 100 inputs.
The first input's dimension is 4*9 but the second and third have 1*9.
I am not sure how to feed and shape the data using input_data().
| 0 | 1 | 57 |
0 | 40,767,005 | 0 | 0 | 0 | 0 | 1 | true | 28 |
2016-11-23T14:18:00.000
| 33 | 4 | 0 |
Suggestions to plot overlapping lines in matplotlib?
| 40,766,909 | 1.2 |
python,matplotlib
|
Just decrease the opacity of the lines so that they are see-through. You can achieve that using the alpha parameter. Example:
plt.plot(x, y, alpha=0.7)
where alpha ranges from 0 to 1, with 0 being invisible.
|
Does anybody have a suggestion on the best way to present overlapping lines on a plot? I have a lot of them, and I had the idea of having full lines of different colors where they don't overlap, and dashed lines where they do, so that all colors are visible including the overlapping ones.
But still, how do I do that?
| 0 | 1 | 35,477 |
0 | 40,768,770 | 0 | 1 | 0 | 0 | 1 | false | 2 |
2016-11-23T15:40:00.000
| 0 | 2 | 0 |
How to sort with incomplete ordering?
| 40,768,575 | 0 |
python,algorithm,sorting
|
The built-in sort method requires that cmp imposes a total ordering. It doesn't work if the comparisons are inconsistent. If it returns that A < B one time it must always return that, and it must return that B > A if the arguments are reversed.
You can make your cmp implementation work if you introduce an arbitrary tiebreaker. If two elements don't have a defined order, make one up. You could return cmp(id(a), id(b)) for instance -- compare the objects by their arbitrary ID numbers.
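A Python 2 sketch of that tiebreaker idea; cmp_partial and items stand for your comparison function and list. Note the tiebreak makes the comparisons consistent, but it does not enforce orderings that are only implied transitively through "don't care" pairs:

    def total_cmp(a, b):
        r = cmp_partial(a, b)  # your function: -1, 1, or 0 for "don't care"
        return r if r != 0 else cmp(id(a), id(b))  # arbitrary but consistent tiebreaker

    items.sort(cmp=total_cmp)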
|
I have a list of elements to sort and a comparison function cmp(x, y) which decides whether x should appear before or after y. The catch is that some elements do not have a defined order; for those, the cmp function returns "don't care".
Example: Input: [A,B,C,D], with C > D and B > D. Output: many correct answers, e.g. [D,C,B,A] or [A,D,B,C]. All I need is one output from all the possible outputs.
I was not able to use Python's sort for this, and my solution is an old-fashioned insertion sort: start with an empty list and insert one element at a time in the right place, keeping the list sorted at all times.
Is it possible to use the built-in sort/sorted function for this purpose? What would be the key?
| 0 | 1 | 809 |
0 | 40,770,869 | 0 | 1 | 0 | 0 | 1 | true | 1 |
2016-11-23T17:25:00.000
| 4 | 1 | 0 |
Iterative Divide and Conquer algorithms
| 40,770,767 | 1.2 |
python,algorithm,computer-science
|
The cheap way to turn any recursive algorithm into an iterative one is to take the recursive function, put it in a loop, and use your own stack. This eliminates the function call overhead and avoids saving any unneeded data on the stack. However, this is not usually the "best" approach ("best" depends on the problem and context).
The way you've worded your problem, it sounds like the idea is to break the list into sublists, find the closest pair in each, and then take the closest pair out of those two results. To do this iteratively, I think a better way to approach this than the generic way mentioned above is to start the other way around: look at lists of size 3 (there are three pairs to look at) and work your way up from there. Note that lists of size 2 are trivial.
Lastly, if your coordinates are integers, they are in Z (a much smaller subset of R).
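As a sketch of the explicit-stack idea for the one-dimensional case (after sorting, the closest pair is always adjacent, which keeps the combine step trivial):

    def closest_pair_1d(points):
        # Iterative divide and conquer over an explicit stack of (lo, hi) slices.
        pts = sorted(points)
        best = float('inf')
        stack = [(0, len(pts))]
        while stack:
            lo, hi = stack.pop()
            if hi - lo <= 3:                 # base case: brute-force the small slice
                for i in range(lo, hi):
                    for j in range(i + 1, hi):
                        best = min(best, abs(pts[i] - pts[j]))
                continue
            mid = (lo + hi) // 2
            best = min(best, abs(pts[mid] - pts[mid - 1]))  # pair straddling the split
            stack.append((lo, mid))
            stack.append((mid, hi))
        return best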
|
I am trying to create an algorithm using the divide-and-conquer approach, but with an iterative algorithm (that is, no recursion).
I am confused as to how to approach the loops.
I need to break up my problem into smaller subproblems until I hit a base case. I assume this is still true, but I am not sure how I can (without recursion) use the smaller subproblems to solve the much bigger problem.
For example, I am trying to come up with an algorithm that will find the closest pair of points (in one-dimensional space, though I intend to generalize this on my own to higher dimensions). If I had a function closest_pair(L), where L is a list of integer coordinates in R, how could I come up with an iterative divide-and-conquer algorithm that solves this problem?
(Without loss of generality I am using Python)
| 0 | 1 | 2,428 |
0 | 40,777,110 | 0 | 0 | 0 | 0 | 1 | false | 0 |
2016-11-24T02:01:00.000
| 1 | 1 | 0 |
How to find initial guess for leastsq function in Python?
| 40,776,967 | 0.197375 |
python,numpy,least-squares,data-science
|
No, you can't really calculate an initial guess.
You'll just have to make an educated guess, and that really depends on your data and model.
If the initial guess affects the fitting, there is likely something else going on; you're probably getting stuck in local minima. Your model may be too complex, or your data range may be so large that you run into floating point precision limits and the fitting algorithm can't detect any changes for parameter changes. The latter can often be avoided by normalizing your data (and model), or, for example, using log(-log) space instead of linear space.
Or avoid leastsq altogether and use a different minimization method (which will likely be much slower, but may produce overall better and more consistent results), such as the Nelder-Mead amoeba method.
|
I have a dataset to which I need to fit a curve. I am using leastsq() for the fitting. However, currently I need to provide the initial guess values manually, which really affects the fitting. Is there any way to first calculate initial guess values that I can then pass to leastsq()?
| 0 | 1 | 481 |
0 | 40,782,596 | 0 | 0 | 0 | 0 | 1 | false | 0 |
2016-11-24T08:33:00.000
| -1 | 1 | 0 |
In spark, what kind of transformations necessit a shuffling (can imply a new stage)?
| 40,781,351 | -0.197375 |
python,scala,hadoop,apache-spark,rdd
|
There are some common operators, like reduceByKey and groupByKey, that can cause a shuffle. You can visit the official Spark website to learn more.
|
How can I determine whether a transformation in Spark causes the creation of a stage? Is there a list of them?
| 0 | 1 | 56 |
0 | 40,781,778 | 0 | 0 | 0 | 0 | 1 | true | 1 |
2016-11-24T08:44:00.000
| 1 | 1 | 0 |
Generate random values given 2d pdf
| 40,781,553 | 1.2 |
python,scipy
|
rv_continuous only handles univariate distributions.
|
How can I use rv_continuous for a multidimensional pdf?
Is rv_continuous able to handle a multidimensional pdf?
My goal is to use the rvs method to generate random values from an arbitrary 2d distribution. Do you know any other solutions?
| 0 | 1 | 152 |
0 | 40,793,242 | 0 | 0 | 0 | 0 | 1 | false | 3 |
2016-11-24T14:09:00.000
| 1 | 3 | 0 |
An Algorithm to Determine How Similar Two Sentences Are
| 40,788,494 | 0.066568 |
python,algorithm,parsing,tree,nlp
|
Assigning weights is a million dollar question there. As a first step, I would identify parts of the sentence (subject-predicate-clause, etc) and the sentence structure (simple-compound-complex, etc) to find "anchor" words that would have highest weight. That should make the rest of the task easier.
|
A friend of mine had an idea to make a speed reading program that displays words one by one (much like currently existing speed reading programs). However, the program would filter out words that aren't completely necessary to the meaning (if you want to skim something).
I have started to implement this program, but I'm not quite sure what the algorithm for getting rid of "unimportant" words should be.
My idea is to parse the sentence (I'm currently using the Stanford Parser) and somehow assign each word a weight based on how important it is to the sentence's meaning, then start removing the words with the lowest weights. After each removal I will check how "different" the original tree and the new tree are, and continue removing the word with the lowest weight until the two trees are too different (I will determine some constant via a "calibration" process that each user goes through once). Finally, I will go through each word of the shortened sentence and try to replace it with a simpler or shorter synonym (again while still trying to retain the meaning).
As well, there will be special cases for very common words like "the," "a," and "of."
For example:
"Billy said to Jane, 'Do you want to go out?'"
Would become:
"Billy told Jane 'want go out?'"
This retains basically all of the meaning of the sentence but shortens it significantly.
Is this a good idea for an algorithm, and if so: how would I assign the weights, what tree comparison algorithm should I use, and is inserting the synonyms done at a good point (i.e. should it be done before I try to remove any words)?
| 0 | 1 | 3,305 |
0 | 40,800,749 | 0 | 0 | 0 | 0 | 1 | false | 5 |
2016-11-24T20:08:00.000
| 3 | 4 | 0 |
Index Python List with Numpy Boolean Array
| 40,793,882 | 0.148885 |
python,numpy
|
map(x.__getitem__,np.where(mask)[0])
Or if you want list comprehension
[x[i] for i in np.where(mask)[0]]
This keeps you from having to iterate over the whole list, especially if mask is sparse.
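A standard-library alternative (not mentioned above, but equivalent) is itertools.compress:

    from itertools import compress

    list(compress(x, mask))  # keeps x[i] wherever mask[i] is truthy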
|
Is there a way to index a python list like x = ['a','b','c'] using a numpy boolean array? I'm currently getting the following error: TypeError: only integer arrays with one element can be converted to an index
| 0 | 1 | 11,235 |
0 | 40,795,462 | 0 | 0 | 0 | 0 | 1 | false | 12 |
2016-11-24T20:36:00.000
| 1 | 2 | 0 |
Upload a CSV file and read it in Bokeh Web app
| 40,794,180 | 0.099668 |
javascript,python,file,upload,bokeh
|
As far as I know there is no widget native to Bokeh that will allow a file upload.
It would be helpful if you could clarify your current setup a bit more. Are your plots running on a bokeh server or just through a Python script that generates the plots?
Generally though, if you need this to be exposed through a browser you'll probably want something like Flask running a page that lets the user upload a file to a directory which the bokeh script can then read and plot.
|
I have a Bokeh plotting app, and I need to allow the user to upload a CSV file and modify the plots according to the data in it.
Is it possible to do this with the available widgets of Bokeh?
Thank you very much.
| 1 | 1 | 6,493 |
0 | 44,195,108 | 0 | 0 | 0 | 0 | 1 | false | 4 |
2016-11-26T15:31:00.000
| 2 | 2 | 0 |
Keras and cross validation
| 40,819,939 | 0.197375 |
python,scikit-learn,keras,cross-validation
|
Nope... that seems to be the solution (as far as I know).
|
I am currently trying to train a regression network using Keras. To ensure proper training, I want to train using cross-validation.
The problem is that Keras doesn't seem to have any functions supporting cross-validation, or does it?
The only solution I seem to have found is to use scikit-learn's train_test_split and run model.fit for each fold manually. Isn't there already an integrated solution for this, rather than doing it manually?
| 0 | 1 | 4,491 |
0 | 40,821,227 | 0 | 0 | 0 | 0 | 1 | false | 1 |
2016-11-26T17:33:00.000
| 0 | 2 | 0 |
How to use Blackman Window in numpy to take a part of values from an array?
| 40,821,099 | 0 |
python,arrays,numpy
|
Assuming your array a is 1D and its length is a multiple of 500, a simple np.sum(a.reshape((-1, 500)) ** 2, axis=1) would suffice. If you want a more complicated operation, please edit your question accordingly.
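A runnable version of that one-liner, with made-up data whose length is a multiple of 500:

    import numpy as np

    a = np.arange(2000.0)
    chunk_sums = np.sum(a.reshape((-1, 500)) ** 2, axis=1)  # one sum of squares per 500-value block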
|
I want to take part of the values (say 500 values) of an array and perform some operation on them, such as taking the sum of squares of those 500 values, and then proceed with the next 500 values of the same array.
How should I implement this? Would a Blackman window be useful in this case, or is another approach more suitable?
| 0 | 1 | 410 |
0 | 41,367,703 | 0 | 0 | 0 | 0 | 1 | false | 3 |
2016-11-26T20:01:00.000
| 0 | 2 | 0 |
Does Sklearn.LogisticRegression add a column of 1?
| 40,822,563 | 0 |
python,scikit-learn
|
This is the default behavior since fit_intercept defaults to True
|
I have been told that when dealing with multivariate logistic regression, you want to add a column of 1s to the X matrix for the model intercept. When using the model from sklearn, does the module automatically add this column of 1s?
| 0 | 1 | 2,238 |
0 | 40,829,189 | 0 | 0 | 0 | 0 | 1 | true | 1 |
2016-11-26T21:27:00.000
| 3 | 1 | 0 |
resetting c_t and h_t in LSTM every couple of sequences in RNN
| 40,823,341 | 1.2 |
python,neural-network,tensorflow,recurrent-neural-network,lstm
|
I don't think you would want to do that, since each example in the batch should be independent. If they are not, you should just use a batch size of 1 and a sequence of length 100 * batch_size. Often you might want to save the state between batches. In this case you would need to save the RNN state to a variable or, as I like to do, allow the user to feed it in with a placeholder.
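A rough TensorFlow sketch of the placeholder approach; num_units, batch_size and inputs are assumptions, and the exact module paths vary between the 2016-era releases:

    import tensorflow as tf

    cell = tf.nn.rnn_cell.BasicLSTMCell(num_units, state_is_tuple=True)
    c_in = tf.placeholder(tf.float32, [batch_size, num_units])
    h_in = tf.placeholder(tf.float32, [batch_size, num_units])
    init_state = tf.nn.rnn_cell.LSTMStateTuple(c_in, h_in)
    outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=init_state)

    # at run time, carry the state across batches:
    # state = sess.run(final_state, feed_dict={c_in: state.c, h_in: state.h, ...})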
|
I am trying to implement an RNN in TensorFlow for text prediction. I am using BasicLSTMCell for this purpose, with sequence lengths of 100.
If I understand correctly, the output activation h_t and the cell state c_t of the LSTM are reset each time we enter a new sequence (that is, they are updated 100 times along the sequence, but once we move to the next sequence in the batch, they are reset to 0).
Is there a way to prevent this from happening in TensorFlow? That is, to continue using the updated c_t and h_t across all the sequences in the batch (and then reset them when moving to the next batch)?
| 0 | 1 | 228 |
0 | 40,836,970 | 0 | 0 | 0 | 0 | 1 | false | 0 |
2016-11-28T03:54:00.000
| 2 | 1 | 0 |
Using primary key column as index values in Pandas dataframe - Best practice?
| 40,836,932 | 0.379949 |
python,pandas,dataframe
|
It depends on your application. The one "best practice" I know of for Pandas indexes is to keep their values unique. As for which column to use as your index, that depends on what you will later do with the DataFrame (e.g. what other data you might merge it with).
|
I'm relatively new to Pandas and I wanted to know best practices around incremental vs. primary key indexes.
In particular, I'm wondering:
what are the benefits of using a dataset's primary key as its DataFrame index, as opposed to just sticking with the default incremental integer index?
are any potential pitfalls for replacing the default incremental integer index with a dataset's primary key?
| 0 | 1 | 1,018 |
0 | 40,842,347 | 0 | 0 | 0 | 0 | 1 | true | 2 |
2016-11-28T09:18:00.000
| 4 | 1 | 0 |
ValueError: cannot compute LDA over an empty collection (no terms)
| 40,840,731 | 1.2 |
python,gensim,lda,topic-modeling
|
Finally figured it out. The issue with small documents is that if you try to filter the extremes from the dictionary, you might end up with empty lists in the corpus (corpus = [dictionary.doc2bow(text)]).
So the parameter values in dictionary.filter_extremes(no_below=2, no_above=0.1) need to be selected accordingly and carefully before building corpus = [dictionary.doc2bow(text)].
I just removed the filter_extremes call and the LDA model runs fine now, though I will tune the filter_extremes parameter values and use it later.
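A small guard along those lines (texts stands for your tokenized documents):

    corpus = [dictionary.doc2bow(text) for text in texts]
    corpus = [bow for bow in corpus if bow]  # drop documents emptied by filter_extremes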
|
I get this error in Python when trying to compute LDA for a smaller corpus, but it works fine in other cases.
The size of the corpus is 15, and I tried setting the number of topics to 5 and then reduced it to 2, but it still gives the same error: ValueError: cannot compute LDA over an empty collection (no terms)
I get the error at this line: lda = models.LdaModel(corpus, num_topics=topic_number, id2word=dictionary, passes=passes)
where the corpus is corpus = [dictionary.doc2bow(text) for a, id, text, s_date, e_date, qd, qd_perc in texts]
Why is it giving no terms?
| 0 | 1 | 5,310 |
0 | 40,842,338 | 0 | 1 | 0 | 0 | 1 | false | 0 |
2016-11-28T09:18:00.000
| 0 | 1 | 1 |
Config the opencv in python 2.7 of MacOS
| 40,840,738 | 0 |
python,macos,opencv
|
Copy cv2.so and cv.py to /System/Library/Frameworks/Python.framework/Versions/2.7/lib/.
You can find these two files in /usr/local/Cellar/opencv/../lib/python2.7.
|
Today I installed OpenCV with
homebrew install opencv
and then tried to import it:
python
import cv2
and it returned: No module named cv2
However, when I try to import it with:
python3
import cv2
it works well.
I tried to install OpenCV again, but Homebrew said it has already been installed.
I don't know what I can do now.
| 0 | 1 | 30 |
0 | 47,368,406 | 0 | 0 | 0 | 0 | 1 | false | 0 |
2016-11-28T10:20:00.000
| 0 | 1 | 0 |
is it required to install TF Slim externally after pulling a docker image latest-devel-gpu(r0.10)
| 40,841,872 | 0 |
python,tensorflow
|
The OP solved the problem by upgrading to the Docker tag 0.10.0-devel-gpu (and it should work going forward).
|
Is it required to install TF-Slim separately after pulling the docker image latest-devel-gpu (r0.10) for a GPU environment?
When I do import tensorflow.contrib.slim as slim, it gives me the error
"No module named Slim" in Python.
| 0 | 1 | 939 |
0 | 40,844,025 | 0 | 0 | 0 | 0 | 1 | false | 0 |
2016-11-28T12:01:00.000
| 2 | 2 | 0 |
How to match images taken at different angles
| 40,843,794 | 0.197375 |
python,opencv,image-processing
|
The state of the art for such object recognition tasks are convolutional neural networks, but you will need a large labelled training set, which might rule that out. Otherwise SIFT/SURF is probably what you are looking for. They are pretty robust towards most transformations.
|
I have multiple images of the same object taken at different angles, and I have many such objects. I need to match a test image, taken later at a random angle, to the particular object it belongs to (the backgrounds per location are similar) by matching it against those images. The objects are light installations inside a building. The same object may be installed at different places, but the backgrounds are different.
I used mean shift error, template matching from OpenCV, and the Structural Similarity Index, but with low accuracy.
How about image fingerprinting or SIFT/SURF?
| 0 | 1 | 1,892 |
0 | 40,852,542 | 0 | 0 | 0 | 0 | 1 | true | 1 |
2016-11-28T19:58:00.000
| 2 | 1 | 0 |
Smallest total weight in weighted directed graph
| 40,852,458 | 1.2 |
python,algorithm,graph
|
Number 4 is the correct one, because all edges have the same weight, so you need to find the node reached by traversing the minimum number of edges, which is exactly what breadth-first search guarantees.
Number 1 is wrong because depth-first search doesn't consider edge weights, so any node could be reached first.
|
There is a question from a quiz that I don't fully understand:
Suppose you have a weighted directed graph and want to find a path between nodes A and B with the smallest total weight. Select the most accurate statement:
If some edges have negative weights, depth-first search finds a correct solution.
If all edges have weight 2, depth-first search guarantees that the first path found is the shortest path.
If some edges have negative weights, breadth-first search finds a correct solution.
If all edges have weight 2, breadth-first search guarantees that the first path found is the shortest path.
Am I right that #1 is correct?
| 0 | 1 | 826 |
0 | 40,870,126 | 0 | 0 | 0 | 0 | 1 | true | 31 |
2016-11-29T12:37:00.000
| 46 | 2 | 0 |
Difference between Dense and Activation layer in Keras
| 40,866,124 | 1.2 |
python,machine-learning,neural-network,deep-learning,keras
|
Using Dense(activation=softmax) is computationally equivalent to first adding Dense and then adding Activation(softmax). However, there is one advantage of the second approach: you can retrieve the outputs of the last layer (before the activation) from such a model. In the first approach, that's impossible.
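A minimal sketch of that second approach (the input size and layer width are made up; older Keras versions want Model(input=..., output=...) keyword arguments):

    from keras.layers import Activation, Dense, Input
    from keras.models import Model

    inp = Input(shape=(100,))
    logits = Dense(10)(inp)                # Dense without an activation
    probs = Activation('softmax')(logits)
    model = Model(inp, probs)              # the trained model outputs probabilities

    logit_model = Model(inp, logits)       # retrieves the pre-softmax outputs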
|
I was wondering what the difference is between an Activation layer and a Dense layer in Keras.
Since the Activation layer seems to act like part of a fully connected layer, and Dense has a parameter to pass an activation function, what is the best practice?
Let's imagine a fictional network like this:
Input -> Dense -> Dropout -> Final Layer
Should Final Layer be Dense(activation=softmax) or Activation(softmax)?
What is the cleanest, and why?
Thanks everyone!
| 0 | 1 | 11,235 |
0 | 40,871,454 | 0 | 0 | 0 | 0 | 1 | true | 0 |
2016-11-29T16:52:00.000
| 1 | 1 | 0 |
GradientBoostingClassifier and many columns
| 40,871,371 | 1.2 |
python,classification,gbm
|
You can replace the binary variables with the actual country name, then collapse all of these columns into one column. Use LabelEncoder on this column to create a proper integer variable, and you should be all set.
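A sketch of that collapse; the 'country_' prefix and the frame name df are assumptions:

    from sklearn.preprocessing import LabelEncoder

    country_cols = [c for c in df.columns if c.startswith('country_')]
    df['country'] = df[country_cols].idxmax(axis=1)              # name of the 1-valued column
    df['country'] = LabelEncoder().fit_transform(df['country'])  # proper integer codes
    df = df.drop(country_cols, axis=1)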
|
I use a GradientBoosting classifier to predict the gender of users. The data has a lot of predictors, and one of them is the country. For each country I have a binary column, and exactly one of the country columns is set to 1 in each row. But such a design is very slow from a computational point of view. Is there any way to represent the country columns with only one column, in a correct way?
| 0 | 1 | 69 |
0 | 72,478,813 | 0 | 1 | 0 | 0 | 1 | false | 22 |
2016-11-29T18:01:00.000
| 0 | 4 | 0 |
Unable to install cv2 on windows
| 40,872,683 | 0 |
python,opencv
|
So I was using PyCharm, and what worked for me was to install it directly from File -> Settings, Project: your-project-name -> Python Interpreter list.
|
I am trying to install OpenCV in Python on my Windows machine, but I am unable to do so. I have Python 2.7.11 :: Anaconda 2.4.1 <32-bit>.
Here is what I have tried so far:
pip install cv2 on the command line gives the error: could not find a version that satisfies the requirement cv2
I downloaded the package from the SourceForge site, followed the steps, and pasted cv2.pyd in C:\Python27\Lib\site-packages, but it is still not working. I get the following error message: ImportError: No module named cv2
(I already have numpy installed and it works just fine).
| 0 | 1 | 60,085 |
0 | 58,649,705 | 0 | 0 | 0 | 0 | 3 | false | 14 |
2016-11-29T23:37:00.000
| 3 | 9 | 0 |
EOFError: Compressed file ended before the end-of-stream marker was reached - MNIST data set
| 40,877,781 | 0.066568 |
python,tensorflow
|
It is very simple on Windows:
Go to C:\Users\Username\.keras\datasets
and then delete the dataset that you want to re-download or that has the error.
|
I am getting the following error when I run mnist = input_data.read_data_sets("MNIST_data", one_hot = True).
EOFError: Compressed file ended before the end-of-stream marker was reached
Even when I extract the file manually and place it in the MNIST_data directory, the program still tries to download the file instead of using the extracted one.
When I extract the file using WinZip, which is the manual way, WinZip tells me that the file is corrupt.
How do I solve this problem?
I can't even load the data set now; I still have to debug the program itself. Please help.
I pip-installed TensorFlow, so I don't have the TensorFlow examples. I therefore went to GitHub to get the input_data file and saved it in the same directory as my main.py. The error is just about the .gz file; the program could not extract it.
runfile('C:/Users/Nikhil/Desktop/Tensor Flow/tensf.py', wdir='C:/Users/Nikhil/Desktop/Tensor Flow')
Reloaded modules: input_data
Extracting MNIST_data/train-images-idx3-ubyte.gz
C:\Users\Nikhil\Anaconda3\lib\gzip.py:274: VisibleDeprecationWarning: converting an array with ndim > 0 to an index will result in an error in the future
return self._buffer.read(size)
Traceback (most recent call last):
File "", line 1, in
runfile('C:/Users/Nikhil/Desktop/Tensor Flow/tensf.py', wdir='C:/Users/Nikhil/Desktop/Tensor Flow')
File "C:\Users\Nikhil\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 866, in runfile
execfile(filename, namespace)
File "C:\Users\Nikhil\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 102, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)
File "C:/Users/Nikhil/Desktop/Tensor Flow/tensf.py", line 26, in
mnist = input_data.read_data_sets("MNIST_data/", one_hot = True)
File "C:\Users\Nikhil\Desktop\Tensor Flow\input_data.py", line 181, in read_data_sets
train_images = extract_images(local_file)
File "C:\Users\Nikhil\Desktop\Tensor Flow\input_data.py", line 60, in extract_images
buf = bytestream.read(rows * cols * num_images)
File "C:\Users\Nikhil\Anaconda3\lib\gzip.py", line 274, in read
return self._buffer.read(size)
File "C:\Users\Nikhil\Anaconda3\lib_compression.py", line 68, in readinto
data = self.read(len(byte_view))
File "C:\Users\Nikhil\Anaconda3\lib\gzip.py", line 480, in read
raise EOFError("Compressed file ended before the "
EOFError: Compressed file ended before the end-of-stream marker was reached
| 0 | 1 | 52,865 |
0 | 46,372,714 | 0 | 0 | 0 | 0 | 3 | false | 14 |
2016-11-29T23:37:00.000
| 19 | 9 | 0 |
EOFError: Compressed file ended before the end-of-stream marker was reached - MNIST data set
| 40,877,781 | 1 |
python,tensorflow
|
This is because for some reason you have an incomplete download for the MNIST dataset.
You will have to manually delete the downloaded folder which usually resides in ~/.keras/datasets or any path specified by you relative to this path, in your case MNIST_data.
Perform the following steps in the terminal (ctrl + alt + t):
cd ~/.keras/datasets/
rm -rf "dataset name"
You should be good to go!
|
I am getting the following error when I run mnist = input_data.read_data_sets("MNIST_data", one_hot = True).
EOFError: Compressed file ended before the end-of-stream marker was reached
Even when I extract the file manually and place it in the MNIST_data directory, the program is still trying to download the file instead of using the extracted file.
When I extract the file using WinZip which is the manual way, WinZip tells me that the file is corrupt.
How do I solve this problem?
I can't even load the data set now, I still have to debug the program itself. Please help.
I pip installed Tensorflow and so I don't have a Tensorflow example. So I went to GitHub to get the input_data file and saved it in the same directory as my main.py. The error is just regarding the .gz file. The program could not extract it.
runfile('C:/Users/Nikhil/Desktop/Tensor Flow/tensf.py', wdir='C:/Users/Nikhil/Desktop/Tensor Flow')
Reloaded modules: input_data
Extracting MNIST_data/train-images-idx3-ubyte.gz
C:\Users\Nikhil\Anaconda3\lib\gzip.py:274: VisibleDeprecationWarning: converting an array with ndim > 0 to an index will result in an error in the future
return self._buffer.read(size)
Traceback (most recent call last):
File "", line 1, in
runfile('C:/Users/Nikhil/Desktop/Tensor Flow/tensf.py', wdir='C:/Users/Nikhil/Desktop/Tensor Flow')
File "C:\Users\Nikhil\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 866, in runfile
execfile(filename, namespace)
File "C:\Users\Nikhil\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 102, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)
File "C:/Users/Nikhil/Desktop/Tensor Flow/tensf.py", line 26, in
mnist = input_data.read_data_sets("MNIST_data/", one_hot = True)
File "C:\Users\Nikhil\Desktop\Tensor Flow\input_data.py", line 181, in read_data_sets
train_images = extract_images(local_file)
File "C:\Users\Nikhil\Desktop\Tensor Flow\input_data.py", line 60, in extract_images
buf = bytestream.read(rows * cols * num_images)
File "C:\Users\Nikhil\Anaconda3\lib\gzip.py", line 274, in read
return self._buffer.read(size)
File "C:\Users\Nikhil\Anaconda3\lib_compression.py", line 68, in readinto
data = self.read(len(byte_view))
File "C:\Users\Nikhil\Anaconda3\lib\gzip.py", line 480, in read
raise EOFError("Compressed file ended before the "
EOFError: Compressed file ended before the end-of-stream marker was reached
| 0 | 1 | 52,865 |
0 | 58,121,301 | 0 | 0 | 0 | 0 | 3 | false | 14 |
2016-11-29T23:37:00.000
| 1 | 9 | 0 |
EOFError: Compressed file ended before the end-of-stream marker was reached - MNIST data set
| 40,877,781 | 0.022219 |
python,tensorflow
|
I had the same issue when downloading datasets using torchvision on Windows. I was able to fix this by deleting all files from the following path:
C:\Users\UserName\MNIST\raw
|
I am getting the following error when I run mnist = input_data.read_data_sets("MNIST_data", one_hot = True).
EOFError: Compressed file ended before the end-of-stream marker was reached
Even when I extract the file manually and place it in the MNIST_data directory, the program is still trying to download the file instead of using the extracted file.
When I extract the file using WinZip which is the manual way, WinZip tells me that the file is corrupt.
How do I solve this problem?
I can't even load the data set now, I still have to debug the program itself. Please help.
I pip installed Tensorflow and so I don't have a Tensorflow example. So I went to GitHub to get the input_data file and saved it in the same directory as my main.py. The error is just regarding the .gz file. The program could not extract it.
runfile('C:/Users/Nikhil/Desktop/Tensor Flow/tensf.py', wdir='C:/Users/Nikhil/Desktop/Tensor Flow')
Reloaded modules: input_data
Extracting MNIST_data/train-images-idx3-ubyte.gz
C:\Users\Nikhil\Anaconda3\lib\gzip.py:274: VisibleDeprecationWarning: converting an array with ndim > 0 to an index will result in an error in the future
return self._buffer.read(size)
Traceback (most recent call last):
File "", line 1, in
runfile('C:/Users/Nikhil/Desktop/Tensor Flow/tensf.py', wdir='C:/Users/Nikhil/Desktop/Tensor Flow')
File "C:\Users\Nikhil\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 866, in runfile
execfile(filename, namespace)
File "C:\Users\Nikhil\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 102, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)
File "C:/Users/Nikhil/Desktop/Tensor Flow/tensf.py", line 26, in
mnist = input_data.read_data_sets("MNIST_data/", one_hot = True)
File "C:\Users\Nikhil\Desktop\Tensor Flow\input_data.py", line 181, in read_data_sets
train_images = extract_images(local_file)
File "C:\Users\Nikhil\Desktop\Tensor Flow\input_data.py", line 60, in extract_images
buf = bytestream.read(rows * cols * num_images)
File "C:\Users\Nikhil\Anaconda3\lib\gzip.py", line 274, in read
return self._buffer.read(size)
File "C:\Users\Nikhil\Anaconda3\lib_compression.py", line 68, in readinto
data = self.read(len(byte_view))
File "C:\Users\Nikhil\Anaconda3\lib\gzip.py", line 480, in read
raise EOFError("Compressed file ended before the "
EOFError: Compressed file ended before the end-of-stream marker was reached
| 0 | 1 | 52,865 |
0 | 60,873,472 | 0 | 0 | 0 | 0 | 1 | false | 1 |
2016-11-30T00:36:00.000
| 0 | 4 | 0 |
what is this color map? cmap=mglearn.cm3
| 40,878,325 | 0 |
python,pandas,anaconda
|
conda install pip (because conda install mglearn will give an error)
pip install mglearn
import mglearn (the NameError means this import is missing from your script)
grr = pd.plotting.scatter_matrix( ...., cmap=mglearn.cm3)
If you still can't see the output, you might have missed %matplotlib inline.
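Putting the pieces together, a minimal end-to-end sketch; the iris data is just a stand-in for your own dataframe:
import pandas as pd
import mglearn
from sklearn.datasets import load_iris

iris = load_iris()
df = pd.DataFrame(iris.data, columns=iris.feature_names)
# color the points by class and use the mglearn colormap
grr = pd.plotting.scatter_matrix(df, c=iris.target, cmap=mglearn.cm3,
                                 figsize=(8, 8), marker='o')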
|
I try to run the following code but it gives the following error: it does not recognize the mglearn color map.
grr = pd.scatter_matrix( ...., cmap=mglearn.cm3)
NameError: name 'mglearn' is not defined
I should add that pd is the Anaconda pandas package imported as pd, but it does not recognize the color map mglearn.cm3.
Any suggestions?
| 0 | 1 | 5,864 |
0 | 40,947,457 | 0 | 0 | 0 | 0 | 1 | false | 1 |
2016-11-30T11:48:00.000
| 0 | 1 | 0 |
Multi-Output Classification using scikit Decision Trees
| 40,887,631 | 0 |
python,machine-learning,scikit-learn
|
Regarding Problem 1, if you expect the different components of the target value to be independent, you can approach the problem as building a classifier for every component. That is, if the features are F = (F_1, F_2, ..., F_N) and the targets Y = (Y_1, Y_2, ..., Y_N), create a classifier with features F and target Y_1, a second classifier with features F and target Y_2, etc.
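As a minimal sketch of this per-component approach in scikit-learn (with random stand-in data in place of your own X and Y):
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# toy stand-in data: 100 rows, 3 features, 2 categorical target components
rng = np.random.RandomState(0)
X = rng.rand(100, 3)
Y = rng.randint(0, 3, size=(100, 2))

classifiers = []
for j in range(Y.shape[1]):
    clf = DecisionTreeClassifier(random_state=0)
    clf.fit(X, Y[:, j])  # one independent classifier per target component
    classifiers.append(clf)

x_new = rng.rand(1, 3)
prediction = [clf.predict(x_new)[0] for clf in classifiers]
scikit-learn also packages this pattern as sklearn.multioutput.MultiOutputClassifier, which fits one clone of a base estimator per target column.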
Regarding Problem 2, if you are not dealing with a time series, IMO the best you can do is simply predict the most frequent value for each feature.
That said, I believe your question fits better on another Stack Exchange site such as Cross Validated.
|
Disclaimer: I'm new to the field of Machine Learning, and even though I have done my fair share of research during the past month I still lack deep understanding on this topic.
I have been playing around with the scikit library with the objective of learning how to predict new data based on historic information, and classify existing information.
I'm trying to solve 2 different problems which may be correlated:
Problem 1
Given a data set containing rows R1 ... RN with features F1 ... FN, and a target per each group of rows, determine in which group does row R(N+1) belongs to.
Now, the target value is not singular, it's a set of values; The best solution I have been able to come up with is to represent those sets of values as a concatenation, this creates an artificial class and allows me to represent multiple values using only one attribute. Is there a better approach to this?
What I'm expecting is to be able to pass totally new set of rows, and being told which are the target values per each of them.
Problem 2
Given a data set containing rows R1 ... RN with features F1 ... FN, predict the values of R(N+1) based on the frequency of the features.
A few considerations here:
Most of the features are categorical in nature.
Some of the features are dates, so when doing the prediction the date should be in the future relative to the historic data.
The frequency analysis needs to be done per row, because certain sets of values may be invalid.
My question here is: Is there any process/ML algorithm, which given historic data would be able to predict a new set of values based on just the frequency of the parameters?
If you have any suggestions, please let me know.
| 0 | 1 | 903 |
0 | 43,810,812 | 0 | 0 | 0 | 0 | 1 | false | 1 |
2016-12-01T06:58:00.000
| 0 | 1 | 0 |
Finding the accuracy of an HMM model for POS-Tagger
| 40,904,368 | 0 |
python,machine-learning,nlp,pos-tagger
|
To calculate accuracy on any dataset, count the number of words (excluding start and stop symbols) that you tagged correctly, and divide by the total number of words (excluding start and stop symbols).
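A hedged sketch, assuming your gold and predicted tags are stored as parallel lists of tag sequences (one list per sentence, start/stop symbols already stripped); the layout is an assumption, so adapt it to your own data structures:
def tagging_accuracy(gold_sentences, predicted_sentences):
    # each element is a list of tags for one sentence
    correct = 0
    total = 0
    for gold, pred in zip(gold_sentences, predicted_sentences):
        for g, p in zip(gold, pred):
            if g == p:
                correct += 1
            total += 1
    return correct / total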
|
I am implementing the Viterbi Algorithm for POS-Tagger using the Brown-corpus as my data set. Now an important aspect of this NLP task is finding the accuracy of the model. So I really need help as what to implement. It's easier using the nltk toolkit but since I am not using a toolkit, I am stuck on how to determine the accuracy of my model. Any help, code examples or referral links would be appreciated. Thanks
| 0 | 1 | 582 |
0 | 40,907,857 | 0 | 1 | 0 | 0 | 1 | true | 0 |
2016-12-01T10:07:00.000
| 1 | 1 | 0 |
What is the underlying difference between multiplying a matrix and looping through?
| 40,907,731 | 1.2 |
python,performance,numpy,processing-efficiency
|
Yes, both approaches will have to loop over the values in the two matrices. However, python is dynamically typed such that the body of the loop needs to check the types of the three indices used for iteration, ensure that indexing the two input matrices is supported, determine the type of the values extracted from the matrices, ...
The numpy implementation is, as you said, lower-level and makes stronger assumptions about the input and output. In particular, the matrix multiplication is implemented in a statically typed language (C or Fortran--I can't quite remember) such that the overhead of type checking disappears. Furthermore, indexing in lower-level languages is a relatively simple operation.
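You can observe the gap with a rough timing sketch; the matrix size and exact numbers are illustrative and depend on your machine:
import time
import numpy as np

n = 100
a = np.random.rand(n, n)
b = np.random.rand(n, n)

t0 = time.perf_counter()
c_loop = np.zeros((n, n))
for i in range(n):  # pure-Python triple loop: type checks on every access
    for j in range(n):
        for k in range(n):
            c_loop[i, j] += a[i, k] * b[k, j]
t1 = time.perf_counter()
c_dot = a.dot(b)  # a single call into compiled linear algebra code
t2 = time.perf_counter()

print("loop: %.3fs, dot: %.6fs" % (t1 - t0, t2 - t1))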
|
When you have two NumPy matrices, you can call the dot function to multiply them, or you can loop through them manually and multiply each value yourself. Why is there a speed difference, and how large is it? Surely the dot function still has to do the same work, just at a lower level?
| 0 | 1 | 35 |
1 | 42,197,304 | 0 | 0 | 0 | 0 | 1 | true | 1 |
2016-12-02T04:57:00.000
| 1 | 1 | 0 |
How can I integrate my Python based OpenCV project in Android?
| 40,925,079 | 1.2 |
android,python,opencv,image-processing
|
This can be done via the NDK together with OpenCV's Java support in Android Studio.
You need to compile the NDK libraries and generate the .so files; then the project will work.
|
I am working on my OpenCV project with Python which basically recognizes hand gestures. I want to do the same using Android. Is it possible? Is there any way to do so?
I want to recognize very basic hand gestures using my Android device. Is it possible with Python & OpenCV with Android? Also share any other way possible.
| 0 | 1 | 220 |
0 | 40,942,885 | 0 | 0 | 0 | 0 | 2 | false | 2 |
2016-12-02T23:25:00.000
| 2 | 2 | 0 |
ML model to predict rankings (arbitrary ordering of a list)
| 40,942,391 | 0.197375 |
python,machine-learning
|
Since there's an unknown function that generates the output, it's a regression problem. A neural network with two hidden layers and e.g. sigmoid activations can approximate any arbitrary continuous function.
|
I need some orientation for a problem I’m trying to solve. Anything would be appreciated, a keyword to Google or some indication!
So I have a list of 5 items. All items share the same features, let’s say each item has 3 features for the example.
I pass the list to a ranking function which takes into account the features of every item in the list and returns an arbitrary ordered list of these items.
For example, if I give the following list of items (a, b, c, d, e) to the ranking function, I get (e, a, b, d, c).
Here is the thing, I don’t know how the ranking function works. The only things I have is the list of 5 items (5 is for the example, it could be any number greater than 1), the features of every item and the result of the ranking function.
The goal is to train a model which outputs an ordered list of 5 items the same way the ranking function would have done it.
What ML model can I use to support this notion of ranking? Also, I can’t determine if it is a classification or a regression problem. I’m not trying to determine a continuous value or classify the items; I want to determine how they rank relative to each other under the ranking function.
I have at my disposal an infinite number of items since I generate them myself. The ranking function could be anything, but let’s say it is:
assign a score = (x1 + x2 + x3) / 3 to each item and sort by descending score
The goal for the model is to guess as closely as possible what the ranking function is by outputting similar results for the same batch of 5 items.
Thanks in advance !
| 0 | 1 | 1,288 |
0 | 40,947,117 | 0 | 0 | 0 | 0 | 2 | true | 2 |
2016-12-02T23:25:00.000
| 1 | 2 | 0 |
ML model to predict rankings (arbitrary ordering of a list)
| 40,942,391 | 1.2 |
python,machine-learning
|
It could be treated as a regression problem with the following trick: You are given 5 items with 5 feature vectors, and the "black box" function outputs 5 distinct scores as [1, 2, 3, 4, 5]. Treat these as continuous values. So, you can think of your function as taking five distinct input vectors x1, x2, x3, x4, x5 and outputting five scalar target variables t1, t2, t3, t4, t5, where the target variables for your training set are the scores the items get. For example, if the ranking for a single sample is (x1,4), (x2,5), (x3,3), (x4,1), (x5,2), then set t1=4, t2=5, t3=3, t4=1 and t5=2. MLPs have the "universal approximation" capability: given a black box function, they can approximate it arbitrarily closely, depending on the hidden unit count. So, build a 2-layer MLP with the inputs as the five feature vectors and the outputs as the five ranking scores. You are going to minimize a sum-of-squares error function, the classical regression error function. And don't use any regularization term: since you are trying to mimic a deterministic black box function, there is no random noise inherent in its outputs, so you shouldn't be afraid of overfitting issues.
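A minimal sketch of this trick with scikit-learn's MLPRegressor; the training batches are generated from the example scoring function 1/3*(x1+x2+x3), and the network size and iteration count are illustrative choices, not prescriptions:
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.RandomState(0)
X = rng.rand(2000, 5, 3)  # 2000 batches of 5 items with 3 features each
scores = X.mean(axis=2)  # the hidden scoring function 1/3*(x1+x2+x3)
ranks = scores.argsort(axis=1).argsort(axis=1) + 1.0  # ranks 1..5 within each batch

mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0)
mlp.fit(X.reshape(-1, 3), ranks.ravel())  # item features -> continuous rank target

new_batch = rng.rand(5, 3)  # a fresh batch of 5 items
order = np.argsort(mlp.predict(new_batch))  # item indices from lowest to highest predicted rank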
|
I need some orientation for a problem I’m trying to solve. Anything would be appreciated, a keyword to Google or some indication!
So I have a list of 5 items. All items share the same features, let’s say each item has 3 features for the example.
I pass the list to a ranking function which takes into account the features of every item in the list and returns an arbitrary ordered list of these items.
For example, if I give the following list of items (a, b, c, d, e) to the ranking function, I get (e, a, b, d, c).
Here is the thing, I don’t know how the ranking function works. The only things I have is the list of 5 items (5 is for the example, it could be any number greater than 1), the features of every item and the result of the ranking function.
The goal is to train a model which outputs an ordered list of 5 items the same way the ranking function would have done it.
What ML model can I use to support this notion of ranking? Also, I can’t determine if it is a classification or a regression problem. I’m not trying to determine a continuous value or classify the items; I want to determine how they rank relative to each other under the ranking function.
I have at my disposal an infinite number of items since I generate them myself. The ranking function could be anything, but let’s say it is:
assign a score = (x1 + x2 + x3) / 3 to each item and sort by descending score
The goal for the model is to guess as closely as possible what the ranking function is by outputting similar results for the same batch of 5 items.
Thanks in advance !
| 0 | 1 | 1,288 |